“[Transparency] would say that there is nothing in the state of the system that cannot be inferred from the display. If there are any modes, then these must have a visual indication; if there are any differences in behavior between the displayed shapes, then there must be some corresponding visual difference.”
Dix et al., Human-Computer Interaction, 2004.
One of the most compelling arguments about interactive machine learning with robots emerges when you consider the social signalling and gesture required to reveal internal state, or “thoughts,” that the human can cue on and respond to before it is too late. Robotic behaviors that reveal internal state can be called “transparent” in that they indicate motives and goals before the robot acts, allowing the human to anticipate future actions. This contrasts with more “opaque” behaviors, which reveal little about the robot's inner thoughts. The work points out an important observation: namely, that we can look to developmental learning for hints about what kinds of cues humans use to inform one another during a social learning episode.
What I show in my master’s thesis is that task transparency, or the ability to connect and communicate about the task in a fluent way, invites the user to shape and correct the learned goal in ways that may not be revealed by more batch-style machine learning. Additionally, some participants are shown to prefer task-transparent robots, which appear to have a capacity for “introspection” whereby the robot can modify the learned goal by methods other than demonstration alone. My thesis is available upon request.
In prior work for my HRI class, I investigated transparency and the role it plays in learning, before beginning my thesis. Transparency in that work was meant to signal bodily intention to the human. We used a number of functional solutions as well as several more natural solutions, and showed that transparency is an important part of learning and interaction and is preferred by a majority of the robot's users.