Rufus 2

Robert Burke (robert.burke@gmail.com)
Scott Eaton (smeaton@media.mit.edu)
Dan Stiehl (wdstiehl@media.mit.edu)

Our thanks to the entire Synthetic Characters group.

Last December, we massaged the Synthetic Characters Creature Kernel Framework – traditionally used to build on-screen animated characters – to control an animatronic dog head with eight degrees of freedom.  Built with Wiffle balls, armature wire, and servo motors, Rufus 1.0 demonstrated that our system could create the illusion of life not only on a screen, but also with hardware.
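
The servo pipeline itself isn't detailed on this page, but the basic trick behind driving hardware from an animation framework is to remap the character's joint angles onto servo pulse widths on every update.  Here is a minimal Python sketch of that idea, where the angle ranges and the servo_board interface are illustrative assumptions rather than our actual code:

    def angle_to_pulse_us(angle_deg, min_deg=-45.0, max_deg=45.0,
                          min_us=1000, max_us=2000):
        """Map a joint angle onto a hobby-servo pulse width (microseconds)."""
        angle_deg = max(min_deg, min(max_deg, angle_deg))  # clamp to safe range
        t = (angle_deg - min_deg) / (max_deg - min_deg)    # normalize to 0..1
        return int(min_us + t * (max_us - min_us))

    def update_servos(joint_angles, servo_board):
        # One update tick: push the pose the character wants out to hardware.
        # servo_board.set_pulse is a stand-in for whatever protocol the
        # servo controller actually speaks.
        for channel, angle in enumerate(joint_angles):
            servo_board.set_pulse(channel, angle_to_pulse_us(angle))

Clamping the angle before converting it keeps a misbehaving behaviour system from driving a servo past its mechanical stop.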

Our success with Rufus 1.0 motivated us to consider how far we might take a second version.  We considered a number of directions: environmental perception, a more compelling physical appearance, new types of input and output, and even integrating learning into the system. 

Three months, one new behaviour system, and a whole lot of silicone later, Rufus 2.0 has brought us closer to our vision of a hardware system that conveys emotion, is responsive to its environment, and is capable of learning like a real dog.

Our design goal for Rufus 2.0 was to create a compelling, interactive character that is physically situated and able to respond to and learn from his environment.  While we built Rufus 2.0, the Synthetic Characters group was developing a new version of the Creature Kernel Framework that includes a robust new learning model.

Here's Rufus and his family: his bone, the cat and Spiny Norman (his toy hedgehog).

Rufus is equipped with an onboard CCD camera located in his right eye that provides low-level motion detection and tracking through a utility we call Dog's Eye View.  His left pupil is fitted with an infrared receiver that Rufus can use to detect the cat, the bone, and Norman.
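
Dog's Eye View's internals aren't detailed on this page, but low-level motion detection can be sketched as simple frame differencing: subtract consecutive grayscale frames, threshold the difference, and take the centroid of whatever changed.  A minimal NumPy illustration in that spirit (the threshold is an arbitrary assumption, and this is not the real utility's code):

    import numpy as np

    def motion_centroid(prev_frame, frame, threshold=25):
        """Return the (x, y) centroid of the pixels that changed between
        two grayscale frames, or None if nothing moved."""
        diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
        moving = diff > threshold
        if not moving.any():
            return None
        ys, xs = np.nonzero(moving)
        return xs.mean(), ys.mean()  # a crude estimate of where the motion is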

A condenser microphone hidden in Rufus sends acoustic information back to the computer for processing by DogEar, the utility developed for Rufus 1.0 that allows a creature to process, recognize, and categorize similar utterances. 
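
The DogEar algorithm isn't spelled out here, but one simple way to recognize and categorize similar utterances is to reduce each utterance to a coarse spectral fingerprint, match new sounds to stored ones by nearest neighbor, and open a new category when nothing is close enough.  A Python sketch of that approach, with features and threshold that are illustrative assumptions rather than DogEar's own:

    import numpy as np

    def fingerprint(samples, n_bands=32):
        """Reduce an utterance (a 1-D float array) to a coarse,
        unit-length log-spectral feature vector."""
        spectrum = np.abs(np.fft.rfft(samples))
        bands = np.array_split(spectrum, n_bands)
        feats = np.log1p(np.array([band.mean() for band in bands]))
        return feats / (np.linalg.norm(feats) + 1e-9)

    class UtteranceCategorizer:
        def __init__(self, threshold=0.3):
            self.templates = []  # (category id, fingerprint) pairs
            self.threshold = threshold

        def categorize(self, samples):
            feats = fingerprint(samples)
            if self.templates:
                dists = [np.linalg.norm(feats - t) for _, t in self.templates]
                best = int(np.argmin(dists))
                if dists[best] < self.threshold:
                    return self.templates[best][0]  # close enough: recognized
            category = len(self.templates)  # nothing close: new category
            self.templates.append((category, feats))
            return category

A real recognizer would also normalize for loudness and timing (for instance with dynamic time warping); this sketch deliberately ignores both.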

Rufus has several additional sources of input.  A device designed specifically to detect the sharp click of a training "clicker" has also been added, so that Rufus can be "clicker trained" like the other software-based dogs built by the group.
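
The click detector is dedicated hardware, but the signal it listens for is easy to characterize: a clicker produces a very short transient whose energy leaps far above the recent background.  A minimal software version of that test, with window size and ratio chosen arbitrarily for illustration:

    import numpy as np

    def detect_click(samples, rate=16000, win_ms=5, ratio=8.0):
        """Return True if a 1-D float array of audio samples contains a
        short energy spike standing well above the background level."""
        win = max(1, int(rate * win_ms / 1000))
        if len(samples) < 2 * win:
            return False
        energies = np.array([np.mean(samples[i:i + win] ** 2)
                             for i in range(0, len(samples) - win + 1, win)])
        background = np.median(energies) + 1e-12
        return bool(energies.max() / background > ratio)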

To the left is Rufus' fiberglass underskull.  Rufus has ten degrees of freedom, and can also express himself through a speaker that serves as his voicebox.  His latex/silicone skin is layered over this underskull to hide the hardware, and does a tremendous job of deforming (and performing!) in a skin-like manner under the actions of the underlying skeleton.

Here, you can see the camera mounted in Rufus' right eye.  The left eye, shown empty here, now contains an infrared sensor. 


This is the back of Rufus' head with his underskull removed.

And here's a great big smile from Rufus.

Future work includes plans for a "doghouse" body, additional modes of sensing, and... well, you'll have to wait and see. 
