Shortcuts: Creating Small Worlds

In the Shortcuts project, we are developing methods to automatically and unobtrusively learn the social network structure that arises within a group, based on data collected using the sociometer. The questions we are exploring are:

-  Who are the key players in the community? 
-  How does information propagate within the community?
-  How can we modify group interactions to promote better communication?

We have built the sociometer, a wearable sensor package for measuring face-to-face interactions between people, and we are developing methods for learning the structure and dynamics of human communication networks. Knowledge of how people interact is important in many disciplines, e.g., organizational behavior, social network analysis, and knowledge management applications such as expert finding. At present, researchers must rely mainly on questionnaires, surveys, or diaries to obtain data on physical interactions between people. We instead use the sociometer's noisy sensor measurements to build computational models of group interactions. Using statistical pattern recognition techniques such as dynamic Bayesian network models, we can automatically learn the underlying structure of the network and also analyze the dynamics of individual and group interactions. (Working Paper)
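To make the first question concrete, the sketch below builds a weighted graph from hypothetical pairwise interaction counts and ranks people by centrality. The names, counts, and use of the networkx library are illustrative assumptions, not the project's actual pipeline:

```python
# A minimal sketch, not the Shortcuts pipeline: rank "key players" from
# hypothetical counts of detected face-to-face conversations.
import networkx as nx

# Hypothetical output of a sociometer processing stage:
# (person_a, person_b, number_of_detected_conversations)
interaction_counts = [
    ("alice", "bob", 14),
    ("alice", "carol", 3),
    ("bob", "carol", 9),
    ("bob", "dave", 7),
    ("carol", "eve", 2),
    ("dave", "eve", 11),
]

G = nx.Graph()
for a, b, count in interaction_counts:
    G.add_edge(a, b, weight=count)

# Eigenvector centrality treats edge weight as tie strength, so people
# with many strong ties to other central people rank highest.
centrality = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)
for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.3f}")
```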

The Influence Model
The Influence Model was developed by Chalee Asavathiratham as a generative mechanism for efficiently modeling the effects of many interacting Markov processes. His work showed how complex phenomena involving interactions between large numbers of chains, such as the up/down time of power stations across the US power grid, could be simulated with this simplified model. In his description, all states were observed, and he did not develop a mechanism for learning the parameters of the model. We extend his model by adding the notion of hidden states and observations, and we develop an approximate algorithm for learning the parameters. Joint work with Brian Clarkson and Sumit Basu. (Technical Report)
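The generative mechanism is easy to state in code. Below is a minimal, fully observed sketch that assumes, for simplicity, a single shared state space and transition matrix (the general model allows richer per-pair dynamics); our extension treats these states as hidden and learns the parameters approximately:

```python
# A minimal sketch of fully observed influence-model dynamics, assuming
# a shared state space and transition matrix for simplicity.
import numpy as np

rng = np.random.default_rng(0)
n_chains, n_states, n_steps = 3, 2, 10

# A[s, :] = distribution over next states given current state s.
A = np.array([[0.9, 0.1],
              [0.3, 0.7]])

# alpha[i, j] = how strongly chain j influences chain i (rows sum to 1).
alpha = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.6, 0.2],
                  [0.3, 0.3, 0.4]])

states = rng.integers(n_states, size=n_chains)
for t in range(n_steps):
    new_states = np.empty_like(states)
    for i in range(n_chains):
        # Chain i's next-state distribution is a convex combination of
        # the rows of A selected by the current states of all chains.
        dist = alpha[i] @ A[states]          # shape: (n_states,)
        new_states[i] = rng.choice(n_states, p=dist)
    states = new_states
    print(t, states)
```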

The Facilitator Room
The facilitator room project is an attempt to observe, model, and affect the interaction patterns of its users. This involves sensing the users' interactions in the room using computer vision and audition technologies, and then interacting with them through active components in the room (speakers, projectors, etc.) in order to facilitate their interactions. Joint work with Brian Clarkson and Sumit Basu. (CVPR '01 Cues in Communication workshop paper)

Boosting and Structure Learning in Bayesian Networks

Bayesian networks are an attractive modeling tool for human sensing, as they combine an intuitive graphical representation with efficient algorithms for inference and learning. Boosting is a general method for improving the performance of a classifier. This work focuses on developing algorithms for boosting Bayesian networks: by boosting both the structure and the parameters of Bayesian networks, we can build better classifiers. Joint work with Jim Rehg and Vladimir Pavlovic. (ICPR '02 Paper)
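As a rough illustration of the parameter side of this idea, the sketch below boosts a naive Bayes classifier, the simplest Bayesian network classifier, with scikit-learn's AdaBoost on synthetic data. It is not the algorithm from the paper, which boosts structure as well:

```python
# A minimal sketch: AdaBoost over naive Bayes weak learners (naive
# Bayes is the simplest Bayesian network classifier). Synthetic data
# stands in for real sensing features.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

single = GaussianNB().fit(X_tr, y_tr)
boosted = AdaBoostClassifier(GaussianNB(), n_estimators=25).fit(X_tr, y_tr)

print("single naive Bayes: ", single.score(X_te, y_te))
print("boosted naive Bayes:", boosted.score(X_te, y_te))
```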


FaceFact: Study of Facial Features for Understanding Expression

A framework for the automatic detection, recognition, and interpretation of facial expressions, aimed at understanding the emotional or cognitive states that generate them. This work focuses on the study and analysis of facial expressions in natural conversations, from data recording through feature extraction and modeling. All analysis is done with person-specific models; to enable their use, a multi-modal person recognition system was developed for robust recognition in noisy environments. The study shows that it is very difficult to process and model events from spontaneous, natural interactions. The results show that some expressions, such as blinks, nods, and head shakes, are relatively easy to identify, whereas expressions like a talking mouth and smiles are harder. Data from conversations was recorded under conditions ranging from fully natural and unconstrained to having the subject's head fixed in place. Comparing the natural conversation data with the constrained data shows that useful expression information can be lost when constraints are imposed on a person's movement. Thus, if automatic expression analysis is to be a useful input modality in different applications, it is necessary to study expressions in natural and unconstrained environments. (Master's Thesis, ICPR '00 Paper)


Multi-modal Person Recognition using Unconstrained Audio and Video

The focus of this work is to develop a person identification technique that can recognize and verify people from unconstrained video and audio. We do not assume a fully frontal face image or clean speech as input. Our algorithm is able to detect and compensate for pose variation and changes in the auditory background, and to select the most reliable video frames and audio clips to use for recognition. We also use 3D depth information about the human head to detect the presence of an actual person, as opposed to an image of that person. Joint work with Brian Clarkson. (AVBPA '99 paper, Cover Feature from IEEE Computer Magazine)
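A simplified view of the multi-modal combination step is sketched below with invented identities, scores, and reliabilities; the paper's actual frame and clip selection is more involved:

```python
# A minimal sketch of reliability-weighted score fusion, not the
# paper's exact method: each modality yields per-identity
# log-likelihoods plus a confidence, and the fused score is a
# confidence-weighted sum.
import numpy as np

identities = ["alice", "bob", "carol"]           # hypothetical gallery

# Hypothetical per-identity log-likelihoods from each recognizer.
face_scores = np.array([-2.1, -0.4, -3.0])       # favors bob
voice_scores = np.array([-1.0, -1.2, -2.5])      # weakly favors alice

# Hypothetical reliabilities, e.g. from pose quality and audio SNR.
face_conf, voice_conf = 0.9, 0.3

fused = face_conf * face_scores + voice_conf * voice_scores
print("fused scores: ", dict(zip(identities, fused)))
print("identified as:", identities[int(np.argmax(fused))])
```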


Discriminative Training for Face Feature Classification

Principal Component Analysis (PCA) decomposes high-dimensional data into a low-dimensional subspace. This decomposition is used for data compression and is also widely used for classification tasks. The focus of this work is to derive discriminative principal components that are best suited for the classification task at hand. The performance of discriminative PCA was compared with that of regular PCA in classifying various facial attributes such as male/female, smiling/serious, child/teen/adult/senior, and White/African-American/Asian/Hispanic.
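The contrast can be sketched with off-the-shelf components: below, a classifier on ordinary (unsupervised) principal components is compared with one on a label-aware projection, using LDA as a stand-in for this project's discriminative-PCA formulation and synthetic data in place of facial features:

```python
# A minimal sketch contrasting unsupervised PCA with a label-aware
# (discriminative) projection; LDA stands in for this project's
# discriminative-PCA formulation, which is not reproduced here.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=600, n_features=50,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pca_clf = make_pipeline(PCA(n_components=1), LogisticRegression())
lda_clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                        LogisticRegression())

print("PCA projection:           ", pca_clf.fit(X_tr, y_tr).score(X_te, y_te))
print("discriminative projection:", lda_clf.fit(X_tr, y_tr).score(X_te, y_te))
```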


Wavelet Templates for Face Detection

Oren and Poggio propose using wavelet templates generated from an overcomplete basis set for object detection. In this project we implement the algorithm proposed by Oren and Poggio and compare its performance with existing face detection techniques that use skin color, facial symmetry, and principal component analysis. We measure the robustness of both approaches under changes in illumination, scale, and rotation.
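The primitive underlying such templates is a rectangular (Haar-like) wavelet response, computable in constant time from an integral image. A generic sketch, not this project's implementation:

```python
# A minimal sketch of the rectangular-wavelet primitive behind wavelet
# templates: an integral image lets any box sum, and hence a Haar-like
# two-rectangle response, be computed in constant time.
import numpy as np

def integral_image(img):
    # ii[r, c] = sum of img[:r, :c]; zero-padded so box_sum is branch-free.
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def box_sum(ii, top, left, height, width):
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])

def horizontal_edge_response(ii, top, left, height, width):
    # Dark-above-light (or vice versa) edge: upper half minus lower half.
    half = height // 2
    return (box_sum(ii, top, left, half, width)
            - box_sum(ii, top + half, left, half, width))

img = np.vstack([np.zeros((8, 16)), np.ones((8, 16))])  # synthetic edge
ii = integral_image(img)
print(horizontal_edge_response(ii, 0, 0, 16, 16))        # large magnitude
```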


Learning Eyebrows for Expression Recognition

The goal of this project is to segment, localize, and track eyebrows for the recognition of different facial expressions. Color and horizontal edge detection are combined to localize the eyebrows. A deformable contour model of the eyebrow is learned and used to generate feature points on the eyebrows for expression recognition. Finally, hidden Markov models are used to recognize the different eyebrow expressions.
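The final recognition step can be sketched with a standard HMM library: train one model per expression class on sequences of eyebrow feature points, then label a new sequence by maximum likelihood. The sketch below assumes the hmmlearn library, with synthetic sequences and invented class names:

```python
# A minimal sketch of HMM-based expression recognition, assuming the
# hmmlearn library: one Gaussian HMM per expression class, trained on
# sequences of eyebrow feature-point coordinates (synthetic here),
# with classification by maximum log-likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def synthetic_sequences(offset, n_seqs=20, length=30, dim=4):
    # Stand-in for tracked eyebrow feature points; offset fakes a class.
    return [offset + rng.normal(size=(length, dim)) for _ in range(n_seqs)]

classes = {"raise": 1.0, "frown": -1.0}          # invented class names
models = {}
for name, offset in classes.items():
    seqs = synthetic_sequences(offset)
    X, lengths = np.vstack(seqs), [len(s) for s in seqs]
    models[name] = GaussianHMM(n_components=3, random_state=0).fit(X, lengths)

test = synthetic_sequences(1.0, n_seqs=1)[0]     # a "raise"-like sequence
scores = {name: m.score(test) for name, m in models.items()}
print(max(scores, key=scores.get))               # expected: raise
```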


Face and Facial Feature Extraction and Tracking in Video Sequences

In this project, I worked on face and facial feature detection using color and shape information combined with valley energy functions. Once the features have been detected, a generic wire-frame model of the head and shoulders is fitted to the first frame of the video sequence. Head movement is then tracked by tracking the mesh node points and warping the mesh triangles based on the node triangle vectors. (Senior Thesis)
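Frame-to-frame tracking of mesh node points can be illustrated with pyramidal Lucas-Kanade optical flow in OpenCV; this is only an analogous sketch, not the thesis's tracker, and the file name and initial node coordinates are placeholders:

```python
# A minimal sketch of tracking mesh node points across video frames
# with pyramidal Lucas-Kanade optical flow (OpenCV). "video.avi" and
# the initial nodes are placeholders; in the project the nodes come
# from the wire-frame model fitted to the first frame.
import cv2
import numpy as np

cap = cv2.VideoCapture("video.avi")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Placeholder mesh node points, shape (N, 1, 2), float32.
nodes = np.array([[[100.0, 120.0]], [[140.0, 118.0]], [[120.0, 160.0]]],
                 dtype=np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_nodes, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                    nodes, None)
    nodes = new_nodes[status.ravel() == 1].reshape(-1, 1, 2)
    prev_gray = gray
    # The mesh triangles would be warped here using the updated nodes.

cap.release()
```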