I'm currently a senior research scientist at Google, working on foundation models, Bard, and more.
I did my PhD at MIT, where I was advised by Deb Roy in the Media Lab and Jacob Andreas in CSAIL. During my PhD, I spent time at Facebook AI Research with Jason Weston and Stephen Roller, and Google Brain with Peter J. Liu. Before that, I spent one year as a data scientist at Facebook, working on moderation and human-AI interaction. I graduated from UC Berkeley, where I did some research on topological data analysis, bioinformatics, and biomedical imaging.
I work on both capabilities and safety/alignment research in AI. I often use real-world applications to motivate advances in methods and in our understanding of how neural networks work. My ultimate goal is to aid human-AI collaboration at both the individual and societal level. My work has sometimes intersected with computational social science, human-AI interaction, and cognitive science.
Most recently, I've been focused on alignment, learning from human feedback, code LLMs, and planning (tool use, adaptive computation, memory).
*** The following was written in 2021. ***
With the goal of making machine learning more capable, flexible, and deployable, I have experience and interest in:
My work is sometimes motivated by the way humans communicate, such as the use of stories and pragmatic implicature, the nature of mass media, social influences on belief formation, and cognitive biases broadly. I started my undergrad as a bioengineering major, and my first stints in research were related to biomedical imaging and protein structure prediction — I remain greatly interested in the potential for machine learning to advance these fields. Finally, I sometimes explore the use of computational tools for art and creative purposes.
As of Fall 2021, some of my active projects were centered around:
I believe PhD students and the research community can benefit from more transparency into the non-linear path of research (both in individual projects and in career trajectories), and from discussion of negative results. I'm a fan of efforts such as I Can't Believe It's Not Better ⤷, negative results in NLP ⤷, and ML retrospectives ⤷. In that spirit, some smaller, exploratory, incomplete, or negative-results projects I've worked on during my PhD include: (1) hierarchical natural language plans to improve generative models, applied to SketchRNN, (2) disparate impact on minority communities of classifiers that distinguish between human-written and machine-generated text, (3) detecting the provenance of toxic generations in language models, (4) incorporating symbolic rules in neural text simplification, (5) controllable speech synthesis to toggle between natural and tutoring speaking modes, and (6) semi-supervised satellite image segmentation for resource allocation. Several of these projects are described in more detail below.
Most recent publications on Google Scholar.
Neural Language Models of Media Consumption can Predict Public Opinion
Eric Chu, Jacob Andreas, Steve Ansolabehere, Deb Roy
In submission to Nature Human Behaviour.
PaLM 2: Technical Report
Are Visual Explanations Useful? A Case Study in Model-in-the-loop Prediction
Eric Chu, Deb Roy, Jacob Andreas
Preprint.
Evolving Evocative 2D Views of Generated 3D Objects
Eric Chu
NeurIPS'21: Neural Information Processing Systems, ML for Creativity and Design Workshop
Parents’ Online School Reviews Reflect Several Racial and Socioeconomic Disparities in K–12 Education
Nabeel Gillani, Eric Chu, Doug Beeferman, Rebecca Eynon, Deb Roy
AERA Open '21: American Educational Research Association Journal.
Games for Fairness and Interpretability
Eric Chu‡, Nabeel Gillani‡, Sneha Priscilla Makini
WWW'20: Proceedings of The Web Conference, Workshop on Data Science for Social Good
ICLR'20: International Conference on Learning Representations, Towards Trustworthy ML Workshop
DAPPER: Learning Domain-Adapted Persona Representation Using Pretrained BERT and External Memory
Prashanth Vijayaraghavan, Eric Chu, Deb Roy
AACL'20: Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
MeanSum: A Neural Model for Unsupervised Multi-Document Abstractive Summarization
Eric Chu‡, Peter J. Liu‡
ICML'19: International Conference on Machine Learning
Learning Personas from Dialogue with Attentive Memory Networks
Eric Chu‡, Prashanth Vijayaraghavan‡, Deb Roy
EMNLP'18: Empirical Methods in Natural Language Processing
Artistic Influence GAN
Eric Chu
NeurIPS'18: Neural Information Processing Systems, ML for Creativity and Design Workshop
Audio-visual Sentiment Analysis for Learning Emotional Arcs in Movies
Eric Chu, Deb Roy
ICDM'17: International Conference on Data Mining
ICCV'17: International Conference on Computer Vision, Large Scale Movie Description Challenge
Human Atlas: A Tool for Mapping Social Networks
Martin Saveski, Eric Chu, Soroush Vosoughi, Deb Roy
WWW'16: International Conference on the World Wide Web. 2016. (Demo)
Full Resume in PDF.