Common Sense Reading Group (Spring 2002)

We are establishing an informal weekly reading group to discuss techniques for giving machines the capacity for human-like commonsense reasoning and ways to incorporate common sense into real-world applications. Topics will include:

Where and When

The meetings will be held Tuesdays from 4-5:30 in the Garden Conference Room on the 3rd floor of the Media Lab. The room might be a bit hard to find for those of you who have never been to the Media Lab before, so here is a link with a map (red circle).

Commonsense Bibliography

We have collected a large bibliography of papers on topics related to the problem of giving machines common sense. It is available at:

For the readings we will draw from this bibliography.

If you have any other questions about this reading group, please contact Push Singh, Barbara Barry, or Stefan Marti.

February 5, 2002 - Introduction (Push Singh)

Push: To begin with we are planning to read these two papers. The first is a classic by John McCarthy that was the first paper on logical AI, a popular way of representing commonsense knowledge. The second is a recent paper by Marvin Minsky that summarizes some of his current ideas about how to tackle the common sense problem, such as by developing architectures that can exploit many different kinds of representations and methods of reasoning.

McCarthy, John (1959). Programs with common sense.

Minsky, Marvin (2000). Commonsense-based interfaces. Communications of the ACM, 43(8), 67-73.

Meeting Notes

Push: Thanks to everyone who came! I thought it was a great discussion, with many different and interesting perspectives. Here are some of the questions people raised:

February 12, 2002 - Cyc (Push Singh)

Let's take a more focused look at Cyc. This first article is a decade old but still a good review of what is under the hood of that project.

Guha, Ramanathan, & Lenat, Douglas (1990). Cyc: A midterm report. AI Magazine, 11(3), 32-59.

This second article is a brief criticism of Cyc, reflecting some problems that existed in the project seven years ago. It provides a valuable outsider's perspective and some suggestions for how to evaluate Cyc:

If you want to play with a live version of Cyc, circa 1998, I set up a server at:

There is also extensive on-line documentation at:

This version has only a fraction of the knowledge of the full system but it has the full upper level ontology and enough specific knowledge to get a feel for the system. This is a kind of preview of the upcoming OpenCyc distribution.

Meeting Notes

Push: We only scratched the surface today of what's under the hood of Cyc and why it does things the way it does, but I hope it was enough to give people the general idea. Keep your eyes open for OpenCyc.

A few things about OpenCyc:

I ran across this CNN article today about Cyc. Looks like they are definitely moving towards an Open Mind approach:

I'll be seeing Lenat this weekend so I'll have a chance to query him in person about it.

Peter Gorniak: Some people on the (potential) speaker list for the Harvard seminar I mentioned:

Susan Carey; Ray S Jackendoff; Marc Hauser; Lera Boroditsky; Josh Tenenbaum; Jesse Snedeker; James Pustejovsky; Elizabeth Spelke; Deborah Kelemen; Deb Roy; Daniel Simons; Alec Marantz; Fei Xu; Paul Harris

The page for this course contains info about location and times.

I mentioned Lera Boroditsky's work. She's at

February 19, 2002 - Open Mind Common Sense (Push Singh)

Push: This week I'll talk about Open Mind Common Sense. There is a general overview of the project (and some high-level observations about common sense) at:

Singh, Push (2002). The Open Mind Common Sense project.

This paper describes the first version of the system:
Singh, Push (2002). The public acquisition of commonsense knowledge. Proceedings of AAAI Spring Symposium on Acquiring (and Using) Linguistic (and World) Knowledge for Information Access. Palo Alto, CA: AAAI.

This second paper describes the newer system that is under development:
Singh, Push, et al. (in submission). Open Mind Common Sense: Knowledge acquisition from the general public.

The system itself is running at:

February 26, 2002 - Commonsense in Applications (Henry Lieberman, Barbara Barry)

Push: We are meeting today to discuss "commonsense in applications". Henry Lieberman and Barbara Barry will talk about their recent work in this area.

Henry Lieberman: For this week's Common Sense group, I will lead discussion on Applications. Here's an abstract of what I'll present:

Beating Some Common Sense Into Interactive Applications. Henry Lieberman, Kim Waters, Hugo Liu.

What kinds of interactive applications are the best candidates for applying "common sense" knowledge? Applications like question answering and general story understanding are tough, because if the system doesn't do as good a job as a human, the users will not find the application successful. However, there are many other situations in interactive interfaces where the user interface is vastly underconstrained, and almost any kind of common sense help from the system would be welcome. The key is to cast the machine in a fail-soft role of giving suggestions or assistance rather than try to completely replace human judgment.

I'll illustrate this with a description of Aria (Annotation & Retrieval Integration Agent), an agent for helping users tell stories with digital photographs. Aria proactively looks for opportunities for annotating and retrieving photographs in the context of a user's ordinary work, such as composing e-mail messages. Common sense knowledge about picture-taking situations such as family events and vacation travel can substantially improve Aria's performance without the danger of raising user expectations to unreasonable levels.

Please read:

An Agent for Integrated Annotation and Retrieval of Images. Henry Lieberman, Elizabeth Rosenzweig, Push Singh.

A Calendar with Common Sense. Erik Mueller.

Meeting Notes

Push: Thanks to Henry and Barbara for a very interesting presentation! They are really pioneering this area of putting some commonsense reasoning into applications. As far as I know there has not been much prior work besides Erik Mueller's and some small demos by Cycorp. It's just very hard to do this kind of work without some sort of commonsense knowledge base to draw on.

March 5, 2002 - "Classical AI" approach to natural language processing (Martin C. Martin)

This week Martin has offered to lead a discussion on critiquing what has come to be called the "classical AI" approach to natural language processing and commonsense reasoning. The reading will be chapters 5 and 9 from Winograd and Flores:

Terry Winograd and Fernando Flores (1987). Understanding Computers and Cognition: A New Foundation for Design. Addison-Wesley. (I'm afraid we don't have those readings on-line.)

Meeting Notes

Push: Thanks Martin! I thought that was a great discussion.

I think the conclusion is that common sense is still possible :-), but we need to clarify and develop this idea that all knowledge is contextualized in many ways, and that it is rarely true that a small item of knowledge can totally stand alone. Henry has offered to lead a discussion later in the semester on this subject, for which there is actually quite a vast literature.

March 12, 2002 - Symbolic vs. Connectionist (Jake Beal)

Push: This week's topic will be: Symbolic vs Connectionist and other architectural issues in common sense. Jake has offered to lead the discussion, at least for the first hour. The papers will include the following one, and pieces of two others that Jake will send more information about later.

Minsky, Marvin (1991). Logical vs. analogical or symbolic vs. connectionist or neat vs. scruffy. AI Magazine, Summer 1991.

This is a beautiful discussion of the advantages and disadvantages of different types of representations, by a central figure in the debate. I guess you could say Marvin started off as a connectionist (he built the first neural network, and his thesis was the first on the topic), then became a powerful developer of symbolic methods (his students built some of the first symbolic and analogical heuristic reasoning systems) and critic of simple connectionist methods (including writing Perceptrons), and later began to develop theories that tried to combine the advantages of both and other approaches (the Society of Mind).

Jake Beal: Push has asked me to lead our discussion this week: the theme will be architectural approaches to common sense.

Idea: Common sense is often treated as an abstract entity, a block of explicit knowledge to be added to a system to give it common sense. The architectural approach holds that we should instead build systems in which common sense is implicit in the computational structures we build to represent the external world.

Background Reading: I've included three papers for you to look at: a background paper by Minsky from 1990, and papers from two current projects at the AI lab taking an architectural approach to the problem.

Minsky's paper on Symbolic vs. Connectionist AI

Bob Hearn's master's thesis Building Grounded Abstractions for Artificial Intelligence Programming

"20 Ways to Contribute to the Bridge Project"—a working document from the Genesis Group

Bob's thesis is rather large, so just make sure you've read the Introduction and the sections on Polynemes and Frames.

Meeting Notes

Push: Thanks Jake for leading last meeting! I encourage everyone to take a look at the Genesis home page, which describes those ideas in greater detail: finding ways to build common sense on top of more specific cognitive machinery evolved for visual inference, motor control, social reasoning, and many of the other abilities we share with the other mammals.

March 19, 2002 - Story understanding (Jimmy Lin)

Jimmy Lin: Commonsense: field trip this coming week

When: This coming Tuesday, 4:00-5:30pm (usual Commonsense Seminar meeting time)

Where: Harvard Coop (basement, children's book department)

What: We will be reading and discussing children's stories, with the following guiding questions:

  1. How much commonsense is really needed to understand these stories?
  2. How prevalent are the Winograd-like scenarios (e.g., kite and party, piggy bank and disappointment, etc.)?
  3. How complicated is the commonsense necessary to understand these stories? e.g., could we use stuff like hypernyms and holonyms from WordNet?

The literature that I would like to target is books for barely-literate children (beyond picture books, but just a little). The reason is that I would like to isolate commonsense problems from NLP issues, i.e., I would like to examine sentences that our parsers would have no problem parsing, so as not to get bogged down in NL issues like alternations, complex parallel constructions, etc.
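Question 3 above, on whether WordNet-style relations would suffice, can be sketched concretely. The fragment below is a toy stand-in for WordNet (the words and links are illustrative, not real WordNet data), showing how hypernym ("is-a") and holonym ("part-of") chains could answer simple story questions like the goose-and-feathers ones discussed later.

```python
# Toy fragment of hypernym and holonym links (illustrative, not real
# WordNet data), to test how far such relations go for story questions.

HYPERNYMS = {          # word -> more general word ("is-a" link)
    "goose": "bird",
    "porcupine": "rodent",
    "bird": "animal",
    "rodent": "animal",
}

HOLONYMS = {           # part -> whole it belongs to ("part-of" link)
    "feather": "bird",
    "quill": "porcupine",
}

def is_a(word, category):
    """Walk the hypernym chain to test whether `word` is a kind of `category`."""
    while word is not None:
        if word == category:
            return True
        word = HYPERNYMS.get(word)
    return False

def has_part(animal, part):
    """Use holonym links: does `part` belong to `animal` (via its kinds)?"""
    whole = HOLONYMS.get(part)
    return whole is not None and is_a(animal, whole)

print(is_a("goose", "animal"))           # True
print(has_part("goose", "feather"))      # True
print(has_part("porcupine", "feather"))  # False
```

Relations like these handle the taxonomic questions, but not the Winograd-like scenarios, which need knowledge about goals and situations rather than word-word links.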

Suggested Readings:

Erik Mueller. Prospects for in-depth story understanding by computer.

Wendy G. Lehnert (1981). A Computational Theory of Human Question Answering. In Aravind K. Joshi, Bonnie L. Webber, & Ivan A. Sag (Eds.), Elements of Discourse Understanding. Cambridge University Press, pp. 145-176. (I don't have an electronic copy... those of you in the AI Lab, please stop by my office 810. Those of you in the Media Lab, I'll try to get Push some physical copies.)

Lynette Hirschman, Marc Light, Eric Breck, & John Burger (1999). Deep Read: A Reading Comprehension System. Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL'99).

Come one, come all. If nothing else, it'll be great fun reading children's stories!

Meeting Notes

Push: Thanks Jimmy for leading the discussion today! It's very illuminating looking at actual children's stories, as it makes you realize how sophisticated even a first grader's mind is.

Henry Lieberman: Letter with subject: How I spent my Tuesday vacation

Dear Girls and Boys,

Once upon a time, there was a group of girls and boys called Common Sense. They went to spend an afternoon at the Harvard Coop Bookstore's Children's Basement.

Here are some of the things they learned:

We discovered that in many cases understanding the stories was not so simple as in the "Would Mary like a kite?" examples.

Tracy noted that understanding the pictures was often key. Sometimes the text was not even understandable without the pictures. In stories for young children, where the story forms are simpler, the idea is to get children to associate the text with the pictures, so they need to see the pictures to understand what the story is about.

Where can Spot be?
Is he behind a door? (Picture of Spot behind door)
Is he inside the clock? (Picture of Spot inside clock)
Is he under the bed?
There's Spot, he's under the rug.

Several members noted that the purpose of various stories differed. Some were meant to teach common sense.

.. and put the snowball in his pocket for tomorrow. (Snow melts if you take it indoors)

Others had a "moral" and were meant to teach ethics and social situations.

Shel Silverstein's The Giving Tree: "There once was a tree who loved a little boy..."

The story goes on to show what the tree gave the boy throughout his life as he grew up: apples, branches, shade. Some common sense knowledge about plot, morals of stories, and social situations is necessary to understand such stories. Does CYC have the requisite knowledge?

Some stories are meant to be funny, so intentionally show things that are incongruous.

The fox ate on the box.
The fox jumped over the box.
...
The box sat on the fox.

Some stories are meant to be fantasy, so show things like talking animals. How do you get the idea that animals can talk in stories, but not in real life?

Patty the porcupine and Gertie the Goose were talking. "What is it like to have feathers?" asked Patty. "It's nice" said Gertie. "Feathers are light and fluffy."

Push found a question-and-answer flip-card pack that asked questions about stories like this.

Is Gertie a goose or a porcupine?
What are Patty and Gertie doing when the story begins?
Which words describe feathers: sharp, light, spiky, fluffy?

This is at the 2nd grade level. Such questions always ask about exactly what is in the story. By 4th grade, they ask for the implicit knowledge not mentioned in the story.

We analyzed what knowledge is necessary for a simple story.

Cat is sleepy. Where can he sleep?

Animals sleep. When an animal has not been sleeping for a long time, it becomes sleepy.

It's too noisy here. It's not comfortable here.

To sleep, you need to find a place that is quiet and comfortable.

At last, a cozy place.

Henry suggested that when knowledge is put into a CS KB in order to enable understanding of a particular story, the system ought to keep the association between the knowledge and the particular stories it was learned from. CYC does not do this. That way, when you have to use the piece of knowledge in the future, you could map it back to the story and try to do an analogy between the stories.
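Henry's suggestion can be sketched as a minimal knowledge base that tags each assertion with the story it came from. This is an illustrative sketch only (the class, facts, and story names are made up here, and this is not how CYC or any real system stores knowledge); it just shows the provenance link he proposed.

```python
# Minimal sketch of a knowledge base that keeps, for each assertion,
# the story it was learned from, so later reasoning can map a rule
# back to its source story and attempt an analogy.
# All names and facts are illustrative.

class StoryKB:
    def __init__(self):
        self.facts = []  # list of (assertion, source_story) pairs

    def learn(self, assertion, source_story):
        """Store an assertion together with its provenance."""
        self.facts.append((assertion, source_story))

    def lookup(self, keyword):
        """Return matching assertions along with the stories they came from."""
        return [(a, s) for (a, s) in self.facts if keyword in a]

kb = StoryKB()
kb.learn("snow melts if you take it indoors", "snowball story")
kb.learn("to sleep you need a quiet, comfortable place", "sleepy cat story")

# When reasoning later about, say, ice cream left in a warm room, the
# system retrieves not just the rule but the story to analogize from.
print(kb.lookup("melts"))
# [('snow melts if you take it indoors', 'snowball story')]
```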

The Common Sense boys and girls did not get milk and cookies, but they did get crepes. And they lived happily ever after.

Your friend,


April 9, 2002 - Spatial and Diagrammatic Reasoning (Tracy Hammond, Olya Veselova)

Tracy Hammond and Olya Veselova will give a brief talk on: Human Perception of Geometric Shapes, or Common Sense in Learning Diagram Symbols, followed by a discussion of the papers.

Mary Hegarty (2001). Capacity Limits in Mechanical Reasoning. Fifteenth International Workshop on Qualitative Reasoning AAAI. San Antonio, Texas. May 17-19, 2001.

Sloman, Aaron (2001). Diagrams in the mind. In M. Anderson, B. Meyer, & P. Olivier (Eds.), Diagrammatic Representation and Reasoning. London: Springer-Verlag.

Meeting Notes

Push: Thanks Tracy and Olya for presenting today! Too bad we didn't get to the Sloman paper but I recommend you all read it if you haven't done so, since it's filled with nice observations about spatial reasoning.

April 23, 2002 - Mapping between different domains of reasoning (Peter Gorniak)

Today Peter will talk about evidence for the Lakoff-ian idea that our understanding of time is built by analogy to our understanding of space.

Peter Gorniak: At this Commonsense AI meeting, we'll talk about mapping between different domains of reasoning. I believe this to be an absolutely essential issue for any commonsense reasoning system with the flexibility and generativity of human reasoning.

Let's talk about:

I'll initiate with a short overview of the papers below and some of my current thoughts on these questions.

Boroditsky, L. (2000). Metaphoric Structuring: Understanding time through spatial metaphors. Cognition, 75(1), 1-28

Jackendoff, R. (2002) Foundations of Language. The link below points to the second half of chapter 11. Just look at 11.7 and 11.8.

Minsky, M. (1988) Society of Mind. I'm assuming people are familiar with this one.

Some of my thoughts on generativity in reasoning systems:

Meeting Notes

Push: Thanks to Peter for giving a great talk! We had a good discussion about the nature of "cognitive domains" and the correspondences that exist between them.

April 30, 2002 - Semantic Web (Stefan Marti)

Stefan Marti will talk about the growing connection between Common Sense and the Semantic Web, as both require representing and reasoning about a wide range of things about the everyday world. Stefan will send out the readings later today or tomorrow.

Stefan Marti: Here are the suggested readings for next week (and some comments):

Berners-Lee, Tim (1998). Semantic Web Road map. Short (ca. 8 pages), early paper/draft from the "inventor" of the Semantic Web.

Berners-Lee, Tim (1998). What the Semantic Web can represent. Even shorter (ca. 6 pages), still old, Q&A-like, rather unpolished, and sometimes unnecessarily detailed, but interesting because of its inverse approach: it's about what the Semantic Web is NOT.

Berners-Lee, Tim, Hendler, James, & Lassila, Ora (2001). The Semantic Web. Scientific American, 284(5), 34-43. A bit more polished. Not too deep.

Koivunen, M.-R. & Miller, E. (2001). W3C Semantic Web Activity. Finally, a non-Berners-Lee paper; about the six main Principles of the Semantic Web. A rather fast read, I would guess.

If you have more time to read:

Fensel, Dieter, & Musen, Mark A. (Eds.). (2001). The semantic web. IEEE Intelligent Systems, 16(2), 24-79. A nice collection of very recent articles, some of them very in-depth. Worth browsing through!
Zipped file of separate articles:
Pages 24-79 in one go:

W3C website. And finally: all the gory details... Probably just interesting for the aficionado.
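As a concrete illustration of the kind of statement these readings describe, here is a single machine-readable assertion in RDF/XML (the document URI is a placeholder, and this fragment is not from any of the papers above). It says "the creator of this page is Stefan Marti," using the Dublin Core `creator` property:

```xml
<!-- One RDF triple: subject (the page), predicate (dc:creator),
     object ("Stefan Marti"). The rdf:about URI is illustrative. -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/commonsense/semweb.html">
    <dc:creator>Stefan Marti</dc:creator>
  </rdf:Description>
</rdf:RDF>
```

The connection to common sense is that both efforts need shared, machine-usable vocabularies for describing everyday things, not just markup for display.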

Meeting Notes

Stefan's presentation (powerpoint slides).

Push: Thanks to Stefan for a stimulating discussion!

Here are some pointers to work that was brought up:

The AI/LCS Haystack project home page. Includes pointers to information on RDF, Notation3, and the Dublin Core Metadata Initiative.

This year's KR conference.

A dynamic prototype-based language with an XML syntax.

Also, here is an announcement for a course being taught next fall by Benjamin Grosof about knowledge representation and the Semantic Web:

Announcing a new advanced info technology course in Fall 2002 taught by Prof. Benjamin Grosof. The majority of the focus will be on AI knowledge representation, including the Semantic Web, and how to use it for e-business. Students will do a term paper, which can be original research or a literature survey.

May 7, 2002 - Analogical reasoning and knowledge acquisition from the public (Tim Chklovski)

Tim Chklovski: For the meeting on Tuesday, please take a look at:

My thesis proposal (the background and the chapter on the basic mechanisms are the most relevant parts)

For a closer look at using natural language ontologies to reason, take a look at: Enriching WordNet with Qualia Information, by Mendes and Chaves

For the discussion, I'd like to focus on alternatives to the full-blown logic approach.

There should also be a live demo of my "Learner" system that reasons over parsed natural language assertions collected by Push's Open Mind Common Sense, makes guesses about what else may be true, formulates questions about it, and asks you.
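The guess-then-ask loop described above can be sketched in miniature. This is a toy illustration of cumulative analogy under my own assumptions (the data and code are made up for this sketch, not Tim's actual Learner): if two things share known properties, guess that the remaining properties of one may hold for the other, and phrase that guess as a question to ask a user.

```python
# Toy sketch of a cumulative-analogy loop: find a similar known thing,
# then ask whether its extra properties carry over.
# Illustrative only; not the actual Learner system.

KNOWN = {
    "dog": {"is an animal", "has fur", "can swim"},
    "cat": {"is an animal", "has fur"},
}

def guess_questions(topic, kb):
    """For each thing sharing properties with `topic`, turn its extra
    properties into questions for a human to confirm or reject."""
    questions = []
    for other, props in kb.items():
        if other == topic:
            continue
        shared = kb[topic] & props
        if shared:
            for extra in props - kb[topic]:
                questions.append(
                    f"A {topic} is like a {other}. "
                    f"Can a {topic} also be described as '{extra}'?"
                )
    return questions

for q in guess_questions("cat", KNOWN):
    print(q)
# A cat is like a dog. Can a cat also be described as 'can swim'?
```

Each confirmed answer would be added back to the knowledge base, so the system's guesses improve as the public answers more questions.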

Meeting Notes

Push: Thanks Tim! I encourage people to visit Tim's learner site as soon as it is launched, to support his thesis work.

May 14, 2002 - Commonsense Thinking Machine (Push Singh)

Push: This will be the final meeting of the semester. I thought I would take the opportunity to describe a project Marvin and I have been working to get started, to build an architecture for a commonsense thinking machine. We arranged a remarkable symposium a few weeks ago, to discuss the idea with a number of other AI researchers who have worked in the commonsense area. A brief report about the symposium is available at:

You might also look at these slides by Aaron Sloman:

Meeting Notes

Stefan: Thank you Push! This approach looks very promising. The symposium seems to reflect the increased interest in commonsense reasoning.

Fall 2002 semester

Push: I was thinking we should continue the commonsense reading group this semester, with more of a focus on specific methods of commonsense reasoning. The spring term was a good introduction to the topic for people, but this fall it would be nice to look more deeply at known techniques. Here's an initial list of topics:

  1. Statistical inference - Judea Pearl on Bayesian networks (is there a discussion of Bayesian versus logical?)
  2. Judea Pearl on causality
  3. Analogical reasoning - Jaime Carbonell on derivational analogy (is there a more recent discussion?)
  4. Ernest Davis (The Naive Physics Perplex) on difficult issues in the Hayes & Moore program
  5. Ernest Davis on the advantages/disadvantages of 'Lucid representations'
  6. Probabilistic frame systems (Koller)
  7. McCarthy on elaboration tolerance
  8. Lenat & McCarthy on contexts
  9. Kuipers and Fahlman on 'networks of reasoning'
  10. Someone on neural network approaches to reasoning?
  11. Saul Amarel on reformulation in reasoning
  12. Logical formulations - Shanahan's solution to the frame problem
  13. Repair in case-based reasoning (possibly Hammond's CHEF)
  14. Blackboard systems (evolution of blackboard systems, Hayes-Roth)
  15. Reflection in reasoning ( )
  16. Distributed and multi-agent approaches to reasoning
  17. Learning rules (statistical - ???)
  18. Learning rules (symbolic)
  19. Reasoning in spatial representations?
  20. Reasoning about social situations?

Commonsense Reading Group, Fall 2002