Introduction to Knowledge Representation


Slides are at the bottom of the page! Until I correct it, note that there’s a typo on slide 46: the conjunction and disjunction symbols are swapped (correct: ∧ = and, ∨ = or). LaTeX should have caught this error :)

This section is largely based on Sowa’s Conceptual Structures; Abelson and Sussman’s SICP; Suppes’s Introduction to Logic; Partee et al.’s Mathematical Methods in Linguistics; Cruse’s Lexical Semantics; and Thinking about Android Epistemology.

The big questions

We’re concerned with the problem of getting knowledge into computers 1

Getting the knowledge

We can load the knowledge into the computer, or we can build a learning system that builds its own representation from examples.

Using the knowledge

Assuming we’ve got the knowledge, how do we use it to solve problems? (And what problems are worth solving?)

Knowledge is more than just a static set of facts; it involves the ability to use those “facts” to interact with the world.

Teddy Ruxpin needs an interface with the environment.

The information retrieval problem: which knowledge is relevant?


First combination of symbols: Ramon Llull

George Boole’s algebra of logic

Representing Items


Math is useful because it distills the idea to its essence (simplification) and is a precise and universal way of describing structures. For example, which would you prefer?

A set is a collection of arbitrary elements. In math, they are often numbers.

Russell’s paradox: consider the set of all sets that are not members of themselves,

U = {x | x ∉ x}

Is U ∈ U? Either answer leads to a contradiction.

Sets under union, intersection, and complement form a Boolean algebra.
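Python’s built-in sets make these operations concrete. A minimal sketch (the universe U and the sets A and B are made-up examples):

```python
# Finite sets under union, intersection, and complement
# behave as a Boolean algebra.
U = set(range(10))          # a finite "universe"
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

union = A | B               # {1, 2, 3, 4, 5, 6}
intersection = A & B        # {3, 4}
complement_A = U - A        # complement relative to U

# De Morgan's law: complement of a union = intersection of complements
assert U - (A | B) == (U - A) & (U - B)
```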

Describing properties of sets

All, every, each, not

Logic has a long history, beginning with Aristotle (4th century BC). It first began to be combined with math by Leibniz (17th century).

There are many good books on the subject 2.

Sentential Connectives

Sentences have truth values, and map to {True, False}. For example:

P = “it will rain today”

Q = “I will wear my umbrella hat”


    IF it is raining                        |       P -> Q
    THEN I will wear an umbrella hat        |

The two sentences can be completely irrelevant to each other: material implication depends only on truth values, not on any causal connection.

Inference in logic

The basic inference rule is modus ponens: from P and P -> Q, conclude Q.

Truth tables

    P  Q  |  P -> Q | !P v Q
    T  T  |    T    |   T
    T  F  |    F    |   F
    F  T  |    T    |   T
    F  F  |    T    |   T
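The table can be checked mechanically. A small Python sketch (the helper `implies` is my own name, not a standard function):

```python
from itertools import product

def implies(p, q):
    # For booleans, p <= q coincides with material implication P -> Q:
    # it is False only when p is True and q is False.
    return bool(p <= q)

# Enumerate all truth assignments and confirm P -> Q is equivalent to !P v Q.
print("P      Q      P->Q   !P v Q")
for p, q in product([True, False], repeat=2):
    print(f"{p!s:6} {q!s:6} {implies(p, q)!s:6} {((not p) or q)!s}")
```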


Applying truth-preserving transformations

Either (a) check consistency by trying to derive a contradiction, or (b) derive the conclusion from the premises.
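Option (b) can be sketched as a toy forward-chainer that repeatedly applies modus ponens until no new facts appear. The function and the rule names here are illustrative, not from any standard library:

```python
# Minimal forward chaining: from P and a rule P -> Q, add Q;
# repeat until the set of facts stops growing.
def forward_chain(facts, rules):
    """facts: set of atoms; rules: list of (premise, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [("raining", "wear_umbrella_hat"),
         ("wear_umbrella_hat", "dry_head")]
print(forward_chain({"raining"}, rules))
# {'raining', 'wear_umbrella_hat', 'dry_head'}
```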

History: Defining concepts

Computer Science

Object oriented programming

From Sun’s Java OOP Tutorial:

“Software objects are conceptually similar to real-world objects: they too consist of state and related behavior. An object stores its state in fields (variables in some programming languages) and exposes its behavior through methods (functions in some programming languages). Methods operate on an object’s internal state and serve as the primary mechanism for object-to-object communication. Hiding internal state and requiring all interaction to be performed through an object’s methods is known as data encapsulation — a fundamental principle of object-oriented programming.”
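The tutorial’s bicycle example translates directly into Python. A minimal sketch of state hidden behind methods (the class and method names follow the tutorial’s running example, but this code is my own illustration):

```python
# Encapsulation: internal state (_speed) is reached only through methods.
class Bicycle:
    def __init__(self):
        self._speed = 0                  # internal state, "hidden" by convention

    def speed_up(self, increment):       # behavior exposed as a method
        self._speed += increment

    def apply_brakes(self, decrement):
        self._speed = max(0, self._speed - decrement)

    def current_speed(self):
        return self._speed

bike = Bicycle()
bike.speed_up(10)
bike.apply_brakes(3)
print(bike.current_speed())  # 7
```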

Linguistics and Cognitive Science

Natural languages, as opposed to programming languages, are used for communication between people. Linguists have divided these into different levels:

  1. Prosody
  2. Phonology
  3. Morphology
  4. Syntax
  5. Semantics
  6. Pragmatics

( Endel Tulving (1972) classified memories into: episodic and semantic. Episodic is about individual activities and events. Semantic is dictionary-like knowledge like “All dogs have paws.”)

Cognitive Science

Categorization and nouns. People group objects in the world into categories on the basis of their “similarities”.

Lexical Semantics

Combinations of words are constrained not just by their meanings but also by their grammatical properties.

Learning Items

Francis Bacon’s Novum Organum

Computer Science

Find-S algorithm

Problems (from Tom Mitchell’s lecture slides): - can’t tell whether it has learned the concept - fails with inconsistent training data - picks a maximally specific h - there may be several consistent concepts in H
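For concreteness, here is one possible Python sketch of Find-S: start with the first positive example as the most specific hypothesis, then minimally generalize on each later positive example. The attribute data is made up; "?" marks a fully generalized attribute, following Mitchell’s notation:

```python
# Find-S: ignore negative examples; on each positive example, replace
# any mismatched attribute in the hypothesis with the wildcard "?".
def find_s(examples):
    """examples: list of (attribute_tuple, label) pairs; label True = positive."""
    h = None
    for attrs, positive in examples:
        if not positive:
            continue                      # Find-S ignores negative examples
        if h is None:
            h = list(attrs)               # first positive example: copy it
        else:
            h = [a if a == b else "?"     # generalize mismatched attributes
                 for a, b in zip(h, attrs)]
    return h

data = [(("sunny", "warm", "normal"), True),
        (("sunny", "warm", "high"),   True),
        (("rainy", "cold", "high"),   False)]
print(find_s(data))  # ['sunny', 'warm', '?']
```

Note how the sketch exhibits the listed problems: the negative example is never consulted, so inconsistent data goes undetected.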

Cognitive Science

Philosophy / Math

Representing Relations


Ordered tuples.
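In code, a binary relation is just a set of ordered pairs, and properties such as transitivity or symmetry can be checked directly. A sketch (the helper names are illustrative):

```python
# A binary relation on a set, represented as a set of ordered pairs.
domain = {1, 2, 3}
less_than = {(a, b) for a in domain for b in domain if a < b}
# {(1, 2), (1, 3), (2, 3)}

def is_transitive(rel):
    # whenever (a, b) and (b, c) are in the relation, so is (a, c)
    return all((a, c) in rel
               for (a, b) in rel for (b2, c) in rel if b == b2)

def is_symmetric(rel):
    return all((b, a) in rel for (a, b) in rel)

print(is_transitive(less_than), is_symmetric(less_than))  # True False
```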


Kinds of relations

Computer Science


  1. Lexicalist hypothesis (Chomsky 1968)
  2. Deep case hypothesis (Fillmore 1967)

Representing Processes


Learning Relations

Learning Procedures

Combining Knowledge


Generative grammars: hierarchy

Computer Science


Semantic Networks

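As a rough illustration, a semantic network can be sketched as nodes linked by “isa” edges, with properties inherited up the hierarchy. The concepts and the helper function here are invented for this example:

```python
# A tiny semantic network: "isa" links form the hierarchy;
# each node may carry its own properties.
isa = {"dog": "mammal", "mammal": "animal"}
properties = {"dog": {"barks"}, "mammal": {"has_fur"}, "animal": {"breathes"}}

def inherited_properties(concept):
    """Collect a concept's properties plus everything inherited via isa links."""
    props = set()
    while concept is not None:
        props |= properties.get(concept, set())
        concept = isa.get(concept)     # follow the isa link upward
    return props

print(sorted(inherited_properties("dog")))
# ['barks', 'breathes', 'has_fur']
```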

A function is a relation between two sets A and B, where: 1. A is the domain, and 2. each element of A is paired with exactly one element of B (the range).
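This definition can be checked mechanically. A sketch (the helper `is_function` is illustrative) that tests whether a set of ordered pairs is a function from A to B:

```python
# A set of pairs is a function from A to B iff every element of A
# appears exactly once as a first coordinate, with second coordinates in B.
def is_function(pairs, A, B):
    if not all(a in A and b in B for a, b in pairs):
        return False
    firsts = [a for a, _ in pairs]
    return set(firsts) == A and len(firsts) == len(set(firsts))

A, B = {1, 2, 3}, {"x", "y"}
f = {(1, "x"), (2, "x"), (3, "y")}      # a function
g = {(1, "x"), (1, "y"), (2, "x")}      # not: 1 has two images, 3 has none
print(is_function(f, A, B), is_function(g, A, B))  # True False
```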


Note: Slides are available online.

  1. As we will see later, if we want to do a good job on either of these tasks we will need to “tighten the loop” and build systems that actively learn in order to solve problems.

  2. Patrick Suppes: Introduction to Logic