LambdaProlog differs from Prolog in three major ways:

- Lambda expressions, such as `x\(x+1)`, including unification of abstractions.
- Implication (`=>`) and explicit quantification operators (`pi`, meaning "for all", and `sigma`, meaning "there exists").
- A type system. Predicates have type `o`, whose values are `true` or `false` (aka `fail`); base types include `int` and `float`, with values such as `5` and `2.5`.

Some example expressions:
```
x\ (x + 1)           % the successor function; cf Haskell's \x -> (x + 1)
a\b\ (a + b)         % the make-adder function
((a\b\(a + b)) 1)    % another version of the successor function
((a\b\(a + b)) 1 2)  % answers 3
```
```
[1, 2, 3]
(1 '.' 2 '.' 3 '.' [])  % same thing, using explicit "cons" operator '.'
                        % and nil '[]'; cf Haskell's 1:2:3:[]
```
```
kind product3 type -> type -> type -> type.   % these will be
type tuple3 A -> B -> C -> (product3 A B C).  % explained later
(tuple3 3 2.5 (x\(x+1)))   % tuple of (int, float, (int -> int))
```
| Expression | Type |
| --- | --- |
| `true` | `o` |
| `5` | `int` |
| `2.5` | `float` |
| `x\(x+1)` | `int -> int` |
| `a\b\(a+b)` | `int -> int -> int` |
| `[1, 2, 3]` | `(list int)` |
| `(tuple3 3 2.5 (x\(x+1)))` | `(product3 int float (int -> int))` |
| `a\([a, a])` | `A -> (list A)` |

`list` is a type-function of one argument; `product3` takes three. Value-functions are declared using `type`:

```
type is_two int -> o.
is_two X :- X = 2.   % returns true iff X is equal to 2
```

Named types and type-functions are declared using `kind`:

```
kind int type.
kind list type -> type.
```

Thus LambdaProlog makes the notion of type-function explicit, paralleling that of value-function. However, type-functions can only take regular types as arguments, not other type-functions. This makes LambdaProlog's type system weaker than Sather's.
The lambda `a\([a, a])` demonstrates a polymorphic function: given a value of any type, it returns a list of that type. As in Haskell, the types of polymorphic functions use variables, to be automatically filled in at point-of-call. Also like Haskell, LambdaProlog can infer this polymorphic type automatically.
However, LambdaProlog provides no "partial-polymorphism" feature like Haskell's type-classes. Hence LambdaProlog does not support overloading. This has serious ramifications on naming: e.g. integers add with `+` and assign with `is`, while floats add with `$+` and assign with `fis`.
Note that `is_two` used a variable for its argument. If the definition were

```
is_two x :- x = 2.
```

then this function would only be defined on the symbol `x`, where it returns `false`. Capitalization is really just syntactic sugar for a variable declaration, though Prolog does not allow variable declaration to be explicit. LambdaProlog, however, can express variable declarations explicitly, via lambdas and quantifiers. Since lambdas explicitly declare their arguments, the names of their arguments need not be capitalized. For example, we could define `is_two` with

```
pi x\ (is_two x :- x = 2).
```

which reads: "for all x, (is_two x) is true if x equals 2."
Recall how Haskell defines functions by pattern matching and guards:

```
length :: [a] -> Int
length [] = 0
length (h:t) = 1 + length t

sign :: Int -> Int
sign x | x < 0  = -1
sign x | x == 0 = 0
sign x | x > 0  = 1
```

Prolog allows the same, and much more. In fact, Prolog's pattern matching mechanism subsumes conditionals and return values so completely that the latter features are not even offered in the language.
Prolog uses a bigger, brawnier form of pattern matching, called "unification", which is bidirectional. Unification tries to find the simplest variable binding, on either side, required to bring two expressions together. Examples:
| Expression | Unified with | Produces binding |
| --- | --- | --- |
| `2` | `X` | `X = 2` |
| `X` | `2` | `X = 2` |
| `[1,2,3]` | `[H\|T]` | `H = 1, T = [2,3]` |
| `2` | `[H\|T]` | fail |
| `[1,2,1]` | `[X,Y,X]` | `X = 1, Y = 2` |
| `[1,2,3]` | `[X,Y,X]` | fail |
| `[X, 2]` | `[2, X]` | `X = 2` |
| `[X, 3]` | `[2, X]` | fail |
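The first-order cases in this table can be sketched in ordinary Python. This is a hypothetical toy (the names, the term encoding, and all details here are our own illustration, not LambdaProlog's actual engine): variables are uppercase strings, compound terms are tuples, and a substitution is a dict from variable names to terms.

```python
# Toy first-order unification (illustrative sketch only).
# Variables are strings starting with an uppercase letter; compound terms
# are tuples; everything else is a constant.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Chase bindings until we reach a non-variable or an unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    # Return an extended substitution, or None on failure.
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # constant clash: fail

def cons(xs, tail='[]'):
    # Build [1,2,3] as ('.', 1, ('.', 2, ('.', 3, '[]'))).
    for x in reversed(xs):
        tail = ('.', x, tail)
    return tail

print(unify(2, 'X', {}))                                  # {'X': 2}
print(unify(cons([1, 2, 3]), ('.', 'H', 'T'), {}))        # H bound to 1, T to [2,3]
print(unify(2, ('.', 'H', 'T'), {}))                      # None (fail)
print(unify(cons([1, 2, 1]), cons(['X', 'Y', 'X']), {}))  # {'X': 1, 'Y': 2}
print(unify(cons([1, 2, 3]), cons(['X', 'Y', 'X']), {}))  # None (fail)
```

Note how `walk` makes the repeated-variable rows work: by the time the second occurrence of `X` is compared, it has already been bound, so the comparison is against its value.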
A guard can also participate in unification. Consider the pattern `[X,Y,Z] :- X = Z`. It is helpful to interpret this as exactly equivalent to `[X,Y,X]`.
To see what predicates buy us, rewrite `length` and `sign` as boolean-valued functions, first in Haskell:

```
length :: [a] -> Int -> Bool
length [] 0 = True
length (h:t) (y+1) | (length t y) = True  -- n+k sugar, and a guard
length _ _ = False                        -- catch-all

sign :: Int -> Int -> Bool
sign x (-1) | (x < 0)  = True
sign x 0    | (x == 0) = True
sign x 1    | (x > 0)  = True
sign _ _ = False
```

For example, `length [1,2,3] 3` returns `True`, and `sign 1 (-1)` returns `False`. This is rather restrictive, since all that predicates let us do in Haskell is check a result, not generate one. But not so in Prolog. Here are the same predicates, in LambdaProlog:
```
type length (list A) -> int -> o.   % Bool is "o"
length [] 0.                        % "= True" is automatic
length [H|T] (Y+1) :- length T Y.   % use ":-" for a guard
                                    % "[H|T]" instead of "(h:t)"
                                    % catch-all is automatic

type sign int -> int -> o.
sign X (-1) :- X < 0.
sign X 0 :- X = 0.
sign X 1 :- X > 0.
```

Again, `length [1,2,3] 3` returns `true`, and `sign 1 (-1)` returns `false`. Note that the only significant syntactical difference is the implicit "= True" after every line, and the implicit catch-all `false`. Because of this, the only functions we can define in Prolog are predicates. This seems rather limiting, indeed.
But a predicate can do more than check. Consider the second clause of `sign`:

```
sign X 0 :- X = 0.
```

As reasoned above, the guarded expression `X 0 :- X = 0` is the same as `0 0`. Unifying, say, `0 Z` with this expression will bind Z to 0, according to the rules given above. Thus the call

```
sign 0 Z
```

will bind Z to 0 in the caller. Similarly,
```
sign 4 Z        % binds Z to 1
sign (-1) Z     % binds Z to -1
length [] Z     % binds Z to 0
length [1] Z    % binds Z to 0+1, i.e. 1
length [1,2] Z  % binds Z to 0+1+1, i.e. 2
```

For example, here is what happens in the second-to-last case:

1. Unify `length [1] Z` with the head `length [H|T] (Y+1)`. Result is `H = 1, T = [], Z = Y+1`.
2. Solve the guard `length T Y`, i.e. `length [] Y`. This yields `Y = 0`.
3. Combining `Y = 0` and `Z = Y+1` yields `Z = 0+1`.
What about the call `sign Z 1`? It would appear that any positive integer could be returned for Z. However, if the guard were more complicated, like `f X`, this would involve finding a Z for which `f` is true, which is impossible in general. So the answer is that Prolog knows how to solve some `f`s, like equality, but not others. Evaluating `sign Z 0` will bind Z to 0, but `sign Z 1` will fail. A little kludgy and not very general, but it works. Variants of Prolog generally focus on this problem, by extending the range of `f`s the language can handle. For example, CLP can handle inequalities. However, these variants add no new syntax; they only give meaning to what were already syntactically well-formed expressions. The challenge in logic programming seems to be catching up with the syntax.
The tricky part here is "goes back". Bindings resulting from unification can be unwound. Side-effects, however, like writing to the screen, cannot. Thus Prolog, like Haskell, shuns side-effects, though for a different reason: in Prolog it is the only way to make backtracking sane. The only common "effects" in Prolog are variable bindings.
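The unwinding machinery can be sketched with a "trail", a technique many Prolog implementations use (this Python fragment is our own hypothetical sketch, not any real engine): bindings made while trying one clause are recorded, and popped before trying the next clause. A `print`, once executed, has no such undo.

```python
# Sketch of trail-based backtracking (illustrative only).
subst = {}   # current variable bindings
trail = []   # names bound since the last choice point

def bind(var, value):
    subst[var] = value
    trail.append(var)

def choice_point():
    return len(trail)

def undo_to(mark):
    # Unwind every binding made after the choice point.
    while len(trail) > mark:
        del subst[trail.pop()]

# Try two alternative clauses for a goal that binds X:
mark = choice_point()
bind('X', 1)
print('trying X =', subst['X'])     # a side-effect: cannot be unwound
if subst['X'] != 2:                 # pretend this clause's guard fails
    undo_to(mark)                   # X is unbound again
    bind('X', 2)                    # try the next clause
print('solution: X =', subst['X'])  # prints: solution: X = 2
```

The first `print` still happened even though the clause that produced it was abandoned, which is exactly why side-effects and backtracking mix badly.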
Logic languages thus use names in the style of placeholders, not
commands or tags (as defined in the linguistics lecture). A name is an
environment slot, which gets filled in by unification and emptied by
backtracking. Names are bound only by constraints, like "the number
X such that sign X 0
is true". Names do not
denote exactly one thing, as would commands, nor can they be
reassigned at will, as can tags.
Nondeterminism allows simple expression of generate-and-test algorithms: for example, placing 8 queens on a chessboard such that they are not attacking each other. (This is a standard example in every Prolog text.)
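For comparison, here is generate-and-test outside of logic programming: a brute-force Python sketch of our own, where the generator enumerates one-queen-per-row-and-column placements and the test rejects shared diagonals.

```python
from itertools import permutations

def queens(n=8):
    # Generate: each permutation of columns places one queen per row,
    # so rows and columns never clash by construction.
    # Test: reject any placement where two queens share a diagonal.
    for cols in permutations(range(n)):
        if all(abs(cols[i] - cols[j]) != j - i
               for i in range(n) for j in range(i + 1, n)):
            yield cols

print(next(queens()))            # the first safe placement found
print(sum(1 for _ in queens()))  # 92 solutions on the classic 8x8 board
```

In Prolog the generator and the test would each be a predicate, and backtracking would thread them together automatically.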
Recall, though, that Prolog could not solve `sign Z 1`. This is not because there are multiple solutions; we now know that Prolog can return multiple solutions. The reason is just that the particular interpreter used in Standard Prolog and LambdaProlog doesn't know how to handle inequalities.
For example, `append` can be run in several directions:

```
type append (list A) -> (list A) -> (list A) -> o.
append [] X X.
append [E|X] Y [E|Z] :- append X Y Z.

append [1] [2] Z.    % binds Z to [1,2]
append X [2] [1,2].  % binds X to [1]
append X Y [1,2].    % three solutions:
                     %   X = [], Y = [1,2]
                     %   X = [1], Y = [2]
                     %   X = [1,2], Y = []
append [1] Y Z.      % Z = [1|Y] (Y remains unbound)
append X [2] Z.      % infinite number of solutions:
                     %   X = [], Z = [2]
                     %   X = [a], Z = [a,2]
                     %   X = [a,b], Z = [a,b,2]
                     %   etc.
```

This unique feature stems from Prolog's use of predicates, which define relations, rather than functions.
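The multi-way behavior of `append` on a fully known third argument can be imitated with a Python generator (a loose analogy of our own; Prolog's search is more general, e.g. it also handles the unbound-tail cases above):

```python
def append_rel(z):
    # Enumerate every pair (x, y) with x + y == z, mimicking the query
    # "append X Y Z" run backwards from a known Z.
    for i in range(len(z) + 1):
        yield z[:i], z[i:]

for x, y in append_rel([1, 2]):
    print(x, y)
# [] [1, 2]
# [1] [2]
# [1, 2] []
```

Seen this way, a relation is just a generator of consistent bindings, which is why one definition can answer several differently shaped queries.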
However, as pointed out above, there is some confusion about when a function is multi-way, because some interpreters can do more than others. Since there is no indication from the syntax that a guard is unsolvable, it is easy to inadvertently write one-way functions.
Unification in LambdaProlog extends to abstractions. Examples:

| Expression | Unified with | Produces binding |
| --- | --- | --- |
| `x\x` | `x\(F x)` | `F = x\x` |
| `1` | `(F 1)` | `F = x\1` or `x\x` |
| `z\z` | `z\(F (x\x) z)` | `F = s\x\x` or `s\x\(s x)` or `s\x\(s (s x))` or ... |
To prove `A => B`, we assume `A` and then prove `B`. Thus `A` plays the role of a local declaration, in effect until `B` is solved. Here is another implementation of list append, using a local declaration:
```
% match is an auxiliary function which eats the longest common prefix
% of X and Z
type match (list A) -> (list A) -> o.
match [E|X] [E|Z] :- match X Z.

type append (list A) -> (list A) -> (list A) -> o.
append X Y Z :- (match [] Y => match X Z).
```

The definition of `append` reads: "show that `match X Z` is true, given that `match [] Y` is true." The result is that `match` will eat the common prefix of X and Z, until X is exhausted. At that point, Y will be unified with the remainder of Z. It is not possible to make the declaration `match [] Y` global, because Y is the argument to `append`.
It is also possible to put the entire definition of `match` into `append`, making `match` an "invisible" helper function. That is,

```
type match (list A) -> (list A) -> o.
type append (list A) -> (list A) -> (list A) -> o.
append X Y Z :-
  (pi e\ pi x\ pi z\ (match [e|x] [e|z] :- match x z)) =>
  (match [] Y => match X Z).
```

Now there are two local declarations:

```
pi e\ pi x\ pi z\ (match [e|x] [e|z] :- match x z)
match [] Y
```

The first is an explicitly quantified version of the previous global declaration. (Global declarations are implicitly quantified; local declarations are not.)
However, these declarations are not local in the sense of an Algol
block, because any predicate which is called will see that assertion.
That is, if match
called the global predicate
foo
, then foo
would be able to call
match
, until append
exits and retracts the
local assertions. In this sense, the declarations are local, but
dynamically scoped. This gives up the encapsulation that would be
possible with true Algol blocks, as we will explore in the treasure hunt.
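This assert-then-retract discipline can be sketched in Python (our own hypothetical analogy, not how LambdaProlog is implemented): the assumed fact is visible to any callee while the goal runs, and disappears when the goal exits.

```python
# Sketch of "A => B": assert A, prove B (callees included), retract A.
facts = set()   # a global fact database (hypothetical)

def assuming(fact, goal):
    facts.add(fact)          # assert the local declaration
    try:
        return goal()        # any predicate called from here can see it
    finally:
        facts.discard(fact)  # retracted when the goal exits

def foo():
    # A "global" predicate: under dynamic scope it sees the assumption.
    return 'match_nil' in facts

print(assuming('match_nil', foo))   # True: foo saw the local assertion
print('match_nil' in facts)         # False: retracted after exit
```

An Algol-style local, by contrast, would be invisible inside `foo`; that is the encapsulation dynamic scoping gives up.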
Suppose we want to bind names to values directly, as we can in Haskell:

```
type two int.
two = 2

type succ int -> int.
succ = \x -> x + 1
```

But Prolog only allows the definition of predicates. So to get the same effect we must use a one-argument predicate:

```
two 2.
succ (x\(x+1)).
```

To get the value of `succ` we must bind it to a variable:

```
succ X, two Y, print (X Y).   % prints 3
```

This is neither comfortable nor readable. And since LambdaProlog has no type-predicates, we cannot use this mechanism to declare type synonyms:

```
type string type.        % invalid predicate
type string (list int).  % LambdaProlog rejects this
```

(In all fairness, Haskell can't declare type synonyms with '=' either; it needs its special `type` synonym syntax.) To get a local name, we can use implication to define a local one-argument predicate. But as we noted above, these predicates are dynamically scoped, so our name will be visible to callees.
However, while many tasks can be viewed as equation manipulation, such as expert systems and natural language processing, for others it is an unnatural fit. For example, we have even more of a rift between the language and its implementation than we had in Haskell. While the interpreter has extremely powerful mechanisms for unification and nondeterminism, neither of these is exposed in the language, so that they might be enhanced or applied to alternative uses. This is in addition to the tragedies of Haskell: no access to the type-graph or naming environments.
The non-generality of pure unification in Prolog shows through in the additions required by LambdaProlog. General-purpose programming requires a notion of function and local scope; this is evident just from looking at the implementation of Prolog. But while imperative languages have always had these features, logic languages rejected them, out of ideology. We hope that this discussion demonstrated that unification is not in opposition to functional programming, but augments it. For example, `unify` could just be a function which returns a binding. Hence Prolog could have started as an extension to Algol; then functions and local scope would not have to be reinvented. Thus it is partially exhilarating but partially unfortunate that combining features from different ideologies (e.g. object-oriented, functional, and logic programming) is a current research topic.