PLE lecture notes -- Haskell

Haskell is a purely functional language: data structures cannot be modified and names cannot be rebound. This is in stark contrast to object-oriented languages in which mutation is a central operation. In response to this restriction, Haskell emphasizes features that support "functional style", e.g. higher-order functions, lazy evaluation, and pattern matching.

Focal points


Booleans, e.g. True or False

Characters, e.g. 'a'

Integers, e.g. 5

Functions

The backslash replaces Scheme's "lambda". Free variables are lexically scoped. Application is juxtaposition.
\n -> n + 1           -- the successor function
succ = \n -> n + 1    -- bind "succ" to the successor function
succ n = n + 1        -- sugar for above; similar to Self
\a -> \b -> a + b     -- the make-adder function
(\a -> \b -> a + b) 1 -- another version of the successor function

Heterogeneous tuples
(5, 'a', succ)

Homogeneous lists
[1, 2, 3]
1:2:3:[]    -- same thing, using explicit "cons" operator ':' and nil '[]'
[1..3]      -- also the same thing (Haskell is heavily sugared)


Explicit types

Every expression in Haskell yields a value; there are no statements executed for side-effect. Furthermore, all values are statically typed. Like Sather, Haskell has a type language distinct from the value language. A type declaration is written using '::':
          True :: Bool
           'a' :: Char
             5 :: Int
          succ :: Int -> Int
       [1,2,3] :: [Int]
(5, 'a', succ) :: (Int, Char, Int -> Int)
The arrow operator is used for functions. The make-adder function would have type Int -> Int -> Int, i.e. it takes an integer and returns a function from integers to integers. This is also the type of add, a function that takes two integers and produces an integer. The reason is that multiple-argument functions are curried, as explained below.
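As a sketch (the names add and makeAdder are ours), the two readings denote the same function:

```haskell
add :: Int -> Int -> Int          -- parses as Int -> (Int -> Int)
add a b = a + b

makeAdder :: Int -> (Int -> Int)  -- same type, written explicitly
makeAdder a = \b -> a + b

-- add 1 2 and (makeAdder 1) 2 both evaluate to 3
```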

Implicit types

However, when we write Haskell programs, we usually don't need the '::' operator. This is because Haskell automatically reconstructs the types of functions and their arguments at compile-time. For example, this is factorial:
fact n = if (n == 0) then 1 else n * fact (n-1)
We can ask what type this function has, and Haskell will reply Int -> Int.
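A sketch of such a session; note that without an annotation a modern interpreter actually replies with the more general numeric type (Eq a, Num a) => a -> a, so here we pin it down:

```haskell
fact :: Int -> Int   -- with this annotation, ":type fact" reports Int -> Int
fact n = if n == 0 then 1 else n * fact (n - 1)
```

For example, fact 5 evaluates to 120.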


Polymorphism

When Haskell cannot restrict the type of an argument, the function automatically becomes polymorphic. For example, this function makes a pair of the item it receives:
duplicate x = (x, x)
What is the type of duplicate? Haskell says a -> (a, a), where a is a universally quantified type variable. This is a type-function, just like Sather's parameterized types. The difference is that Haskell automatically fills in the type-argument by inspecting the argument to duplicate. For example, these calls are both legal:
duplicate 1
duplicate ['a','b','c']


Type classes

Sometimes we want a function to be polymorphic over just a set of types, not all types. We may also want a function to have different implementations for different types, i.e. an overloaded function. Haskell supports these through type-classes and type-instances, respectively. A type-class describes the functions a type must support; a type-instance provides the implementations of these functions. These concepts correspond to Sather's class types and classes, respectively. For example,
class Eq a where
  (==) :: a -> a -> Bool 
defines a type-class where every instance supports the equality operator '=='. The operator takes two arguments of the same type and returns a boolean. Type-classes can inherit from each other.
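A sketch of such inheritance, with invented names; every Sortable instance must already be an Eq instance:

```haskell
-- the "Eq a =>" context makes Sortable a subclass of Eq
class Eq a => Sortable a where
  lessThan :: a -> a -> Bool

instance Sortable Int where
  lessThan = (<)
```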

Instantiating a type-class means labeling an existing type, e.g. Int, as belonging to that class, and providing implementations of the class methods, e.g.

instance Eq Int where
  x == y = intEq x y
Here intEq is a primitive for integer equality. Each instance can provide its own implementation of '=='; this is how we overload it.
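The same pattern works for user-defined types; a sketch with an invented Color type (the default definition of '/=' in terms of '==' comes along for free):

```haskell
data Color = Red | Green | Blue

instance Eq Color where
  Red   == Red   = True
  Green == Green = True
  Blue  == Blue  = True
  _     == _     = False
```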

Now we can declare polymorphic functions only over types which support equality:

duplicate :: (Eq a) => a -> (a, a)
duplicate x = (x, x)
Note that the type of duplicate must now be declared: nothing in the body requires equality, so without the signature, inference would produce the more general a -> (a, a).
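By contrast, when the body itself uses '==', the Eq constraint is inferred without any declaration; a sketch (the function name is ours):

```haskell
contains :: Eq a => a -> [a] -> Bool   -- this is also the inferred type
contains x ys =
  if null ys
    then False
    else x == head ys || contains x (tail ys)
```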

Whereas a -> (a, a) was an untyped type-function, which could take any type as argument, (Eq a) => a -> (a, a) is a typed type-function, whose arguments are themselves typed (by type-classes). Haskell's '=>' corresponds to using the '<' operator in Sather to restrict the parameters of parameterized types.

Haskell thus supports object-orientation based on traits. The data held by a value is defined by its type, but the operations on the value are defined by functions in an instance declaration (the traits). Furthermore, traits are themselves typed, by type-classes.


Modules

Haskell also has a module construct for building a closed namespace, and an interface construct for typing modules. No connection is drawn between this mechanism and type-classes, which are viewed as an overloading mechanism, not a naming mechanism.

Functional style

The expressiveness lost by immutability is partially recaptured by rich facilities for higher-order programming. Higher-order functions allow complex behavior to be generated by composing a few simple primitives. They can also make code more reusable, since behavior can be customized by substituting a different function in the composition.


Currying

Multiple-argument functions in Haskell are actually single-argument functions which return a function for the rest of the arguments. This is known as "currying". In Scheme, the definition of such a function is verbose, e.g.:
(define (make-linear-xform a)
  (lambda (b)
    (lambda (x) 
      (+ (* a x) b))))

(((make-linear-xform 2) 3) 3)   ; prints 9
In Haskell, it looks normal:
linearXform a b x = (a * x) + b

linearXform 2 3 3               -- prints 9
f = linearXform 2 3             -- currying
f 3                             -- prints 9

Even operators, which are just infix functions, can be curried:

succ = (+1)
succ 2                          -- prints 3
double = (*2)
double 3                        -- prints 6

Currying prevents awkward Scheme constructions like (lambda (y) (f x y)) when you really just want to say f x. Removing such a redundant lambda is known as eta reduction (a term from the lambda calculus).
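The same move in Haskell, with our own names:

```haskell
addOne :: Int -> Int
addOne x = x + 1

incAll1 :: [Int] -> [Int]
incAll1 xs = map (\x -> addOne x) xs   -- redundant lambda and argument

incAll2 :: [Int] -> [Int]
incAll2 = map addOne                   -- same function, after eta reduction
```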


Lazy evaluation

Another useful application of functions is delayed evaluation. For example, the delay form in Scheme wraps an expression into a zero-argument function, which is evaluated when needed by calling force. As described in SICP, delayed evaluation is instrumental for streams and data-flow programming, and is a good alternative to explicit mutation in many programs.

Like currying, delayed evaluation in Haskell is implicit. A function in Haskell can execute before its arguments are evaluated. This is known as non-strictness. In particular, Haskell is lazy, meaning that it postpones argument evaluation until the last possible moment: when the argument's value is actually needed by the function. This mirrors the semantics of delay and force. (In contrast, the Id language is non-strict but eager: arguments evaluate in parallel with the function.)

For example,

const1 x = 1     -- function which always returns 1
const1 (1/0)     -- prints 1; the division never happens

ones = 1:ones    -- infinite list of 1s
                 -- ':' is just an infix function; it is lazy, too
take 10 ones     -- prints [1,1,1,1,1,1,1,1,1,1]
                 -- "take n list" returns the first n elts of list
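Streams in the SICP sense then come for free; a sketch:

```haskell
-- infinite Fibonacci stream, defined in terms of itself;
-- laziness ensures only the demanded prefix is ever computed
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
```

For example, take 8 fibs evaluates to [0,1,1,2,3,5,8,13].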

Thus non-strict languages use names in the style of commands, not tags (as defined in the linguistics lecture). A name is bound to an arbitrary computation, which is executed when the name's value is needed (lazy evaluation) or concurrently (eager evaluation). A name cannot be rebound to something else; it "means" the result of that computation. For example, in English we use the noun phrase "the list containing 1 followed by the contents of the same list" to denote what in Haskell we called ones. Just as this noun phrase only means the infinite list of ones (it makes no sense to "rebind" it), the ones variable also cannot be rebound.

Declarative syntax

Haskell supplies various syntactic shortcuts so that functions can be defined in a declarative, mathematical style, i.e. without using "if".

Pattern matching

Pattern matching is sugar for packing and unpacking data structures. We first saw pattern matching in Python. For example:
x, y = 1, 2
def sum_pair((a,b)):
  return a + b
sum_pair((1,2))        # prints 3
We can do the same in Haskell:
(x, y) = (1, 2)
sum_pair (a,b) = a + b
sum_pair (1,2)         -- prints 3
Haskell allows pattern matching with arbitrary nesting of tuples, lists, and user-defined data structures.
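A sketch of such nesting (the function is our own): a tuple pattern containing list patterns, matched all at once:

```haskell
swapHeads :: ([a], [a]) -> ([a], [a])
swapHeads (x:xs, y:ys) = (y:xs, x:ys)   -- both lists non-empty
swapHeads pair         = pair           -- otherwise, unchanged
```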

For example, by using argument patterns we can define several versions of a function, as long as they have the same type. Here are two implementations of list length:

len1 x = if (x == []) then 0 else (1 + len1 (tail x))

len2 [] = 0
len2 (h:t) = 1 + len2 t
len2 has type [a] -> Int; len1's test against [] additionally requires an Eq constraint on the element type. Which do you find more readable?


Guards

For more sophisticated functions, we can use guard predicates:
sign1 x = if (x < 0) then -1 else if (x == 0) then 0 else 1

sign2 x | x < 0  = -1
sign2 x | x == 0 = 0
sign2 x | x > 0  = 1

fact n | n == 0 = 1
fact n | n >  0 = n * fact (n-1)

Haskell even lets us make definitions like

fact 0 = 1
fact (n+1) = (n+1) * fact n
which is just sugar for our original definition of fact.

Sweet tooth

We wonder why Haskell likes sugar so much. While useful for experts, having several ways to express the same thing makes the language harder to master. Perhaps the designers wish programs could just be mathematical equations.

Computers can do a lot of things; sometimes what they do is well described as solving equations. However, traditional mathematics has no representations for I/O, naming environments, sequentiality, and time, which are all important in programming. No wonder languages based on math, such as Haskell, have poor support for these, much less any capability for reflection, i.e. representing how the language itself operates.

Functional ideology

Can a language be improved by taking something out? Haskell's designers seem to think so. Functional programming is all about coping with the elimination of side-effects. To understand why this could be useful, consider the restrictions we've seen before. A benefit of those was efficiency, and the same holds for functionalism. At first this seems paradoxical; after all, "modifying" a list in Haskell requires making a brand-new list. However, when the old list can never be observed again, many compilers can detect this and mutate it in place, producing efficient code.

When evaluation has no side-effects, the evaluation process becomes easier to understand. Conceptually, the evaluator is just applying rewrite rules to the syntactic form of the expression.
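For example, here is the fact function from earlier rewritten step by step; each line is obtained from the previous one purely by rule application:

```haskell
fact :: Int -> Int
fact n = if n == 0 then 1 else n * fact (n - 1)

-- fact 2
--   => if 2 == 0 then 1 else 2 * fact (2 - 1)
--   => 2 * fact 1
--   => 2 * (1 * fact 0)
--   => 2 * (1 * 1)
--   => 2
```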

In a purely functional language, it doesn't matter which order we apply the rules; we will always get the same answer. Also, applying the evaluator more than once has no effect. (Compare this to quoted expressions in Scheme.) We can thus think of a program as an undeveloped photograph; evaluating the program develops it. Why is this interpretation useful? Because the compiler can apply these rules, too. It's called "equational reasoning," and is part of what makes functional compilers easier to write.

One advantage of functionalism, then, is that it is easier to reason about programs, and therefore make them more efficient. Remember, compilation is like decryption: it's not the possibility that counts, it's the work factor. Functional languages promise to have a smaller work factor for the most difficult optimizations. A good example is parallelism: when expressions cannot have side-effects then it is always safe to run them in parallel. In 6.847, you will learn about a (mostly) functional language called Id where programs run well on both sequential and massively parallel hardware. Declarative programming in general tends to make more things implicit, which puts a burden on the compiler but allows programs to automagically become more efficient as the compiler improves. Hence functional programs can have a longer life span.

Another possible advantage is the new programming style born of the restriction. Functional programming goes even further than static typing in making it hard to make errors. However, this is not strictly a linguistic issue; it is a matter of style. It's just that functional languages happen to be best at functional style, supporting pattern matching and non-strictness because they're forced to.

Breakfast cereal analogy

If you're wondering how Haskell fits in with the other languages we've seen, consider this analogy: both start from a similar, staple base, but while Sather adds mutation (raisins), Haskell adds syntax (sugar):
  1. Implicit types
  2. Implicit currying
  3. Implicit lazy evaluation
  4. Implicit selectors and conditionals, i.e. pattern matching
However, sugar doesn't always satisfy:
  1. Must declare type classes to do overloading
  2. Functions cannot take a variable number of arguments, optional arguments, or keyword arguments
  3. Strict variables must be annotated; implicit laziness implies explicit strictness
  4. Patterns must be linear: no repeated variables to enforce equality

Last modified: Sun Oct 5 18:09:07 EDT 1997