PLE treasure hunt -- Python


Many languages which don't use "endif", e.g. C, have a "dangling else" ambiguity when if statements are nested. How does Python avoid this ambiguity?
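(Hint: the answer is indentation. A small sketch, with function names of our own choosing, shows how the two groupings that are ambiguous in C must be written differently in Python:)

```python
# In C, "if (a) if (b) f(); else g();" is ambiguous on paper: the else
# could belong to either if. In Python, indentation forces a choice.

def else_binds_inner(a, b):
    if a:
        if b:
            return "f"
        else:           # indentation ties this else to the inner if
            return "g"
    return "none"

def else_binds_outer(a, b):
    if a:
        if b:
            return "f"
    else:               # indentation ties this else to the outer if
        return "g"
    return "none"
```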


Change the complex number class to support addition of complex numbers using the normal plus operator "+". Can you add an integer to a complex number in this way?
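(The complex number class from earlier in the hunt is not reproduced here. One possible sketch, in modern Python and with attribute names of our own, uses the __add__ and __radd__ hooks:)

```python
class Complex:
    def __init__(self, re, im):
        self.re = re
        self.im = im

    def __add__(self, other):
        # Promote a plain int or float to a Complex first.
        if not isinstance(other, Complex):
            other = Complex(other, 0)
        return Complex(self.re + other.re, self.im + other.im)

    # __radd__ handles the reversed case, e.g. 1 + Complex(2, 3).
    __radd__ = __add__

z = Complex(1, 2) + Complex(3, 4)
# z.re is 4, z.im is 6
w = 1 + Complex(2, 3)
# the integer is promoted, so w.re is 3 and w.im is 3
```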


Since Python doesn't check the types of function arguments, it is easy to write polymorphic functions. Write a function flatten(s) which takes a sequence of sequences of elements and produces a list of just the elements, in the order that they originally appeared. Another name for this function might be "in-order traversal". For example, flatten(([1,2],(3,4),"foo")) should be [1, 2, 3, 4, 'f', 'o', 'o']. It should also work on sequence objects, e.g.
from array import array
flatten([array('l', [1, 2]), array('d', [1.0, 2.0])])
# prints [1, 2, 1.0, 2.0]

Can you write a superflatten function which works no matter how deeply or unevenly nested the sequences are?
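(One possible sketch of both functions, written in present-day Python; the string special case in superflatten is needed because a one-character string iterates to itself:)

```python
def flatten(s):
    # One level only: walk each inner sequence, collect its elements.
    result = []
    for seq in s:
        for item in seq:
            result.append(item)
    return result

def superflatten(s):
    # Recurse into anything iterable except strings, which would
    # otherwise recurse forever.
    result = []
    for item in s:
        if isinstance(item, str):
            result.extend(item)
        else:
            try:
                iter(item)
            except TypeError:
                result.append(item)            # a leaf element
            else:
                result.extend(superflatten(item))
    return result

flatten(([1, 2], (3, 4), "foo"))
# gives [1, 2, 3, 4, 'f', 'o', 'o']
```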


Fix the addition bug with pos_complex, by modifying the complex class to be more general.
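(pos_complex is defined earlier in the hunt and not shown here; presumably it subclasses the complex class, and the bug is that the parent's __add__ hard-codes the parent's constructor, so adding two pos_complex values yields a plain complex. One general-purpose fix, sketched with stand-in names of our own, is to build the result from self.__class__:)

```python
class Complex:
    def __init__(self, re, im):
        self.re = re
        self.im = im

    def __add__(self, other):
        # Using self.__class__ instead of a hard-coded Complex(...)
        # means subclasses get back instances of their own type.
        return self.__class__(self.re + other.re, self.im + other.im)

class PosComplex(Complex):
    # Stand-in for pos_complex; a compatible constructor is assumed.
    pass

z = PosComplex(1, 1) + PosComplex(2, 2)
# z is a PosComplex, not a plain Complex
```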

Meta-question: how much should programmers have to plan for reuse? Can/should the language provide assistance? Could Python be designed so that classes designed and understood in isolation, like complex, are automatically reusable, without unexpected mishaps like this one?

Nested scopes

Python is lexically scoped. Unfortunately, it does not allow arbitrarily nested scopes: if you define a function inner inside a function outer, you cannot access the locals of outer from within inner. For example, say you would like to translate the following Scheme code:
(define make-adder
  (lambda (a)
    (lambda (b)
      (+ a b))))

(define add-two (make-adder 2))

(add-two 1)
; Value: 3
The straightforward translation into Python code fails:
def make_adder_bad(a):
	return lambda b: a+b

add_two = make_adder_bad(2)
add_two(1)
# error: the variable a is unknown inside the lambda
However, it is possible to get around this limitation by using default arguments.
def make_adder(a):
	return lambda b, aa=a: aa+b

add_two = make_adder(2)
add_two(1)
## returns 3
Now try using default arguments to achieve three levels of nesting to compute y = a*x + b, as in this Scheme code:
(define make-linear-xform
  (lambda (a)
    (lambda (b)
      (lambda (x)
        (+ (* a x) b)))))
Rewrite make_linear_xform in Python, so that you can say:
doubler = make_linear_xform(2)
double_add1 = doubler(1)
y = double_add1(3)
# Now, y should equal 7
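(One possible answer, applying the same default-argument trick at each level of nesting:)

```python
def make_linear_xform(a):
    # Each inner lambda captures the enclosing variables through
    # default arguments, since (old) Python has no nested scopes.
    return lambda b, aa=a: (lambda x, aa=aa, bb=b: aa*x + bb)

doubler = make_linear_xform(2)
double_add1 = doubler(1)
y = double_add1(3)
# Now, y should equal 7
```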

Shadow inheritance

Python allows both objects and the classes which define them to change at runtime. What effect do class changes have on already-instantiated objects? For example:
class c:
	i = 3

x = c()
x.i           (prints 3)
c.i = 4
x.i           (???)
What if we assign 3 to x.i (a no-op, right?) before we change c? What happens if we delete x.i (using the del statement)? Think of a few reasonable possibilities, then investigate what Python does. Can you speculate on how Python implements inheritance? We think "shadow inheritance" is a good term for it. Can you see why?
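(A runnable version of the experiment, in modern print syntax; the comments record what Python actually prints:)

```python
class c:
    i = 3

x = c()
print(x.i)     # 3: the instance has no i of its own, so c.i shows through
c.i = 4
print(x.i)     # 4: the lookup still falls through to the class
x.i = 3        # not a no-op! this creates an i in x's own dictionary
c.i = 5
print(x.i)     # 3: x's own i now shadows the class attribute
del x.i        # remove the shadow...
print(x.i)     # 5: ...and c.i shows through again
```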

See if you can confirm your model with this experiment:

class d(c):
	pass

y = d()
y.i          (prints 3)
d.i = 4
c.i = 5
del d.i
y.i          (???)
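(A runnable version of this experiment, again in modern print syntax, with a pass body so the subclass compiles:)

```python
class c:
    i = 3

class d(c):
    pass

y = d()
print(y.i)     # 3: i is found in c, reached via d
d.i = 4
c.i = 5
print(y.i)     # 4: d's own i now shadows c's
del d.i
print(y.i)     # with d.i gone, the lookup falls through to c again
```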

Speculate on how Python's classes, instances, modules, and tables could be unified into one mechanism, while preserving this behavior.

More Scoping

The Python tutorial says that variable reads are resolved by probing three successive scopes:
  1. Local: The current function block (if any).
  2. Global: The module of that function.
  3. Built-in: The __builtins__ of the module.
However, variable writes always go to the local scope, unless the special "global" declaration is used. Why do you think this is so? Could Python have used a "local" declaration instead? What are the tradeoffs here? (You may want to examine how other scripting languages deal with this issue, e.g. Tcl, Perl, Awk, and Rexx.)

Internally, Python has two kinds of assignment statement: assign-local and assign-global. The "global" declaration instructs the parser to switch between generating code for these two. Would it be an improvement if Python made this distinction explicit, by using two different syntaxes for assignment, rather than the "global" declaration? What about variable references?

Does the name lookup behavior of a block remind you of classes in Python? Do you think Python could use classes as a model of program structure? How might the assign-local/assign-global distinction be interpreted in terms of class inheritance?

Last modified: Tue Jan 30 22:43:07 EST 1996