Lambda Calculus 1

THE LAMBDA CALCULUS

Our Goals:

Today: Chap12.pdf in BB content ``lambda calculus''.

FUNCTIONAL LANGUAGES


* From Wikipedia (with thanks!): "Anonymous functions have been a feature of programming languages since Lisp in 1958. An increasing number of modern programming languages support anonymous functions, and some notable mainstream languages have recently added support for them, the most widespread being JavaScript,[1] C#,[2] Ruby[3] and PHP[4]. Anonymous functions were added to C++ in C++11. Some object-oriented programming languages have anonymous classes, which are a similar concept, but do not support anonymous functions. Java is such a language (although support for lambdas is on the roadmap for Java 8[5])."

EXECUTION ORDER

In evaluating arguments, want NO side-effects! Thus there is no fixed command execution order in (pure) FLs.

REPETITION

Command repetition (for, while, do, until,...) basic in ILs. FLs use recursion ...new versions of formal parameters are bound to new actual parameter values.

New values are associated with the formal-parameter names through function call nesting.
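
If it helps to see this concretely, here is a minimal Python sketch (an analogy only, not λ calculus; the function name sum_to is ours): each recursive call binds fresh values to the formal parameters, so nothing is ever re-assigned.

# Repetition via recursion: every call binds new values to n and acc,
# so no variable is ever updated in place.
def sum_to(n, acc=0):
    if n == 0:
        return acc
    return sum_to(n - 1, acc + n)   # fresh bindings for n and acc

print(sum_to(5))   # 15, computed with no loop and no assignment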

DATA STRUCTURES

In ILs, fields (record fields, array and list elements) changed by assignment.

In FLs, usually have explicit representations for data structures (often lists)...NOT global!

In FL, pass in whole data structure which gets returned as value --- less confusion, easier debugging...
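
A rough Python sketch of the idea (the helper name set_element is ours, purely illustrative): the "updated" structure is a new value returned by the function, and the original is untouched.

# Build and return a new structure instead of assigning into the old one.
def set_element(xs, i, value):
    return xs[:i] + [value] + xs[i + 1:]   # a fresh list; xs is not modified

original = [1, 2, 3]
updated = set_element(original, 1, 99)
print(original, updated)   # [1, 2, 3] [1, 99, 3]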

FUNCTIONS AS VALUES

Many ILs allow functions to be passed in as actual parameters, but often don't allow them to be returned as values. (C has function pointers, Matlab has function handles...).

But both capabilities basic to FLs.
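
For example, a short Python sketch (the names compose, inc, double are ours): a function is passed in as an argument and a brand-new function comes back as the result.

# Functions as first-class values: passed in AND returned.
def compose(f, g):
    return lambda x: f(g(x))     # the return value is itself a function

inc = lambda n: n + 1
double = lambda n: 2 * n
inc_then_double = compose(double, inc)
print(inc_then_double(3))        # double(inc(3)) = 8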

THEORY OF COMPUTATION

PURE LAMBDA CALCULUS (CHURCH 1930s)

ABSTRACTION and SPECIALIZATION

LAMBDA EXPRESSIONS

Only objects are functions that take function arguments and return function results (!). Only three types of expression. In BNF,
< expression > ::= < name > | < function > | < application >.

We won't let a λ expression be just a name, for technical reasons: names are used inside function bodies. A < name > is a sequence of non-blank characters, e.g. Chris, sqrt(69), my-long-var-name, 432, +, -->

A function abstracts over an expression (introduces a formal parameter) and has this form:
< function > ::= λ < name > . < body >
where < body > ::= < expression >, so it's nothing new. Three example functions (we'll see them again) are:
λx. x
λ first. λ second. first
λf. λa. (f a)
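
If you want to experiment, here is a rough Python transcription (an analogy only; the pure calculus has nothing but functions, whereas Python happily mixes in other values):

identity = lambda x: x                              # λx. x
select_first = lambda first: lambda second: first   # λ first. λ second. first
apply = lambda f: lambda a: f(a)                    # λf. λa. (f a)

print(identity(identity) is identity)               # True
print(select_first("a")("b"))                       # a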

LAMBDA EXPRESSIONS

The λ comes just before the ``formal parameter'', or bound variable: the name used for abstraction.

The . separates this name from the expression over which abstraction w.r.t. that name happens.

That expression is the body of the function. It can be any expression, including another function; thus we have functions that can return functions immediately.

Pure λ calculus functions don't have names.

BEGIN OPTIONAL: FUNCTION APPLICATION

< application > ::= ( < function expression > < argument expression > )
< function expression > ::= < expression >
< argument expression > ::= < expression >

E.g. (λx. x   Joey)
or (λx. x   λa.λb. b)
A little confusing, no? Only one rule to remember, but there's little redundancy.

FUNCTION APPLICATION

More...

Full beta reductions: Any redex can be reduced at any time. This means essentially the lack of any particular reduction strategy with regard to reducibility; "all bets are off".

Call by name: As normal order, but no reductions are performed inside abstractions. For example λx.(λx.x)x is in normal form according to this strategy, although it contains the redex (λx.x)x.

Call by value: Only the outermost redexes are reduced: a redex is reduced only when its right hand side has reduced to a value (variable or lambda abstraction).

Call by need: As in normal order, but function applications that would duplicate terms instead name the argument, which is then reduced only "when it is needed". Called in practical contexts "lazy evaluation". In implementations this "name" takes the form of a pointer, with the redex represented by a thunk (code to perform a delayed computation).

Most programming languages (including Lisp, ML and imperative languages like C and Java) are described as "strict", meaning that functions applied to non-normalising arguments are non-normalising. This is done essentially using applicative order, call by value reduction (see below), but usually called "eager evaluation".

Applicative order is not a normalising strategy. The usual counterexample is as follows: define Ω = ωω where ω = λx.xx. This entire expression contains only one redex, namely the whole expression; its reduct is again Ω. Since this is the only available reduction, Ω has no normal form (under any evaluation strategy). Using applicative order, the expression ((select-first identity) Ω), i.e. ((λx.λy.x) (λx.x)) Ω, is reduced by first reducing Ω to normal form (since it is the rightmost redex); but since Ω has no normal form, applicative order fails to find a normal form for it (KIΩ in the usual combinator notation).

In contrast, normal order is so called because it always finds a normalising reduction, if one exists. In the above example, ((select-first identity) Ω) reduces under normal order to identity, a normal form. A drawback is that redexes in the arguments may be copied, resulting in duplicated computation (for example, (λx.xx) ((λx.x)y) reduces to ((λx.x)y) ((λx.x)y) using this strategy; now there are two redexes, so full evaluation needs two more steps, but if the argument had been reduced first, there would now be none).
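
A hedged Python sketch of the Ω example (using a zero-argument lambda as the "thunk" that normal order / call by need effectively gives us; the variable names are ours):

identity = lambda x: x                  # I
select_first = lambda a: lambda b: a    # K

omega = lambda x: x(x)                  # ω = λx. x x
# Evaluating omega(omega) eagerly loops forever (Python raises RecursionError),
# which is applicative order failing to find a normal form.
Omega_thunk = lambda: omega(omega)      # Ω left unevaluated, as a thunk

# Normal order never looks at the unused argument, so the thunk is never forced:
print(select_first(identity)(Omega_thunk) is identity)   # True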

END OPTIONAL: Function Application Diagram

EXAMPLES

Examples: Apply identity to self-application:
(λx. x   λs. (s s))

Or reverse that:
(λ s.(s s)   λ x.x)

Or self-application to self:
(λ s.(s s)   λ s.(s s)).

Function application of Ident to Self-Apply:
((λ func. λ arg. (func arg) λ x.x) λ s.(s s))
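
A quick Python check of these examples (same informal transcription as before; the last one, self-application applied to itself, is deliberately not run, since it never terminates):

identity = lambda x: x
self_apply = lambda s: s(s)
apply = lambda func: lambda arg: func(arg)

print(identity(self_apply) is self_apply)          # (λx.x  λs.(s s)) => λs.(s s)
print(self_apply(identity) is identity)            # (λs.(s s)  λx.x) => λx.x
print(apply(identity)(self_apply) is self_apply)   # ((apply identity) self-apply)
# self_apply(self_apply) is the Ω term: it recurses forever, so don't run it.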

NEW NOTATION

Since we're working toward a higher-level language, we'll use syntactic sugaring to give us new, more user-friendly, shorter notation. Can always be undone.

Substitution rules (Unambiguous): only need pure λ calculus for evaluation.

To assure time-independence of evaluation, substitutions must all be possible statically --- done before evaluation.

NAMING FUNCTIONS and β REDUCTION

Pure λ calculus functions don't have names but we'll name them.
def < name > = < function >
Let's name the functions we have so far:
def identity = λ x. x
def self-apply = λ s.(s s)
def apply = λ func. λ arg. (func arg)

Only replace a name by its associated function when the name is the function expression of an application.

Thus we can see what the variable binding should be. Use == notation for any such substitution:
(< name > < argument >) == (< function > < argument >)
Replacing a bound variable in a function body with an argument is called β reduction... this is what happens after binding.

We use => notation for normal-order β reductions we don't write out in detail:
(< function > < argument >) => < expression >
When there is an obvious chain of them, use => ... =>

When an expression cannot be (further) β-reduced, it is in β-normal form, or just normal form.

QUESTIONS

A student wondered, you might have too: What about
(λx.x x) ??

First legalistic answer: we're not going to be using names as λ expressions. I'm not sure what the technical hangup is. We're used to using names as data objects in Prolog: [a,b,c,d] as a list, for example. There is also no reason why
(λx.x fred) shouldn't evaluate to fred. The restriction enforces "everything is a function", which otherwise wouldn't be true. Thus for mysterious or religious reasons we won't be seeing just this sort of example, BUT we see that it's an important issue for us.

Pretending it was "OK" to write (λx.x x), what happens with it? We remember from previous experience and understanding of scoping that the argument is "another x" from the one in the function definition. But our simple-minded rewriting method doesn't give us a way to manage "another x", so we're in trouble here. To be continued...

WARNING

(< name > < argument >) == ( < function > < argument >)

We'll soon have names for functions like identity and concepts like true, two, ...

For your sanity and success remember NEVER to expand a name until it is the function you want to apply, as above!

This saves you from useless and dangerous copying of complex substructure that is best notated by its simple name.

E.g.
(self-apply ((apply identity) apply)) == (λs. (s s) ((apply identity) apply))

Not
(self-apply ((apply identity) apply)) == (self-apply ((λ func. λ arg. (func arg) identity) apply))

BUT...! With applicative order, we'll have the luxury of evaluating arguments before functions. Then it's a judgement call, but often useful to do that. Here, clearly, (with -> as a new notation for applicative order reduction):
(self-apply ((apply identity) apply)) -> (self-apply apply)

FUNCTIONS FROM FUNCTIONS

Remember using append in Prolog to build other functions like first? Here are some re-definitions of functions we know:

def identity2
= λ x.((apply identity) x)

Applying it to, say, the identity function itself:

(identity2 identity)
== (λ x.((apply identity) x) identity)
=> ((apply identity) identity)
== ((λ func.λ arg.(func arg) identity) identity)
=> (λ arg.(identity arg) identity)
=> (identity identity)
=> ... => identity

It evaluates to its argument every time. Important Technique! Show for arbitrary argument <arg>.

(identity2 < arg >)
== (λ x.((apply identity) x) < arg >)
=> ((apply identity) < arg >)
=> ... => (identity < arg >)
=> ... => < arg >
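
The same check, mechanically, in the Python transcription (illustrative only):

apply = lambda func: lambda arg: func(arg)
identity = lambda x: x
identity2 = lambda x: apply(identity)(x)    # def identity2 = λ x.((apply identity) x)

print(identity2(identity) is identity)      # applied to identity, gives identity back
print(identity2(42))                        # 42: returns its argument every time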

MORE FUN WITH FUNCTIONS

Use function application function to get a function that is the same as the self-application function.

def self-apply2 = λ s. ((apply s) s)

Applying this to any argument < arg >:

(self-apply2 < arg >)
== (λ s.((apply s) s) < arg >)
=> ((apply < arg >) < arg >)
=> ... => (< arg > < arg >)
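
And the Python version of self-apply2 (illustrative only); applied to identity it reproduces (identity identity), i.e. identity:

apply = lambda func: lambda arg: func(arg)
identity = lambda x: x
self_apply2 = lambda s: apply(s)(s)         # def self-apply2 = λ s.((apply s) s)

print(self_apply2(identity) is identity)    # (< arg > < arg >) with < arg > = identity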

ARGUMENT SELECTION, ARGUMENT PAIRING

Basic, to be used in near future for operating on pairs, which are the only "data structure" we need to build a compiler in pure lambda calculus. Maybe not surprising given LISP.

Selecting First of Two Arguments:

def select-first = λ first. λ second. first

Bound variable: first. Body: λ second. first.
When applied to an argument, returns a new function which, when applied to another argument, returns the first argument.

Thus (easier?) when applied to two arguments, returns the first and tosses out the second.

Applying select-first to any two arguments returns the first one:

((select-first < arg1 >) < arg2 >)
== ((λ first. λ second. first < arg1 >) < arg2 >)
=> (λ second. < arg1 > < arg2 >)
=> < arg1 >
(Just ignore < arg2 >.)

SELECTING SECOND OF TWO ARGUMENTS

def select-second = λ first. λ second. second

Note body is just identity fn.

((select-second < arg1 >) < arg2 >)
== ((λ first. λ second. second < arg1 >) < arg2 >)
=> (λ second. second < arg2 >)
=> < arg2 >

The first argument is thrown away in the step where < arg1 > is bound to first, since first doesn't appear in the body
λ second. second.

Cutely, select-second applied to anything returns a version of identity.

Cute, cute...

A secret identity...
(select-second < arg >)
== (λ first. λ second. second < arg >)
=> λ second. second
≡ λ x. x

Second is "first" applied to "sameness"? (select-first identity) == (λ first. λ second. first identity) => λ second. identity == λ second. λ x. x ≡ % rename variables λ first. λ second. second ≡ select-second

MAKING PAIRS FROM TWO ARGUMENTS

Pairs: In 173's treatment, the only "trick" and "data structure" we'll use (for Boolean logic, integer arithmetic, lists and graphs, typing...).

def make-pair = λ first. λ second. λ func. ((func first) second)

Here first is the bound variable and the body is everything from the second λ to the end. It applies argument func to argument first to create a function that may be applied to argument second.

Note the arguments first, second are used before argument func to build a function:
λ func.((func first) second).

Thinking about pairs.

Notice that instead of directly "applying a function to a data structure", if we think of the pair as a sort of data structure (like a Scheme pair), we apply the pair to the function; the pair binds that function to func, and upon evaluation our function IS applied to the pair's components.

So if the pair above is applied to select-first then argument first is returned, and if it is applied to select-second then the second argument is returned.

Some Examples:

Make a pair of familiar functions:
((make-pair identity) apply)
== ((λ first. λ second. λ func. ((func first) second) identity) apply)
=> (λ second. λ func. ((func identity) second) apply)
=> λ func. ((func identity) apply)

The 3rd and 4th lines come from binding identity to first, then apply to second. So our pair is a function. Apply it to select-first:

(λ func. ((func identity) apply) select-first)
=> ((select-first identity) apply)
== ((λ first. λ second. first identity) apply)
=> (λ second. identity apply)
=> identity
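
The whole pair trick in the Python transcription (same informal analogy as before):

make_pair = lambda first: lambda second: lambda func: func(first)(second)
select_first = lambda a: lambda b: a
select_second = lambda a: lambda b: b
identity = lambda x: x
apply = lambda f: lambda a: f(a)

pair = make_pair(identity)(apply)        # λ func. ((func identity) apply)
print(pair(select_first) is identity)    # the first component comes back
print(pair(select_second) is apply)      # the second component comes back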

FREE AND BOUND VARIABLES --- SCOPING

What if bound variables in different fns have the same name? E.g. these should give the same result, identity, or λ x.x:
(λ f. (f λ x. x) λ s. (s s))
(λ f. (f λ f. f) λ s. (s s))
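
A quick Python check that the choice of bound-variable names makes no difference (illustrative transcription; 42 is an arbitrary test value):

self_apply = lambda s: s(s)
e1 = (lambda f: f(lambda x: x))(self_apply)   # (λ f. (f λ x. x)  λ s. (s s))
e2 = (lambda f: f(lambda f: f))(self_apply)   # (λ f. (f λ f. f)  λ s. (s s))
print(e1(42), e2(42))                         # 42 42: both behave as identity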

This issue SHOULD be familiar by now from your experience. In λ < name >. < body > the bound variable < name > may correspond to instances of
< name > in < body > but nowhere else. We say the scope of the bound variable < name > is < body >.

See the Notes for details. Variables can be bound and free at different places in same expression.

Scoping rules allow us to define formally:

NAME CLASHES AND α CONVERSION

For convenience, like standardizing apart in FOPC (giving unique names to all universally quantified variables). Maybe a bit worse: simple-minded, literal β reduction can do the wrong thing with free variables that also appear in the function definition. This is a name clash. E.g., here's apply:

def apply = λ func. λ arg. (func arg)

Consider ((apply arg) boing) == ((λ func. λ arg. (func arg) arg) boing).
Here, arg is used as a function bound variable name and as a free variable name in the leftmost application (which should not get replaced). But simple β reduction gives:
((λ func. λ arg. (func arg) arg) boing) => (λ arg. (arg arg) boing) => (boing boing),
not (arg boing) as we were hoping.

The only cure is consistent re-naming of variables, so that the different variables (free and bound) don't look alike. This is only good sense if you want to stay sane, and is called α conversion (or α reduction). (See Notes).
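
For what it's worth, a language with lexical scoping does this renaming for us automatically. Here is a hedged Python sketch of the same apply/arg/boing example (boing and the free arg are stand-in values of our own):

apply = lambda func: lambda arg: func(arg)

arg = lambda x: ("free arg applied to", x)   # a free variable sharing the bound name
boing = lambda x: ("boing", x)

# Lexical scoping keeps the outer arg distinct from apply's bound arg,
# so we get (arg boing), not (boing boing):
print(apply(arg)(boing))   # ('free arg applied to', <function boing ...>)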

SIMPLIFICATION THROUGH η REDUCTION

Just for completeness, here's a simplification that has a name and sometimes is handy: Consider
λ < name >. (< expression > < name >),

which is similar to the function application function after it has been applied only to a single function expression.

Claim: it's equivalent just to
< expression >.

Demonstration: Applying the function above to an argument is the same as just applying the
< expression > to the argument.
For an arbitrary < arg >:
(λ < name >. (< expression > < name >) < arg >) => (< expression > < arg >)

This simplification of
λ < name >. (< expression > < name >)
to
< expression >
is called eta reduction or η reduction.
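
In programming terms, η reduction says a wrapper that only forwards its argument can be dropped. A small hedged Python sketch:

import math

wrapped = lambda x: math.sqrt(x)        # λ < name >. (< expression > < name >)
# ...is interchangeable with math.sqrt itself, the η-reduced form:
print(wrapped(9.0) == math.sqrt(9.0))   # True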