Programming Language Theory
The Church-Turing Thesis equates a vaguely-defined set of "computable" functions with the partial recursive functions. Several systems are exactly as powerful as the partial recursives, and are thus Turing-complete: Turing machines and the λ-calculus are two. Programming languages provide further syntaxes and semantics, but all you really need in life is combinatory logic. Peter J. Landin's 1965 ACM report, "A correspondence between ALGOL 60 and Church's Lambda-notation", jump-started most of this.
There may, indeed, be other applications of the system than its use as a logic. -- Alonzo Church, 1932
Applicative/Functional Programming
Expressions compose functions rather than values. Backus proposed three tiers of complexity in his Turing Award lecture, "Can Programming Be Liberated from the von Neumann Style?":
- Simply functional language (fp): No state, limited names, finitely many functional forms, simple substitution semantics, algebraic laws
- Formal functional system (ffp): Extensible functional forms, functions represented by objects, translation of object representation to applicable form, formal semantics
- Applicative state transition system (ast): ffp plus mutable state and coarse-grained operations thereupon
Higher-order functions map one or more functions to a function. First-class support of functions provides composition of functions at runtime, as sketched below.
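Function composition is itself the canonical higher-order function. A minimal sketch in Python (the names compose, inc, and double are our own):

def compose(f, g):
    # Higher-order: consumes two functions, produces their composition.
    return lambda x: f(g(x))

inc = lambda x: x + 1
double = lambda x: 2 * x
print(compose(double, inc)(3))  # 2 * (3 + 1) = 8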
Combinatory Logic
Built from only variables, a set of primitive combinators (closed λ-expressions with their own reduction rules), and function application, combinatory logic provides no abstraction mechanism. A basis of primitive combinators from which every computable function can be built is called complete. The SKI calculus requires only the K and S combinators; Chris Barker's Iota and Jot require only one each.
SKI Calculus
- I ≡ λx. x (identity)
- K ≡ λx, y. x (generation of constant functions)
- S ≡ λx, y, z. (x z (y z)) (generalized function application)
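K and S alone suffice because I is derivable: S K K x → K x (K x) → x. A quick check in Python, modeling combinators as curried lambdas (a sketch; the bindings are our own):

S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = lambda x: x

derived_I = S(K)(K)          # S K K x → K x (K x) → x
print(derived_I(42), I(42))  # 42 42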
Grammar
C-term ::= C-app | C-prim | var
C-app ::= C-term C-term
Fixed-Point Combinators
For a function F, its set of fixed points is those inputs which map to themselves. For algebraic functions, for instance, the fixed points of a function F(x) are found by setting F(x) = x and solving for the roots. The fixed-point combinators compute the fixed points of their functional inputs. Curry's Y-combinator was the first:
- Y ≡ λf. (λx. f (x x)) (λx. f (x x)) (untyped λ-calculus)
- Y ≡ S (K (S I I)) (S (S (K S) K) (K (S I I))) (SKI calculus)
Divergence-free evaluation of the Y-combinator requires call-by-name semantics. Call-by-value semantics can make use of the Θv (Turing) or Z-combinators:
- Θv ≡ (λx. λy. (y (λz. x x y z))) (λx. λy. (y (λz. x x y z)))
- Z ≡ λf. (λx. f (λy. x x y)) (λx. f (λy. x x y)) (via η-expansion on Y)
The infinitely many fixed-point combinators of untyped λ-calculus are recursively enumerable.
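Python evaluates call-by-value, so Y diverges there, but the Z-combinator's η-expanded self-applications delay evaluation until a first argument arrives. A quick sketch (the factorial functional is our own example):

Z = lambda f: (lambda x: f(lambda y: x(x)(y)))(lambda x: f(lambda y: x(x)(y)))

# fact is a non-recursive functional; its fixed point is factorial.
fact = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)
print(Z(fact)(5))  # 120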
Untyped λ-calculus
Two operators (function definition and application) upon one operand type (λ-expression).
- Function definition: (λboundparam. body)
- Function application: function(actualparam)
The body is made up of free and bound variables. Those not present in the λ's list of bound variables are free. A λ-expression with no free variables is closed (closed expressions are equivalent in power to combinatory logic). A λ-expression enclosed in scopes binding its free variables is meaningful. Distinct free variables specify a λ-expression's distinct substitution classes.
Grammar
λ-term ::= λ-app | λ-abst | var
λ-app ::= λ-term λ-term
λ-abst ::= 'λ'var'.' λ-term
This abstract grammar can be augmented with associativity rules and grouping syntax (parentheses) to provide a concrete grammar. If verbosity is no issue, no associativity rules need be specified for the following grammar:
λ-term ::= λ-app | λ-abst | var
λ-app ::= '(' λ-term λ-term ')'
λ-abst ::= '(' 'λ'var'.' λ-term ')'
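The fully parenthesized concrete grammar admits a simple recursive-descent parser. A minimal sketch in Python (token and AST representations are our own):

import re

def tokenize(s):
    # Parentheses, λ, and '.' are single tokens; variables are [a-z][a-z0-9]*.
    return re.findall(r"[()λ.]|[a-z][a-z0-9]*", s)

def parse(tokens):
    tok = tokens.pop(0)
    if tok != '(':
        return ('var', tok)               # var
    if tokens[0] == 'λ':                  # '(' 'λ'var'.' λ-term ')'
        tokens.pop(0)
        v = tokens.pop(0)
        assert tokens.pop(0) == '.'
        body = parse(tokens)
        assert tokens.pop(0) == ')'
        return ('abst', v, body)
    fn = parse(tokens)                    # '(' λ-term λ-term ')'
    arg = parse(tokens)
    assert tokens.pop(0) == ')'
    return ('app', fn, arg)

print(parse(tokenize("((λx. (x x)) y)")))
# ('app', ('abst', 'x', ('app', ('var', 'x'), ('var', 'x'))), ('var', 'y'))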
Evaluation via Substitution
We'll use a substitution syntax to rewrite λ-calculus strings. {N / X}M substitutes expression N for free instances of X in expression M:
- If M is a variable ({N / X}V), replace M with N if and only if M is X.
- If M is a λ-app ({N / X}(A B)), {N / X}M = (({N / X}A) ({N / X}B))
- Otherwise, M is a λ-abst ({N / X}(λV. B)).
- If X is V, M is unchanged (X is bound in M).
- If X does not appear free in B, M is unchanged (no free X in M).
- If V does not occur free in N, substitute N for X throughout B.
- Otherwise, the substitution in this form is invalid (but it can be made valid via the ɑ-conversion introduced below).
With this syntax, we define evaluation rules; application of a rule preserves the corresponding equivalence:
- ɑ-conversion renames a bound variable to any variable y which does not occur free in the abstraction's body, preserving ɑ-equivalence:
- λx. m →ɑ λy. {y / x}m
- β-reduction replaces a formal parameter with an actual parameter, preserving β-equivalence:
- (λx. m) n →β {n / x}m
- An expression is in β-normal form if no β-reductions are possible; otherwise it contains a β-redex.
- η-reduction eliminates bindings unused within a scope (x must not occur free in f), preserving η-equivalence:
- (λx. (f x)) →η f
- An expression is in β-η-normal form ("beta-eta-normal form") if neither β- nor η-reductions are possible.
- If an η-reduction is applicable, the expression contains an η-redex.
According to the Church-Rosser Theorem, all terminating evaluation orders reduce an expression to the same normal form. Certain application orders might not terminate for a given expression, though, despite termination via other orders. This gives rise to the core difference between call-by-value and call-by-name semantics:
- leftmost-innermost application evaluates arguments as soon as possible, and is equivalent to leftmost call-by-value ("applicative order")
- leftmost-outermost application substitutes arguments prior to their evaluation, and is equivalent to call-by-name ("normal order")
Call-by-name terminates if any order does, but might require more total reductions. Call-by-value β-reduces a redex only when its argument is already a value (an abstraction), never an unevaluated application. Call-by-need memoizes the results of evaluated arguments, and is equivalent to call-by-name if side effects are disallowed. Call-by-name is text substitution augmented by capture avoidance. Wikipedia's evaluation strategy page is pretty thorough.
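The substitution rules and normal-order strategy mechanize directly. A minimal sketch in Python, with terms as nested tuples ('var', name), ('abst', param, body), ('app', fn, arg) (all representation choices are our own):

import itertools

_fresh = (f"v{i}" for i in itertools.count())

def free_vars(t):
    tag = t[0]
    if tag == 'var':
        return {t[1]}
    if tag == 'abst':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(n, x, m):
    # {N / X}M: substitute n for free occurrences of x in m.
    tag = m[0]
    if tag == 'var':
        return n if m[1] == x else m
    if tag == 'app':
        return ('app', subst(n, x, m[1]), subst(n, x, m[2]))
    v, body = m[1], m[2]
    if v == x:                    # x is bound here; no free x in m
        return m
    if v in free_vars(n):         # capture: ɑ-convert the binding first
        w = next(_fresh)
        body = subst(('var', w), v, body)
        v = w
    return ('abst', v, subst(n, x, body))

def normal_step(t):
    # One leftmost-outermost (normal-order) β-reduction, or None.
    if t[0] == 'app':
        if t[1][0] == 'abst':     # (λv. body) arg is a β-redex
            return subst(t[2], t[1][1], t[1][2])
        for i in (1, 2):
            s = normal_step(t[i])
            if s is not None:
                return ('app', t[1], s) if i == 2 else ('app', s, t[2])
    elif t[0] == 'abst':
        s = normal_step(t[2])
        if s is not None:
            return ('abst', t[1], s)
    return None

def normalize(t):
    while (s := normal_step(t)) is not None:
        t = s
    return t

# (λx. λy. x) applied to y: naive substitution would capture the free y.
term = ('app', ('abst', 'x', ('abst', 'y', ('var', 'x'))), ('var', 'y'))
print(normalize(term))            # ('abst', 'v0', ('var', 'y'))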
Numerics
The integers (or any countably infinite set) can be represented via the Church encoding (or Mogensen-Scott, or others):
- 0 ≡ λf. λx. x
- 1 ≡ λf. λx. f x
- 2 ≡ λf. λx. f (f x)
- 3 ≡ λf. λx. f (f (f x))
- n ≡ λf. λx. f^n x
Some basic arithmetic operations:
- plus ≡ λm. λn. λf. λx. m f (n f x) (from f^(m+n)(x) = f^m(f^n(x)))
- succ ≡ λn. λf. λx. f (n f x) (β-equivalent to (plus 1) for our 1 from above)
- mult ≡ λm. λn. λf. n (m f) (from f^(m·n)(x) = (f^m)^n(x))
- pow ≡ λm. λn. n m (computes m^n)
- pred ≡ λn. λf. λx. n (λg. λh. h (g f)) (λu. x) (λu. u) (returns 0 when applied to 0)
- sub ≡ λm. λn. (n pred) m (monus: yields 0 when n exceeds m; using pred from above)
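These encodings run as-is in any language with first-class functions. A quick check in Python (to_int, our own helper, recovers a machine integer by applying a numeral to a successor function and 0):

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mult = lambda m: lambda n: lambda f: n(m(f))
pred = lambda n: lambda f: lambda x: n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)

def to_int(n):
    return n(lambda i: i + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(plus(three)(three)))  # 6
print(to_int(mult(three)(three)))  # 9
print(to_int(pred(three)))         # 2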
Logic
The Church booleans evaluate to one of their two arguments:
- true ≡ λa. λb. a
- false ≡ λa. λb. b
Using them, we implement basic conditional logic:
- if[A, B, C] ≡ A B C (use true/false for A. Evaluates to B on true, C on false)
- not ≡ λt. t false true (β-equivalent to if[t, false, true])
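The booleans are likewise directly executable. A Python sketch (not_ avoids shadowing the builtin; note that Python's strictness evaluates both branches, so this suits value selection rather than control flow):

true = lambda a: lambda b: a
false = lambda a: lambda b: b
not_ = lambda t: t(false)(true)

# if[A, B, C] ≡ A B C: the boolean selects one of its two arguments.
print(true("then")("else"))        # then
print(not_(true)("then")("else"))  # else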
Syntactic sugar
- Left-associative application as implicit parentheses
- Use of definitions (allowing identifiers to stand in as λ-expressions)
- Currying: (λx, y. x + y) rather than (λx. (λy. x + y))
- Numeric literals rather than Church encoding
Programming Language Design
Syntax
Pierce describes three means of describing language syntax, using terms to refer to the expressions of the language (each built from at least one terminal):
- Induction on successive terms (of which EBNF is a compact notation)
- Inference from axioms (terminals) and successive inference terms
- Concrete definition (generative grammar) placing successive sets of alternate terms in bijection with the ordinals
Semantics
Pierce describes three techniques of formalizing semantics:
- Operational semantics define one or more formal abstract machines, and attempt to exhaustively describe the language constructs in these machines' terms. Emphasis is on proof of various abstract machines' equivalence (and correspondence to actual implementations), and thus proof of an implementation's correctness. When finite state machines are used (this is only possible for simple languages), this is Plotkin's small-step or structural operational semantics; more powerful languages might make use of Kahn's big-step or natural semantics.
- Denotational semantics map between different semantic domains (the investigation of which constitutes domain theory) using interpretation functions. Domains are interesting if results have been proven for their properties, such as termination and safety results. Denotational semantics can thus lead to proof schemas for a language, but meaningful interpretation functions seem to largely require domain-driven language design.
- While operational and denotational semantics define the behaviors of programs, and then prove laws about them, axiomatic semantics define the behavior of the language in terms of these laws themselves. As Pierce says: "The meaning of a term is just what can be proved about it."
Sources
- Professor Spencer Rugaber's CS6390 "Theory and Design of Programming Languages". Georgia Tech, Fall 2009.
- Hindley and Seldin. Lambda-Calculus and Combinators: An Introduction, Second Edition. Cambridge University Press, 2008.
- Goldberg. "On the Recursive Enumerability of Fixed-Point Combinators". BRICS Report RS-05-1, January 2005.
- Van Roy and Haridi. Concepts, Techniques and Models of Computer Programming. MIT Press, 2004.
- Pierce. Types and Programming Languages. MIT Press, 2002.
- Felleisen. "A Lecture on the Why of Y". 1997.
- Winskel. Formal Semantics of Programming Languages. MIT Press, 1993.
- Hennessy. Semantics of Programming Languages. Wiley, 1990.
- Barendregt. The Lambda Calculus: Its Syntax and Semantics, Second Edition. North Holland, 1985.