Programming Language Theory
[[CATEGORY: CS GRE Prep]]
The Church-Turing Thesis equates a vaguely-defined set of "computable" functions with the [http://en.wikipedia.org/wiki/Computable_function partial recursive functions]. Several systems have been proven equivalent to the partial recursives in power, such as Turing machines and the λ-calculus (practical programming languages generally provide further syntaxes and semantics, but a [[Real Programmer]] is perfectly happy with combinatory logic and a beer). Peter J. Landin's 1965 ACM report, "A correspondence between ALGOL 60 and Church's Lambda-notation", is blamed by most textbooks for jump-starting the study of programming language theory, ensuring computer scientists fifty years later would still be exploring [http://en.wikipedia.org/wiki/Topos topoi], [http://en.wikipedia.org/wiki/Sheaf_theory sheaves], [http://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence Curry-Howard correspondences] and fixed-point combinators.<blockquote>''There may, indeed, be other applications of the system than its use as a logic.'' -- Alonzo Church, 1932</blockquote>
 
==Functions and Declarative Programming==
Expressions in functional programs compose functions (as opposed to "values"). Backus proposed three tiers of complexity in his Turing Award lecture:
* Simply functional language (<tt>fp</tt>): No state, limited names, finitely many functional forms, simple substitution semantics, algebraic laws
* Formal functional system (<tt>ffp</tt>): Extensible functional forms, functions represented by objects, translation of object representation to applicable form, formal semantics
* Applicative state transition system (<tt>ast</tt>): <tt>ffp</tt> plus mutable state and coarse-grained operations thereupon
''Higher-order functions'' map one or more functions to a function. First-class support of functions permits composition of functions at runtime.
===Combinatory Logic===
Built from only variables, a set of primitive combinators (closed λ-expressions with their own reduction rules), and function application, combinatory logic provides no ''abstraction'' mechanism. A kernel of primitive combinators sufficient to express every computable function is ''complete''. The SKI calculus requires only the K and S combinators; Chris Barker's [http://semarch.linguistics.fas.nyu.edu/barker/Iota/ Iota and Jot] require only one each.
====SKI Calculus====
*<tt>I ≡ λx. x</tt> (identity)
*<tt>K ≡ λx, y. x</tt> (generation of constant functions)
*<tt>S ≡ λx, y, z. (x z (y z))</tt> (generalized function application)
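As a quick sanity check (a Python sketch, with the combinators as curried lambdas; not part of the calculus proper):

```python
# The S, K, I combinators as curried Python lambdas.
I = lambda x: x                                # identity
K = lambda x: lambda y: x                      # constant functions
S = lambda x: lambda y: lambda z: x(z)(y(z))   # generalized application

# I is derivable from S and K alone: S K K behaves as the identity.
assert S(K)(K)(42) == 42
assert K("a")("b") == "a"
```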
====Grammar====
<pre>C-term ::= C-app | C-prim | var
C-app ::= C-term C-term</pre>
====Divergent Combinators====
The ''omega'' combinator is divergent. It can be reduced, but has no normal form:
* <tt>omega ≡ (λx. x x) (λx. x x)</tt>
A combinator which is in normal form, but not reduced to a value, is said to be ''stuck''. One of the major objectives of static [[Programming_Language_Theory#Typing|typing systems]] is to detect possible "stuck" reductions.
====Fixed-Point Combinators====
For a function F, its set of ''fixed points'' are those inputs which map to themselves. Provided algebraic functions, for instance, fixed points of a function F(x) would be found by setting <i>F(x) = x</i> and solving for the roots. Provided an ordered, finite domain, looping through the domain will find all solutions. For an infinite ordered domain, looping is a semi-decision: if there is a least fixed point, it will be found, but the loop never terminates otherwise. The ''fixed-point combinators'' compute fixed points of their functional inputs. Curry's ''Y-combinator'' was the first (it always computes the least fixed point):
* <tt>Y ≡ λf. (λx. f (x x)) (λx. f (x x))</tt> (untyped λ-calculus)
* <tt>Y ≡ S (K (S I I)) (S (S (K S) K) (K (S I I)))</tt> (SKI calculus)
Divergence-free evaluation of the Y-combinator requires call-by-name semantics. Call-by-value semantics can make use of the Θ<sub>v</sub> (Turing) or Z-combinators:
* <tt>Θ<sub>v</sub> ≡ (λx. λy. (y (λz. x x y z))) (λx. λy. (y (λz. x x y z)))</tt>
* <tt>Z ≡ λf. (λx. f (λy. x x y)) (λx. f (λy. x x y))</tt> (via [http://mlton.org/EtaExpansion η-expansion] on Y)
The infinitely many fixed-point combinators of untyped λ-calculus are recursively enumerable.
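Python's evaluation is call-by-value, so the Z-combinator (rather than Y) can be demonstrated directly; <tt>fact_step</tt> below is my own illustrative name for a non-recursive function whose fixed point is factorial:

```python
# The Z-combinator works under Python's call-by-value evaluation;
# the plain Y-combinator would recurse forever here.
Z = lambda f: (lambda x: f(lambda y: x(x)(y)))(lambda x: f(lambda y: x(x)(y)))

# A non-recursive "step" function whose fixed point is factorial.
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)
fact = Z(fact_step)
assert fact(5) == 120
```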
===Untyped λ-calculus===
Two operators (function ''definition'' and ''application'') upon one operand type (λ-expression).
* Function definition: <tt>(λ''boundparam''. body)</tt>
* Function application: <tt>function(''actualparam'')</tt>
The body is made up of ''free'' and ''bound'' variables. Those not present in the λ's list of bound variables are free. A λ-expression with no free variables is ''closed'' (closed expressions are equivalent in power to [[Programming_Language_Theory#Combinatory_Logic|combinatory logic]]). A λ-expression enclosed in scopes of its free variables is ''meaningful''. Distinct free variables specify a λ-expression's distinct [[Programming_Language_Theory#Evaluation_via_Substitution|substitution]] classes.
====Grammar====
<pre>λ-term ::= λ-app | λ-abst | var
λ-app ::= λ-term λ-term
λ-abst ::= 'λ'var'.' λ-term</pre>
This abstract grammar can be augmented with associativity rules and grouping syntax (parentheses) to provide a concrete grammar. If verbosity is no issue, no associativity rules need be specified for the following grammar:
<pre>λ-term ::= λ-app | λ-abst | var
λ-app ::= '(' λ-term λ-term ')'
λ-abst ::= '(' 'λ'var'.' λ-term ')'</pre>
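The verbose grammar parses by trivial recursive descent. A Python sketch (my own encoding: terms become nested tuples, variables are single letters, no whitespace):

```python
# Recursive-descent parser for the fully parenthesized concrete grammar.
# Assumes single-letter variables and no whitespace.
def parse(s):
    def term(i):
        if s[i] == '(':
            if s[i + 1] == 'λ':              # λ-abst ::= '(' 'λ' var '.' λ-term ')'
                var = s[i + 2]
                assert s[i + 3] == '.'
                body, j = term(i + 4)
                assert s[j] == ')'
                return ('abst', var, body), j + 1
            left, j = term(i + 1)            # λ-app ::= '(' λ-term λ-term ')'
            right, k = term(j)
            assert s[k] == ')'
            return ('app', left, right), k + 1
        return ('var', s[i]), i + 1          # var
    tree, n = term(0)
    assert n == len(s)                       # whole string consumed
    return tree

assert parse('(λx.x)') == ('abst', 'x', ('var', 'x'))
```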
====Evaluation via Substitution====
We'll use a substitution syntax to rewrite λ-calculus strings. Heavyweight [http://en.wikipedia.org/wiki/De_Bruijn_notation de Bruijn indexing] is one substitution semantic, as is the ''Barendregt convention''. The following simple semantic is due to Rugaber, possibly through Pierce:


''{N / X}M'' substitutes expression N for free instances of X in expression M:
* If M is a variable ({N / X}V), replace M with X if and only if M is N.
* If M is a λ-app ({N / X}(A B)), {N / X}M = (({N / X}A) ({N / X}B))
* Otherwise, M is a λ-abst ({N / X}(λV. B)).
** If X is V, M is unchanged (X is bound in M).
** If X does not appear in B, M is unchanged (no free X in M)
** If V does not occur free in N, substitute N for X throughout B.
** Otherwise, the substitution in this form is invalid (but it can be made valid...)
With this syntax, we define evaluation rules; applying a rule preserves the corresponding ''equivalence'':
* ''ɑ-conversion'' renames bound variables with any variable y which is not free, preserving ''ɑ-equivalence'':
** λx. m →<sub>ɑ</sub> λy. {y / x}m
* ''β-reduction'' replaces a formal parameter with an actual parameter, preserving ''β-equivalence'':
** (λx. m) n →<sub>β</sub> {n / x}m
** An expression is in ''β-normal form'' if no β-reductions are possible. It is otherwise a ''β-redex''.
* ''η-reduction'' eliminates bindings unused within a scope, preserving ''η-equivalence'':
** (λx. (f x)) →<sub>η</sub> f
** An expression is in ''β-η-normal form'' ("Beta-eta-normal form") if neither β- nor η-reductions are possible
*** If an η-reduction is applicable, the expression is an ''η-redex''.
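The substitution rules above mechanize directly. A Python sketch (my own encoding: <tt>('var', x)</tt>, <tt>('app', a, b)</tt>, <tt>('abst', v, body)</tt>), which raises an error in the "invalid" capture case rather than ɑ-converting:

```python
# Free variables of a term: an abstraction binds (removes) its own variable.
def free_vars(m):
    tag = m[0]
    if tag == 'var':
        return {m[1]}
    if tag == 'app':
        return free_vars(m[1]) | free_vars(m[2])
    return free_vars(m[2]) - {m[1]}

def subst(n, x, m):
    """{n / x}m: replace free occurrences of x in m with n."""
    tag = m[0]
    if tag == 'var':
        return n if m[1] == x else m
    if tag == 'app':
        return ('app', subst(n, x, m[1]), subst(n, x, m[2]))
    v, body = m[1], m[2]
    if v == x or x not in free_vars(body):
        return m                          # x is bound, or absent: unchanged
    if v not in free_vars(n):
        return ('abst', v, subst(n, x, body))
    raise ValueError('capture: ɑ-convert first')

# β-reduction of (λx. x) y is just {y / x}x:
assert subst(('var', 'y'), 'x', ('var', 'x')) == ('var', 'y')
```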
According to the Church-Rosser Theorem, all terminating evaluations will compute the same function. Certain application orders might not terminate for a given expression, though, despite termination via other orders (proofs under arbitrary reduction ordering are said to operate under ''full beta reduction''). This gives rise to the core difference between call-by-value and call-by-name semantics:
* leftmost-innermost application evaluates arguments as soon as possible, and is equivalent to leftmost call-by-value ("applicative order")
** only the outermost redexes are reduced, and the right hand side must be in β-η-normal form before reducing the left
** this is ''strict'' evaluation (arguments are always evaluated, whether used or not)
* leftmost-outermost application substitutes prior to evaluation, and is equivalent to lazy call-by-name sans memoization ("normal order")
Call-by-name terminates if any order does, but might require more total reductions. Call-by-value only β-reduces abstractions, not applications. Call-by-need memoizes the results of evaluated functions, and is operationally equivalent to call-by-name if side-effects are disallowed (it can be more efficient, but also more complicated: reduction is performed on abstract syntax graphs rather than abstract syntax trees). Call-by-name is text substitution augmented by capture avoidance. Wikipedia's [http://en.wikipedia.org/wiki/Evaluation_strategy evaluation strategy] page is pretty thorough.
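Call-by-name can be simulated in a call-by-value language by passing thunks (zero-argument functions). A Python sketch, with names of my own choosing:

```python
# A divergent computation: forcing it never returns.
def loop():
    while True:
        pass

# K-style constant function, call-by-name style: arguments arrive as thunks,
# and only the used argument is ever forced.
const_cbn = lambda x: lambda y: x()
assert const_cbn(lambda: 42)(lambda: loop()) == 42
# Under Python's native call-by-value, const(42, loop()) would never return.
```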
 
====Numerics====
The integers (or any countably infinite set) can be represented via the [http://en.wikipedia.org/wiki/Church_encoding Church encoding] (or [http://en.wikipedia.org/wiki/Mogensen-Scott_encoding Mogensen-Scott], or others):
* <tt>0 ≡ λf. λx. x</tt> (referred to as <tt>c<sub>0</sub></tt>, and ɑ-equivalent to <tt>false</tt> as defined [[Programming_Language_Theory#Logic|below]])
* <tt>1 ≡ λf. λx. f x</tt>
* <tt>2 ≡ λf. λx. f (f x)</tt>
* <tt>3 ≡ λf. λx. f (f (f x))</tt>
* <tt>n ≡ λf. λx. f<sup>n</sup>x</tt> (...up through <tt>c<sub>n</sub></tt>)
Some basic arithmetic operations:
* <tt>plus ≡ λm. λn. λf. λx. m f (n f x)</tt> (from ''f<sup>(m + n)</sup>(x) = f<sup>m</sup>(f<sup>n</sup>(x))'')
* <tt>succ ≡ λn. λf. λx. f (n f x)</tt> (β-equivalent to <tt>(plus 1)</tt> for our <tt>1</tt> from above)
* <tt>mult ≡ λm. λn. λf. n (m f)</tt> (from ''f<sup>(m * n)</sup>(x) = (f<sup>m</sup>)<sup>n</sup>(x)'')
** alternatively, <tt>mult ≡ λm. λn. m (plus n) c<sub>0</sub></tt>
* <tt>pow ≡ λm. λn. n m</tt>
* <tt>pred ≡ λn. λf. λx. n (λg. λh. h (g f)) (λu. x) (λu. u)</tt> (returns 0 when applied to 0)
* <tt>sub ≡ λm. λn. (n pred) m</tt> (using <tt>pred</tt> from above)
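These encodings can be spot-checked in Python; <tt>to_int</tt> is a hypothetical helper of mine (apply the numeral to integer successor and 0) for converting back:

```python
# Church numerals and arithmetic as curried Python lambdas.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mult = lambda m: lambda n: lambda f: n(m(f))

# Convert back to a native int by counting applications of f.
to_int = lambda n: n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
assert to_int(plus(two)(three)) == 5
assert to_int(mult(two)(three)) == 6
```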
 
====Pairs====
* <tt>pair ≡ λf. λs. λb. b f s</tt>
* <tt>first ≡ λp. p true</tt>
* <tt>second ≡ λp. p false</tt>
 
====Logic====
The Church booleans evaluate to one of their two arguments:
* <tt>true ≡ λa. λb. a</tt>
* <tt>false ≡ λa. λb. b</tt> (note that <tt>false</tt> is ɑ-equivalent to <tt>c<sub>0</sub></tt>!)
Using them, we implement basic conditional logic:
* <tt>test ≡ λa. λb. λc. a b c</tt> (using <tt>true</tt>/<tt>false</tt> for a, <tt>test</tt> reduces to <tt>b</tt> on <tt>true</tt>, and <tt>c</tt> on <tt>false</tt>)
* <tt>not ≡ λt. t false true</tt> (β-equivalent to <tt>test t false true</tt>)
* <tt>iszro ≡ λm. m (λx. false) true</tt>
This definition of <tt>test</tt> will evaluate both arguments in a strict application, which is undesirable. Instead, define <tt>if</tt> as a function taking one argument (<tt>true/false</tt>), which evaluates to either the function <tt>first</tt> or <tt>second</tt>.
 
====Syntactic sugar====
* Left-associative application as implicit parentheses
* Use of definitions (allowing identifiers to stand in as λ-expressions)
* Currying: <tt>(λx, y. x + y)</tt> rather than <tt>(λx. (λy. x + y))</tt>
* Numeric literals rather than Church encoding
* Function definition: binding a functional abstraction to an identifier
* Pattern matching (piecewise function definition based on argument structure)
* The [http://en.wikipedia.org/wiki/De_Bruijn_notation de Bruijn notation] (not to be confused with de Bruijn indexing as a substitution scheme) is another, probably superior, syntax. It has an ''abstractor wagon'', which I whole-heartedly approve of.
===Typed Functional Programming===
Two agendas were proposed for adding types to the λ-calculus:
* Church suggested explicit type annotations on all terms (simply-typed λ-calculus (λ<sub>→</sub>), "simple" due to single type constructor →)
* Curry suggested inferred types (now implemented via techniques such as [http://en.wikipedia.org/wiki/Hindley-Milner_type_inference Hindley-Milner])
Either way, we lose some properties of the untyped λ-calculus (there's no type for the Y-combinator, for instance!). Nonetheless, the benefits of [[Programming_Language_Theory#Typing|typing]] (especially in the presence of type inference) lead to most functional programming languages making use of a strict typing system. More complicated systems (''second-'' and higher-''order  λ-calculi'') such as System F (aka Girard-Reynolds polymorphic λ-calculus) and λ<sub>ω</sub> (aka "simply-typed λ-calculus with type operators") exist; they can be explored along Barendregt's [http://en.wikipedia.org/wiki/Lambda_cube λ-cube], should one feel so inclined (unravel the mysteries of "F<sub>&lt;:</sub><sup>ω</sup>"!).
===Logic Programming===
Non-procedural statements of relationships built atop a ''unification'' algorithm and database.
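A toy unification routine in Python (the encodings are my own, not Prolog's: compound terms are tuples, and strings starting with an uppercase letter are variables, following Prolog convention):

```python
# Syntactic unification over tuple-encoded terms.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Chase variable bindings to their current value.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        subst[a] = b
        return subst
    if is_var(b):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):              # unify argument-wise
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None                             # mismatch: not unifiable

# parent(tom, X) unifies with parent(tom, bob), binding X to bob.
assert unify(('parent', 'tom', 'X'), ('parent', 'tom', 'bob')) == {'X': 'bob'}
```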
====Prolog====
Language elements include facts, queries, variables, rules and data structures. Queries are made relative to a base of facts expressed as ''functors'' on ''atoms''. Anonymous variables are supported via '_'.
* All satisfactions of a query are returned, as they're discovered.
* Several queries can be conjoined via ','; unification of such a system might require ''backtracking''.
* Rules are dependencies among facts, expressed as unions of intersections (multiple right-hand-sides, each with arbitrarily many conjunctions). For <tt>a :- b, c</tt>:
** Backward chaining -- seeking truth of <tt>a</tt> via proof of <tt>b, c</tt>
** Forward chaining -- concluding truth of <tt>a</tt> via discovery of <tt>b, c</tt>
* Lists are formed via square brackets, and '|' divides the car and cdr of a list
Two semantics: declarative (first-order predicate logic) and procedural (DFS of the database).
* The procedural semantics involve terms (constants, atoms, functors), ground terms (a term containing no variables), and the resolvent (set of all outstanding subgoals).
** Queries with no variables can trivially be decided as true or false
** The result is otherwise the set of all query-satisfying bindings between variables and ground terms
** Resolvent grows when a RHS is substituted for a subgoal
** Resolvent shrinks when a fact matches a subgoal
** ''Logically-incomplete'': DFS might fail on left-recursive rules. BFS takes up a great deal of space.
* The declarative semantics reduce the resolvent processing to modus ponens:
** Any Prolog program can be represented as a set of Horn clauses...which can be transformed into clausal predicate logic...which can be transformed into predicate logic
** A Horn clause is a set of terms, exactly one of which is positive
*** A clause with no negative terms corresponds to a Prolog fact
*** The single positive term corresponds to the LHS of a Prolog rule
*** The zero or more negative terms correspond to clauses on the RHS:
* The ''cut'' operator '!' controls searching: "No further options for the current rule ought be considered"
** The declarative semantics are unaffected
** Whether or not procedural semantics generate solutions might change, but not the solutions generated
{| border="1"
! Prolog statement
! Logical interpretation
! Horn clause (Implicitly-quantified CNF)
|-
| A :- B || B ⇒ A || ~B ∨ A
|-
| A :- B, C, D || B ∧ C ∧ D ⇒ A ≡ ~(B ∧ C ∧ D) ∨ A || ~B ∨ ~C ∨ ~D ∨ A
|}
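The table's equivalences can be brute-force checked over all truth assignments (a Python sketch):

```python
from itertools import product

# B => A is logically the same function as ~B v A, for every assignment.
implies = lambda p, q: (not p) or q
for a, b, c, d in product([False, True], repeat=4):
    assert implies(b, a) == ((not b) or a)
    assert implies(b and c and d, a) == ((not b) or (not c) or (not d) or a)
```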
===Constraint Programming===
'''FIXME'''
===Nondeterminism===
'''FIXME''' impossible in purely declarative programming!
==Typing==
Weak- vs strong-typing refers to the degree to which the language's typing system can be subverted, and whether this is regularly necessary. Typing insecurities include:
* Untyped pointers
* Subrange narrowing / downcasting
* Untagged variant records ("type punning")
* Procedure parameters
Static- vs dynamic-typing refers to (among other possible definitions) whether type is determined at compile-time or run-time. Among dynamic typing, ''duck typing'' types based on (dynamically-bound) method availability; variants of static typing include ''structural typing'' (relations based on form) and ''nominative typing'' (relations based on names). Nominative typing has (predictable) problems with anonymous types, and also name shadowing. Structural typing has issues with recursive declarations, along with ambiguities regarding constituent ordering and nomenclature; it is used primarily in ML and research, with nominative typing dominating the mainstream. The benefits of static typing include:
* Detection of many types of errors at compile-time
** "Safety = Progress + Preservation" (Harper 1994) suggests well-typed expressions ought not get [[Programming_Language_Theory#Divergent_Combinators|stuck]], and that reductions of well-typed expressions are themselves well-typed.
* Improved performance, reduced resource requirements
* Secure data abstractions can be built atop the safety of strong typing systems
Properties of a typing system include:
* Support for ''polymorphism'' (dispatch based on types of actual arguments)
** Parametric polymorphism: Generation of a version for necessary types (Ada generics, C++ templates, SML type parameters)
* Support for ''subtyping'', refining T as T<sub>0</sub> (defining an ordering relation on types: T ''subsumes'' T<sub>0</sub>, x ∈ T<sub>0</sub> ⇒ x ∈ T)
** '''NB''': ''Subclassing'' (inheritance) is independent of subtyping!
** Subsumption properties can't be arbitrarily enforced (decided) when subclasses override the implementation
** ''Soundness'' refers to data structuring operation support for subtyping, such as annotation of type alterations:
*** ''Covariant'' modifications retain the original ordering
*** ''Contravariant'' modifications invert the original ordering
*** ''Invariant'' modifications dissolve the subsumption relation
* Support for ''minimal'' and ''maximal'' types (also known as Bottom and Top)
** Bottom (no values) + Top (all values) + subtyping ⇒ ''bounded type lattice''
===Conversion===
Treatment of a value of a given type as if it had another type, possibly involving modification of the underlying representation, or storage/alignment requirements thereof. When done explicitly, conversion is referred to as ''casting''; when done implicitly, it is called ''coercion''.
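In Python terms (illustrative only):

```python
# Explicit conversion (casting): representation changes, here by truncation.
assert int(3.7) == 3
# Implicit conversion (coercion): the int operand is widened to float.
assert 1 + 2.5 == 3.5
assert type(1 + 2.5) is float
```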
==Objects==
'''FIXME''' object calculus woo-hah...
==Concurrency==
*'''FIXME''' π-calculus, CCS, CSP, Petri nets, dataflow variables...
* Shared state
* Message passing
* Data parallel processing
* FIXME expand each
==Programming Language Design==
===Syntax===
Pierce describes three means for describing language syntax, using ''terms'' to refer to language definitions involving at least one terminal:
* Induction on successive terms (of which EBNF is a compact notation)
* Inference from axioms (terminals) and successive inference terms
* Concrete definition (generative grammar) placing successive sets of alternate terms in bijection with the ordinals
===Semantics===
Pierce describes three techniques of formalizing semantics:
* ''Operational'' semantics define one or more formal abstract machines, and attempt to exhaustively describe the language constructs in these machines' terms. Emphasis is on proof of various abstract machines' equivalence (and correspondence to actual implementations), and thus proof of an implementation's correctness. When finite state machines are used (this is only possible for simple languages), this is Plotkin's ''small-step'' or ''structural'' operational semantics; more powerful languages might make use of Kahn's ''big-step'' or ''natural'' semantics.
* ''Denotational'' semantics map between different ''semantic domains'' (the investigation of which constitutes [http://en.wikipedia.org/wiki/Domain_theory domain theory]) using ''interpretation functions''. Domains are interesting if results have been proven for their properties, such as termination and safety results. Denotational semantics can thus lead to proof schemas for a language, but meaningful interpretation functions seem to largely require domain-driven language design.
* While operational and denotational semantics define the behaviors of programs, and then prove laws about them, ''axiomatic'' semantics define the behavior of the language in terms of these laws themselves. As Pierce says: "The meaning of a term is just what can be proved about it."
Van Roy and Haridi also mention ''logical'' semantics, which define a statement as a model of logical theory. This is claimed to work well for declarative and logical programming models.
==Sources==
* Professor Spencer Rugaber's CS6390 "Theory and Design of Programming Languages". Georgia Tech, Fall 2009.
* Hindley and Seldin. [http://www.amazon.com/Lambda-Calculus-Combinators-Introduction-Roger-Hindley/dp/0521898854 ''Lambda-Calculus and Combinators: An Introduction''], Second Edition. Cambridge University Press, 2008.
* Goldberg. [http://www.brics.dk/RS/05/1/BRICS-RS-05-1.pdf "On the Recursive Enumerability of Fixed-Point Combinators"]. BRICS Report RS-05-1, January 2005.
* Van Roy and Haridi. [http://www.amazon.com/Concepts-Techniques-Models-Computer-Programming/dp/0262220695 ''Concepts, Techniques and Models of Computer Programming'']. MIT Press, 2004.
* Pierce. [http://www.amazon.com/Types-Programming-Languages-Benjamin-Pierce/dp/0262162091 ''Types and Programming Languages'']. MIT Press, 2002.
* Culler and Singh. [http://www.eecs.berkeley.edu/~culler/book.alpha/ ''Parallel Computer Architecture: A Hardware/Software Approach'']. Morgan Kaufman, 1999.
* Felleisen. [http://www.ps.uni-sb.de/courses/sem-prog97/material/YYWorks.ps "A Lecture on the ''Why'' of ''Y''"]. 1997.
* Winskel. [http://www.amazon.com/Formal-Semantics-Programming-Languages-Winskel/dp/0262731037 ''Formal Semantics of Programming Languages'']. MIT Press, 1993.
* Hennessy. [http://www.cogs.susx.ac.uk/users/matthewh/semnotes.ps.gz ''Semantics of Programming Languages'']. Wiley, 1990.
* Barendregt. [http://www.amazon.com/Lambda-Calculus-103-Second-Foundations/dp/0444875085 ''The Lambda Calculus: Its Syntax and Semantics''], Second Edition. North Holland, 1985.
* Böhm and Berarducci. "Automatic Synthesis of Typed Lambda-Programs on Term Algebras". ''Theoretical Computer Science'' 39, August 1985.
* Church. [http://www.amazon.com/Calculi-Lambda-Conversion-Mathematics-Studies/dp/0691083940 ''The Calculi of Lambda Conversion'']. Princeton University Press, 1941.

Latest revision as of 21:09, 5 August 2011

The Church-Turing Thesis equates a vaguely-defined set of "computable" functions with the partial recursive functions. Several systems have been proven equivalent to the partial recursives in power, such as Turing machines and the λ-calculus (practical programming languages generally provide further syntaxes and semantics, but a Real Programmer is perfectly happy with combinatory logic and a beer). Peter J. Landin's 1965 ACM report, "A correspondence between ALGOL 60 and Church's Lambda-notation", is blamed by most textbooks for jump-starting the study of programming language theory, ensuring computer scientists fifty years later would still be exploring topoi, sheaves, Curry-Howard correspondences and fixed-point combinators.

There may, indeed, be other applications of the system than its use as a logic. -- Alonzo Church, 1932

Functions and Declarative Programming

Expressions in functional programs compose functions (as opposed to "values"). Backus proposed three tiers of complexity in his Turing Award lecture:

  • Simply functional language (fp): No state, limited names, finitely many functional forms, simple substitution semantics, algebraic laws
  • Formal functional system (ffp): Extensible functional forms, functions represented by objects, translation of object representation to applicable form, formal semantics
  • Applicative state transition system (ast): ffp plus mutable state and coarse-grained operations thereupon

Higher-order functions map one or more functions to a function. First-order support of functions provides composition of functions at runtime.

Combinatory Logic

Built from only variables, a set of primitive combinators (closed λ-expressions with their own reduction rules), and function application, combinatory logic provides no abstraction mechanism. A Turing-complete kernel of primitive combinators is complete. The SKI calculus requires only the K and S combinators; Chris Barker's Iota and Jot require only one each.

SKI Calculus

  • I ≡ λx. x (identity)
  • K ≡ λx, y. x (generation of constant functions)
  • S ≡ λx, y, z. (x z (y z)) (generalized function application)

Grammar

C-term ::= C-app | C-prim | var
C-app ::= C-term C-term

Divergent Combinators

The omega combinator is divergent. It can be reduced, but has no normal form:

  • omega ≡ (λx. x x) (λx. x x)

A combinator which is in normal form, but not reduced to a value, is said to be stuck. One of the major objectives of static typing systems is to detect possible "stuck" reductions.

Fixed-Point Combinators

For a function F, its set of fixed points are those inputs which map to themselves. Provided algebraic functions, for instance, fixed points of a function F(x) would be found by setting F(x) = x and solving for the roots. Provided an ordered, finite domain, looping through the domain will find all solutions. For an infinite ordered domain, looping is a semi-decision: if there is a least fixed point, it will be found, but the loop never terminates otherwise. The fixed-point combinators compute fixed points of their functional inputs. Curry's Y-combinator was the first (it always computes the least fixed point):

  • Y ≡ λf. (λx. f (x x)) (λx. f (x x)) (untyped λ-calculus)
  • Y ≡ S (K (S I I)) (S (S (K S) K) (K (S I I))) (SKI calculus)

Divergence-free evaluation of the Y-combinator requires call-by-name semantics. Call-by-value semantics can make use of the Θv (Turing) or Z-combinators:

  • Θv ≡ (λx. λy. (y (λz. x x y z))) (λx. λy. (y (λz. x x y z)))
  • Z ≡ λf. (λx. f (λy. x x y)) (λx. f (λy. x x y)) (via η-expansion on Y)

The infinitely many fixed-point combinators of untyped λ-calculus are recursively enumerable.

Untyped λ-calculus

Two operators (function definition and application) upon one operand type (λ-expression).

  • Function definition: boundparam. body)
  • Function application: function(actualparam)

The body is made up of free and bound variables. Those not present in the λ's list of bound variables are free. A λ-expression with no free variables is closed (closed expressions are equivalent in power to combinatory logic). A λ-expression enclosed in scopes of its free variables is meaningful. Distinct free variables specify a λ-expression's distinct substitution classes.

Grammar

λ-term ::= λ-app | λ-abst | var
λ-app ::= λ-term λ-term
λ-abst ::= 'λ'var'.' λ-term

This abstract grammar can be augmented with associativity rules and grouping syntax (parentheses) to provide a concrete grammar. If verbosity is no issue, no associativity rules need be specified for the following grammar:

λ-term ::= λ-app | λ-abst | var
λ-app ::= '(' λ-term λ-term ')'
λ-abst ::= '(' 'λ'var'.' λ-term ')'

Evaluation via Substitution

We'll use a substitution syntax to rewrite λ-calculus strings. Heavyweight de Bruijn indexing is one substitution semantic, as is the Barendregt convention. The following simple semantic is due Rugaber, possibly through Pierce:

{N / X}M substitutes expression N for free instances of X in expression M:

  • If M is a variable ({N / X}V), replace M with X if and only if M is N.
  • If M is a λ-app ({N / X}(A B)), {N / X}M = (({N / X}A) ({N / X}B))
  • Otherwise, M is a λ-abst ({N / X}(λV. B)).
    • If X is V, M is unchanged (X is bound in M).
    • If X does not appear in B, M is unchanged (no free X in M)
    • If V occurs free in N, substitute N for X throughout B.
    • Otherwise, the substitution in this form is invalid (but it can be made valid...)

With this syntax, we define evaluation rules. Application of the a rule preserves its equivalence.:

  • ɑ-conversion renames bound variables with any variable y which is not free, preserving ɑ-equivalence:
    • λx. m →ɑ λy. {y / x}m
  • β-reduction replaces a formal parameter with an actual parameter, preserving β-equivalence:
    • (λx. m) n →β {n / x}m
    • An expression is in β-normal form if no β-reductions are possible. It is otherwise a β-redex.
  • η-reduction eliminates bindings unused within a scope, preserving η-equivalence:
    • (λx. (f x)) →η f
    • An expression is in β-η-normal form ("Beta-eta-normal form") if neither β- nor η-reductions are possible
      • If a η-reduction is applicable, the expression is a η-redex.

According to the Church-Rosser Theorem, all terminating evaluations will compute the same function. Certain application orders might not terminate for a given expression, though, despite termination via other orders (proofs under arbitrary reduction ordering are said to operate under full beta reduction). This gives rise to the core difference between call-by-value and call-by-name semantics:

  • leftmost-innermost application evaluates arguments as soon as possible, and is equivalent to leftmost call-by-value ("applicative order")
    • only the outermost redexes are reduced, and the right hand side must be in β-η-normal form before reducing the left
    • this is strict evaluation (arguments are always evaluated, whether used or not)
  • leftmost-outermost application substitutes prior to evaluation, and is equivalent to lazy call-by-name sans memoization ("normal order")

Call-by-name terminates if any order does, but might require more total reductions. Call-by-value only β-reduces abstractions, not applications. Call-by-need memoizes the results of evaluated functions, and is operationally equivalent to call-by-name if side-effects are disallowed (it can be more efficient, but also more complicated: reduction is performed on abstract syntax graphs rather than abstract syntax trees). Call-by-name is text substitution augmented by capture avoidance. Wikipedia's evaluation strategy page is pretty thorough.

Numerics

The integers (or any countably infinite set) can be represented via the Church encoding (or Mogensen-Scott, or others):

  • 0 ≡ λf. λx. x (referred to as c0, and ɑ-equivalent to false as defined below)
  • 1 ≡ λf. λx. f x
  • 2 ≡ λf. λx. f (f x)
  • 3 ≡ λf. λx. f (f (f x))
  • n ≡ λf. λx. f^n x (f applied n times; these numerals are written c0 up through cn)

Some basic arithmetic operations:

  • plus ≡ λm. λn. λf. λx. m f (n f x) (from f^(m+n)(x) = f^m(f^n(x)))
  • succ ≡ λn. λf. λx. f (n f x) (β-equivalent to (plus 1) for our 1 from above)
  • mult ≡ λm. λn. λf. n (m f) (from f^(m·n) = (f^m)^n)
    • alternatively, mult ≡ λm. λn. m (plus n) c0
  • pow ≡ λm. λn. n m
  • pred ≡ λn. λf. λx. n (λg. λh. h (g f)) (λu. x) (λu. u) (returns 0 when applied to 0)
  • sub ≡ λm. λn. (n pred) m (using pred from above)
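These encodings transcribe directly into Python lambdas; `to_int` is a hypothetical helper (not part of the encoding) that interprets a numeral by counting applications:

```python
# Church numerals and arithmetic, following the definitions above.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mult = lambda m: lambda n: lambda f: n(m(f))
pow_ = lambda m: lambda n: n(m)        # pow ≡ λm. λn. n m
pred = lambda n: lambda f: lambda x: n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)
sub  = lambda m: lambda n: n(pred)(m)  # sub ≡ λm. λn. (n pred) m

def to_int(c):
    # Interpret a Church numeral by counting how many times f is applied.
    return c(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
```

For example, `to_int(plus(two)(three))` yields 5, and `to_int(sub(zero)(two))` yields 0, since pred (and hence sub) bottoms out at zero.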

Pairs

  • pair ≡ λf. λs. λb. b f s
  • first ≡ λp. p true
  • second ≡ λp. p false

Logic

The Church booleans evaluate to one of their two arguments:

  • true ≡ λa. λb. a
  • false ≡ λa. λb. b (note that false is ɑ-equivalent to c0!)

Using them, we implement basic conditional logic:

  • test ≡ λa. λb. λc. a b c (using true/false for a, test reduces to b on true, and c on false)
  • not ≡ λt. t false true (β-equivalent to λa. test a false true)
  • iszro ≡ λm. m (λx. false) true

This definition of test will evaluate both branch arguments under strict application, which is undesirable. Instead, define if as a function taking one argument (true/false), which evaluates to one of two delayed branches; the selected branch is then forced by applying it.

Syntactic sugar

  • Left-associative application as implicit parentheses
  • Use of definitions (allowing identifiers to stand in as λ-expressions)
  • Multiple arguments via currying: writing (λx, y. x + y) for (λx. (λy. x + y))
  • Numeric literals rather than Church encoding
  • Function definition: binding a functional abstraction to an identifier
  • Pattern matching (piecewise function definition based on argument structure)
  • The de Bruijn notation (not to be confused with de Bruijn indexing as a substitution scheme) is another, probably superior, syntax. It has an abstractor wagon, which I whole-heartedly approve of.

Typed Functional Programming

Two agendas were proposed for adding types to the λ-calculus:

  • Church suggested explicit type annotations on all terms (the simply-typed λ-calculus λ→, "simple" due to its single type constructor →)
  • Curry suggested inferred types (now implemented via techniques such as Hindley-Milner)

Either way, we lose some properties of the untyped λ-calculus (there's no type for the Y-combinator, for instance!). Nonetheless, the benefits of typing (especially in the presence of type inference) have led most functional programming languages to adopt strong, static typing systems. More complicated systems (second- and higher-order λ-calculi) such as System F (aka the Girard-Reynolds polymorphic λ-calculus) and λω (aka "simply-typed λ-calculus with type operators") exist; they can be explored along Barendregt's λ-cube, should one feel so inclined (unravel the mysteries of "F<:ω"!).

Logic Programming

Non-procedural statements of relationships built atop a unification algorithm and database.
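The unification algorithm at the heart of such systems can be sketched as Robinson-style first-order unification. The term representation here is an assumption for illustration: variables are capitalized strings (Prolog-style), compound terms are `(functor, arg, ...)` tuples. Like most real Prologs, this sketch omits the occurs check.

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Chase variable bindings to their current representative.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and len(a) == len(b) and a[0] == b[0]):
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # functor or arity clash: not unifiable
```

For instance, unifying `f(X, b)` with `f(a, Y)` succeeds with the bindings {X ↦ a, Y ↦ b}, while `f(X)` and `g(a)` clash on the functor.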

Prolog

Language elements include facts, queries, variables, rules and data structures. Queries are made relative to a base of facts expressed as functors on atoms. Anonymous variables are supported via '_'.

  • All satisfactions of a query are returned, as they're discovered.
  • Several queries can be conjoined via ','; unification of such a system might require backtracking.
  • Rules are dependencies among facts, expressed as unions of intersections (multiple right-hand-sides, each with arbitrarily many conjunctions). For a :- b, c:
    • Backward chaining -- seeking truth of a via proof of b, c
    • Forward chaining -- concluding truth of a via discovery of b, c
  • Lists are formed via square brackets, and '|' divides the car and cdr of a list

Two semantics: declarative (first-order predicate logic) and procedural (DFS of the database).

  • The procedural semantics involve terms (constants, atoms, functors), ground terms (a term containing no variables), and the resolvent (set of all outstanding subgoals).
    • Queries with no variables can trivially be decided as true or false
    • The result is otherwise the set of all query-satisfying bindings between variables and ground terms
    • Resolvent grows when a RHS is substituted for a subgoal
    • Resolvent shrinks when a fact matches a subgoal
    • Logically-incomplete: DFS might fail on left-recursive rules. BFS takes up a great deal of space.
  • The declarative semantics reduce the resolvent processing to modus ponens:
    • Any Prolog program can be represented as a set of Horn clauses...which can be transformed into clausal predicate logic...which can be transformed into predicate logic
    • A Horn clause is a set of terms, exactly one of which is positive
      • A clause with no negative terms corresponds to a Prolog fact
      • The single positive term corresponds to the LHS of a Prolog rule
      • The zero or more negative terms correspond to clauses on the RHS
  • The cut operator '!' controls searching: "No further options for the current rule ought be considered"
    • The declarative semantics are unaffected
    • Whether or not the procedural semantics generate solutions might change, but not the solutions themselves
  Prolog statement   Logical interpretation                Horn clause (implicitly-quantified CNF)
  A :- B             B ⇒ A                                 ~B ∨ A
  A :- B, C, D       B ∧ C ∧ D ⇒ A ≡ ~(B ∧ C ∧ D) ∨ A     ~B ∨ ~C ∨ ~D ∨ A
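Backward chaining over such clauses can be sketched as a toy prover in Python. This is restricted to ground Horn clauses (no variables, hence no unification), so it only illustrates the resolvent shrinking as subgoals are discharged; the rule representation is an assumption for illustration.

```python
# Rules are (head, [body...]) pairs over ground atoms.
rules = [
    ("a", ["b", "c"]),  # a :- b, c.
    ("b", []),          # b.  (a fact: a clause with no negative terms)
    ("c", []),          # c.
]

def prove(goal, rules):
    # Seek truth of the goal via proof of some clause's entire body (DFS).
    return any(head == goal and all(prove(sub, rules) for sub in body)
               for head, body in rules)
```

Here `prove("a", rules)` succeeds by discharging b and c, while an unknown atom like "d" fails.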

Constraint Programming

FIXME

Nondeterminism

FIXME impossible in purely declarative programming!

Typing

Weak- vs strong-typing refers to the degree to which the language's typing system can be subverted, and whether this is regularly necessary. Typing insecurities include:

  • Untyped pointers
  • Subrange narrowing / downcasting
  • Untagged variant records ("type punning")
  • Procedure parameters

Static- vs dynamic-typing refers to (among other possible definitions) whether type is determined at compile-time or run-time. Duck typing, a form of dynamic typing, types values based on (dynamically-bound) method availability; variants of static typing include structural typing (relations based on form) and nominative typing (relations based on names). Nominative typing has (predictable) problems with anonymous types, as well as name shadowing. Structural typing has issues with recursive declarations, along with ambiguities regarding constituent ordering and nomenclature; it is used primarily in ML and research languages, with nominative typing dominating the mainstream. The benefits of static typing include:

  • Detection of many types of errors at compile-time
    • "Safety = Progress + Preservation" (Harper 1994) suggests well-typed expressions ought not get stuck, and that reductions of well-typed expressions are themselves well-typed.
  • Improved performance, reduced resource requirements
  • Secure data abstractions can be built atop the safety of strong typing systems

Properties of a typing system include:

  • Support for polymorphism (dispatch based on types of actual arguments)
    • Parametric polymorphism: a single definition instantiated at whichever types are necessary (Ada generics, C++ templates, SML type parameters)
  • Support for subtyping, refining T as T0 (defining an ordering relation on types: T subsumes T0, x ∈ T0 ⇒ x ∈ T)
    • NB: Subclassing (inheritance) is independent of subtyping!
    • Subsumption properties can't be arbitrarily enforced (decided) when subclasses override the implementation
      • Soundness constrains how data-structuring operations (type constructors) interact with subtyping; type alterations are annotated by their effect on the ordering:
      • Covariant modifications retain the original ordering
      • Contravariant modifications invert the original ordering
      • Invariant modifications dissolve the subsumption relation
  • Support for minimal and maximal types (also known as Bottom and Top)
    • Bottom (no values) + Top (all values) + subtyping ⇒ bounded type lattice
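Variance surfaces concretely in Python's static typing annotations (the Animal/Dog hierarchy here is illustrative): `Sequence` is covariant in its element type, `Callable` is contravariant in its argument types, and mutable `list` is invariant.

```python
from typing import Callable, Sequence

class Animal: ...
class Dog(Animal): ...

def count(xs: Sequence[Animal]) -> int:
    # Covariance: a Sequence[Dog] is acceptable wherever a
    # Sequence[Animal] is expected, preserving the ordering Dog <: Animal.
    return len(xs)

def apply_to_dog(f: Callable[[Dog], object]) -> object:
    # Contravariance: a Callable[[Animal], object] also satisfies this
    # parameter, inverting the ordering on argument types.
    return f(Dog())
```

A `list[Dog]` would *not* be safe where `list[Animal]` is expected (invariance): the callee could append a Cat, breaking the caller's element type.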

Conversion

Treatment of a value of a given type as if it had another type, possibly involving modification of the underlying representation, or storage/alignment requirements thereof. When done explicitly, conversion is referred to as casting; when done implicitly, it is called coercion.
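The distinction can be sketched in Python terms (a loose analogy, since Python is dynamically typed and its "casts" construct new values):

```python
implicit = 1 + 2.5    # coercion: the int operand is silently widened to float
explicit = int("42")  # casting: an explicit conversion that changes representation
reinterp = ord("A")   # treating one type's value under another type's interpretation
```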

Objects

FIXME object calculus woo-hah...

Concurrency

  • FIXME π-calculus, CCS, CSP, petri nets, dataflow variables...
  • Shared state
  • Message passing
  • Data parallel processing
  • FIXME expand each

Programming Language Design

Syntax

Pierce describes three means for describing language syntax, using terms to refer to language definitions involving at least one terminal:

  • Induction on successive terms (of which EBNF is a compact notation)
  • Inference from axioms (terminals) and successive inference terms
  • Concrete definition (generative grammar) indexing successive sets of alternate terms by the natural numbers

Semantics

Pierce describes three techniques of formalizing semantics:

  • Operational semantics define one or more formal abstract machines, and attempt to exhaustively describe the language constructs in these machines' terms. Emphasis is on proof of various abstract machines' equivalence (and correspondence to actual implementations), and thus proof of an implementation's correctness. Semantics given as individual reduction steps constitute Plotkin's small-step or structural operational semantics; Kahn's big-step or natural semantics instead relate terms directly to their final results.
  • Denotational semantics map between different semantic domains (the investigation of which constitutes domain theory) using interpretation functions. Domains are interesting if results have been proven for their properties, such as termination and safety results. Denotational semantics can thus lead to proof schemas for a language, but meaningful interpretation functions seem to largely require domain-driven language design.
  • While operational and denotational semantics define the behaviors of programs, and then prove laws about them, axiomatic semantics define the behavior of the language in terms of these laws themselves. As Pierce says: "The meaning of a term is just what can be proved about it."

Van Roy and Haridi also mention logical semantics, which define a statement as a model of a logical theory. This is claimed to work well for declarative and logic programming models.

Sources