Table of Contents for Programming Languages: a survey

Also known as alpha-conversion.

- http://en.m.wikipedia.org/wiki/Name_resolution#Alpha_renaming_to_make_name_resolution_trivial
- https://en.wikipedia.org/wiki/Lambda_calculus#Alpha_equivalence

https://en.wikipedia.org/wiki/Lambda_calculus#Capture-avoiding_substitutions

A substitution would change the meaning of an expression if:

- the replacement contains a free variable which is bound in the enclosing expression. For instance, if in (\lambda x.y), you replaced y with x, yielding \lambda x.x, you'd have changed the meaning of the function, because x was already 'bound' in the enclosing expression.

The condition that the replacement must not contain a free variable which is bound in the enclosing expression is sometimes called the 'freshness condition'.

If the desired replacement fails the freshness condition, you can apply alpha-renaming to generate an equivalent replacement with different variable names that satisfies the freshness condition.

Beta-reduction defines the meaning of function application; that is, it says that: ( \lambda x . t) s reduces to the term t [ x := s]

(the latter is read 't with s substituted for x')

- https://en.wikipedia.org/wiki/Lambda_calculus#Beta_reduction
- https://en.wikipedia.org/wiki/Lambda_calculus#.CE.B2-reduction
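As a rough illustration (names below are mine, not from the text), applying a Python lambda to an argument plays the role of the substitution t[x := s]:

```python
# Beta reduction in miniature: the call (lambda x. t) s substitutes s for x in t.
# Here t is "x + 1" and s is 2, so the application reduces to 2 + 1.
increment = lambda x: x + 1
result = increment(2)  # performs the substitution [x := 2], yielding 3
```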

(Eta is written η)

"Eta-conversion converts between λx.(f x) and f whenever x does not appear free in f." -- https://en.wikipedia.org/wiki/Lambda_calculus#.CE.B7-conversion
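A quick Python sketch of the same idea (the sample function f is my own choice): the eta-expanded wrapper agrees with f on every input, since x does not appear free in f.

```python
# Eta-conversion: lambda x. (f x) and f are interchangeable when x is not free in f.
f = lambda n: n * 2          # an arbitrary sample function
eta_expanded = lambda x: f(x)

# both compute the same result for any argument
same = all(f(n) == eta_expanded(n) for n in range(10))
```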

A rewriting (reduction) system is confluent if, whenever a term A can be reduced to multiple distinct terms B and C, there exists a term D such that both B and C can be reduced to D.

Links:

A rewriting (reduction) system is weakly normalizing if "for every term, there exists at least one particular sequence of rewrites that eventually yields a normal form, i.e., an irreducible term". -- [1]

A rewriting (reduction) system is strongly normalizing if "every sequence of rewrites eventually terminates to an irreducible term also called a normal form" -- [2]

Beta reduction is confluent, but it is not weakly normalizing (and hence not strongly normalizing, since strong normalization implies weak normalization); for example, (\lambda x . x x)(\lambda x . x x) beta-reduces only to itself, so it never reaches a normal form.

Also written "combinator calculi", "combinatory logic".

"Combinatory logic is a notation to eliminate the need for quantified variables in mathematical logic...combinatory logic can be viewed as a variant of the lambda calculus" [3].

Like lambda calculus and Turing machines, combinatory logic is a (set of) notations allowing you to (potentially) write a definition for any given computable function. Like lambda calculus, the 'values' in combinatory logic are themselves functions. So if combinatory logic is a notation for defining functions, consider the function f(x,y) = y. In lambda calculus you would write the definition of this function as "lambda x . lambda y . y". But how would you write it in combinatory calculus, if combinatory calculus doesn't have variables? How would you say "take the second argument of this function and return it", if you can't refer to "the second argument"?

The answer is that a combinatory calculus pre-defines a primitive/built-in/basis set of known functions. These are "higher-order" functions, meaning that their inputs and outputs can themselves be functions. Another word for these higher-order functions is "combinator". You start with the primitive basis set of combinators, and then apply them to each other to create new combinators. Assuming the basis set is sufficiently complete, for any given computable function, there is some way of combining the basis functions to yield the desired function. For example, one combinatory calculus, SKI combinator calculus, provides you with a 'basis set' of three functions, S, K, and I. These can be applied to each other to express f(x,y) = y; in particular, this function f can be generated as KI, since KIxy reduces first to Iy and then to y.

You can imagine that each "combinator" just creates its output expression by accepting a series of input terms and 'swapping them around' or in other words 'recombining them', which may be one reason they are called "combinators". Another reason may be that, since there are no variables, the only way you can specify new functions is by combining these "combinators".

Combinator calculi tend to be written with left association and currying. For example, if 'i' is the combinator taking one argument whose definition is 'i(x) = xSK', where x is a variable input to i and S and K are other combinators, then 'iSK' is short for '((i)S)K' and evaluates to '(SSK)K' which is written SSKK.

Sxyz = xz(yz)
Kxy = x
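The S and K rules can be sketched as curried Python lambdas (one possible encoding; the nested-lambda representation is my choice):

```python
# The SKI basis as curried Python functions.
S = lambda x: lambda y: lambda z: x(z)(y(z))   # Sxyz = xz(yz)
K = lambda x: lambda y: x                      # Kxy  = x
I = lambda x: x                                # Ix   = x

# f(x, y) = y, "return the second argument", is K applied to I:
# K I x y -> I y -> y
second = K(I)
```

For instance, `second(1)(2)` returns `2`, and `S(K)(K)` behaves like the identity (SKKx reduces to x).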

See https://en.m.wikipedia.org/wiki/SKI_combinator_calculus#Connection_to_intuitionistic_logic

Variant of the SK combinator calculus, with the addition of the I combinator (identity). 'I' is not strictly necessary as a primitive, since it can be derived from S and K (for example, as SKK: SKKx reduces to Kx(Kx) and then to x).

Instead of being based on S and K, the 'i' ('iota') operator can be the base, and both S and K can be derived from i:

ix = xSK

to derive I:

ii --> iSK = (iS)K --> (SSK)K = SSKK

SSKK is functionally equivalent to I:

SSKKx --> SK(KK)x --> Kx(KKx) --> x
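This can be spot-checked computationally; assuming the curried-lambda encoding of S and K, SSKK applied to any value returns that value:

```python
# With S and K as curried lambdas, SSKK behaves like the identity combinator I.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x

SSKK = S(S)(K)(K)   # left-associated application: ((S S) K) K
```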

to derive K:

i(i(I)) --> i(ISK) --> (ISK)SK = ISKSK --> SKSK --> KK(SK) --> K

to derive S:

iK --> KSK --> S
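The three derivations above can likewise be checked with a curried-lambda encoding (a sketch; the variable names are mine):

```python
# The iota combinator as a curried Python lambda: i x = x S K.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
i = lambda x: x(S)(K)

# The derivations from the text:
I_derived = i(i)                # ii, which should behave as the identity
K_derived = i(i(lambda x: x))   # i(i(I))
S_derived = i(K)                # iK -> KSK -> S
```

Each derived combinator can then be applied to sample arguments to confirm it behaves like I, K, and S respectively.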

Another alternate basis for combinatory logic:

- Bxyz = x(yz) (function composition)
- Cxyz = xzy (swap)
- Kxy = x (projection; the same as the K combinator in SK)
- Wxy = xyy (dup)

Links: https://en.m.wikipedia.org/wiki/B,_C,_K,_W_system
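A sketch of the B, C, K, W basis as curried Python lambdas (one possible encoding, names matching the combinators):

```python
# The B, C, K, W basis as curried Python functions.
B = lambda x: lambda y: lambda z: x(y(z))   # Bxyz = x(yz)  (composition)
C = lambda x: lambda y: lambda z: x(z)(y)   # Cxyz = xzy    (swap)
K = lambda x: lambda y: x                   # Kxy  = x      (projection)
W = lambda x: lambda y: x(y)(y)             # Wxy  = xyy    (dup)
```

For example, with a curried subtraction `sub = lambda a: lambda b: a - b`, `C(sub)(1)(5)` computes `sub(5)(1)`, i.e. 4, and `W(add)(3)` duplicates the argument: `add(3)(3)`.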

https://en.wikipedia.org/wiki/Curry's_paradox

- https://en.m.wikipedia.org/wiki/Predicate_functor_logic
- https://en.m.wikipedia.org/wiki/Cylindric_algebra
- https://en.m.wikipedia.org/wiki/Polyadic_algebra
- https://en.m.wikipedia.org/wiki/Relation_algebra

- http://www.cis.upenn.edu/~bcpierce/sf/current/index.html
- history of some formal stuff that provides connections between various proof assistants and languages, and their formal underpinnings: http://groupoid.space/intro.htm

---

- alpha-equivalence: the same up to variable renaming [4]
- beta reduction: substituting into functions
- eta reduction

eta reduction examples:

lambda f: (lambda a: g(lambda x: (lambda y: (f(x))(y)))(a))
lambda f: (lambda a: g(lambda x: f(x))(a))   (eta reduction: lambda y: (f(x))(y) --> f(x))
lambda f: (lambda a: (g(f))(a))              (eta reduction: lambda x: f(x) --> f)
lambda f: g(f)                               (eta reduction)
g                                            (eta reduction)

or in ML-ish/Haskell-ish notation:

\f a -> g (\x y -> f x y) a
\f a -> g f a
\f -> g f
g
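These chains can be spot-checked in Python with sample choices of f and g (my own; g is assumed curried, taking a function and then an argument):

```python
# If eta reduction collapses the expanded form to g, the two must agree pointwise.
g = lambda h: lambda a: h(a)(a + 1)   # sample curried g
f = lambda x: lambda y: x * 10 + y    # sample curried f

expanded = lambda fn: lambda a: g(lambda x: lambda y: fn(x)(y))(a)
same = expanded(f)(4) == g(f)(4)      # eta-reduced form of `expanded` is g itself
```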

---

" Coinductive Data Type Recursive (and mutually recursive) composite types for values in an eagerly evaluated language include two basic classes: InductiveDataType? and CoinductiveDataType?. These are closely related but can, in practice, be distinguished based on their constructors. Common examples of each are, respectively, the List (inductive), and the Stream (coinductive).

Here are example types in a notation familiar to me:

type List A = cons:(A,List A)->* | nil:()->*
type Stream A = first:*->(A) & rest:*->(Stream A)

The use of '*' above represents the 'instance' of the value.

For the List, the '*' is on the right hand side of each production arrow because you may construct a list by either taking the value 'nil' or applying 'cons' around another list. One thing to note is that, by means of these constructors, one can inductively prove that 'cons:(a,l)' is a bigger list than 'l'. This further implies that lists must be finite. This is true of inductive types in general - by construction, they are always finite in an eagerly evaluated language.

For the Stream, the '*' is on the left hand side of each production arrow because the 'type' is defined in terms of what you can obtain from the instance (apply 'first' to obtain an 'A', apply 'rest' to obtain a 'Stream A') without regard to how it is constructed. Because of this, the Stream achieved from 'rest' could actually possess a wide variety of possible representations, so long as it was type-compatible with 'Stream A'. " -- http://c2.com/cgi/wiki?CoinductiveDataType
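The quoted List/Stream contrast can be sketched in Python (an informal encoding of my own, not from the c2 page): a list is built up finitely from nil and cons, while a stream is defined by what you can observe (first, rest), with rest delayed so the stream may be infinite.

```python
# Inductive side: every list is either nil or cons of a value onto a smaller list,
# so every list built this way is finite.
nil = None
def cons(head, tail):
    return (head, tail)

# Coinductive side: a stream is whatever supports the observations 'first' and
# 'rest'; 'rest' is a thunk, so only the observed prefix is ever computed.
def stream_from(n):
    """The (conceptually infinite) stream of integers n, n+1, ..."""
    return {"first": n, "rest": lambda: stream_from(n + 1)}

def take(stream, k):
    """Observe the first k elements of a stream, yielding a finite Python list."""
    out = []
    for _ in range(k):
        out.append(stream["first"])
        stream = stream["rest"]()
    return out
```

For example, `take(stream_from(0), 3)` observes `[0, 1, 2]` without ever materializing the rest of the stream.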

---