Table of Contents for Programming Languages: a survey

- always halting languages: epigram, charity (i think agda also?)
- coq
- not decidable (in fact harder than the halting problem: halting asks whether a program halts on one given input, while totality asks whether it halts on every input)
- "Can every partial function computable by a partial Turing machine be extended (that is, have its domain enlarged) to become a total computable function? Is it possible to change the definition of a Turing machine so that a particular class of total Turing machines, computing all the total computable functions, can be found? The answer to each of these questions is no." -- https://en.wikipedia.org/wiki/Machines_that_always_halt
- primitive recursion
- plus mu (unbounded minimization) = Turing
- https://en.wikipedia.org/wiki/LOOP_%28programming_language%29 , http://www.lacl.fr/valarcher/Publi/fi2011.pdf http://web.archive.org/web/20120311032544/http://loopgotowhile.eugenkiss.com/
- LOOP = addition, subtraction, concatenation of a sequence of commands, variable assignment, and LOOP (a for-loop whose bound is fixed before the body runs). LOOP computes exactly the primitive recursive functions.
- LOOP + while = language called 'WHILE', Turing complete.
- LOOP + if + goto = language called 'GOTO', Turing complete.
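The LOOP restriction can be seen in miniature in Python (an illustrative sketch; the function name is mine): every loop bound is computed before its body runs, and the body only increments and assigns, so every program in this style halts. Multiplication, LOOP-style:

```python
# A LOOP-style program: the bound of each loop is fixed on entry, so
# termination is guaranteed no matter what the body does to other
# variables. This discipline is enough to express every primitive
# recursive function, but not (for example) the Ackermann function.

def loop_mult(x, y):
    result = 0
    for _ in range(y):            # bound y fixed on entry, like LOOP y DO ... END
        for _ in range(x):        # bound x fixed on entry
            result = result + 1   # only increment, as in LOOP
    return result
```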

- WHILE is like MIN: incr, decr, and while (i != 0) do: http://web.archive.org/web/20060713051910/http://www.dcs.shef.ac.uk/~matt/teaching/04/com2030/lectures/tomlect16.pdf (Turing-complete)
- similar but more complicated languages making the same point as LOOP and WHILE are BlooP and FlooP: https://en.wikipedia.org/wiki/BlooP_and_FlooP

- https://en.wikipedia.org/wiki/Primitive_recursive_functional
- programming languages with only bounded loops: https://en.wikipedia.org/wiki/BlooP_and_FlooP ,
- can add/compose any set of total functions to any other set of total functions and still only get total functions
- "For example, the Ackermann function, which is not primitive recursive, nevertheless is a total computable function computable by a term rewriting system with a reduction ordering on its arguments (Ohlebusch, 2002, pp.67)."
- simply typed lambda calculus
- system F
- calculus of constructions
- "Both Epigram and Charity could be considered total functional programming languages, even though they don't work in the way Turner specifies in his paper. So could programming directly in plain System F, in Martin-Löf type theory or the Calculus of Constructions." -- https://en.wikipedia.org/wiki/Total_functional_programming
- agda is also total http://wiki.portal.chalmers.se/agda/pmwiki.php?n=ReferenceManual.Totality
- structural recursion (some sources, e.g. https://en.wikipedia.org/wiki/Total_functional_programming, say "sub-structural recursion"): in ordinary primitive recursion (i'll call it arithmetic primitive recursion, although i made that term up), an argument (and according to many definitions you can't choose any argument, it must be the first one) must be a natural number and must be decremented upon each call. In structural recursion, the first argument must be a subfield of, or a pattern (sub)match against, an ADT; e.g. when defining f(x) you may say 'f(S(y)) = some function of f(y)'. In this case it is the same as decrementing a natural number, but now imagine that the input is a finite binary tree rather than a natural number in successor notation, and you get the idea.
- walther recursion http://ttic.uchicago.edu/~dmcallester/walther.ps
- the idea is to define whether primitive functions have the property that f(..,x_i,...) < x_i, or the property that f(..,x_i,..) <= x_i, or neither of these properties. Then you infer whether this property holds for other user-defined functions. Then you define an expanded version of primitive recursion where, in the primitive recursive step, you are allowed to use these decreasing functions ("reducers"), rather than only functions that actually subtract one from their first argument. This expanded primitive recursion is called "walther recursion".
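The structural recursion described above can be sketched in Python (names are illustrative): the recursive calls only ever receive strict sub-parts of the input, so on a finite tree the recursion must bottom out, just as decrementing a natural number must reach zero.

```python
from dataclasses import dataclass
from typing import Optional

# A finite binary tree: the analogue of a natural number in successor
# notation, but with two "predecessors" per node instead of one.
@dataclass
class Node:
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def size(t: Optional[Node]) -> int:
    # Structural recursion: each call is on a strict subterm (t.left,
    # t.right), so on finite input the recursion must terminate.
    if t is None:
        return 0
    return 1 + size(t.left) + size(t.right)
```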
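A sketch of the extra freedom Walther recursion buys, in Python (the example is mine, not from McAllester's paper): halving is a reducer, since n // 2 < n for every n >= 1, so recursing through it counts as terminating even though it does not literally subtract one.

```python
def floor_log2(n: int) -> int:
    # Plain primitive recursion would only allow recursing on n - 1.
    # Walther recursion also admits any argument known to be strictly
    # smaller, e.g. n // 2 (a "reducer": n // 2 < n whenever n >= 1),
    # so this definition is accepted as obviously terminating.
    if n <= 1:
        return 0
    return 1 + floor_log2(n // 2)
```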

- ESFP (elementary strong functional programming): http://www.jucs.org/jucs_6_4/ensuring_termination_in_esfp/Telford_A.pdf
- i didn't even skim this paper very much but a very quick skim showed that the idea was somewhat similar to Walther recursion (see above).
- abstract: "In previous papers we have proposed an elementary discipline of strong functional programming (ESFP), in which all computations terminate. A key feature of the discipline is that we introduce a type distinction between data which is known to be finite, and codata which is (potentially) infinite. To ensure termination, recursion over data must be well-founded, and corecursion (the definition schema for codata) must be productive, and both of these restrictions must be enforced automatically by the compiler. In our previous work we used abstract interpretation to establish the productivity of corecursive definitions in an elementary strong functional language. We show here that similar ideas can be applied in the dual case to check whether recursive function definitions are strongly normalising. We thus exhibit a powerful termination analysis technique which we demonstrate can be extended to partial functions. "

- some arguments for total functional programming are in http://www.jucs.org/jucs_10_7/total_functional_programming/jucs_10_07_0751_0768_turner.pdf (Turner, D.A. (2004-07-28), "Total Functional Programming", Journal of Universal Computer Science 10 (7): 751–768, doi:10.3217/jucs-010-07-0751) (is this the same as http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.364&rep=rep1&type=pdf ? discussion of the latter at https://news.ycombinator.com/item?id=12646390)
- if you have partial functions, then consider the program 'loop :: Int->Int; loop n = 1 + loop n'; if you neglected the possibility of nontermination and used ordinary algebra, you'd get 'loop 0=1+loop 0' and from that, '0 = 1', which is false. partial functions make algebra confusing; total functions have a simpler proof theory
- partial functions force more complications in programming language design;
- if you have partial data types A and B, then should AxB be lifted or not? that is, should it have its own unique bottom value, separate from bottom_A x bottom_B? this affects pattern matches such as f(x,y) = (something of type AxB); if bottom_{AxB} = bottom_{A} x bottom_{B}, then this pattern match is irrefutable, i.e. cannot fail, but otherwise it can.
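Turner's 'loop' example transcribed into Python makes the algebraic point concrete: no integer satisfies the equation, and trying to compute one simply fails to terminate (here cut short by the interpreter's recursion limit):

```python
def loop(n: int) -> int:
    # Transcribes 'loop n = 1 + loop n'. Treating this as ordinary
    # algebra would force 0 = 1; operationally it has no value at all.
    return 1 + loop(n)

# In a strict language the non-value shows up as non-termination;
# Python's recursion limit stands in for the infinite loop here.
try:
    loop(0)
    diverges = False
except RecursionError:
    diverges = True
```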

- to completely define the 'and' operator, you must define which arguments it is strict on, that is, define the results of 'bottom AND y' and 'y AND bottom'; most languages opt to make it left strict, but this breaks the symmetry of AND in its arguments
- partial functions mean that a choice of evaluation order can determine whether the computation converges or not (reduction has the weak Church-Rosser property, that is that IF a normal form is reached by multiple reduction strategies, that normal form will be the same; but not the strong Church-Rosser property, that is that "Every reduction sequence leads to a normal form and normal forms are unique"). In total functional languages, we have the strong Church-Rosser property, that is, the choice of evaluation strategy doesn't affect the result
- Turner notes that Martin-Lof's intuitionistic type theory (constructive type theory), is an extant powerful model of total functional programming. He notes that it has "Dependent types (types indexed over values), Second order types, An isomorphism between types and propositions, that enables programs to express proof information". However, he notes that it is complex to learn to program in it, and he wants to propose something simpler. In particular, he gives the following simplicity criteria:
- "Type structure no more complicated than Hindley/Milner, or one of its simple variants. So we will have types like Nat → Nat, and polymorphic types like α → α, but nothing worse."
- "Programs and proofs will be kept separate, as in conventional programming. What we are looking for is essentially a strongly terminating subset of Miranda or Haskell (or for that matter SML, since the difference between strict and lazy goes away in a strong functional language)"
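Python's built-in `and` makes exactly the left-strict choice described above; a quick illustration, with a raised exception standing in for bottom:

```python
def bottom():
    # Stands in for a nonterminating (undefined) computation.
    raise RuntimeError("undefined")

# Left-strict: 'False and x' never looks at x, so a bottom on the
# right is absorbed...
left_ok = (False and bottom())      # short-circuits to False

# ...but a bottom on the left always propagates, breaking the
# symmetry of logical AND in its two arguments.
try:
    bottom() and False
    symmetric = True
except RuntimeError:
    symmetric = False
```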

- Turner gives 4 (he counts 3) conditions that imply total functionality
- All built-in operations must be total
- suggestions: start with Nat rather than Int; to make head (of a list) total, either give it a mandatory 'return this in case of empty list' parameter, or don't have head at all and only let the programmer get at the head of a list via a pattern match (destructuring bind)
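The first suggestion (a mandatory default for head) can be sketched in Python (hypothetical signature):

```python
from typing import Sequence, TypeVar

T = TypeVar("T")

def head(xs: Sequence[T], if_empty: T) -> T:
    # Total version of head: the caller must say what the empty case
    # means, so there is no input on which the function is undefined.
    return xs[0] if xs else if_empty
```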

- All case analysis must be complete
- "Type recursion must be covariant. That is type recursion through the left hand side of → is not permitted. "
- For example:

      data Silly = Very (Silly->X)   ||not allowed!

      bad :: Silly -> X
      bad (Very f) = f (Very f)

      foo :: X
      foo = bad (Very bad)

  trying to evaluate foo leads to an infinite loop.

**Turner asks, "If all programs terminate, how do we write an operating system?". He answers, "we need codata as well as data. (But unlike in weak functional programming, the two will be kept separate. We will have finite data and infinite codata, but no partial data.)"**

" CODATA

What we have sketched so far would make a nice teaching language but is not enough for production programming. Let us return to the issue of writing an operating system. An operating system can be considered as a function from a stream of requests to a stream of responses. To program things like this functionally we need infinite lists - or something equivalent to infinite lists. In making everything well-founded and terminating we have seemingly removed the possibility of defining infinite data structures. To get them back we introduce codata type definitions:

> codata Colist a = Conil | a <> Colist a

Codata definitions are equations over types that produce final algebras, instead of the initial algebras we get for data definitions. So the type Colist contains all the infinite lists as well as finite ones - to get the infinite ones alone we would omit the Conil alternative. Note that infix <> is the coconstructor for colists.

4.1 Programming with Codata

The rule for primitive corecursion on codata is the dual to that for primitive recursion on data. Instead of descending on the argument, we ascend on the result. Like this

> f :: something -> Colist Nat
> f args = RHS(f args')

where the leading operator of the context RHS(-) must be a coconstructor, with the corecursive call to f as one of its arguments. There is no constraint on the form of args'. Notice that corecursion creates (potentially infinite) codata, whereas ordinary recursion analyses (necessarily finite) data. Ordinary recursion is not legal over codata, because it might not terminate. Conversely corecursion is not legal if the result type is data, because data must be finite. Now we can define infinite structures, such as

> ones :: Colist Nat
> ones = 1 <> ones

> fibs :: Colist Nat
> fibs = f 0 1
>        where
>        f a b = a <> f b (a+b)

and many other examples which every Miranda or Haskell programmer knows and loves. Note that all our infinite structures are total. As in the case of primitive recursion over data, the rule for coprimitive corecursion over codata requires us to rewrite some of our algorithms, to adhere to the discipline of total functional programming. This is sometimes quite hard -


for example rewriting the well known sieve of Eratosthenes program in this discipline involves coding in some bound on the distance from one prime to the next.

There is a (very nice) principle of coinduction, which we use to prove infinite structures equal. It can be read off from the definition of the codata type. We discuss this in the next subsection.

A question. Does the introduction of codata destroy strong normalisability? No! But you have to have the right definition of normal form. Every expression whose principal operator is a coconstructor is in normal form. (To get confluence as well as strong normalisability requires a little more care. Each corecursive definition is translated into a closed term and an explicit unwind operation introduced - see [Telford and Turner 1997] for details. The scheme in [Wadler et al. 1998] for translating lazy definitions into a strict language is related.)

4.2 Coinduction

First we give the definition of bisimilarity (on colists). We can characterise ≈, the bisimilarity relation, as follows

x ≈ y ⇒ hd x = hd y ∧ tl x ≈ tl y

Actually this is itself a corecursive definition! To avoid a meaningless regress, what one actually says is that anything obeying the above is a bisimulation, and by bisimilarity we mean the largest such relation. For a fuller discussion see [Pitts 1994]. Taking as read this background understanding of how to avoid logical regress, we say that in general two pieces of codata are bisimilar if:

- their finite parts are equal, and
- their infinite parts are bisimilar.

The principle of coinduction may now be stated as follows: Bisimilar objects are equal. One way to understand this principle is to take it as the definition of equality on infinite objects.

We can package the definition of bisimilarity and the principle that bisimilar objects are equal in the following method of proof: When proving the equality of two infinite structures we may assume the equality of recursive substructures of the same form. For colists we get - to prove

    g x1 ... xn = h x1 ... xn

it is sufficient to show

    g x1 ... xn = e <> g a1 ... an
    h x1 ... xn = e <> h a1 ... an

There is a similar rule for each codata type.

We give one example of a proof by coinduction. The following theorem about the standard functions map and iterate is from [Bird and Wadler 1988]. We have changed the name of map to comap because for us it is a different function when it acts on colists.

> iterate f x = x <> iterate f (f x)
> comap f (a <> x) = f a <> comap f x

Theorem: iterate f (f x) = comap f (iterate f x)

Proof by coinduction. Unfolding each side once with the definitions above:

    iterate f (f x)
      = f x <> iterate f (f (f x))          {iterate}

    comap f (iterate f x)
      = comap f (x <> iterate f (f x))      {iterate}
      = f x <> comap f (iterate f (f x))    {comap}

Both sides are f x <> (the theorem instantiated at f x), so by the coinduction rule for colists they are equal. QED

The proof given in Bird and Wadler uses the take-lemma - it is longer than that given above and requires an auxiliary construction, involving the application of a take function to both sides of the equation, and an induction on the length of the take. The absence of a base case in this form of induction is at first sight puzzling. It is important to note that coinduction is valid only for the proof of equations over infinite structures, not of arbitrary properties of the data structure as with ordinary induction.

The "strong coinduction" principle illustrated here seems to give shorter proofs of equations over infinite lists than either of the proof methods for this which have been developed in the theory of weak functional programming - namely partial object induction [Turner 1982] and the take-lemma [Bird and Wadler 1988]. The framework seems simpler than previous accounts of coinduction - see for example [Pitts 1994], because we are not working with domain theory and partial objects, but with the simpler world of total objects.

Moral: Getting rid of partial objects seems an unmitigated blessing - not only when reasoning about finite data, but perhaps even more so in the case of infinite data.

"

**The paper also mentions Walther recursion as a generalization of the above**

- within any set of total functions satisfying certain simple closure properties, one cannot have a universal function (theorem: http://mathoverflow.net/questions/46350/between-mu-and-primitive-recursion/46359#46359 )

see also https://en.wikipedia.org/wiki/Machines_that_always_halt , https://en.wikipedia.org/wiki/Total_functional_programming
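The corecursive ones and fibs in the quoted Turner excerpt can be mimicked with Python generators, which are productive in the same sense: each element of the conceptually infinite result arrives after finitely many steps, and a consumer that demands only a finite prefix terminates. A sketch (the translation is mine, not Turner's):

```python
from itertools import islice
from typing import Iterator

def ones() -> Iterator[int]:
    # Always "one coconstructor deep": yield an element, then suspend.
    # The analogue of ones = 1 <> ones.
    while True:
        yield 1

def fibs() -> Iterator[int]:
    # The analogue of fibs = f 0 1 where f a b = a <> f b (a+b).
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

prefix = list(islice(fibs(), 7))   # demand only a finite prefix
```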

the best explanation i've seen of codata:

http://blog.sigfpe.com/2007/07/data-and-codata.html

notes from that:

consider

    sum [] = 0
    sum (a:as) = a + sum as

this is recursion, and more specifically we can classify it as "(sub)structural recursion"; because the argument to the recursion is only part of the input argument (that is, after destructuring the input list into a and as, we only pass as in the recursive call), we can prove that it terminates on finite input. however, it doesn't produce any output until it has consumed all of its input.

now consider

    sumSoFar x [] = [x]
    sumSoFar x (y:ys) = x : sumSoFar (x+y) ys

this is recursion, and more specifically we can classify it as "guarded recursion"; because it uses a constructor to produce the output, and only puts the output of the recursive call into one part of the constructor (that is, part of the output is just 'x' and only the other part is a call to sumSoFar), we can prove that it terminates if you only lazily request the head of the list; that is, it produces each part of the output in finite time even if you give it an infinite list. in this case i think it is also substructurally recursive, but one could imagine something that wasn't.
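the guarded shape of sumSoFar can be mimicked with a Python generator over a possibly infinite input (a sketch of mine, not from the blog post): each output element is emitted before the next recursive step, so any finite prefix of the output arrives in finite time even from an infinite input.

```python
from itertools import count, islice
from typing import Iterable, Iterator

def sum_so_far(x: int, ys: Iterable[int]) -> Iterator[int]:
    # Guarded: an output element is produced (yielded) before the
    # "recursive" step, so the running sums of even an infinite input
    # stream can be consumed lazily, prefix by prefix.
    for y in ys:
        yield x
        x = x + y
    yield x

# Running sums over the infinite stream 1, 2, 3, ...
partial_sums = list(islice(sum_so_far(0, count(1)), 5))
```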

as the blog post says, "Note the duality: in structural recursion we 'deconstruct' the argument and then we're allowed to recurse. In guarded recursion we recurse first, and then we're allowed to use the constructor."

" But suppose we were to distinguish between these two different solutions. We could use the keyword data to mean the smallest solution and codata to mean the largest solution. The former represents data, and it's always finite. The latter represents what we call codata, and it's possibly infinite. And now we can refine our rules for well-behavedness. Consider data and codata to be distinct types. In a strongly typed language this immediately allows us to restrict sum to data, not codata. The rule is: you're only allowed to use structural recursion with data and guarded recursion with codata. With that rule, we're guaranteed that our recursions will always be safe, and yet that we can still have open-ended loops in our code. Sometimes these are called recursion and corecursion respectively. "