proj-plbook-plChHalting



machines that always halt / total functions

* "Each recursive function call must be on a syntactic sub-component of its formal parameter" "This form of recursion, often called structural recursion, sits naturally with function definition by pattern matching." Purely syntactic because we don't want to have to supply a proof that some argument is decreasing on a well-ordered order "n the case of a function of multiple arguments we also permit “nested” structural recursion as in Ackermann’s function: 'ack :: Nat->Nat->Nat; ack 0 n = n+1; ack (m+1) 0 = ack m 1; ack (m+1) (n+1) = ack m (ack (m+1) n)'. "The extension to multiple arguments adds no power, because what it does can be desugared using higher order functions, but is syntactically convenient" "The rule to allow recursion only by syntactic descent on data constructors effectively restricts us to primitive recursion, which is guaranteed to terminate. But isn’t primitive recursion quite weak? For example is it not the case that Ackermann’s function fails to be primitive recursive? No, that’s a first order result - it does not apply to a language with higher order functions. IMPORTANT FACT: we are here working in a higher order language, so what we actually have are the primitive recursive functionals of finite type, as studied by [G ̈ odel 1958] in his System T. These are known to include every recursive function whose totality can be proved in first order logic (starting from the usual axioms for the elementary data types, eg the Peano axioms for Nat). So Ackermann is there, and much, much else. Indeed, we have more than system T, because we can define data structures with functional components, giving us infinitarily branching trees. Depending on the exact rules for typechecking polymorphic functions, it is possible to enlarge the set of definable functions to all those which can be proved total in second order arithmetic."
Conversely, recursive type definitions where the recursion occurs to the left of a function arrow must be rejected. A declaration like data Silly = Very (Silly -> X) is not allowed!, because it would let us write

bad :: Silly -> X
bad (Very f) = f (Very f)

foo :: X
foo = bad (Very bad)

and trying to evaluate foo leads to an infinite loop.

Turner asks, "If all programs terminate, how do we write an operating system?". He answers, "we need codata as well as data. (But unlike in weak functional programming, the two will be kept separate. We will have finite data and infinite codata, but no partial data.)"

Codata

" ODATA What we have sketched so far would make a nice teaching language but is not enough for production programming. Let us return to the issue of writing an operating system. An operating system can be considered as a function from a stream of re- quests to a stream of responses. To progr am things like this functionally we need infinite lists - or something equivalent to infinite lists. In making everything well-founded and terminating we have seemingly re- moved the possibility of defining infinite data structures. To get them back we introduce codata type definitions: 761 Turner D.: Total Functional Programming > codata Colist a = Conil

Codata definitions are equations over types that produce final algebras, in- stead of the initial algebras we get for data definitions. So the type Colist contains all the infinite lists as well as finite ones - to get the infinite ones alone we would omit the Conil alternative. Note that infix <> is the coconstructor for colists. 4.1 Programming with Codata Therulefor primitive corecursion on codata is the dual to that for primitive recursion on data. Instead of descending on the argument, we ascend on the result. Like this > f :: something->Colist Nat > f args = RHS(f args’) where the leading operator of the context RHS(-) must be a coconstructor, with thecorecursivecallto f as one of its arguments. There is no constraint on the form of args’ . Notice that corecursion creates (potentially infinite) codata, whereas ordi- nary recursion analyses (necessarily finite) data. Ordinary recursion is not legal over codata, because it might not terminate. Conversely corecursion is not legal if the result type is data, because data must be finite. Now we can define infinite structures, such as > ones :: Colist Nat > ones = 1 <> ones > fibs :: Colist Nat > fibs = f 0 1 > where > f a b = a <> f b (a+b) and many other examples which every Miranda or Haskell programmer knows and loves. Note that all our infinite structures are total. As in the case of primitive recursion over data, the rule for coprimitive core- cursion over codata requires us to rewrite some of our algorithms, to adhere to the discipline of total functional programming. This is sometimes quite hard
a <> Colist a
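a runnable Haskell approximation of the quoted examples (mine, not the paper's; Haskell does not actually separate data from codata, so the totality discipline here is by convention, and the constructor :<> stands in for the paper's infix <>, since Haskell constructor operators must start with a colon):

data Colist a = Conil | a :<> Colist a
infixr 5 :<>

ones :: Colist Integer
ones = 1 :<> ones              -- guarded: the corecursive call sits under the coconstructor

fibs :: Colist Integer
fibs = go 0 1
  where go a b = a :<> go b (a+b)

-- observe a finite prefix of codata: takeCo 8 fibs == [0,1,1,2,3,5,8,13]
takeCo :: Int -> Colist a -> [a]
takeCo n _ | n <= 0 = []
takeCo _ Conil      = []
takeCo n (x :<> xs) = x : takeCo (n-1) xs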
Equality on infinite objects is defined via bisimilarity; on colists, for example, the bisimilarity relation ≈ satisfies

x ≈ y ⇒ hd x = hd y ∧ tl x ≈ tl y

Actually this is itself a corecursive definition! To avoid a meaningless regress what one actually says is that anything obeying the above is a bisimulation, and by bisimilarity we mean the largest such relation. For a fuller discussion see [Pitts 1994]. Taking as read this background understanding of how to avoid logical regress, we say that in general two pieces of codata are bisimilar if:
- their finite parts are equal, and
- their infinite parts are bisimilar.
The principle of coinduction may now be stated as follows: bisimilar objects are equal. One way to understand this principle is to take it as the definition of equality on infinite objects. We can package the definition of bisimilarity and the principle that bisimilar objects are equal in the following method of proof: when proving the equality of two infinite structures we may assume the equality of recursive substructures of the same form. For colists we get: to prove

g x1 ... xn = h x1 ... xn

it is sufficient to show

g x1 ... xn = e <> g a1 ... an
h x1 ... xn = e <> h a1 ... an

There is a similar rule for each codata type.

We give one example of a proof by coinduction. The following theorem about the standard functions map and iterate is from [Bird and Wadler 1988]. We have changed the name of map to comap because for us it is a different function when it acts on colists.

> iterate f x = x <> iterate f (f x)
> comap f (a <> x) = f a <> comap f x

Theorem: iterate f (f x) = comap f (iterate f x)

Proof by coinduction:

iterate f (f x)
  = f x <> iterate f (f (f x))         {iterate}
  = f x <> comap f (iterate f (f x))   {ex hypothesi}
  = comap f (x <> iterate f (f x))     {comap}
  = comap f (iterate f x)              {iterate}

QED. The proof given in Bird and Wadler uses the take-lemma - it is longer than that given above and requires an auxiliary construction, involving the application of a take function to both sides of the equation, and an induction on the length of the take. The absence of a base case in this form of induction is at first sight puzzling. It is important to note that coinduction is valid only for the proof of equations over infinite structures, not of arbitrary properties of the data structure as with ordinary induction. The “strong coinduction” principle illustrated here seems to give shorter proofs of equations over infinite lists than either of the proof methods for this which have been developed in the theory of weak functional programming - namely partial object induction [Turner 1982] and the take-lemma [Bird and Wadler 1988]. The framework seems simpler than previous accounts of coinduction - see for example [Pitts 1994] - because we are not working with domain theory and partial objects, but with the simpler world of total objects. Moral: getting rid of partial objects seems an unmitigated blessing - not only when reasoning about finite data, but perhaps even more so in the case of infinite data.

" .

* The paper also mentions Walther recursion as a generalization of the above

see also https://en.wikipedia.org/wiki/Machines_that_always_halt , https://en.wikipedia.org/wiki/Total_functional_programming

the best explanation i've seen of codata:

http://blog.sigfpe.com/2007/07/data-and-codata.html

notes from that:

consider

sum [] = 0
sum (a:as) = a + sum as

this is recursion, and more specifically we can classify it as "(sub)structural recursion"; because the argument to the recursion is only part of the input argument (that is, after destructuring the input list into a and as, we only pass as in the recursive call), we can prove that it terminates on finite input. however, it doesn't produce any output until it has consumed all of its input.

now consider

sumSoFar x [] = [x]
sumSoFar x (y:ys) = x : sumSoFar (x+y) ys

this is recursion, and more specifically we can classify it as "guarded recursion"; because it uses a constructor to produce the output, and only puts the output of the recursive call into one part of the constructor (that is, part of the output is just 'x' and only the other part is a call to sumSoFar), we can prove that it terminates if you only lazily request the head of the list; that is, it produces each part of the output in a finite time even if you give it an infinite list. in this case i think it is also substructurally recursive, but could one imagine something that wasn't? (see the sketch below)
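yes - a minimal example (mine, not the blog's): the recursive call's argument grows, so this is not structural recursion, yet every call is guarded by the (:) constructor, so any finite prefix of the output is still computed in finite time:

nats :: Integer -> [Integer]
nats n = n : nats (n + 1)   -- guarded but not substructural: n+1 is not a sub-part of n

-- take 5 (nats 0) == [0,1,2,3,4]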

as the blog post says, "Note the duality: in structural recursion we 'deconstruct' the argument and then we're allowed to recurse. In guarded recursion we recurse first, and then we're allowed to use the constructor."

" But suppose we were to distinguish between these two different solutions. We could use the keyword data to mean the smallest solution and codata to mean the largest solution. The former represents data, and it's always finite. The latter represents what we call codata, and it's possibly infinite. And now we can refine our rules for well-behavedness. Consider data and codata to be distinct types. In a strongly typed language this immediately allows us to restrict sum to data, not codata. The rule is: you're only allowed to use structural recursion with data and guarded recursion with codata. With that rule, we're guaranteed that our recursions will always be safe, and yet that we can still have open-ended loops in our code. Sometimes these are called recursion and corecursion respectively. "