notes-computer-programming-programmingLanguageDesign-prosAndCons-tailRecursion

http://neopythonic.blogspot.com/2009/04/tail-recursion-elimination.html

http://web.archive.org/web/20110716163344/http://projectfortress.sun.com/Projects/Community/blog/ObjectOrientedTailRecursion

"

To me, tail calls are a design decision for each language. The great Python tail-call debate of 2009 nicely showed that with tail calls certain things are easy, and without tail calls other things are easy. Tail calls have a significant influence on other features in the language around function calls, scoping and error handling. There does not appear to be a clear consensus that the trade off should be one way or the other. For now it seems prudent to explore both sides until a clear winner emerges. " -- Jonathan Wright, http://www.acooke.org/cute/GoRocksHow0.html

---

http://programmers.stackexchange.com/questions/157684/what-limitations-does-the-jvm-impose-on-tail-call-optimization

basically says that the JVM has no bytecode instruction that lets you GOTO from one method into another without creating a new stack frame, so every call, tail position or not, pushes a frame
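
A small Scala sketch of the consequence (mine, not from the linked question; the names are illustrative): because there is no frame-reusing jump between methods, even calls in tail position each consume a stack frame unless the compiler can rewrite them into a loop, which scalac can only do when a method calls itself.

    object Parity {
      // Both calls are in tail position, but scalac can only turn *self* tail
      // calls into loops; a call from one method into another still pushes a
      // JVM frame, so depth is bounded by -Xss rather than by the algorithm.
      def even(n: Long): Boolean = if (n == 0) true else odd(n - 1)
      def odd(n: Long): Boolean = if (n == 0) false else even(n - 1)
    }

    // Parity.even(10L * 1000 * 1000)   // likely StackOverflowError with a default stack size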

---

SoftwareMaven 16 hours ago

> other than proper functional programming, maybe, which isn't about to happen in Python

If we could just have tail call optimization, I think we could make the rest work (well, maybe better lambdas, too).

irahul 8 hours ago

> If we could just have tail call optimization, I think we could make the rest work

Manually thunk/trampoline it?

    class trampoline(object):
        """Decorator: keep forcing thunks returned by fn until a real value appears."""
        def __init__(self, fn):
            self.fn = fn
        def __call__(self, *args, **kwargs):
            ret = self.fn(*args, **kwargs)
            while isinstance(ret, thunk):  # keep bouncing; stack depth stays constant
                ret = ret()
            return ret

    class thunk(object):
        """A suspended tail call: a function plus its arguments, not yet invoked."""
        def __init__(self, fn, *args, **kwargs):
            self.__dict__.update(fn=fn, args=args, kwargs=kwargs)
        def __call__(self):
            if isinstance(self.fn, trampoline):
                # call the undecorated function so we get back a thunk, not a full bounce
                return self.fn.fn(*self.args, **self.kwargs)
            else:
                return self.fn(*self.args, **self.kwargs)

    @trampoline
    def fact(n, accum=1):
        if n <= 1:
            return accum
        else:
            return thunk(fact, n-1, n*accum)

    print(fact(1000))

    @trampoline
    def is_even(n):
        if n == 0:
            return True
        else:
            return thunk(is_odd, n - 1)

    @trampoline
    def is_odd(n):
        if n == 0:
            return False
        else:
            return thunk(is_even, n - 1)

    print(is_even(1000001))

You only have to write the thunk/trampoline utility once, and it is similar to how we do it in Clojure.

---

http://neopythonic.blogspot.com/2009/04/final-words-on-tail-calls.html

http://neopythonic.blogspot.com/2009/04/tail-recursion-elimination.html

---

"

Programs that rely on tail calls need them to be implemented as tail calls for semantic correctness on real machines - they need to be assured that stack usage will be bounded independent of how many calls logically recur. Meanwhile, developers frequently want to avail of debug builds, runtime diagnostic logging, code security based on stack analysis, etc. that don't automatically work well with tail call optimization, which can be surprising to the procedurally-oriented developer.

The optimization is also more important outside procedurally-oriented programming, where mutable state and explicit loops are more frequently used. The runtime and R&D work that would go into analyzing for TCO and implementing it could be spent on other optimizations that give better return for the investment. tail.call on .NET has long been implemented in a relatively simple but slow way (it may have improved in recent years with F# etc., but last I checked it was implemented as a generic assembly routine that unwound the stack, rather than being baked in to the JIT codegen).

As pkhuong says, it's very rare for the activation record to be allocated outside the stack for procedural code oriented VMs like .NET, JVM, LLVM etc.; closures, coroutines etc. that would extend the record's life are rewritten to explicit heap allocation by the front end before they hit the VM layer of abstraction. (Actually, I took for granted that you would think that the GC does not normally collect activation records in industrial VMs, I missed that in your question.) By barrkel at Sun, 2011-05-15 18:09

There is no good reason for it.

The stack trace issue can be solved by simply emitting notes to a debugging log that supports unwinding.

It is essentially a social problem -- I think that Guido exemplifies it nicely, and you can read his reasoning here:

http://neopythonic.blogspot.com/2009/04/tail-recursion-elimination.html

PS: varargs shouldn't be an issue for TCO in C, either. By brian at Mon, 2011-05-16 03:29

> The stack trace issue can be solved by simply emitting notes to a debugging log that supports unwinding.

"Simply" building and maintaining the notes that make up stack trace messages usually only generated when dumping stack frames to I/O may in fact cause "social problems". So preventing them is a good idea, no? By Kay Schluehr at Mon, 2011-05-16 04:50


This is what debug builds are for. By brian at Mon, 2011-05-16 08:41

"

http://lambda-the-ultimate.org/node/4274
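
As a concrete illustration of the bounded-stack guarantee barrkel describes above (my own sketch, not from the thread): tail-call elimination effectively turns the chain of calls into a loop that reuses one frame.

    // A sketch of what tail-call elimination effectively emits for an
    // accumulator-style tail recursion: update the locals and jump back to the
    // top, so stack usage is constant however many calls "logically" recur.
    def factIterative(n0: BigInt): BigInt = {
      var n = n0
      var acc = BigInt(1)
      while (n > 1) {      // one iteration per logical recursive call
        acc = acc * n
        n = n - 1
      }
      acc
    }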

---

how F# does tail recursion on .NET (CLR):

http://blogs.msdn.com/b/fsharpteam/archive/2011/07/08/tail-calls-in-fsharp.aspx

---

richdougherty 175 days ago

The stack doesn't have to be in the heap to make things better for recursion*; the JVM just needs to support tail-call optimisation.

See my blog post: http://blog.richdougherty.com/2009/04/tail-calls-tailrec-and...

I'm excited to see tail-call optimisation ("proper tail calls") coming to JavaScript. Hopefully a bit of competition will encourage the JVM to clean up its act too.

http://bbenvie.com/articles/2013-01-06/JavaScript-ES6-Has-Ta...

(* Heap-allocated frames can help with other things though, e.g. continuations.)


http://duartes.org/gustavo/blog/post/tail-calls-optimization-es6/

HN discussion of that: https://news.ycombinator.com/item?id=7793837

---

hibikir 175 days ago

The JVM has a couple of pretty significant problems when trying to work with functional-style languages. The biggest one is the way it deals with the stack: it's not in the heap, and it's pretty limited, so with something like Scala, and more specifically with Scalaz, you have to do some contortions to avoid overflows. 90% of usages of Scalaz trampolines are nothing but ways to work around JVM limitations.

I'd argue that today, you are better off with the .NET VM, as far as features are concerned. But being stuck in windowsland is really a non-starter for all kinds of applications. I can send a major task that will take 400 computers for a week to Amazon if my code relies on the JVM. If I write it in .NET, not so much.


bad_user 175 days ago

In Scala I work with self tail recursive functions all the time. Scala optimizes self tail recursive functions just fine, rewriting them as simple loops.
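
For instance (my sketch, not from the comment), the self tail call below compiles to a plain loop, and scala.annotation.tailrec makes the compiler reject the method if it cannot do that rewrite (e.g. if the method were overridable or the call were not in tail position):

    import scala.annotation.tailrec

    object Fact {
      // scalac rewrites the self tail call into a loop in the emitted bytecode;
      // @tailrec turns "could not optimize" into a compile-time error.
      @tailrec
      def fact(n: BigInt, acc: BigInt = BigInt(1)): BigInt =
        if (n <= 1) acc
        else fact(n - 1, acc * n)   // tail position: nothing runs after the call
    }

    Fact.fact(100000)   // constant stack space, no StackOverflowError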

Yes, you do have to use trampolines for mutually tail-recursive functions, because the JVM currently lacks support for them. However, if you take a look at the Da Vinci sub-projects in OpenJDK, a prototype has been in the works for quite some time, led by none other than John Rose, and given the attention that invokeDynamic is getting, I have no doubt that this will also happen on the JVM: http://openjdk.java.net/projects/mlvm/subprojects.html
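
The trampoline the comment alludes to also ships with the Scala standard library; a minimal sketch using scala.util.control.TailCalls for the mutual-recursion case:

    import scala.util.control.TailCalls._

    object ParityTC {
      // tailcall() suspends the next step as a small heap object, done() ends
      // the chain, and .result runs the bounce loop in constant stack space.
      def isEven(n: Long): TailRec[Boolean] =
        if (n == 0) done(true) else tailcall(isOdd(n - 1))
      def isOdd(n: Long): TailRec[Boolean] =
        if (n == 0) done(false) else tailcall(isEven(n - 1))
    }

    ParityTC.isEven(1000001L).result   // false, without one JVM frame per logical call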

Also, Scalaz, great as it may be, simply sucks in terms of the actual implementation, as it has loads of functionality that breaks when that shouldn't have been possible in the first place. Every time I happened to investigate issues related to Scalaz, I always ended up with a "WTF were they thinking" moment.

> I'd argue that today, you are better off with the .NET VM, as far as features are concerned.

That's bullshit. Speaking of tail-call optimization, the tail-call bytecode available in .NET didn't even work at all on 64-bit .NET prior to version 4.5 - that's right, before 4.5 the tailcall bytecode was considered more of a guideline. And because C# doesn't use it, they never bothered to optimize it, so it has a pretty dramatic performance hit in comparison with normal function calls. And Mono probably doesn't do tailcalls properly even today, with F# being unusable on top of Mono for several years.

In terms of features - for example, the CLR doesn't optimize virtual method calls. This is killing dynamic languages, or static languages that diverge from the C# OOP flavor in terms of polymorphism. IronRuby was never even close to what JRuby could do, even when Microsoft was funding its development, simply by virtue of JRuby running on top of the JVM. And now, ever since JDK 7 with invokeDynamic, the performance boost is at times so amazing compared to Ruby MRI that people are running apps on top of JRuby for performance reasons ;-) The JVM for example can inline virtual method calls at runtime and can de-optimize those call sites in case invariants change or in case it notices that there are no performance benefits. This is something that really few other VMs can do.

invokeDynamic is amazing. It provides a way to override the default method resolution that the JVM does, while still benefiting from the same optimizations that the JVM does for normal method calls. It even has benefits for static languages such as Scala, for dealing with closures, and Java 8 introduces a facility for initializing a value that can afterwards be treated as a constant, so there are big wins ahead for Scala in terms of performance for things like "lazy val", or structural/dynamic types and others.

Functional languages use a lot of persistent data-structures and persistent data-structures are by their nature wasteful in terms of short-term junk. You need a really good garbage collector to not suffer from this and the JVM really has the best garbage collectors available. You should see what it can do in a server-side environment with server instances being hit with thousands of reqs/sec per JVM process of real traffic. I have and it's freaking sweet.
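
A tiny illustration of that short-term junk (my sketch, not from the comment): every "update" of a persistent structure is a fresh small allocation whose bulk is shared with the old version, so functional code churns out many short-lived objects that the GC has to absorb.

    // Prepending to an immutable List allocates one new cons cell per update
    // and shares the rest of the spine with the previous version.
    val base = List(2, 3, 4)
    val a = 1 :: base   // new cell; tail shared with `base`
    val b = 0 :: a      // another small, likely short-lived allocation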

Also, it's actually good that Java's generics aren't reified. Reification is only needed for languages like Java in which the type system sucks, and it gets in the way of more expressive/advanced type systems. Language implementers have to pull off a lot of tricks in order to work around the CLR's reified generics. F# has two generic type systems in the same language, as Hindley-Milner can't be piggybacked on top of the CLR's reified generics.

.NET still has some niceties, like stack-allocated value types, which are sweet if you want numbers that diverge from the standard primitives offered and want to avoid boxing (and JRuby suffers a little because of this). But btw, here's another thing that the JVM can do - it can do escape analysis, and for example it can decide to allocate certain short-lived objects straight on the stack if it sees that those objects don't escape their context.