on total functions:

" Girard and Reynolds' System F (1971): characterization of the provably total functions of second-order arithmetic. Coquand's Calculus of Constructions (1984): extending System F into a hybrid formalism for both proofs and programs (consistency = termination of evaluation). "


    Within each of the two paradigms there are several versions of typed lambda calculus. In many important systems, especially those à la Church, it is the case that terms that do have a type always possess a normal form. By the unsolvability of the halting problem this implies that not all computable functions can be represented by a typed term, see Barendregt (1990), Theorem 4.2.15. This is not so bad as it sounds, because in order to find such computable functions that cannot be represented, one has to stand on one's head. For example in λ2, the second-order typed lambda calculus, the only partial recursive functions that cannot be represented are those that happen to be total, but not provably so in mathematical analysis (second-order arithmetic).



" Thursday, January 5, 2006

Concurrent/Parallel programming is hard. Just listen to comments from industry leaders like Tim Bray (he also provides a good summary of some of the issues associated with this type of programming):

    "But the bottom line is, for application programmers, don't get into this space unless you're pretty sure you know what you're doing and you're willing to wrestle with some seriously difficult debugging." 

or Bill de hÓra:

    "Distributed programming is hard (and with it comes the truly hard matters of cache invalidation and naming)."

or Herb Sutter and James Larus - here:

    "Probably the greatest cost of concurrency is that concurrency really is hard: The programming model, meaning the model in the programmer's head that he needs to reason reliably about his program, is much harder than it is for sequential control flow." 

and here:

    "But concurrency is hard. Not only are today's languages and tools inadequate to transform applications into parallel programs, but also it is difficult to find parallelism in mainstream applications, and, worst of all, concurrency requires programmers to think in a way humans find difficult." 

And yet, almost everyone (see here, here, and here for examples) is saying:

    "Concurrency has long been touted as the "next big thing" and "the way of the future," but for the past 30 years, mainstream software development has been able to ignore it. Our parallel future has finally arrived: new machines will be parallel machines, and this will require major changes in the way we develop software." 

So, how are developers going to deliver the next generation of concurrent applications and (most importantly) how will Lisp fit into all of this? First of all, let's start with some basics. In my previous articles on this subject (see here, here, here, here, here), I've gone through some background and some definitions and I won't re-hash any of that again. However, it is probably a good idea to go over the different concurrent programming paradigms (as taken from the book Concepts, Techniques, and Models of Computer Programming):

    Shared-state Concurrency: This approach assumes multiple threads of execution with shared data using locks, monitors and transactions. It is the typical Java approach to concurrency and it is also the most widespread form of concurrency. However, shared-state concurrency is also the most complicated model to program to as it requires a fine level of synchronization and scheduling between threads.
    Message-passing Concurrency: Uses asynchronous messaging to communicate between multiple threads over a specified port. This is the model used (very successfully) by Erlang. This model provides for a coarser level of interaction between threads but is easier to program to.
    Declarative/Dataflow Concurrency: This approach is data-driven and extends sequential declarative-type programming with multiple threads of execution. It is a simpler approach than approaches #1 and #2 and it is very useful for certain types of problems; however, it is not widely used."

He then goes on to mention map-reduce, which he places in the Declarative/Dataflow section.
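The message-passing model from the second bullet above can be sketched in plain Java using a blocking queue as an Erlang-style mailbox. This is a minimal illustration, not code from the book: the class and method names are my own, and `BlockingQueue` here stands in for a real mailbox abstraction. Each "process" (thread) owns its queue and communicates only by sending messages, never by touching shared mutable state directly.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of message-passing concurrency: a worker thread drains its
// mailbox until it receives a "stop" message. No locks appear in user
// code; the queue mediates all communication between the threads.
public class Mailbox {
    // Receives messages until "stop" arrives; returns everything seen.
    static List<String> receiveAll(BlockingQueue<String> inbox) throws InterruptedException {
        List<String> seen = new ArrayList<>();
        String msg;
        while (!(msg = inbox.take()).equals("stop")) {
            seen.add(msg);
        }
        return seen;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> inbox = new LinkedBlockingQueue<>();
        AtomicReference<List<String>> result = new AtomicReference<>();

        // The "receiver process": runs concurrently with the sender.
        Thread worker = new Thread(() -> {
            try {
                result.set(receiveAll(inbox));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        // The sender never locks anything; it just posts messages.
        inbox.put("hello");
        inbox.put("world");
        inbox.put("stop");
        worker.join();

        System.out.println(result.get()); // prints [hello, world]
    }
}
```

Note how this matches the claim in the quote: the interaction between threads is coarser (whole messages rather than individual field accesses), which is exactly what makes it easier to program to than shared-state locking.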


" At the end of Part I of Java Concurrency in Practice, there is a nice "cheat sheet" summary of the main concepts and rules related to writing good concurrent code:

    It's the mutable state, stupid. All concurrency issues boil down to coordinating access to mutable state. The less mutable state, the easier it is to ensure thread safety.
    Make fields final unless they need to be mutable.
    Immutable objects are automatically thread-safe. Immutable objects simplify concurrent programming tremendously. They are simpler and safer, and can be shared freely without locking or defensive copying.
    Encapsulation makes it practical to manage the complexity. You could write a thread-safe program with all data stored in global variables, but why would you want to? Encapsulating data within objects makes it easier to preserve their invariants; encapsulating synchronization within objects makes it easier to comply with their synchronization policy.
    Guard each mutable variable with a lock.
    Guard all variables in an invariant with the same lock.
    Hold locks for the duration of compound actions.
    A program that accesses a mutable variable from multiple threads without synchronization is a broken program.
    Don't rely on clever reasoning about why you don't need to synchronize.
    Include thread safety in the design process, or explicitly document that your class is not thread-safe.
    Document your synchronization policy." ---
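Several of the cheat-sheet rules can be seen at work in a few lines of Java. The class below is an illustrative sketch of mine, not code from the book: the immutable configuration field is final, the mutable counter is encapsulated and guarded by a single lock (the object's intrinsic lock), and the compound check-then-act is held under that lock for its whole duration.

```java
// Sketch applying the cheat-sheet rules: encapsulated mutable state,
// one lock guarding every access, compound actions held under it.
public class BoundedCounter {
    private final int max;   // immutable configuration: a final field
    private int count = 0;   // mutable state, guarded by "this"

    public BoundedCounter(int max) {
        this.max = max;
    }

    // Compound action (read, test, write) held under one lock, so no
    // other thread can observe or interleave with a partial update.
    public synchronized boolean increment() {
        if (count >= max) {
            return false;
        }
        count++;
        return true;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedCounter c = new BoundedCounter(100);
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    c.increment();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // The bound is never exceeded, no matter how the threads interleave.
        System.out.println(c.get()); // prints 100
    }
}
```

Remove the `synchronized` keywords and the class becomes, in the book's words, a broken program: two threads could both pass the `count >= max` check and push the counter past its bound.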

on what makes a programming language succeed:


To summarize, Erlang supports a unified approach to both distributed and non-distributed concurrency while Clojure focuses on non-distributed concurrency. One could find plenty of valid reasons to prefer either Erlang's or Clojure's approach to concurrency. The fact is, although both languages have strong support for concurrency as a guiding principle, they were both influenced by different design criteria so their concurrency support emphasizes different things. But, even though Erlang has a lot of really nice features for concurrency, it has not really caught on as a "mainstream" programming language (which is a shame as Erlang is a really nice language and superb for concurrency-oriented programming). Clojure seems much more likely to attract a larger following of developers due to a combination of different (but related) reasons:

    With the recent proliferation of multi-core CPUs, there are a growing number of developers who are developing applications that use concurrency.
    Clojure simplifies the process of developing a typical concurrent application.
    Many "modern" legacy systems are written in Java. Clojure runs on the JVM which makes it more likely to get a "foot into" existing corporate development shops that have a large investment in Java.
    Clojure interops seamlessly with Java and therefore provides easy access to a huge range of Java libraries that Clojure programmers can use.

Java was never originally designed as a language for concurrent programming; it was designed as a language for embedded device programming with good OOP support. If you compared Java to C++ when Java came on the scene in the 1990s, it was less powerful, wasn't as fast, didn't have as many libraries, and its support for object-oriented programming (OOP) was not as featureful (and much less "complete" than in some other languages). However, Java grabbed both market and mind share by providing the right combination of language and OOP features in a familiar package (familiar, at least, for C/C++ programmers) and in a way that was easier to use and "get right" than in C++. In many ways, Clojure has the potential to do for concurrency-oriented programming what Java did for object-oriented programming a decade ago: make it simpler to do properly using a language (or, in Clojure's case, a "language environment") that is similar to what programmers are already used to. "