Table of Contents for Programming Languages: a survey

Part : Various people's random ideas on the nature, history, and future of programming languages


Lambda the Ultimate discussion: "What Are The Resolved Debates in General Purpose Language Design?"

The next big thing

"We all thought that the next level of programming language would be much more strategic and even policy-oriented and would have much more knowledge about what it was trying to do." -- Alan Kay,

"The future of computing depends on parallelism (for efficiency), distribution (for scale), and verification (for quality)." --

" most successful languages have had some pretty serious corporate backing ... NBL is garbage collected. ... Rule #2: Dynamic typing with optional static types. Rule #3: Performance ... 'It turns out that eval() is one of the key things that gets in the way of performance optimizations. It can easily invalidate all sorts of otherwise reasonable assumptions about variable bindings, stack addresses and so on. It's also pretty important, so you can't just get rid of it. So NBL will have to do something clever about eval. ...

Rule #4: Tools ... Here's a short list of programming-language features that have become ad-hoc standards that everyone expects:

    Object-literal syntax for arrays and hashes
    Array slicing and other intelligent collection operators
    Perl 5 compatible regular expression literals
    Destructuring bind (e.g. x, y = returnTwoValues())
    Function literals and first-class, non-broken closures
    Standard OOP with classes, instances, interfaces, polymorphism, etc.
    Visibility quantifiers (public/private/protected)
    Iterators and generators
    List comprehensions
    Namespaces and packages
    Cross-platform GUI
    Operator overloading
    Keyword and rest parameters
    First-class parser and AST support
    Static typing and duck typing
    Type expressions and statically checkable semantics
    Solid string and collection libraries
    Strings and streams act like collections

Additionally, NBL will have first-class continuations and call/cc. I hear it may even (eventually) have a hygienic macro system, although not in any near-term release.

Not sure about threads. I tend to think you need them, although of course they can be simulated with call/cc. I've also noticed that languages with poor threading support tend to use multiprocessing, which makes them more scalable across machines, since by the time you've set up IPC, distributing across machines isn't much of an architectural change. But I think threads (or equivalent) are still useful. Hopefully NBL has a story here. ....a truly great language would support Erlang-style concurrency, would have a simpler syntax and a powerful macro system, and would probably have much better support for high-level declarative constructs, e.g. path expressions, structural dispatch (e.g. OCaml's match ... with statement) and query minilanguages. "
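Several items on the checklist above are concrete enough to demonstrate. A quick illustration in Python (one of the languages that happens to support most of them); the names used are only for illustration:

```python
# A few of the checklist features, illustrated in Python.

# Object-literal syntax for arrays and hashes
point = {"x": 1, "y": 2}
nums = [3, 1, 4, 1, 5]

# Array slicing
assert nums[1:4] == [1, 4, 1]

# Destructuring bind, e.g. x, y = returnTwoValues()
def return_two_values():
    return 10, 20

x, y = return_two_values()
assert (x, y) == (10, 20)

# Function literals and first-class, non-broken closures
def make_counter():
    count = 0
    def step():
        nonlocal count
        count += 1
        return count
    return step

counter = make_counter()
assert counter() == 1 and counter() == 2

# Iterators and generators
def evens(limit):
    n = 0
    while n < limit:
        yield n
        n += 2

assert list(evens(7)) == [0, 2, 4, 6]

# List comprehensions
assert [n * n for n in nums if n > 2] == [9, 16, 25]
```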


"While I'd prefer something semantically and syntactically clean and beautiful like Ioke, my real concerns center around performance, libraries, expressivity, and community. If a language is too lacking in one of those, I'll look somewhere else." --

"...don't rely on a VM, statically typed and safe...The language that succeeds in striking the right balance between minimalism and features with an excellent support for concurrency will be the next big thing. " -- [1]

Big vs. small languages

" AK In a history of Smalltalk I wrote for ACM, I characterized one way of looking at languages in this way: a lot of them are either the agglutination of features or they’re a crystallization of style. Languages such as APL, Lisp, and Smalltalk are what you might call style languages, where there’s a real center and imputed style to how you’re supposed to do everything. Other languages such as PL/I and, indeed, languages that try to be additive without consolidation have often been more successful. I think the style languages appeal to people who have a certain mathematical laziness to them. Laziness actually pays off later on, because if you wind up spending a little extra time seeing that “oh, yes, this language is going to allow me to do this really, really nicely, and in a more general way than I could do it over here,” usually that comes back to help you when you’ve had a new idea a year down the road. The agglutinative languages, on the other hand, tend to produce agglutinations and they are very, very difficult to untangle when you’ve had that new idea.

Also, I think the style languages tend to be late-binding languages. The agglutinative languages are usually early-binding. That makes a huge difference in the whole approach. The kinds of bugs you have to deal with, and when you have to deal with them, is completely different." -- Alan Kay,

Type systems

" Some people are completely religious about type systems and as a mathematician I love the idea of type systems, but nobody has ever come up with one that has enough scope. If you combine Simula and Lisp—Lisp didn’t have data structures, it had instances of objects—you would have a dynamic type system that would give you the range of expression you need.

It would allow you to think the kinds of thoughts you need to think without worrying about what type something is, because you have a much, much wider range of things. What you’re paying for is some of the checks that can be done at runtime, and, especially in the old days, you paid for it in some efficiencies. Now we get around the efficiency stuff the same way Barton did on the B5000: by just saying, “Screw it, we’re going to execute this important stuff as directly as we possibly can.” We’re not going to worry about whether we can compile it into a von Neumann computer or not, and we will make the microcode do whatever we need to get around these inefficiencies because a lot of the inefficiencies are just putting stuff on obsolete hardware architectures. " -- Alan Kay,

"I'm not against types, but I don't know of any type systems that aren't a complete pain, so I still like dynamic typing." -- Alan Kay,


" But the flip side of the coin was that even good programmers and language designers tended to do terrible extensions when they were in the heat of programming, because design is something that is best done slowly and carefully.

SF And late-night extensible programming is unsupportable.

AK Exactly. So Smalltalk actually went from something that was completely extensible to one where we picked a syntax that allowed for a variety of forms of what was fixed, and concentrated on the extensibility of meaning in it.

This is not completely satisfactory. One of the things that I think should be done today is to have a fence that you have to hop to forcibly remind you that you’re now in a meta-area—that you are now tinkering with the currency system itself, you are not just speculating. But it should allow you to do it without any other overhead once you’ve crossed this fence, because when you want to do it, you want to do it. "

Alexey Radul's Propagation Networks

" I suggest that we can build general-purpose computation on propagation of information through networks of stateful cells interconnected with stateless autonomous asynchronous computing elements. ... a cell should not be seen as storing a value, but as accumulating information about a value. The cells should never forget information ...except when the system can prove that particular things will never matter again. This is analogous to garbage collection in modern memory-managed systems.... -- such monotonicity prevents race conditions in the behavior of the network. Monotonicity of information need not be a severe restriction: for example, carrying reasons for believing each thing makes it possible to explore but then possibly reject tentative hypotheses, thus appearing to undo something, while maintaining monotonicity. ... The key idea of propagating mergeable, partial information allows propagation to be used for general-purpose computation. ... high-level languages follow the expression paradigm, assembly languages follow another paradigm. The assembly language paradigm is a loop that executes instructions in sequence (and some instructions interrupt the sequence and cause the executor to jump somewhere else)...the only difference between instructions and expressions is that instructions are all atomic...expressions generalize instructions...propagation subsumes evaluation (of expressions) the same way that evaluation subsumes execution (of instructions). " --

The "stateless autonomous asynchronous computing elements" are called "propagators".

Each individual cell updates atomically, but only in the sense that, if inconsistent states are physically possible, each cell has a local lock that prevents any connected propagator from observing it in an inconsistent state.

Propagators run until steady-state

This is reminiscent of the Glitch concurrency framework: (todo; i think:) each run of a propagator means running its program as many times as necessary, until it stops mutating its connected cells.

Note that this implies that an implementation of Radul's system is probably explicitly notified by each cell upon each state change.
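The pieces described above can be sketched in a few dozen lines. This is a minimal illustration, not Radul's actual API: the names Cell, Scheduler, and propagator are invented here, and the merge function is deliberately simplistic (it only distinguishes "unknown", "known", and "contradiction"):

```python
# Minimal sketch of a propagator network: stateful cells, stateless
# propagators, and a scheduler that runs until steady state.

def merge(old, new):
    """Combine old and new information. Here None means 'know nothing';
    any other value means 'know exactly'; disagreement is a contradiction."""
    if new is None:
        return old
    if old is None:
        return new
    if old != new:
        raise ValueError("contradiction")
    return old

class Scheduler:
    """Queues propagators that may have work; runs until quiescence."""
    def __init__(self):
        self.queue = []

    def alert(self, propagator):
        if propagator not in self.queue:
            self.queue.append(propagator)

    def run(self):
        # Steady state is reached when no propagator has pending work.
        while self.queue:
            self.queue.pop(0)()

class Cell:
    """Accumulates information; notifies attached propagators on change."""
    def __init__(self, scheduler):
        self.content = None        # None = "I know nothing"
        self.neighbors = []        # propagators to wake on change
        self.scheduler = scheduler

    def add_content(self, increment):
        merged = merge(self.content, increment)
        if merged != self.content:  # only genuinely new information wakes neighbors
            self.content = merged
            for p in self.neighbors:
                self.scheduler.alert(p)

def propagator(scheduler, inputs, output, f):
    """Stateless element: recompute the output whenever any input changes."""
    def activate():
        args = [c.content for c in inputs]
        if all(a is not None for a in args):
            output.add_content(f(*args))
    for c in inputs:
        c.neighbors.append(activate)
    scheduler.alert(activate)
    return activate
```

Wiring up an adder (a + b -> c), setting a and b, and calling run() drives the network to steady state, at which point c holds 7; adding the same content to a cell again changes nothing, so no propagator re-fires.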

Cells Accumulate Information

"The unquestioned assumption of previous generations of propagation systems has been that cells hold values...I propose, in contrast, that we should think of a cell as a thing that accumulates information about a value. The information can perhaps be incomplete: some propagator may tell a cell something that is on the one hand useful but on the other hand does not determine that cell’s value unconditionally...Accumulating information is a generalization of the idea of holding values, because a value can always be interpreted as the information that says “I know exactly what this value is, and it is x;” and the absence of a value can be interpreted as information that says “I know absolutely nothing about what this value is.”

Radul's framework is generic: it does not specify exactly what sort of partial information each cell holds; each particular instantiation of the framework must choose this.

"each cell must be able to accept and merge all the contributions of all the propagators that might write to it."

Note that 'accumulating information' does not necessarily mean 'accumulating a list of every message sent to the cell by every propagator' (it depends on what sort of partial information cells hold in a particular instantiation). For example, if a cell contains the information "my value might be 1, 2, or 5", and a propagator sends the message "your value is not 3", the receipt of that message does not alter the content of the cell, because it is redundant with what the cell already knows.
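One concrete (hypothetical) instantiation makes this example precise: let a cell's partial information be a set of candidate values, and let merging be set intersection. A message then changes the cell's content only when it rules out at least one remaining candidate:

```python
# Hypothetical instantiation: a cell's partial information is a set of
# candidate values; merging incoming information is set intersection.

def merge(known, message):
    """Combine what the cell knows with an incoming message."""
    merged = known & message
    if not merged:
        raise ValueError("contradiction: no candidate survives")
    return merged

universe = set(range(10))        # illustrative domain of possible values

cell = {1, 2, 5}                 # "my value might be 1, 2, or 5"

not_three = universe - {3}       # the message "your value is not 3"
assert merge(cell, not_three) == cell   # redundant: content unchanged

at_most_three = {0, 1, 2, 3}     # "your value is at most 3"
assert merge(cell, at_most_three) == {1, 2}  # genuinely new information
```

Under this instantiation a fully determined value is a singleton set, and "I know nothing" is the whole universe, matching the quote in the previous section about values being a special case of information.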

Related probabilistic programming systems

As of this writing, Radul works on Venture:

I don't know if Venture is closely related to Radul's propagator networks, or if this is just where Radul happens to be currently employed.

Chapter: Neural architecture

for now see jasperBrain.txt, todo move most of that here

Chapter: Misc Advice