notes-computer-programming-programmingLanguageDesign-prosAndCons-lisp


landoflisp's 10 reasons why lisp reduces bugs:

http://landoflisp.com/#guilds

notes: mentions dialects Common Lisp, Scheme, Arc, Clojure

---

"What Makes Lisp Different?":

-- http://books.google.com/books?id=QzGuHnDhvZIC

---

http://www.paulgraham.com/diff.html

" 1. Conditionals. A conditional is an if-then-else construct. We take these for granted now. They were invented by McCarthy? in the course of developing Lisp. (Fortran at that time only had a conditional goto, closely based on the branch instruction in the underlying hardware.) McCarthy?, who was on the Algol committee, got conditionals into Algol, whence they spread to most other languages.

2. A function type. In Lisp, functions are first class objects-- they're a data type just like integers, strings, etc, and have a literal representation, can be stored in variables, can be passed as arguments, and so on.

3. Recursion. Recursion existed as a mathematical concept before Lisp of course, but Lisp was the first programming language to support it. (It's arguably implicit in making functions first class objects.)

4. A new concept of variables. In Lisp, all variables are effectively pointers. Values are what have types, not variables, and assigning or binding variables means copying pointers, not what they point to.

5. Garbage-collection.

6. Programs composed of expressions. Lisp programs are trees of expressions, each of which returns a value. (In some Lisps expressions can return multiple values.) This is in contrast to Fortran and most succeeding languages, which distinguish between expressions and statements.

It was natural to have this distinction in Fortran because (not surprisingly in a language where the input format was punched cards) the language was line-oriented. You could not nest statements. And so while you needed expressions for math to work, there was no point in making anything else return a value, because there could not be anything waiting for it.

This limitation went away with the arrival of block-structured languages, but by then it was too late. The distinction between expressions and statements was entrenched. It spread from Fortran into Algol and thence to both their descendants.

When a language is made entirely of expressions, you can compose expressions however you want. You can say either (using Arc syntax)

(if foo (= x 1) (= x 2))

or

(= x (if foo 1 2))

7. A symbol type. Symbols differ from strings in that you can test equality by comparing a pointer.

8. A notation for code using trees of symbols.

9. The whole language always available. There is no real distinction between read-time, compile-time, and runtime. You can compile or run code while reading, read or run code while compiling, and read or compile code at runtime.

Running code at read-time lets users reprogram Lisp's syntax; running code at compile-time is the basis of macros; compiling at runtime is the basis of Lisp's use as an extension language in programs like Emacs; and reading at runtime enables programs to communicate using s-expressions, an idea recently reinvented as XML. "
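
Some tiny Common Lisp examples (mine, not Graham's) to make points 4, 7, and 9 concrete:

;; Point 4: variables are pointers; assignment copies the pointer.
(defvar *a* (list 1 2 3))
(defvar *b* *a*)        ; *b* now points at the same list as *a*
(setf (first *b*) 99)
*a*                     ; => (99 2 3)

;; Point 7: symbols compare by identity, strings by contents.
(eq 'foo 'foo)          ; => T, a single pointer comparison
(eq "foo" "foo")        ; => NIL in most implementations: two distinct string objects
(equal "foo" "foo")     ; => T, element-by-element comparison

;; Point 9: code running at read time and at compile time.
'(1 2 #.(+ 1 2))        ; read-time evaluation: the reader produces (1 2 3)
(defmacro twice (form)  ; compile-time code generation (a macro)
  `(progn ,form ,form))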

--- Readable Lisp S-expressions Project http://readable.sourceforge.net/

---

See also pros and cons for specific Lisps, also in this directory ([Self-notes-computer-programming-programmingLanguageDesign-prosAndCons]).


" Old LISPer that I am, I also looked at various current dialects of Lisp and Scheme—but, as is historically usual for Lisp, lots of clever design was rendered almost useless by scanty or nonexistent documentation, incomplete access to POSIX/UNIX facilities, and a small but nevertheless deeply fragmented user community. " -- http://www.linuxjournal.com/article/3882


http://funcall.blogspot.sg/2009/03/not-lisp-again.html

---

" Tac-Tics said...

    The sad reality.
    Tail recursion is a pain in the butt to debug. It turns out while creating stack frames is slow, it's really USEFUL when stepping through a program. If a procedure tail-recurs to another, the stack trace shows a miracle has happened: a procedure is being called from the one right above it, but the one above it never even mentions it. Those kinds of miracles are Bad News.
    On top of that, being forced to create a function inside a function is not as natural. It is recursive, so it needs to be given a name, but the name is always something lame like "iter" or "_fact" or something dumb. In practice for-loop style constructs are more visible and easier to follow.
    First-class functions are very important. However, that particular example isn't a good one. You can do the same in assembly or C, albeit the syntax in C is gimped. The real power of first-class functions in Lisp comes from the ability to close over local variables in the outer scope.
    March 5, 2009 at 12:18 PM "
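
The "function inside a function" pattern the comment describes looks like this in Common Lisp (a sketch; LABELS gives the helper the local name the commenter finds lame):

(defun factorial (n)
  (labels ((iter (n acc)
             (if (<= n 1)
                 acc
                 (iter (1- n) (* n acc)))))   ; tail call
    (iter n 1)))

(factorial 5)   ; => 120

In an implementation that eliminates tail calls, the iter frames are reused rather than stacked, so they can vanish from a backtrace; that is exactly the debugging complaint above.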

---

tutorial: http://cs.ucla.edu/~rosen/161/notes/lisp1.html

--

" [–]rukubites 7 points 2 years ago

Why don't you agree with the article? Lisp (and here I mean Common Lisp), is full of hacks and inconsistencies. It comes with a framework to arbitrarily generate code using the full power of the language (macros), and goes so far as to allow you to go even deeper and alter how the language itself is parsed through read-macros.

You can statically type things in lisp but it is damn messy, and the typing isn't really meant for correctness, but rather as compiler advice for optimization.

The most used part of the language for me - loop - is a hacked together sublanguage in itself, as is the format system for text output.

There are other things too, such as packages and features and readtables, but that is probably enough for now.

Full disclosure: I am a paid CL programmer.


[–]Squirrel_of_doom 10 points 2 years ago

You start out declaring that CL is full of hacks and inconsistencies, but fail to list any. A Turing complete macro system is not a hack, nor is this method of compile time AST manipulation "inconsistent" or theoretically unsound.

Why is static typing messy? (the type value) and (declare (type my-type y z)) are syntactically lengthy, which is why the compiler has a macro system and extensible parser in the first place. Look at GBBopen's typed numerics system for syntax ideas. For example, (& val) => (the fixnum val) and (+& val-1 val-2) => (the fixnum (+ (the fixnum val-1) (the fixnum val-2))). Or roll your own typed-let.

Your attacks on loop and format are silly. You don't have to use either one, and 99% of the time (loop for i from 1 to 10 do (print i)) or (format t "~A" val) will suffice.

All languages have "first 5 minute annoyances", things that bug you in your first 5 minutes of playing with them. If you really are a [smart] paid CL programmer, you should be able to dig much deeper than that.


[–]rukubites 10 points 2 years ago

I love lisp. It fits my mind like no other programming language ever did. I've programmed it - mainly for pay - for about 10 years.

Macros are awesome and complicated and powerful. However there are numerous traps such as compile-time evaluation versus runtime evaluation, variable capture, correct use of gensyms, etc. Read macros are great but frightening too. #. has a specific special variable to turn it off (*read-eval*)! Once you scratch the surface, they are very inelegant, and what is a hack if not an inelegant solution?

The static typing is messy because it is hard to know what to declare to actually get the performance speedups needed. In practice, you have to keep recompiling a function and guess which declarations would be needed for a given speedup. When I did this, it was particularly hard because I was using an alisp system, but alisp's compiler advice was useless, so I had to do a mini port of that part to sbcl just to optimize.

What I tend to actually use for typing is clos and defmethods. I have encountered the gbbopen numerics. They didn't impress me because you could just do (+& val-1 val-2) and think you've optimized (and be wrong). Someone did that to me once.

None of what I said was actually an attack. I love Common Lisp as much as anyone. I have used the full syntax (including typing) of loop and also used just about every feature of format, including even the ~/ directive. (Did you know that you have to specify the package of the function you put in between the slashes?)

    Your attacks on loop and format are silly. You don't have to use either one, and 99% of the time (loop for i from 1 to 10 do (print i)) or (format t "~A" val) will suffice.

I would (dotimes (i 10) (print (1+ i))) or (princ val) every time. loop is great so you don't have to use five levels of let* indentation or the abominable do/do*. Your trivial examples are trivial.

Format is awesome, but I have to look up http://gigamonkeys.com/book/a-few-format-recipes.html or dig in the hyperspec far too often.

I love loop and format, but they are still hacked together and the result of design by committee. I did mention packages and readtables and features as hacked together things. Also - pathnames, shadowing symbols, the inconsistency of CLOS with respect to core types, etc. etc.

I'm reminded of someone else on reddit who remonstrated me because I said that genetic algorithms were hard. They are conceptually simple (like common lisp), but scratch the surface and try to solve real, difficult problems - you'll find a whole lot of messy, harsh compromises (like what is in the core of common lisp).

Peace. :-)


[–]lvaruzza 1 point 2 years ago

Off the top of my head: the Hash API and the lack of a consistent sequence API like Clojure's; also the object system: you can't create a generic version of the function + for example (even C++ allows that).

Lisp and Haskell also lack a generic stream I/O API (Java did it right).

"

--

http://readevalprintlove.fogus.me/sakura/index.html

in addition to being a good read itself,

has links for:

" The Nature of Lisp by Slava Akhmechet What Made Lisp Different by Paul Graham Why Ruby is an Acceptable Lisp by Eric Kidd Lisp is Not an Acceptable Lisp by Steve Yegge Why I Ignore Clojure by Manuel Simoni Why Scala is an Acceptable Lisp by Will Fitzgerald Plotting and Scheming the Ubiquitous LISP by André van Meulebrouck What can Lisp do that Lua can’t?

.. There was an epic thread on the comp.lang.lisp Usenet list circa 2002 involving part trolling, part rage and part wisdom entitled Why Scheme is not a Lisp?. It’s well worth exploring that thread for a deeper understanding of just what constitutes a Lisp and how Internet communications will be the death of us all. "

-- lisp and common lisp history:

http://www.lispworks.com/documentation/HyperSpec/Body/01_ab.htm

--

light3 1 day ago

>Lisp is a language that was ahead of its time, but there are language features now that seem beyond Lisp's grasp.

Interested to hear your views, can you provide some examples?

reikonomusha 1 day ago

Type systems: This is the biggest issue in my opinion. Most Lisps don't really have any formal notion of a type system. Common Lisp kind of does; it's pretty baroque, but if you look deep enough, you'll see it's way behind the systems offered by ML derivatives, Scala, or Haskell. Such a thing would be incredibly hard to bolt-on. Shen sort of offers a richer system in very weird syntax, but the compiler just throws that info away and doesn't make it useful. Typed Racket is another approach.

Polymorphism: In Common Lisp, I can't really make efficient, generic data structures. In Haskell, I can, by making the data structure polymorphic. Haskell will know the types at compile time and can optimize accordingly. In CL, I must do ugly things like provide equality predicates to functions, as opposed to having them associated to the data structure itself. François René Rideau has been trying to patch this up by something called the "Lisp Interface Library".

Functional optimizations: In any Lisp, you typically need a special library for doing optimization of functional code. Deforestation and so on can only be done with special packages like reducers in Clojure or SERIES in Common Lisp. Again, they aren't broad enough to cover the language as a whole.

Immutable/persistent data structures: Clojure has this pretty covered. It is possible to implement these data structures in other Lisps, like Common Lisp, but they're not bound to be very efficient.

OS integration: Not much of a comment. For Common Lisp at least, the language was designed without POSIX or Windows in mind. So it has really weird pathname conventions, poor ways of targeting the user environment, a weird idea about files, etc.

Code organization and packaging at the language level: This is an issue with CL and Scheme. Lisp doesn't really have the concept of an explicit API, or modules of code. There's no concept of a compiled shared library. Code is almost always distributed and integrated by source.

...

The list goes on. You can implement lazy data structures in Lisp, but it's hard to really integrate them in the language. Lazy data structures provide tons of benefits, especially by moving the boundaries for abstraction, but there seems little hope to make this a part of Lisp.

A big problem is that even if some of the above concepts are implemented in various languages (and as I stated, some of them are), they're usually implemented as a part of a toy language (even if it's not intended to be a toy), and are never really integrated well with what exists. Because of this, I don't think it's fair to say Lisp has all of these features, even if there exist dialects of Lisp that implement some of them.

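
For reference, the "typing as compiler advice" style described above looks roughly like this (a sketch; whether it actually speeds anything up depends entirely on the implementation):

(declaim (ftype (function (fixnum fixnum) fixnum) add2))
(defun add2 (x y)
  (declare (type fixnum x y)
           (optimize (speed 3) (safety 0)))
  (the fixnum (+ x y)))

With safety 0 these declarations are promises the compiler may exploit, not contracts it checks, which is why they serve optimization rather than correctness.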

--

" urbit 3 days ago


You can get rid of the whole name reduction system. Which is hardly trivial. If you assume it, though, it's true that everything else is trivial.

Getting symbol tables, functions, environments, free and bound variables, etc, etc, out of the fundamental automaton, frees you up to design them right at the higher layer where they (IMHO) belong.

This philosophical argument has serious practical ramifications, I think, because it leads directly to the Question of Why Lisp Failed. Why did Lisp fail? Many people say, because it couldn't be standardized properly.

Why couldn't it be standardized? Because the Lisp way is not to start with a simple core and build stuff on top of it, but to start with a simple core and grow hair on it. So you end up with a jungle of Lisps that are abstractly related, but not actually compatible in any meaningful sense. This is because the lambda calculus is an idea, not a layer.

Basically the point of Nock is to say: let's do axiomatic computing such that it's actually a layer in the OS sense. The way the JVM is a layer, but a lot simpler. Lambda isn't a layer in this sense, so it doesn't provide the useful abstraction control that a layer provides. "

--

" dlweinreb 2141 days ago


In all fairness, the manuals that filled a whole shelf documented a lot of major applications. On my desk right now, I have a copy of the O'Reilly book on Subversion (a source control system). I have another book on Emacs. And so on. ALL of those things were covered in that shelf.

Regarding simplicity versus complexity, please see http://www.joelonsoftware.com/items/2006/12/09.html. Different people want different things; you can't just provide the common 20%.

Over the last few days, I have been surveying the WWW for criticisms of Common Lisp. The two that I see most often are: (1) it's too big, and (2) it's missing so many important features like threads, sockets, database connectivity, operating system interoperability, Unicode, and so on. Ironic, no?

It is really too bad that Common Lisp was not defined as a language core, plus libraries. We did originally intend to do that (they would have been called the "White Pages" and "Yellow Pages"), but we were under too much time pressure.

There is no question that Common Lisp is a lot less elegant than it could have been, had it been designed from scratch. Instead, it had two major design constraints: (1) it had to be back-compatible with MacLisp and Zetalisp in order to accommodate the large body of existing software, such as Macsyma, and (2) it had to merge several post-MacLisp dialects, in a diplomatic process (run magnificently by Guy L. Steele Jr) that made everyone reasonably satisfied. It was quite literally a design by committee, and the results were exactly what you'd expect.

But the imperative was to get all the post-MacLisp implementations to conform to a standard. If we failed, DARPA would have picked InterLisp as the reigning Lisp dialect, and we would have all been in a great deal of trouble. (Look where InterLisp is today; actually there's nowhere to look.)

You wonder how other people learned to use Symbolics machines. Some of them took courses - we had an extensive education department. Before you say "that proves that it was too complicated", keep in mind that the system was very large and functional because that's what its primary target market wanted. We did not get feedback from customers saying "make it simpler"; we got feedback saying "add more features, as follows". I bet the people who maintain the Java libraries are rarely asked to remove large amounts of the libraries.

I'm not sure what the reference to Steve Jobs is about. Look at how many features the Macintosh has now. It takes a long time to learn all of them. Their documentation is much shorter because they don't give you any; you have to go to the book store and buy David Pogue's "The Missing Manual" books.

I admit that some (not most) of the complexity was gratuitous and baroque, but not because we liked it that way. The complexity (mainly the non-uniformity) of Common Lisp was beyond our control (e.g. the fact that you can't call methods on an array or a symbol, and so on). Some of the subsystems were too complex (the "namespace system", our distributed network resource naming facility) comes to mind.

In summary, I'm sympathetic to what you're saying, but the reasons for the problems were more involved.

-- Dan Weinreb "

"

dlweinreb 2094 days ago

It's true that the system was feature-laden. I think this was more true of the API's than the user interfaces, though, and so I'm not sure that the Steve Jobs reference is exactly appropriate. Steve Jobs is making consumer products; most customers don't care much about the API's.

It was also featureful because we didn't know which features were the ones that would turn out to be most useful; if there had been a second generation, we could have pruned out some of the stuff that never really got used. It was something of a "laboratory" that way.

Also, the kind of people who used Lisp machines, generally early adopter types, really did ask for amazing numbers of features. If you had been there, you would have experienced this. We wanted to make all our users happy by accommodating all their requests. It's probably similar to the reason that Microsoft Word has so many features. Everyone thinks there are too many and has a long list of the ones they'd get rid of; but everyone has a different list! I think Joel Spolsky wrote something very convincing about this topic once but I can't remember where.

Lucid on Suns was eventually as fast, if you turned off a lot of runtime checking and put in a lot of declarations. Later it was even fast if you didn't do that; the computational ecosystem changed a whole lot since the Lisp machine was originally designed. You have to remember how old it was. At the time it came out, it was very novel to even suggest that every AI researcher have his or her very own computer, rather than timesharing! That's early in the history of computers, by today's standards.

No, we didn't teach all of our customers personally, although we did have an education department that taught courses, and some of them learned that way. There were classes in Cambridge and in San Francisco. Allan Wechsler designed the curriculum, and he's one of the best educators I have ever met. (My own younger brother worked as a Symbolics teacher for a while.)

Common Lisp is complicated because (a) it had to be upward-compatible with very, very old stuff from Maclisp, and (b) it was inherently (by the very nature of what made it "Common") a design-by-committee. For example, consider how late in the lifetime of the language object-oriented programming was introduced. (Sequences and I/O streams should obviously be objects, but it was too late for that. CLOS wasn't even in the original CLtL standard.)

In other words, I'm mainly not disagreeing with your points, just explaining how things got that way. "

--

" ...

Common Lisp is the combined effort of 8 different Lisp implementation groups* aimed at producing a common dialect of Lisp while allowing each group to exploit its own hardware. Common Lisp is a set of documents, a language design, and a common body of code.

[* These groups are: Spice Lisp at CMU, DEC Common Lisp on Vax at CMU, DEC Common Lisp on DEC-20 at Rutgers, S-1 Lisp at LLNL, Symbolics Common Lisp, LMI Common Lisp, Portable Standard Lisp at Utah, and Vax NIL.]

The Common Lisp documentation is divided into four parts, known as the white pages, the yellow pages, the red pages, and the blue pages.

The white pages is a language specification rather than an implementation specification. It defines a set of standard language concepts and constructs that may be used for communication of data structures and algorithms in the Common Lisp dialect. This is sometimes referred to as the ``core'' Common Lisp language, because it contains conceptually necessary or important features. It is not necessarily implementationally minimal. While some features could be defined in terms of others by writing Lisp code (and indeed may be implemented that way), it was felt that these features should be conceptually primitive so that there might be agreement among all users as to their usage. (For example, bignums and rational numbers could be implemented as Lisp code given operations on fixnums. However, it is important to the conceptual integrity of the language that they be regarded by the user as primitive, and they are useful enough to warrant a standard definition.)

The yellow pages is a program library document, containing documentation for assorted and relatively independent packages of code. While the white pages are to be relatively stable, the yellow pages are extensible; new programs of sufficient usefulness and quality will routinely be added from time to time. The primary advantage of the division into white and yellow pages is this relative stability; a package written solely in the white-pages language should not break if changes are made to the yellow-pages library.

The red pages is implementation-dependent documentation; there will be one set for each implementation. Here are specified such implementation-dependent parameters as word size, maximum array size, sizes of floating-point exponents and fractions, and so on, as well as implementation-dependent functions such as input/output primitives.

The blue pages constitutes an implementation guide in the spirit of the Interlisp virtual machine specification. It specifies a subset of the white pages that an implementor must construct, and indicates a quantity of Lisp code written in that subset that implements the remainder of the white pages. In principle there could be more than one set of blue pages, each with a companion file of Lisp code. (For example, one might assume IF to be primitive and define COND as a macro in terms of IF, while another might do it the other way around.)

At present the white pages portion of Common Lisp is nearly complete, that document being edited by Guy Steele Jr. at CMU. Since Guy Steele is taking a leave-of-absence from CMU to work at Tartan Labs, and since Scott Fahlman, the head of the Spice Lisp project and a major contributor to Common Lisp, wants to return to his AI research, the administrative control of the Common Lisp effort is in question with several important parts left undone. Stanford proposes to complete those parts.

In particular we propose to do three things. .... "

--

"

    [... white, yellow, red, and blue pages as described above ...]
    [W]e will produce the first version of the blue pages. This requires producing a detailed specification of the subset of the white pages that must be written, expanding on the white pages description where necessary. We will also write, test, and document an implementation of Common Lisp in that subset and make that code available to anyone wanting to implement a Common Lisp. Thus, for any group to implement a Common Lisp, all that will need to be done is to write the specified subset language in whatever other language their hardware supports and to then take a copy of the Lisp code we will have produced which will complete the implementation of the white pages language. "

--

" AK Yes, that was the big revelation to me when I was in graduate school—when I finally understood that the half page of code on the bottom of page 13 of the Lisp 1.5 manual was Lisp in itself. These were “Maxwell’s Equations of Software!” This is the whole world of programming in a few lines that I can put my hand over. " -- http://queue.acm.org/detail.cfm?id=1039523

--

"We did not consider LISP or Scheme because of their unfriendly syntax"

--

http://www.podval.org/~sds/tool.html

---

http://steve-yegge.blogspot.com/2006/04/lisp-is-not-acceptable-lisp.html

Problem 1: Which Lisp?

" Lisp is not an acceptable LISP. Not for any value of Lisp. There's nothing magical about this, nothing partisan. If Lisp were acceptable, then we'd all be using it.

You've all read about the Road to Lisp. I was on it for a little over a year. It's a great road, very enlightening, blah blah blah, but what they fail to mention is that Lisp isn't at the end of it. Lisp is just the last semi-civilized outpost you hit before it turns into a dirt road, one that leads into the godawful swamp most of us spend our programming careers slogging around in. I guarantee you there isn't one single Lisp programmer out there who uses exclusively Lisp. Instead we spend our time hacking around its inadequacies, often in other languages. ... There's all this real-life stuff (jobs, family, stability, all the usual suspects) intruding on you as a programmer, demanding that you quit dorking around looking for the One True Language, and settle down on whatever barren rock you happen to be squatting on at the moment, and call it Good. So most Lisp programmers — and that's not many, since not many programmers make it even close to that far down the Road — see that last outpost of technical civilization, peer balefully into the swamp, and decide to check into the Lisp hotel for good. Not realizing, of course, that all its rooms are in the swamp proper. "

" The answer is "it depends", and that's pretty unfortunate, because right there you've just lost users. With Python or Ruby or Java, you've only got one language to choose from. Or at least you can be comfortable that there's a single canonical version, and the rest (e.g. Jython) are highly experimental territory.

Pick Scheme, and you have to pick a Scheme. Pick Common Lisp, and you have to pick a Common Lisp. "

"Most newcomers eventually (and independently) decide the same thing: Scheme is a better language, but Common Lisp is the right choice for production work."

" CL has more libraries, and the implementations are somewhat more compatible than Scheme implementations, particularly with respect to macros. So newcomers heave a deep sigh, and they learn to accept LISP-2, names like rplaca, case-insensitivity, '(ALL CAPS OUTPUT), and all the other zillions of idiosyncracies of a standard Common Lisp implementation. "

Problem 2: Worthless Spec

"the simple fact is that the spec is ancient. Every time someone talks about updating it, someone screams about time or money or whatever."

" But what's wrong with Common Lisp? Do I really need to say it? Every single non-standard extension, everything not in the spec, is "wrong" with Common Lisp. This includes any support for threads, filesystem access, processes and IPC, operating system interoperability, a GUI, Unicode, and the long list of other features missing from the latest hyperspec.

Effectively, everything that can't be solved from within Lisp is a target. Lisp is really powerful, sure, but some features can only be effective if they're handled by the implementation. "

Problem 3: CLOS

" CLOS is icky. I haven't worked with Smalltalk a whole lot, but I've worked with it enough to know that to do OOP right, you have to do it from the ground up. CLOS was bolted on to Common Lisp. Everyone knows it, although not many people want to admit it.

It was bolted on very nicely, and it's not my intention to disparage the efforts of the people who created it. It was an amazing piece of work, and it did a great job of being flexible enough to tie together the conflicting OO systems of existing Lisp implementations.

But let's face it; CLOS has problems. One obvious one is that length isn't a polymorphic function. It's one of the first speed bumps you encounter. You can't create a new kind of measurable object and give it a length method; you have to call it rope-length or foo-length or whatever. ...

Another problem is the slot accessor macros. They're insanely clever, but clever isn't what you want. You want first-class function access, so you can pass the getters and setters to map, find-if, etc. "
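
Since cl:length is an ordinary function rather than a generic one, the usual workaround for Yegge's rope example is to define your own generic function (a sketch; the names are mine):

(defgeneric size (thing)
  (:documentation "A polymorphic length; CL:LENGTH itself cannot be extended."))

(defmethod size ((s sequence))
  (length s))

(defclass rope ()
  ((strands :initarg :strands :reader strands)))   ; a list of strings

(defmethod size ((r rope))
  (reduce #'+ (strands r) :key #'length))

This works, but every library picks its own name for size, which is precisely the complaint.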

" When you work with Ruby or Smalltalk or any suitably "pure" OO language (Python doesn't quite count, unfortunately; its bolts are also showing), you realize there are some distinct advantages to having everything be an object. It's very nice, for instance, to be able to figure out what methods are applicable to a given class (e.g. "foo".methods.sort.grep(/!/) from Ruby), and to be able to extend that list with your own new methods. It's a nice organizational technique.

Of course, that forces you into a single-dispatch model, so it becomes harder to figure out what to do about multi-methods. Some Python folks have implemented multi-methods for Python, and they do it by making them top-level functions, which makes sense (where else would you put them?) I'm not claiming that Smalltalk's object model is going to translate straight to Lisp; you have to decide whether cons cells are "objects", for instance, and that's a decision I wouldn't wish on my worst enemy. I don't envy the person who tackles it.

...

Or maybe you could go the Haskell route and not have OOP at all. That seems to alienate most programmers, though, despite the attractions of not having to create nouns for everything. (Have you ever noticed that turning a non-object-oriented program into an object-oriented one in the same language that does the same thing essentially doubles its size? Try it sometime...) "

" Problem 4: Macros

they're fraught with problems. One is that they're not hygienic. You should at least have the option of requesting hygienic macros. Various papers have been published, and implementations implemented, for hygienic defmacro. Yeah, it's hellishly hard to get right, and it's overkill for many situations, but it really does need to be offered as an option. A portable one.

For that matter, you should also have a choice between Scheme-style pattern-matching macros and Lisp-style code-style macros. They're very different, and each kind is better (cleaner) in some situations. People often act as if hygiene is synonymous with define-syntax, but the pattern-template style is orthogonal to the question of hygiene.

Style considerations aside, macros have tool problems. Macros are notoriously hard to debug, and honestly it needn't be that way. If your editor knows all about macros, then you should be able to click to see the expansion, and click again to see its sub-expansions, all the way down to the primitive functions. Some editors can do this, but none of them (that I'm aware of) handle macros as cleanly or seamlessly as they do normal functions.

Syntax in general is a problem. Lisp has a little syntax, and it shows up occasionally as, for instance, '(foo) being expanded as (quote foo), usually when you least expect it. Truth be told, Lisp should probably have a skinnable syntax. That implies a canonical abstract syntax tree, which of course hasn't been defined (and in many implementations isn't even available to you, the way it is in the Io language, say). Once you've got a canonical AST defined, syntax should, in theory, be like CSS chrome. Of course, there are plenty of bodies left in the trail of this particular theory as well. Someday...

Problem 5: Type System

See, that's just exactly the problem with type systems. They can make sure you use headings, but they can't ensure you get the numbering right.

...

The problem is that the type system has to be extensible and skinnable, and I'm not strictly talking about user-defined types in the sense of OOP or CLOS

...

Lisp, for all the strengths of its flexible type system, hasn't got this issue right either. Otherwise Haskell and OCaml (and C++, gack) wouldn't be kicking its ass all over the performance map. 'nuff said, at least for now. [And no, they don't quite have it right either.]

...

" Anonymous Chuck said...

    Enjoyed this post, and though I haven't read most of the comments on this post, I enjoyed the next post too. Blahblah.
    After first flirting with Common Lisp last July, taking a course at UNI which made quite a bit of use of Scheme, and embarking on an undergrad research project using Common Lisp because I liked Scheme enough to see what doing OO in a Lispy language would be like, I still feel like a Lisp noob, even after messing around with it 3/4 of a year. I find that much of your rant here comports with my experiences. The bit about feeling like you've completely forgotten the language if you're away from it for a week especially hit home and got a laugh out of me. Since I'm working now, I can usually only put significant time in on my project on weekends, and each time it seems like I'm having to re-learn or re-look-up so much stuff. I have to keep around five tabs of different pages of PCL and CLtL open in Firefox the whole time.
    I've also run into such frustrations as no reflection (not even the like of "instanceof"), no way to get a list of the keys in a hash table (or if there is, I can't find documentation in it), not having quite as much polymorphism as I'd like (your mention of "length" is one example, but also having to remember a bunch of different iterators for different composite data types when what you really want is just something like a polymorphic "each" would be another; and I've found "loop" to be of little help -- it's too complicated a syntax -- practically a whole mini-language -- for the new Lisper to grab hold of, and then when you think you've figured out how to get it to do what you mean it turns out to do something else), etc.

I don't even care that much about macros. Maybe it's a function of my newbness, but I've only had need to write maybe two or three macros of my own. But Lispers seem to get so fanatic about them. I appreciate their usefulness, and the times I've needed them I was glad to have them, or at least to have "symbol-macrolet." Obviously Lisp needs macros because so much of it is built from them, but maybe that's the thing -- most of the macros a Lisp programmer needs are probably already there.

But then there are an awful lot of things I like about Lisp, or that I liked enough about Scheme to make me come back. Which is why it's such a bummer about Arc. From what I've read on it, it seems to address a lot of the little annoyances I have about Lisp while keeping lots of what I like, and this makes me anxious to try it. I appreciate that Paul Graham wants to take his time and get it right, but it's frustrating because if there is such a thing as The Moment for it to emerge, it feels to me like it's right about now.

"

" What's the secret clunkiness of CLOS that I missed that makes Dylan better?

I didn't say that Dylan is better, I said it has a more seamless integration of generic functions. (Didn't make me want to switch yet.) "

" Common Lisp was not designed for the occasional user, but for expert programmers. It has a steep learning curve that only pays off in the long run. "

"

"

"

http://gigamonkeys.com/book/practical-parsing-binary-files.html

todo read the rest of the comments on this page http://steve-yegge.blogspot.com/2006/04/lisp-is-not-acceptable-lisp.html after "11:39 AM, April 17, 2006"

--


http://www.randomhacks.net/2005/12/03/why-ruby-is-an-acceptable-lisp/

Ruby vs. Lisp:

[1,2,3].map { |n| n*n }.reject { |n| n%3==1 }

(remove-if (lambda (n) (= (mod n 3) 1)) (mapcar (lambda (n) (* n n)) '(1 2 3)))

--

" Elisp has a couple of features that make it work well as an application extension language: dynamic variable scope and function advice are two that are fairly unique to Lisp (technically, Javascript has dynamic scope via the eval function, but it's not usually a good idea to use it). These things make it easy to tweak behaviour from the outside.

However, contra other people's opinions, Elisp is not fast. It is slow, very very slow, compared to efficient languages. My startup time is just about tolerable on a modern machine, but not tolerable enough to stop me starting emacs --daemon every time I reboot. Scrolling through large files with line numbers enabled is slow, and the UI has a few milliseconds of lag for almost every non-trivial action. Dynamic searches need very noticeable debounce delays for list refreshes or otherwise things get unresponsive. "

apgwoz 1 day ago


My (potentially misunderstood) interpretation of why ELisp exists is due to the fact that Scheme was unsuitable because it's lexically scoped (which was potentially less efficient). The dynamic scope of ELisp is advantageous for things like temporary mode mapping changes, buffer local variables, etc (See: https://www.gnu.org/software/emacs/emacs-paper.html#SEC17).

Scheme of course provides `fluid-let` and Common Lisp also allows for dynamic binding.

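The dynamic scoping being discussed, in Common Lisp terms (Elisp behaves the same way for defvar'd variables; example mine):

(defvar *log-level* :info)            ; DEFVAR makes this special, i.e. dynamically scoped

(defun log-msg (msg)
  (when (eq *log-level* :debug)       ; sees the caller's binding, not a lexical one
    (format t "~a~%" msg)))

(let ((*log-level* :debug))           ; rebinding is visible to everything called within
  (log-msg "now you see me"))
(log-msg "now you don't")             ; prints nothing; the binding was undone on exit

This is what makes "tweak behaviour from the outside" easy: a caller can rebind a variable around any call without the callee's cooperation.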

---

why guile:

http://wingolog.org/archives/2011/08/30/the-gnu-extension-language

---

guile has delimited continuations:

http://wingolog.org/archives/2011/08/30/the-gnu-extension-language


" Unfortunately, there is no equivalent to IPython and there will never be, since the language does not have support for docstrings, nor the introspection facilities of Python: you would need to switch to Common Lisp with SLIME to find something comparable or even better.

All the Scheme implementations I tried are inferior to Python as far as introspection and debugging capabilities are concerned. Tracebacks and error messages are not very informative. Sometimes, you cannot even get the number of the line where the error occurred; the reason is that Scheme code can be macro-generated and the notion of line number may become foggy. On the other hand, I must say that in the five years I have been using Scheme (admittedly for toying and not for large projects) I have seen steady improvement in this area.

To show you the difference between a Scheme traceback and a Python traceback, here is an example with PLT Scheme, the most complete Scheme implementation and perhaps the one with the best error management:

$ rlwrap mzscheme
Welcome to MzScheme v4.1 [3m], Copyright (c) 2004-2008 PLT Scheme Inc.
> (define (inv x) (/ 1 x))
> (inv 0)
/: division by zero

Type "help", "copyright", "credits" or "license" for more information. >>> def inv(x): return 1/x ... >>> inv(0) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 1, in inv ZeroDivisionError?: integer division or modulo by zero

I should mention however that PLT is meant to be run inside its own IDE, DrScheme. DrScheme highlights the line with the error and includes a debugger. However such functionalities are not that common in the Scheme world and in my own experience it is much more difficult to debug a Scheme program than a Python program.

The documentation system is also very limited as compared to Python: there is no equivalent to pydoc, no help functionality from the REPL, and the concept of a docstring is missing from the language. The road to Scheme is long and uphill; from the point of view of the tools and reliability of the implementations you will probably be better off with Common Lisp. However, in my personal opinion, even Common Lisp is by far less productive than Python for the typical usage of an enterprise programmer. "


against CL:

http://blog.jacius.info/2012/05/29/a-personal-lisp-crisis/


" Lack of types in scheme has made me day-dream about learning ocaml or haskell. My pet peeve about scheme is that, when maintaining old code, its very very hard to just "know" what type some lambda is expecting. And, to me, this makes scheme sometimes very hard to read. "


" And Guile is quite nice, as a language - but some things still made me miss Racket.

Examples are: local defines, let-values, port->string, Racket's for loop forms, and the quality of the documentation: Guile's docs don't include the return value in the function signature - you have to parse the prose around it for that information. Not nice "


https://news.ycombinator.com/item?id=1373443


" Common Lisp is considered by many to be one of the most expressive programming languages in existence. Individuals and small teams of programmers have created fantastic applications and operating systems within Common Lisp that require much larger effort when written in other languages. Common Lisp has many language features that have not yet made it into the C++ standard. Common Lisp has first-class functions, dynamic variables, true macros for meta-programming, generic functions, multiple return values, first-class symbols, exact arithmetic, conditions and restarts, optional type declarations, a programmable reader, a programmable printer and a configurable compiler. Common Lisp is the ultimate programmable programming language. ' -- http://drmeister.wordpress.com/2014/09/18/announcing-clasp/


krig 3 days ago

Considering that SBCL is among the fastest programming language implementations there is, 100x is actually not terrible for a first version.

sigil 3 days ago

> Considering that SBCL is among the fastest programming language implementations there is...

Source for this? I'd love to find a really fast lisp or scheme. Hadn't heard SBCL was particularly fast, and the alioth benchmarks don't show anything special there.

http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?t...

Edit: actually SBCL stacks up alright against languages like Go or Rust in the alioth benchmarks, so maybe that's what you had in mind.

krig 3 days ago

I'd say those numbers you link to are pretty amazing for a garbage collected language. Compared to Java, SBCL is roughly on par, with some benchmarks 2-3x slower and some 2-3x faster.

edit: To clarify, I'm not saying SBCL makes Common Lisp the fastest language (I don't even think that's a meaningful statement). But to be within 2-3x of the JVM or C (and even outperforming C in some scenarios) certainly puts SBCL among the fastest language implementations. All the other ones you mention (C, C++, Go, Java..) are indeed also among the fastest. :)

ohyes 2 days ago

Amazing for a garbage collected dynamically typed language.

mrottenkolber 3 days ago

> Hadn't heard SBCL was particularly fast

What did you think was particularly faster than SBCL?

sigil 3 days ago

> What did you think was particularly faster than SBCL?

Haskell, Scala, Java, Go and of course C, C++, Fortran all outperform SBCL in the alioth benchmarks.

Against the schemes, lisps, and "scripting" languages though SBCL stacks up favorably. I didn't notice that krig's comment was mainly comparing SBCL to this latter category ("programming language implementations").

ohyes 2 days ago

Also consider that the C/C++ versions use intrinsics, which means it's basically a compiler vs random assembly level code. Without that level of optimization they're fairly equivalent.

wtbob 3 days ago

This is extremely exciting, particularly if it can be made to work with emscripten. Common Lisp in the browser, here we come!

aidenn0 3 days ago

Almost certainly can't. The closest you can get is using clicc or ECL or something similar to generate C code, and then compile that with emscripten.

This invokes llvm at runtime, and AFAIK llvm isn't ported to emscripten.

FWIW, I've tried other lisps under emscripten:

Most lisps generate machine code, store them in RAM, and then execute that RAM. This is not possible under emscripten.

Clisp is a good candidate, since it's byte-code interpreted rather than generating machine code, but clisp makes so many assumptions about how the machine works (in particular it strongly wants a C style stack and does manual stack-pointer manipulation). I actually got fairly far into the bootstrap process under emscripten, but the minimal lisp interpreter it compiles generated bizarre errors.

drmeister 3 days ago

I disagree. Clasp could do this because it compiles Common Lisp to LLVM-IR bitcode files (using COMPILE-FILE) as well as directly to native code using LLVM's MCJIT engine (using COMPILE). emscripten (https://github.com/kripken/emscripten) says that it compiles LLVM-IR to JavaScript. I haven't used emscripten, but I believe everything I read on the internet :-) and thus Common Lisp --[Clasp]--> LLVM-IR bitcode files --[emscripten]--> run within browsers.

eudox 3 days ago

I'm not an expert in Common Lisp internals, but there are some things that don't look like they could be entirely put in an executable without bringing the compiler and runtime along. I'm not just talking about things like a library that outright uses eval, but the more high-level things like macros that call compiled functions and reader macros.

Then again, you might be able to compile the compiler to LLVM IR -> emscripten -> JavaScript, and use that to compile CL code.

drmeister 3 days ago

Agreed, but it's more of a "tree-shaking problem" than anything else. Clasp is written in C++ and Common Lisp. Clasp compiles Common Lisp to LLVM-IR and Clang compiles C++ to LLVM-IR. So theoretically you could compile everything to LLVM-IR and feed that to emscripten. Granted, it's going to be a _huge_ LLVM-IR file. Then you shake out the functions and globals that aren't needed. If the compiler isn't needed then it will shake out (or not be compiled in in the first place). The question for me is "what problem does it solve". I assume there are problems that I'm not aware of that it would solve, otherwise why would someone develop emscripten? Common Lisp is a fantastic language that is really underutilized, it's fun and so expressive. I fell in love with Common Lisp three years ago so deeply that I wrote a new Common Lisp to solve my scientific programming challenges while still being able to make use of powerful C++ libraries. I think everyone should use it everywhere - it's awesome.

aidenn0 3 days ago

If you can generate standalone bitcode that doesn't ever generate more bitcode at runtime, then yes emscripten becomes a possibility.

mateuszf 3 days ago

ClojureScript already exists, and even if it's not Common Lisp, it's still a Lisp with homoiconicity, macros and what not.


---

yvdriess 3 days ago


Fingers crossed then that Azule can get a proper moving GC to work with LLVM; until then, a CL on LLVM (even Julia) is stuck with kinda bad GC.

I wonder why start from scratch for an LLVM backend, can't SBCL be used to generate LLVM code?

drmeister 3 days ago

SBCL doesn't generate LLVM code. My primary goal was Common Lisp with C++ interoperation. It seemed easier at the time to start from the ECL Common Lisp code base and write a new C++ core and a Common Lisp interpreter that always interoperated with C++. As I wrote the compiler and expanded the system I maintained C++ interoperation all the time. There were a hundred problems that I had to discover and solve along the way to maintain a Common Lisp that interoperated with C++. You can get some level of interoperation between ECL and C++ up in a weekend but it won't work with C++ exceptions and RAII and there are dozens of other problems. In retrospect I don't think I would have gotten here starting from ECL because I never really understood the ECL code.

aidenn0 3 days ago

It wasn't from scratch, it was from ECL. And did you mean Azul?

ioddly 3 days ago

Is the compacting garbage collector mentioned in the article not a proper moving GC? I am not familiar with it.

drmeister 3 days ago

Yes, the Memory Pool System by Ravenbrook (https://www.ravenbrook.com/project/mps/) is a proper, moving garbage collector. It uses precise GC on the heap and conservative GC on the stack, as does the garbage collector in Steel Bank Common Lisp on x86 chips. I need it because I need my code to run on 100,000 CPU supercomputers with a controlled memory footprint to develop organic nano machines (seriously).

ioddly 3 days ago

To be honest I was just kind of thinking "oh great another language implementation," before I read what you're actually doing and why you needed to create clasp. I appreciate the difficulty of writing precisely GC'd C/C++. It's pretty awesome that you were able to use clang to (I assume this is mainly what the analyzer does) track roots in C & C++ code.

Best of luck.

drmeister 2 days ago

Thanks - yes the analyzer tracks roots through about 300 C++ classes. It also finds global variables and builds C++ code to interface with the MPS library. I exposed the Clang libraries to search the AST and describe the AST in Common Lisp and then wrote the static analyzer in Lisp. I shudder at the thought of doing this all in C++ and I write a lot of complicated stuff like Common Lisp implementations in C++ :-). Common Lisp is the language of trees and pattern recognition. Common Lisp is the perfect tool for this job.


---

discussion on And, Why Didn't Dijkstra Like Lisp?

https://news.ycombinator.com/item?id=1373443

---

http://lwn.net/SubscriberLink/615220/45105d9668fe1eb1/

 Templeton, who has been working on Guile-Emacs for the past five years in a series of Google Summer of Code projects, listed quite a few other benefits to rebasing Emacs on the Guile engine, including "a full numeric tower, structure types, CLOS-based object orientation, a foreign function interface, delimited continuations, a module system, hygienic macros, multiple values, and threads." As of now, Templeton reports that the vast majority of modules in GNU Emacs, as well as a significant set of popular external extensions, run reliably on Guile-Emacs. 

...

the two projects currently use different internal string representations, which means that text must be decoded and encoded every time it passes in or out of the Guile interpreter. That inefficiency is certainly not ideal, but as Kastrup noted, attempting to unify the string representations is risky. Since Emacs is primarily a text editor, historically it has been forgiving about incorrectly encoded characters, in the interest of letting users get work done—it will happily convert invalid sequences into raw bytes for display purposes, then write them back as-is when saving a file.

But Guile has other use cases to worry about, such as executing programs which ought to raise an error when an invalid character sequence is encountered. Guile developer Mark H. Weaver cited passing strings into an SQL query as an example situation in which preserving "raw byte" code points could be exploited detrimentally. Weaver also expressed a desire to change Guile's internal string representation to UTF-8, as Emacs uses, but listed several unresolved sticking points that he said warranted further thought before proceeding.

...

Templeton, for example, noted that Common Lisp has no feature that corresponds to Emacs's buffer-local variables, which are often used in Emacs extensions. But Monnier pointed out an even thornier problem—while Emacs Lisp regards a boolean "false" and an empty list as being equal, other Lisp dialects do not: Basically, Scheme has #f, (), and nil as 3 distinct objects. So Guile-Emacs picked one of those as being Elisp's nil, so as long as you stay all within Elisp things work just fine (presumably), but as soon as some Scheme gets into the picture you might get new values which are similar to nil but aren't `eq' to it.
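
The incompatibility exists because Common Lisp and Emacs Lisp collapse a distinction Scheme keeps. In CL (and Elisp):

(eq nil '())         ; => T: boolean false and the empty list are the same object
(if '() :yes :no)    ; => :NO, because an empty list is false

Scheme instead keeps #f and '() distinct, and '() counts as true in an if, so a third nil object has to be invented for Elisp interop, with the `eq' confusion described above.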

---