ideas-computer-jasper-jasperNotes5

The Spirit parser framework - metaprogramming and no macros

On the topic of metaprogramming and macros - there is one excellent example of metaprogramming with minimal macros - the Spirit parser framework (part of the Boost C++ libraries).

The developer who created Spirit is very much against macros (he says so very forcefully in the docs), and I agree with him.

Anyway - in response to the "let's make a programming language" challenge, rather than my suggesting a completely new language, I'm keen to see Orca (the open-source Rebol clone) developed. At present, it's still pretty basic, but has great potential (given that Syllable will now use it as their default scripting language).

Of all programming languages, only Forth comes close to the "power-to-weight" ratio that Rebol has. Python, Ruby, Perl - all good, but pretty heavyweight. One of Rebol's best aspects is that DSLs are trivial to create, using "dialects". So, there's the "create your own language" thing taken care of ... :-) By obsidian at Sun, 2006-06-04 21:42


Isn't Spirit written with template metaprogramming techniques? These are C++'s ad hoc version of macros. We aren't talking about the C preprocessor here. Or do you mean the author limited his use of template metaprogramming? By rebooted at Sun, 2006-06-04 22:02
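The idea Spirit really demonstrates is that a grammar can be embedded as ordinary host-language expressions, with no preprocessor macros. A tiny parser-combinator sketch in Scala - my own illustration of that embedding idea, not related to Spirit's actual API:

// A tiny parser-combinator sketch -- my own illustration of the "embedded
// grammar, no macros" idea, not Spirit's actual API.

object CombinatorSketch {
  // A parser consumes a prefix of the input and returns a value plus the rest.
  type Parser[A] = String => Option[(A, String)]

  def digit: Parser[Char] = in =>
    if (in.nonEmpty && in.head.isDigit) Some((in.head, in.tail)) else None

  // One or more repetitions of p.
  def many1[A](p: Parser[A]): Parser[List[A]] = in =>
    p(in).map { case (a, rest) =>
      many1(p)(rest) match {
        case Some((as, rest2)) => (a :: as, rest2)
        case None              => (List(a), rest)
      }
    }

  // "Grammar rule": an integer is one or more digits.
  val integer: Parser[Int] = in =>
    many1(digit)(in).map { case (ds, rest) => (ds.mkString.toInt, rest) }

  def main(args: Array[String]): Unit = {
    println(integer("123abc"))  // Some((123,abc))
    println(integer("abc"))     // None
  }
}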


---

copx 1 day ago

Some of my relatively obscure favorites:

Io: http://iolanguage.org/

Dylan: http://opendylan.org/

Euphoria: http://www.rapideuphoria.com/


zem 1 day ago

I've always felt that had I discovered Euphoria earlier in my programming life, I would have really liked it. These days I'd miss all the features more recent languages have borrowed from Lisp and/or ML, but it would have been an awesome second or third language (after BASIC and C).


copx 1 day ago

I discovered Euphoria back in the DOS days when it was still commercial. I agree that it seems a little dated these days but I still fondly remember it.

Also, it is still a great language for non-professional programmers and still has an active community there. Note the recently updated SDL2, SFML2, and Allegro bindings, for example.


iso-8859-1 1 day ago

MetaML, a language with as many levels of macros as you'd like: http://www.cs.rice.edu/~taha/publications/journal/tcs00.pdf

(implemented as MetaOCaml)


zem 1 day ago

If you're looking for interesting systems languages, Clay looks good too: http://claylabs.com/clay/


InAnEmergency 1 day ago

You may be interested in the Fledgling Languages List: http://fll.presidentbeef.com/


doublec 1 day ago

For an interesting systems programming language, try ATS [1]. It's an ML-style functional programming language with dependent types, linear types, macros, and generics, and it compiles to C, which gives it reasonable portability.

[1] http://www.ats-lang.org/


epsylon 1 day ago

Check out fogus' lists of "Perlis languages": http://blog.fogus.me/2011/08/14/perlis-languages/ http://blog.fogus.me/2011/10/18/programming-language-develop...


-- " Frankly, I guess I've become something of a functional zealot, so my problem with Go is that it's so stubbornly imperative. This is why I can't get behind Go, as much as I want to. I feel like it doesn't consider many of the lessons that have been learned about FP making life easier. Set operations (map, filter, reduce) are insanely common, yet doing them imperatively sucks to the point of discouraging them. Excessive mutability is difficult to reason about. Nobody expects the nil/null/none inquisition. We tend to forget about side effects. Without an extensible language, which requires truly first-class functions, you're at the language designer's mercy.

Hell, Scala probably isn't the be-all, end-all answer to all of this. I just don't think that a doggedly-imperative, non-composable, statement-oriented language is the future of programming "in the large", not when the past has shown us that we tend to produce buggy, unmaintainable software this way. I'm pragmatic enough to realize that pure FP isn't a solution, but I feel strongly that sticking to traditional imperative because it's familiar is a costly mistake.

"

---

" Now, static typing is just one kind of static assertion[1] I'd like to have. Side effect isolation (i.e. language-enforced purity) would be another feature I'd like to see become common. For example, the D programming language has a "pure" keyword that lets you mark a function as side-effect free. (In Haskell, all function are pure by default, and you escape the confines of purity via monads.)

I'd like to do research into (inventing) other forms of static assertions. One thing that's been on my mind lately, has been static "complexity assertion" for programs. I don't know if this even possible, but it would be nice to be able to ascertain before running a program, certain time & space complexity bounds on it. This would perhaps require us to drop the programming language itself to some level "slightly below" Turing machines -- but this in itself could be achieved via a keyword like D's "pure" or something more elegant. (Note: my point is not that we're going to make the entire language non-Turing complete -- but rather that we'll have subsets of the language that are not, and this reduction of power would/could bring with it a greater possibility for statically asserting stuff.)

[1] FYI I made this term up. Let me know if there's something better that describes this. "
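The "language-enforced purity" the quote attributes to Haskell can be approximated in any language by pushing effects into a value whose type advertises them. A toy Scala sketch, entirely my own and drastically simplified; the IO and ConsoleIO names are hypothetical, not a real library:

// Toy sketch of effect isolation via types (the Haskell-style approach
// mentioned above, drastically simplified). IO and ConsoleIO are my own
// hypothetical names, not a real library.

final case class IO[A](unsafeRun: () => A) {
  def map[B](f: A => B): IO[B]         = IO(() => f(unsafeRun()))
  def flatMap[B](f: A => IO[B]): IO[B] = IO(() => f(unsafeRun()).unsafeRun())
}

object ConsoleIO {
  def putLine(s: String): IO[Unit] = IO(() => println(s))
}

object PuritySketch {
  // A pure function: same input, same output, no observable effects.
  def square(x: Int): Int = x * x

  // An effectful computation is only a description until someone runs it,
  // so the type signature itself advertises the side effect.
  def report(x: Int): IO[Unit] =
    ConsoleIO.putLine(s"square($x) = ${square(x)}")

  def main(args: Array[String]): Unit =
    report(7).unsafeRun()  // effects happen only at the edge of the program
}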

sridharvembu 8 hours ago

Your point on "slightly below" Turing machines caught my attention - exactly the same terminology I have used. I want as many proofs (assertions) as possible about code, and Rice's Theorem is a problem, so slightly below Turing is on the radar. If you are interested, we can discuss this. Shoot me an email at svembu at zoho ...


chas 6 hours ago

Total functional programming (and the associated analysis of codata) has already been touched on, so I'll just address your interest in static guarantees for space usage. The Virgil programming language[1] has been designed with exactly this in mind. It is aimed at embedded systems where memory is extremely constrained and running out of memory could kill people. Heap allocation is not possible in the language, and all data structures are initialized during compilation (like C++ templates, but more sophisticated). The compiler can use the initialization information for advanced optimization and analysis, as well as serving as a bound on the memory usage.[2] The language also has some interesting static typing ideas, but they are not as distinct from other languages.

Further discussion on LtU: http://lambda-the-ultimate.org/node/2131

[1] https://code.google.com/p/virgil/ [2] http://compilers.cs.ucla.edu/virgil/overview.html


e12e 2 hours ago

I wonder if a restricted subset of a stack-based language like Forth might also work?


winter_blue 8 hours ago

I was just going to make the same reply.

> if you limit yourself to a language or a sub-set of a language that is not Turing-complete, then you can make static assertions about things such as halting

There's already something that is non-Turing-complete but comes with a halting guarantee -- it's called total functional programming[1]. This paper (linked below) is actually what got me thinking about this whole thing. In it, the author argues that functions in functional programming languages aren't really like mathematical functions, because they can run forever (never halt) and thus never produce a proper return value. To mitigate that, he creates "total functional programming".

Total functions are guaranteed to have a return value (i.e. terminate eventually), but they could take a very long time to run. If we could actually have complexity bounds, or be able to determine a given function's complexity, that would be a great boon to software engineering (i.e. assuming they adopt it... which is a whole different matter).

The author also makes a very good point that most parts of your program are meant to terminate (i.e. the vast set of functions and algorithms that comprise your program). Only the top level needs to be Turing-complete; the rest can at least be total.

I actually want to see if it's possible to push this concept further, to create a ring-like hierarchy of languages -- the lowest one being the weakest and the one we can extract the most static assertions out of, and so on. There's a language called Hume[2] that sounds like it accomplishes at least part of this goal, but I haven't really looked into it.

[1] http://www.jucs.org/jucs_10_7/total_functional_programming/j...

[2] https://en.wikipedia.org/wiki/Hume_%28programming_language%2...
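A small Scala sketch of the total-vs-general-recursion distinction being discussed - my own illustration, not code from the paper. Structural recursion shrinks its argument on every call, so termination is guaranteed; general recursion carries no such guarantee:

// My own illustration of total vs. general recursion, not code from the paper.

object TotalitySketch {
  // Total: the recursive call is on a strictly smaller argument (the tail),
  // so it terminates for every finite list.
  def length[A](xs: List[A]): Int = xs match {
    case Nil     => 0
    case _ :: tl => 1 + length(tl)
  }

  // Not total: nothing forces the argument to shrink, so termination depends
  // on a property (eventually reaching 1) that no compiler can check today.
  def collatzSteps(n: Long): Long =
    if (n == 1) 0
    else if (n % 2 == 0) 1 + collatzSteps(n / 2)
    else 1 + collatzSteps(3 * n + 1)

  def main(args: Array[String]): Unit = {
    println(length(List(1, 2, 3)))  // 3
    println(collatzSteps(27))       // 111, but there is no general termination proof
  }
}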


creatio 5 hours ago

For the "complexity assertion" I think you may be referring to something like this: http://resourceanalysis.cs.ru.nl/

Maybe it's not exactly what you meant, but it can do basic analysis of loops.


---

rprospero 10 hours ago

It's not cliche, it's just personal. For instance, most of the code I write is numerical code. To that end, any language without operator overloading is a non-starter, since it's far harder to find a bug in:

div(add(mult(-1,b),sqrt(sub(pow(b,2),mult(mult(3,a),c)))),mult(2,a))

than it is in

(-b+sqrt(b^2-3ac))/(2*a)

On the other hand, if I wrote server code, operator overloading would be far less useful. I'd probably curse any programmer who used it and thank the gods that it was left out of Go.

Conversely, since I write a lot of numerical code, I don't care about generics or typing, which is crucial to many other programmers. Generics don't matter since everything I work with is either a Double or a collection of Doubles. Similarly, static typing doesn't help, since most functions just have the type (Double -> Double) and the type checker can't tell a sine from a logarithm. Of course, the reverse is also true. Since everything is either a double or a collection of doubles, the fancy tricks that dynamic languages offer don't give me a damn thing, so I'm extremely ambivalent about the typing debate.

Of course, on other projects, I've written code that benefited from static typing and I've written valid code that would choke any type checker. I've written code that heavily needed generics. When I did that, I used the languages that had the features I needed.

Go just won't work for the programs I write and it sounds like it doesn't work for yours, either. That's why we won't use Go. I've heard it works wonderfully for a certain class of server software and I'm glad those guys have a useful language for their domain. If I ever have to write another server, I might even teach myself Go. But don't feel guilty that you're using an old hammer instead of the shiny new saw.
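A toy Scala sketch of the operator-overloading point - my own illustration; the Vec3 type is made up. Because operators are ordinary methods, a user-defined numeric type keeps the infix notation that makes the formula above readable:

// My own illustration of operator overloading for numerical code; Vec3 is made up.

final case class Vec3(x: Double, y: Double, z: Double) {
  def +(that: Vec3): Vec3     = Vec3(x + that.x, y + that.y, z + that.z)
  def -(that: Vec3): Vec3     = Vec3(x - that.x, y - that.y, z - that.z)
  def *(s: Double): Vec3      = Vec3(x * s, y * s, z * s)
  def dot(that: Vec3): Double = x * that.x + y * that.y + z * that.z
}

object OverloadSketch {
  def main(args: Array[String]): Unit = {
    val a = Vec3(1, 0, 0)
    val b = Vec3(0, 2, 0)
    // Reads like the maths, instead of sub(add(mult(a, 2.0), b), ...) nesting.
    val c = a * 2.0 + b - Vec3(0, 0, 1)
    println(c)         // Vec3(2.0, 2.0, -1.0)
    println(a.dot(b))  // 0.0
  }
}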

---

termain 9 hours ago

Ambivalent or apathetic regarding the typing debate?

Typing can become useful in numerical code when you move past operating on scalars. Column and row vectors needn't be confused, a two-vector and a complex number have different types, etc.

Also, physical quantities can have different types and a type system can be useful there.

I totally agree that for numerical code, operator overloading is of great utility.
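A small Scala sketch of termain's point - my own illustration; the wrapper types are made up. Distinct types for quantities that are all doubles at runtime let the compiler catch swapped arguments:

// My own illustration of types for physical quantities; the wrappers are made up.

final case class Metres(value: Double) extends AnyVal
final case class Seconds(value: Double) extends AnyVal
final case class MetresPerSecond(value: Double) extends AnyVal

object UnitsSketch {
  def speed(d: Metres, t: Seconds): MetresPerSecond =
    MetresPerSecond(d.value / t.value)

  def main(args: Array[String]): Unit = {
    println(speed(Metres(100), Seconds(9.58)).value)  // ~10.44
    // speed(Seconds(9.58), Metres(100))  // does not compile: arguments swapped
  }
}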


--

http://stackoverflow.com/questions/3606591/why-does-intellij-idea-compile-scala-so-slowly

" up vote 177 down vote accepted

There are two aspects to the (lack of) speed for the Scala compiler.

1. Greater startup overhead

   - Scalac itself consists of a LOT of classes which have to be loaded and JIT-compiled.
   - Scalac has to search the classpath for all root packages and files. Depending on the size of your classpath, this can take one to three extra seconds.

   Overall, expect a startup overhead for scalac of 4-8 seconds, longer if you run it for the first time so disk caches are not filled.

   Scala's answer to startup overhead is to either use fsc or to do continuous building with sbt. IntelliJ needs to be configured to use either option; otherwise its overhead, even for small files, is unreasonably large.

2. Slower compilation speed. Scalac manages about 500 to 1000 lines/sec; javac manages about 10 times that. There are several reasons for this:

   - Type inference is costly, in particular if it involves implicit search.
   - Scalac has to do type checking twice: once according to Scala's rules and a second time, after erasure, according to Java's rules.
   - Besides type checking, there are about 15 transformation steps to go from Scala to Java, which all take time.
   - Scala typically generates many more classes per given file size than Java, in particular if functional idioms are heavily used. Bytecode generation and class writing take time.

   On the other hand, a 1000-line Scala program might correspond to a 2-3K-line Java program, so some of the slower speed, when counted in lines per second, has to be balanced against more functionality per line.

We are working on speed improvements (for instance by generating class files in parallel), but one cannot expect miracles on this front. Scalac will never be as fast as javac. I believe the solution will lie in compile servers like fsc, in conjunction with good dependency analysis so that only the minimal set of files has to be recompiled. We are working on that, too.

"

" The fastest to compile was JUnit, followed closely by TestNG? and then ScalaTest?. specs2 was significantly slower. So far the evidence indicates the differences in test compile time are influenced primarily by the number of implicits in scope, the number of by-names used, and whether or not methods are being mixed into the test class by extending a supertrait versus inherited by extending a superclass.

Most likely the influence of the number of implicits in scope is in some way multiplied by the actual number of implicit applications, because at each implicit application the compiler must ensure one and only one implicit “heals” the candidate compiler error. In other words, the compiler can't stop looking at implicits once it finds one that solves the type error at hand; it must keep on looking to make sure one or more other implicits don't also solve that same type error. Thus the compile time cost of implicit use is likely related to the number of implicits in scope multiplied by the number of times the compiler must check all those implicits (i.e., the number of implicit applications). I say “likely,” because as yet we haven't written a script to verify this theory. If someone wants to contribute a new script to the project, this would be a good one to contribute.

Also, most likely it is the number of function literals in general, not just the use of by-names, that influences compile time. But again, we have not actually written a script that tests that theory. This would be another good candidate for a contribution to the project.

Lastly, the compile time cost of mixing in a trait compared to extending a superclass likely exists because the compiler must insert forwarding methods for all the methods declared in the trait into the body of the subclass when mixing in, but when extending a superclass it does not. Instead, the subclass simply inherits the superclass implementations of those methods. We did observe that the larger the number of methods to be mixed in, the more the compile time increased. As JUnit and TestNG don't use traits (being Java frameworks), this was a difference observed between ScalaTest and specs2 only. Because specs2's mutable and immutable Specification traits declare more methods than ScalaTest's various style traits, the compilation of specs2 styles is being slowed down more by this than that of ScalaTest.
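The implicit-ambiguity behaviour described a couple of paragraphs up is easy to reproduce; a minimal Scala sketch, my own and not code from the quoted article:

// My own illustration of why scalac must examine every implicit in scope:
// with two equally specific candidates, it reports an ambiguity rather than
// silently taking the first one it finds.

object ImplicitAmbiguitySketch {
  trait Show[A] { def show(a: A): String }

  implicit val intAsDecimal: Show[Int] = new Show[Int] {
    def show(a: Int): String = a.toString
  }
  implicit val intAsHex: Show[Int] = new Show[Int] {
    def show(a: Int): String = "0x" + a.toHexString
  }

  def render[A](a: A)(implicit s: Show[A]): String = s.show(a)

  // render(42)  // does not compile: "ambiguous implicit values" (both candidates apply)
}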