proj-oot-ootMetaprogrammingNotes1


this blog post claims that haskell obsoletes lisp macros, b/c you only need macros to avoid evaluating arguments, which haskell does by default (laziness):

" Macros

Another beauty of Lisp is its macro facility. I’ve not seen its like in any other language. Because the forms of code and data are equivalent, Lisp's macros are not just text substitution: they allow you to modify code structure at compile-time. It’s like having a compiler construction kit as part of the core language, using types and routines identical to what you use in the runtime environment. Compare this to a language like C++, where, despite the power of its template meta-language, it employs such a radically different set of tools from the core language that even seasoned C++ programmers often have little hope of understanding it.

But why is all this necessary? Why do I need to be able to perform compile-time substitutions with a macro, when I can do the same things at runtime with a function? It comes down to evaluation: Before a function is called in Lisp, each of its arguments must be evaluated to yield a concrete value. In fact, it requires that they be evaluated in order before the function is ever called.

Say I wanted to write a function called doif, which evaluates its second argument only if the first argument evaluates to true. In Lisp this requires a macro, because an ordinary function call would evaluate that argument in either case:

(defun doif (x y) (if x y))       ; WRONG: both x and y have been evaluated already
(defmacro doif (x y) `(if ,x ,y)) ; Right: y is only evaluated if x is true

What about Haskell? Does it have a super-cool macro system too? It turns out it doesn’t need to. In fact, much of the coolness of Haskell is that you get so many things for free, as a result of its design. The lack of needing macros is one of those:

doif x y = if x then (Just y) else Nothing

Because Haskell never evaluates anything unless you use it, there’s no need to distinguish between macros and functions. " -- http://newartisans.com/2009/03/hello-haskell-goodbye-lisp/

is it true that that's all that macros are for? it squares with this:

" 3. Purpose: To control evaluation of the arguments.

Since macros are so much harder to use than functions, a good rule of thumb is: don't use defmacro if defun will work fine. So, for example, there would be no reason to try to use a macro for Square: a function would be much easier to write and test. In Lisp, unlike in C, there is no need to use macros to avoid the very small runtime overhead of a function call: there is a separate method for that (the "inline" proclamation) that lets you do this without switching to a different syntax. What macros can do that functions cannot is to control when the arguments get evaluated. Functions evaluate all of their arguments before entering the body of the function. Macros don't evaluate any of their arguments at preprocessor time unless you tell it to, so it can expand into code that might not evaluate all of the arguments. For example, suppose that cond was in the language, but if wasn't, and you wanted to write a version of if using cond. " -- http://www.apl.jhu.edu/~hall/Lisp-Notes/Macros.html
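to make this concrete: in a lazy language the closing exercise above ("write a version of if") needs no macro at all, because the untaken branch is never forced. a minimal Haskell sketch of that point:

  myIf :: Bool -> a -> a -> a
  myIf True  t _ = t
  myIf False _ e = e

  -- myIf (1 < 2) "yes" (error "never forced")  ==>  "yes"

(in Lisp, the same thing written as a function would evaluate both branch expressions before the call, which is exactly why the macro is needed there.)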

from the comments of http://newartisans.com/2009/03/hello-haskell-goodbye-lisp/ :

harsha says: March 14, 2009 at 2:18 pm

Well, i like lisp(scheme) & haskell too. But note that the need for macros is only eliminated in some cases. In particular, typeclasses like monads & arrows have special notation which helps a lot in using them. If i am not wrong, i think there is no way for you to directly define something for your own custom typeclass, what the do notation does for monads. So you still need macros, either via something like Template Haskell or Liskell.

Sam says: March 14, 2009 at 5:13 pm

I don’t think you really did CL-style macros justice. They can be used for a lot more than just changing the order that arguments are evaluated in — you can create whole new syntactic constructs at will.

For one thing, this means that CL doesn’t tend to ‘lag behind’ in terms of language design, since if another language ever introduces something innovative then you can easily ‘extend lisp’ with macros to add that functionality. There is no need to wait for an updated compiler.

The other thing is that it allows you to build languages tailored to solving the particular problem at hand. DSLs are cool :-)

Having said that, I have issues with lisps that macros just don’t make up for, and love Haskell more in any case :-p

John Wiegley says: March 14, 2009 at 6:13 pm

You’re indeed right, I couldn’t do CL justice in this regard. When I referred to being like a “compiler construction set”, I meant to imply a whole world of goodness. Being able to utilize the entire Lisp runtime at compile-time is something that just can’t be expressed in a few words like this.

Peter Seibel says: March 14, 2009 at 6:14 pm

I think you do a bit of a disservice to Lisp’s macros: the more interesting macros are not ones that simply delay evaluation of certain forms. More interesting is when a macro transforms something that doesn’t have any meaning into something that does. I give some examples of such macros in Practical Common Lisp (http://www.gigamonkeys.com/book/), in particular Chapter 24 on parsing binary files. Which is not to say that Haskell isn’t cool too. ;-)

John Wiegley says: March 14, 2009 at 6:40 pm

You’re so right about that, Peter. Lisp’s macros can be used to transform arbitrary syntax at compile-time into something legal, which allows for extreme freedoms of expression. You can even implement whole DSLs by macro alone — which is just what `LOOP` does, for instance.

So I take back my assertion that its essential purpose is to control evaluation; it’s truly a thing of beauty that other languages should take note of.

Sam says: March 14, 2009 at 7:22 pm

I would think that many other languages *have* taken note — the issue is that macros only really work in lisp because of the list based syntax. You can certainly do them in languages with more ‘normal’ syntax (see Dylan and Nemerle, for example) but they’re far less pleasant to use.

There really isn’t a lot you can do about it, either, since it’s the trivial syntax of CL that makes CL macros so easy to use. I think we’ll eventually see reasonable macro systems for complex syntaxes, but AFAIK they haven’t arrived yet.

So, someone might say, why have complex grammars at all? They obviously aren’t *necessary*, since simple ones like those found in lisps are obviously usable, but by providing special syntax for common operations you can make the language more succinct and expressive. One of CL’s failings, in my opinion, is that although the syntax can more or less be adapted to work with anything, it’s still general and never gives you the optimal solution for anything. More specific syntaxes are less flexible, but usually far more expressive and succinct in their particular problem domain.

One day I hope to see a language which allows for specialised syntax, but still translates it into a clean AST which can be manipulated by macros at eval time. Maybe I should make a demo language… :-p

Daniel Weinreb says: March 15, 2009 at 8:21 pm

I’ve been using Lisp for 33 years, since I wrote system software for the Lisp Machine at MIT, and later as a co-founder of Symbolics. I’m using Common Lisp again now, as part of a big team writing a high-performance, highly-available, commercial airline reservation system, at ITA Software. Recently, I started learning Haskell. It’s fascinating and extremely impressive. It’s so different from the Lisp family that it’s extremely hard to see how they could converge. However, you can make a Lisp that is mostly-functional and gets many of the parallelism advantages you discuss. We now have one that I think is extremely promising, namely Rich Hickey’s Clojure.

If you want to program in Common Lisp, read Practical Common Lisp by Peter Seibel, without question the best book on learning Common Lisp ever written. For Haskell, I’ve been reading Real World Haskell by Bryan O’Sullivan et al. It’s excellent and I highly recommend it.

All of the comments that I was going to make have been made very well already, particularly about the power of Lisp macros for language extension and making domain-specific languages.

Sam, above, wonders whether we’ll see reasonable macro systems for complex syntax. I presume he means macro systems that can match the power of Lisp’s macros. There is some progress being made in this area. At the International Lisp Conference next week, there will be an invited talk called “Genuine, full-power, hygienic macro system for a language with syntax”. This is by David Moon, my long-time colleague, who among many other things was one of the designers of Dylan. He has been inventing a new programming language, roughly along the lines of Dylan in some ways, and he’ll be talking about it for the first time. I’m pretty sure he does not claim to have brought the full power of Lisp macros to an infix-syntax language, but I think we’ll find out that it’s another important step in that direction.

By the way, the conference also features a tutorial called “Clojure in Depth”, by Rich Hickey himself, running five hours (in three parts), “The Great Macro Debate” about the virtues and vices of Lisp macros, and all kinds of other great stuff. We’ve closed online registration but you can still register at the door. It’s at MIT (Cambridge MA). See ilc09.org.

Clojure’s being written in terms of the JVM has an extremely important advantage: it lets the Lisp programmer access a huge range of libraries. Although there are a lot more great Common Lisp libraries than most people know about (we’ll be addressing this!), there’s no way Common Lisp can ever keep up with all the specialized libraries being developed for the JVM.

There are also two huge implementation advantages: Clojure’s implementation can ride on the excellent JIT compilers and the excellent garbage collectors of the various JVM implementations (have you tried out JRockit?) rather than having to do this work all over again.

Because your post showed so much depth of understanding, I was very interested to hear how you felt about Clojure. I don’t understand it, though.

It’s always been unclear to me precisely what people mean by “scripts” and “scripting languages”. The terms are used widely, but with very different meanings. For example, to some people, it seems that a “scripting language” is one with dynamic typing!

As far as I’m concerned, nobody has a broader and deeper knowledge of computer languages than Guy Steele. (I can back up that claim, if anyone wants me to.) So I asked him, and here’s what he said:

“By me, the term ‘scripting language’ is not intrinsic, but extrinsic: it describes the context and application for the language. That context is typically some large mechanism or collection of facilities or operations that may usefully be used one after another, or in combination with one another, to achieve some larger operation or effect. A scripting language provides the means to glue the individual operations together to make one big compound operation, which is typically carried out by an interpreter that simply ‘follows the script’ a step at a time. Typically scripting languages will need to provide at least sequencing, conditional choice, and repetition; perhaps also parallelism, abstraction, and naming. Anything beyond that is gravy, which is why you can put a rudimentary scripting language together quickly.”

Steele’s answer seems in line with John Ousterhout’s explanation of what Tcl was meant for. The idea is that you have two languages. At the lower level, you have something like C: suitable for writing programs that are very fast and work with the operating system, but hard to use for anyone but a professional. At the higher level, you have something like Tcl, which is easy to learn and use and very flexible, and which can easily invoke functionality at the lower level. The higher level acts as “glue” for the lower level. Another example like this is Visual Basic, and the way that you can write C programs that fit into VB’s framework.

In my own opinion, this kind of dichotomy isn’t needed in Lisp, where the same language is perfectly suitable for both levels. Common Lisp, as it is used in practice, is not so dynamic that it cannot be compiled into excellent code, but is easy to write for the kind of simple purposes to which Tcl is typically put. (Particularly for inexperienced programmers who are not already wedded to a C/C++/Java-style surface syntax.)

In your own case, you mention “tiny” and “fast-running” executables. I am not sure why “tiny” matters these days: disk space is very cheap, and the byte code used by the JVM is compact. Common Lisp programs compiled with one of the major implementations, and programs written for the Java Virtual Machine, execute at very high speed.

The fact that you distinguish between server-side and client-side applications suggests to me that what you’re really talking about is start-up latency: you’re saying that a very small program written for the JVM nevertheless has a significant fixed overhead that causes perceived latency to the user. Is that what you have in mind?

The last time this question came up, I did my own very quick and dirty test. I tried running a simple program in Clozure Common Lisp, from the command line, and I saw about 40ms of start-up latency on a not-particularly-fast desktop running an old Linux release. A trivial Python program took about 7ms. That’s better, but 40ms is not very noticeable. (I suppose if you’re writing a long command line piping together many “scripts” or running them in a loop, it would start to add up.)

As a hypothetical question just to clarify your meaning: if there were a JVM implementation that started up instantly, so that the speed of execution of a small program would be the same as the speed of the same code appearing in the middle of a long-running server process, would that answer your objections?

---

http://www.quora.com/Haskell/What-are-the-main-weaknesses-of-Haskell-as-a-programming-language says racket has a state-of-the-art macro system:

http://docs.racket-lang.org/syntax/Parsing_Syntax.html

-- Stepper:

"Reification without evaluation"

this paper gives some more complaints about 3Lisp's infinite tower and continuations:

http://dspace.mit.edu/bitstream/handle/1721.1/6461/aim-946.pdf?sequence=2

basically, they argue that continuations and the infinite tower may both be useful and interesting, but continuations should not have level-shifting properties. continuations should just do stuff within one program, like call/cc in scheme.

they give an alternative called Stepper, which, when passed program source to evaluate, instead of running the program, returns a tuple representing the first step to take while running the program, the current continuation, and arguments to the first step (this reminds me of a Haskell program, which is lazily 'evaluated'). Stepper also exposes two functions, implementationToProcedure and procedureToImplementation. procedureToImplementation can be called on the first element of the tuple to transform it into a function; that function can then be applied to the rest of the tuple to move forward a step in the computation.

implementationToProcedure can be used to define procedures "by specifying their step-by-step behavior in terms of tuples". For example, call/cc is implemented in terms of manipulating the continuation in the tuple.

internally, procedureToImplementation and implementationToProcedure are basically just tags/wrappers; they just help keep the level of things straight.

but they do allow the implementation of call/cc by going 'outside' the context of the executing program.

this is not much more useful than call/cc in the context of that paper (the other useful things it does are (a) allow you to write a step-until debugging function, and (b) allow you to define a procedure to be run one level up the infinite tower, and have it be exposed to the current level as an atomic instruction), but it does seem interesting.
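a rough Haskell rendering of the Stepper idea (toy representation, all names invented, not from the paper): evaluation doesn't run to completion; it hands back the next operation, the current continuation, and the step's arguments, and you push it forward one step at a time:

  data State v = Done v
               | Step ([v] -> (v -> State v) -> State v)  -- next operation
                      (v -> State v)                      -- current continuation
                      [v]                                 -- arguments for this step

  run :: State v -> v
  run (Done v)         = v
  run (Step op k args) = run (op args k)

  -- e.g. a single pending addition:
  addStep :: Num v => [v] -> (v -> State v) -> State v
  addStep [a, b] k = k (a + b)

  -- run (Step addStep Done [1, 2])  ==>  3

a call/cc-like operator then falls out of letting a step capture or replace the continuation field instead of just invoking it; procedureToImplementation / implementationToProcedure correspond to unwrapping/wrapping the function stored inside Step.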

he notes that two other meta things he left out were reification of expressions and reification of the environment. he left those out to prove that he could do an infinite tower just with this.

this is some serious meta. i guess if you had this you'd have all the meta you'd ever need. it may be too much though!

---

need a way to optionally pass the current environment (lexically scoped local variables) into an eval

(if the env is first class that's pretty easy)
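a minimal Haskell sketch of why a first-class env makes this easy (toy expression type, invented names):

  import qualified Data.Map as M

  type Env = M.Map String Int
  data Expr = Lit Int | Var String | Add Expr Expr

  -- an eval that takes an explicit, first-class environment
  evalIn :: Env -> Expr -> Maybe Int
  evalIn _   (Lit n)   = Just n
  evalIn env (Var x)   = M.lookup x env
  evalIn env (Add a b) = (+) <$> evalIn env a <*> evalIn env b

  -- evalIn (M.fromList [("x", 1)]) (Add (Var "x") (Lit 2))  ==>  Just 3

'optionally' is then just whether the caller passes its own env or some default/global one.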

--

" Current Lisp dialects, among which Scheme and Common Lisp are the most widely used ones, typically provide only restricted subsets of structural re ec- tion: Scheme's eval and Common Lisp's eval and compile can be used to turn a quoted lambda expression into a function (similar to down ), but they can- not be enclosed in arbitrary lexical environments, only in global or statically 11 In Lisp 1.5, only one such environment exists. prede ned environments. There is also typically no construct corresponding to up available that would allow retrieving the original de nition of a function. In terms of procedural re ection, neither Scheme nor Common Lisp allow de ning functions that receive unevaluated arguments as program text, neither Scheme nor Common Lisp specify operators for reifying lexical environments, and only Scheme provides call/cc for reifying the current continuation. Macros were in- troduced into Lisp 1.5 in the 1960's [11], and are considered to be an acceptable and generally preferrable subset of re ecting on source code [12]. The di erence in that regard to re ective procedures, fexpr , and so on, is that macros cannot be passed around as rst-class values and are typically restricted from access- ing runtime values during macro expansion. This allows compiling them away before execution in compiled systems, as is mandated for example by current Scheme and ANSI Common Lisp speci cations [13, 14]. Useful applications of rst-class lexical environments in Scheme have been described in the literature [15, 16], but the only Scheme implementation that seems to fully support rst- class environments at the time of writing this paper is Guile, and the only Lisp implementation that seems to do so is clisp in interpreted mode. 12 "

suggests:

first-class macros, first-class reified continuations, first-class reified environments, "functions that receive unevaluated arguments as program text", macros that can access runtime values during macro expansion, a construct 'up' that "would allow retrieving the original definition of a function"

--- Reflection for the Masses

http://www.p-cos.net/documents/s32008.pdf

" The CLOS MOP can be understood as a combination of procedural re ec- tion as in 3-Lisp together with Smalltalk's approach to object-oriented program- ming, where everything is an instance of a class, including classes themselves. Smalltalk's metaclasses provide a form of structural re ection, which for exam- ple allows manipulating method dictionaries, but lack meta-level protocols that can be intercepted in a procedurally re ective way (with the handling of the \message not understood" exception being a notable exception) [22]. However, Smalltalk provides rst-class access to the current call stack via thisContext , which roughly corresponds to a combination of the environment and the con- tinuation parameters in re ective lambdas [23]. In [24] Ducasse provides an overview of techniques, based on Smalltalk's re ective capabilities, that can be used to de ne a message passing control.

Self provides structural re ection via mirrors [25]. It can actually be argued that mirrors are a rediscovery of up and down from 2-Lisp, but put in an object- oriented setting. However, mirrors provide new and interesting motivations for a strict separation into internal and external representations. Especially, mirrors allow for multiple di erent internal representations of the same external object. For example, this can be interesting in distributed systems, where one internal representation may yield details of the remote reference to a remote object, while another one may yield details about the remote object itself. AmbientTalk?/2 is based on mirrors as well, but extends them with mirages that provide a form of (object-oriented) procedural re ection [26].

Aspect-oriented programming [27] extends existing programming models with the means to \modify program join points". Depending on the aspect model at hand, program join points are de ned as points in the execution of a program, or as structural program entities. In an object-oriented setting, examples of the former are \message sends" and \slot accesses", examples of the latter are classes and methods. The idea is that the programmer can make changes to program join points without having to change their sources, but by de ning distinct program modules called \aspects". This property is called obliviousness and is believed to improve the quality of software in terms of better modularity. One of the most in uential aspect languages is AspectJ? [28], which facilitates adding methods to classes, but also supports advising methods with logging code. Aspects are de ned in terms of pointcut-advice pairs: Pointcuts are declarative queries over program join points, whereas advice consists of pieces of Java code that need to be integrated with the join points matched by a pointcut. AspectJ?'s pointcut language is a collection of predicates for detecting structural patterns in source code, like the names of classes or methods, where code needs to be inserted. AOP is a re ective approach in the sense that aspects are expressed as programs about programs, but unlike re ection, conventional AOP leaves out a model of the language implementation, which greatly reduces its expressiveness "

http://www.p-cos.net/documents/s32008.pdf talks about the Brown interpreter, whose paper may be found at

http://www.cs.indiana.edu/pub/techreports/TR161.pdf

--

" Macros and extensibility See also: Racket language extensions

The feature that distinguishes Racket from other languages in the Lisp family is its integrated language extensibility. Racket's extensibility features are built into the module system to allow context-sensitive and module-level control over syntax.[13] For example, the #%app syntactic form can be overridden to change the semantics of function application. Similarly, the #%module-begin form allows arbitrary static analysis of the entire module.[13] Since any module can be used as a language, via the #lang notation, this effectively means a programmer can control virtually any aspect of the language.

The module-level extensibility features are combined with a Scheme-like hygienic macro system, which provides more features than Lisp's S-expression manipulation system,[34][35] Scheme 84's hygienic extend-syntax macros, or R5RS's syntax-rules. Indeed, it is fair to say that the macro system is a carefully tuned application programming interface (API) for compiler extensions. Using this compiler API, programmers can add features and entire domain-specific languages in a manner that makes them completely indistinguishable from built-in language constructs.

The macro system in Racket has been used to construct entire language dialects. This includes Typed Racket—a statically typed dialect of Racket that eases the migration from untyped to typed code,[36] and Lazy Racket—a dialect with lazy evaluation.[37] Other dialects include FrTime (functional reactive programming), Scribble (documentation language),[38] Slideshow (presentation language),[39] and several languages for education.[40][41] Racket's core distribution provides libraries to aid the process of constructing new programming languages.[13]

Such languages are not restricted to S-expression based syntax. In addition to conventional readtable-based syntax extensions, Racket's #lang makes it possible for a language programmer to define any arbitrary parser, for example, using the parser tools library.[42] See Racket logic programming for an example of such a language. "

--

monads as interceptors:

note however that the bind of a monad, when evaluating a line, also gets a function representing all of the succeeding lines at once:

http://www.randomhacks.net/articles/2007/03/12/monads-in-15-minutes

aha... monads can be seen as getting the environment and the current continuation, and deciding what to do on the next step (and returning an environment and a current continuation).. no wonder they are so general... no wonder you can implement continuations within them...

recently, i had observed that if you can manipulate the call stack (e.g. with continuations or goto) and you can manipulate the environment, you have most of metaprogramming. monads are that.
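this is easy to make literal in Haskell: stack Reader (the environment) on Cont (the current continuation) and you get a monad whose actions see both. a hedged sketch using mtl (toy Env, invented names):

  import Control.Monad.Cont
  import Control.Monad.Reader

  type Env = [(String, Int)]
  type M r a = ReaderT Env (Cont r) a

  demo :: M r Int
  demo = callCC $ \abort -> do
    env <- ask                    -- read the environment
    case lookup "x" env of
      Just v  -> abort (v * 2)    -- jump via the captured continuation
      Nothing -> return 0

  -- runCont (runReaderT demo [("x", 21)]) id  ==>  42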

---

this is probably how Ruby on Rails accesses the names of variables to make them significant as e.g. table names:

http://www.trottercashion.com/2011/02/08/rubys-define_method-method_missing-and-instance_eval.html

looks like we could do that same stuff by providing metaprogramming that allows one to run a block while replacing and/or augmenting the lexical env (instance_eval) with one that uses magic __get and __ismember protocols (method_missing, respond_to)
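a rough Haskell analogue of the method_missing half (all names invented): make the 'object' a method table plus a fallback hook; the Rails trick is then just string surgery in the fallback:

  import qualified Data.Map as M
  import Data.List (stripPrefix)

  data Obj = Obj { methods :: M.Map String (String -> String)
                 , missing :: String -> String -> String }  -- the method_missing hook

  send :: Obj -> String -> String -> String
  send o name arg = maybe (missing o name arg) ($ arg) (M.lookup name (methods o))

  -- a Rails-ish dynamic finder: find_by_<column>, synthesized on demand
  table :: Obj
  table = Obj M.empty fallback
    where fallback name arg = case stripPrefix "find_by_" name of
            Just col -> "SELECT * FROM t WHERE " ++ col ++ " = '" ++ arg ++ "'"
            Nothing  -> error ("no method: " ++ name)

  -- send table "find_by_email" "x@y.z"  ==>  "SELECT * FROM t WHERE email = 'x@y.z'"

the instance_eval half is then 'run a block with methods/missing swapped in as the ambient lookup'.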

--

iso-8859-1 1 day ago


MetaML, a language with as many levels of macros as you'd like: http://www.cs.rice.edu/~taha/publications/journal/tcs00.pdf

(implementation as MetaOCaml)

-- quick search related to the above insight that a computation with an environment and a current continuation can be modeled by a monad:

https://www.google.com/search?client=ubuntu&channel=fs&q=monad+%22current+continuation%22+environment&ie=utf-8&oe=utf-8

http://haskell.cs.yale.edu/wp-content/uploads/2011/02/POPL96-Modular-interpreters.pdf http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.136.1656&rep=rep1&type=pdf

www.cs.indiana.edu/~sabry/papers/valuerecursion.ps

http://en.wikipedia.org/wiki/Monad_%28functional_programming%29

http://hackage.haskell.org/packages/archive/mtl/2.0.1.0/doc/html/Control-Monad-Cont.html

http://en.wikibooks.org/wiki/Haskell/Continuation_passing_style

http://www.cs.indiana.edu/~sabry/papers/exteff.pdf

http://www.fceia.unr.edu.ar/~mauro/pubs/mmt/mmt.pdf

--

For instance, consider a sub-Turing-computable language with controlled iteration and selection. This language can integrate a reflective mechanism (analyzing an input program to extract its elements) with program generation. That is, the language can define iterators/cursors over existing programs. The iterators can range over, say, all fields of a class, all arguments of a method, all classes in a package, etc. All program generation should be predicated on an iterator: copies of the quoted code will be generated for each iteration. For example, we could have a code generation expression such as:

  for[f in Field(c), `[ #[Type(f)] #[Name(f)] ; ]]

(The #for primitive is part of our invented syntax, as are the usual `[...] and #[...]. Field, Type, and Name are iterator functions.)
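Template Haskell's reify is an existing instance of this iterate-over-program-structure idea. a hedged sketch (helper name and SomeRecord invented) that loops over the fields of a record type and generates the list of their names at compile time:

  {-# LANGUAGE TemplateHaskell #-}
  import Language.Haskell.TH

  fieldNames :: Name -> Q Exp
  fieldNames ty = do
    TyConI (DataD _ _ _ _ [RecC _ fields] _) <- reify ty  -- the 'iterator' over fields
    listE [ stringE (nameBase n) | (n, _, _) <- fields ]  -- code generated per iteration

  -- usage at a splice site:  $(fieldNames ''SomeRecord)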

---

Perlis, "Languages with self-reference I" and "Languages with self-reference II"

--

http://johnlato.blogspot.com/2012/10/runtime-meta-programming-in-haskell.html

---

generalized arrows: a generalization of haskell's arrows which is isomorphic to multistage languages. also contains the inference rules for type system System FC^\alpha, which i guess is System FC with some multistage stuff added?

http://arxiv.org/pdf/1007.2885.pdf

http://www.cs.berkeley.edu/~megacz/garrows/

p.s. Adam Megacz seems like a smart guy into similar things as me (languages, concurrency, http://www.megacz.com/software/wix/ which is like easylatex (i havent looked at it yet but i bet it's better)). the part of his paper dealing with category theory means he may be a good person to ask about what to read to catch up with that stuff

see also his short essay: http://www.megacz.com/thoughts/monads.vs.arrows.html

also tangentially relevant: http://www.megacz.com/thoughts/ugpl.html http://www.megacz.com/thoughts/what.is.category.theory.html

--

http://axisofeval.blogspot.com/2013/04/a-quasiquote-i-can-understand.html

--

" "

Saturday, September 8, 2012

Having both fexprs and macros

Lexically-scoped fexprs and first-class environments make it simple to do hygienic metaprogramming, using less machinery than hygienic macro systems, and they also require less concepts in the language: there is no need to specify a preprocessing phase.

The downside is that fexprs always incur an interpretative overhead with current technology. Many are convinced that those fexprs that do similar things to what macros do can be partially evaluated: "my research in static analysis leads me to believe that we will be able to erase that overhead for the common case--where first-class macros are used to do the job of compile-time macros" writes Matt Might, for example.

In my new language, Wat, I've implemented both fexprs and macros. Macros are expanded at runtime, when they are first encountered, and their result is memoized in the syntax tree, a technique described in two interesting articles (1, 2) related to the SCM Scheme implementation.

In Wat, MACRO is a special form that can be wrapped around a combiner, and causes calls to that combiner to be memoized in the source tree. An example, LET:

(def let (macro (vau (bindings . body) #ign (cons (list* lambda (map car bindings) body) (map cadr bindings)))))

With a bit of sugar, one can write macros that look almost like in Common Lisp:

(define-macro (until test . body) (list* while (list not test) body))

Macros complicate the language quite a bit; but used in moderation, especially for forms like LET that are basically never changed nor used in a higher-order fashion, they should be unproblematic, and offer a nice speed boost. Fexprs should be used for more complex tasks that require special attention to hygiene, or need to do things that macros can't do, whereas macros could be used for simple transformation and processing tasks, such as UNTIL, above.

Oh, and it should also be noted that these macros enjoy a nice level of hygiene already, by virtue of first-class environments and first-class combiners. For example, UNTIL above doesn't insert the symbols WHILE or NOT into the generated code - it inserts the actual values, therefore being protected from variable shadowing by calling code. Posted by Manuel Simoni at 13:20 "
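the memoize-the-expansion trick is simple to sketch in Haskell (toy AST, invented names): give each macro call site a mutable cache slot; the first eval expands and writes it, later evals reuse it:

  import Data.IORef

  data Expr = Lit Int
            | Add Expr Expr
            | MacroUse (IORef (Maybe Expr)) ([Expr] -> Expr) [Expr]  -- cache, expander, operands

  eval :: Expr -> IO Int
  eval (Lit n)   = return n
  eval (Add a b) = (+) <$> eval a <*> eval b
  eval (MacroUse cache expand args) = do
    m <- readIORef cache
    case m of
      Just e  -> eval e             -- already expanded: skip expansion
      Nothing -> do
        let e = expand args         -- expand at runtime, on first encounter
        writeIORef cache (Just e)   -- memoize the result in the source tree
        eval e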

" Thursday, September 6, 2012 Mixing first-order and higher-order control It's desirable for a language to support exceptions (preferably restartable ones), unwind protection, dynamic binding, and delimited continuations. [Adding Delimited and Composable Control to a Production Programming Environment, Delimited Dynamic Binding]

I've found a tractable way to implement these features in the language I'm currently working on, Wat.

My approach is to totally separate first-order control from higher-order control.

There is a set of Common Lisp-like first-order forms:

    block and return-from that establish and invoke a lexically-scoped one-shot escape continuation, respectively.
    unwind-protect aka "finally". Notably, unwind-protect is only sensitive to return-from, not to aborts via higher-order control.

These forms are implemented natively using JS try/catch and finally.

Restartable exceptions are implemented in terms of these first-order forms and dynamically-bound variables, which are also provided natively.

In addition there's a completely separate set of higher-order control forms from A Monadic Framework for Delimited Continuations.

Delimited continuations are implemented using a technique similar to Exceptional Continuations: ordinary code paths run on the normal JS stack; when a continuation is captured, the stack is unwound frame by frame up to the prompt, and at each frame, a resumption is added to the continuation that is built up during the unwinding. This technique is ten times faster than a naive scheme with heap-allocated stack frames, but currently doesn't support TCO.

First-order control is used for quotidian control flow, whereas higher-order control is used for heavy control flow lifting, such as making a REPL written in direct style work in the browser's asynchronous environment.

This is a quite intuitive model: in the small, one has the usual Common Lisp control flow, including restartable exceptions, whereas in the large, behind the scenes, control flow may be arbitrarily abstracted and composed with the higher-order control forms. Posted by Manuel Simoni at 00:23 "

--

http://axisofeval.blogspot.com/2012/08/an-alternative-api-for-continuations.html

http://axisofeval.blogspot.com/2012/08/understanding-metacontinuations-for.html

--

" otation and terminology 1.3.1 Evaluable expressions A symbol to be evaluated is a variable (occasionally called a symbolic variable to distinguish it from the keyed variable devices of §§ 10–11). A pair to be evaluated is a combination . The unevaluated car of the pair is an operator ; its unevaluated cdr is an operand tree ; and in the usual case that the operand tree is a list, any elements of that list are operands . In the common case that all the operands are evaluated, and all other actions us e the results rather than the operands themselves, the results of evaluating the oper ands are arguments . The result of evaluating the operator is (if type-correct) a combiner , because it specifies how to evaluate the combination. A combiner that acts direct ly on its operands is an operative (or in full, an operative combiner ). A combiner that acts only on its arguments is an applicative (in full an applicative combiner ), because the apply combiner ( § 5.5.1) requires an applicative rather than an operative. Rationale: Most of these basic terms are adopted from [AbSu?96, § 1.1]; those not found there are operand tree , combiner , applicative , and operative . The term procedure is avoided by Kernel because its use in the literature is ambi guous, meaning either what is here called an applicative (in discus sions of Scheme), or what is here called a combiner (in discussions involving both appli catives and operatives). There is an adjective call-by-text in the literature meaning what is here called operative; but the Kernel term applicative has no equivalent of the form call-by-X . Adjectives call-by-X are used in general to specify when the operands are evaluate d to arguments, as call-by- value (eager) or call-by-name (lazy) ([CrFe?91]); but applicative is intended to mean only that the combiner depends on the arguments, without any impl ication as to when the arguments are computed "

kernel operatives = fexprs

--

random thesis

http://www.cs.rice.edu/~taha/publications/thesis/thesis.pdf

-- metaocaml tutorial

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.103.2543&rep=rep1&type=pdf

--

perhaps the compiler could engage in speculative reduction of function arguments, even if it is not statically known if the fn is an fexpr, provided that it had a copy of the original source in case the fn turns out to be an fexpr.
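a sketch of the data shape this would need (invented names): ship each argument both ways, a speculatively reduced value plus the original source, and let the callee pick:

  data Expr = Lit Int | Var String | App Expr [Expr]    -- toy source representation

  data Arg v = Arg { speculative :: v      -- the speculatively reduced value
                   , source      :: Expr } -- original source, in case of an fexpr

an ordinary function reads only speculative; an fexpr reads only source. under lazy evaluation the unused half costs nothing unless forced, which is what would make the speculation cheap.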

---

metalua

--

see also [1]

---

mb to make noncommutative monads and make the call stack/env idea more visible, have:

this is kinda like that paper of metaprogramming by stepper, i guess

---

perl6 has some interesting stuff with regexs that can call other regexs, and mutable grammars:

http://www.i-programmer.info/professional-programmer/i-programmer/5002-perl-6-and-parrot-in-conversation-with-moritz-lenz.html

---

Clojure's style of macro system is known as a "procedural" macro system because the macros can be any arbitrary procedures which return code as data. There are other types of macro systems, like Scheme's syntax-rules, which more directly embrace the fact that most macros are tree rewrites.

chc 4 days ago

Syntax-quote is a little bit different from quasiquote. Clojure's macros look a lot like Common Lisp's unhygienic macros, but Clojure's syntax-quote offers a sort of "hygiene by default" by forcing namespaces on variables and making it hard to introduce non-gensym symbols accidentally.


---

nimrod-lang.org/talk01/slides.html#(24)

Nimrod's focus is meta programming; macros are used

1. to avoid code duplication / boilerplate:

template htmlTag(tag: expr) {.immediate.} =
  proc tag(): string = "<" & astToStr(tag) & ">"

htmlTag(br)
htmlTag(html)

echo br()

2. for control flow abstraction:

template once(body: stmt) =
  var x {.global.} = false
  if not x:
    x = true
    body

proc p() =
  once:
    echo "first call of p"
  echo "some call of p"

p()
once:
  echo "new instantiation"
p()

3. for lazy evaluation:

template log(msg: string) =
  if debug:
    echo msg

log("x: " & $x & ", y: " & $y)

4. to implement DSLs:

html mainPage:
  head:
    title "now look at this"
  body:
    ul:
      li "Nimrod is quite capable"

echo mainPage()

Produces:

<html> <head><title>now look at this</title></head> <body>
<ul><li>Nimrod is quite capable</li></ul>
</body> </html>

Implementation:

template html(name: expr, matter: stmt) {.immediate.} =
  proc name(): string =
    result = "<html>"
    matter
    result.add("</html>")

template nestedTag(tag: expr) {.immediate.} =
  template tag(matter: stmt) {.immediate.} =
    result.add("<" & astToStr(tag) & ">")
    matter
    result.add("</" & astToStr(tag) & ">")

template simpleTag(tag: expr) {.immediate.} =
  template tag(matter: expr) {.immediate.} =
    result.add("<$1>$2</$1>" % [astToStr(tag), matter])

nestedTag body
nestedTag head
nestedTag ul
simpleTag title
simpleTag li

These expand to:

template html(name: expr, matter: stmt) {.immediate.} =
  proc name(): string =
    result = "<html>"
    matter
    result.add("</html>")

template head(matter: stmt) {.immediate.} =
  result.add("<" & astToStr(head) & ">")
  matter
  result.add("</" & astToStr(head) & ">")

...

template title(matter: expr) {.immediate.} =
  result.add("<$1>$2</$1>" % [astToStr(title), matter])

template li(matter: expr) {.immediate.} =
  result.add("<$1>$2</$1>" % [astToStr(li), matter])

html mainPage:
  head:
    title "now look at this"
  body:
    ul:
      li "Nimrod is quite capable"

echo mainPage()

Is translated into:

proc mainPage(): string =
  result = "<html>"
  result.add("<head>")
  result.add("<$1>$2</$1>" % ["title", "now look at this"])
  result.add("</head>")
  result.add("<body>")
  result.add("<ul>")
  result.add("<$1>$2</$1>" % ["li", "Nimrod is quite capable"])
  result.add("</ul>")
  result.add("</body>")
  result.add("</html>")

Compile time function evaluation optimizes 'mainPage()' into:

"<html><head><title>now look at this</title></head><body>..."

...

You name it, Nimrod got it (except fexprs ;-):

    compile time function evaluation; including staticRead and staticExec
    declarative (template) and imperative (macro) AST based macros: both hygienic and dirty
    term rewriting macros; side-effect and alias analysis constraints
    source code filters
    programmable annotation system ("pragmas")
    quasi quoting:

macro check(ex: expr): stmt =
  var info = ex.lineInfo
  var expString = ex.toStrLit
  result = quote do:
    if not `ex`:
      echo `info`, ": Check failed: ", `expString`

check 1 < 2

---

hmm, potential problem with the capitalization though: if you replace an API implementation with another one using methodMissing, you make the client change to capitalized method names. so would have to provide a way to 'break thru' this. Just using the unsafeMethodMissing is dirty. Perhaps provide a way to use normal methodMissing but for the implementor (not the client) to statically assert that certain methods are provided? Or, even better, to allow macros to generate these assertions at compile-time? The latter would allow the implementation of generic methodMissings that can target multiple APIs at compile-time, and which still generate the appropriate assertions so that the client can use lower-case method names. It would not allow you to replace compile-time genericity with run-time, though.

But if we also allow the required assertions to be added by a third-party, then since someone has to give the generic module the necessary parameters to choose or specify the particular API that is being generated in this case, that same party can also attach the required assertions.

So i think the problem is/must be solved in this way, unless we change the capitalization.

---

Long key words discourage metaprogramming

(doug's idea)

--

C# DynamicObject (like Python protocols)

--

http://scalamacros.org/

--

http://jlongster.com/Stop-Writing-JavaScript-Compilers--Make-Macros-Instead

--

jlongster 1 day ago

The next big thing sweet.js is working on is modules. You will be able to actually pull in macros just like you would ES6 libs:

import { foo, bar } from "macros";

The scoping of macros will all stay intact; any other macros from "macros.js" will be available, but the code that `foo` or `bar` expands to will be able to access them (this is really nice for helper macros and other things).

So yes, modules are going to be a big part of distributing and composing macros.

6cxs2hd6 1 day ago

Racket shows how, with strong module support, a whole "tower of languages a.k.a. macros" can work reliably. Else not so much.

klibertp 1 day ago

Also, macros in racket are lexically scoped. Any modern macro system should follow their design, IMHO.

dherman 1 day ago

Agreed, and happily sweet.js is very consciously modeled after Racket.


--

smrtinsert 1 day ago

unless I can import a macro like clojure then no thanks. javascript debugging is already a major nightmare since any object can be edited anywhere, i dont need macros multiplying that disaster.

klibertp 1 day ago

Actually you could easily use macros to reduce this nightmare. And yes, I wish for sane module system in JS too - my ideal is Racket in this regard, especially after recent addition of nested submodules.


--

"Functions compute; macros translate." -- David Moon, via https://news.ycombinator.com/item?id=7028544

--

moron4hire 1 day ago

I think the idea would be to do something like Racket's approach to Scheme, wherein the vast majority of the language is implemented via macros on a very small core language.


--

https://github.com/Gozala/wisp

--

the models of computation also suggest extension mechanisms:

turing/imperative: gotos, self-modifying code, mutable state, call stack manipulation (including continuations)
lambda calc/functional: higher order functions
combinatorial: reductions
grammar? concatenative? logic? mu-recursive? relational?

where do macros come from? grammar? combinatorial? none of these?

--

" Meta means that you step back from your own place. What you used to do is now what you see. What you were is now what you act on. Verbs turn to nouns. What you used to think of as a pattern is now treated as a thing to put in the slot of an other pattern. A meta foo is a foo in whose slots you can put foos. " -- "Growing a Language" by Guy L. Steele Jr

---

Micropython's 'inline assembler' provides a great example for when you might want to redefine semantics while keeping the underlying language's syntax:

https://www.kickstarter.com/projects/214379695/micro-python-python-for-microcontrollers/posts/667580

this sort of thing should be definitely supported in Oot.

---

a somewhat separate inspiration from Micropython's inline assembler is that in this case, the semantics of the inline assembly is not defined in terms of the micropython language, or a micropython core language, or metaprogrammy stuff, but rather directly in terms of the native code to be emitted. i guess we should support this but i'll have to think about how to do it in a 'general way'.

---

note: in Oot, some annotations will be computed, e.g. will use __get protocols to dynamically compute the presence/value of an annotation.

this could be used e.g. within the compiler, to allow complicated stuff to be annotated on the source tree, to allow complex optimization analyses spanning different compile steps to be added, without having to change the basic data structures from the un-optimized, simple compiler.
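a minimal sketch of computed annotations (all names invented): lookup checks stored annotations first, then falls back to a __get-style hook that computes one from the node:

  import qualified Data.Map as M

  data Annotated n = Annotated
    { theNode  :: n
    , stored   :: M.Map String String           -- annotations written directly
    , computed :: n -> String -> Maybe String } -- __get-style computed fallback

  getAnn :: Annotated n -> String -> Maybe String
  getAnn a key = case M.lookup key (stored a) of
    Just v  -> Just v
    Nothing -> computed a (theNode a) key

an optimization pass could then attach its analysis as a computed hook without changing the tree's basic data structures.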

--

call stack manipulation primitives; goto could be available and avoiding 'goto' could be a mere convention; procedure calls could be a special case of goto (but by convention, you'd almost always use procedure calls, not goto)

---

yeah, just allow direct manipulation of call stack; in place of core stack operations such as push, pop, have list ops, stack = list = graph. like forth. delimited continuations can then be programmed.
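a toy Haskell sketch of that (invented representation): if the call stack is a plain list of frames, a delimited capture is just break at the nearest prompt:

  data Frame = Op (Int -> Int)   -- a pending computation waiting for a value
             | Prompt            -- the delimiter

  type Stack = [Frame]

  -- returning a value up through the stack
  ret :: Stack -> Int -> Int
  ret []           v = v
  ret (Op f   : k) v = ret k (f v)
  ret (Prompt : k) v = ret k v

  -- capture the continuation delimited by the nearest prompt
  capture :: Stack -> (Stack, Stack)   -- (captured frames, remaining outer stack)
  capture = break isPrompt
    where isPrompt Prompt = True
          isPrompt _      = False

reinstating captured frames later (possibly more than once) is just list concatenation, which is the forth-like 'stack = list' point.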

---

pdf pg 23 described a problem with macros: http://iris.lib.neu.edu/cgi/viewcontent.cgi?article=1037&context=comp_sci_diss

"On the relationship between laziness and strictness", Stephen Chang

basically, you cannot always pull a subexpression out of an expression into its own expression via 'let', because that subexpression might be under a macro!

---

i guess both https://github.com/intridea/grape and http://www.claassen.net/geek/blog/2007/02/state-aware-programming-in-c-part-ii.html (and maybe Haskell guards? And so also some uses of Contexts in type declarations, according to https://www.haskell.org/haskellwiki/GADTs_for_dummies; and maybe also Erlang message pattern matching) can be generalized into the following pattern:

Often:

The most easy-to-read way to express this appears to be to use DSL-specific keywords before each chunk of code to indicate what the condition is (and to do stuff like binding variables), and to use lexical nesting to indicate the conjunction of conditions.

So, oot should support defining this sort of thing easily.

hmm.. after thinking about that for a few minutes, i'm kind of excited about this. This seems to be a good example of what i want when i say i want a ladder of increasingly powerful metaprogramming constructs. By providing a construct specifically for this structure, as opposed to just letting people use generic metaprogramming like for Ruby DSLs or Haskell monads, we could make it more easily readable even by people who aren't familiar with the DSL for a particular domain, by allowing that person to recognize, "oh, these things are metaprogramming keywords specifying some condition, and the code in brackets after it (the nested code) is more conditions, and then at the bottom of the nesting is actual code to be executed", and "oh, this is binding some DSL-computed value to a variable". Note that we could provide support for the common operation of binding to a variable.

the key here may be to really just consider this as defining a language/set of conventions, as opposed to a specific programming construct. Maybe someone will find our construct too restrictive, and try to do it a different way; but if we have conventions to make it easy for the reader to see that the intent of a particular metaprogramming keyword is to be a condition on the nested code, that convention can be supported by the other metaprogramming technique, and then it still makes things easier to read.
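a sketch of the shape in Haskell (all names invented): each 'keyword' is a combinator that checks a condition, binds a value, and runs the nested block, so lexical nesting is conjunction:

  type Req = [(String, String)]

  -- a condition keyword: bind a value and continue, or fail the whole nest
  whenHas :: String -> (String -> Req -> Maybe a) -> Req -> Maybe a
  whenHas key body req = lookup key req >>= \v -> body v req

  handler :: Req -> Maybe String
  handler =
    whenHas "user" $ \user ->        -- condition + binding
      whenHas "token" $ \tok ->      -- nested condition = conjunction
        \_req -> Just ("hello " ++ user ++ ", token " ++ tok)

a dedicated construct would standardize this shape so a reader can spot 'condition keyword / binding / nested block' even without knowing the particular DSL.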

---

some results from an interesting search:

https://www.google.com/search?client=ubuntu&channel=fs&q=dsl+patterns&ie=utf-8&oe=utf-8
http://martinfowler.com/dslCatalog/
http://msdn.microsoft.com/en-us/magazine/ee291514.aspx
http://en.m.wikipedia.org/wiki/Domain-specific_language
http://www.dmst.aueb.gr/dds/pubs/jrnl/2000-JSS-DSLPatterns/html/dslpat.html
http://stackoverflow.com/questions/9942155/custom-java-query-class-dsl-builder-pattern-static-imports-or-something-else
http://www.infoq.com/articles/internal-dsls-java
Design Principles for Internal Domain-Specific Languages: A ... www.hillside.net/plop/.../gunther-2.pdf

---

READABLE dsl definitions

---

http://scalamacros.org/news/2011/09/18/macros-in-a-typed-language.html

---