Table of Contents for Programming Languages: a survey

Chapter : metaprogramming: macros etc

source filters

C #define

C++ templates

difficult-to-read errors

type-checked (? but see ) and scoped

more expressive than C #define (todo how?) but not in the base language, like macros

todo see also

AST transformations

rules that match on the AST and replace the matched portion with something else. the 'something else' can be constrained to a simple rearrangement of the matched stuff, or a facility could be provided to allow arbitrary computation

non-syntactic macros vs. syntactic macros

"Macro systems—such as the C preprocessor described earlier—that work at the level of lexical tokens cannot preserve the lexical structure reliably. Syntactic macro systems work instead at the level of abstract syntax trees, and preserve the lexical structure of the original program. The most widely used implementations of syntactic macro systems are found in Lisp-like languages such as Common Lisp, Clojure, Scheme, ISLISP and Racket. These languages are especially suited for this style of macro due to their uniform, parenthesized syntax (known as S-expressions). In particular, uniform syntax makes it easier to determine the invocations of macros." --


macros can only 'match' on patterns like function calls, but once matched can arbitrarily compute with the AST provided to them as an argument

this example from Clojure shows how macros are somewhat like functions, except that they are expanded before their arguments are evaluated. In this case the macro is "->>":

"a threading macro (sometimes called “pipeline operator”) is used to perform a chain of transformations.

    (->> (range 5 10)   ;; (5 6 7 8 9) list
         (filter odd?)  ;; (5 7 9)
         (map inc)      ;; (6 8 10)
         (reduce +))    ;; 24

" -- from [1]

what do macros do that higher-order lazy functions can't?

i'm not sure, but here are some notes:

platz 2 hours ago

I wonder how necessary macros are once you add call-by-name parameters, e.g. in Scala.


vog 1 hour ago

Or, once you have lazy evaluation (like in Haskell), for that matter.


jlarocco 1 hour ago

Laziness can achieve a similar thing in that case (less elegantly, IMO), but there are other uses for macros that can't be solved with laziness.


mbrock 1 hour ago

Look at the uses for Template Haskell, for example.

That's basically Haskell's macro system. And it's used quite a lot, though it's kind of arcane.

If you want to create an abstraction that defines one or several data types, you'll think hard about whether you can use some kind of type-level programming instead—but if that's not possible, or not convenient, you can use TH macros.

For example, the `lens` package defines TH macros for creating special kinds of accessors that are tedious to write by hand.


kazinator 2 hours ago

Ah, but that is not the case in some languages in which arguments are evaluated lazily. Usually the iron-man version of this question is: "if I have a non-strictly evaluated language with higher order functions, do I still need macros?"

A part of the answer is: you probably don't need the kinds of macros which cover up machine-generated lambdas, which simulate non-strict evaluation in strictly evaluated Lisp programs! Your language's compiler already has these "macros" built in.

The argument why "you always need macros no matter what else you have" is that your language has "hard-coded macros": the grammar rules in a compiler, which match patterns and transform bits and pieces to produce AST fragments. If you don't have macros, then that set of "hard-coded macros" is all you have.

So, even in a functional language with nonstrict evaluation, you're using macros. It's hard to make a convincing argument that they provide all the expressivity you would conceivably ever need. (And their existence and use defeats any argument that you don't need macros at all).


frou_dh 2 hours ago

In my Lisp-esque language I use a temporary macro to automate the creation of some similar standard library procedures. This happens at runtime in the stdlib source file that is loaded:

    ;; 1. Define procedures named int? float? etc. that test the type of a value.
    (def def-type-predicate (mac (type-name)
         `(def ,(string-to-symbol (join "" $type-name "?"))
               (proc (x) (eq? (type x) ',type-name)))))
    (def-type-predicate int)
    (def-type-predicate float)
    (def-type-predicate bool)
    (def-type-predicate string)
    (def-type-predicate symbol)
    (def-type-predicate file)
    (def-type-predicate nil)
    (def-type-predicate pair)
    (def-type-predicate procedure)
    (def-type-predicate macro)
    (zap def-type-predicate)  ;; discard the temporary macro

The crucial thing is that the (def foo? ...) form produced by each macro invocation then gets evaluated in the root/top-level environment, and so results in a "global" procedure definition. Using them:

    (string? (add 2 2))
    => FALSE
    (float? 4.7)
    => TRUE
    (procedure? string?)
    => TRUE

I thought it was a nice contained example of "code writing code" in the data realm.


not sure what a 'scope type macro' is but this guy says it's important:

hzhou321 1 hour ago

I agree. Macros are what made lisp, not parentheses.

However, if we broaden the concept a little, we can do this type of macro for any language with a meta-layer such as MyDef?. E.g. if you observe a certain foreach pattern in Java, you can make a macro for that pattern and have the meta-layer look out for it and translate it for you (like having an automatic translator between you and javac). All you need is a scope type macro. Conventional macro packages like M4 do not provide such, but MyDef? does. Example:

    &call each_member, AList
        # java code that works on $(member)

where the macro may be defined as

    subcode: each_member(list)
        Enumeration e = $(list).elements();
        while (e.hasMoreElements()){
            String name = (String) e.nextElement();

Where it is understood that the definition can be any literal block pattern.


macros and static typing

Most implementations of macros are essentially dynamically typed, in the sense that even if the resulting code is in a statically typed language, the macro application itself is not typed, leading to unanticipated errors (at least they are compile-time, though). [2]

macros and error messages

Macros can lead to difficult-to-read error messages [3].

syntax extensions/procedural macros

"Syntax extensions look like macros or annotations, but they cause the compiler to execute custom code to (ideally) modify the AST." [4].

code generation and source filters

Some languages have built-in code generation hooks, where code is run at (before) compile-time and can modify or generate code at the level of entire files, modules, or projects. [5]

hygienic macros


anaphoric macros

"The loop macro in ANSI Common Lisp is anaphoric in that it binds `it` to the result of the test expression in a clause."

reader macros


run-time macros

(how similar is this to fexprs?)

problem with apply in macros

and Wart's solution...:

lexically scoped macros


applications of macros


quasiquote, unquote, splice: see

reification of expressions, AST

call-by-text / fexprs

wikipedia example: " As a simple illustration of how fexprs work, here is a fexpr definition written in the Kernel programming language, which is similar to Scheme. (By convention in Kernel, the names of fexprs always start with $.)

    ($define! $f
       ($vau (x y z) e
          ($if (>=? (eval x e) 0)
               (eval y e)
               (eval z e)))) "

wikipedia disadvantage:


At the 1980 Conference on Lisp and Functional Programming, Kent Pitman presented a paper "Special Forms in Lisp" in which he discussed the advantages and disadvantages of macros and fexprs, and ultimately condemned fexprs. His central objection was that, in a Lisp dialect that allows fexprs, static analysis cannot determine generally whether an operator represents an ordinary function or a fexpr — therefore, static analysis cannot determine whether or not the operands will be evaluated. In particular, the compiler cannot tell whether a subexpression can be safely optimized, since the subexpression might be treated as unevaluated data at run-time.

    MACRO's offer an adequate mechanism for specifying special form definitions and ... FEXPR's do not. ... It is suggested that, in the design of future Lisp dialects, serious consideration should be given to the proposition that FEXPR's should be omitted from the language altogether.[8]"

wikipedia advantage example:

" In the programming language Scheme, `and` is a macro, because `(and #f (/ 1 0))` must not evaluate the division. This means it cannot be used in higher-order functions; it is second-class. In Kernel, one has `$and?` defined by

    ($define! $and?
       ($vau x e
          ($cond ((null? x)         #t)
                 ((null? (cdr x))   (eval (car x) e))
                 ((eval (car x) e)  (apply (wrap $and?) (cdr x) e))
                 (#t                #f))))

which is a first-class object — technically, a fexpr — and can thus be used in higher-order functions, such as map. "

todo call-by-text vs laziness (call-by-need)


macros vs fexprs


instead of macros, could give a custom parser and mark blocks of text for parsing by the custom parser

Applications of macros

"Felleisen conjectures[14] that these three categories make up the primary legitimate uses of macros in such a system":

1. Evaluation order "Macro systems have a range of uses. Being able to choose the order of evaluation (see lazy evaluation and non-strict functions) enables the creation of new syntactic constructs (e.g. control structures) indistinguishable from those built into the language. For instance, in a Lisp dialect that has cond but lacks if, it is possible to define the latter in terms of the former using macros. For example, Scheme has both continuations and hygienic macros, which enables a programmer to design their own control abstractions, such as looping and early exit constructs, without the need to build them into the language."

2. Data sub-languages and domain-specific languages "Next, macros make it possible to define data languages that are immediately compiled into code, which means that constructs such as state machines can be implemented in a way that is both natural and efficient.[13]"

3. Binding constructs "Macros can also be used to introduce new binding constructs. The most well-known example is the transformation of let into the application of a function to a set of arguments."

"Others have proposed alternative uses of macros, such as anaphoric macros in macro systems that are unhygienic or allow selective unhygienic transformation."



See also M4 section in [Self-proj-plbook-plChPreprocLangs].


Macros can make programs harder to debug




crystal's templateish macro example

" Crystal uses macros to achieve that while reducing boilerplate code. This example is taken from Kemal, an awesome web framework for Crystal.

    HTTP_METHODS = %w(get post put patch delete options)

    {% for method in HTTP_METHODS %}
      def {{method.id}}(path, &block : HTTP::Server::Context -> _)
        Kemal::RouteHandler::INSTANCE.add_route({{method}}.upcase, path, &block)
      end
    {% end %}

Here’s how the DSL declaration is done in Kemal, looping through the HTTP_METHODS array to define a method for each HTTP verb. By the way, macros are evaluated at compile-time, meaning that they have no performance penalty. "