Table of Contents for Programming Languages: a survey
Ruby's method_missing, respond_to?, instance_eval
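A rough Python analog of these hooks (the find_by_ accessor is a made-up example): __getattr__ plays the role of Ruby's method_missing, firing only when normal attribute lookup fails, and hasattr loosely corresponds to respond_to?.

```python
class DynamicFinder:
    def __getattr__(self, name):
        # Called only when normal lookup fails, like method_missing.
        # Synthesize find_by_* accessors on the fly, as a Ruby ORM might.
        if name.startswith("find_by_"):
            field = name[len("find_by_"):]
            return lambda value: f"SELECT * WHERE {field} = {value!r}"
        raise AttributeError(name)

d = DynamicFinder()
print(d.find_by_name("ada"))    # SELECT * WHERE name = 'ada'
print(hasattr(d, "find_by_x"))  # True: the hook answers for it
print(hasattr(d, "other"))      # False: the hook raises AttributeError
```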
OMeta: http://www.vpri.org/pdf/tr2008003_experimenting.pdf
difficult-to-read errors
type-checked (? but see http://people.cs.uchicago.edu/~jacobm/pubs/templates.html ) and scoped
more expressive than C's #define (todo: how?) but, like macros, not in the base language
todo: see also http://stackoverflow.com/questions/180320/are-c-templates-just-macros-in-disguise and http://programmers.stackexchange.com/questions/53441/are-c-templates-just-a-kind-of-glorified-macros
rules that match on the AST and replace the matched portion with something else. The 'something else' can be constrained to a simple rearrangement of the matched material, or a facility can be provided to allow arbitrary computation.
Macros can only 'match' on patterns like function calls, but once matched they can compute arbitrarily with the AST passed to them as an argument.
(how similar is this to fexprs?)
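A sketch of this idea in Python, using the stdlib ast module: the "macro" below matches a function-call pattern (calls to the invented name square) and runs arbitrary code to build the replacement subtree.

```python
import ast

class SquareMacro(ast.NodeTransformer):
    def visit_Call(self, node):
        self.generic_visit(node)
        # Match the pattern: a call to the name `square` with one argument.
        if (isinstance(node.func, ast.Name) and node.func.id == "square"
                and len(node.args) == 1):
            arg = node.args[0]
            # Replace square(x) with x * x. Note the classic macro hazard:
            # the operand is duplicated, so it would be evaluated twice.
            return ast.BinOp(left=arg, op=ast.Mult(), right=arg)
        return node

tree = ast.parse("result = square(3) + square(4)")
tree = ast.fix_missing_locations(SquareMacro().visit(tree))
ns = {}
exec(compile(tree, "<macro>", "exec"), ns)
print(ns["result"])  # 25
```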
macrolet (Common Lisp's construct for defining locally scoped macros)
wikipedia example: " As a simple illustration of how fexprs work, here is a fexpr definition written in the Kernel programming language, which is similar to Scheme. (By convention in Kernel, the names of fexprs always start with $.)
($define! $f
  ($vau (x y z) e
    ($if (>=? (eval x e) 0)
         (eval y e)
         (eval z e)))) "
wikipedia disadvantage:
"
At the 1980 Conference on Lisp and Functional Programming, Kent Pitman presented a paper "Special Forms in Lisp" in which he discussed the advantages and disadvantages of macros and fexprs, and ultimately condemned fexprs. His central objection was that, in a Lisp dialect that allows fexprs, static analysis cannot determine generally whether an operator represents an ordinary function or a fexpr — therefore, static analysis cannot determine whether or not the operands will be evaluated. In particular, the compiler cannot tell whether a subexpression can be safely optimized, since the subexpression might be treated as unevaluated data at run-time.
MACRO's offer an adequate mechanism for specifying special form definitions and ... FEXPR's do not. ... It is suggested that, in the design of future Lisp dialects, serious consideration should be given to the proposition that FEXPR's should be omitted from the language altogether.[8]"
wikipedia advantage example:
" In the programming language Scheme, and is a macro, because (and #f (/ 1 0)) must not evaluate the division. This means it cannot be used in higher-order functions; it is second-class. In Kernel, one has $and? defined by
($define! $and?
  ($vau x e
    ($cond ((null? x)         #t)
           ((null? (cdr x))   (eval (car x) e))
           ((eval (car x) e)  (apply (wrap $and?) (cdr x) e))
           (#t                #f))))
which is a first-class object — technically, a fexpr — and can thus be used in higher-order functions, such as map. "
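A fexpr-style sketch in Python (all names invented; operands are represented as source strings rather than Kernel operand trees): like $and?, the operator receives its operands unevaluated plus an environment, evaluates only what it needs, and is an ordinary first-class function that can be passed to map.

```python
def fexpr_and(operands, env):
    # Evaluate operands left to right, stopping at the first false one.
    result = True
    for text in operands:
        result = eval(text, env)  # operand evaluated only on demand
        if not result:
            return False
    return result

env = {"x": 0}
# Like (and #f (/ 1 0)): the division is never evaluated.
print(fexpr_and(["x != 0", "1 / x > 1"], env))  # False
# First-class: usable with higher-order functions such as map.
print(list(map(lambda ops: fexpr_and(ops, env), [["x == 0"], ["x != 0"]])))  # [True, False]
```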
call-by-text vs laziness
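The contrast, sketched in Python (names are made up): a thunk gives laziness, so evaluation is postponed but the operand is opaque; source text (call-by-text, as with fexprs) is additionally inspectable and rewritable before it is ever evaluated.

```python
lazy_operand = lambda: 1 / 0   # laziness: never fails unless forced
text_operand = "1 / 0"         # call-by-text: just data until eval'd

def guarded(flag, thunk):
    # Forces the thunk only when flag is true.
    return thunk() if flag else False

print(guarded(False, lazy_operand))          # False; no division occurs
print(eval(text_operand.replace("0", "2")))  # 0.5: text rewritten, then eval'd
```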
instead of macros, could give a custom parser and mark blocks of text for parsing by the custom parser
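A minimal Python sketch of this scheme (the %%name ... %%end markers and the 'words' parser are invented): marked blocks are cut out of the source and handed to a registered custom parser, which emits host-language code in their place.

```python
import re

def expand(source, parsers):
    def replace(match):
        name, body = match.group(1), match.group(2)
        return parsers[name](body)  # custom parser produces host code
    return re.sub(r"%%(\w+)\n(.*?)%%end", replace, source, flags=re.DOTALL)

# A toy "parser" that turns a list of words into a Python list literal.
parsers = {"words": lambda body: repr(body.split())}
print(expand("xs = %%words\nfoo bar baz\n%%end", parsers))  # xs = ['foo', 'bar', 'baz']
```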
the models of computation also suggest extension mechanisms:
turing/imperative: gotos, self-modifying code, mutable state, call stack manipulation (including continuations)
lambda calc/functional: higher-order functions
combinatorial: reductions
grammar? concatenative? logic? mu-recursive? relational?
where do macros come from? grammar? combinatorial? none of these?
see Martin's [1]. Martin defines programming paradigms by constraints, and it seems to me that by evading these constraints (e.g. by using GOTO) we get metaprogrammy stuff.
" There is a reasonable fear that attributes will be used to create language dialects. The recommendation is to use attributes to only control things that do not affect the meaning of a program but might help detect errors (e.g. [[noreturn?]]) or help optimizers (e.g. [[carries_dependency?]]) " -- http://www.stroustrup.com/C++11FAQ.html#attributes