" Open implementations
Many older and more "dynamic" high-level languages (Lisps, Smalltalks, Forths) were designed around a kind of uniform programs-as-data model, and the presence / presupposition that the compiler would always be in-process with your program: programs thus more commonly invoked (and extended) the implementation significantly at runtime, did a lot of dynamic metaprogramming, reflection, and so forth. This was maybe a kinder, gentler time when "arbitrary code execution" wasn't quite so synonymous with "security nightmare"; but it also had a sort of internal logic, represents a design aesthetic that puts pressure on the language machinery itself to be programmable: pressure to keep the language, its syntax, its type system, its compilation model and so forth all simple, uniform, programmable.
We've since been through a few different eras of language sensibilities around this sort of thing, including some imaginary mobile-code stories like Telescript, Obliq, and eventually JVM/CLR. These latter were weird since they tried to be mobile (which rarely worked), and tried to have semi-open implementations (at least compilers-as-libraries and some access to the bytecode loaders) but didn't quite make it to the point where it was easy or obvious to do source-level metaprogramming (with the notable exceptions of F# quotations and F# type providers). But through all this, in the background there's been this somewhat unfortunate, competing "grown-up" model that tends to dominate mainstream languages (everything from FORTRAN to C++): a pretty complex grammar and AST, a pretty hairy compilation model, a very heavy batch-compiler that's definitely not part of the normal process runtime, and programs that seldom do any metaprogramming, even in cases where it'd be appropriate. Recent "compiled" languages have adopted this style, I suspect in part because LLVM is simply shaped that way, and I suspect also in part as a response to negative experiences with both JVM/CLR environments and overzealous use of metaprogramming in scripting languages.
I don't think, however, that the baby ought to be thrown out with the bathwater. I don't think a few bad open implementations invalidates the idea, any more than a few bad static type systems invalidates that idea. They can be done well. Julia for example has quite a nice static type system and compiler, but also a uniform syntax that's friendly to dynamic metaprogramming and JIT'ing. There are also several static metaprogramming and staged-programming systems: MetaOcaml, Template Haskell, ScalaMeta and so forth. So .. there's a spectrum, a design space.
I'm not sure exactly where to go with this topic, except to say I'm a bit dissatisfied with how hard it is to do tooling for current languages, how large the feedback cycle is between a language and its own (meta)programming tools, how distant the tools are from the users, and perhaps to point out that dynamic compilation is not entirely dead: we appear to be entering an era with a new high-integrity universal bytecode sandbox, designed for mobile code and dynamic JIT'ing, and with a lot of industrial support. It might be an interesting time to consider projects (even "static" ones) that take a slightly more nuanced view of the code/data relationship, the program/metaprogram/compiler relationship, and make the whole compilation model a little more .. pliant (yes that was a Pliant reference and if you remember what that was, congratulations you've been on the internet too long, here's your TUNES badge of merit). "
-- "What next?" by Graydon Hoare
discussion at: http://lambda-the-ultimate.org/node/5466 https://news.ycombinator.com/item?id=15051645
---
bjoli 6 days ago
One thing that makes Racket shine is its macro facilities. syntax-case is nice and all that, but Jesus Christ in a chicken basket I wish Scheme had standardised on syntax-parse.
syntax-case vs. syntax-parse isn't, and will never be, a fair comparison. Not only is syntax-parse more powerful, it also provides the users of your macros with proper error messages. It blows both unhygienic and other hygienic macro systems out of the water for anything more complex than very basic macros.
agumonkey 6 days ago
Here's the doc for the curious http://docs.racket-lang.org/syntax/stxparse-intro.html
Interesting system indeed
rkallos 6 days ago
100% agreed. After using syntax-parse, it pains me to use anything else. It's a gem.
---
" Compile-time AST Macros
Some variant of Lisp (or Scheme?) was probably one of the first implemented FP languages; and Lisps tend to have compile-time AST macros that allow you to transform sections of the program at compile-time.
But compile-time code transformations are not unique to Lisp; apart from other FP languages that have them, like Template Haskell or Scala Macros, many languages have some sort of compile-time code transformation. From Java Annotation Processors to my own MacroPy project in Python, it turns out that compile-time AST macros are just as feasible in imperative languages, doing imperative programming. "
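To make the idea concrete, here is a minimal sketch (mine, not from the quoted post) using only Python's standard-library ast module: parse source into a tree, rewrite the tree, then compile and run the transformed code. A real system such as MacroPy hooks a transform like this into the import machinery so it runs at module-load/compile time rather than via an explicit exec.

import ast

class SwapAddToMul(ast.NodeTransformer):
    """Toy 'macro': rewrite every `a + b` in the tree into `a * b`."""
    def visit_BinOp(self, node):
        self.generic_visit(node)              # rewrite nested expressions first
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

source = "print(2 + 3)"
tree = ast.parse(source)                      # source text -> AST
tree = SwapAddToMul().visit(tree)             # AST -> transformed AST
ast.fix_missing_locations(tree)
exec(compile(tree, "<macro-demo>", "exec"))   # prints 6, not 5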
---
" Compile-time programming
Zig has pretty strong compile-time programming support. For example, its printf formatting capability is all written in userland code [1]. It doesn't at this moment support code-generation like D's mixins but I personally have not found this too problematic.
Generic functions can be written in a duck-typing fashion. With compile-time assertions the inputs can be limited to what they need pretty clearly and the errors during usage are pretty self-explanatory.
error Overflow;
pub fn absInt(x: var) -> %@typeOf(x) {
    const T = @typeOf(x);
    comptime assert(@typeId(T) == builtin.TypeId.Int); // must pass an integer to absInt
    comptime assert(T.is_signed); // must pass a signed integer to absInt
    if (x == @minValue(@typeOf(x))) {
        return error.Overflow;
    } else {
        @setDebugSafety(this, false);
        return if (x < 0) -x else x;
    }
}
Zig doesn't have any form of macros. Everything is done in the language itself.
[1]: http://ziglang.org/documentation/#case-study-printf
"
---
should mb support extensions like this:
hyperion2010 111 days ago
It looks like this (e.g. `#2dcond`) implements a way to directly embed other languages in a racket file [0] and avoids the problems encountered when trying to do it using the `#reader` syntax [1] in a source file. Essentially letting you have multiple readtables in a file (probably not nestable though). I could be wrong about this (need to look more carefully when I have more time), but nonetheless could allow direct embedding of completely alternate syntax with the right setup.
[0] https://github.com/racket/2d/blob/master/2d-lib/private/read... [1] https://docs.racket-lang.org/guide/hash-reader.html
gcr 111 days ago
This seems similar to the way the at-exp language is implemented.
at-exp adds support for S-expressions based on braces, and is the foundation of the Scribble markup language.
---
this shows how Forth can be metaprogrammed to support a little FSM (finite state machine) DSL:
http://galileo.phys.virginia.edu/classes/551.jvn.fall01/fsm.html
---
int_19h 130 days ago
R has some really crazy metaprogramming facilities. This might sound strange coming from Python, which is already very dynamic - but R adds arbitrary infix operators, code-as-data, and environments (as in, collections of bindings, as used by variables and closures) as first class objects.
On top of that, in R, argument passing in function calls is call-by-name-and-lazy-value - meaning that for every argument, the function can either just treat it as a simple value (same semantics as normal pass-by-value, except evaluation is deferred until the first use), or it can obtain the entire expression used at the point of the call, and try to creatively interpret it.
This all makes it possible to do really impressive things with syntax that are implemented as pure libraries, with no changes to the main language.
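(For comparison, a rough and fragile Python sketch of just the "obtain the entire expression used at the point of the call" part -- the name show_expr is made up for illustration; it needs source to be available (a script, not a REPL), assumes the call fits on one line, and unlike R the argument is still evaluated eagerly rather than lazily.)

import ast
import inspect

def show_expr(value):
    # Rough analogue of R's substitute(): read the caller's source line and
    # pull out the text of the first argument expression.
    # Requires Python 3.9+ for ast.unparse.
    caller = inspect.currentframe().f_back
    line = inspect.getframeinfo(caller).code_context[0].strip()
    call = ast.parse(line).body[0].value      # the `result = show_expr(...)` call
    return f"{ast.unparse(call.args[0])} evaluated to {value}"

result = show_expr(2 + 3 * 4)
print(result)                                 # -> 2 + 3 * 4 evaluated to 14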
---
" > I won’t pretend to remember Lisp inventor John McCarthy?'s exact words which is odd because there were only about ten but he simply asked if Python could gracefully manipulate Python code as data.
> ‘No, John, it can’t,’ said Peter and nothing more, graciously assenting to the professor’s critique, and McCarthy? said no more though Peter waited a moment to see if he would and in the silence a thousand words were said. "
---
(regarding the previous section)
kennytilton 8 hours ago
Was McCarthy even thinking of macros when he locked onto code as data? Or was he thinking code generation in pursuit of AI?
dangerbird2 7 hours ago
He was not; macro facilities like defmacro and define-syntax were later additions to Lisp. Well before formal macro systems were implemented, code-as-data was particularly useful for implementing the eval function in Lisp (the Lisp 1.5 implementation takes up less than a page), which greatly simplified the process of implementing compilers and interpreters.
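(To make that concrete, a minimal sketch -- in Python rather than Lisp 1.5, and entirely illustrative -- of an evaluator over code-as-data, where "programs" are plain nested lists; the uniform representation is what keeps the evaluator this short.)

import operator

def mini_eval(expr, env):
    # Evaluate a program represented as plain data (nested lists).
    if isinstance(expr, str):                 # strings are variable references
        return env[expr]
    if not isinstance(expr, list):            # numbers etc. are literals
        return expr
    op, *args = expr
    if op == "quote":                         # return code as data, unevaluated
        return args[0]
    if op == "if":
        cond, then, alt = args
        return mini_eval(then if mini_eval(cond, env) else alt, env)
    if op == "lambda":
        params, body = args
        return lambda *vals: mini_eval(body, {**env, **dict(zip(params, vals))})
    fn, *vals = [mini_eval(x, env) for x in expr]   # otherwise: function application
    return fn(*vals)

env = {"+": operator.add, "*": operator.mul, "<": operator.lt}
program = ["if", ["<", 1, 2], ["+", 40, 2], 0]      # the program is itself just data
print(mini_eval(program, env))                      # -> 42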
---
dwarfylenain 16 hours ago
And what about hylang.org? I'm surprised it's not even mentioned here: nice Lispy syntax with all the Python packages = best of both worlds :)
cat199 13 hours ago
+1 - if you ever want macros or syntactic level manipulation in the python world, or just feel like doing something with s-expressions, hy is a great thing.
---
" > ... if you started from scratch, how would you design a language differently so that it doesn't run into these issues?
I don't think there would be a need to start from scratch. Python 3 is a pretty good language! Also, it's not clear that it needs to be fast; it's just fun to think about how it might be fast.
That said, I think there is only one key feature missing to enable aggressive optimizations: An annotation on functions/methods that says that the function may be optimized even at the cost of some loss in reflective capabilities. In current Python any function's code can be inspected and changed at run time, as can its local variables by callees who can walk up the call stack. This means that every optimization that compiles Python code to machine code or even just changes the interpreter's handling of local variables can, in general, change the program's semantics.
I think it would suffice to add a @static decorator to disable these reflective features for individual functions. (And disabling dynamic lookup of globals would be good too.) The interpreter could then recognize that you want those optimized and could do whatever magic it likes without caring about boring corner cases like invalidating its optimizations if someone tries to patch the function's code at runtime.
This would not be a big thing to do, and there would be no real need to restart from scratch; Python 3 is a pretty good programming language!
Everything else would pretty much fall out automatically from that using known/already-implemented techniques, especially with the type hints that would allow you to gradually move a fully dynamic, interpreted application to a statically typed, compiled one, function by function. Such a Python would still not win the speed crown, but it could beat the current one by miles on a lot of applications. "
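(The reflective features being given up are ordinary CPython today; a small sketch of the two mentioned above -- a callee reading a caller's locals via the frame stack, and a function's code object being swapped at run time -- i.e. exactly what a hypothetical @static annotation would promise the optimizer it won't rely on.)

import sys

def peek_callers_secret():
    # A callee can walk up the call stack and read the caller's locals,
    # so 'secret' below cannot simply be optimized away.
    return sys._getframe(1).f_locals["secret"]

def compute():
    secret = 42                     # looks dead, but is observable via reflection
    return peek_callers_secret()

def slow():
    return "slow"

def fast():
    return "fast"

print(compute())                    # -> 42
slow.__code__ = fast.__code__       # a function's code can be replaced at run time
print(slow())                       # -> fast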
---
"Various features of Python like dynamic typing and object/class mutation (via del) preclude many static analysis techniques" [3]
" Cannoli supports two major optimizations that come as a result of applying restrictions to the language. Restrictions are placed on the Python features that provide the ability to delete or inject scope elements and the ability to mutate the structure of objects and classes at run time. The corresponding feature branches are scope-opts and class-opts. The optimizations are built on top of each other, therefore the class-opts branch is a superset of the scope-opts branch. In general, the class-opts branch yields a performance increase of over 50% from the master branch. " [4]