proj-oot-old-150618-ootImplementationThoughts

let's use something similar to the nanopass compiler framework: https://github.com/akeep/nanopass-framework/blob/master/doc/user-guide.pdf?raw=true , https://github.com/akeep/nanopass-framework
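The gist of the nanopass style, as a hypothetical Python skeleton rather than the actual (Scheme-based) nanopass-framework API: the compiler is a long chain of very small passes, each translating one well-defined intermediate language into the next.

```python
def desugar(ast):
    """Hypothetical pass: rewrite surface syntax into a smaller core language."""
    return ast  # placeholder; a real pass would return a new tree in the next IR

def explicate_closures(ast):
    """Hypothetical pass: make closure captures explicit."""
    return ast  # placeholder

def select_instructions(ast):
    """Hypothetical pass: lower core forms to abstract instructions."""
    return ast  # placeholder

PASSES = [desugar, explicate_closures, select_instructions]

def compile_program(ast):
    # Each pass is small enough to read and test in isolation, and the output
    # of every pass is a complete, well-defined intermediate language.
    for p in PASSES:
        ast = p(ast)
    return ast
```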

static analysis must be able to say whether a given symbol is an fexpr or not, at least in most cases
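As a rough sketch of what that could mean in practice (the symbol names and the three-valued answer are made up for illustration, not part of any Oot spec): the analysis keeps a table of known bindings and falls back to 'unknown' only for symbols it cannot resolve statically.

```python
FEXPR, FUNCTION, UNKNOWN = "fexpr", "function", "unknown"

# Hypothetical symbol table produced by earlier analysis passes.
symbol_kinds = {
    "my-if": FEXPR,    # receives its arguments unevaluated
    "plus": FUNCTION,  # ordinary applicative function
}

def classify_symbol(name):
    # "At least in most cases": answer definitively when the binding is known
    # statically, and fall back to UNKNOWN for the few symbols that are bound
    # dynamically or imported opaquely.
    return symbol_kinds.get(name, UNKNOWN)
```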

---

why is it that compiling one HLL to another, e.g. CoffeeScript -> JavaScript, causes complaints that it's a hack and hard to debug, whereas compiling an HLL to an existing VM, e.g. Clojure -> JVM or Elixir -> ErlangVM, doesn't?

Part of it may be that VMs are designed to be intermediate target languages and have appropriate facilities for that, whereas languages like JavaScript weren't designed with that role in mind. What are the 'appropriate facilities' that an intermediate target language should have?

Oot should be a good intermediate target language and so should support these features.


callGC, the function used to explicitly invoke the garbage collector, should return a bool saying whether it is finished, so that the program doesn't waste time calling it 100 times when the collector has nothing to do (a more complicated version of a wasteful spinlock, I guess). There should also be a function callGCDuration(duration) which calls callGC over and over until either callGC reports that it is finished or the given duration has passed.
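A minimal sketch of how that API could look, assuming a seconds-based duration and simulating the runtime's incremental-collection hook with a work counter (the real internals of callGC are of course up to the runtime):

```python
import time

_remaining_work = 10  # simulated units of pending collection work

def callGC():
    """Do one increment of garbage collection.

    Returns True once the collector has nothing left to do, so callers can
    stop instead of uselessly invoking it another 100 times."""
    global _remaining_work
    if _remaining_work > 0:
        _remaining_work -= 1
    return _remaining_work == 0

def callGCDuration(duration):
    """Call callGC repeatedly until it reports that it is finished or until
    `duration` seconds have passed; return True if it finished in time."""
    deadline = time.monotonic() + duration
    finished = callGC()
    while not finished and time.monotonic() < deadline:
        finished = callGC()
    return finished
```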


4-stage implementation:

What is the difference between Oot Core and Oot Assembly (Oot Bytecode), and why have both?

Second question first (why have Oot Bytecode in addition to Oot Core?)

In one sense, Oot Bytecode is just an implementation detail. Like Dalvik and the JVM, or Lua and LuaJIT, one could imagine that some implementations might eventually replace Oot Bytecode with something else, especially since the design goal of Oot Bytecode is not efficiency. I am not sure we even want to support Oot Bytecode as a standard, stable language the way that the JVM supports Java bytecode (although doing that seems to have worked well for Java; Java seems to have gained various third-party tooling partially because of this step). Still, a stable bytecode does have some benefits along those lines.

However, the main reason I'm thinking about Oot Bytecode is different. The main reason is that I am trying to think about what the small 'core' of Oot is, and one way to do that is to think about what an Oot Assembly would look like (what are the primitive instructions? what flags are in the bytecode? do we have addressing modes, and if so, which ones?). In addition, I really just want to explore all of the main computational paradigms and the simplest, most fundamental languages, and in a certain sense assembly is very fundamental; thinking about how an Oot Bytecode and Oot Assembly would look helps me appreciate the design problems and choices faced by assembly languages, which gives me inspiration for Oot.

In addition, bytecode serves as a thought experiment for implementation issues. It helps me clarify my thinking early about issues like what the stack looks like, where various things are stored, what the internal data representations of common types are, etc.
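To make the thought experiment concrete, here is a purely hypothetical encoding sketch; none of these opcodes, flags, or addressing modes are settled Oot design, they just give the questions above something to bite on.

```python
from dataclasses import dataclass
from enum import Enum

class AddrMode(Enum):
    IMMEDIATE = 0  # operand is the value itself
    REGISTER = 1   # operand names a register / local slot
    INDIRECT = 2   # operand is an address to dereference

@dataclass
class Instruction:
    opcode: str        # which primitives exist is exactly the open question
    mode: AddrMode     # addressing mode for the operands
    flags: int         # room in the encoding for per-instruction flags
    operands: tuple    # operand fields, interpreted according to `mode`

# A toy program in this format: load 41 into the accumulator, add 1, stop.
program = [
    Instruction("load", AddrMode.IMMEDIATE, 0, (41,)),
    Instruction("add", AddrMode.IMMEDIATE, 0, (1,)),
    Instruction("halt", AddrMode.IMMEDIATE, 0, ()),
]
```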

Now, the other question: what is the difference between Oot Core and Oot Assembly?

---

I think we want to support lexically-scoped closures, dynamic scoping, and first-class call stacks.

so, instead of allocating 'stack frames' on the contiguous stack, we want to allocate them on the heap, so that a frame can outlive the call that created it; call them 'activation records', or my preferred term, 'activation frames'

in order to support lexical upvariables, we'll need closures to remember the lexical context they came from, e.g. by using a 'saguaro stack' (parent pointer tree): https://en.wikipedia.org/wiki/Parent_pointer_tree#Use_in_programming_language_runtimes

so even though the call stack (activation frame stack) is just a linear stack (and can handle the dynamic scoping), we still have to use a 'saguaro stack' to remember lexical scoping
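A minimal sketch of that arrangement, assuming each heap-allocated frame carries two links: `caller` for the linear dynamic chain and `lexical_parent` for the saguaro stack (names are illustrative, not an Oot spec):

```python
class Frame:
    """A heap-allocated activation frame."""
    def __init__(self, caller=None, lexical_parent=None):
        self.caller = caller                  # dynamic link: the calling frame
        self.lexical_parent = lexical_parent  # static link: enclosing lexical scope
        self.locals = {}

def lookup_lexical(frame, name):
    # Closure / lexical-upvariable lookup walks the parent-pointer tree,
    # ignoring who actually called whom.
    while frame is not None:
        if name in frame.locals:
            return frame.locals[name]
        frame = frame.lexical_parent
    raise NameError(name)

def lookup_dynamic(frame, name):
    # Dynamic-scope lookup walks the (linear) call chain instead.
    while frame is not None:
        if name in frame.locals:
            return frame.locals[name]
        frame = frame.caller
    raise NameError(name)
```

Because frames live on the heap, a closure can keep its defining frame alive through `lexical_parent` even after that call has returned, and the call chain itself can be captured as a first-class value.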

(and what about the tree-like 'handler stack'?)

---