proj-oot-ootVirtualizationNotes1

by 'virtualization' i mean that i want the programmer to run an interpreter for another language on top of Oot with minimal slowdown compared to running the other interpreter directly (e.g. 50% slowdown is okay but not 1000%).

---

"GOOD IDEA: SIE instruction

    GOOD IDEA (Glew opinion): user level & highly nested virtual machines "

https://www.semipublic.comp-arch.net/wiki/SIE

(one approach to virtualization)

i can't help but feel like SIE, particularly his proposal to use SIE for new instruction sets:

" You want to execute a new instruction set - e.g. a VLIW on a RISC? SPARC on a MIPS? Vice versa?

    Traditional proposals to do this involve a lot of OS work. 
    Execute a flavor of SIE where the guest state includes a new instruction set mode.
    No need to change the OS, to save modes: the SIE XIT saves the new ISA mode, restores the old, and then allows the existing interrupt handling mechanisms to run.
    Note that this can be arbitrarily nested. ISA0 can call a function, that SIEs to ISA1. That ISA1 function can call another function, that SIEs to ISA0, or to a new ISA2. "

relates to computed gotos and to my desire for oot to allow an interpreter for another language to run on top of it with minimal slowdown

also, perhaps you could let the supervisor provide a list of oot bytecode opcodes to trap on when run within the SIE environment; when they are called, they trap, and there is a SIE Exit.

this would seem like a good fit for Parrot-like custom opcodes
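as a sketch of how that trap set could look (everything here -- GuestExit, run_guest, the opcode names -- is invented for illustration, not Oot's actual design): the host dispatch loop checks each opcode against a supervisor-supplied trap set, and hitting a trapped opcode becomes a SIE-style exit back to the supervisor:

```python
class GuestExit(Exception):
    """Raised when the guest hits a trapped opcode: a 'SIE exit'."""
    def __init__(self, pc, opcode):
        self.pc, self.opcode = pc, opcode

def run_guest(code, trap_opcodes, handlers):
    """Run guest bytecode; opcodes in trap_opcodes exit to the supervisor."""
    pc = 0
    acc = 0
    while pc < len(code):
        op, arg = code[pc]
        if op in trap_opcodes:
            raise GuestExit(pc, op)   # SIE exit: supervisor regains control
        acc = handlers[op](acc, arg)  # ordinary dispatch, no extra loop
        pc += 1
    return acc

handlers = {"add": lambda a, x: a + x, "mul": lambda a, x: a * x}
program = [("add", 2), ("mul", 3), ("syscall", 0)]
try:
    run_guest(program, trap_opcodes={"syscall"}, handlers=handlers)
    result = None
except GuestExit as e:
    result = (e.pc, e.opcode)  # supervisor sees where and why the guest exited
```

the supervisor can then emulate the trapped opcode and resume, which is the usual trap-and-emulate pattern.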

--

if the Oot program can dynamically define custom opcodes using Oot code, that's like dynamic macros.

if the Oot program can statically define custom opcodes using external linked libraries, that's like extending the CPU with microcode.

if the Oot program can dynamically define custom opcodes using external linked libraries, i guess that's like dynamically reprogramming the microcode in the firmware.
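a toy version of that extensibility (MiniVM and all its names are hypothetical): a dispatch table the running program can install new opcodes into at runtime, which is the "dynamic macro" flavor above:

```python
class MiniVM:
    def __init__(self):
        self.stack = []
        self.ops = {
            "push": lambda vm, x: vm.stack.append(x),
            "add":  lambda vm, _: vm.stack.append(vm.stack.pop() + vm.stack.pop()),
        }

    def define_op(self, name, fn):
        """Dynamically install a custom opcode -- like a dynamic macro."""
        self.ops[name] = fn

    def run(self, code):
        for op, arg in code:
            self.ops[op](self, arg)   # one dispatch loop serves all opcodes
        return self.stack

vm = MiniVM()
# define "double" at runtime, in terms of existing stack behaviour
vm.define_op("double", lambda vm, _: vm.stack.append(vm.stack.pop() * 2))
result = vm.run([("push", 5), ("double", None), ("push", 1), ("add", None)])
```

the "microcode" variants above differ only in where `fn` comes from (a linked library instead of in-language code) and when it is installed (statically vs dynamically).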

--

https://en.wikipedia.org/wiki/Popek_and_Goldberg_virtualization_requirements

--

custom opcodes are a good fit for the general goal, too, because they allow the user to define a new language that runs without the indirection of a second bytecode interpreter loop
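to make the indirection concrete (a contrived sketch; both functions and the toy guest language are invented here): in (a) the guest language is run by an interpreter that is itself guest-level code, so every guest step pays two dispatches; in (b) the guest's primitives are registered as host opcodes and the host's single loop runs them directly:

```python
# (a) indirect: an inner dispatch loop interprets the guest language,
#     on top of whatever loop runs this code itself
def run_indirect(guest_code):
    env = 0
    for instr in guest_code:        # inner (guest-level) dispatch
        if instr[0] == "inc":
            env += instr[1]
        elif instr[0] == "neg":
            env = -env
    return env

# (b) direct: guest primitives installed as host opcode handlers,
#     so the host's one dispatch loop is reused with no inner loop
HOST_OPS = {
    "inc": lambda env, x: env + x,
    "neg": lambda env, _: -env,
}

def run_direct(guest_code):
    env = 0
    for op, arg in guest_code:      # the host's single loop
        env = HOST_OPS[op](env, arg)
    return env

prog = [("inc", 4), ("neg", None), ("inc", 10)]
assert run_indirect(prog) == run_direct(prog) == 6
```

in real terms the win is that (b) keeps dispatch in the host VM's (presumably optimized) loop instead of re-implementing dispatch in interpreted code.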

--

grammar-matching like in the Nock specification

--

custom-defined combinators (i guess Haskell is really good at that; i guess if you can custom-define them it's basically the lambda calculus, not just combinatory calculus?)
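to illustrate the distinction being guessed at here: S and K alone give combinatory calculus (I is derivable as S K K), while the ability to write arbitrary new combinators as closures over their parameters is what full lambda calculus provides. a minimal sketch:

```python
# combinatory calculus: the fixed combinators S and K
S = lambda f: lambda g: lambda x: f(x)(g(x))
K = lambda x: lambda y: x

# I need not be primitive: S K K x  =  K x (K x)  =  x
I = S(K)(K)

# a *custom-defined* combinator, written directly as a lambda --
# S/K could only encode this indirectly
compose = lambda f: lambda g: lambda x: f(g(x))
value = compose(lambda n: n + 1)(lambda n: n * 2)(5)   # (5*2)+1
```
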

--

i guess if the interpreter wants to detect infinite loops, support preemptive multitasked green threads, insert profiling or debugging instrumentation, do mandatory periodic garbage collection, or do tail call optimization (but we don't want to make it always do TCO; some languages don't like that), it may need to interrupt the code it is executing every so often (or maybe every instruction). oot's VM should provide instructions that allow an interpreter running on top of it to request this sort of thing, so that the interpreter on top can still use custom opcodes and oot's bytecode interpreter loop rather than adding another layer of looping.
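a sketch of the requested facility (VM, check_interval, and on_tick are invented names): the interpreter-on-top registers a callback and an interval, and the host loop invokes the callback every N instructions, so preemption, profiling, or GC checks need no second loop:

```python
class VM:
    def __init__(self, check_interval, on_tick):
        self.check_interval = check_interval   # "interrupt me every N steps"
        self.on_tick = on_tick                 # callback into the guest runtime
        self.executed = 0

    def run(self, code, ops):
        acc = 0
        for op, arg in code:
            acc = ops[op](acc, arg)
            self.executed += 1
            if self.executed % self.check_interval == 0:
                self.on_tick(self.executed)    # e.g. preempt, profile, GC check
        return acc

ticks = []
vm = VM(check_interval=2, on_tick=ticks.append)
ops = {"add": lambda a, x: a + x}
total = vm.run([("add", 1)] * 5, ops)
# after running: total accumulated 5 adds; callback fired after steps 2 and 4
```

`check_interval=1` covers the "every instruction" case; real designs often use a decrementing fuel counter instead of a modulus to keep the fast path cheap.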

--

i think one bottleneck in virtualization is mapping the virtual memory space of the virtualized process into real memory. i think most PCs these days have an MMU (memory management unit) to help with this in hardware -- i think standard Linux requires this, in fact. As far as i know (todo look this up), this system isn't composable, i.e. the x86 MMU can't be told to help user-mode processes do memory mapping for user-mode virtualization, and Linux has no facilities for this, although perhaps i'm wrong.

anyway, the oot VM should offer composable virtual memory mapping, so even if it must be done in software, it doesn't have to be interpreted.
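a sketch of what "composable" could mean here (make_map and compose are invented names, and real MMUs, shadow page tables, and nested paging are far more involved): each virtualization layer's mapping is a translation function, and nesting layers is just function composition, so a guest-of-a-guest address resolves through the stacked maps without an interpreted walk per layer:

```python
def make_map(page_table, page_size=4096):
    """Return a translate(addr) function for one layer's page table."""
    def translate(addr):
        page, offset = divmod(addr, page_size)
        return page_table[page] * page_size + offset
    return translate

def compose(outer, inner):
    """guest-virtual -> guest-physical -> host-physical."""
    return lambda addr: outer(inner(addr))

# layer 1: guest-physical pages 0 and 1 live at host pages 7 and 3
l1 = make_map({0: 7, 1: 3})
# layer 2: the nested guest's page 0 lives at guest-physical page 1
l2 = make_map({0: 1})

nested = compose(l1, l2)
addr = nested(0x10)   # nested-guest 0x10 -> guest page 1 -> host page 3
```

the point of offering this in the oot VM is that the composed function can be built (or JIT-flattened) once, instead of being re-derived by interpreted code on every access.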

the idea of composable MMU in virtualization was from http://semipublic.comp-arch.net/wiki/SIE

todo look up

---

relation between custom opcodes (e.g. in Parrot) and the PL Forth?