https://github.com/nesbox/TIC-80 has 80k of RAM and accepts 64k of user code in any of: Lua, Moonscript, JavaScript (using Duktape, which claims to implement full JS, not a subset), Wren, and Fennel.
---
Racket seems to have good metaprogrammy stuff, maybe just implement in Racket, at least a prototype
that might be good for our core language and up, but what about the lower levels?
makes me wonder, how is Racket itself implemented? maybe they already did what we want to do. So, i took a look. They recently switched to an implementation on top of Chez Scheme (in the source code the Chez-based version is called Racket CS, and the old version is called Racket BC (Before Chez, i assume)). Apparently the old version had a large C core which made the Racket implementation unwieldy to work with; the Racket team says that Chez Scheme has a much smaller C kernel than that old C core, and that in general Racket CS is now easier to improve than Racket BC was.
So, what is this small C kernel in Chez Scheme? Unfortunately i did not find it to be very obvious or documented. Furthermore i didn't find documentation for a 'core lisp' within Chez Scheme.
Chez Scheme does have a file called "scheme.h" but i think it is only for FFI, for C programs that provide interop with Chez to include, rather than a core of Chez, though i could be wrong. It is described in https://cisco.github.io/ChezScheme/csug9.5/csug9_5.pdf, section 4.8 (C Library Routines), PDF page 94.
I've heard that Chez Scheme uses the nanopass compiler framework. https://andykeep.com/pubs/dissertation.pdf describes the Chez Scheme nanopass implementation a bit, with an overview in section 3.3.3 (Workings of the new compiler). That section says that there are "approximately 50 passes and approximately 35 nanopass languages"! Lots of passes are fine, but 35 languages is too much to get one's head around.
I think this reveals another goal that i have for Oot. The (reference) compiler should be understandable by newcomers. I thought i liked the idea of nanopass compilers because each pass is simple, but if the cost is having 35 layers to learn about, that's just too much. Now, i'm sure most of these 35 languages are just simple variations on the previous one, but a newcomer first encountering the code doesn't know which ones are major transitions and which aren't, so this just makes things more confusing (although perhaps really good documentation could help). Having a smaller number of language layers helps with that.
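(to make 'each pass is simple' concrete, here's a toy sketch of one nanopass-style step -- my own Rust, nothing to do with Chez's actual passes or the nanopass framework's syntax: two nearly identical IR 'languages', and a pass that only desugars short-circuit and/or into if. Each such pass is easy; the cost is that with 35 such language definitions a newcomer has a lot to hold in their head.)

// LangA: a tiny source IR with short-circuit And/Or.
#[derive(Debug)]
enum LangA {
    Bool(bool),
    And(Box<LangA>, Box<LangA>),
    Or(Box<LangA>, Box<LangA>),
    If(Box<LangA>, Box<LangA>, Box<LangA>),
}

// LangB: the next IR down; identical except And/Or are gone.
#[derive(Debug)]
enum LangB {
    Bool(bool),
    If(Box<LangB>, Box<LangB>, Box<LangB>),
}

// The pass: the only interesting work is desugaring And/Or into If.
fn desugar(e: LangA) -> LangB {
    match e {
        LangA::Bool(b) => LangB::Bool(b),
        // a && b  ==>  if a then b else false
        LangA::And(a, b) => LangB::If(
            Box::new(desugar(*a)),
            Box::new(desugar(*b)),
            Box::new(LangB::Bool(false)),
        ),
        // a || b  ==>  if a then true else b
        LangA::Or(a, b) => LangB::If(
            Box::new(desugar(*a)),
            Box::new(LangB::Bool(true)),
            Box::new(desugar(*b)),
        ),
        LangA::If(c, t, f) => LangB::If(
            Box::new(desugar(*c)),
            Box::new(desugar(*t)),
            Box::new(desugar(*f)),
        ),
    }
}

fn main() {
    let e = LangA::And(Box::new(LangA::Bool(true)), Box::new(LangA::Bool(false)));
    println!("{:?}", desugar(e)); // If(Bool(true), Bool(false), Bool(false))
}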
I also realized something in the way i think about these layers (although i think i've realized, and written about, this before). Part of the value of a 'bytecode' sort of thing is that it really makes it clear what functionality is in a given layer of the language; you can mostly see what it does by reading the list of instructions. The functionality of a lower bytecode layer (like Boot and LOVM, but maybe not so much JVM, CLR, or OVM) outside of the bytecode instructions themselves is trivial, unlike, say, C, in which stuff like the functionality of maintaining local variables is not in any single core library function but creeps into the language itself, or, worse, various Lisps which support closures and GC.
The goal of an understandable (and small) implementation is also why i don't like LLVM's encoding, which has things like variable-width integers (https://releases.llvm.org/8.0.1/docs/BitCodeFormat.html#variable-width-value); i feel like things are easier to understand if you can naively hack on the object file with a hex editor to some extent. And why i don't like LLVM's requirement that the compiler construct an SSA CFG graph before handing off the code; i think that's great for LLVM's goal of being an optimizer backend, but it's poor for my goal of a small reference compiler that is quickly comprehensible. I imagine that the Oot compiler could have an optional optimization flow which would do stuff like that, but a newcomer could more quickly understand the semantics by turning all that stuff off and looking at Oot's basic 'reference compiler flow'.
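(to make the encoding point concrete, here's a minimal sketch -- my own toy code, not LLVM's -- of the VBR-style variable-width integer scheme the bitcode docs describe: each 6-bit chunk carries 5 payload bits plus a high 'more chunks follow' bit, so you can't just eyeball a value at a fixed offset in a hex editor, you have to follow continuation bits)

// a minimal sketch of VBR-style chunked encoding: each 6-bit chunk holds
// 5 payload bits; the high bit of a chunk means "more chunks follow".
// Chunks are emitted least-significant payload bits first.
fn vbr6_encode(mut value: u64) -> Vec<u8> {
    let mut chunks = Vec::new();
    loop {
        let payload = (value & 0b11111) as u8; // low 5 bits of the remaining value
        value >>= 5;
        if value != 0 {
            chunks.push(payload | 0b100000); // set the continuation bit
        } else {
            chunks.push(payload);
            return chunks;
        }
    }
}

fn main() {
    assert_eq!(vbr6_encode(27), vec![27]);                   // fits in one chunk
    assert_eq!(vbr6_encode(1000), vec![0b101000, 0b011111]); // needs two chunks
    println!("ok");
}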
By the way, i think the figures and (colorized) discussion in the 'Approach' section of http://www.cs.utah.edu/~mflatt/racket-on-chez-jan-2018/ are great for an architecture overview, we should do that sometime.
---
for some reason very late last night i had the inclination to look up that weird mathy intermediate language Om and its ilk. I had some trouble finding it because it seems to have changed its name and/or been replaced by other stuff. I finally found it all and put my notes on this archeology in section 'HTS, Pure, Infinity, EXE, Om project archeology' in plChProofLangs.
My conclusion is:
" Om as a programming language has a core type system, thePTS∞— the pure type system with the infinite numberof universes. This type system represents the core of the language. Higher languages form a set of front-ends to thiscore. Here is example of possible languages: 1) Language for inductive reasoning, based on CiC? with extensions; 2)Homotopy Core with interval [0,1] for proving J and funExt; 3) Stream Calculus for deep stream fusion (Futhark);3) Pi-calculus for linear types, coinductive reasoning and runtime modeling (Erlang, Ling, Rust). These languagesdesugar toPTS∞as an intermediate language before extracting to target language4.Not all terms from higher languages could be desugared to PTS. As was shown by Geuvers[8] we cannot buildinduction principle inside PTS, we need a fixpoint extension to PTS. And also we cannot build the J and funExt terms.But still PTS is very powerful, it’s compatible with System F libraries. The properties of that libraries could be provenin higher languages with Induction and/or [0,1] Homotopy Core. Then runtime part could be refined to PTS, extractedto target and run in an environment.We see two levels of extensions to PTS core: 1) Inductive Types support; 2) Homotopy Core with [0,1] and itseliminators. We will touch a bit this topic in the last section of this document " -- https://raw.githubusercontent.com/groupoid/pure/1133b524a241bf381f10699b9c08d05acf81a99a/doc/om.pdf
" PTS
    PI
  MLTT
    PI
    SIGMA
    ID
    INDUCTION
  HTS
    PI
    SIGMA
    PATH
    HIT
    COMP " -- https://groupoid.space/homotopy/

" PTS
    PI
  MLTT
    PI
    SIGMA
    ID
    INDUCTION
  CUBICAL
    PI
    SIGMA
    PATH
    INDUCTION
    COMP
    GLUE " -- https://web.archive.org/web/20190102060830/https://groupoid.space/mltt/infinity/

Regarding HTS/MLTT/Infinity/EXE, these seem to be layers on top of Om that compile to Om, but possibly also adding stuff like induction, J, funExt, that cannot be built on top of PTS according to https://raw.githubusercontent.com/groupoid/pure/1133b524a241bf381f10699b9c08d05acf81a99a/doc/om.pdf , and maybe pi calculus and stream calculus stuff is included in that, i'm not sure. The old EXE page, https://web.archive.org/web/20190102060830/https://groupoid.space/mltt/infinity/, contains spawn, receive, send, but the new https://groupoid.space/homotopy/ page does not.
The old EXE page has a section on 'Effects':
" Effect
Effect syntax extensions defined basic process calculus axioms, IO and exception handling.
data Effect: * :=
(receive: Receive → Effect)
(spawn: Spawn → Effect)
(send: Send → Effect)
(try: Exception → Effect)
(raise: Exception → Effect)
(write: File → Effect)
(read: File → Effect)

Process
record Process: (Sigma: *) → (X: *) → * :=
(action: Sigma → X → GenServer X)

Spawn
record Spawn:
(proc: Process)
(raise: list Eff)

Send
record Send: (Sigma: *) → * :=
(message: Sigma)
(to: Process Sigma)"
Not sure what the difference between MLTT and HTS is, and which of them, if any, correspond to Infinity and to EXE.
The new page https://groupoid.space/homotopy/ says of MLTT (superscript infinity) "In its core it has only comp primitive (on empty homogeneous systems) and CCHM connections (for J computability).". I'm guessing the 'CCHM connections (for J computability)' refers to that same J thing that the old Om paper said can't be derived from PTS. Not sure if the 'comp primitive' is what compiles to Om (which is suggested by its being the only other thing outside of 'CCHM connections'), or if it's another extension that can't be compiled to Om (which is suggested by the table on that page, which looks like a table of primitives, which has PI in both PTS and MLTT). It's also confusing that that table puts COMP in the HTS section, NOT in the MLTT section.
Also, the old stuff seems to talk slightly more about programming language stuff, whereas the new https://groupoid.space/homotopy/ sounds like its purpose is to "compute all the MLTT [rules]" and to have "full HoTT computability". I think that's more likely a difference in writing style than a difference in goals, but i'm not sure.
My conclusions so far are:
---
so another rundown on the current layer idea:
Oot (3 profiles: std, small, nostdlib)
| Metaprogramming |
| Lowering, simplified syntax |
| Main implementation of Oot semantics |
| implementation of low-level stuff |
| simple compilation |
Purposes:
Above this point I'll finally get to start working on my dreams for Oot syntax and semantics, instead of generic implementation-y stuff!
Below this point we have a generic toolbox of 'programming language implementation technologies': not very specific to Oot, generally useful for creating portable programming languages that care more about simplicity than efficiency, and unsafe, with C-like wild undefined behavior. Above this point we have safe languages with mandatory bounds checking, without wild undefined behavior (only 'controlled crashes'), suitable for security and sandboxing (though because of CPU bugs we may still have to rely on actual separate OS processes for security boundaries?); the stuff above this point is also less likely to be reused outside of the Oot project. Highly optimized or highly platform-interoperable implementations will probably skip everything below this point and directly implement either OVM or Oot Core on the platform (it may be hard to go higher than that because Oot metaprogramming will operate on either the Oot Core or Preoot representation).
Either the OVM or Oot Core (probably OVM) introduces things like partial function application, lazy sequences/iterators, copy-on-write. However OVM is probably also somewhat concerned with efficiency and so has a lot of static-ness (think Java, C#, Wren, Haskell).
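(as a concrete illustration of just one of those, here's a sketch of copy-on-write in Rust using Rc::make_mut; this is only my own toy example of the concept, not anything from an actual OVM design)

// copies share the same underlying buffer until one of them is mutated,
// at which point only the mutated one gets a private copy
use std::rc::Rc;

#[derive(Clone)]
struct CowVec {
    data: Rc<Vec<i32>>,
}

impl CowVec {
    fn push(&mut self, x: i32) {
        // clones the underlying Vec only if someone else still shares it
        Rc::make_mut(&mut self.data).push(x);
    }
}

fn main() {
    let a = CowVec { data: Rc::new(vec![1, 2, 3]) };
    let mut b = a.clone();      // cheap: shares the same Vec
    b.push(4);                  // now b gets its own copy
    assert_eq!(*a.data, vec![1, 2, 3]);
    assert_eq!(*b.data, vec![1, 2, 3, 4]);
}

the same 'share until written' idea could presumably apply to OVM-level strings, maps, etc.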
Either Oot Core or Pre-oot is sort of a Lisp-level AST language which is standardized and is the target of Oot metaprogramming manipulation (i haven't decided yet which one).
---
the_duke 1 day ago [–]
> Not All Programming is Systems Programming
Personally I often use Rust for "non systems-programming" tasks, even though I really wish a more suitable language existed.
Rust has plenty of downsides, but hits a particular sweet spot for me that is hard to find elsewhere:
So I often accept the downsides of Rust, even for higher level code, because I don't know another language that fits.
My closest alternatives would probably be Go, Haskell or F#. But each don't fit the above list one way or another.
reply
jaggirs 22 hours ago [–]
I feel like it should be trivial to make a 'scripting' variant of rust. Just by automatically wrapping values in Box/Rc when needed a lot of the cognitive overhead of writing Rust could be avoided. Add a repl to that and you have a highly productive and performant language, with the added benefit that you can always drop down to the real Rust backbone when fine-grained control is needed.
reply
grandmczeb 21 hours ago [–]
Check out Rune: https://rune-rs.github.io/
It’s more of a prototype, but I think it’s in the direction you’re describing.
reply
aidanhs 19 hours ago [–]
Similar to GP, I too have been wondering about a Rc'd Rust.
Unfortunately Rune and Dyon[0] are dynamically typed, which isn't so attractive to me.
More promising are Gluon and Mun, both of which are statically typed. Of these two, Gluon has a somewhat alien syntax if you're coming from Rust (it notes it's inspired by Lua, OCaml and Haskell) so Mun is probably a better choice...but it seems very early, and the website notes as much (to serve the needs of a Rust-scripting-language I'd want seamless interop between it and Rust, which isn't quite there).
So I don't think there's anything in this space right now, but there are some promising options.
If you're willing to go a little further afield, I'm kind of interested in assemblyscript[3] - it's 'just' another WASM-based language so it's not a huge leap of imagination to believe there could be tooling to enable the seamless Rust interop. Just a matter of effort!
[0] https://github.com/PistonDevelopers/dyon [1] https://github.com/gluon-lang/gluon [2] https://github.com/mun-lang/mun [3] https://www.assemblyscript.org/
reply
athriren 18 hours ago [–]
gluon looks great, but the book site is down for some reason, which is unfortunate since i am looking for something nearly exactly like this.
reply
yashap 1 day ago [–]
You might like Scala and/or Kotlin. You listed Go, but Go’s type system is very weak, as is Go’s support for immutability, two problems that Scala and Kotlin don’t share.
reply
sanderjd 23 hours ago [–]
At one point in time, I thought Scala would be this, but it very much isn't. It feels bulky and lacking orthogonality to me, and its tooling leaves a lot to be desired. (Note: this might be true of Rust too once it is as old as Scala.)
Kotlin, though, yep, big fan.
reply
terhechte 23 hours ago [–]
Did you try Swift? It is conceptually and syntax wise very similar to Rust.
reply
---
amelius 23 hours ago [–]
> Expressive, pretty powerful, ML and Haskell inspired type system
It's great that they use this, but it's still difficult to program in a purely functional style in Rust the way you would in, say, Haskell, because of memory management. Closures can create memory dependencies which are too difficult to manage with Rust's static tools.
reply
ohazi 23 hours ago [–]
This is definitely true, but you can often get surprisingly far by just boxing, cloning, Rc-ing, etc. whenever you hit something like this.
reply
acomar 20 hours ago [–]
the bigger issue is around the combination of closures, generic parameters, and traits. because higher-rank types can't be expressed and closures are all `impl Fn()`, you start to get an explosion in the complexity of managing the types and implementing the trait. try implementing something in the "finally tagless" style for an example of the kinds of things that can go wrong really fast. `Box` and `Rc` don't get you out of it and the resulting implementation is brittle and boiler-plate heavy in actual use.
... (comment goes on to give an example) ...
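(an aside from me, not part of the thread: a minimal sketch of the 'boxing, cloning, Rc-ing' style that ohazi mentions above, assuming all you want is some shared mutable state captured by multiple closures)

use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // one counter shared by two closures, no lifetime gymnastics
    let counter = Rc::new(RefCell::new(0));

    let inc = {
        let counter = Rc::clone(&counter);
        move || *counter.borrow_mut() += 1
    };
    let report = {
        let counter = Rc::clone(&counter);
        move || println!("count = {}", *counter.borrow())
    };

    inc();
    inc();
    report(); // prints "count = 2"
}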
---
" To give a feel for the language, here’s a Scala implementation ofnatural numbers that does not resort to a primitive number type.
trait Nat { def isZero: Boolean; def pred: Nat; def succ: Nat = new Succ(this); def + (x: Nat): Nat = if (x.isZero) this else succ + x.pred; def - (x: Nat): Nat = if (x.isZero) this else pred - x.pred; }
class Succ(n: Nat) extends Nat { def isZero: Boolean = false; def pred: Nat = n }
object Zero extends Nat { def isZero: Boolean = true; def pred: Nat = throw new Error("Zero.pred"); } " -- [1]
---
i heard some praise for Scala recently and so i reviewed my notes on ppl's Scala opinions. The reasons ppl didn't like Scala appear to be:
the ones i care about the most for Oot are:
e.g. "The language and even the basic libraries are incredibly complex. Here's an example, a signature for List.map: final def map[B, That](f: (A) ⇒ B)(implicit bf: CanBuildFrom?[List[A], B, That]): That" https://www.reddit.com/r/haskell/comments/3h7fqr/what_are_haskellers_critiques_of_scala/cu50l4x/?utm_source=reddit&utm_medium=web2x&context=3
" Maintaining and enforcing inheritance while dealing with erased types can be a real nightmare -- as these APIs show. The root cause of this is the JVM and Java interop. They aren't written this way by choice. level 3 [deleted] 10 points · 5 years ago
Ahm no… This example has particularly nothing to do with either inheritance or type erasure. There is an argument about decreasing the amount of mixins in the collections, but that goes for the implementation and not so much the API. And Martin Odersky acknowledged that a bunch of collection methods could indeed go into extension methods/ type classes if the API was to be redesigned. So it was a choice and not forced by the JVM.
"
" Martin did a talk on this, so the context is that it wasn't possible to encode map in a pure form in Scala because it wasn't possible to implement map to work with things like Array's to work with Java interopt. Java doesn't have value types, but Array's only work with primitives, so it isn't possible to map an Array that returns a non-primitive value without some implicit builder (which is your CanBuildFrom?)
There are arguably some cases where map is used where it shouldn't be (i.e. on a Set), but the primary reason was for Scala/Java compatibility
"
what ppl like:
---
so my conclusion from the above is:
---
also should check out Kotlin, which often gets lots of praise
---
" The Shallow End/Deep End False Dichotomy
It has been touted that Scala allows you to wade into the shallow end of the pool, using only the features that you’re comfortable with....Sounds good, right? Until you realise that, by accident or not, each library developer pulls you out of the shallow end, throws you straight into the deep end...As soon as you want to do any production, business web application development, there’s a bunch of stuff you’re probably going to need to do, for example, route HTTP requests, make database calls, serve HTML and JSON/XML...As alluded to previously, the people who like to release libraries for new languages also like to experiment with language features. Unless the goal of their project is to cater to Scala noobs, they’ll cram every clever use of Scala into the client facing part of their library. Now is the point where you take a deep breath and hold it; you’re in deeper water than you’re capable of swimming in. "
---
2 points by bshanks 2 days ago
| parent [–] | on: Why Not Rust? |
I'd be interested to hear your thoughts on why Go might be better than Python for business logic.
reply
llimllib 21 hours ago [–]
Mainly that it’s just so aggressively boring (which I love). No exceptions means you have to deal with your errors all the time, and it’s just generally very easy, when dropped into a random spot in the code, to figure out what’s happening. Very little magic.
reply
---
higerordermap 9 hours ago [–]
I don't think zig is very well suited for / should target web dev.
There a GC'd language with modern features should do best. Imagine OCaml but modern and great tooling. That would be easy for developers, more productive to write, less type errors, also more productive because of IDE support, and much faster than current crop of scripting languages.
Sadly, there is a vacuum for such a language. Go is too gruntwork and minor details, Rust is designed for high reliability systems programming, Zig is designed as a system language in the spirit of C, OCaml is archaic and does not seem to improve by much.
The hopes in this area are
Nim - Nice python-like syntax, but feels a little unpolished, metaprogramming may be hindrance to IDE tooling and code readability on large codebases.
Crystal - Ruby like syntax and stdlib. LLVM compilation, very young language, no windows support, and (personal opinion) I think some work can be done on performance of idiomatic code.
OCaml - there is not much manpower behind it. All those Reason XYZ attempts to provide javascript syntax over it don't seem to have gotten traction. Tooling is pretty good considering how obscure the language is. They might need a modern interface to the toolchain, and an optimizing compiler seems to be being worked on. Lack of multicore is often cited as a drawback but it is being worked on, and Python doesn't do multicore either; I don't think multicore matters to 90% of people doing webdev.
F# - Would be nice if it was not confined to .NET ecosystem and had good native compilation story.
reply
---
a comment on https://lobste.rs/s/u2oufb/notes_on_smaller_rust
9 animatronic edited 1 year ago
| link |
I think much of this could be implemented in a “batteries included” library that has pervasive use of Arc<RefCell?