proj-oot-ootNotes38

https://github.com/nesbox/TIC-80 has 80k of RAM and accepts 64k of user code of any of: Lua, Moonscript, Javascript (using Duktape, which claims to be full JS, not a subset), Wren, and Fennel.

---

racket seems to have good metaprogrammy stuff, mb just implement in Racket, at least a prototype

that might be good for our core language and up, but what about the lower levels?

makes me wonder, how is Racket itself implemented? maybe they already did what we want to do. So, i took a look. They recently switched to be implemented on top of Chez Scheme (in the source code the version on top of Chez Scheme is called Racket CS, and the old version is called Racket BC (Before Chez, i assume)). Apparently before that they had a large C core which made the Racket implementation unwieldy to work with, but they (the Racket team) say that Chez Scheme has a much smaller C kernel than the C core that the old version of Racket used to have, and in general they say Racket CS is now easier to improve than Racket BC was.

So, what is this small C kernel in Chez Scheme? Unfortunately i did not find it to be very obvious or documented. Furthermore i didn't find documentation for a 'core lisp' within Chez Scheme.

Chez Scheme does have a file called "scheme.h" but i think it is only for FFI, for C programs to include that are providing interop with Chez, rather than a core of Chez, though i could be wrong. It is described in https://cisco.github.io/ChezScheme/csug9.5/csug9_5.pdf section 4.8. C Library Routines pdf page 94.

I've heard that Chez Scheme uses the nanopass compiler framework. https://andykeep.com/pubs/dissertation.pdf describes the Chez Scheme nanopass implementation a bit, with an overview particularly in section 3.3.3, Workings of the new compiler. This section says that there are "approximately 50 passes and approximately 35 nanopass languages"! Lots of passes are fine, but 35 languages is too much to get one's head around.

I think this reveals another goal that i have for Oot. The (reference) compiler should be understandable by newcomers. I thought i liked the idea of nanopass compilers because each pass is simple, but if the cost is to have 35 layers to learn about, that's just too much. Now, i'm sure most of these 35 languages are just simple variations on the previous one, but a newcomer first encountering the code doesn't know which ones are major transitions and which aren't so this just makes things more confusing (although perhaps really good documentation could help). Having a smaller number of language layers helps with that.

I also realized something in the way i think about these layers (although i think i've realized, and written about, this before). Part of the value of a 'bytecode' sort of thing is that it makes clear what functionality is in a given layer of the language; you can mostly see what it does by reading the list of instructions. The functionality of a lower bytecode layer (like Boot and LOVM, but maybe not so much JVM, CLR, or OVM) outside of the bytecode instructions themselves is trivial; unlike, say, C, in which stuff like the functionality of maintaining local variables is not in any single core library function but creeps in via the language itself, or, worse, various Lisps, which support closures and GC.

The goal of an understandable (and small) implementation is also why i don't like LLVM's encoding, which has things like variable-width integers (https://releases.llvm.org/8.0.1/docs/BitCodeFormat.html#variable-width-value); i feel like things are easier to understand if you can naively hack on the object file with a hex editor to some extent. And why i don't like LLVM's requirement that the compiler construct an SSA CFG before handing off the code; i think that's great for LLVM's goal of being an optimizer backend, but it's poor for my goal of a small reference compiler that is quickly comprehensible. I imagine that the Oot compiler could have an optional optimization flow which would do stuff like that, but a newcomer could more quickly understand the semantics by turning all that stuff off and looking at Oot's basic 'reference compiler flow'.
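For concreteness, variable-width integer encodings in this family look roughly like LEB128. Here's a sketch in Rust of plain unsigned LEB128 (not LLVM's exact VBR format, which uses stream-defined chunk widths), showing why bytes in the object file stop corresponding to fixed fields:

```rust
// Minimal unsigned-LEB128 encoder/decoder, to illustrate variable-width
// integer encodings (LLVM's VBR is similar in spirit, though its chunk
// width is stream-defined rather than fixed at 7 bits).

fn leb128_encode(mut value: u64) -> Vec<u8> {
    let mut out = Vec::new();
    loop {
        let byte = (value & 0x7f) as u8; // low 7 bits
        value >>= 7;
        if value == 0 {
            out.push(byte); // high bit clear: last byte
            break;
        }
        out.push(byte | 0x80); // high bit set: more bytes follow
    }
    out
}

fn leb128_decode(bytes: &[u8]) -> u64 {
    let mut value = 0u64;
    for (i, &b) in bytes.iter().enumerate() {
        value |= ((b & 0x7f) as u64) << (7 * i);
        if b & 0x80 == 0 {
            break;
        }
    }
    value
}

fn main() {
    // 300 encodes as [0xAC, 0x02]: you can no longer eyeball field
    // boundaries in a hex dump, because widths depend on the values.
    let enc = leb128_encode(300);
    assert_eq!(enc, vec![0xac, 0x02]);
    assert_eq!(leb128_decode(&enc), 300);
    println!("{:02x?}", enc);
}
```

With fixed-width fields a hex editor shows you the structure directly; with this kind of encoding you have to decode from the start to find anything.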

By the way, i think the figures and (colorized) discussion in the 'Approach' section of http://www.cs.utah.edu/~mflatt/racket-on-chez-jan-2018/ are great for an architecture overview, we should do that sometime.

---

for some reason very late last night i had the inclination to look up that weird mathy intermediate language Om and its ilk. I had some trouble finding it because it seems to have changed its name and/or been replaced by other stuff. I finally found it all and put my notes on this archeology in section 'HTS, Pure, Infinity, EXE, Om project archeology' in plChProofLangs.

My conclusion is:

" Om as a programming language has a core type system, thePTS∞— the pure type system with the infinite numberof universes. This type system represents the core of the language. Higher languages form a set of front-ends to thiscore. Here is example of possible languages: 1) Language for inductive reasoning, based on CiC? with extensions; 2)Homotopy Core with interval [0,1] for proving J and funExt; 3) Stream Calculus for deep stream fusion (Futhark);3) Pi-calculus for linear types, coinductive reasoning and runtime modeling (Erlang, Ling, Rust). These languagesdesugar toPTS∞as an intermediate language before extracting to target language4.Not all terms from higher languages could be desugared to PTS. As was shown by Geuvers[8] we cannot buildinduction principle inside PTS, we need a fixpoint extension to PTS. And also we cannot build the J and funExt terms.But still PTS is very powerful, it’s compatible with System F libraries. The properties of that libraries could be provenin higher languages with Induction and/or [0,1] Homotopy Core. Then runtime part could be refined to PTS, extractedto target and run in an environment.We see two levels of extensions to PTS core: 1) Inductive Types support; 2) Homotopy Core with [0,1] and itseliminators. We will touch a bit this topic in the last section of this document " -- https://raw.githubusercontent.com/groupoid/pure/1133b524a241bf381f10699b9c08d05acf81a99a/doc/om.pdf

" PTS

    PI

MLTT

    PI
    SIGMA
    ID
    INDUCTION

HTS

    PI
    SIGMA
    PATH
    HIT
    COMP" -- https://groupoid.space/homotopy/

" PTS

    PI

MLTT

    PI
    SIGMA
    ID
    INDUCTION

CUBICAL

    PI
    SIGMA
    PATH
    INDUCTION
    COMP
    GLUE" -- https://web.archive.org/web/20190102060830/https://groupoid.space/mltt/infinity/

Regarding HTS/MLTT/Infinity/EXE, these seem to be layers on top of Om that compile to Om, but possibly also adding stuff like induction, J, funExt, that cannot be built on top of PTS according to https://raw.githubusercontent.com/groupoid/pure/1133b524a241bf381f10699b9c08d05acf81a99a/doc/om.pdf , and maybe pi calculus and stream calculus stuff is included in that, i'm not sure. The old EXE page, https://web.archive.org/web/20190102060830/https://groupoid.space/mltt/infinity/, contains spawn, receive, send, but the new https://groupoid.space/homotopy/ page does not.

The old EXE page has a section on 'Effects':

" Effect

Effect syntax extensions defined basic process calculus axioms, IO and exception handling.

    data Effect: * :=
         (receive: Receive → Effect)
         (spawn: Spawn → Effect)
         (send: Send → Effect)
         (try: Exception → Effect)
         (raise: Exception → Effect)
         (write: File → Effect)
         (read: File → Effect)

Process

  record Process: (Sigma: *) → (X: *) → * :=
         (action: Sigma → X → GenServer X)

Spawn

  record Spawn:
         (proc: Process)
         (raise: list Eff)

Send

  record Send: (Sigma: *) → * :=
         (message: Sigma)
         (to: Process Sigma)

"

Not sure what the difference between MLTT and HTS is, and which of them, if any, correspond to Infinity and to EXE.

The new page https://groupoid.space/homotopy/ says of MLTT (superscript infinity) "In its core it has only comp primitive (on empty homogeneous systems) and CCHM connections (for J computability).". I'm guessing the 'CCHM connections (for J computability)' refers to that same J thing that the old Om paper said can't be derived from PTS. Not sure if the 'comp primitive' is what compiles to Om (which is suggested by its being the only other thing outside of 'CCHM connections'), or if it's another extension that can't be compiled to Om (which is suggested by the table on that page, which looks like a table of primitives, which has PI in both PTS and MLTT). It's also confusing that that table puts COMP in the HTS section, NOT in the MLTT section.

Also, the old stuff seems to talk slightly more about programming language stuff, whereas the new https://groupoid.space/homotopy/ sounds like its purpose is to "compute all the MLTT [rules]" and to have "full HoTT computability". I think that's more likely a difference in writing style than a difference in goals, but i'm not sure.

My conclusions so far are:

---

so another rundown on the current layer idea:

Oot (3 profiles: std, small, nostdlib)

Preoot Oot Core OVM LOVM with standard libraries --- Lo LOVM --- LOVM assembly --- Lo (HLL thin layer on top of LOVM) Boot (with BootX? profiles) --- Boot assembly
Metaprogramming
Lowering, simplified syntax
Main implementation of Oot semantics
implementation of low-level stuff
simple compilation

Purposes:

Above this point I'll finally get to start working on my dreams for Oot syntax and semantics, instead of generic implementation-y stuff!

Below this point we have a generic toolbox of 'programming language implementation technologies', not very specific to Oot and generally useful for creating portable programming languages that care more about simplicity than efficiency. Below this point we have unsafe languages with C-like wild undefined behavior. Above this point we have safe languages with mandatory bounds checking, suitable for security and sandboxing (though, given CPU bugs, security boundaries may still have to rely at a low level on actual separate OS processes), without wild undefined behavior (only 'controlled crashes'). Above this point we have stuff that is less likely to be reused outside of the Oot project. Highly optimized or highly platform-interoperable implementations will probably skip everything below this point and directly implement either OVM or Oot Core on the platform (it may be hard to go higher than that because Oot metaprogramming will operate on either the Oot Core or Preoot representation).

Either the OVM or Oot Core (probably OVM) introduces things like partial function application, lazy sequences/iterators, copy-on-write. However OVM is probably also somewhat concerned with efficiency and so has a lot of static-ness (think Java, C#, Wren, Haskell).
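As a concrete analogue of the copy-on-write idea (just an illustration in today's Rust, not OVM code): Rust's Rc::make_mut clones shared data only at the moment someone writes through an aliased handle:

```rust
use std::rc::Rc;

// Copy-on-write append: if `shared` is aliased, make_mut clones the
// vector before mutating; if it is uniquely owned, it mutates in place.
fn cow_append(shared: &Rc<Vec<i32>>, x: i32) -> Rc<Vec<i32>> {
    let mut copy = Rc::clone(shared); // cheap: bumps a refcount, no data copy
    Rc::make_mut(&mut copy).push(x);  // the actual clone happens here, lazily
    copy
}

fn main() {
    let a = Rc::new(vec![1, 2, 3]);
    let b = cow_append(&a, 4);

    assert_eq!(*a, vec![1, 2, 3]);    // original untouched
    assert_eq!(*b, vec![1, 2, 3, 4]); // writer got its own copy
}
```

A layer like OVM could bake this behavior in so that user code never sees the Rc plumbing.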

Either Oot Core or Pre-oot is sort of a Lisp-level AST language which is standardized and is the target of Oot metaprogramming manipulation (i haven't decided yet which one).

---

the_duke 1 day ago [–]

> Not All Programming is Systems Programming

Personally I often use Rust for "non systems-programming" tasks, even though I really wish a more suitable language existed.

Rust has plenty of downsides, but hits a particular sweet spot for me that is hard to find elsewhere:

So I often accept the downsides of Rust, even for higher level code, because I don't know another language that fits.

My closest alternatives would probably be Go, Haskell or F#. But each don't fit the above list one way or another.

reply

jaggirs 22 hours ago [–]

I feel like it should be trivial to make a 'scripting' variant of rust. Just by automatically wrapping values in Box/Rc when needed a lot of the cognitive overhead of writing Rust could be avoided. Add a repl to that and you have a highly productive and performant language, with the added benefit that you can always drop down to the real Rust backbone when fine-grained control is needed.

reply
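A sketch of what that automatic Box/Rc wrapping would buy, in today's Rust (my illustration, not from the thread): Rc<RefCell<T>> gives freely aliased, mutable values, trading compile-time borrow checking for runtime checks:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A "scripted" value: shared ownership + interior mutability, so the
// usual ownership/borrowing bookkeeping disappears (borrows are
// checked at runtime instead of compile time).
type Val<T> = Rc<RefCell<T>>;

fn val<T>(x: T) -> Val<T> {
    Rc::new(RefCell::new(x))
}

fn main() {
    let names = val(vec!["ada".to_string()]);
    let alias = Rc::clone(&names); // freely aliased, as a scripting language would allow

    alias.borrow_mut().push("grace".to_string()); // mutate through either handle

    assert_eq!(names.borrow().len(), 2);
    assert_eq!(names.borrow()[1], "grace");
}
```

The hypothetical "scripting Rust" would insert the `val(...)`, `Rc::clone`, and `borrow_mut()` calls for you.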

grandmczeb 21 hours ago [–]

Check out Rune: https://rune-rs.github.io/

It’s more of a prototype, but I think it’s in the direction you’re describing.

reply

aidanhs 19 hours ago [–]

Similar to GP, I too have been wondering about an Rc'd Rust.

Unfortunately Rune and Dyon[0] are dynamically typed, which isn't so attractive to me.

More promising are Gluon and Mun, both of which are statically typed. Of these two, Gluon has a somewhat alien syntax if you're coming from Rust (it notes it's inspired by Lua, OCaml and Haskell) so Mun is probably a better choice...but it seems very early, and the website notes as much (to serve the needs of a Rust-scripting-language I'd want seamless interop between it and Rust, which isn't quite there).

So I don't think there's anything in this space right now, but there are some promising options.

If you're willing to go a little further afield, I'm kinda interested in assemblyscript[3] - it's 'just' another WASM-based language so it's not a huge leap of imagination to believe there could be tooling to enable the seamless Rust interop. Just a matter of effort!

[0] https://github.com/PistonDevelopers/dyon [1] https://github.com/gluon-lang/gluon [2] https://github.com/mun-lang/mun [3] https://www.assemblyscript.org/

reply

athriren 18 hours ago [–]

gluon looks great, but the book site is down for some reason, which is unfortunate since i am looking for something nearly exactly like this.

reply

yashap 1 day ago [–]

You might like Scala and/or Kotlin. You listed Go, but Go’s type system is very weak, as is Go’s support for immutability, two problems that Scala and Kotlin don’t share.

reply

sanderjd 23 hours ago [–]

At one point in time, I thought Scala would be this, but it very much isn't. It feels bulky and lacking orthogonality to me, and its tooling leaves a lot to be desired. (Note: this might be true of Rust too once it is as old as Scala.)

Kotlin, though, yep, big fan.

reply

terhechte 23 hours ago [–]

Did you try Swift? It is conceptually and syntax wise very similar to Rust.

reply

---

amelius 23 hours ago [–]

> Expressive, pretty powerful, ML and Haskell inspired type system

It's great that they use this, but it's still difficult to program in a purely functional style in Rust the way you would in, say, Haskell, because of memory management. Closures can create memory dependencies which are too difficult to manage with Rust's static tools.

reply

ohazi 23 hours ago [–]

This is definitely true, but you can often get surprisingly far by just boxing, cloning, Rc-ing, etc. whenever you hit something like this.

reply
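Concretely (a toy of my own, not from the thread), cloning into a closure is the typical move here: it costs a copy but decouples the closure's lifetime from the original value:

```rust
fn main() {
    let config = String::from("verbose=true");

    // Cloning into the closure avoids tying its lifetime to `config`,
    // which a reference-capturing closure would do.
    let cfg = config.clone();
    let make_logger = move |msg: &str| format!("[{}] {}", cfg, msg);

    // `config` is still freely usable here precisely because we cloned.
    assert_eq!(config, "verbose=true");
    assert_eq!(make_logger("hi"), "[verbose=true] hi");
}
```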

acomar 20 hours ago [–]

the bigger issue is around the combination of closures, generic parameters, and traits. because higher-rank types can't be expressed and closures are all `impl Fn()`, you start to get an explosion in the complexity of managing the types and implementing the trait. try implementing something in the "finally tagless" style for an example of the kinds of things that can go wrong really fast. `Box` and `Rc` don't get you out of it and the resulting implementation is brittle and boiler-plate heavy in actual use.

... (comment goes on to give an example) ...

---

" To give a feel for the language, here’s a Scala implementation ofnatural numbers that does not resort to a primitive number type.

trait Nat { def isZero: Boolean; def pred: Nat; def succ: Nat = new Succ(this); def + (x: Nat): Nat = if (x.isZero) this else succ + x.pred; def - (x: Nat): Nat = if (x.isZero) this else pred - x.pred; }

class Succ(n: Nat) extends Nat { def isZero: Boolean = false; def pred: Nat = n }

object Zero extends Nat { def isZero: Boolean = true; def pred: Nat = throw new Error("Zero.pred"); } " -- [1]
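For comparison, a rough Rust analogue of the same Peano encoding (my sketch, not from [1]):

```rust
// Peano naturals without a primitive number type, mirroring the Scala
// Nat above: Zero and Succ(n) as an algebraic data type.
#[derive(Clone, PartialEq, Debug)]
enum Nat {
    Zero,
    Succ(Box<Nat>),
}

impl Nat {
    fn succ(self) -> Nat {
        Nat::Succ(Box::new(self))
    }
    // Same recursion as the Scala `+`: peel one Succ off `x` per step.
    fn add(self, x: Nat) -> Nat {
        match x {
            Nat::Zero => self,
            Nat::Succ(p) => self.succ().add(*p),
        }
    }
    fn to_u32(&self) -> u32 {
        match self {
            Nat::Zero => 0,
            Nat::Succ(p) => 1 + p.to_u32(),
        }
    }
}

fn main() {
    let two = Nat::Zero.succ().succ();
    let three = Nat::Zero.succ().succ().succ();
    assert_eq!(two.add(three).to_u32(), 5);
}
```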

---

i heard some praise for Scala recently and so i reviewed my notes on ppl's Scala opinions. The reasons ppl didn't like Scala appear to be:

the ones i care about the most for Oot are:

e.g. "The language and even the basic libraries are incredibly complex. Here's an example, a signature for List.map: final def map[B, That](f: (A) ⇒ B)(implicit bf: CanBuildFrom?[List[A], B, That]): That" https://www.reddit.com/r/haskell/comments/3h7fqr/what_are_haskellers_critiques_of_scala/cu50l4x/?utm_source=reddit&utm_medium=web2x&context=3

" Maintaining and enforcing inheritance while dealing with erased types can be a real nightmare -- as these APIs show. The root cause of this is the JVM and Java interop. They aren't written this way by choice. level 3 [deleted] 10 points · 5 years ago

Ahm no… This example has particularly nothing to do with either inheritance or type erasure. There is an argument about decreasing the amount of mixins in the collections, but that goes for the implementation and not so much the API. And Martin Odersky acknowledged that a bunch of collection methods could indeed go into extension methods/ type classes if the API was to be redesigned. So it was a choice and not forced by the JVM.

"

" Martin did a talk on this, so the context is that it wasn't possible to encode map in a pure form in Scala because it wasn't possible to implement map to work with things like Array's to work with Java interopt. Java doesn't have value types, but Array's only work with primitives, so it isn't possible to map an Array that returns a non-primitive value without some implicit builder (which is your CanBuildFrom?)

There are arguably some cases where map is used where it shouldn't be (i.e. on a Set), but the primary reason was for Scala/Java compatibility

"

what ppl like:

---

so my conclusion from the above is:

---

also should checkout Kotlin, which receives lots of praise often

---

" The Shallow End/Deep End False Dichotomy

It has been touted that Scala allows you to wade into the shallow end of the pool, using only the features that you’re comfortable with....Sounds good, right? Until you realise that, by accident or not, each library developer pulls you out of the shallow end, throws you straight into the deep end...As soon as you want to do any production, business web application development, there’s a bunch of stuff you’re probably going to need to do, for example, route HTTP requests, make database calls, serve HTML and JSON/XML...As alluded to previously, the people who like to release libraries for new languages also like to experiment with language features. Unless the goal of their project is to cater to Scala noobs, they’ll cram every clever use of Scala into the client facing part of their library. Now is the point where you take a deep breath and hold it; you’re in deeper water than you’re capable of swimming in. "

---

2 points by bshanks 2 days ago

parent [–]on: Why Not Rust?

I'd be interested to hear your thoughts on why Go might be better than Python for business logic.

reply

llimllib 21 hours ago [–]

Mainly that it’s just so aggressively boring (which I love). No exceptions means you have to deal with your errors all the time, and it’s just generally very easy, when dropped into a random spot in the code, to figure out what’s happening. Very little magic.

reply

---

higerordermap 9 hours ago [–]

I don't think zig is very well suited for / should target web dev.

There a GC'd language with modern features should do best. Imagine OCaml but modern and great tooling. That would be easy for developers, more productive to write, less type errors, also more productive because of IDE support, and much faster than current crop of scripting languages.

Sadly, there is a vacuum for such a language. Go is too much gruntwork and minor details, Rust is designed for high reliability system programming, Zig is designed as a system language in the spirit of C, OCaml is archaic and does not seem to improve by much.

The hopes in this area are

Nim - Nice python-like syntax, but feels a little unpolished, metaprogramming may be hindrance to IDE tooling and code readability on large codebases.

Crystal - Ruby like syntax and stdlib. LLVM compilation, very young language, no windows support, and (personal opinion) I think some work can be done on performance of idiomatic code.

OCaml - there is not much manpower behind it. All those Reason XYZ attempts to provide javascript syntax over it don't seem to have gotten traction. Tooling is pretty good considering how obscure language is. They might need a modern interface to toolchain, and an optimizing compiler seems to be being worked on. Lack of multicore is often cited as a drawback but it is being worked on, and Python doesn't do multicore either, I don't think multicore matters to 90% of people doing webdev.

F# - Would be nice if it was not confined to .NET ecosystem and had good native compilation story.

reply

---

a comment on https://lobste.rs/s/u2oufb/notes_on_smaller_rust

9 animatronic edited 1 year ago

link

I think much of this could be implemented in a “batteries included” library that has pervasive use of Arc<RefCell<Box<Any>>> or some such, spit balling here and/or embedded as a language within a macro.

I think a model like

    https://github.com/zdevito/terra
    https://github.com/titan-lang/

where a high level dynamic language is interwoven with a low level counterpart. This could well serve Rust, so that complexity and control are opt-in in a gradual way. Or maybe more tooling around starting projects in Dyon/Gluon with smooth affordances to dropping down into Rust. Where Rust is seen as the low level, correct, high performance base layer and application code can be written in a higher level language.

---

"

Tentative roadmap

This is a very preliminary roadmap towards Titan 1.0, where everything is subject to change, with things more likely to change the further they are in the roadmap: Supported

    control structures
    integers
    floats
    booleans
    strings
    arrays
    top-level functions
    early-bound modules
    multiple assignment/multiple returns
    FFI with C (C pointers, call C functions)
    records (structs) with methods
    maps

In progress

    first-class functions (still only in the top-level)

Next

    FFI with C, continued (C arrays, C structs)
    standard library that is a subset of Lua's standard library, built using the C FFI
    tagged variants (unions of structs with some syntax for switch/case on the tag)
    polymorphic functions
    for-in
    self-hosted compiler
    nested and anonymous first-class functions with proper lexical scoping (closures)
    ":" syntax sugar for records of functions
    classes with single inheritance, either Go/Java/C#/Swift-like interfaces/protocols or Haskell/Rust-like typeclasses/traits
    ":" method calls (not syntax sugar)
    operator overloading
    ...Titan 1.0!

" -- https://github.com/titan-lang/titan

---

"

    I would be careful to design and implement the compiler so that it could be embedded in different runtimes, and I would have two primary targets: an LLVM based one creating a standalone binary on the mainstream UNIX and windows OSes, and a WASM target that would be intended to use the host VM’s runtime for threads, garbage collection, and so on."

---

" Many people who use Rust for a bit - especially those who like the language but do not fall in love with it - feel a sense that there must be a smaller, simpler variation on the same theme which would maybe be a little less powerful, but would also be much easier to use. I agree with them, but I think they are almost always wrong about what would need to change. Here are some notes on where I would start to create that smaller Rust. What makes Rust work

People almost always start in precisely the wrong place when they say how they would change Rust, because they almost always start by saying they would add garbage collection. This can only come from a place of naive confusion about what makes Rust work.

Rust works because it enables users to write in an imperative programming style, which is the mainstream style of programming that most users are familiar with, while avoiding to an impressive degree the kinds of bugs that imperative programming is notorious for. As I said once, pure functional programming is an ingenious trick to show you can code without mutation, but Rust is an even cleverer trick to show you can just have mutation.

Here are the necessary components of Rust to make imperative programming work as a paradigm. Shockingly few other production-ready imperative languages have the first of these, and none of them have the others at all (at least, none have them implemented correctly; C++ has unsafe analogs). Unsurprisingly, the common names for these concepts are all opaque nonsense:

    “Algebraic data types”: Having both “product types” (in Rust structs) and “sum types” (in Rust enums) is crucial. The language must not have null, it must instead use an Option wrapper. It must have strong pattern matching and destructuring facilities, and never insert implicit crashing branches.
    Resource acquisition is initialization: Objects should manage conceptual resources like file descriptors and sockets, and have destructors which clean up resource state when the object goes out of scope. It should be trivial to be confident the destructor will run when the object goes out of scope. This necessitates most of ownership, moving, and borrowing.
    Aliasable XOR mutable: The default should be that values can be mutated only if they are not aliased, and there should be no way to introduce unsynchronized aliased mutation. However, the language should support mutating values. The only way to get this is the rest of ownership and borrowing, the distinction between borrows and mutable borrows and the aliasing rules between them.

In other words, the core, commonly identified “hard part” of Rust - ownership and borrowing - is essentially applicable for any attempt to make checking the correctness of an imperative program tractable. So trying to get rid of it would be missing the real insight of Rust, and not building on the foundations Rust has laid out.

However, once you get away from that, there is much about Rust that is circumstantial complexity, only because of Rust’s extensive efforts to support a low-overhead high-control systems programming use case. " -- https://boats.gitlab.io/blog/post/notes-on-a-smaller-rust/
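The three ingredients in that quote can be seen in a few lines of Rust (my illustration, not from the post):

```rust
// 1) Algebraic data types: no null, Option + exhaustive matching.
fn describe(x: Option<i32>) -> String {
    match x {
        Some(n) => format!("got {}", n),
        None => "nothing".to_string(), // compiler forces this arm to exist
    }
}

// 2) RAII: the destructor runs deterministically at end of scope.
struct Resource(&'static str);
impl Drop for Resource {
    fn drop(&mut self) {
        println!("cleaned up {}", self.0);
    }
}

fn main() {
    assert_eq!(describe(Some(3)), "got 3");
    assert_eq!(describe(None), "nothing");

    {
        let _r = Resource("file handle");
    } // _r dropped right here, no GC involved

    // 3) Aliasable XOR mutable: you can alias, or mutate, but not both at once.
    let mut v = vec![1, 2];
    let shared = &v[0]; // immutable borrow (alias)
    // v.push(3);       // would not compile while `shared` is still in use
    assert_eq!(*shared, 1);
    v.push(3); // fine now: the alias is no longer live
    assert_eq!(v.len(), 3);
}
```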

---

https://boats.gitlab.io/blog/post/notes-on-a-smaller-rust/ is a good read btw (and worth rereading sometime)

there is a followup:

https://without.boats/blog/revisiting-a-smaller-rust/

---

data types in R mentioned in https://r4ds.had.co.nz/transform.html:

" You might also have noticed the row of three (or four) letter abbreviations under the column names. These describe the type of each variable:

    int stands for integers.
    dbl stands for doubles, or real numbers.
    chr stands for character vectors, or strings.
    dttm stands for date-times (a date + a time).

There are three other common types of variables that aren’t used in this dataset but you’ll encounter later in the book:

    lgl stands for logical, vectors that contain only TRUE or FALSE.
    fctr stands for factors, which R uses to represent categorical variables with fixed possible values.
    date stands for dates."

---

"Keiichi once told that one of the reasons they could grow Ucc so rapidly was that they wrote Ucc by OCaml. OCaml allows you to manipulate tree structure so easily without any pointer bugs. In addition, of course that’s because they are awesome. BTW, for those who’re interested in the preprocessor part, we used Clang’s CPP. Did you know Clang’s CPP can be used as a standalone command? Keiichi has written his article about the compiler team in Japanese." -- [2]

---

more evidence that our target memory usage should be 'tens of kilobytes':

"Meanwhile, having been generally designed around modern machines with seemingly limitless resources, higher-level languages and environments are simply too full-featured to fit into (say) tens of kilobytes or into the (highly) constrained environment of a microcontroller. And even where one could cajole these other languages into the embedded use case, it has generally been as a reimplementation, leaving developers on a fork that isn’t necessarily benefiting from development in the underlying language." -- http://dtrace.org/blogs/bmc/2020/10/11/rust-after-the-honeymoon/

that quote also points out the value of having our core, supported implementations available on embedded systems

---

"We still need branches; they’re not going anywhere. Yeah, you can write branchless/low-branch code in some cases, such as graphics pipelines, but normal code can’t get away without them. And at that point, you’re going to want branch predictors. Maybe slower, more careful ones, but you’re still going to want them. And the second you do that, you don’t want conditional instructions: ARM originally worked that way, and it was awful for performance, because they get in the way of having good branch prediction." [3]

---

"I also remember a friend that worked at a failed processor company in the 1990's. Said the real reason the company went under was the academic designers thought programs 'do integer math' when modern programs 'do string manipulation'. "

---

i'm a little concerned by the GNU Lightning copyright license. It seems to be LGPL, which i'm fine with, but any hint of copyleft could scare off some unaware corporate contributors/users, and programming languages can't afford to do that.

---

keep it simple, keep it safe

---

"

ddragon 6 hours ago

parent flag favorite on: Julia: Dynamism and Performance Reconciled by Desi...

"Ever" could be a very long time, and lots of things can happen.

For example it might be possible to AoT compile parts of the code (as long as the types are fully defined) to export as a standalone library, and if that happens it would be beneficial to write the number crunching libraries for glue languages in Julia. I mean, there are already those (like DiffEq to Python and R), but if it's a binary that could have just as easily come from C it would be even better. And there are already works along this path, including for wasm deployment.

The multithreading in Julia is very fast and easy to use, and I'm sure it will become robust and safer as well as the language gets older, which can very well allow for an Akka style framework that allows for large software with both high reliability (thanks to stuff like supervision trees) and structured code (since Julia's ecosystem design favors small libraries composing with each other, the future of large Julia codebases could become the composition of many small high level but fast actors).

There is also lots of interests in work-arounds on the garbage collector, like improved support to immutable structures (like 1.5 improvements on structs with references to mutable structures). It's possible that you might be able to more reliably write parts of the code that do not allocate (or allocate in predictable ways), so you'll be even handle those realtime code loops. "

---

random thread mentioning tail calls, tail call implementation and callee-cleanup (vs varargs and caller cleanup), the use of mutual recursion and TCO, mutually recursive TCO for state machine, TCO and less debuggability due to missing stack traces, and comparison to lazy sequences, clojure and trampolines in place of TCO. I already read it and don't need to read it again, and there's no huge insights, but i'm just putting it here in case i'm ever looking for it: [4]
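For the record, the trampoline idea mentioned there is tiny: each would-be tail call returns a description of the next call instead of making it, and a loop 'bounces' until done, so stack depth stays constant without TCO (a generic factorial toy, not from the linked thread):

```rust
// A trampoline: tail calls become data, and a plain loop replaces the
// call stack, so no tail-call optimization is needed.
enum Step {
    Done(u64),
    More(u64, u64), // (n, acc): "call fact_step(n, acc) next"
}

fn fact_step(n: u64, acc: u64) -> Step {
    if n == 0 {
        Step::Done(acc)
    } else {
        Step::More(n - 1, acc * n) // this would be the tail call
    }
}

fn trampoline(mut n: u64, mut acc: u64) -> u64 {
    loop {
        match fact_step(n, acc) {
            Step::Done(v) => return v,
            Step::More(n2, acc2) => {
                n = n2;
                acc = acc2;
            }
        }
    }
}

fn main() {
    assert_eq!(trampoline(10, 1), 3_628_800); // 10!
}
```

The thread's point about debuggability still applies: the bounced calls never appear as stack frames.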

---

---

dvt 1 day ago [–]

Happy birthday Go! I'm currently working on a little side-project and I wanted to write (part of) it in Go. As a side-note, the last time I wrote Go was all the way back when I made a few small contributions to the http standard library, circa 2009. As soon as I set up my environment, my gears were already turning (coming from Python, TypeScript?, Java, etc.): how should I structure things? Where should this live? What should be calling what, and where should things be passed from and into?

And, like a breath of fresh air, I started remembering how Go works: it's a very simple, stupid language -- in the best possible way -- and it just works. You start writing code, and then you write more code. You write a function. And then you write another one. Slowly, your program gets bigger. Sometimes you need to split up functions.

And that's it. No fiddling with project structure, with taxonomies, and with hierarchies. I forgot what it felt like to just sit down and write some code. I've gotten used to battling tooling, fighting with opinionated frameworks, and having a zillion ways of doing something -- stuck in an eternal analysis paralysis.

I missed Go.

reply

throwaway894345 1 day ago [–]

To add to the list of things other languages make you think about: setting up a CI job for building your code packages and uploading them to the appropriate repository, setting up a CI job for building your documentation packages and a website to host them, deciding on a test framework, drafting a style document (and policing contributions accordingly), configuring a web server (e.g., tomcat, uwsgi, etc) if your project is a service, figuring out how to distribute your code to your target platform (while making sure all of the right runtime dependencies are installed), etc. This is all stuff you don't really have to learn or think about with Go because it's all solved for you out of the box.

reply

---

" In such closures, return, ?, await, and so on capture the control flow of their surrounding scope, just as all closures can capture the variables of their surrounding scope. This property has been used to great effect in some languages, of which Kotlin is the most successful statically typed example I’m aware of.

To give a better taste of the idea, these sorts of closures are sometimes proposed to be demarcated by sitting outside of the function arguments (as they do in Kotlin), like this for example:

collection.iter().filter() do elem {
    // the `?` here returns early from the function this
    // expression is contained in:
    elem.get_foo_bar()? == bar
}

This has the advantage of making these control flow preserving closures look more like most other blocks in the same control flow scope, by putting them outside of the parens. "

---

" I argue that we can still make a lot of progress and create better models for many things related to programming languages, including (but not limited to): unicode, interprocess communication (and in general compatibility of data across programs and languages), access to graphics and audio APIs, pass by value vs pass by reference, mutability vs immutability, transparency of memory models, cross-compatibility, database accessibility, etc.

"

---

marcosdumay 1 day ago [–]

> What's a possible way to design a language for collaboration besides encouraging ever more fine-grained modularization and code reuse?

Contracts, contracts and contracts!

Collaborative development depends on people not freely overstepping their bounds, and on clear communication of those bounds.

OOP creates some actually strong kinds of contracts through abstract interfaces and information hiding. Flexibly typed languages do the same with generics and specific types. Any language feature that creates more contracts will help collaboration.

reply

---

" For over a decade now, I've been pondering a "perfect" language.

Of course, such a thing is impossible, because we're always learning, and we can always do better, but it leads to some interesting avenues of thought. We can claim that certain concepts are mandatory, which can be used to reject languages that can never meet our ultimate requirements.

For example, a common complaint is the "impedance mismatch" between, say, query languages such as SQL or MDX, and object oriented languages such as Java and C#. Similarly, JSON has become popular precisely because it has zero "mismatch" with JavaScript -- it is a nearly perfect subset.

This leads one to the obvious conclusion: An ideal language must have subsets, and probably quite a few. If it doesn't, some other special-purpose language would be forced on the developers, and then our ideal language isn't perfect any more!

The way I envision this is that the perfect language would have a pure "data" subset similar to JSON or SQL tables, with absolutely no special features of any type.

The next step up is data with references.

Then data with references and "expression" code that is guaranteed to terminate.

Then data with expressions and pure functions that may loop forever, but have no side-effects of any kind.

At some point, you'd want fully procedural code, but not unsafe code, and no user-controlled threads.

At the full procedural programming language level, I'd like to see a Rust-like ownership model that's optional, so that business glue code can have easy-to-use reference types as seen in C# and Java, but if a library function is called that uses ownership, the compiler takes care of things. You get the performance benefit where it matters, but you can be lazy when it doesn't matter.

Interestingly, C# is half-way there now with their Span and Memory types, except that this was tacked on to an existing language and virtual machine, so the abstractions are more than a bit leaky. Unlike Rust, where the compiler provides safety guarantees, there are a lot of warnings-to-be-heeded in the dotnet core doco.

TL;DR: We don't need separate simplified languages, we need powerful languages with many simpler subsets for specialist purposes. " -- jiggawatts

the_french 1 day ago [–]

I think you would be interested in Noether, a full language design based around this principle: https://tahoe-lafs.org/~davidsarah/noether-friam4.pdf. I've always been sad that an implementation was never created. It's one of the most unique designs for a language in the past decade.

reply

jiggawatts 13 hours ago [–]

Thank you for the reference, it's very interesting to see that someone has had the same line of thought!

I'm reading through the presentation slides for Noether, and it almost exactly follows my line of thinking, but uses much more precise definitions and restrictions than my own hand waving.

However, it only goes "down" to a very pure functional language. I would argue that there is a need to take a step further, to a data-only language as well.

reply

shmageggy 1 day ago [–]

I agree. For an extreme example, by taking an extremely restricted subset of Python, Numba is able to get crazy speed-ups, automatic compilation to GPU kernels, etc. I've been waiting for someone to implement the subset of Python that allows for no GIL and huge speedups by cutting out some of the dynamic stuff. I'd bet 99% of the code I write would fit in that subset.

reply
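the "pure data subset" that jiggawatts describes already exists in miniature in Python: `ast.literal_eval` accepts only the literal (JSON-like) fragment of the language and rejects anything that computes via names or calls, so that layer is guaranteed to terminate. a sketch (`parse_data_only` is a hypothetical name, not a stdlib function):

```python
import ast

def parse_data_only(src):
    """Parse src using only Python's literal subset (numbers, strings,
    tuples, lists, dicts, sets, booleans, None). Expressions that compute
    via names or calls -- function calls, attribute access, arithmetic on
    variables -- are rejected with an exception."""
    return ast.literal_eval(src)

config = parse_data_only("{'name': 'oot', 'layers': [1, 2, 3]}")
```

the next layers up (data with references, terminating expressions, and so on) would each need their own checker, e.g. an `ast` walk that whitelists node types.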

---

colanderman on Nov 15, 2015 [–]

Completely agree. People think I'm weird because I think functions being first-class is a dumb idea.

But think about it: how often do you need to perform some complicated algorithm on a function, or store a dynamically generated function in a data structure? Almost always, first-class functions are simply used as a means of genericizing or parameterizing code. (e.g. as arguments to `map` or `fold`, or currying.) (Languages like Haskell and Coq that are deeply rooted in the lambda calculus are a notable exception to this; it's common to play interesting games with functions in these languages.)

You can get the same capabilities by making functions second-class objects, with a suitably simple (non-Turing-complete) language with which to manipulate them. That language can even be a subset of the language for first-class objects: the programmer is none the wiser unless he/she tries to do something "odd" with a function outside the bounds of its restricted language. Generally, there is a clearer, more efficient way to express whatever it is they are trying to do.

There is some precedent for this. In Haskell, typeclass parameters live alongside normal parameters, but aren't permitted to be used in normal value expressions. In OCaml, modules live a separate second-class life but can be manipulated in module expressions alongside normal code. In Coq, type expressions can be passed as arguments, but live at a different strata and have certain restrictions placed on them.

Unfortunately designing languages like this is hard. It's easy to just say "well, functions are a kind of value in the interpreter I just wrote; let's make them a kind of value in the language". This is the thinking behind highly dynamic languages like JavaScript, Python, and Elixir: the language is modeled after what is easy to do in the interpreter without further restriction. The end result is a language that is difficult to optimize and analyze.

It's a lot more work to plan out "well, I ought to stratify modules, types, functions, heterogeneous compounds, homogeneous compounds, and scalars, because it will permit optimizations someday". But these are the languages that move entire industries.
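colanderman's claim that first-class functions are almost always just parameterization is easy to check against everyday code; in Python, nearly every function value ends up in one of these stylized positions (an argument to map/fold, or partial application):

```python
from functools import partial, reduce

nums = [1, 2, 3, 4]

# parameterizing map and fold -- the function is never stored or analyzed
squares = list(map(lambda x: x * x, nums))
total = reduce(lambda acc, x: acc + x, nums, 0)

# currying via partial application
def add(a, b):
    return a + b

add5 = partial(add, 5)
```

a second-class-function language would only need to support exactly these positions, which is the design space the comment is pointing at.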

---

" I understand that adding

make a language "more powerful" by allowing the programmer to think at a higher level. " [5]

--- old

removed from boot_design.adoc

The plan is to build four layers, each of which is implemented upon the previous:

---

"

pierrebai 8 hours ago [–]

Ah, the funny things we read about in 2020.

In 1985... yes I said 1985, the Amiga did all I/O through sending and receiving messages. You queued a message to the port of the device / disk you wanted, when the I/O was complete you received a reply on your port.

The same message port system was used to receive UI messages. And filesystems, on top of drive system, were also using port/messages. So did serial devices. Everything.

Simple, asynchronous by nature.

As a matter of fact, it was even more elegant than this. Devices were just DLLs with a message port.

reply

beagle3 8 hours ago [–]

And it worked, well, with 512K memory in 1985.

The multitasking was co-operative, and there was no paging or memory protection. That didn't work as well (But worked surprisingly well, especially compared to Win3.1 which came 5-6 years later and needed much more memory to be usable).

I suspect if Commodore/Amiga had done a cheaper version and did not suck so badly at planning and management, we would have been much farther along on software and hardware by now. The Amiga had 4-channel 8-bit DMA stereo sound in 1985 (which with some effort could become 13-bit 2-channel DMA stereo sound), a working multitasking system, 12-bit color high resolution graphics, and more. I think the PC had these specs as "standard" only in 1993 or so, and by "standard" I mean "you could assume there was hardware to support them, but your software needed to include specific support for at least two or three different vendors, such as Creative Labs SoundBlaster and Gravis UltraSound for sound."

reply

atombender 7 hours ago [–]

Something else that's mentioned less than the hardware side is AmigaDOS and AmigaShell, which were considerably more sophisticated than MS-DOS, and closer to Unix in power (e.g. scripting, pipes, etc.).

The fate of Amiga is so infuriating. It's mind-boggling to think how Microsoft was able to dominate for so long with clearly inferior technology, while vastly superior tech (NeXT, Amiga, BeOS) lost out.

There are many such unhappy stories, and I often think about the millions of hours spent on building tech that should have conquered the world, but didn't. The macOS platform is a rare incidence of something (NeXT) eventually winning out, but the Amiga was a different kind of dead end.

reply

int_19h 1 hour ago [–]

If you think about it, the triumph of "good enough in the right place at the right time" describes most of the history of computing. Unix was that, as well, compared to many of its contemporary OSes. C was several steps back from the state of the art in PLs. Java, JavaScript, PHP... the list goes on and on.

reply

wikibob 1 hour ago [–]

“C was several steps back from the state of the art in PLs”

This very accurately describes Go

reply "

---

implemented Boot in Python again on Nov 26 (Thanksgiving) and Nov 27 (commit 39df6da; "pyboot3"). I'm much happier with it this time (compared to pyboot and pyboot2)!! I got it (the first version) done in 2 days (maybe half a day yesterday and a full day today), so i think i finally managed to make Boot simple enough for it to qualify as a weekend project (although maybe not if you have a lot going on on the weekend, or kids; otoh a better programmer than i could do it even faster -- i think plenty of ppl implement Lisp in a weekend). It's only 641 LOC of Python according to cloc. I tested/debugged it a little but i expect there's still lots of bugs.

The assembly and disassembly still took a surprisingly large proportion of the time. Luckily, most implementors won't have to rewrite that stuff. The actual VM was fairly straightforward, probably as close to 'dead simple' as a serious VM can get. I think that for someone who isn't up on bit twiddling, two's-complement, etc, it would probably still get pretty confusing in places, but i think that's unavoidable for this sort of thing.

implementing it in Python also reminded me how great Python is, with neat libraries like argparse that a young new language probably wouldn't have yet.

the only thing that i'm immediately considering changing is whether the syscalls pass arguments in registers (the current version) or on the stack (which would work better with BootX).
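a toy contrast of the two conventions (an illustration only, not Boot's actual instruction set or syscall table): with register passing the callee reads fixed registers; with stack passing the callee pops its own arguments, which composes more naturally with a stack-oriented layer:

```python
class ToyVM:
    """A hypothetical machine state, just enough to show the two calling
    conventions side by side."""
    def __init__(self):
        self.regs = [0] * 8
        self.stack = []
        self.output = []

def sys_print_regs(vm):
    # register convention: the argument sits wherever the ABI says, e.g. r0
    vm.output.append(vm.regs[0])

def sys_print_stack(vm):
    # stack convention: the callee pops its own argument
    vm.output.append(vm.stack.pop())

vm = ToyVM()
vm.regs[0] = 42
sys_print_regs(vm)       # register-passed
vm.stack.append(99)
sys_print_stack(vm)      # stack-passed
```

the stack version needs no agreement about which registers are argument registers, which is one reason it layers more cleanly under a stack-based encoding.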

next steps are probably (not necessarily in this order):

more validation that the current design of Boot may be feasible:

the next layer up:

of course this will all go very slowly b/c i don't have time to work on this project very often. I did a lot of stuff recently b/c i had had a large pile of inconsistent notes, as i had changed my mind about some things, and i wanted to get it down to a consistent state before i forgot what i had decided. But it's in a consistent state now (except that it's possible that my recent change to make the Boot encoding different from the BootX encoding isn't reflected in all the design docs and BootX docs, although i did try to update them); boot_reference.md and bootx_reference.md are both up to date now, and boot_reference.md is consistent with the implementation in pyboot3.

---

i don't think this is for me (at the OS level at least, at the language level sure) but it's interesting:

" technomancy 4 days ago

    The operating system should provide applications with a standard way of inputting and outputting structured data, be it via pipes, to files

I’d go so far as to say that processes should be able to share not only data structures, but closures.

    lojikil 3 days ago

This has been tried a few times, it was super interesting. What comes to mind is Obliq, (to some extent) Modula-3, and things like Kali Scheme. Super fascinating work.

kisonecat 4 days ago

Neat! Do you have a use-case in mind for interprocess closures?

    sjamaan 3 days ago

To me that sounds like the ultimate way to implement capabilities: a capability is just a procedure which can do certain things, which you can send to another process.

    technomancy 3 days ago

This is one of the main things I had in mind too. In a language like Lua where closure environments are first-class, it’s a lot easier to build that kind of thing from scratch. I did this in a recent game I made where the in-game UI has access to a repl that lets you reconfigure the controls/HUD and stuff but doesn’t let you rewrite core game data: https://git.sr.ht/~technomancy/tremendous-quest-iv "
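the capability-as-closure idea from this thread can be sketched even without a process boundary: a closure over a specific resource is an unforgeable right to exactly that resource (an in-process Python sketch; systems like Kali Scheme go further and serialize the closure across the wire):

```python
def make_read_cap(path):
    """Return a capability: a procedure that can read this one file and
    nothing else. The holder never sees the path or the filesystem API."""
    def read():
        with open(path) as f:
            return f.read()
    return read

def untrusted(read_cap):
    # this component received only the closure; it cannot open other
    # files, write, or discover which file it is reading
    return len(read_cap())
```

revocation can be layered on top by wrapping the closure in another closure that checks a flag before delegating.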

---

zge edited 4 days ago

I like to think about these things, but don’t have much hope. Here are my points:

    Networking shouldn’t be an afterthought. Distributed computing should not be as difficult as it is. Transparently interacting with or using resources from other systems should be something you don’t have to think about. I don’t care about hardware. I don’t care about CPU architectures. I don’t care about GPUs. I don’t care about drivers. All computers form a transnational Turing machine.
    Object capabilities should be a primitive concept. Imagine sharing a screen: that shouldn’t be the hassle it is; you should just be able to give someone read access to a segment or the whole display. The same applies to files (but we probably shouldn’t have files), hardware access, etc.
    Hypertext should be everywhere. The web has shown how attractive the idea is, but browsers are cursed to contain it, which is getting harder and harder. Project Xanadu had good ideas about this, and HTTP is a shallow copy. We need complex links that can point to miscellaneous parts of the system, and ideally also have back-references. You probably want a lot of cryptography for something like this, to avoid the centralisation of power.
    Logic and UI should be separate. Unix programs regard the standard output and input as the default UI; everything else is a side effect. Instead we should have the ability for a program (or procedure, algorithm, …) to produce complex data that doesn’t only mean something in a specific environment (PowerShell), but is universally understood. A terminal-like environment could display the results line-by-line, but they could also be transformed into a graphical representation using a table, or a graph (or whatever one might come up with later).
    Programming should not be a specialist’s affair. We have two classes of people: those who are at the mercy of computers, and those who can use them. This shouldn’t be the case, because the former are in a much weaker position, getting lost, getting overwhelmed, and sometimes even abused by those who know better. A proper operating system cannot be based on the lie that you don’t need to know anything to use a computer: to be a responsible user, you need to know some basics. A simple programming language (I would like something like Scheme, but that’s just me) should be integrated into the system, and the user shouldn’t fear it. It’s a direct link to the raw computational power that can be used.

In some sense, I like to think of it like Plan 9, without the Unix legacy, but that seems too simplistic. The interesting thing about Unix is that despite its limitations, it creates the fantasy of something better. In between its ideal power and its practical shortcomings, one can imagine what could have been.

spc476 4 days ago
    Networking: QNX was network transparent. It was wild running a command on computer 1, referencing a file from computer 2, piping the output to a program on computer 3 which sent the output to a device on computer 4. All from the command line. The IPC was fast [1] and network transparent, and used for just about everything.
    Hypertext: The only operating system I know of that uses extensive form of hypertext is TempleOS (I don’t think it’s HTML but it is a form of hypertext) that extends pervasively throughout the system.
    Logic and UI: There are bits and pieces of this in existence. AmigaOS has Rexx, which allows one to script GUI programs. Apple has (had?) something similar. Given that most GUI based programs are based around an event loop, it should be possible to pump events to get programs to do stuff.
    Programming: True, but there is Excel, which is a programming language that doesn’t feel like one. Given an easy way to automate a GUI (similar to expect on the command line), and teaching people that computers excel (heh) at repeated actions could go a long way in giving non-programmers power.

snej 3 days ago
    Transparently interacting or using resources from other systems should be something you don’t have to think about.

Then everyone will run headlong into the fallacies of distributed computing, unfortunately. This is why things like CORBA and DistributedObjects failed. Networking is not transparent, much as we would like it to be.

At least not in a normal imperative programming paradigm, like RPC. You can get a lot of transparency at a higher level through things like async replication, e.g. Dropbox or [plug] Couchbase Mobile. But even then you have to be aware & tolerant of things like partitions and conflicts.

    zge 3 days ago
    Could you elaborate on this? Why no files?

Maybe it’s clearer if I say file system. It might be too much to throw out the concept of a digital document, but I have come to think that file systems, as we know them on POSIX systems, are too low level. Pure text without hyperlinks would be a weird thing in an operating system where everything is interconnected, and directories shouldn’t have to be a simple tree (because tools like find(1) couldn’t do proper DFS in the 70’s); instead they could be any graph structure of sets, or even computed.
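the "directories as arbitrary graphs" point is easy to model: entries form a graph (here a DAG in which one file has two parents), and a generic graph traversal replaces find(1)'s tree walk. a sketch with hypothetical names:

```python
# each name maps to the set of entries reachable from it
dirs = {
    "root": {"projects", "archive"},
    "projects": {"oot.txt"},
    "archive": {"oot.txt"},   # same entry under two parents: a DAG, not a tree
}

def reachable(name, graph):
    """Depth-first traversal that tolerates shared children and cycles,
    which a naive tree walk would visit twice or loop on."""
    seen, todo = set(), [name]
    while todo:
        n = todo.pop()
        if n in seen:
            continue
        seen.add(n)
        todo.extend(graph.get(n, ()))
    return seen
```

a "computed" directory would just replace the set with a function returning one.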

---

" The rc shell is excellent, but can be streamlined a bit. The C API is not great - something a little bit closer to POSIX (with the opportunity to throw a bunch of shit out, refactor like mad, fill in some gaps, etc) would be better. The acid debugger has amazing ideas which have had vanishingly little penetration into the rest of the debugger market, and that ought to be corrected. gdb, strace, and dtrace could all be the same tool, and its core could be just hundreds of lines of code. The marraige of dtrace and acid alone would probably make an OS the most compelling choice in the market. " http://man.9front.org/1/rc https://plan9.io/sys/doc/acidpaper.html

--- name

Ootizens

---

"language design would become increasingly focused on “how to design great libraries.”" [6] (this essay is quoting someone else who said that and maybe even that person no longer agrees with it)

---

" Arc was released. It was a Lisp-1 with shorter names and fewer parentheses than most other Lisps, and some reader macros to make anonymous functions easier to define. ... It was, in all, underwhelming ... He had written that strings were premature optimization, and should be replaced by lists of characters. If he had done so, and made the characters full Unicode code points ... Graham had asked “[h]ow many times have you heard hackers speak fondly of how in, say, APL, they could do amazing things with just a couple lines of code? I think anything that really smart people really love is worth paying attention to.” But the undeniably succinct primitives and composition rules of array programming languages were nowhere to be found. ... If Arc had its own runtime, it could have supported durable closures ... In rejecting parallel computation as “premature optimization”, Graham also seemed to have eschewed any consideration of concurrency primitives, which are necessarily “fundamental operators” and critically important to writing clear, concise network-facing software.

My favorite near-miss is this observation Graham made in December 2001, only months after he began development on Arc:

    assoc-lists turn out to have a property that is very useful in recursive programs: you can cons stuff onto them nondestructively. We end up using assoc-lists a lot.

He had noticed that immutable maps are a useful data structure. Building on this, he might have found a paper published the previous year describing an efficient implementation for immutable maps, and made those a foundational data representation in his language. That, in any case, is what Rich Hickey did when creating Clojure, which was released three months before Arc and became the most widely-used Lisp ever made. Instead, assoc-lists remain a list-backed data structure which, Arc’s documentation informs us, makes them “inefficient for large numbers of entries”. " [7]
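Graham's observation -- that you can cons onto an assoc-list nondestructively -- is the essence of a persistent map; a minimal rendering in Python using tuples as cons cells (the linear-time lookup here is exactly the "inefficient for large numbers of entries" problem that Clojure's HAMT-backed maps fixed):

```python
def acons(key, val, alist):
    """Nondestructively extend: the old alist is untouched and still valid."""
    return ((key, val), alist)

def assoc(key, alist):
    """Walk the cons cells; the most recent binding shadows older ones."""
    while alist is not None:
        (k, v), alist = alist
        if k == key:
            return v
    return None

env0 = None
env1 = acons("x", 1, env0)
env2 = acons("x", 2, env1)   # shadows x, but env1 still sees the old binding
```

this sharing of tails is what makes the structure so convenient in recursive programs: extending an environment never invalidates an older one.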

---

	Database Startup Idea: Dynamic Relational
	1 point by tabtab on Sept 27, 2017 | 4 comments
	Over at the c2.com wiki, the idea of "Dynamic Relational" was explored. You DON'T need a DBA: columns and tables are Create-On-Write, unless you start adding constraints to make it act more like a traditional RDBMS: as a project matures, you can incrementally "lock it down". Such a tool would be great for start-up projects and situations where the requirements are still fuzzy.

Conceptually you can think of each row as an XML statement. For example, an employee record could be represented as:

  <employee lastname="Li" firstname="Joe" salary="120000" id="318"/>

This does not imply it has to be implemented as XML, it's just a handy conceptualization. It's possible to use SQL as the query language, with some minor tweaks. For example, one has to be careful about comparisons because of the implied type model. But other than type handling, users of a Dynamic Relational system would feel right at home because they can leverage most of their existing RDBMS and SQL knowledge.

If you ask for a non-existing column, such as "SELECT madeUpColumn FROM employee", the result column would be blank or null. That is unless one adds constraints that forbid undefined columns on a given table. Also, people disagree about whether and how to implement nulls. I'd suggest making null-handling a configuration switch.

If you did "SELECT * FROM employee", the result table would have all the columns found in the "employee" rows. In a typical CRUD application, you probably should use explicit columns in the SELECT clause.

Now, if somebody would just build such a database system...
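a create-on-write table along these lines is very little code; a sketch (hypothetical API, not any real system) in which columns exist implicitly and selecting an undefined column yields None:

```python
class DynTable:
    """Dynamic-relational sketch: columns are create-on-write, and a
    SELECT of an undefined column yields None instead of an error."""
    def __init__(self):
        self.rows = []

    def insert(self, **fields):
        # any keyword argument implicitly creates its column
        self.rows.append(dict(fields))

    def select(self, *cols):
        if not cols:  # SELECT *: the union of all columns seen so far
            cols = tuple(sorted({c for row in self.rows for c in row}))
        return [tuple(row.get(c) for c in cols) for row in self.rows]

emp = DynTable()
emp.insert(lastname="Li", firstname="Joe", salary=120000, id=318)
emp.insert(lastname="Ng", id=319)               # no salary column needed
rows = emp.select("lastname", "madeUpColumn")   # undefined column -> None
```

"locking it down" as the project matures would amount to registering per-table constraints checked in insert().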

---

" While the actual flavor of Lisp used has varied for me (Scheme, Common Lisp, Racket, Lisp-for-Erlang), the core has always remained the same: An s-expression based, dynamically typed, mostly functional, call-by-value λ-calculus-based language. "

---

icefox 35 hours ago

Working on making a programming language called Garnet. The goal is basically “what if Rust, but small?”. Take an OCaml/Haskell-ish type system, add move semantics and a borrow checker, and see if it can be made small and powerful enough to be usable as a lingua franca similar to how C is used: The ABI works anywhere (one way or another), there are compilers that work even in places they probably shouldn’t, building a simple compiler from scratch can be a medium-sized one-person project, etc.

Currently it looks doable, with your basic language primitives being functions, data types, and namespaces. I want to try to do something about some of the icky/weird parts of Rust (.iter(), .iter_mut(), .into_iter(), .borrow(), .as_ref(), .deref(), .to_owned(), fn(), Fn(), FnOnce(), FnMut(), oh my, how do you tell the differences between these things and where does it end…), which may or may not be possible. I want to nail things down to have as little Undefined Behavior as possible, which may be a fool’s errand but I’m fine with trying. And I want to try to preserve the Totally Awesome things such as macros/derives.

So far the hard part is dealing with generics, one way or another; I want to explore the design space between monomorphic and polymorphic generics a bit more. It looks like you can find a nice middle point of performance, simplicity and generalness with them… until you start adding type bounds of one kind or another at least. Swift provides some interesting inspiration there but not all of it seems suitable for a systems language, so I have work to do. We’ll see how it goes; I haven’t even started figuring out how to write a borrow checker yet. But it can compile and run a Fibonacci function, so how much harder could the rest of that be?

---

re-reading https://without.boats/blog/notes-on-a-smaller-rust/ , there are a lot of good thoughts in there. Maybe should take this as one of the many 'main inspirations'/lists of things to consider doing for Oot.

---

" There are some low-level features that Rust doesn't have a proper replacement for:

    computed goto. "Boring" uses of goto can be replaced with other constructs in Rust, like loop {break}. In C many uses of goto are for cleanup, which Rust doesn't need thanks to RAII/destructors. However, there's a non-standard goto *addr extension that's very useful for interpreters. Rust can't do it directly (you can write a match and hope it'll optimize), but OTOH if I needed an interpreter, I'd try to leverage Cranelift JIT instead.
    alloca and C99 variable-length arrays. These are controversial even in C, so Rust stays away from them." [8]
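the portable stand-in for `goto *addr` dispatch is a table of opcode handlers (the thing one hopes the compiler will turn a `match` into a computed jump for); a toy accumulator interpreter to illustrate the shape:

```python
def run(program):
    """program: a list of (opcode, operand) pairs. Dispatch goes through
    a handler table -- the high-level analogue of computed goto's label
    table, minus the guarantee of a single indirect jump per opcode."""
    acc, pc = 0, 0
    ops = {
        "add": lambda a, arg: a + arg,
        "mul": lambda a, arg: a * arg,
    }
    while pc < len(program):
        op, arg = program[pc]
        if op == "halt":
            break
        acc = ops[op](acc, arg)  # indexed dispatch, like goto *table[op]
        pc += 1
    return acc

result = run([("add", 2), ("mul", 10), ("add", 1), ("halt", None)])
```

in C the table would hold label addresses and each handler would end with `goto *table[op]`, which keeps the dispatch branch per-opcode and helps the branch predictor.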

---

i guess sometime i should make my updated flowchart for deciding which programming language to use at this time:

0. Is the project domain embedded, scientific computing, mobile, or general-purpose? Go to the corresponding section below.

  START EMBEDDED

EmbeddedOrLowlevel: Is code size, performance, or low-level functionality important? Yes: LowLevel No: EmbeddedButNotLowLevel

EmbeddedButNotLowLevel: Is Lua or Micropython available on the platform? Yes: Lua or Micropython No: LowLevel

LowLevel: Does LLVM support the platform? Yes: LLVM No: EmbeddedNoLLVM

LLVM: Is the project large? Yes: Rust No: Zig

EmbeddedNoLLVM: The project is not supported by LLVM. Is performance or code size extremely important? Yes: native assembly No: CSupport

CSupport: Is there a C toolchain for the platform? Yes: C No: NoCSupport

NoCSupport: Is this a big project? Yes: Implement a Forth interpreter or C compiler on the platform, then write your program in that No: native assembly

  END EMBEDDED
  START SCICOMP

SciCompNotStats: Is it an AI/machine learning project? Yes: Python No: SciCompNotAI

SciCompNotAI: Is it a statistical analysis project? Yes: R No: SciCompNotAINotStats

SciCompNotAINotStats: Is high performance required? Yes: Julia No: Python

  END SCICOMP

MobileApp: Is the phone app targeting Android, iOS, or both? Android: Kotlin iOS: Swift Both: not sure; either Xamarin, React Native, Flutter, or Kotlin Multiplatform

  GENERAL PURPOSE

GeneralPurpose: Is extremely high performance required? Yes: GeneralPurposeHighestPerf No: HLL

GeneralPurposeHighestPerf: Is maximum CPU performance required (for example, a demanding client-side application like a web browser), or does it not matter because the program is expected to be IO-bound (for example, backend server infrastructure)? Maximum CPU perf needed: Rust Otherwise: Golang

HLL: A general-purpose high-level language is indicated. Does the project require lots of concurrency? Yes: Elixir No: HLLNoConcurrency

HLLNoConcurrency: Is reliability in production extremely important? Yes: StaticHLL No: HLLNoConcurrencyNoReliable

HLLNoConcurrencyNoReliable: Is high performance very important? Yes: StaticHLL No: DynamicHLL

  START STATIC HLL

StaticHLL: A language with static typing is indicated. Is the project mostly symbolic manipulation (e.g. a compiler or interpreter or chess engine)? Yes: ReliableHLLSymbolic No: ReliableHLLNoSymbolic

ReliableHLLSymbolic: Would you prefer writing a little boilerplate in exchange for simplicity, or do you want maximal expressiveness? Boilerplate: OCaml Expressiveness: Haskell

ReliableHLLNoSymbolic: Do you strongly prefer the JVM ecosystem? Yes: Kotlin No: C#

  END STATIC HLL
  START DYNAMIC HLL

DynamicHLL: Is the project worth possibly learning a new language? Yes: UnreliableHLLWorthNewLang No: Python

UnreliableHLLWorthNewLang: Does the project require either a lot of abstraction, or a lot of symbolic manipulation (e.g. a compiler or interpreter or computer algebra system or chess engine or expert system)? Yes: UnreliableSymbolic No: UnreliableHLLNonConcurrentNonSymbolic

UnreliableSymbolic: Is it a logic programming project? Yes: Prolog No: Lisp

Lisp: Will you be writing or making heavy use of DSLs in your project? Yes: Racket No: UnreliableSymbolicNoDSL

UnreliableSymbolicNoDSL: Can you deal with the latency of the JVM? Yes: Clojure No: Common Lisp or Racket

UnreliableHLLNonConcurrentNonSymbolic: Python

  END DYNAMIC HLL

The nodes that I am the least confident about are:

symbolic:
- Lisp (and child UnreliableSymbolicNoDSL): i haven't used Lisp enough to be very certain if Lisp is better even for symbolic stuff, or to be very certain that i've chosen the right kind of Lisp for the right application
- ReliableHLLSymbolic: i haven't used these languages enough to know if they should be recommended
- UnreliableSymbolic: i don't know if there's something better than Prolog these days

other specific applications:
- GPU/OpenCL: as noted, i don't know enough about GPU programming
- SciCompNotAI: i don't know if R is worth it even for stats
- MobileApp: as noted, i don't know enough about cross-platform mobile dev

Also, note that Python and Lisp can be pretty reliable, especially with optional type annotations; note that the reliability question asked whether reliability is EXTREMELY important.

---

characters visible on a variety of platforms:

https://github.com/bshanks/cross-platform-terminal-characters/tree/chrome-on-Android

(note: i link to my fork b/c it subtracted some chars that weren't visible from Chrome browser on Android, but upstream only cares about terminals)

---

NiceWayToDoIT 5 hours ago [–]

What strikes me the most is that over a long time I have developed a crazy notion, a kind of build-up of frustration I guess, which can be expressed with the following words: "this should have been solved by now, and it should be simple, darn it, it is the 21st century..." I do not know how many times I wanted some functionality, and then I would realize it is not implemented.

Regardless of what my expectations are for some language or library, it simply is not there, so after banging my head and burning hours trying to solve a problem I would realize I need to use some dirty workaround (and I hate those so much I do not have enough words to describe it...)

Just a few examples from the JS world:

For each of the above there is some solution, but it looks like a dirty hack ... it seems that with time, instead of simplifying things, we are complicating things that should be simple.

The other day I was pondering how, in any other trade, people become masters as they become good with their tools; the tools become part of their body, so they focus more on art and creativity. In programming, except for those rare people blessed with a very good and fast memory, the tools are always changing ... as soon as you become comfortable there will be a new set of tools ... and I should not even start on the whac-a-mole of method and property renaming ...

reply


https://uploads.peterme.net/nimsafe.html

---

a much-loved language build and packaging system is Rust's Cargo

a much-liked cross-language build system appears to be Bazel

a somewhat-liked cross-language packaging system appears to be Nix. Note that Nix can be an OS (NixOS) or just a packaging system. However, https://news.ycombinator.com/item?id=26748696 describes a lot of practical problems with Nix; it sounds like the consensus may be that the core concepts behind Nix are great, but the implementation UX choices have a lot of 'rough edges'. That post comments on https://tech.channable.com/posts/2021-04-09-nix-is-the-ultimate-devops-toolkit.html, which talks about Nix used for packaging, not about NixOS. btw, https://nix.dev/ is a recommended intro to Nix

---

" Rust is Different (In a Good Way)

After you've learned enough programming languages, you start to see common patterns. Manual versus garbage collected memory management. Control flow primitives like if, else, do, while, for, unless. Nullable types. Variable declaration syntax. The list goes on.

To me, Rust introduced a number of new concepts, like match for control flow, enums as algebraic types, the borrow checker, the Option and Result types/enums and more. There were also behaviors of Rust that were different from languages I knew: variables are immutable by default, Result types must be checked they aren't an error to avoid a compiler warning, refusing to compile if there are detectable memory access issues, and tons more. "
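A minimal Rust sketch (mine, not from the quoted post) illustrating a few of the features it names: `match` for control flow, enums as algebraic types, the `Result` type, and immutability by default. The `Shape` example is invented for illustration.

```rust
// An enum whose variants carry data: an algebraic data type.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

// Returning Result forces callers to acknowledge the error case.
fn area(s: &Shape) -> Result<f64, String> {
    // match must cover every variant, or the compiler rejects the program.
    match s {
        Shape::Circle { radius } if *radius < 0.0 => Err("negative radius".to_string()),
        Shape::Circle { radius } => Ok(std::f64::consts::PI * radius * radius),
        Shape::Rect { w, h } => Ok(w * h),
    }
}

fn main() {
    // Immutable by default; `let mut` would be needed to modify `shapes` later.
    let shapes = vec![
        Shape::Circle { radius: 1.0 },
        Shape::Rect { w: 2.0, h: 3.0 },
    ];

    for s in &shapes {
        // Silently dropping a Result triggers a compiler warning
        // (`#[must_use]`); here we handle both arms explicitly.
        match area(s) {
            Ok(a) => println!("area = {a:.2}"),
            Err(e) => println!("error: {e}"),
        }
    }
}
```

The point of the sketch is the shape of the checks, not the geometry: the compiler enforces exhaustive matching and flags an ignored `Result`, which is the "must be checked" behavior the quote describes.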

---

csbartus 2 days ago [–]

Javascript and Typescript replacements.

Started learning Clojure / ClojureScript and keeping an eye on ML languages like ReScript and ReasonML.

I hope that soon I'll be able to never write JS/TS code again.

reply

haxiomic 2 days ago [–]

Take a look at haxe[0] for an ML-inspired compile-to-JS language. It's relatively mature these days and integrates with existing TypeScript definitions via dts2hx[1].

I talk about this more here: https://news.ycombinator.com/item?id=26084187

[0] https://haxe.org/

[1] https://github.com/haxiomic/dts2hx

reply

mdm12 2 days ago [–]

In a similar vein, Fable (an F#-to-JS compiler) is gaining a lot of momentum in the F# community. https://fable.io/

reply