proj-oot-ootNotes37

abiro 1 day ago [-]

PSA: porting an existing application one-to-one to serverless almost never goes as expected. Couple of points that stand out from the article:

1. Don’t use .NET; it has terrible startup time. Lambda is all about zero-cost horizontal scaling, but that doesn’t work if your runtime takes 100 ms+ to initialize. The only valid options for performance-sensitive functions are JS, Python and Go.

...

---

aasasd 4 hours ago [-]

I'm currently using a CPU-slow machine, and I've discovered everything is slow among the popular scripting languages, except Lua. I'd thought JS had good startup time, but some of my Lua scripts that do file I/O finish faster than node.js does a ‘hello world.’ Lua is also very lightweight on memory, to my knowledge. I think it can even do true multithreading, via libraries.

So, there's a Lisp on top of Lua (compiled, not interpreted): https://fennel-lang.org

It's deliberately feature-poor, just like Lua: you use libraries for everything aside from simple functions and loops. And it suffers somewhat from the double unpopularity of Lua and Lisp. But it works fine.

reply

---

" Dylan design meetings discussed creating a parser for an "infix syntax". The general idea was to create such a thing to mollify the non-Lispers. I even supported it. My argument was that it was a superficial matter, and if it attracted more users, it would be all to the good.

I was wrong.

I didn't appreciate just how helpful Lisp syntax is until I got hold of a Lisp that had an infix syntax. It was bulkier and more cumbersome. It was less pleasant to work with. It wasn't as easy to recognize the boundaries of expressions. Editors had a harder time working with it, so they weren't as nimble.

Of course, I could always switch back to the parenthesized syntax...until that was removed.

Okay, the surface syntax was clunkier, but it was still a Lisp underneath...until it wasn't.

The separation between development environment and delivered program got harder and faster. That evolution increasingly restricted what you could accomplish in the repl, until, finally, the repl disappeared altogether. At that point, Dylan really wasn't a Lisp anymore. My favorite Lisp had evolved into just another batch application compiler.

Moreover, it didn't work. The stated goal was to make the language more appealing to more working programmers, so that it would gain broader acceptance.

It didn't. It lost the things that made Lisp and Smalltalk programmers happy, and it didn't gain broader acceptance in the process.

I still like Ralph better than anything else, including Common Lisp, Scheme, Clojure, or Arc. But Ralph is extinct now, so I'll stick with those other Lisps until someone invents a better one. "

---

i guess there are sort of 3 levels of language specification:

1. an informal description
2. a more formal spec
3. an implementation

compared to the informal description, the more formal spec is less ambiguous. But compared to the implementation, the spec should typically be more ambiguous, because the spec should typically leave room for different implementations to do some things differently. For example, is a function object opaque, or can you look inside its representation? Some implementations may want it to be opaque (because it represents some compiled thingee at a lower level), while others may want to make it visible (for example, if the implementation is a self-hosting metacircular interpreter).
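
(a quick Python illustration of this point, my example not from the note above: CPython happens to expose function internals, while builtins stay opaque)

    # CPython lets you look inside a pure-Python function object...
    def f(x):
        return x + 1

    print(f.__code__.co_varnames)    # ('x',) -- the representation is visible

    # ...but a builtin exposes no such structure; it's an opaque compiled thing.
    print(hasattr(len, '__code__'))  # False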

---

dfranke 6 hours ago [-]

Reproduced from feedback that I gave pg on an earlier draft (omitting things he seems to have addressed):

When you say,

> But I also believe it will be possible to write efficient implementations based on Bel, by adding restrictions.

I'm having trouble picturing what such restrictions would look like. The difficulty here is that, although you speak of axioms, this is not really an axiomatic specification; it's an operational one, and you've provided primitives that permit a great deal of introspection into that operation. For example, you've defined closures as lists with a particular form, and from your definition of the basic operations on lists it follows that the programmer can introspect into them as such, even at runtime. You can't provide any implementation of closures more efficient than the one you've given without violating your spec, because doing so would change the result of calling car and cdr on closure objects. To change this would not be a mere matter of "adding restrictions"; it would be taking a sledgehammer to a substantial piece of your edifice and replacing it with something new. If closures were their own kind of object and had their own functions for introspection, then a restriction could be that those functions are unavailable at runtime and can only be used from macros. But there's no sane way to restrict cdr.

A true axiomatic specification would deliberately leave such internals undefined. Closures aren't necessarily lists, they're just values that can be applied to other values and behave the same as any other closure that's equivalent up to alpha, beta, and eta conversion. Natural numbers aren't necessarily lists, they're just values that obey the Peano axioms. The axioms are silent on what happens if you try to take the cdr of one, so that's left to the implementation to pick something that can be implemented efficiently.

Another benefit of specifying things in this style is that you get much greater concision than any executable specification can possibly give you, without any loss of rigor. Suppose you want to include matrix operations in your standard library. Instead of having to put an implementation of matrix inversion into your spec, you could just write that for all x,

    (or
     (not (is-square-matrix x))
     (singular x)
     (= (* x (inv x))
        (id-matrix (dim x))))

Which, presuming you've already specified the constituent functions, is every bit as rigorous as giving an implementation. And although you can't automate turning this into something executable (since you can straightforwardly specify a halting oracle this way), you can automate turning this into an executable fuzz test that generates a bunch of random matrices and ensures that the specification holds.
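
[my note: a minimal sketch of that fuzz-test idea in Python with numpy; spec_holds and the determinant-based singularity test are my stand-ins for the spec'd is-square-matrix and singular:]

    # For random x, check: (not square) or (singular) or x * inv(x) = I.
    import numpy as np

    def spec_holds(x):
        n, m = x.shape
        if n != m:                        # (not (is-square-matrix x))
            return True
        if abs(np.linalg.det(x)) < 1e-9:  # (singular x)
            return True
        # (= (* x (inv x)) (id-matrix (dim x)))
        return np.allclose(x @ np.linalg.inv(x), np.eye(n), atol=1e-6)

    rng = np.random.default_rng(0)
    for _ in range(1000):
        n, m = rng.integers(1, 6, size=2)
        assert spec_holds(rng.standard_normal((n, m)))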

If you do stick with an operational spec, it would help to actually give a formal small-step semantics, because without a running implementation to try, some of the prose concerning the primitives and special forms leaves your intent unclear. I'm specifically puzzling over the `where` form, because you haven't explained what you mean by what pair a value comes from or why that pair or its location within it should be unique. What should

   (where '#1=(#1# . #1#))

evaluate to? Without understanding this I don't really understand the macro system.

reply

cousin_it 1 hour ago [-]

This is similar to the feedback Dave Moon gave to PG's previous language, Arc, more than a decade ago. http://www.archub.org/arcsug.txt

"Representing code as linked lists of conses and symbols does not lead to the fastest compilation speed. More generally, why should the language specification dictate the internal representation to be used by the compiler? That's just crazy! When S-expressions were invented in the 1950s the idea of separating interface from implementation was not yet understood. The representation used by macros (and by anyone else who wants to bypass the surface syntax) should be defined as just an interface, and the implementation underlying it should be up to the compiler. The interface includes constructing expressions, extracting parts of expressions, and testing expressions against patterns. The challenge is to keep the interface as simple as the interface of S-expressions; I think that is doable, for example you could have backquote that looks exactly as in Common Lisp, but returns an <expression> rather than a <cons>. Once the interface is separated from the implementation, the interface and implementation both become extensible, which solves the problem of adding annotations."

This paragraph contributed a lot to my understanding of what "separating interface from implementation" means. Basically your comment is spot on. Instead of an executable spec, there should be a spec that defines as much as users need, and leaves undefined as much as implementors need.

reply
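
(a rough Python sketch of Moon's idea above, with invented names: expressions as an interface for construction, access, and pattern-testing, where the cons-style list representation is just one possible backing)

    from abc import ABC, abstractmethod

    class Expr(ABC):                       # the interface macros would program against
        @abstractmethod
        def head(self): ...                # the operator, e.g. 'if'
        @abstractmethod
        def args(self): ...                # the argument expressions
        def matches(self, head, nargs):    # pattern-testing via the interface
            return self.head() == head and len(self.args()) == nargs

    class ConsExpr(Expr):                  # one implementation: the classic list form
        def __init__(self, lst):
            self._lst = lst
        def head(self):
            return self._lst[0]
        def args(self):
            return self._lst[1:]

    e = ConsExpr(['if', 'test', 'then', 'else'])
    print(e.matches('if', 3))              # True; callers never touch _lst directly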

---

cousin_it 8 hours ago [-]

Thanks for the response! Which features of Bel do you think will contribute the most to making programs shorter or clearer, compared to other Lisp dialects?

reply

pg 7 hours ago [-]

It's so broadly the goal of the language that many things do, often in small ways. For example, it turns out to be really convenient that strings are simply lists of characters. It means all the list manipulation functions just work on them. And the where special form and zap macro make it much easier to define operators that modify things. The advantage of Bel is a lot of small things like that rather than a single killer feature.

Making bel.bel shorter was one of my main goals during this project. It has a double benefit. Since the Bel source is a Bel program, the shorter I can make it, the more powerful Bel must be. Plus a shorter source is (pathological coding tricks excepted) easier to understand, which means Bel is better in that way too. There were many days when I'd make bel.bel 5 lines shorter and consider it a successful day's work.

One of the things I found helped most in making programs shorter was higher order functions. These let you get rid of variables, which are a particularly good thing to eliminate from code when you can. I found that higher order functions combined with intrasymbol syntax could often collapse something that had been a 4 line def into a 1 line set.

reply
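
(a generic illustration of that last point in Python rather than Bel: higher-order functions eliminate the accumulator variable and collapse a multi-line def into a one-line binding)

    # A def with an explicit loop and accumulator variable...
    def total_len(xs):
        n = 0
        for x in xs:
            n += len(x)
        return n

    # ...collapses to a one-line binding once higher-order functions
    # remove the variables.
    total_len2 = lambda xs: sum(map(len, xs))

    assert total_len(["ab", "c"]) == total_len2(["ab", "c"]) == 3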

---

"The second thought is that lists smell like a type of stream. Successive calls to `next` on `rest` don't necessarily discern between lists and streams. The difference seems to be a compile time assertion that a list is a finite stream. Or in other words, a sufficiently long list is indistinguishable from an infinite stream (or generator)."

---

rntz 11 hours ago [-]

> 5. (where x)

> Evaluates x. If its value comes from a pair, returns a list of that pair and either a or d depending on whether the value is stored in the car or cdr. Signals an error if the value of x doesn't come from a pair.

> For example, if x is (a b c),
>
>     > (where (cdr x))
>     ((a b c) d)

That is one zany form.

1. How is this implemented?

2. What is the use of this?

3. What does (where x) do if x is both the car of one pair and the cdr of another, e.g. let a be 'foo, define x to be (join a 'bar), let y be (join 'baz a), and run (where a).

reply

pg 11 hours ago [-]

It's used to implement a generalization of assignment. If you have a special form that can tell you where something is stored, you can make macros to set it. E.g.

  > (set x '(a b c))
  (a b c)
  > (set (2 x) 'z)
  z
  > x
  (a z c)

which you can do in Common Lisp, and

  > (set ((if (coin) 1 3) x) 'y)
  y
  > x
  (y z c)

which you can't.

reply
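
(a rough Python analogy, mine not pg's, of generalized assignment: a "where"-like helper returns the location a value lives at as (container, key), and a generic setter writes through it)

    import random

    def loc(xs, n):
        return (xs, n - 1)            # 1-based index, like (2 x) in Bel

    def set_at(location, val):
        container, key = location
        container[key] = val

    x = ['a', 'b', 'c']
    set_at(loc(x, 2), 'z')            # like (set (2 x) 'z)
    print(x)                          # ['a', 'z', 'c']

    # the location expression can itself be computed, like ((if (coin) 1 3) x)
    set_at(loc(x, 1 if random.random() < 0.5 else 3), 'y')
    print(x)                          # ['y', 'z', 'c'] or ['a', 'z', 'y']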

---

waterhouse 9 hours ago [-]

A couple of things on my checklist for mathematical purity are (a) first-class macros and (b) no hardcoded builtins. It looks like Bel does have first-class macros. As for (b)...

> Some atoms evaluate to themselves. All characters and streams do, along with the symbols nil, t, o, and apply. All other symbols are variable names

The definitions of "ev" and "literal" establish that nil, t, o, and apply are in fact hardcoded and unchangeable. Did you consider having them be variables too, which just happen to be self-bound (or bound to distinctive objects)? nil is a bit of a special case because it's also the end of a list, and "(let nil 3 5)" implicitly ends with " . nil"; o might be an issue too (said Tom arguably); but apply and t seem like they could be plain variables.

P.S. It looks like you did in fact implement a full numerical tower—complex numbers, made of two signed rational numbers, each made of a sign and a nonnegative rational number, each made of two nonnegative integers, each of which is a list of zero or more t's. Nicely done.

reply

---

TekMol 5 hours ago [-]

    Bel has four fundamental data types:
    symbols, pairs, characters, and streams.

No numbers?

...In any case, I was mistaken. Numbers are defined.

---

pwpwp 1 hour ago [-]

No fexprs? https://web.cs.wpi.edu/~jshutt/kernel.html

reply

---

drcode 1 hour ago [-]

I think at the end of the day, the question is whether a compiler for this type of language could efficiently handle a function like distinct-sorted:

   > (distinct-sorted '(foo bar foo baz))
   (bar baz foo)

This is a function that usually requires efficient hash tables and arrays to be performant: a hash table for detecting the duplicates, an array for efficient sorting. However, both the hash map and array could theoretically be "optimized away", since they are not exposed as part of the output.

A language like Bel that does not have native hash maps or arrays and instead uses association lists would have to rely entirely on the compiler to find and perform these optimizations to be considered a usable tool.

reply
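
(a sketch of what distinct-sorted does, in Python, where the hash table (set) and array sort are built-in primitives)

    def distinct_sorted(xs):
        return sorted(set(xs))    # hash table for dedup, array sort for ordering

    print(distinct_sorted(['foo', 'bar', 'foo', 'baz']))   # ['bar', 'baz', 'foo']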

sooheon 45 minutes ago [-]

Interesting example of a unit test of sorts for language usability. Got any others?

reply

drcode 0 minutes ago [-]

I'm no language expert, but the three things I can think of that make Bel impractical without major compiler trickery are (1) lack of primitive hash tables (2) lack of primitive arrays (3) no support for tail call optimization (though that third thing is probably fixable with the right compiler tricks)

The other concern I have is the lack of a literal associative data structure syntax (like curly braces in Clojure). It seems that would negatively impact pg's goal of "code simplicity" quite a bit.

reply

---

some ideas from https://docs.python.org/3/whatsnew/3.8.html

" while (block := f.read(256)) != : process(block) "

and avoid stuff like

" m = f(many, args) while m: # do stuff with m m = f(many, args) # duplicate "

(in oot we'd probably just use =)

---

" we can't bootstrap openJDK on ARM. There's no source code for a JVM that compiles on ARM afaik. Though in theory we could package binary blobs from oracle and bootstrap JDK and Bazel from there, it means now our trust path for your critical build tool has a random Oracle blob in its trust path there that is extremely hard to get rid off. "

---

gamache 2 days ago [-]

Here are a few legitimate complaints about Elixir (and in some cases, its host VM, the BEAM):

reply

mmartinson 2 days ago [-]

I can take a stab at this. For context, my background is in Ruby, NodeJS, a bit of Golang and Java. I've been doing work with Elixir full time for the last 3 years.

I could probably write a novel about the things I like about Elixir, but here's what I'd caution about for someone looking at adopting it over another alternative.

Learning to be productive in Elixir isn't really a huge investment, even for someone with a strict OO background. It's pretty light conceptually compared to other FP languages, can be written in an imperative-ish style if needed, and has clear, simple patterns for code organization. For every place you would consider mutating, you tend to find primitives that make working with immutable data structures quite convenient.

reply

---

cpeterso 4 hours ago [-]

I like Python, but I often wonder how many developers use Python because they actually use dynamic language features versus just liking the language's clean syntax and library ecosystem.

---

regarding Rust

dunkelheit 7 hours ago [-] ... BTW this is a big pain point for me (unrelated to async). Code like this:

  let r = &mut self.field;          // `ref` is a Rust keyword, so call it r
  self.helper_mutating_another_field();
  do_something(r);

gets rejected because self.helper_mutating_another_field() will mutably borrow the whole struct. The workaround is either to inline the helper or to factor out a smaller substruct so that the helper can borrow just that, which doesn't always look good.

---

 jahaja 5 hours ago [-]

Go is such a productive language. Well deserved cred to its authors and community. Please, please hold firm against the inherent pressure from language theoreticians to add more features that increase complexity.

reply

jeffdavis 4 hours ago [-]

I think the tension is more between people writing executables and people writing libraries.

If you want your language to be suitable for any purpose by using an intricate web of libraries depending on each other, you need those language features and deep theory. Think haskell, lisp, rust, etc.

If you kind of know what people will use your language for, and you build in the most important functionality, you aren't nearly so dependent on libraries. You just make the language accessible and as productive as possible. Think Go, erlang, PHP, SQL, javascript, VB.

reply

---

" I know a lot of people don't really follow Swift, and it can be hard to understand what they've really accomplished without some context of what the language is like, so here's a TL;DR of the language's shape:

    Exists to replace Objective-C on Apple's platforms, oriented at application development
        natively interoperates with Objective-C
        has actual classes and inheritance
    At a distance, very similar to Rust (but "higher-level")
        interfaces, generics, closures, enums with payloads, unsafe escape hatch
        no lifetimes; Automatic Reference Counting (ARC) used for complex cases
        simple function-scoped mutable borrows (inout)
        Ahead-Of-Time (AOT) compiled
    An emphasis on "value semantics"
        structs/primitives ("values") are "mutable xor shared", stored inline
        collections implement value semantics by being Copy-On-Write (CoW) (using ARC)
        classes are mutably shared and boxed (using ARC), undermining value semantics (can even cause data races)
    An emphasis on things Just Working
        language may freely allocate to make things Work
        generic code may be polymorphically compiled
        fields may secretly be getter-setter pairs
        ARC and CoW can easily result in surprising performance cliffs
        tons of overloading and syntactic sugar" -- https://gankra.github.io/blah/swift-abi/

---

"In software development there are many concepts that at first glance seem useful and sound, but, after considering the consequences of their implementation and use, are actually horrifying. Examples include thread cancellation, variable length arrays, and memory aliasing. " -- https://nullprogram.com/blog/2019/11/15/ https://news.ycombinator.com/item?id=21553882

---

random programming language that compiles to Java

https://www.unitily.com/articles/boilerplate.html

---

jakub_g 1 day ago [-]

This. Far from being a fanboy of Java (and I write JS for a living), but you can generally quickly understand what's going on with the boring code, and can refactor mercilessly.

Whereas in huge JS codebases relying on framework magic you can really have a hard time understanding what's going on without plugging in a debugger (and even then it's still not easy). Trying to refactor legacy code that relies on `this` and prototypes is a nightmare.

reply

---

regarding generics in Go:

"Go is “You Ain’t Gonna Need It” taken to the extreme. Need is the operative word here. You won’t need generics, but you will almost assuredly want them. Ditto syntactic sugar for error handling, functional programming, and operator overloading."

---

i don't really like these ideas from Bel, but they're interesting:

bel has "When a function has a single parameter, its value will be a list of all the arguments supplied when the function is called"

and "Functional composition":

(def all (f xs) (~some ~f xs))

---

Bel says (from https://sep.yimg.com/ty/cdn/paulgraham/bellanguage.txt?t=1570993483 ):

" 2. (join x y)

Returns a new pair whose first half is x and second half is y.

> (join 'a 'b)
(a . b)
> (join 'a)
(a)

A pair returned by join will not be id to any existing pair. "

What is the purpose of the guarantee that different joins will return unique pairs that compare not equal in ID?
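
(my guess, not stated in the spec: pairs are mutable locations, so fresh identity matters to operators like where and set above; compare how Python constructors always allocate distinct objects)

    a = ['x']
    b = ['x']
    print(a == b)   # True  -- equal values
    print(a is b)   # False -- distinct locations, like pairs from separate joins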

---

stateofjs has some data on which new language features are most used:

https://2019.stateofjs.com/features/syntax/ https://2019.stateofjs.com/features/language/ https://2019.stateofjs.com/features/data-structures/

---

"The idea is to have a language which runs in a VM written in C, and allows you to easily access code which has to be in C. Sort of like uPython, except it can easily be used on any board with a C compiler with a small amount of setup, and C FFI is a first class citizen. Also coroutines masquerading as first class actors with message passing, since I always end up building that in C anyway. " -- caseymarquis 9 hours ago [-]

---

stuff that some ppl on HN think C should add:

That's exactly my complaint. I think it's missing things because if you added them in a way compatible with the C way of doing things then you would permanently cleave C from C++. Since C++ is a runaway train at this point detaching is probably a good idea now.

Me I want range types like Ada. Real array types. I think I want blocks/coroutines.

reply

agumonkey 4 hours ago [-]

> Me I want range types like Ada. Real array types. I think I want blocks/coroutines.

aka CDC

reply

rumanator 9 hours ago [-]

> Coding in C is like camping. It's fun for a while, but eventually you really miss things like flushing toilets and grocery stores.

The only issue I have with C is that there are no good reasons for some of those missing features to be missing.

Take, for example, namespaces. Would it be a problem to implement them as, say, implicit prefixes?

reply

AllanHoustonSt 9 hours ago [-]

Would it be possible to do this without changing how name mangling works?

reply

a1369209993 6 hours ago [-]

Actually, yes:

  #include <string.h>
  __prefix__ str;                       /* in scope: "","str" */
  strlen("Hi!");                        /* try "strlen",done,ignore "strstrlen" */
  len("Bye");                           /* no "len",try "strlen",done */
  __prefix__(foo_) { void bar(void); }  /* "foo_bar" */

reply

---

macros like __FILE__ and __LINE__
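
(a Python analogy of these: recover the current file and line from the caller's frame at runtime)

    import inspect

    def file_and_line():
        frame = inspect.currentframe().f_back   # the caller's frame
        return frame.f_code.co_filename, frame.f_lineno

    print(file_and_line())   # like (__FILE__, __LINE__) at this call site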

---

jupp0r 1 day ago

on: Programming Language Checklist

Is there anybody who would argue against:

Those seem to be universally loved features that new programming languages are required to have in order to still be appealing.

---

Beef language " Design goals

    High performance execution
        No GC or ref counting overhead
        Minimal runtime
        Compiled (no JIT delays)
    Control over memory
        Extensive support for custom allocators
        Enhanced control over stack memory
    Low-friction interop with C and C++
        Statically or dynamically link to normal C/C++ libraries
        Support for C/C++ structure layouts and calling conventions
    Prefer verbosity over conciseness when it aids clarity, readability, or discoverability
    Enable fluid iterative development
        Fast incremental compilation and linking
        Runtime code compilation (code hot swapping), including data layout changes
    Familiar syntax and programming paradigms for intended audience (C-family)
    Good debuggability
        Emits standard debug information (PDB/DWARF)
        Emphasis on good execution speed of debug builds
    Well-suited to IDE-based workflow
        Compiler as a service
        Fast and reliable autocomplete results
        Fast and trustworthy refactorability (ie: renaming symbols)
    Leverage LLVM infrastructure
        Battle-hardened backend optimizer
        ThinLTO link time optimization support"

---

maerF0x0 1 day ago [-]

Another "tip" for helping your language grow. I highly recommend you look at how the golang bootstrapped itself and aim to replicate.

Golang provided:

and a number of other things. IMO a language is only as good as its community.

reply

---

"

That makes Clojure powerful.

Clojure is a well-designed language and, crucially, a newer language with many of the past sins of older languages confidently thrown out (ubiquitous mutability, OOP and typing boilerplate) and great new things added in. It’s both a conservatively scoped and a fresh take on a programming language. That is great in itself, but to get from well-thought out to powerful, you need to multiply with something else: reach. Clojure is designed to reach beyond a single runtime, while being able to interoperate with its many implementations that constantly spread into different runtimes. ... At the same time, the fact that Clojure’s primitives are easily serialisable led to using EDN (Extensible Data Notation) as the format of exchange between frontend and backend. This is similar to the relationship that JavaScript has with JSON. Since then, the Transit library has made that process even simpler by setting up a direct data pipe between ClojureScript and Clojure. ... "Clojure’s main Achilles’ heel right now, with respect to reach, is its relatively slow application start-up time. " -- [1] ...

Clojure is (slowly) eating the world. Honestly, we should be thankful that the language continues to grow despite being fairly unorthodox, compared with the current norm of typed functional languages and the massive sea of object-oriented code that’s out there. It’s also the odd child in the Lisp family with its functional, data-driven, and interop-heavy approach to programming.

And we can’t forget the terrible sin of putting starting parentheses in front of function names rather than after them! "

tensor 20 hours ago [-]

Startup time isn't what is holding the language back. In my opinion it's:

1. Tooling. You end up spending way more time getting your tooling set up than you should.

2. Difficulty in learning.

3. Lack of clear best practices and frameworks. The philosophy of composing libraries is extremely powerful, but to gain broader adoption you still need straightforward frameworks and opinionated best practices.

DannyB2 1 hour ago [-]

Yes, this. Tooling.

If you want to get started, Clojurists will insist you learn emacs. After all it's so great you must learn it.

Now I have nothing against emacs.

But I don't expect to have to invest significant effort to learn a specific editor in order to use a programming language. No other programming language has this requirement.

I was surprised at how many Clojure users took this as an attack upon emacs. It took some careful explaining to make the point that I have better things to invest my time into, such as building more code in Clojure, rather than learning yet another tool, that does something for which I have other tools already.

There were some great IDEs for Clojure. But they have all fallen into disrepair and being unmaintained.

reply

crispinb 19 hours ago [-]

As someone just playing with Clojure for curiosity, I agree that it's hard to find a way through the thicket of available tooling & library options.

---

 vemv 19 hours ago [-]

My personal perspective (after 8 years clojuring, half of that professionally) is that Clojure keeps progressing, with ever better tools and ideas for getting stuff done, optimally. I remain optimistic.

At the same time, it still fails at my "golden test": can I gather 5 random freelance engineers and get them to ship a project within a few months, wasting almost no billable time?

I can (and have) with Ruby, Typescript. People can learn those on the go, being productive on day 3 or so.

Clojure is still bit of a journey on itself, involving quite a lot of stuff to learn, and plenty of choices to make.

That's not necessarily a bad thing, nor Clojure's "fault", but it's a real cost/risk that is still there.

I do envision a future where the mainstream is ideologically closer to Clojure, and Clojure offers an (even) more polished set of tools than today. With that mutual proximity, Clojure could indeed eat the world.

reply

iLemming 14 hours ago [-]

> I can (and have) with Ruby, Typescript

That's precisely the reason why Clojure attracts seasoned, experienced, grumpy developers.

Over and over again, they have been perfecting the art of "building things fast." The problem with most programming languages today is that although they provide frameworks, command-line tools, code generators, etc. to build things quickly, the codebases very quickly become difficult to maintain and scale.

So many times, I have seen it myself. I quit jobs simply because I exhausted my mental and cognitive capacity to deal with the code that I wrote myself several months ago.

Clojure codebases, no matter how messy they get, feel very much like gardens: you can slowly and organically keep growing them.

Some may say: "You're talking about Haskell, or Scala, (or some other language)" And in my experience, although you can do pretty cool things in those languages, sometimes simple, dumbed-down solutions are better. Some may argue: "Now you're talking about Python, or Go...". I think Clojure has a perfect balance between "sophisticated, ultrasmart" PLs and Pls with the books titled "X for complete idiots."

I think Clojure is an ideal language for writing business apps, and more and more companies are starting to see that.

reply

vemv 11 hours ago [-]

I don't disagree with anything you've said, and might be myself one of those grumpy programmers :)

But that isn't at odds with the view that Clojure is more fit in some environments than others, depending on existing knowledge, deadlines, budget etc.

Maybe I'd introduce Clojure in ~6 out of 10 projects. I want that N to be 9.

reply

---

lbj 22 hours ago [-]

Those last few sentences make no sense. The argument is that uptake is stunted by the JVM's slow startup time, making it unsuitable for command-line utils or desktop apps. However that's easily fixed, which was never more hilariously demonstrated than when Rich Hickey did it a decade ago. A blog post went viral with a thunderous critique of Clojure, focused on this single point of slow startup. In the comment section was only one reply from Rich:

time java -client -jar clojure.jar helloworld.clj

> system time 0.0001s

The -client param makes all the difference :)

reply

rubyn00bie 21 hours ago [-]

For anyone else wondering, "holy crap, WTF does -client do?" here you go: https://stackoverflow.com/a/198651/931209

And to save folks a click:

> The Client VM compiler does not try to execute many of the more complex optimizations performed by the compiler in the Server VM, but in exchange, it requires less time to analyze and compile a piece of code. This means the Client VM can start up faster and requires a smaller memory footprint.

There are literally so many just insane things about the JVM and its variants, I totally get why some folks are like "JAVA OR DEATH." I just wish I had started learning it 20 years ago, like a lot of 'em, so it wasn't such a gigantic wall of pedantic knowledge to acquire.

reply

jimbo1qaz 18 hours ago [-]

> A 64-bit capable JDK currently ignores this option and instead uses the Java Hotspot Server VM.

reply

stingraycharles 12 hours ago [-]

This is an extremely important detail, as I expect pretty much everyone to run the 64-bit version of the JVM.

reply

mistersys 18 hours ago [-]

Uhm.. I want to give the benefit of the doubt, but frankly this seems deceptive, or at least out of date, unless I'm totally missing something.

Why is there no magical --client / --script option for the Clojure CLI to make startup faster, if this is such an easy solution? Also, you show system time, but the time to load dependencies is in user time, which is slow by the nature of Clojure.

Example:

    $ time java -client -classpath /usr/local/Cellar/clojure/1.10.1.492/libexec/clojure-tools-1.10.1.492.jar clojure.main hello.clj
    Hello, world
    java -client -classpath  clojure.main scripts/script.clj    1.39s user 0.08s system 187% cpu 0.788 total
    $ time java -classpath /usr/local/Cellar/clojure/1.10.1.492/libexec/clojure-tools-1.10.1.492.jar clojure.main hello.clj
    Hello, world
    java -classpath  clojure.main scripts/script.clj  1.37s user 0.09s system 180% cpu 0.811 total

(Excuse the details of my Clojure install location above)

User time appears slightly slower without the -client option, but probably is not statistically significant. Also, anyone who has worked on more than a toy project with Clojure knows that it's not just about the Hello World startup speed, every dependency you add has a significant impact on Clojure startup time.

reply

stingraycharles 12 hours ago [-]

As mentioned in other comments, the -client option is ignored on 64-bit JVMs, so it has no practical effect. Giving the benefit of the doubt, Rich Hickey must have been using a 32-bit JVM.

reply

didibus 16 hours ago [-]

For me, scripts and command line application startup time is mostly a solved problem. You can just use ClojureScript, Babashka or Joker for scripting and CLI apps that don't need to perform too much CPU bound work. And for CPU intensive CLI apps, you can now use Graal's SubstrateVM to have a native build which starts instantly.

reply

---

"since 2015 Rust made only 2 changes to how idiomatic code looks like (? for errors, and 2018 modules),"

"try!() -> ?, the addition of impl trait, and the dyn keyword are all features that have changed what it means to write idiomatic Rust. "

---

ux 3 days ago [-]

Is there any plan to deal with the locale fiasco at some point?

Some hints on what I'm referring to can be found here: https://github.com/mpv-player/mpv/commit/1e70e82baa9193f6f02...

Unrelated, but I also miss a binary constant notation (such as 0b10101)

reply

OnACoffeeBreak 3 days ago [-]

I know that we're not voting, but I miss a binary literal very much. I would also like a literal digit separator to improve readability. Verilog Hardware Description Language does that with an underscore [1]. For example, 0xad_beef to improve readability of a hex literal, and 0b011_1010 to improve readability of a binary literal.

1: http://verilog.renerta.com/mobile/source/vrg00020.htm

reply

---

https://trust-in-soft.com/wp-content/uploads/2017/01/vmcai.pdf

"Type-based alias analyses allow C compilers to infer thatmemory locations of distinct types do not alias"

---

PHP showing its maturity in release 7.4

https://lwn.net/SubscriberLink/818973/507f4b5e09ab9870/

" The evolution of OOP in PHP

PHP's object model has had some pretty rough patches in the past. When objects were first introduced in PHP 4, the implementation was essentially an array with function references and was horribly problematic. The fundamental problems of that implementation were thankfully addressed with the release of PHP 5, featuring an entirely rethought object implementation. In the versions that have come since, up through the current 7.4 releases, numerous interesting language features for object-oriented programming (OOP) have emerged.

While still a dynamically typed language, PHP does now support a robust typing mechanism including the typing of function parameters and return values along with the use of scalar types. New in PHP 7.4 is typing for object properties. Combining those features with namespaces, interfaces, traits, and iterables results in a well-rounded and modern object-oriented language. If you still think PHP lacks an appropriate object model, you might be pleasantly surprised taking a look again.

For example, consider this brief example encapsulating many of the now-available PHP OOP constructs:

    <?php
    namespace App\LWN;

    use App\LWN\Database;
    use App\LWN\Contracts\Collection;
    use App\LWN\Filter as MyFilter;
    use App\LWN\PrettyArray;

    class Example extends Database implements Collection
    {
        use PrettyArray;

        protected array $data;

        public function filter(?MyFilter $filter) : iterable
        {
            /* ... filter $this->data based on the $filter object's logic ... */
            return $filtered;
        }
    }

This snippet defines a class called Example, which extends from the base class Database, and that implements the Collection interface. The fully qualified class name — including namespace — would be App\LWN\Example. Four pieces are imported: the Database class, Collection interface, Filter type (aliased as MyFilter), and PrettyArray trait. In the body of the class, we use PrettyArray in order for the class to take advantage of the properties and methods defined in the trait.

The class also defines a single array property $data (scoped as protected) and adds a single class method filter() scoped for public access. The filter() method accepts a single optional parameter $filter that must either be an instance of App\LWN\Filter (aliased to MyFilter) or NULL. The method returns an iterable pseudo-type, which indicates that the method must either return a PHP array or any other object that implements the built-in PHP Traversable interface. This allows callers to know that the results can be traversed using other language constructs such as foreach.

Proper dependency management and third-party extensions ...

PHP gets proper FFI ...

Making security less obscure

As PHP has matured, so has its focus on security. PHP gets a bad reputation on security, due in no small part to various ill-conceived language features of the past, such as the notorious register_globals and safe_mode settings. Thankfully it has been ten years since register_globals and safe_mode were removed in PHP 5.4. Most PHP-related security issues don't come from the implementation of the language these days but rather from insecurely written code. To address this, a lot of effort has been put in to make handling complicated security issues as straightforward for PHP developers as possible.

This has taken the form of a couple of key technologies that have appeared in more recent versions of PHP. Starting with PHP 7.2, the popular libsodium cryptographic library has been included as part of the PHP tool set, providing high-quality cryptographic services to developers. For hashing critical data such as passwords, PHP also bundles the Argon2 password-hashing algorithm (the winner of a multi-year password-hashing competition).

Of course, there have been continuous improvements and additions to the built-in filtering and sanitization tools that were first included in PHP 5.2. These provide the facilities to consistently and reliably ensure external input is what developers expect it to be.

Through the widespread adoption of proper dependency management, key security-related technologies, and modern frameworks that implement all of this intelligently, PHP has made welcome and significant security-minded improvements over the years.

...

Last but not least is just-in-time (JIT) compilation to machine code for PHP, which is provided as an experimental feature in PHP 7.4 and is slated for official release in PHP 8. Described as "extremely simple" in its present form compared to implementations such as V8 and PyPy, PHP's forthcoming JIT implementation will be powered by DynASM (developed for the LuaJIT project). It will likely serve as the bedrock for future execution-speed improvements.

"

s1k3s 16 hours ago [-]

The amount of misinformation, false claims and unsupported statements in this thread is mindblowing given the quality I've become used to seeing on HN.

Here are some facts:

KerryJones 6 hours ago [-]

As someone who has programmed PHP professionally for 13+ years (and also JS and Python), I have never run into anyone who can make a solid argument against it since PHP 7 + Laravel.

-- Let me clarify, PHP is useful with context. For building web apps. Each language has its best-uses. Laravel has the strongest ecosystem and community out of any language/framework combo I've seen.

I can literally setup and deploy a laravel application on a horizontally scaled setup with CI & autodeploy within an hour -- while maintaining industry standards.

PHP had its problems -- historically, but so did most languages. Do you remember Javascript before Node/React? Or before jQuery/Prototype? It was _trash_.

The biggest "con" to using PHP is that it's "bad" for the resume, no one is excited about it, so when looking to hire for it, most developers don't feel inspired by it -- which is a serious consideration depending on your circumstance.

reply

dpacmittal 1 hour ago [-]

Javascript is still trash. What with the insane amount of configuration and build tools, extremely small standard library forcing you to use third party modules which may or may not have tens of security issues and the dependency hell it brings with it. I can go on and on.

reply

ChrisMarshallNY 3 hours ago [-]

PHP is the Web's C++.

It keeps getting declared dead, but then, keeps rising from the grave.

I wrote PHP for a long time (more than 20 years). Many, many thousands of lines of it. I got fairly good at it. I wrote my last big application in it about a year and a half ago (PHP 7.3, I think). It's a great application, but no one would want it, as it does the same thing lots of big SaaS does.

Except one single person wrote it, from soup to nuts (no dependencies), in about six months -part time- (it was sort of a "thesis" project for me, as I was re-learning my engineering discipline), and it's quite good quality. So I guess PHP is good for something. I just open-sourced the application, and more or less sent it out to stud.

https://riftvalleysoftware.com/work/open-source-projects/#ba...

I discuss my design methodology for the project here: https://medium.com/chrismarshallny/forensic-design-documenta...

That said, I never really liked the language, and am glad to have seen the back of it. I write full-time in Swift, nowadays, and would be thrilled to never write PHP ever again.

Despite all the hate, though, it's a perfectly good language; especially with all the stuff added in 7+.

reply

---

saagarjha 9 hours ago [-]

(For the iOS engineers reading along: please don't put network calls in +load, or __attribute__((constructor)), or a C++ static variable, or whatever other clever way you think you can get code execution before main.)

reply

red_admiral 46 minutes ago [-]

For the iOS developers reading along: please ban this behaviour in a future version?

reply

kccqzy 7 hours ago [-]

Not just network calls, but also the file system, or basically anything nontrivial.

C++ static variables can now be annotated with constinit to resolve issues like this: https://en.cppreference.com/w/cpp/language/constinit It basically asks the compiler to enforce that constructor calls can only do trivial things.

reply

---

outside1234 5 days ago [–]

VSCode is indistinguishable from native, so I'm not sure it's Electron that's at fault here.

reply

keb_ 5 days ago [–]

This gets said a lot, and granted VSCode is certainly one of the best performing Electron apps, but it definitely is not indistinguishable from native apps. Sublime, Notepad++, or TextAdept all fly compared to VSCode in terms of performance and RAM efficiency.

reply

fiddlerwoaroof 5 days ago [–]

On Mac, VSCode does a better job than many apps at emulating the Cocoa text input system but, like every Electron app, it misses some of the obscure corners of the Cocoa text input system that I use frequently.

If we’re going to use JavaScript to write native apps, I’d really like to see things like React Native take off: with a good set of components implemented, it would be a first class environment.

reply

AlchemistCamp 5 days ago [–]

No. I like VS Code but it's a hog.

I still use Macvim or even Sublime Text a lot for speed reasons, especially on large files.

reply

andai 5 days ago [–]

If your native apps are indistinguishable from VSCode, they're doing something wrong.

reply

voltagex_ 5 days ago [–]

Start Notepad++ or https://github.com/rxi/lite and then compare the startup speed with VSCode.

reply

drngdds 5 days ago [–]

I use VS Code daily (because it seems to be the only full-featured editor that Just Works(TM) with WSL), but it can get pretty sluggish, especially with the Vim plugin.

reply

---

zhenchaoli 3 hours ago [–]

> C++ getting serious about safety. Buffer overflows and bad pointers should have been eliminated decades ago. We've known how for a long time.

Would love some links to read over weekend. Thanks!

reply

Teknoman117 2 hours ago [–]

Things like:

To be honest, learning rust has made me a better c++ programmer as well. Having to really think about lifetimes and ownership from an API perspective has been really neat. It's not so much that I wasn't concerned about it before, more that I strive to be more expressive of these conditions in code.

reply

midnightclubbed 39 minutes ago [–]

Seconded that dipping a toe in to Rust has changed how I think about C++ and object ownership. Loose pointers and copy constructors now make me feel un-clean! Move ftw.

However I feel like most of the heavy lifting features came with C++11. Span, optional, variant and string_view are nice additions to the toolkit, but more as enhancements rather than the paradigm shift of C++11 (move, unique_ptr, lambdas et al.).

reply

jfkebwjsbx 1 hour ago [–]

string_view, span and ranges are not conducive to safety, quite the opposite.

reply

dnpp123 30 minutes ago [–]

Yeah, if anything, C++ is getting less serious about safety by piling features over features. Just write Rust instead.

reply

---

Naomarik 1 day ago [–]

Prior to Clojure I feel that I didn't really know how to do things well.

In the context of the domain I have the most experience with, web applications, I'd summarize the bulk of programming as defining data, data transformations and shoving data through APIs. Once you get the hang of it, Clojure makes it trivial to transform datastructures from A to B. This has enormously expanded the kinds of problems I can solve.

When Rich says "It was impossible to avoid the sinking feeling that I had been “doing it wrong” by using C++/Java/C# my whole career."

I feel somehow that this is the case for the majority of people but they don't realize it yet because their experience is the most popular trendy language or framework. I've seen many examples of libraries in different languages having enormous amounts of commits and hundreds of issues for problems that are trivial to solve in Clojure.

I was in the same boat, constantly grabbing for frameworks, and if one wasn't available that made my task simple, I would struggle trying to bend the frameworks or language I was using to come up with a solution.

I'm not a language geek and I don't write code outside my work for fun. I want to spend the least amount of time possible in front of the computer and sleep knowing my web applications won't topple over. Clojure has fit me so well that I don't think I would have accomplished what I have in other languages.

reply

natdempk 23 hours ago [–]

Could you elaborate on a problem that illustrates this property of Clojure? This sounds awesome, but I have a hard time understanding what you're getting at without knowledge of Clojure.

reply

drcode 23 hours ago [–]

Three examples of this:

1. core.async: So the guys who built golang did it because they had this cool idea for creating coroutines using channels. However, since it required very low-level functionality to be part of the core of the programming language, they thought they'd have to design a brand new language to implement their idea. However, shortly after golang was released, Rich and some other clojure folks implemented the same feature in clojure as a totally ordinary external library, proving that the core of clojure was general enough to support this. And it wasn't just a gimmick: I use core.async every day and think it is better than golang's implementation.

2. The Expression Problem: One of the core challenges in language design is designing it so that (simplifying a bit) you can transparently add support for new function methods to objects designed by a third party. Clojure makes this easy https://www.infoq.com/presentations/Clojure-Expression-Probl...

3. Lots of languages have attempted to allow you to write a single library that you can use both on the client and the server, having your code transpiled to different languages in each case. However, this type of feature is rarely used in production, because there are usually lots of headaches involved, with many limitations. However, for clojure developers it is perfectly normal (and usually expected) that all code is written in cljc files & so that it can be used on both the client (transpiled to javascript) and the server (transpiled to java). It is easy to do this, even for cooperatively multithreaded code.

reply

dkersten 10 hours ago [–]

I use core.async a lot too and really like it; however, I do feel that it suffers from being a macro instead of part of the language. For example, you cannot nest core.async functions inside other functions, as they must be directly visible to the go macro, which cannot see inside function calls. I’ve also had errors where the stack trace did not reference my code files at all, because it happened inside some core.async setup (IIRC it was a variable that was meant to be a channel but was nil, inside a sub or mix or something, i.e. I connected two core.async things together, it created go blocks internally, the exception happened inside that and therefore never referenced any of my source files, since the exception happened in an asynchronous context after my code ran). This was extremely painful to figure out.

Neither of these issues can be solved as long as core.async is implemented as a macro. However, it is extremely cool how far it was able to be taken without changing the language!

reply

mschaef 5 hours ago [–]

> I use core.async a lot too and really like it, however, I do feel that it suffers from being a macro instead of part of the language,

Agreed. The core abstraction of a go block is essentially a user-space thread. Because macros are limited to locally analyzing/rewriting code (and limitations of the JVM), go blocks are super limited in scope. As you point out, one call to a function in a go block, and you can't switch out of that go block during the function's execution. And functions calls are very common in Clojure code and can easily be hidden behind macros that obscure the details of the control flow.

There are 'core.async'y ways around all this, but the net effect is that 'core.async' imposes a very distinct style of writing (and one that tends to be contagious). Python's 'async/await' is contagious also, but because it's more tightly integrated into the runtime/compiler, it doesn't feel nearly as restrictive. (at least to me).

reply
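
(the Python async/await contagion mschaef mentions, made concrete)

    import asyncio

    async def fetch():             # returns a coroutine, not a value
        await asyncio.sleep(0)
        return 42

    async def caller():            # forced to be async because it awaits fetch()
        return await fetch()

    print(asyncio.run(caller()))   # 42 -- only the top level escapes async-land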

drcode 5 hours ago [–]

I don't think it makes logical sense for there to be a "core.async function". Instead, you have each function return a channel with its own go macro, then have the parent function consume data from the channels.

reply

dkersten 4 hours ago [–]

Sure, that’s a solution, but it’s necessary only because of a limitation that only exists because it’s a macro that needs to look into the code to rewrite it. It also adds extra (cognitive) overhead that such functions are always themselves asynchronous and their return values are channels - they can never just return a raw value - which limits what you can do with them or how you can call them or pass them elsewhere.

I mean, yes, usually it isn’t a problem at all, but it IS a limitation of core.async that wouldn’t need to exist if it had tighter integration into the language.

reply

enitihas 22 hours ago [–]

> and think it is better than golang's implementation.

How is core.async better than the golang implementation. In golang, all network I/O you do is automatically asynchronous, so your goroutines won't block a thread on network I/O. That is simply something core.async can't do.

reply

patrickthebold 21 hours ago [–]

I don't know if it does, but why couldn't it? Java has non-blocking IO.

reply

---

" 3.6 The Next Lisp

I think there will be a next Lisp. This Lisp must be carefully designed, using the principles for success we saw in worse-is-better.

There should be a simple, easily implementable kernel to the Lisp. That kernel should be both more than Scheme -- modules and macros -- and less than Scheme -- continuations remain an ugly stain on the otherwise clean manuscript of Scheme.

The kernel should emphasize implementational simplicity, but not at the expense of interface simplicity. Where one conflicts with the other, the capability should be left out of the kernel. One reason is so that the kernel can serve as an extension language for other systems, much as GNU Emacs uses a version of Lisp for defining Emacs macros.

Some aspects of the extreme dynamism of Common Lisp should be reexamined, or at least the tradeoffs reconsidered. For example, how often does a real program do this?

    (defun f ...)
    (dotimes (...)
      ...
      (setf (symbol-function 'f) #'(lambda ...))
      ...)

Implementations of the next Lisp should not be influenced by previous implementations to make this operation fast, especially at the expense of poor performance of all other function calls.

The language should be segmented into at least four layers:

    The kernel language, which is small and simple to implement. In all cases, the need for dynamic redefinition should be re-examined to determine that support at this level is necessary. I believe nothing in the kernel need be dynamically redefinable.
    A linguistic layer for fleshing out the language. This layer may have some implementational difficulties, and it will probably have dynamic aspects that are too expensive for the kernel but too important to leave out.
    A library. Most of what is in Common Lisp would be in this layer.
    Environmentally provided epilinguistic features. 

In the first layer I include conditionals, function calling, all primitive data structures, macros, single values, and very basic object-oriented support.

In the second layer I include multiple values and more elaborate object-oriented support. The second layer is for difficult programming constructs that are too important to leave to environments to provide, but which have sufficient semantic consequences to warrant precise definition. Some forms of redefinition capabilities might reside here.

In the third layer I include sequence functions, the elaborate IO functions, and anything else that is simply implemented in the first and possibly the second layers. These functions should be linkable.

In the fourth layer I include those capabilities that an environment can and should provide, but which must be standardized. A typical example is defmethod from CLOS. In CLOS, generic functions are made of methods, each method applicable to certain classes. The first layer has a definition form for a complete generic function -- that is, for a generic function along with all of its methods, defined in one place (which is how the layer 1 compiler wants to see it). There will also be means of associating a name with the generic function. However, while developing a system, classes will be defined in various places, and it makes sense to be able to see relevant (applicable) methods adjacent to these classes. defmethod is the construct to define methods, and defmethod forms can be placed anywhere amongst other definitional forms.

But methods are relevant to each class on which the method is specialized, and also to each subclass of those classes. So, where should the unique defmethod form be placed? The environment should allow the programmer to see the method definition in any or all of these places, while the real definition should be in some particular place. That place might as well be in the single generic function definition form, and it is up to the environment to show the defmethod equivalent near relevant classes when required, and to accept as input the source in the form of a defmethod (which it then places in the generic function definition).

We want to standardize the defmethod form, but it is a linguistic feature provided by the environment. Similarly, many uses of elaborate lambda-list syntax, such as keyword arguments, are examples of linguistic support that the environment can provide possibly by using color or other adjuncts to the text.

In fact, the area of function-function interfaces should be re-examined to see what sorts of argument naming schemes are needed and in which layer they need to be placed.

Finally, note that it might be that every layer 2 capability could be provided in a layer 1 implementation by an environment. "

-- https://www.dreamsongs.com/WIB.html

---

http://turboforth.net/resources/locals.html

---

onetom 1 day ago [–]

I'm surprised that no one has mentioned https://flashforth.com/

There is no intriguing backstory for it, like for CollapseOS, but it's a ~6 kiloword, practical 4th environment for Microchip PIC microcontrollers, which are a lot simpler than the Z80, btw... The source code is trivial to understand too. My father is still using it daily to replace Caterpillar machine electronics or build custom instruments for biological research projects.

We started with Mary(Forth) back then, when the first, very constrained PIC models came out, with an 8-deep stack and ~200 bytes of RAM. Later we used the https://rfc1149.net/devel/picforth.html compiler for those, which doesn't provide an interactive environment.

I made a MIDI "flute" with that, for example, fabricated by sawing a row of keys out of a keyboard; it used a pen housing as a blowpipe and a bent razor with a photo-gate as the blow-pressure detector...

There are more minimal Forth OSes, which might be more accessible than a Z80-based one.

I would think those are more convenient for learning how you can have video, keyboard and disk I/O, an interactive REPL and a compiler in less than 10 KB.

I remember I played a lot with https://wiki.c2.com/?EnthForth

But if you really want to see something mind-bending, then you should study Moore's ColorForth! I found it completely unusable, BUT I've learnt an immense amount of stuff from it: https://colorforth.github.io/

There are more usable variants of it, btw. Also worth looking into Low Fat computing: http://www.ultratechnology.com/lowfat.htm I think it's still relevant today.

reply

---

" The Z80 asm version of Collapse OS self-hosts on a RC2014 with a 5K shell on ROM, a 5K assembler binary loaded in RAM from SD card (but that could be in ROM, that's why I count it as ROM in my project's feature highlights) and 8K of RAM. That is, it can assemble itself from source within those resources.

The biggest piece of software to assemble is the assembler itself. It's a reasonably well-featured assembler that supports forward labels, includes, many useful directives. The code implementing those features requires those aforementioned resources to assemble.

... a Forth Collapse OS achieves self-hosting with about as many resources as its Z80 counterpart...

Forth is, to my knowledge, the most compact language allowing high-level constructs. It is so compact that Collapse OS' Forth implementation achieves self-hosting with about the same amount of resources as its assembler counterpart!

... If I wanted to re-implement that assembler feature-for-feature in Forth, it would probably require much more resources to build. Even though higher-level words are more compact, the base of the pyramid to get there couldn't compete with the straight assembler version. It was under this reasoning that I first dismissed Forth.

So, again, what makes Forth more compact than assembler? Simplicity. The particularity of Forth is that it begins "walking by itself", that is, implementing its own words from its base set, very, very early. This means that only a tiny part of it needs to be assembled into native code. This tiny part of native code requires much less tooling, and thus an assembler with much less features. This assembler requires less RAM.

What is more compact than something that doesn't exist? Even Z80 assembler can't beat the void. "

---

https://aiju.de/plan_9/plang

" The P programming language

Some ideas for a new programming language for Plan 9.

Desiderata:

    Straightforward compilation to C
    Reasonably interoperable with C
    Type inference
    Parametric types
    Namespaces
    Tuple types (shorthand for structs)
    Tagged unions
    Pattern matching
    Closures (compiles to the classic void * aux argument; see the sketch after this quote); probably a bit clumsy

Questions:

    How much of this could be implemented by a smart macro system? Having a simple meta language to implement these things could simplify the compiler.
    Extend C's syntax, supplement it with a new one or replace it altogether?"
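
The "void * aux argument" bullet above names a standard closure-conversion trick. Here is a hedged sketch of that encoding, transliterated into Lua tables (make_adder and the field names are invented for the illustration; in C, fn would be a function pointer and aux a void *):

  -- a closure becomes a plain function plus an explicit environment
  -- record; the compiler passes the pair around together
  local function make_adder(n)
    return {
      fn  = function(aux, x) return aux.n + x end,  -- the "code" half
      aux = { n = n },                              -- the captured environment
    }
  end

  local add3 = make_adder(3)
  print(add3.fn(add3.aux, 4))   --> 7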

---

"

Notes on Languages

    I really like static typing because it eliminates a common source of errors.
    I really like pointers because they make the distinction between a value and a reference explicit.
    I'm fine with both manual memory management and garbage collection. Manual memory management adds awkwardness (particularly to code with a functional style). Garbage collection adds uncertainty.
    I like having goto. It would drive me insane not having break, return or continue (see also "structured programming").
    I'm not a fan of automatic exception handling (like in C++). It usually leads to anything unusual being flagged as an error. It's troublesome with concurrent programs.
    I like having a good way to do threading, in particular having channels for communication.
    I dislike OOP. While it can mean vastly different things to different people, it usually boils down to unnecessary abstraction and complexity.

Notes on particular languages:

    I really like C, despite its weak spots. The most annoying problem is the lack of a good standard library. Plan 9 fixes that.
        I like the syntax, except for the precedence issue with the & operator.
        Type declarations can be crazy, but it's usually not a problem in practice and can always be worked around using typedef.
    I like Go, although I feel it's not much of an improvement over Plan 9 C. It has much nicer interfaces than C on other systems though.
        I don't like the syntax. I hate forced braces.
        I don't like having 1 MB binaries for small programs.
    I hate C++ (although I've used it a lot in the past). The language has too many crazy complex features.
    I hate Java and C# for similar reasons. Add to that the forced use of object orientation and a completely insane standard library.
    I like writing shell scripts with rc, but I don't like the bourne shell.
    Python is kind of okay for testing algorithms. Awkward to write larger programs in.
    Perl is a mess.
    LISP is fun although it gets messy quickly.
    I admire FORTH for its simplicity but I can't wrap my head around all the stack dancing."

-- https://aiju.de/misc/languages

---

 ufo 21 hours ago [–]

This release brings many performance improvements, including a new garbage collector.

In terms of language features, the biggest new feature is "to be closed" variables. They can be used to ensure that cleanup routines are always called, similarly to RAII in C++ or try-finally blocks.

reply

anonymoushn 20 hours ago [–]

<close> is equivalent to golang's defer (either can be implemented in terms of the other) except at the block level. imo calling it RAII mostly leads to confusion. One mailing list user recently asked a bunch of questions around using <close> for RAII. They expected RAII to work even for resources that are never bound to a particular type of local variable, for example if one writes do_thing(io.open("foo.txt")), where do_thing is not responsible for closing files because sometimes it is used with files that will continue to be used. They eventually concluded that the closest thing to RAII available was __gc.
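
A minimal Lua 5.4 sketch of that pitfall (do_thing here stands in for the hypothetical consumer from the mailing-list question):

  -- <close> only attaches to a local variable, so a file passed as a
  -- bare temporary is never closed by <close>; it waits for __gc
  local function do_thing(f)
    print(f:read("*l"))             -- uses the file, does not close it
  end

  do_thing(io.open("foo.txt"))      -- closed only whenever __gc runs

  -- the pattern that does work: bind to a to-be-closed local first
  do
    local f <close> = assert(io.open("foo.txt"))
    do_thing(f)
  end                               -- f's __close metamethod runs here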

Some users presented separate complaints about resources kept in to-be-closed locals in coroutines not necessarily being closed. You can do something to try to solve this, like setting the __gc and __close metamethods of coroutines to call coroutine.close. A "real" solution would look more like dynamic-wind. Notably golang doesn't attempt to solve this one, so maybe not solving it is fine.

reply

pansa2 19 hours ago [–]

> <close> is equivalent to golang's defer [...] except at the block level.

Does that mean it’s the same as C#'s `using` and Python's `with`?

reply

anonymoushn 18 hours ago [–]

I wasn't deeply familiar with python's `with`, so I looked it up[0]

<close> differs from `with` in at least the sense that it doesn't have any equivalent to __enter__ and doesn't create its own block. It creates a local variable whose __close metamethod will be called when it goes out of scope. Since Lua has lexical scope at the block level rather than the function level, this works similarly to the way Python calls __exit__.

These snippets of Python and Lua are roughly equivalent. They both open a file, read one line, and close it, or do nothing if there was some error opening the file.

  try:
    with open("foo.txt") as f:
      print(f.readline())
    # f is now closed.
  except:
    pass

  local f,err = io.open("foo.txt")
  if not err then
    local f <close> = f
    print(f:read("*l"))
  end
  -- f is now closed.

C#'s `using`[1] seems much closer, except that it handles nulls by not trying to close them, and Lua's <close> does not include any such handling.

[0]: https://docs.python.org/3/reference/compound_stmts.html#the-...

[1]: https://docs.microsoft.com/en-us/dotnet/csharp/language-refe...

reply

frabert 16 hours ago [–]

The reference manual https://www.lua.org/manual/5.4/manual.html#3.3.8 seems to specify that "nil" and "false" values are ignored, so it behaves similarly to C#'s using.

reply

anonymoushn 16 hours ago [–]

Oh, that's great!

reply

pull_my_finger 15 hours ago [–]

Do file descriptors come with a default __close metamethod or do you have to create your own?

reply

anonymoushn 14 hours ago [–]

They come with one.

reply

---

There are many lightweight, embeddable scripting languages available, yet Lua remains dominant. This is despite the language being "quirky" at best (and "dreaded" at worst [0]).

Is this just inertia, or is there a technical reason for new projects to embed Lua rather than a different language?

[0] https://insights.stackoverflow.com/survey/2018#technology-_-...

reply

mhd 13 hours ago [–]

Tcl and various Scheme dialects were popular, but they have a pretty unusual syntax, at least when viewed by a typical C/Algol-ish programmer. Lua just came at the right time and place and was more "usual" than that, also with a pretty small package and a permissive license.

And I would say that this argument still holds true. For extending a game or application, I'd feel pretty bad forcing people to use JS, for example (unless it's EE, where the gloves are off). Wren might be an option, but it's certainly a lot less proven.

Also, there's LuaJIT, if you really need some performance in a small package.

Stack Overflow might not be the best source for end-user scripting. (Never mind that I have my doubts about any statistic where Rust ends up winning the popularity contest.)

reply

TazeTSchnitzel 14 hours ago [–]

What better languages are there which are friendly to embedding? A significant part of Lua's popularity, AIUI, comes from it being easy to embed.

reply

rsecora 2 hours ago [–]

ChaiScript [1], for C++.

Almost trivial to add functions or calls in both directions.

There are two drawbacks. One is performance: some quirks with returns and type conversions.

The other is maintenance [2]: the project accepts pull requests, but there has been no recent activity on the language or its performance.

[1] http://chaiscript.com/

[2] https://github.com/ChaiScript/ChaiScript/commits?author=RobL...

reply

pansa2 14 hours ago [–]

Squirrel [0] seems to have been used in a few places [1], but not many compared to Lua. There are also embeddable JavaScript implementations such as Duktape [2], and a lightweight Ruby in mruby [3].

[0] http://www.squirrel-lang.org/

[1] https://en.wikipedia.org/wiki/Squirrel_(programming_language...

[2] https://duktape.org/

[3] https://github.com/mruby/mruby

reply

brobdingnagians 13 hours ago [–]

I love AngelScript for that purpose (w/ C++). Very similar syntax to C++ (but w/ GC); very easy to expose classes, functions, and variables. All it takes is a quick compile & link. It also has JIT compilation if you need it, and the ability to save/load the intermediate binary code for faster loading.

reply

bjoli 14 hours ago [–]

Guile Scheme, if you have users that don't have the knee-jerk "ewww, parentheses!" reaction. The threading model is vastly different from Lua's, but compared to regular Lua you get much faster execution.

reply

srean 14 hours ago [–]

Guile is great and I enjoy how active it has been. However, it is not that small an embeddable Lisp. It's embeddable, of course, but not small by any means.

reply

bjoli 4 hours ago [–]

Once you start wanting to run complex things over multiple interpreters in Lua (say, to use multiple threads) you might as well just use Guile.

Sure, we are talking 2.5 MB of library and about the same amount of object code. Quite a bit larger than Lua. But that also gives you object code for working with texinfo (and quite a lot of completely unused, undocumented modules). I wonder how much could be stripped without anyone actually noticing.

reply

salamanderman 11 hours ago [–]

Wren https://wren.io/

reply

zeveb 11 hours ago [–]

Tcl leaps to mind. It is very easy to embed, too.

reply

---

" 2.2 Which shortcomings of Gopher does Gemini overcome?

Gemini allows for:

    Unambiguous use of arbitrary non-ASCII character sets.
    Identifying binary content using MIME types instead of a small set of badly outdated item types.
    Clearly distinguishing successful transactions from failed ones.
    Linking to non-gopher resources via URLs without ugly hacks.
    Redirects to prevent broken links when content moves or is rearranged.
    Domain-based virtual hosting.

Text in Gemini documents is wrapped by the client to fit the device's viewport, rather than being "hard wrapped" at ~80 characters with newline characters. This means content displays equally well on phones, tablets, laptops and desktops.

Gemini does away with Gopher's strict directory / text dichotomy and lets you insert links in prose.

Gemini mandates the use of TLS encryption.

...

 "The Gemini experience" is roughly equivalent to HTTP where the only request header is "Host" and the only response header is "Content-type" and HTML where the only tags are <p>, <pre>, <a>, <h1> through <h3>, <ul> and <li> and <blockquote>" -- https://gemini.circumlunar.space/docs/faq.html

https://gemini.circumlunar.space/docs/specification.html
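
The protocol really is that small. A hedged sketch of a complete client in Lua, assuming the luasocket and luasec libraries are installed (the TLS parameters may need adjusting per luasec version):

  local socket = require("socket")   -- luasocket
  local ssl    = require("ssl")      -- luasec

  local host = "gemini.circumlunar.space"
  local sock = assert(socket.connect(host, 1965))   -- Gemini's standard port
  local conn = assert(ssl.wrap(sock, { mode = "client", protocol = "any", verify = "none" }))
  assert(conn:dohandshake())

  -- the entire request: one URL, CRLF-terminated
  conn:send("gemini://" .. host .. "/\r\n")

  -- the entire response header: "<status> <meta>", e.g. "20 text/gemini"
  print(conn:receive("*l"))

  -- the body follows until the server closes the connection
  local body, _, partial = conn:receive("*a")
  print(body or partial)
  conn:close()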

---

https://en.wikipedia.org/wiki/Hayes_command_set

---

I don't know Linus's reasons specifically, but our presentation at the Linux Security Summit last year laid out why we think that Linus's past objections to C++ don't apply to Rust. See slides 19-21 of https://ldpreload.com/p/kernel-modules-in-rust-lssna2019.pdf .

His previous objections were:

    In fact, in Linux we did try C++ once already, back in 1992.
    It sucks. Trust me - writing kernel code in C++ is a BLOODY STUPID IDEA.
    The fact is, C++ compilers are not trustworthy. They were even worse in 
    1992, but some fundamental facts haven't changed:

In brief, Rust does not rely on C++-style exception handling/unwinding, it does not do memory allocations behind your back, and its OO model is closer to the existing kernel OO implementation in C than it is to C++'s model. (There are other good safe languages besides Rust that I personally like in general but do not satisfy these constraints for this particular use case.)

reply

---

I've actually played with writing kernel code in Rust - for Windows/NT, however - and it's quite weird to be able to use such high-level type constructs in code where you typically manually chase pointers and wouldn't be surprised to see statically allocated global variables used to monitor reference counts.

reply


pjmlp 9 hours ago [–]

Why is it weird? It has been done before in Ada, Object Pascal, C++, Mesa/Cedar, Modula-2, Lisp, Oberon, Sing#, System C#, Objective-C,....

reply

---

favs

https://www.quora.com/Which-is-the-most-lovely-programming-language

Which is the most lovely programming language? 30 Answers

" Andy Gryc , Have been playing guitar a good long while Answered December 14, 2014 · Author has 152 answers and 270.1K answer views Originally Answered: According to you, which is the most lovely programming language ?

This is an interesting question, because I think a programming language that's "lovely" would have some characteristics that most (but not all) programmers would agree on, but will also have an intangible attraction or evoke some deep seated personal bias.

In my mind, the characteristics that lead to a lovely language would be things like orthogonality, simplicity, consistency, capability and readability. Things like Perl and Lisp fail miserably IMHO. Perl because it really is a write-only language, and Lisp because I think it's just plain ugly with those hordes of parentheses. (((I said there was a personal bias, didn't I?))) While one could argue that C fits most of the criteria, it's not particularly beautiful to me. Functional, yes, but beautiful, no. And C++ most definitely is a fail due to its layers of complexity. I use it a lot, and find it a tremendously capable language, but it's too baroque to be a lovely one.

Of the languages I actually use, Python probably comes closest. It's quite cleanly and elegantly designed overall. It does have some rather esoteric advanced features that erode its simplicity and readability, but it's still a good candidate. While Forth meets most of my stated criteria (simple, orthogonal, consistent), here the intangible comes in. Forth programming is just not as much fun. For me anyway, it has a sort of back-of-the-mind "tightrope" quality to it that doesn't put one at ease.

Of languages that I haven't done any practical programming in, Julia would have to be my vote. I like almost everything about it, especially how it handles operator overloading. As with anything, I suspect if I were to use it for some actual development I'd see some of the warts, so maybe it seems to be the most lovely because I only know it from afar.

Jon Harrop, programming since 1981. Updated July 29, 2020.

Core ML is my favorite:

    Eager evaluation: simple and predictable.
    Type system: unit, int8/16/32/64, float32/64, unboxed tuples, unboxed algebraic data types and function types that always take one value of one type and return one value of one type.
    Guaranteed elimination of all tail calls so an unbounded depth of calls in tail position can be executed using only finite space. Tail calls subsume all looping constructs.
    Good built-in collections: arrays, sets and dictionaries.
    Type checker: Damas-Milner type inference.
    Modules: first-order modules with signatures are fine. No need for a higher-order module system.
    Special cases for some type-based dispatch: just equality, comparison, hashing, pretty printing, parsing and serialization. Similar to F# (but F# extends this to type conversion functions, trigonometric functions and doesn’t cover hashing, pretty printing, parsing or serialization which are, I think, more important in practice).
    Simple, predictable and efficient compilation strategy: much faster than Lisp and much more predictable than Haskell.

I’m in the process of implementing an interpreter for this language…

Some novel ideas that might be of interest:

    Use dot notation (foo.bar) as shorthand for (typeof<foo>.bar foo) so functional languages can get the tooling benefits of OOP without the complexity of dynamic dispatch, inheritance, diamond problems and so on, e.g. if you have a list xs then the expression “xs.map succ” should be equivalent to “List.map succ xs”. (See the Lua sketch after this quote.)
    Unboxed tuples and algebraic data types undermine the generational hypothesis, making generational GCs less desirable. I’d like to experiment with non-moving Immix-like mark region GCs or perhaps a GC design that uses reference counting for references between types that are not mutually-recursive and tracing GC for mutually recursive types.
    Aggressively hash cons all values to maximize sharing. Use history-independent collections like PATRICIA trees for Set and Map. Use a Concestor dictionary (see "A simple Concestor dictionary" and "An optimized Concestor dictionary").
    Another strategy of interest might be to bake the concept of unrolled purely functional collections into the type system so that, for example, a recursive data type can specify a maximum level of recursion. This would aid the optimiser in choosing the most efficient representation for values of a given type.


...

Tikhon Jelvis, studied programming languages and did research on program synthesis. Answered December 15, 2014. Originally Answered: According to you, which is the most lovely programming language?

"Lovely" is an interesting adjective for a programming language. For me, it would have to be something small, almost dainty. At the same time, it would also have to be expressive: it would have to grow

and let me express what I want, how I want.

With these particular criteria, I think the most lovely language would have to be Scheme. It's definitely small and elegant: even formally specified, its standard is 50 pages, which covers prose, its grammar and, most impressively, its operational semantics. It's far smaller than the completely informal standards of other languages!

At the same time, it's one of the most expressive languages I know. Largely thanks to its macro system, it can bend to be whatever you want. It really grows. Not surprising considering Guy Steele, who gave the wonderful talk about growing a language I linked above, is one of Scheme's main creators.

Now, this doesn't mean that Scheme is my favorite language, but it's definitely the loveliest of choices.

Eric Pennington, Software Engineer. Answered February 1, 2013. Originally Answered: According to you, which is the most beautiful programming language and why?

Scheme. It's compact. It's powerful. It's LISP. If you want a true appreciation of its beauty, I suggest reading The Structure and Interpretation of Computer Programs (SICP). If you read the book and do the exercises, when you see how much you've created with such a tiny little language you'll gain a true appreciation of its beauty and simplicity.

Resources: Welcome to the SICP Web Site (full text online!); Scheme (programming language) (Wikipedia, for a quick overview); The Scheme Programming Language (more info and some implementations).

Tao Xu, worked at Facebook. Answered May 13, 2014. Originally Answered: According to you, which is the most beautiful programming language and why?

F#. The code is short/concise and very high level (the implementation reads like a spec). It looks appealing too (more so than Python).

It is one of the very few things that I missed after leaving Microsoft for Silicon Valley.

TR Livesey, Scientist and Activist. Answered February 1, 2013. Originally Answered: According to you, which is the most beautiful programming language and why?

LISP and its variants, definitely. All procedural languages look like hacks. Functional languages make algorithms look like mathematical expressions.

Hitesh Kewalramani, works at Microsoft. Updated April 26, 2014. Originally Answered: Which programming language is the most enjoyable?

In my experience, the joy of learning Haskell and appreciating its beauty is unmatched. This is after having learnt the regular languages like Python, C++, Java, JavaScript, etc. "
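
Incidentally, Jon Harrop's dot-notation idea above ("xs.map succ" as shorthand for "List.map succ xs") is close to what Lua's method-call sugar already does. A minimal sketch (List, xs and succ are names invented for the illustration):

  -- xs:map(f) desugars to xs.map(xs, f); pointing the table's __index
  -- at a module of plain functions gives dot-notation dispatch with no
  -- inheritance or dynamic-dispatch machinery involved
  local List = {}
  List.__index = List

  function List.new(t) return setmetatable(t, List) end

  function List.map(xs, f)
    local out = List.new({})
    for i, v in ipairs(xs) do out[i] = f(v) end
    return out
  end

  local function succ(n) return n + 1 end

  local xs = List.new({ 1, 2, 3 })
  local ys = xs:map(succ)        -- the same call as List.map(xs, succ)
  print(ys[1], ys[2], ys[3])     --> 2  3  4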

---

geofft 9 months ago

parent favorite on: Unfork()

Emacs somewhat famously uses "unexec" in its build process: you build a skeletal Emacs in C (mostly the Emacs Lisp implementation), run it, load and compile and process a bunch of Lisp that implements the editor itself, and dump the resulting process memory back out to disk. The result of this eldritch process is the final emacs binary. When you exec emacs, you get an environment that consists of the editor code ready to go.

jws 9 months ago [–]

It comes from a time of machines executing instructions thousands of times slower than they do now. Literally – thousands. Memory access was about as fast as an instruction execution, so the amount of compute you can justify per unit of data was hundreds of times less than it is now. They did however have virtual memory systems with on demand page fetching.

Also, that machine was being time shared with a dozen or more users.

Launching emacs or TeX on this machine might take tens of seconds without access to unexec(), but only 3 seconds for the freeze-dried version.

unexec() was easier at the time. There were no shared libraries, no address space layout randomization. One memory region grew up from the bottom, one down from the top. There was no mmap() jamming mysterious stuff in the middle. Just copy the bottom, copy the top, do magic to adjust the stack for your unexec() call, and write the thing out as an executable.

(Yeah, I excised unexec() from BibTeX back in the ‘80s to port it to a 68k Mac for a coworker, then later implemented unexec() for a Motorola 88k based multilevel secure SysV system in the early ‘90s because launching emacs was driving me insane. I prefer our shiny new future of stupidly fast computers.)

---

kps 4 hours ago [–]

\ was invented so that ∧ and ∨ could be written /\ and \/. In retrospect, a mistake.

Original ASCII (1963) had ↑, but it was converted to ^ in 1967 so that it could double as a circumflex.

(Also ←, which was a good assignment operator; it's too bad _ didn't replace \ instead.)

reply

-- https://news.ycombinator.com/item?id=24120839

---

should also search for the words 'gotchas' and 'footguns' for existing languages to see what problems people have identified with them

---

if you group things together you can remember more than 7±2 things (this is how people do memory 'tricks').

so the 7±2 limit is really about the branching factor at each memory-graph 'node'

---

Did Small-C have break and continue?

Yes, it did.

---

"from a theoretical standpoint I find the stack abstraction more beautiful than registers, the same way an ordered set outshines a hashed one; because they're analog, full spectrum, without gaps and extra complexity to cover them." -- [2]

---

regarding Lisp and Forth as simple languages (with an eye towards implementation layers, e.g. OVM), i think i'm beginning to understand.

Both Lisp and Forth have dramatic syntactic simplifications compared to most languages, e.g. C. In contrast to what you might think (that syntax is just window dressing), these syntactic simplifications harmonize with simple semantic designs (although the semantics of Lisp and Forth are very different).

Lisp's syntactic simplification is just to represent the AST directly. Each AST node is represented as a list, which may contain other lists. These 'lists' are called S-expressions, and really they are trees. The 'type' of each AST node is determined by its head; but the head of the AST node may itself be another child AST node, which must be evaluated first before you can evaluate the parent AST node. The modus operandi of running Lisp is to recursively evaluate these AST nodes. Typically in Lisp you evaluate all of the children before evaluating the parent, but there are a few 'special forms' which do things like short-circuit boolean OR and AND, and short-circuit conditionals.
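
A minimal sketch of this evaluation model, written in Lua for concreteness (the table-based node encoding and the names eval/env are mine, not any particular Lisp's):

  -- an AST node is a table whose first element is the head; atoms are
  -- numbers (self-evaluating) or strings (variable names). 'if' is a
  -- special form: it must not evaluate both branches, so it is handled
  -- before the children are evaluated
  local function eval(node, env)
    if type(node) == "number" then return node end
    if type(node) == "string" then return env[node] end  -- variable lookup
    if node[1] == "if" then                              -- special form
      if eval(node[2], env) then
        return eval(node[3], env)
      else
        return eval(node[4], env)
      end
    end
    -- ordinary form: evaluate head and children, then apply
    local fn = eval(node[1], env)
    local args = {}
    for i = 2, #node do args[i - 1] = eval(node[i], env) end
    return fn(table.unpack(args))
  end

  local env = { ["+"] = function(a, b) return a + b end }
  print(eval({ "+", 1, { "if", 1, 2, 3 } }, env))   -- (+ 1 (if 1 2 3)) => 3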

Lisp has some facilities that come in handy in this sort of setup; for example, quote (to treat an AST node as data instead of evaluating it), and eval and apply (to invoke the evaluator from within the language).

Lisp's semantics are simple but somewhat high level; for example, memory management is abstracted. Continuations are often provided. Tail recursion optimization is often supported. Closures are often supported. And of course, like most languages, functions and function argument passing and local variables are provided, and the call stack is abstracted.

Lisp (and its variant, Scheme; note that Racket is a sub-variant of Scheme) famously has macros, which are more accessible than usual because the data structure used to represent code, s-expressions, is exactly the form in which that same code is written in the source program.

Forth's syntactic simplification is even more dramatic than Lisp's; there is no AST at all (or you could think of it as a trivial one); you just execute tokens one at a time (especially in e.g. Retro Forth's prefix-using syntax; i get the sense that original Forth has to look ahead sometimes, but i'm not sure; i'm also not sure that Retro Forth's doesn't). Forth does abstract over function calling but does not abstract over argument passing or provide local variables, instead just exposing the stack (well actually, Forth has TWO stacks, one for data and one for return addresses) (note that i've heard that in some Forth implementations the underlying stack is not necessarily 'exposed'; rather, a virtual machine with an exposed stack is emulated). Forth doesn't do memory management for you (much; it does maintain a pointer, 'here', to the next available free spot in the heap, which is just incremented when you use it), and i don't think it offers continuations, tail recursion optimization, or closures.

Forth has been described as 'extensible assembly'; the idea being that you can 'define new words', which are like subroutines but can also be thought of as custom assembly instructions. A canonical implementation would compile each custom word to a jump to the code defining that word. People call Forth a 'concatenative language' because juxtaposition means function composition: if you say WORD1 WORD2, the program jumps to the code for WORD1, then, after that is done, jumps to the code for WORD2 (in contrast to e.g. LISP, where juxtaposition means giving arguments to a function; although there do exist Lisp forms like 'progn' which just run their arguments one by one).
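
A toy model of that execution style, again sketched in Lua (the dictionary encoding here is invented for illustration; a real Forth compiles words to threaded code rather than re-parsing strings):

  -- words live in a dictionary; primitives are functions, colon
  -- definitions are recorded bodies. running "WORD1 WORD2" just runs
  -- each word in sequence, all of them sharing one data stack
  local stack = {}
  local function push(v) stack[#stack + 1] = v end
  local function pop() return table.remove(stack) end

  local dict = {}
  dict["+"]   = function() push(pop() + pop()) end
  dict["dup"] = function() local a = pop(); push(a); push(a) end
  dict["."]   = function() print(pop()) end

  local function run(tokens)
    for token in tokens:gmatch("%S+") do
      local n = tonumber(token)
      if n then push(n)                          -- literals go on the stack
      elseif type(dict[token]) == "table" then
        run(dict[token].body)                    -- colon definition
      else
        dict[token]()                            -- primitive
      end
    end
  end

  dict["double"] = { body = "dup +" }   -- the equivalent of  : double dup + ;
  run("21 double .")                    --> 42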

Forth has some sort of metaprogramming, apparently involving IMMEDIATE words (which are executed at compile time) and the defining words "CREATE" and "DOES>"; with these constructs included, the language as a whole is apparently similar in power to Lisp macros, as described in https://www.forth.com/starting-forth/11-forth-compiler-defining-words/ . I'm told that these constructs can affect the operation of the compiler (if the implementation is compiling rather than interpreting), so Forth can be thought of as having 'an extensible compiler' in a very simple, powerful way. I'm told that this metaprogramming can even create DSLs which use new syntax in the Forth source code; so if i understand correctly, it's not quite correct to say that Forth doesn't have a grammar or has a flat one; rather, its grammar starts out flat but is mutable (and i bet it isn't even guaranteed to remain context-free).
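
Continuing the toy above, a hedged sketch of the compile/interpret distinction that IMMEDIATE hooks into (the encodings are again invented; real Forths compile execution tokens rather than source strings, and CREATE/DOES> go further than this):

  -- ":" starts compiling: ordinary words are recorded rather than run;
  -- a word flagged immediate runs even while compiling, which is the
  -- hook that lets Forth code extend its own compiler
  local stack, dict, flags = {}, {}, {}
  dict["1+"]  = function() stack[#stack] = stack[#stack] + 1 end
  dict["."]   = function() print(table.remove(stack)) end
  dict["now"] = function() print("ran at compile time") end
  flags["now"] = "immediate"

  local function run(src)
    local next_token = src:gmatch("%S+")
    local name, body
    for token in next_token do
      if token == ":" then
        name, body = next_token(), {}          -- start a definition
      elseif token == ";" then
        local compiled = table.concat(body, " ")
        dict[name] = function() run(compiled) end
        name, body = nil, nil
      elseif body and flags[token] ~= "immediate" then
        body[#body + 1] = token                -- compiling: record the word
      elseif tonumber(token) then
        stack[#stack + 1] = tonumber(token)
      else
        dict[token]()                          -- interpreting (or immediate)
      end
    end
  end

  run(": incr2 1+ now 1+ ; 40 incr2 .")
  -- prints "ran at compile time" while compiling incr2, then prints 42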

So, in both syntax and semantics, Forth does seem even lower-level than Lisp. It doesn't support nesting in its AST (i think?); its 'syntax' is just to process each token (word) as it is encountered. It doesn't abstract function argument passing and local variables for you (although some Forths do have some facilities to help with that), it doesn't manage your memory (much), and i don't think it offers some of Lisp's advanced stuff like closures or continuations (although i think you could in theory add any of these using its metaprogramming facility).

Both Lisp and Forth famously have REPLs.

In terms of applications, Forth has very small (in code size) implementations and so has found a niche in embedded hardware.

---

so, the above makes me wonder if we shouldn't have a Forth-like layer somewhere. It sounds like, due to the simpler syntax, it may be as easy to implement as an assembler, but more powerful (which is what CollapseOS found). Otoh i can more easily see how (a subset of) assembly programs (the subset which doesn't exhibit fancy control flow) could be compiled to native code, or transpiled into an HLL like Python; Forth's two stacks could make things difficult, although otoh we can similarly consider the restricted subset of Forth programs which don't do anything weird.

---

random paper on Forth implementation details:

Updating the Forth Virtual Machine "...the addition of address registers is considered"

http://www.complang.tuwien.ac.at/anton/euroforth/ef08/papers/pelc.pdf

---