proj-oot-ootNotes36

 soapdog 1 day ago [-]

Tell us more about your books. I'm addicted to scheme books :-)

Also, what made you choose Haskell for your project? Can you share some of the reasoning?

reply

mark_l_watson 1 day ago [-]

I think the deployment story for Haskell is better than Racket's. It is easy enough to make standalone Racket executables, but with stack and cabal it is baked in: you can easily build multiple executables and separate libraries, and keep everything tidy.

Racket is much better to get something done and working quickly. Same comment for Common Lisp.

Haskell has great support for strongly typed web services (servant) and lots of great libraries. Racket has a very rich ecosystem of libraries and custom languages (like Typed Racket). Both are great.

EDIT: It takes me longer to get to working code in Haskell but once written the code has higher value to me because it is so much faster/easier to refactor, change APIs, reuse in other projects, etc. I just did a major refactoring/code-tidying this morning, and it was very simple to do.

reply

---

" But now suppose we want to use a condi­tional in place of the oper­ator, not the right-hand value. In a Lisp, because every­thing is an expres­sion—including the oper­ator itself—this is easy:

((if (< 1 0) + *) 42 100)

But if you try the same thing in Python, it will raise a syntax error:

42 (+ if 1 < 0 else *) 100

Why? Because Python oper­a­tors are not expres­sions. " [1]
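
the same selection-of-an-operation trick works in C too, because functions (unlike operators) are values there; a minimal sketch of mine, not from the quoted article:

    #include <stdio.h>

    static int add(int a, int b) { return a + b; }
    static int mul(int a, int b) { return a * b; }

    int main(void) {
        /* pick the operation with an expression, then apply it;
           analogous to ((if (< 1 0) + *) 42 100) in the Lisp example */
        int result = (1 < 0 ? add : mul)(42, 100);
        printf("%d\n", result); /* prints 4200 */
        return 0;
    }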

---

Parity's wasmi (a WASM interpreter) finds the structured control flow of WASM (e.g. IF...END) too hard to handle, so it flattens it first! I knew it! (that's a concern that i had had). I think structured control flow might be good for optimization but bad for dead-simple implementation.

https://github.com/paritytech/wasmi/blob/master/src/isa.rs

---

" Io's core is about 400 lines of code, including evaluator. Hardly huge. It's the libraries that consume much of the rest. However it's not a bytecode VM, it's a tree walker. – jer Nov 8 '11 at 1:38 "

---

" As for the mod: it has never been published but if I have the time I might try submitting patches upstream, or passing them on to someone who will. It is likely they won't be accepted, though. LUA has never been receptive of people splitting the lexer and compiler apart from the interpreter. There is also the issue of bytecode architecture portability. LUA does not have a big-endian/little-endian agnostic interpreter. – soze Aug 13 '15 at 11:25 "

---

some earlier history of JVM's use for non-java languages:

" One alternative is to make all these high-level services part of the abstraction offered by the portable assembler. For example, the Java Virtual Machine, which provides garbage collection and exception handling, has been used as a target for languages other than Java, including Ada (Taft 1996), ML (Benton, Kennedy, and Russell 1998), Scheme (Clausen and Danvy 1998), and Haskell (Wakeling 1998). But a sophisticated platform like a virtual machine embodies too many design decisions. For a start, the semantics of the virtual machine may not match the semantics of the language being compiled (e.g., the exception semantics). Even if the semantics happen to match, the engineering tradeoffs may differ dramatically. For example, functional languages like Haskell or Scheme allocate like crazy (Diwan, Tarditi, and Moss 1993), and JVM implementations are typically not optimised for this case. Finally, a virtual machine typically comes complete with a very large infrastructure -- class loaders, verifiers and the like -- that may well be inappropriate. Our intended level of abstraction is much, much lower. Our problem is to enable a client to implement high-level services, while still " -- C--: a portable assembly language that supports garbage collection

---

i just checked in with C--/Cmm again.

according to [2], C-- died because no one had time to work on it.

according to this search regarding Cmm since 2016, it looks like GHC/haskell stuff is still using it at least somewhat:

https://www.google.com/search?q=what+happened+to+Cmm+haskell&safe=active&client=ubuntu&hs=59f&channel=fs&biw=1620&bih=2636&source=lnt&tbs=cdr%3A1%2Ccd_min%3A2016%2Ccd_max%3A&tbm=

i skimmed the first part of the paper. It seems to be slightly higher-level than Boot; it's more of an OVM. E.g. it provides run-time facilities to treat activation records/frames as abstract things, so that a garbage collector can ask the Cmm runtime for the address of variables in activation records, even without knowing exactly how the platform implements activation records. The idea is that you'd write a garbage collector on top of Cmm.

---

" LLVM has been mentioned a few times recently, but unless I'm missing something, it doesn't really seem to be a good fit for functional languages at the moment. For a start, it apparently has self-tail-call support, but no general tail-call optimization, which is pretty important for languages like ML.

Also, LLVM's "low level" nature means it doesn't deal with things like nested lexical scoping at the VM level, so functional languages have to treat LLVM in the same way as they would a native target, but with a more limited range of possibilities, afaict. Granted, they achieve some portability in return.

VMs for functional languages are typically higher-level, supporting e.g. a 'closure' instruction which handles all of the issues related to nested lexical scoping, and the management of activation frames. There are a lot of advantages to this from the perspective of the language developer, including e.g. easier integration with debugging tools. " -- [3]

" > > Without an efficient representation of first-class continuations, it's > > too weak for Scheme. I also don't see direct support for proper tail > > recursion, which would also be needed to make things work well. > > The tail call documentation is unofficial and currently available from the > primary author of LLVM, Chris Lattner: " -- [4]

http://nondot.org/sabre/LLVMNotes/GuaranteedEfficientTailCalls.txt

" There's another document also dated Sep 5: http://nondot.org/sabre/LLVMNotes/ExplicitlyManagedStackFrames.txt , which describes a way to manage stack frames in LLVM to support "garbage collected closures". The technique seems to be to convert code to CPS, and allocate stack frames on the heap (standard stuff). But the document ends with "If someone is interested in support for this, guaranteed efficient tail call support and custom calling conventions are the two features that need be added to LLVM." Sounds like fun! " -- [5]

---

pytest's component system:

https://pluggy.readthedocs.io/en/latest/

---

gerbilly 2 days ago [-]

Does anyone remember reading K&R for the first time?

To me seemed like C was such a tight, perfect little design.

Only thirty keywords, and simple consistent semantics.¹

When I first learned it, it was still pre ANSI, and you declared a function like this

    int f(a)
    char *a;
    {
    }

The ANSI-style function declaration was maybe the only innovation that came afterward that significantly improved the language.

I remember in the late '80s knowing my compiler so well that I could tell you what the stack frame would look like pretty much anywhere in my code. It was awesome to have that level of familiarity.

Soon after that things started to get more complicated, and I did a lot of Java, and I never again felt as dialed in to those languages as the late 80s C that I started out with.

The K&R book is worth a read if anyone missed it. It's beautifully written. A far cry from the 'opinionated' copy that you often find on the Go website.

Personally, I don't think you can make similar claims about Go's design or the 'because I told you so' tone that surrounds it.

1: Yes, I know, undefined behaviour, but this is my post and this is how I feel about C.

reply
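
for reference, a minimal sketch (mine, not from the comment; function names hypothetical) contrasting the pre-ANSI style above with the ANSI prototype style; the prototype is what lets the compiler type-check arguments at call sites:

    #include <stdio.h>

    /* pre-ANSI (K&R) style: parameter types declared between the
       parameter list and the body; calls are not type-checked */
    int f_kr(a)
    char *a;
    {
        return a[0];
    }

    /* ANSI style: the prototype carries the types, so a call like
       f_ansi(42) gets a compile-time diagnostic */
    int f_ansi(char *a)
    {
        return a[0];
    }

    int main(void) {
        printf("%d %d\n", f_kr("x"), f_ansi("x")); /* 120 120 */
        return 0;
    }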

lmm 2 days ago [-]

Can't say I got that feeling at all.

Smalltalk is a tight design. So is Lisp. So is Forth. So is APL. C's design just feels like a pile of... stuff. Arrays are almost but not quite the same as pointers. You can pass functions but not return them. There's a random grab-bag of control flow keywords, too many operators with their own precedence rules, and too many keywords given over to a smorgasbord of different integer types with arbitrary rules for which expressions magically upconvert to other ones.

reply

fao_ 2 days ago [-]

FYI: 2am possible rambling on design and software

Funny, I got the same feeling from Smalltalk and Lisp.

I own both "Common Lisp: The Language" and "Smalltalk-80: The Language and its Implementation", and while there are many ways those languages could be described as 'tight' (tightly-coupled, perhaps), at no point can you look at the C language and say "This could be smaller" without significantly removing functionality. Ok, perhaps there are some things around array/pointer syntax, etc. but the room for removing things from the language is very small.

LISP and Smalltalk are both 'kitchen-sink' languages. As I understand it (i.e. unless I misread something or skipped a page), for an implementation to be a proper spec-conforming instance of Smalltalk-80, a screen and graphics server is required. Indeed, Smalltalk-80 requires a very specific model of graphics display that is no longer appropriate for the time. Steele's Lisp has a number of functions that one could strip out and nobody would care or notice very much.

On the other hand, all of the C that is there serves a purpose.

Perhaps the only thing in your list that does feel like a tight design in addition to C, is FORTH. But FORTH puts the burden for the programmer to remember what is on the stack at any given time. It has some beauty, indeed, but all of the abstractions seem inherently leaky. I haven't programmed in FORTH, however, so I can't really talk more about how that plays out in practice.

If the "There is nothing else to remove" does not resonate with you, then I think the perspective of the OP, and myself, and others, when we call C a "small"/"tight" language, is that essentially, C was born out of necessity to implement a system. Conversely, the 'batteries included' aspect of Smalltalk and Lisp more or less presume the existence of an operating system to run. It feels like the designers often did not know where to stop adding things.

Most of the library functions in C, can be implemented very trivially in raw C. Indeed, much of K&R is just reinventing the library 'from scratch', there is no need to pull out assembly, or any more assumptions about the machine other than "The C language exists". Whereas, a lot of the libraries of Smalltalk and Lisp seem bound to the machine. Not to harp on too much about the graphics subsystem of smalltalk, but you couldn't really talk about implementing it without knowing the specifics of the machine. And while much of Lisp originally could be implemented in itself, Common Lisp kind of turned that into a bit of a joke. Half the time when using it, it is easier and faster to reimplement something than find whether it exists.

Apologies if this is repetitive or does not make much sense.

reply

nickloewen 2 days ago [-]

I agree with you, but perhaps you are reading “tight” slightly differently than the way the original poster intended it?

To me, ANSI C is “tight” in the sense that it is made up of a small set of features, which can be used together to get a lot done. But the design of the features, as they relate to each other, can feel somewhat inelegant. Those different features aren’t unified by a Simple Big Idea in the way that they are in Lisp or Smalltalk.

Lisp and Smalltalk, then, have “tight” designs (everything is an s-expression/everything is an object) which result in minimal, consistent semantics. But they also have kitchen sink standard libraries that can be challenging to learn.

(Although to be fair, Smalltalk (and maybe Common Lisp to a lesser extent) was envisioned as effectively your whole OS, and arguably it is a “tighter” OS + dev environment than Unix + C...)

FWIW, I am learning Scheme because it seems to be “tight” in both senses.

reply

lmm 1 day ago [-]

It sounds like you're talking about the standard library rather than the language? The examples I gave have a very small language where you really can't remove anything, whereas in C quite a lot of the language is rarely-used, redundant, or bodged: the comma operator surprises people, for and while do overlapping things, braces are mandatory for some constructs but not for others, null and void* are horrible special cases.

Standard libraries are a different matter, but I'm not too impressed by C there either; it's not truly minimal, but it doesn't cover enough to let you write cross-platform code either. Threading is not part of the pre-99 language spec, and so you're completely reliant on the platform to specify how threads work with... everything. Networking isn't specified. GUI is still completely platform-dependent. The C library only seems like a baseline because of the dominance of unix and C (e.g. most platforms will support BSD-style sockets these days).

I'm actually most impressed by the Java standard library; it's not pretty, but 20+ years on you can still write useful cross-platform applications using only the Java 1.0 standard library. But really the right approach is what Rust and Haskell are doing: keep the actual standard library very small, but also distribute a "platform" that bundles together a useful baseline set of userspace libraries (that is, libraries that are just ordinary code written in the language).

reply

simias 2 days ago [-]

>To me seemed like C was such a tight, perfect little design. Only thirty keywords, and simple consistent semantics.

Except that clearly history showed that it wasn't enough, and we ended up with about 50 million (and counting) different meanings for "static", for instance. I like C but its simplicity is almost by accident more than by design. It's pretty far from "perfect" in my book.

There are so many weird features about the language that can bite your ass for no good reason. Why doesn't switch() break by default, since that's what you want it to do the overwhelming majority of the time (answer: if you generate the corresponding assembly jump table by hand, "fall through" is the easiest and simplest case, so they probably kept it that way)?

Why do we have this weird incestuous relationship between pointers and arrays? It might seem elegant at first (an array is a pointer to the first element or something like that) but actually it breaks down all over the place and can create some nasty unexpected behavior.

Why do we need both . and -> ? The compiler is always able to know which one makes sense from the type of the variable anyway.

String handling is a nightmare due to the choice of using NUL-terminated strings and string.h being so barebones that you could reimplement most of it under an hour.

Some of the operator precedences make little sense.

Writing hygienic macros is an art more than a science which usually requires compiler extensions for anything non-trivial (lest you end up with a macro that evaluates its parameters more than once).

Aliasing was very poorly handled in earlier standards and they attempted to correct that in more modern revisions while still striving to let old code build correctly and run fast. So you have some weird rules like "char can alias with everything" for instance. Good luck explaining why that makes sense to a newbie without going through 30+ years of history.

The comma operator.

Undefined function parameter evaluation order.

I suspect that with modern PL theory concepts you could make a language roughly the size of C with much better ergonomics. I'm also sure that nobody would use it.

reply
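
two of the gotchas above (default fallthrough, and macros that re-evaluate their arguments), sketched minimally in C (my illustration, not from the comment):

    #include <stdio.h>

    /* classic macro pitfall: each argument appears twice, so
       MAX(i++, j) increments i twice when i > j */
    #define MAX(a, b) ((a) > (b) ? (a) : (b))

    int main(void) {
        /* switch falls through by default: delete a break and
           the next case's code runs too */
        int x = 1;
        switch (x) {
        case 1: printf("one\n"); break;
        case 2: printf("two\n"); break;
        default: printf("other\n"); break;
        }

        int i = 1, j = 0;
        int m = MAX(i++, j); /* m is 2, and i ends up 3, not 2 */
        printf("m=%d i=%d\n", m, i);
        return 0;
    }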

---

" Good languages come with integrated tests

You can be sure that if a language brings a testing framework -- even minimal -- in its standard library, the ecosystem around it will have better tests than a language that doesn't carry a testing framework, no matter how good the external testing frameworks for the language are. "

"

Good languages come with integration documentation

If the language comes with its own way of documenting functions/classes/modules/whatever and it comes even with the simplest doc generator, you can be sure that all the language functions/classes/modules/libraries/frameworks will have a good documentation (not great, but at least good).

Languages that do not have integrated documentation will, most of the time, have a bad documentation.

"

"

A language is much more than a language

A programming language is that thing that you write and make things "go". But it has much more beyond special words: It has a build system, it has a dependency control system, it has a way of making tools/libraries/frameworks interact, it has a community, it has a way of dealing with people. "

---

an argument for effect typing...

" Math.Round opens the browser print dialog (github.com) 264 points by gokhan 1 day ago

flag hide past web favorite 59 comments

ChrisSD 1 day ago [-]

    var ASM_CONSTS = [(function(){
        var err = new Error;
        print('Stacktrace: \n');
        print(err.stack)
    } // ...

The issue is that print call. They expect it to call their own print function. But that's not in scope so it falls back on window.print (I.e. the function defined in the global object). "

---

koolba 1 day ago [-]

I’d argue that type checking combined with editor auto completion leads to faster development.

The feedback loop is within your editor, that’s a step before the app console!

reply

ncphillips 1 day ago [-]

100%. Going from Java+IntelliJ to Ruby+VSCode was shocking. In regards to tooling, the developer experience is way behind with Ruby. Sure there's a lot more boilerplate to look at ((in Java)), but with tools you don't actually end up writing much. And then you get refactoring tools that are actually super useful and robust.

reply

---

" I like Go as a language for building implementations of ((distributed systems)) things. It's well-suited to writing network services. It compiles fast, and makes executables that are easy to move around. "

---

earenndil 38 minutes ago [-]

Nim and cython are not analogous. Most notably, nim is statically typed, and identifiers are statically determinable

---

nicwilson 16 hours ago [-]

> C, D, C++, rust, and nim should all have comparable performance;

For the same design; what sets fast code apart from slow code is the availability of designs enabled by the language. Compile time computation is a huge win for D (IIRC nim has similar capabilities).

reply

nimmer 5 hours ago [-]

> C, D, C++, rust, and nim should all have comparable performance

No, Nim is often among the fastest, sometimes surpassing C. This is because it targets C by default and uses data structures and code paths that GCC can optimize very well.

reply

---

"language semantics... such as whether mutable pointers are allowed to alias (or even the existence of non-mutable pointers) can play a huge role in what optimisations can be applied in a given situation and even how effective certain optimisations are. "

---

" D's contract mechanism (i.e. in/out, and invariant blocks) can provide very strong guarantees to the compiler (as a specified part of the language as opposed to a GCC pragma/builtin), which the LLVM D compiler definitely uses.

All of the on-by-defaults that D has are usually there for a reason, i.e. floats are NaN-initialized and bounds checking is on by default: These are very good idiot-proofing which can often be ignored unless profiling suggests there is a tangible issue. "

---

WalterBright 16 hours ago [-]

> which ones lead you to the performant path naturally

D's big advantage is the plasticity of the code, meaning it's much easier to try out different data structures and algorithms to compare speed. My experience with C and C++ is that it's hard to change data structures, meaning one tends to stick with the initial design.

For a smallish example, in C one uses s.f when s is a value of a struct, and s->f when s is a pointer to a struct. If you're switching from one to the other, you have to go through all your code swapping . and ->. With D, both are .

reply
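
a minimal C sketch (mine) of the . vs -> churn described above; in D both accesses would be written s.f:

    #include <stdio.h>

    struct S { int f; };

    int main(void) {
        struct S s = { 42 };
        struct S *ps = &s;

        /* value access uses '.', pointer access uses '->'; switching
           a variable between the two means editing every access site */
        printf("%d\n", s.f);
        printf("%d\n", ps->f);
        return 0;
    }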

---

pjmlp 12 hours ago [-]

Adding to your list:

reply

jamesmp98 4 hours ago [-]

Ada has always been intriguing to me. Is it used much anywhere these days?

reply

pjmlp 4 hours ago [-]

Avionics, trains, oil rigs, basically everything where human lives are at stake, deemed High Integrity Computing.

Only 4 languages apply: Java with Real Time extensions, C and C++ with certification processes like MISRA and AUTOSAR among others, and Ada/SPARK.

reply

---

lenkite 15 hours ago [-]

Java's productivity is high with a good choice of libraries and an IDE like IntelliJ that offers fantastic refactoring, code generation and code-intention abilities. And Kotlin's productivity is even higher, since all the above apply along with convenience language features for succinct and functional-style coding.

So I would rate Java as fast and Kotlin as very fast in the dev productivity scale. You can really push the pedal in these two if you are working in an IDE and you can change your design iteratively on the go thanks to excellent tooling.

reply

---

JoshuaScript 10 hours ago [-]

You'll definitely want to check out ATS[0].

It's a functional, dependently-typed language with performance on par with C (its compilation target), and has a type system and theorem prover that guarantees memory-safe code.

[0] http://www.ats-lang.org

reply

philzook 2 hours ago [-]

I am personally very intrigued by ATS, I think it is striving for a truly unique and powerful point in the programming language space, but have extreme reservations about it being ready for general use. Do you consider it ready for adoption, or are you bringing it up as a learning exercise?

reply

---

faissaloo 13 hours ago [-]

Actually I think D would be helped by removing features and focusing on really refining the existing stuff, especially CTFE which I think is the real selling point of D.

reply

skocznymroczny 5 hours ago [-]

What kind of features do you think should be removed? There were some people on the forums that were in favor of removing any feature that could be reimplemented with templates as a library solution.

reply

faissaloo 2 hours ago [-]

Optional parentheses, UFCS, BetterC, named character entities and the Objective-C & C++ interfaces, to name a few.

reply

---

" D Arrays are Phat Pointersstruct Array {size_t length;int* ptr;} " [6]

---

" Memory-safe code cannot use certain language features, such as:

    Casts that break the type system.
    Modification of pointer values.
    Taking the address of a local variable or function parameter.

" [7]

" The following operations are not allowed in safe functions:

    No casting from a pointer type to any type other than void*.
    No casting from any non-pointer type to a pointer type.
    No pointer arithmetic (including pointer indexing).
    Cannot access unions that have pointers or references overlapping with other types.
    Calling any system functions.
    No catching of exceptions that are not derived from class Exception.
    Disallow @system asm statements.
    No explicit casting of mutable objects to immutable.
    No explicit casting of immutable objects to mutable.
    No explicit casting of thread local objects to shared.
    No explicit casting of shared objects to thread local.
    No taking the address of a local variable or function parameter.
    Cannot access __gshared variables.
    Cannot use void initializers for pointers.
    Cannot use void initializers for class or interface references.

"

[8]

" Array bounds checks are necessary to enforce memory safety, so these are enabled (by default) for @safe code even in -release mode.

...

Scope and Return Parameters

The function parameter attributes return and scope are used to track what happens to low-level pointers passed to functions. Such pointers include: raw pointers, arrays, this, classes, ref parameters, delegate/lazy parameters, and aggregates containing a pointer.

scope ensures that no references to the pointed-to object are retained, in global variables or pointers passed to the function (and recursively to other functions called in the function), as a result of calling the function. Variables in the function body and parameter list that are scope may have their allocations elided as a result.

return indicates that either the return value of the function or the first parameter is a pointer derived from the return parameter or any other parameters also marked return. For constructors, return applies to the (implicitly returned) this reference. For void functions, return applies to the first parameter iff it is ref; this is to support UFCS, property setters and non-member functions (e.g. put used like put(dest, source)).

These attributes may appear after the formal parameter list, in which case they apply either to a method's this parameter, or to a free function's first parameter iff it is ref. return or scope is ignored when applied to a type that is not a low-level pointer.

Note: Checks for scope parameters are currently enabled only for @safe code compiled with the -dip1000 command-line flag.

" [9]

---

my summary of the comments on [10] (one small part of this was moved to ootMetaProgrammingNotes3):

A beloved benefit of gradual or static typing is IDE incremental typo catching and autocomplete support. Many spoke on this. One interesting anecdote is that user seanmcdirmid says that in his opinion, TypeScript is amazingly useful just due to enabling IDE stuff, even though it doesn't give you either soundness or performance.

A key hurdle for any static-only type system is: how do you do something like Python's Pandas? See this great quote: " tomp 5 days ago [-]

I am an "expert" in static type systems (I'm familiar with Java, Scala, OCaml, Haskell, Rust, Go, TypeScript?, C++ ... and keep up to date with latest type systems research like 1ML, MLsub, Liquid Haskell, ...), but I have a really hard time imagining how one would develop a statically typed library that would even approximate the usefulness and convenience (for rapid prototyping and interactive data analysis) of Python's Pandas (although if I was a betting man, I'd wager the best language to implement it in would be Scala, with it's advanced macros & almost-dependent type system). "

And another hurdle:

" I've yet to see the equivalent of Django or Rails when it comes to rapidly assembling a database-backed CRUD app in a statically typed language...the metaprogram-y things those frameworks are free to do without a type system stopping them. "

Another one is: the type of JSON schemas produced by API tooling. This is also used as an example of how in dynamically typed languages, sometimes you really don't have to reason about the type (although some ppl claim you are implicitly reasoning about it anyways). Some ppl say that Java's got this one solved now:

" auto obj = JSON(text); int age = obj.["persons"][3]["age"]; "

" io-ts https://github.com/gcanti/io-ts "

"Python and Ruby are extremely malleable, like Smalltalk and lisp before them. You can inspect anything, you can override / replace / proxy anything. You can use a few built-in collection types for nearly anything."

"What I think is more important is the flexibility that it ((dynamic typing)) brings to express design patterns that in other languages, like Java for example, can become very cumbersome. I can’t tell you how many times I have been in the bowels of some Java code and found some method that takes a concrete implementation of something that could or should be an interface when I really want to pass in something different. "

REPLs are beloved.

Type inference is beloved.

The friction of a compile step annoys dynamic language ppl, but not much, unless compile times are long.

People describe the following languages as having something like gradual typing:

Java, Typescript, Haskell have a mode where type errors are postponed to runtime where possible.

More support for type inference: "Type inference is where it's at. Everybody loves types when there's hardly any overhead. Typescript is the best example of this."

User losvedir makes a good point that it's not just static typing, but just any compile step, that can ((sometimes?)) catch typos, misnamed variables, not imported function calls.

Static null checking is beloved. And "We're working on static null checking in Dart now. Unlike TypeScript's, Dart's approach to null checking will be sound"

Data classes and lambdas are beloved. (mb see https://projectlombok.org/features/Data )

TypeScript is beloved, although some (type system implementation expert folks?) point out that its type system is complicated/hard to understand/hard to implement.

In a long post, munificent explains why Dart transitioned from gradually typed to statically typed:

bool string)^3. munificent replied sure, but that requires a complicated type system, which burdens both newbies and implementors. For example, a newbie might say "array[0] + 2" and then later change it to "array[random() % 3] + 2". Suddenly they get a type error, which confuses them.

example: "Python's list type has a sort() method...((that))...takes an optional "key" argument that is a callback that converts each value to a key of some time and then sorts using those projected keys. If you pass a key function, then sort() needs to be a generic function that takes a type parameter for the return type of the key function, like:

    sort<R>(key: (T -> R))

But if you don't pass the key function, the R type argument is meaningless. Should it be a generic method or not?

An even gnarlier question is "What kinds of lists can be sorted at all?" The sort() method works by calling "<" on pairs of elements. Not all types support that operation. Of those that do, not all of them accept their own type as the right-hand operand. How do you design the list class and the sort() method such that you ensure you won't get a type error when you call sort()?

To handle this kind of stuff, the "best practices" for your API design effectively become "the way you would design it in a fully statically-typed language". But those restrictions are one of the main reasons people like dynamic languages.

You can mitigate some of this with very sophisticated type system features. Basically design a type system expressive enough to support all of the patterns people like in dynamically typed languages. That's the approach TypeScript takes. But one of the main complaints with static type systems is that they are too complex for humans to understand and too slow to execute.

This makes that even worse. TypeScript's type system is very complex and type-checking performance is a constant challenge. In order to let you write "dynamic style" code, TypeScript effectively makes you pay for a super-static type system. "

Gradually typed languages don't give you the soundness guarantees or performance benefits of static typing. And their type systems have to be very expressive in order to catch most of the tricks that dynamic language users expect. "I used to joke that we gave you the best of both worlds: the brevity of Java and the speed of JavaScript. "

" But for new development, I think you're much better off choosing a modern statically typed language if you think there's a chance your program will grow to some decent size. By that, I mean C#, Go, Swift, Dart, Kotlin, etc. Type inference gives you most of the brevity of dynamic types and you'll get all the safety and performance you want in return for your effort to type your code.

If you're going to do the work to make your code typable, you should get as much mileage out of it as you can. So far, no one I know has figured out how to do that with an optionally or gradually typed language. "

" One of the real large benefits of dynamic types is there is much less to learn before you can start writing real code. For new users, hobbyists, or people where programming isn't their main gig, this is huge. I love that dynamically typed languages exist and can serve those people.

But my experience is that if you're a full time professional software engineer writing real production code eight hours a day, it's worth it to get comfortable with static typing and use it. "

This guy claims anecdotal experience that even Python experts are greatly slowed down by a lack of types (replies to this comment focused on stuff like maybe with more unit tests it would have worked out better; others say to use more asserts, and point out that although in theory asserts could fail at runtime, in practice they don't see that much):

"I inherited legacy Python projects at work and I'm shocked by the time I'm wasting fighting with the lack of types. Most of the time I have no idea (and my IDE neither) what methods and properties are available on variables and function parameters. And what's the most insane to me: I'm sitting next to people with LOT of experience in Python and I see them losing as much time as me when maintaining and debugging some basic piece of code they have written not even 2 weeks ago."

And this guy claims that lack of types makes Python code harder to understand: "My experience with writing prototypes in Python is that one week later I have difficulties understanding it and one year later (yeah "temporary, my ass" kind of thing) it's completely alien to me. It's absolutely not the same with OCaml, F# and lately, TypeScript."

Some people point out that in dynamically-typed languages, you end up having to carry the type information around in your head instead of letting the language help keep track of it. User jonnytran makes the point that, yes, that's a pure win for simple types like int, string, float, but for complicated types they get really hard to read and think about, and complicated types also demand some design thought in order to choose which complicated type (out of those that don't have a type error in the immediate situation) is actually the best choice for the long run (for example, how general should the type signature be? Often it should be at least a bit more general than the immediate situation, but if it's overly general then the compiler can't produce efficient code).

Static typing fans complain that static typing doesn't really come into its own until you use a language with a sufficiently expressive type system, like OCaml, Haskell, Rust, Scala, F#, and maybe C#, Kotlin.

---

i took another look at the first two pages of PLOT. I like most everything on page 3, copied here for easy reference:

" PLOT includes five organizing mechanisms: files, modules, types, functions, and macros. Most languages conflate one or more of these mechanisms, but in PLOT they are independent and orthogonal.

Files are the units of source code. This makes them the units of compilation and of loading executable code. Breaking a program into multiple files eases maintenance and delivery of the program, especially when different individuals or organizations are responsible for different files.

Modules are the units of global name scoping. A module can inherit named definitions from other modules, with renaming. Breaking a program into multiple modules makes name conflict issues tractable.

Types are the units of data classification. A type is a dichotomy that classifies every object as either a member of the type or not a member of the type. Classes are the most important type of type in PLOT. A class defines the structure of its member objects as a set of named slots that can hold values. A class can inherit typeness and slots from another class.

Functions are the units of behavior. In PLOT all functions are composite, being constructed at load time from one or more pieces of executable code called methods.

Macros are the units of language definition. A macro defines the syntax of a construct by specifying the procedure for parsing that construct, and defines the semantics of a construct by specifying an expression that is the implementation of that construct. The syntax of PLOT is not built into the compiler, but is entirely defined by macros (many of which are defined by the implementation rather than by the user, of course). The compiler is only concerned with the semantics of the primitive expressions. This enables users to define their own languages based on PLOT which are every bit as flexible, powerful, and clean as the base language.

A few examples of the orthogonality of these mechanisms:

    The source code in a given file need not be related to just one class, nor be contained in just one module.
    The source code for methods applicable to a given class need not be all in one file nor all in one module.
    Methods can be added to an existing function at load time, which means at any time because you can load additional code at run time.
    The language syntax can be different in different modules, because the syntax is determined by the macros whose names are in scope."

---

lmm on May 18, 2017 [-]

I think there is real convergence. All serious new languages have some form of static typing augmented with some form of type inference. All are lexically scoped. All have first-class functions and map/reduce/filter. None have unchecked manual memory management. None have checked exceptions.

There are still areas of debate, but at the same time I think there is real progress; we have learnt from past mistakes and they won't be repeated.

---

jnbiche on May 18, 2017 [-]

Kotlin lacks the ability to do true functional programming that Swift has. Swift has pattern matching, recursive data structures (edit: specifically ADTs with enums or case classes is what I was thinking of here, should have written algebraic data types), tail call optimization, even some form of type classes, immutability (thanks @tmail21), and so on.

Kotlin does not have any of these (edit: this is now partly false, see below)

I'm sad that Google is supporting Kotlin and not Swift or Scala for Android, since at least with the latter two, you can use functional programming.

Edit: Actually, I'm looking into Kotlin again, and it looks like it's greatly expanded support for functional programming compared to a year or two ago. For example, algebraic data types can now be encoded in a similar manner to Scala, and kind of pseudo-pattern matched using `when`. TCO is now supported. There are lambdas, and support for closures. Destructuring assignment. But as far as I can see, still no immutable values (just immutable references), and no way to make extensible type classes, like in Scala and Swift.

I'm definitely going to take another look now. Last I checked a few years ago, Kotlin had very limited support for functional programming.

pillowkusis on May 18, 2017 [-]

Maybe you haven't checked out Kotlin lately?

Kotlin has somewhat pattern matching: https://kotlinlang.org/docs/reference/control-flow.html#when...

Tail call optimization: https://kotlinlang.org/docs/reference/functions.html#tail-re...

Type classes: https://kotlinlang.org/docs/reference/sealed-classes.html

And I'm not sure what you mean by recursive data structures. Basically every C style language I'm aware of can contain a reference to another instance of its own type.

It's more java-y than Scala, but it's fundamentally capable of a very FP style.

Filligree on May 18, 2017 [-]

> Type classes: https://kotlinlang.org/docs/reference/sealed-classes.html

Those are not type-classes, at least not in the Haskell sense. They allow you to avoid an else branch, yes, which is actually quite useful; but one basic ability they're missing is the ability to define a new branch for a new instance of the type defined by library users.

Just as an example. They're really not much like type-classes at all.

jnbiche on May 18, 2017 [-]

I agree with Filligree that sealed classes are not really capable of emulating type classes. They can be used for algebraic data types, which is great, but for true extensible type classes you'd need something like Scala traits.

masklinn on May 18, 2017 [-]

> Type classes: https://kotlinlang.org/docs/reference/sealed-classes.html

Sealed classes are an odd/wonky take on sum types, they're not even remotely close to type classes.

zejay on May 20, 2017 [-]

Tail recursion is a special case of tail calls that can be turned into loops. Their keyword is called tailrec, so it's probably not real TCO. Real tail call optimization can't easily be turned into loops in the general case.

---

Swift has "structures, ARC, proper extensions, good protocols/interfaces, flexible enums, and runs natively" [11]

---

C# has both 'ref' parameters and 'out' parameters. They both indicate parameters which are passed by reference. The difference is that 'ref' requires the variable to be initialized before the function call, while 'out' does not (instead, the callee must assign it before returning).

an interesting thing about these is that they can't be assigned to fields in objects. At first this seemed surprising to me; you have pointers, so why can't you put the pointer in an object? The reason is that a reference is not just a pointer, it's a transparent pointer, by which i mean that reading from/writing to the reference is transparently indirected through the pointer to the value it points at. So i guess whatever does this transparent indirection in C# isn't set up to check every object field access to see if it's to a reference and hence should be indirected.

an interesting consequence of this is that iterator methods and async methods in C# can't take ref or out parameters. This is because of an implementation detail; in C# they implemented anonymous function closures and iterator and async methods in terms of lowering (rewriting the C# code to other C# code) that is essentially a state machine, with state (including the local variables of the iterator or async method) being held by an object [12] [13]. Since you can't put ref or out parameters in an object field, they can't be stored in the state machine state. It would have been possible to implement iterators and async directly in the target platform (CIL) but the C# team decided not to do it that way.

---

i think i said this elsewhere long ago, but imo some goals of a programming language are:

some languages say 'programmer happiness'. I guess that's sort of a supergoal of all of these? I feel like this is assumed, it doesn't need to be in the list. Programmers want code to be readable, they don't want bugs, they want performance, they don't want their teammates breaking stuff or wrongly blaming them for broken stuff, they want to be able to learn the language and to find other people who can work on it with them; all that is stuff that makes them happy. Is there a programming language with goals that, when achieved, make programmers less happy?

---

some examples of leaky/dangerous abstractions:

" If you use a concurrency library, and don't understand threadpools, block in a thread and threadlock your application, you are in trouble.

If you use IEnumerable, and don't filter out the results before you toList, you are in trouble.

If you use monads and think flatmap is stack safe, always, you are in trouble. " [14]

---

Rust vs. C++ lambda syntax:

[15] combined with later replies by others:

comex on May 26, 2017 [-]

Rust supports the same level of power and abstraction, yet it makes various aesthetic (ish) decisions that bring it closer to Python than C++ in this example.

Rust implements lambdas the same way as C++, yet it doesn't need capture lists. It has two modes: by default it tries to guess whether to take each variable by move/copy or by reference, but you can specify 'move' on a lambda to have it move/copy all mentioned variables. Not as flexible, right? Actually, it's equivalent in power, because if you want to "capture a variable by reference" in a 'move' lambda, you can manually assign a reference (pointer) to it to a new variable, and move that. With Rust's syntax, the new variable can even have the same name as the original, so it looks very natural:

    {
         let x = &x;
         foo(move || bar(x));
    }

...

 This is a bit more verbose, but most of the time you don't need it.

Like C++, Rust uses semicolons, but it largely unifies expressions with statements. For example, the following are equivalent:

    foo(bar(42));
    foo({ let x = 42; bar(x) })

The syntax for lambdas is "|args| return_expression", so a simple lambda can be very succinct: "|x| x + 1". But just like above, return_expression can also be a braced block, allowing you to have large bodies with multiple statements. In most languages, supporting both blocks and expressions as lambda bodies would require two forms of lambda in the syntax, an added complexity. C++ conservatively chose to support only blocks, while JavaScript and Swift, among others, chose to have two forms. But in Rust, that's just a special case of a more general syntax rule.

Rust is statically typed, but it has type inference, so - among other things - you can usually omit the types of lambda arguments.

So what does the adder example actually look like in Rust? With soon-to-be-stable 'impl Trait' syntax, like this:

    fn adder(amount: u32) -> impl Fn(u32) -> u32 {
        move |x| x + amount
    }

The type declaration is somewhat verbose, but the implementation is quite succinct. The only ugly part IMO is 'move', which would be better if it weren't required here. (Without 'move' it tries to capture 'amount' by reference and complains because that wouldn't borrow check [because it would be a dangling pointer]. But it would be nice if the default lambda mode could decide to move in this case, either because of the lifetime issue or just because 'u32' is a primitive type that's always faster to copy than take an immutable reference to.)

...

You pays your money and you makes your choice. Neither language is really a good fit for anonymous functions in the sense that Lisp was.

---

might want to use "Apache License v2.0 with LLVM Exceptions": https://github.com/llvm-mirror/libcxx/blob/master/LICENSE.TXT

" Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions. See https://llvm.org/LICENSE.txt for license information. SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception "

---

shallow const vs. deep const

---

 lajawfe 11 hours ago [-]

Watch "10 things I regret about node.js" - https://youtu.be/M3BM9TB-8yA - from the creator of both node and deno, to understand his motivations behind the deno project. A very intriguing talk.

reply

29athrowaway 10 hours ago [-]

> Access between V8 (unprivileged) and Rust (privileged) is only done via serialized messages defined in this flatbuffer.

Expect to see this in "n things I regret about deno"

reply

kevinkassimo 8 hours ago [-]

Replying to Flatbuffers concerns:

You are right, we will try to get rid of it for some faster serialization mechanism (after some huge internal refactor lands). See the talk I posted; Ryan mentioned it near the end.

reply

spraak 10 hours ago [-]

Can you explain why?

reply

29athrowaway 8 hours ago [-]

Every deno API function call goes through flatbuffer serialization + deserialization + more steps. Sounds like a lot of overhead.

reply

---

" Yoric 22 hours ago [-] ... For instance, I'm going to talk about the field that I know best: programming languages (to keep it simple, I'm not talking of VMs or compilers, just the languages). Pretty much everything you see in Java, C#, Python or Go dates back to the 70s (with broader testing and quality of life improvements, of course), Swift gets a few improvements imported from the 80s, but not that many. The only industrial languages that seem to have real innovations are F#, Rust and Scala, which are three cases in which the actual researchers managed to convince (or found) a company to support the language. "


kragen 7 hours ago [-]

For those not aware of the background, the author is a wizard from a secretive underground society of wizards known as the Familia Toledo; he and his family (it is a family) have been designing and building their own computers (and ancillary equipment like reflow ovens) and writing their own operating systems and web browsers for some 40 years now. Unfortunately, they live on the outskirts of Mexico City, not Sunnyvale or Boston, so the public accounts of their achievements have been mostly written by vulgar journalists without even rudimentary knowledge of programming or electronics.

And they have maintained their achievements mostly private, perhaps because whenever they've talked about their details publicly, the commentary has mostly been of the form "This isn't possible" and "This is obviously a fraud" from the sorts of ignorant people who make a living installing virus scanners and pirate copies of Windows and thus imagine themselves to be computer experts. (All of this happened entirely in Spanish, except I think for a small amount which happened in Zapotec, which I don't speak; the family counts the authorship of a Zapotec dictionary among their public achievements.) In particular, they've never published the source or even binary code of their operating systems and web browsers, as far as I know.

This changed a few years back when Óscar Toledo G., the son of the founder (Óscar Toledo E.), won the IOCCC with his Nanochess program: https://en.wikipedia.org/wiki/International_Obfuscated_C_Cod... and four more times as well. His obvious achievements put to rest — at least for me — the uncertainty about whether they were underground genius hackers or merely running some kind of con job. Clearly Óscar Toledo G. is a hacker of the first rank, and we can take his word about the abilities of the rest of his family, even if they do not want to publish their code for public criticism.

I look forward to grokking BootOS in fullness and learning the brilliant tricks contained within! Getting a full CLI and minimalist filesystem into a 512-byte floppy-disk boot sector is no small achievement.

It's unfortunate that, unlike the IOCCC entries, BootOS is not open source.

reply

http://www.biyubi.com/

kragen 7 hours ago [-]

I read all the articles on their web site (the ones in Spanish and not Zapotec, anyway), the articles in the press about them, and the comments sections of those articles, and I talked with hacker friends of mine who live in Mexico City. I'm interested in minimalist computing systems like Oberon, Plan9, stage0, MesCC, Scheme, Forth, Squeak, OTCC, the LGP-30, and the Familia Toledo's Biyubi system; myself, I've written things like StoneKnifeForth and httpdito, a web server in under 2000 bytes: https://news.ycombinator.com/item?id=6908064

But only in my dreams do I approach achievements like Biyubi.

reply

https://news.ycombinator.com/submitted?id=nanochess

---

mb change name to 'gull'

---

<=> spaceship operator (compare that returns -1,0,1)
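
e.g. the same -1/0/1 contract is what C's qsort comparators use; a minimal sketch (mine):

    #include <stdio.h>
    #include <stdlib.h>

    /* three-way comparison: -1 if a < b, 0 if equal, 1 if a > b */
    static int spaceship(int a, int b) {
        return (a > b) - (a < b);
    }

    /* qsort expects exactly this negative/zero/positive contract */
    static int cmp_int(const void *pa, const void *pb) {
        return spaceship(*(const int *)pa, *(const int *)pb);
    }

    int main(void) {
        int xs[] = { 3, 1, 2 };
        qsort(xs, 3, sizeof xs[0], cmp_int);
        printf("%d %d %d\n", xs[0], xs[1], xs[2]); /* 1 2 3 */
        return 0;
    }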

---

pizlonator 7 hours ago [-]

Worth noting that this basically means that all of the JS engines are converging on what JavaScriptCore pioneered:

It's weird that they kept the C++ interpreter. But not too weird, if the IC logic in the new interpreter is costly. In the LLInt, the IC/type logic is either a win (ICs are always a win) or neutral (value profiling and case flag profiling costs nothing in LLInt).

Also worth noting that this architecture - a JIT ABI interpreter that collects types as a bottom tier - is older than any JS engine. I learned it from HotSpot, and I guess that design was based on a Strongtalk VM.

This is the current state of the art of JSC's bottom interpreter tier FWIW: https://webkit.org/blog/9329/a-new-bytecode-format-for-javas...

reply

---