soapdog 1 day ago [-]
Tell us more about your books. I'm addicted to scheme books :-)
Also, what made you choose Haskell for your project? Can you share some of the reasoning?
reply
mark_l_watson 1 day ago [-]
I think the deployment story for Haskell is better than Racket's. It is easy enough to make standalone Racket executables, but with stack and cabal it is baked in: you can easily build multiple executables, separate libraries, and keep everything tidy.
Racket is much better to get something done and working quickly. Same comment for Common Lisp.
Haskell has great support for strongly typed web services (servant) and lots of great libraries. Racket has a very rich ecosystem of libraries and custom languages (like Typed Racket). Both are great.
EDIT: It takes me longer to get to working code in Haskell but once written the code has higher value to me because it is so much faster/easier to refactor, change APIs, reuse in other projects, etc. I just did a major refactoring/code-tidying this morning, and it was very simple to do.
reply
---
" But now suppose we want to use a conditional in place of the operator, not the right-hand value. In a Lisp, because everything is an expression—including the operator itself—this is easy:
((if (< 1 0) + *) 42 100)
But if you try the same thing in Python, it will raise a syntax error:
42 (+ if 1 < 0 else *) 100
Why? Because Python operators are not expressions. " [1]
---
Parity's wasmi (a WASM interpreter) finds the structured control flow of WASM (e.g. IF...END) too hard to handle, so it flattens it first! I knew it! (that's a concern that i had had). I think structured control flow might be good for optimization but bad for dead-simple implementation.
https://github.com/paritytech/wasmi/blob/master/src/isa.rs
---
" Io's core is about 400 lines of code, including evaluator. Hardly huge. It's the libraries that consume much of the rest. However it's not a bytecode VM, it's a tree walker. – jer Nov 8 '11 at 1:38 "
---
" As for the mod: it has never been published but if I have the time I might try submitting patches upstream, or passing them on to someone who will. It is likely they won't be accepted, though. LUA has never been receptive of people splitting the lexer and compiler apart from the interpreter. There is also the issue of bytecode architecture portability. LUA does not have a big-endian/little-endian agnostic interpreter. – soze Aug 13 '15 at 11:25 "
---
some earlier history of JVM's use for non-java languages: "One alternative is to make all these high-level services part of the abstraction offered by the portable assembler. For example, the Java Virtual Machine, which provides garbage collection and exception handling, has been used as a target for languages other than Java, including Ada (Taft 1996), ML (Benton, Kennedy, and Russell 1998), Scheme (Clausen and Danvy 1998), and Haskell (Wakeling 1998). But a sophisticated platform like a virtual machine embodies too many design decisions. For a start, the semantics of the virtual machine may" -- "C--: a portable assembly language that supports garbage collection"
---
" One alternative is to make all these high-level services part of the abstraction of- fered by the portable assembler. For example, the Java Virtual Machine, which provides garbage collection and exception handling, has been used as a tar- get for languages other than Java, including Ada (Taft 1996), ML (Benton, Kennedy, and Russell 1998), Scheme (Clausen and Danvy 1998), and Haskell (Wakeling 1998). But a sophisticated platform like a virtual machine embodies too many design decisions. For a start, the semantics of the virtual machine may not match the semantics of the language being compiled (e.g., the exception se- mantics). Even if the semantics happen to match, the engineering tradeo s may di er dramatically. For example, functional languages like Haskell or Scheme allocate like crazy (Diwan, Tarditi, and Moss 1993), and JVM implementations are typically not optimised for this case. Finally, a virtual machine typically comes complete with a very large infrastructure
| class loaders, veri ers and |
| that may well be inappropriate. Our intended level of abstraction is |
---
i just checked in with C--/Cmm again.
according to [2], C-- died because no one had time to work on it.
according to this search regarding Cmm since 2016, it looks like GHC/haskell stuff is still using it at least somewhat:
i skimmed the first part of the paper. It seems to be slightly higher-level than Boot; it's more of an OVM. E.g. it provides run-time facilities to treat activation records/frames as abstract things, so that a garbage collector can ask the Cmm runtime for the address of variables in activation records, even without knowing exactly how the platform implements activation records. The idea is that you'd write a garbage collector on top of Cmm.
---
" LLVM has been mentioned a few times recently, but unless I'm missing something, it doesn't really seem to be a good fit for functional languages at the moment. For a start, it apparently has self-tail-call support, but no general tail-call optimization, which is pretty important for languages like ML.
Also, LLVM's "low level" nature means it doesn't deal with things like nested lexical scoping at the VM level, so functional languages have to treat LLVM in the same way as they would a native target, but with a more limited range of possibilities, afaict. Granted, they achieve some portability in return.
VMs for functional languages are typically higher-level, supporting e.g. a 'closure' instruction which handles all of the issues related to nested lexical scoping, and the management of activation frames. There are a lot of advantages to this from the perspective of the language developer, including e.g. easier integration with debugging tools. " -- [3]
" > > Without an efficient representation of first-class continuations, it's > > too weak for Scheme. I also don't see direct support for proper tail > > recursion, which would also be needed to make things work well. > > The tail call documentation is unofficial and currently available from the > primary author of LLVM, Chris Lattner: " -- [4]
http://nondot.org/sabre/LLVMNotes/GuaranteedEfficientTailCalls.txt
" There's another document also dated Sep 5: http://nondot.org/sabre/LLVMNotes/ExplicitlyManagedStackFrames.txt , which describes a way to manage stack frames in LLVM to support "garbage collected closures". The technique seems to be to convert code to CPS, and allocate stack frames on the heap (standard stuff). But the document ends with "If someone is interested in support for this, guaranteed efficient tail call support and custom calling conventions are the two features that need be added to LLVM." Sounds like fun! " -- [5]
---
pytest's component system:
https://pluggy.readthedocs.io/en/latest/
---
gerbilly 2 days ago [-]
Does anyone remember reading K&R for the first time?
To me it seemed like C was such a tight, perfect little design.
Only thirty keywords, and simple consistent semantics.¹
When I first learned it, it was still pre ANSI, and you declared a function like this
int f(a)
char *a;
{
}
The ANSI style function declaration was maybe the only innovation that came after that that significantly improved the language.
I remember in the late '80s knowing my compiler so well that I could tell you what the stack frame would look like pretty much anywhere in my code. It was awesome to have that level of familiarity.
Soon after that things started to get more complicated, and I did a lot of Java, and I never again felt as dialed in to those languages as the late 80s C that I started out with.
The K&R book is worth a read if anyone missed it. It's beautifully written. A far cry from the 'opinionated' copy that you often find on the Go website.
Personally, I don't think you can make similar claims about Go's design or the 'because I told you so' tone that surrounds it.
1: Yes, I know, undefined behaviour, but this is my post and this is how I feel about C.
reply
lmm 2 days ago [-]
Can't say I got that feeling at all.
Smalltalk is a tight design. So is Lisp. So is Forth. So is APL. C's design just feels like a pile of... stuff. Arrays are almost but not quite the same as pointers. You can pass functions but not return them. There's a random grab-bag of control flow keywords, too many operators with their own precedence rules, and too many keywords given over to a smorgasbord of different integer types with arbitrary rules for which expressions magically upconvert to other ones.
reply
fao_ 2 days ago [-]
FYI: 2am possible rambling on design and software
Funny, I got the same feeling from Smalltalk and Lisp.
I own both "Common Lisp: The Language" and "Smalltalk-80: The Language and its Implementation", and while there are many ways those languages could be described as 'tight' (tightly-coupled, perhaps), at no point can you look at the C language and say "This could be smaller" without significantly removing functionality. Ok, perhaps there are some things around array/pointer syntax, etc. but the room for removing things from the language is very small.
LISP and Smalltalk are both 'kitchen-sink' languages. As I understand it (i.e. unless I misread something or skipped a page), for an implementation to be a proper spec-conforming instance of Smalltalk-80, a screen and graphics server is required. Indeed, Smalltalk-80 requires a very specific model of graphics display that is no longer appropriate for the time. Steele's Lisp has a number of functions that one could strip out and nobody would care or notice very much.
On the other hand, all of the C that is there serves a purpose.
Perhaps the only thing in your list that does feel like a tight design, in addition to C, is FORTH. But FORTH puts the burden on the programmer to remember what is on the stack at any given time. It has some beauty, indeed, but all of the abstractions seem inherently leaky. I haven't programmed in FORTH, however, so I can't really talk more about how that plays out in practice.
If the "There is nothing else to remove" does not resonate with you, then I think the perspective of the OP, and myself, and others, when we call C a "small"/"tight" language, is that essentially, C was born out of necessity to implement a system. Conversely, the 'batteries included' aspect of Smalltalk and Lisp more or less presume the existence of an operating system to run. It feels like the designers often did not know where to stop adding things.
Most of the library functions in C, can be implemented very trivially in raw C. Indeed, much of K&R is just reinventing the library 'from scratch', there is no need to pull out assembly, or any more assumptions about the machine other than "The C language exists". Whereas, a lot of the libraries of Smalltalk and Lisp seem bound to the machine. Not to harp on too much about the graphics subsystem of smalltalk, but you couldn't really talk about implementing it without knowing the specifics of the machine. And while much of Lisp originally could be implemented in itself, Common Lisp kind of turned that into a bit of a joke. Half the time when using it, it is easier and faster to reimplement something than find whether it exists.
Apologies if this is repetitive or does not make much sense.
reply
nickloewen 2 days ago [-]
I agree with you, but perhaps you are reading “tight” slightly differently than the way the original poster intended it?
To me, ANSI C is “tight” in the sense that it is made up of a small set of features, which can be used together to get a lot done. But the design of the features, as they relate to each other, can feel somewhat inelegant. Those different features aren’t unified by a Simple Big Idea in the way that they are in Lisp or Smalltalk.
Lisp and Smalltalk, then, have “tight” designs (everything is an s-expression/everything is an object) which result in minimal, consistent semantics. But they also have kitchen sink standard libraries that can be challenging to learn.
(Although to be fair, Smalltalk (and maybe Common Lisp to a lesser extent) was envisioned as effectively your whole OS, and arguably it is a “tighter” OS + dev environment than Unix + C...)
FWIW, I am learning Scheme because it seems to be “tight” in both senses.
reply
lmm 1 day ago [-]
It sounds like you're talking about the standard library rather than the language? The examples I gave have a very small language where you really can't remove anything, whereas in C quite a lot of the language is rarely-used, redundant, or bodged: the comma operator surprises people, for and while do overlapping things, braces are mandatory for some constructs but not for others, null and void* are horrible special cases.
Standard libraries are a different matter, but I'm not too impressed by C there either; it's not truly minimal, but it doesn't cover enough to let you write cross-platform code either. Threading is not part of the pre-99 language spec, and so you're completely reliant on the platform to specify how threads work with... everything. Networking isn't specified. GUI is still completely platform-dependent. The C library only seems like a baseline because of the dominance of unix and C (e.g. most platforms will support BSD-style sockets these days).
I'm actually most impressed by the Java standard library; it's not pretty, but 20+ years on you can still write useful cross-platform applications using only the Java 1.0 standard library. But really the right approach is what Rust and Haskell are doing: keep the actual standard library very small, but also distribute a "platform" that bundles together a useful baseline set of userspace libraries (that is, libraries that are just ordinary code written in the language).
reply
simias 2 days ago [-]
>To me seemed like C was such a tight, perfect little design. Only thirty keywords, and simple consistent semantics.
Except that clearly history showed that it wasn't enough, and we ended up with about 50 million (and counting) different meanings for "static", for instance. I like C but its simplicity is almost more by accident than by design. It's pretty far from "perfect" in my book.
There are so many weird features about the language that can bite your ass for no good reason. Why doesn't switch() break by default, since that's what you want it to do the overwhelming majority of the time? (Answer: if you generate the corresponding assembly jump table by hand, "fall through" is the easiest and simplest case, so they probably kept it that way.)
Why do we have this weird incestuous relationship between pointers and arrays? It might seem elegant at first (an array is a pointer to the first element or something like that) but actually it breaks down all over the place and can create some nasty unexpected behavior.
Why do we need both . and -> ? The compiler is always able to know which one makes sense from the type of the variable anyway.
String handling is a nightmare due to the choice of using NUL-terminated strings and string.h being so barebones that you could reimplement most of it under an hour.
Some of the operator precedences make little sense.
Writing hygienic macros is an art more than a science which usually requires compiler extensions for anything non-trivial (lest you end up with a macro that evaluates its parameters more than once).
Aliasing was very poorly handled in earlier standards and they attempted to correct that in more modern revisions while still striving to let old code build correctly and run fast. So you have some weird rules like "char can alias with everything" for instance. Good luck explaining why that makes sense to a newbie without going through 30+ years of history.
The comma operator.
Undefined function parameter evaluation order.
I suspect that with modern PL theory concepts you could make a language roughly the size of C with much better ergonomics. I'm also sure that nobody would use it.
reply
---
" Good languages come with integrated tests
You can be sure that if a language brings a testing framework -- even minimal -- in its standard library, the ecosystem around it will have better tests than a language that doesn't carry a testing framework, no matter how good the external testing frameworks for the language are. "
"
Good languages come with integration documentation
If the language comes with its own way of documenting functions/classes/modules/whatever and it comes even with the simplest doc generator, you can be sure that all the language functions/classes/modules/libraries/frameworks will have a good documentation (not great, but at least good).
Languages that do not have integrated documentation will, most of the time, have a bad documentation.
"
"
A language is much more than a language
A programming language is that thing that you write and make things "go". But it has much more beyond special words: It has a build system, it has a dependency control system, it has a way of making tools/libraries/frameworks interact, it has a community, it has a way of dealing with people. "
---
an argument for effect typing...
" Math.Round opens the browser print dialog (github.com) 264 points by gokhan 1 day ago
ChrisSD 1 day ago [-]
var ASM_CONSTS = [(function(){
var err = new Error;
print('Stacktrace: \n');
print(err.stack)
} // ...The issue is that print call. They expect it to call their own print function. But that's not in scope so it falls back on window.print (I.e. the function defined in the global object). "
---
koolba 1 day ago [-]
I’d argue that type checking combined with editor auto completion leads to faster development.
The feedback loop is within your editor, that’s a step before the app console!
reply
ncphillips 1 day ago [-]
100%. Going from Java+IntelliJ to Ruby+VSCode was shocking. In regards to tooling, the developer experience is way behind with Ruby. Sure there's a lot more boilerplate to look at, but with tools you don't actually end up writing much. And then you get refactoring tools that are actually super useful and robust.
reply
---
" I like Go as a language for building implementations of ((distributed systems)) things. It's well-suited to writing network services. It compiles fast, and makes executables that are easy to move around. "
---
earenndil 38 minutes ago [-]
Nim and cython are not analogous. Most notably, nim is statically typed, and identifiers are statically determinable
---
nicwilson 16 hours ago [-]
> C, D, C++, rust, and nim should all have comparable performance;
For the same design; what sets fast code apart from slow code is the availability of designs enabled by the language. Compile time computation is a huge win for D (IIRC nim has similar capabilities).
reply
nimmer 5 hours ago [-]
> C, D, C++, rust, and nim should all have comparable performance
No, Nim is often among the fastest, sometimes surpassing C. This is because it targets C by default and uses data structures and code paths that GCC can optimize very well.
reply
---
"language semantics... such as whether mutable pointers are allowed to alias (or even the existence of non-mutable pointers) can play a huge role in what optimisations can be applied in a given situation and even how effective certain optimisations are. "
---
" D's contract mechanism (i.e. in/out, and invariant blocks) can provide very strong guarantees to the compiler (as a specified part of the language as opposed to a GCC pragma/builtin), which the LLVM D compiler definitely uses.
All of the on-by-defaults that D has are usually there for a reason, i.e. floats are NaN-initialized and bounds checking is on by default: These are very good idiot-proofing which can often be ignored unless profiling suggests there is a tangible issue. "
---
WalterBright 16 hours ago [-]
> which ones lead you to the performant path naturally
D's big advantage is the plasticity of the code, meaning it's much easier to try out different data structures and algorithms to compare speed. My experience with C and C++ is that it's hard to change data structures, meaning one tends to stick with the initial design.
For a smallish example, in C one uses s.f when s is a value of a struct, and s->f when s is a pointer to a struct. If you're switching from one to the other, you have to go through all your code swapping . and ->. With D, both are .
reply
---
pjmlp 12 hours ago [-]
Adding to your list:
reply
jamesmp98 4 hours ago [-]
Ada has always been intriguing to me. Is it used much anywhere these days?
reply
pjmlp 4 hours ago [-]
Avionics, trains, oil rigs, basically everything where human lives are at stake, deemed High Integrity Computing.
Only 4 languages apply, Java with Real Time extensions, C and C++ with certification processes like MISRA and AUTOSAR among others, and Ada/SPARK.
reply
---
lenkite 15 hours ago [-]
Java's productivity is high with a good choice of libraries and an IDE like IntelliJ that offers fantastic refactoring, code generation and code-intention abilities. And Kotlin's productivity is even higher, since all the above apply along with convenience language features for succinct and functional-style coding.
So I would rate Java as fast and Kotlin as very fast in the dev productivity scale. You can really push the pedal in these two if you are working in an IDE and you can change your design iteratively on the go thanks to excellent tooling.
reply
---
JoshuaScript