todo add in stuff from [1], [2], [3], [4], [5], [6] and any non-opinions from [7]. and other wtf lists

add opinions from

should i have a chapter on my idea of neural primitive behaviors? (adaptation, delay, threshold, convolution, etc; and the stuff in fig. 2 of )

" A computer is a clock with benefits. They all work the same, doing second-grade math, one step at a time: Tick, take a number and put it in box one. Tick, take another number, put it in box two. Tick, operate (an operation might be addition or subtraction) on those two numbers and put the resulting number in box one. Tick, check if the result is zero, and if it is, go to some other box and follow a new set of instructions.

You, using a pen and paper, can do anything a computer can; you just can’t do those things billions of times per second. And those billions of tiny operations add up.


When you “batch” process a thousand images in Photoshop or sum numbers in Excel, you’re programming, at least a little.


Consider what happens when you strike a key on your keyboard. Say a lowercase “a.” The keyboard is waiting for you to press a key, or release one; it’s constantly scanning to see what keys are pressed down. Hitting the key sends a scancode.

Just as the keyboard is waiting for a key to be pressed, the computer is waiting for a signal from the keyboard. When one comes down the pike, the computer interprets it and passes it farther into its own interior. “Here’s what the keyboard just received—do with this what you will.”

It’s simple now, right? The computer just goes to some table, figures out that the signal corresponds to the letter “a,” and puts it on screen. Of course not—too easy. Computers are machines. They don’t know what a screen or an “a” are. To put the “a” on the screen, your computer has to pull the image of the “a” out of its memory as part of a font, an “a” made up of lines and circles. It has to take these lines and circles and render them in a little box of pixels in the part of its memory that manages the screen. So far we have at least three representations of one letter: the signal from the keyboard; the version in memory; and the lines-and-circles version sketched on the screen. We haven’t even considered how to store it, or what happens to the letters to the left and the right when you insert an “a” in the middle of a sentence. Or what “lines and circles” mean when reduced to binary data. There are surprisingly many ways to represent a simple “a.” It’s amazing any of it works at all.

" --
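A quick sketch (mine, not from the quote) of a few of those distinct representations of "a" in Python — the code point, the stored bytes, the binary form, and a made-up bitmap glyph standing in for the lines-and-circles version:

```python
ch = "a"

print(ord(ch))                 # code point: 97
print(ch.encode("utf-8"))      # bytes as stored: b'a'
print(format(ord(ch), "08b"))  # the same byte in binary: 01100001

# hypothetical bitmap "glyph", standing in for the lines-and-circles
# version a font rasterizer would draw into screen memory:
glyph = [
    "....",
    ".##.",
    "#..#",
    "#..#",
    ".###",
]
for row in glyph:
    print(row)
```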

link to c2 page for each language, concept

"currying (aka "dependency injection for functions")" --
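A sketch of why currying reads as "dependency injection for functions" — here approximated with `functools.partial` (Python doesn't curry natively); all the names are hypothetical:

```python
from functools import partial

def send_email(smtp_host, sender, recipient, body):
    return f"[{smtp_host}] {sender} -> {recipient}: {body}"

# "Inject" the configuration-like arguments up front...
send_from_alice = partial(send_email, "smtp.example.com", "alice@example.com")

# ...and supply the per-call arguments later, like injected dependencies:
print(send_from_alice("bob@example.com", "hi"))
```

(Strictly, currying transforms an n-argument function into a chain of one-argument functions; partial application just fixes some arguments early. The "inject the stable stuff first" effect is the same.)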


integer overflow example:
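One possible example (mine — Python's own ints are arbitrary-precision and never overflow, so fixed-width C-style ints are simulated here with `ctypes`):

```python
import ctypes

INT32_MAX = 2**31 - 1                      # 2147483647

# Adding 1 to INT32_MAX wraps a signed 32-bit int around to the minimum:
wrapped = ctypes.c_int32(INT32_MAX + 1).value
print(wrapped)                             # -2147483648

# Unsigned arithmetic wraps modulo 2**32 instead:
print(ctypes.c_uint32(2**32).value)        # 0
```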

--- doesn't say much, but it does recommend Let's Build a Compiler! and A Nanopass Framework for Compiler Education above all else (and semi-recommends Let's Build a Compiler! in Forth: )

the HN discussion also recommends by Wirth, and "Understanding and Writing Compilers" by Richard Bornat, written in the 1970s using BCPL as the implementation language


1: 2:


[1] [2]


Post-2008 I'd really push . Writing a compiler in the same way you'd write an ordinary program, this was the first explanation where I actually understood the rationale for the choices being made.


 toolslive 3 days ago

"Implementing Functional Languages: a tutorial" by Simon Peyton Jones is very good and shows different strategies for the runtime.



" There are three basic computational models -- functional, logic, and imperative. In addition to the set of values and associated operations, each of these computational models has a set of operations which are used to define computation. The functional model uses function application, the logic model uses logical inference and the imperative model uses sequences of state changes. " --
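A tiny illustrative sketch (mine) of the same computation under two of those models; the logic model doesn't map naturally onto Python, so it is only hinted at in a comment:

```python
# Imperative model: computation as a sequence of state changes.
def fact_imperative(n):
    result = 1
    for i in range(2, n + 1):
        result = result * i        # each step updates state
    return result

# Functional model: computation as function application, no mutation.
def fact_functional(n):
    return 1 if n <= 1 else n * fact_functional(n - 1)

print(fact_imperative(5), fact_functional(5))   # 120 120

# Logic model (e.g. Prolog): state a relation, let inference find values:
#   fact(0, 1).
#   fact(N, F) :- N > 0, M is N - 1, fact(M, G), F is N * G.
```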




regarding implementations, Lua, V8, Smalltalk/Self/Strongtalk, and SBCL seem to be held out as good/efficient examples of dynamic language implementations

eg [8] (same author: [9]) [10] todo other citations

and JVM as a static VM:

todo other citations

--- impl

Mike Pall, of LuaJIT fame, has the opinion:

" it's less effort to write and maintain a well-tuned interpreter than a simple compiler. Since performance is shown not to be better, why bother writing a stage 1 compiler at all? " -- [11]

as noted in [12]: " The author of LuaJIT 2.0 says an interpreter written in assembly is just as fast as a baseline JIT and way easier to write.

He also has measurements to back this up: (this is just one place he talks about it, there's others in that thread and elsewhere) "



evaleverything2 6 days ago

I mean it inlines the method itself at the call site, eliding the overhead of a method invocation altogether, whereas a normal (P)IC simply elides the overhead of a class+selector method lookup.


((my note: PIC is 'polymorphic inline cache'))

chrisseaton 6 days ago

Yeah I get that - but this inlining of the method itself within an IC, removing the method call overhead, has been done in every non-trivial dynamic language VM since the early 90s.
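A toy sketch (mine, heavily simplified) of just the lookup-elision part of an inline cache — each call site keeps its own small class-to-method cache; as the thread notes, real VMs go further and splice the method body in at the site:

```python
class Dog:
    def speak(self): return "woof"

class Cat:
    def speak(self): return "meow"

class CallSite:
    """One call site for a given selector, with its own tiny cache."""
    def __init__(self, selector):
        self.selector = selector
        self.cache = {}                     # class -> method

    def call(self, receiver):
        cls = type(receiver)
        method = self.cache.get(cls)
        if method is None:                  # cache miss: full lookup
            method = getattr(cls, self.selector)
            self.cache[cls] = method        # remember for next time
        return method(receiver)             # still an indirect call;
                                            # true inlining would also
                                            # copy the body in here

site = CallSite("speak")
print(site.call(Dog()), site.call(Cat()), site.call(Dog()))
```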



random blog post has a list of some things you need to learn to make a language:

" To define a language today, you need to know lexing and parsing/context free grammars as well as tools to generate lexers and parsers. You need to understand abstract syntax trees, type systems (including inference), intermediate representations (e.g. LLVM), assembly languages (x86, WebAssembly, ...), optimizations, interpreters, JIT compilers, and so on "


" Computer scientists should understand generative programming (macros); lexical (and dynamic) scope; closures; continuations; higher-order functions; dynamic dispatch; subtyping; modules and functors; and monads as semantic concepts distinct from any specific syntax. " [13]


this class looks good:


" Of Modern Languages

We've also come a long way in language design and implementation. Compilers, once slow, have gotten faster and smarter. Virtual machines like the JVM, JavaScript and the CLR are becoming widely used deployment targets. The ML and Haskell families of languages have introduced us to concepts of real abstract types and abstract effects which can be used to build programs coupled only by the abstract properties of the data being consumed, generated and effects produced. Type inference is even making such fancy behavior manageable by mere mortals, while providing language implementations with more and more information with which to perform both program level optimization and micro-optimization not possible in traditional naive lisps. " -- [14]


the "Expression Problem":

Basically, if you structure the control flow in object-oriented style (or Church encoding...) then it's easy to extend your program with new "classes", but if you want to add a new "method" then you must go back and rewrite all your classes. On the other hand, if you use if-statements (or switch or pattern matching...) then it's hard to add new "classes" but very easy to add new "methods".

I'm a bit disappointed that this isn't totally common knowledge by now. I think it's because, until recently, pattern matching and algebraic data types (a more robust alternative to switch statements) were a niche functional programming feature, and because "expression problem" is not a very catchy name.
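A minimal sketch of the two organizations (my own hypothetical shapes example):

```python
# Organize by class: adding a new shape is one new class, but adding a
# new operation means editing every existing class.
class Circle:
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r ** 2

class Square:
    def __init__(self, s): self.s = s
    def area(self): return self.s ** 2

# Organize by operation (the if-statement / pattern-matching side):
# adding a new operation is one new function, but adding a new shape
# means editing every existing function.
def perimeter(shape):
    if isinstance(shape, Circle):
        return 2 * 3.14159 * shape.r
    elif isinstance(shape, Square):
        return 4 * shape.s

print(Square(3).area(), perimeter(Square(3)))   # 9 12
```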

userbinator 34 days ago [-]

Another alternative is "table-oriented programming", where you define the "classes" and "methods" as an m-by-n structure of code pointers; to add either "methods" or "classes", you would just add a new row/column to the table along with the appropriate code definitions.

and because "expression problem" is not a very catchy name.

It's also not particularly descriptive either, but the page mentions that it's a form of "cross-cutting concern", to which the table-oriented approach basically says "do not explicitly separate the concerns."

(More discussion and an article on that approach here: )

As a bit of a fun fact, doing table-oriented stuff in C is one of the few actual uses for a triple-indirection. :-)
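A sketch of the table-oriented idea (mine, using Python dicts of functions where C would use an m-by-n array of code pointers): adding a "method" adds a row, adding a "class" adds an entry to each row.

```python
# operations x types, as an explicit table of functions
table = {
    "area": {
        "circle": lambda s: 3.14159 * s["r"] ** 2,
        "square": lambda s: s["side"] ** 2,
    },
    "perimeter": {
        "circle": lambda s: 2 * 3.14159 * s["r"],
        "square": lambda s: 4 * s["side"],
    },
}

def send(op, shape):
    # one lookup for the row, one for the column, then the call itself
    # (the three indirections the comment alludes to)
    return table[op][shape["kind"]](shape)

sq = {"kind": "square", "side": 3}
print(send("area", sq), send("perimeter", sq))   # 9 12
```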


calling a procedure and passing a continuation is just like in assembly when you call something by pushing your return address onto the stack and then jumping to it
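A sketch of that analogy in continuation-passing style (my example): the continuation `k` plays the role of the pushed return address.

```python
def add(a, b, k):          # k is the continuation ("return address")
    k(a + b)               # "jump back" by calling k with the result

def double(n, k):
    k(n * 2)

# Compute double(add(1, 2)) in continuation-passing style:
add(1, 2, lambda s: double(s, print))   # prints 6
```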



anatoly 3 hours ago [-]

I think most advice to "learn C" really aims to get you to learn how things work closer to the metal. What actually becomes of your code and data, broadly speaking, once your high-level language's interpreter or compiler have had their way with them.

So things like:

"Learn C" is just a useful way to force you to internalize all of the above, because you can't properly "learn C" without doing that. But it's the above that helps you back in your favorite language. That, and perhaps the fact that C gives you a feeling what it's like when you can look at a line of source code and understand immediately what happens in the machine (broadly) when executing it. No hidden effects. C++ doesn't have that (constructors you don't know about when looking at the line, exceptions etc.) That "local clarity" isn't the most important thing in the world, but if you feel and appreciate it, perhaps you'll strive for local clarity back in your favorite language, too.

"


pmahoney 1 hour ago [-]

There are two distinct features: immutability and single-assignment. Erlang is famous for single-assignment, and also happens to have largely immutable values, but they are not the same thing.

Immutability prevents things like in-place appending to an array, or in-place modification of a string.

Single-assignment means that the value bound to "someVariable" cannot be changed. E.g. `someVariable = new String("hello"); someVariable = new String("goodbye");` is illegal. But it still may be possible to mutate the value `someVariable.substitute("hello", "goodbye")` if the language allows mutation.
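Python has neither restriction, but the two axes the comment separates can still be demonstrated independently (my sketch):

```python
# Rebinding, which single-assignment forbids, is legal in Python:
name = "hello"
name = "goodbye"     # a second assignment to the same variable

# Mutation, which immutability forbids, depends on the value's type:
xs = [1, 2]
xs.append(3)         # lists are mutable: in-place modification works
print(xs)            # [1, 2, 3]

t = (1, 2)
try:
    t[0] = 9         # tuples are immutable: this raises TypeError
except TypeError:
    print("tuples cannot be mutated in place")
```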



a principled argument (by Dijkstra) for 0-based indexing (and for half-open ranges, i.e. an exclusive upper bound) (i.e. like Python does it):

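The conveniences Dijkstra argues for fall out of Python's `range` directly (my sketch):

```python
lo, mid, hi = 3, 6, 9

# Half-open [lo, hi): the length needs no +1/-1 fudge...
assert len(range(lo, hi)) == hi - lo

# ...and adjacent ranges concatenate with no overlap and no gap:
assert list(range(lo, mid)) + list(range(mid, hi)) == list(range(lo, hi))

# With 0-based indexing, the first n elements are exactly [0, n),
# so splitting a sequence at any index is seamless:
xs = ["a", "b", "c", "d"]
print(xs[0:2] + xs[2:4])   # ['a', 'b', 'c', 'd']
```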

"...a language is not so much characterized by what it allows to program, but more so by what it prevents from being expressed." -- Niklaus Wirth's "Good Ideas, Through the Looking Glass"


" OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them." -- Alan Kay


kazinator 6 days ago [-]

In computer science, "formal parameters" is the name for those named variables that are established on entry into the function and immediately receive external values. "arguments" are the values that they receive. A function has only one set of parameters, but a new set of arguments in each invocation.


kentor 6 days ago [-]

My view is you define methods with parameters, you call methods with arguments.

`ArgumentError` is consistent with an error raised at call time.


pbiggar 6 days ago [-]

Don't know if this applies, but my understanding is that in functions, a parameter is the name of a declaration which, when called, will receive an argument.


steveklabnik 6 days ago [-]

On a super pedantic level, "parameters" are the names that you write in the function definition, and "arguments" are the values you pass as parameters.

  def name_length(person)
  steve =
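My own minimal example of the distinction (not a reconstruction of the truncated snippet above):

```python
def name_length(person):        # "person" is the parameter
    return len(person)

print(name_length("steve"))     # "steve" is the argument; prints 5
```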

--- (note; i already read the first few comments there but NOT the rest)