proj-oot-ootNotes31

[1]

" Things Move

The biggest difference between Rust and C++ for me is the address-of operator (&). In C++ (like C) that thing just returns the address of whatever it's applied to, and while the language might put some restrictions on when doing so is a good idea, there is generally nothing stopping you from taking the address of a value and then using it.

In Rust this is just usually not useful. First of all the moment you take a reference in Rust the borrow checker looms over your code and prevents you from doing anything stupid. More importantly however is that even if it's safe to take a reference it's not nearly as useful as you might think. The reason for this is that objects in Rust generally move around.

Just take how objects are typically constructed in Rust:

struct Point {
    x: u32,
    y: u32,
}

impl Point {
    fn new(x: u32, y: u32) -> Point {
        Point { x, y }
    }
}

Here the new method (not taking self) is a static method on the implementation. It also returns Point by value. This is generally how values are constructed. Because of this, taking a reference inside the function does not do anything useful, as the value is potentially moved to a new location on return. This is very different from how this whole thing works in C++:

struct Point {
    uint32_t x;
    uint32_t y;
};

Point::Point(uint32_t x, uint32_t y) {
    this->x = x;
    this->y = y;
}

A constructor in C++ is already operating on an allocated piece of memory. Before the constructor even runs something already provided the memory where this points to (typically either somewhere on the stack or through the new operator on the heap). This means that C++ code can generally assume that an instance does not move around. It's not uncommon that C++ code does really stupid things with the this pointer as a result (like storing it in another object).

This difference might sound very minor but it's one of the most fundamental ones and it has huge consequences for Rust programmers. In particular it is one of the reasons you cannot have self-referential structs. While there is talk about expressing types that cannot be moved in Rust, there is no reasonable workaround for this at the moment (the future direction is the pinning system from RFC 2349). "
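(A minimal sketch of the 'things move' point, not from the quoted post: printing the address of the value inside new and in the caller usually shows two different locations, though the compiler is free to elide some moves.)

struct Point {
    x: u32,
    y: u32,
}

impl Point {
    fn new(x: u32, y: u32) -> Point {
        let p = Point { x, y };
        println!("inside new: {:p}", &p);
        p // the value is moved out to the caller
    }
}

fn main() {
    let p = Point::new(1, 2);
    println!("in caller:  {:p}", &p); // usually a different address
}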

" Refcounts are not Dirty

Another quite interesting case that is surprisingly easy to run into also has to do with the borrow checker. The borrow checker doesn't let you do stupid things with data you do not own and sometimes that can feel like running into a wall because you think you know better. In many of those cases the answer is just one Rc<T> away however.

To make this less mysterious let's look at the following piece of C++ code:

thread_local struct {
    bool debug_mode;
} current_config;

int main() {
    current_config.debug_mode = true;
    if (current_config.debug_mode) {
        // do something
    }
}

This seems pretty innocent but it has a problem: nothing stops you from borrowing a field from current_config and then passing it somewhere else. This is why in Rust the direct equivalent of that looks significantly more complicated:

#[derive(Default)]
struct Config {
    pub debug_mode: bool,
}

thread_local! { static CURRENT_CONFIG: Config = Default::default(); }

fn main() {
    CURRENT_CONFIG.with(|config| {
        // here we can *immutably* work with config
        if config.debug_mode {
            // do something
        }
    });
}

This should make it immediately obvious that this API is not fun. First of all the config is immutable. Secondly we can only access the config object within the closure passed to the with function. Any attempt to borrow from this config object and have it outlive the closure will fail (probably with something like “cannot infer an appropriate lifetime”). There is no way around it!

This API is clearly objectively bad. Imagine we want to look up more of those thread local variables. So let's look at both of those issues separately. As hinted above ref counting is generally a really nice solution to deal with the underlying issue here: it's unclear who the owner is.

Let's imagine for a second this config object just happens to be bound to the current thread but is not really owned by the current thread. What happens if the config is passed to another thread but the current thread shuts down? This is a typical example where one can think of the config as logically having multiple owners. Since we might want to pass it from one thread to another we want an atomically reference counted wrapper for our config: an Arc<Config>. This lets us increase the refcount in the with block and return it. The refactored version looks like this:

use std::sync::Arc;

#[derive(Default)]
struct Config {
    pub debug_mode: bool,
}

impl Config {
    pub fn current() -> Arc<Config> {
        CURRENT_CONFIG.with(|c| c.clone())
    }
}

thread_local! { static CURRENT_CONFIG: Arc<Config> = Arc::new(Default::default()); }

fn main() {
    let config = Config::current();
    // here we can *immutably* work with config
    if config.debug_mode {
        // do something
    }
}

The change here is that now the thread local holds a reference counted config. As such we can introduce a function that returns an Arc<Config>. In the closure from the TLS we increment the refcount with the clone() method on the Arc<Config> and return it. Now any caller to Config::current gets that refcounted config and can hold on to it for as long as necessary. For as long as there is code holding the Arc, the config within it is kept alive. Even if the originating thread died. "
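(A small usage sketch, not from the post: the whole point of the Arc handle is that it can be cloned and moved to another thread and keeps the config alive on its own; the TLS lookup from above is omitted here.)

use std::sync::Arc;
use std::thread;

struct Config {
    pub debug_mode: bool,
}

fn main() {
    let config = Arc::new(Config { debug_mode: true });

    // clone() bumps the refcount; the new handle moves into the worker thread
    let worker = {
        let config = Arc::clone(&config);
        thread::spawn(move || {
            if config.debug_mode {
                // do something on the worker thread
            }
        })
    };

    worker.join().unwrap();
    // the original handle is still valid here, independent of the worker
}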

" Kill all Setters

But the above pattern of effectively having Arc<RwLock<Config>> can be a bit problematic and swapping it for RwLock<Arc<Config>> can be significantly better.

Rust done well is a liberating experience because if programmed well it's shockingly easy to parallelize your code after the fact. Rust encourages immutable data and that makes everything so much easier. However in the previous example we just introduced interior mutability. Imagine we have multiple threads running, all referencing the same config but one flips a flag. What happens to concurrently running code that now is not expecting the flag to randomly flip? Because of that interior mutability should be used carefully. Ideally an object once created does not change its state in such a way. In general I think such a type of setter should be an anti pattern.

So instead of doing this, what if we take a step back to where we were earlier, where configs were not mutable? What if we never mutate the config after we created it, but we add an API to promote another config to current? This means anyone who is currently holding on to a config can safely know that the values won't change. "
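(A sketch of what that promote-to-current API could look like, assuming a process-wide slot guarded by RwLock<Arc<Config>> via std's OnceLock; this is my guess at the shape, not the post's actual code.)

use std::sync::{Arc, OnceLock, RwLock};

#[derive(Default)]
struct Config {
    pub debug_mode: bool,
}

// One shared slot holding the *current* config. The Config inside the Arc is
// never mutated; "changing the config" means swapping in a new Arc.
fn current_slot() -> &'static RwLock<Arc<Config>> {
    static SLOT: OnceLock<RwLock<Arc<Config>>> = OnceLock::new();
    SLOT.get_or_init(|| RwLock::new(Arc::new(Config::default())))
}

impl Config {
    pub fn current() -> Arc<Config> {
        current_slot().read().unwrap().clone()
    }

    // Promote a new config to current; anyone already holding an Arc keeps
    // seeing their old, unchanged snapshot.
    pub fn make_current(self) {
        *current_slot().write().unwrap() = Arc::new(self);
    }
}

fn main() {
    let new_config = Config { debug_mode: true };
    new_config.make_current();

    let config = Config::current();
    if config.debug_mode {
        // do something
    }
}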

---

not relevant but just putting this here in case i need it:

some black magic in C relating to checking if something is a constant:

https://news.ycombinator.com/item?id=16719831

---

netheril96 2 days ago [-]

I often find it amusing how C advocates complain about the complexity of C++, and then proceed to implement the same complex functionality in even more brittle ways. Language features I have seen C developers emulate poorly in C: constexpr (here), virtual functions (with function pointers), templates (with macros), exceptions (with setjmp/longjmp).

reply

---

tootie 30 minutes ago [-]

I used to love Perl until the world told me to stop because the syntax was too noisy. Now I'm embedding JavaScript expressions in CSS in JSX in ES6 and being told this is an evolved state.

reply

---

fp

https://hackernoon.com/two-years-of-functional-programming-in-javascript-lessons-learned-1851667c726

(...s omitted; i think this is my new convention for informal notes)

" Fortunately, there are excellent publications to start with. The most influential readings for me were:

    Mostly Adequate Guide to Functional Programming
    Thinking in Ramda
    Professor Frisby Introduces Composable Functional JavaScript
    Functors, Applicatives, And Monads In Pictures

https://mostly-adequate.gitbooks.io/mostly-adequate-guide/ http://randycoulman.com/blog/categories/thinking-in-ramda/ https://egghead.io/courses/professor-frisby-introduces-composable-functional-javascript http://adit.io/posts/2013-04-17-functors,_applicatives,_and_monads_in_pictures.html

Pointless madness

One of the first unusual concepts you learn when starting to explore FP is tacit programming also known as point-free style or (ironically) pointless coding.

The basic idea is omitting function argument names or, to be more precise, omitting arguments at all:

export const snapNodeSizeToSlots = R.compose( nodeSizeInSlotsToPixels, pointToSize, nodePositionInPixelsToSlots, offsetPoint({ x: WIDTH * 0.75, y: HEIGHT * 1.1 }), sizeToPoint );

That’s a typical function definition which is entirely made with a composition of other functions. It has no input arguments declared although a call will require them. Even without a context, you can understand the function acts as some conveyor belt taking a size and producing some pixel coordinates. To learn concrete details, you dig into functions comprising the composition. They, in turn, might be a composition of other functions, and so on.

That’s a very powerful technique until you lift it to the point of absurd. When we started using FP tricks aggressively, we took the problem of converting everything to point-free as a puzzle we have to solve again and again:

Instead of

    const format = (actual, expected) => {
      const variants = expected.join(', ');
      return `Value ${actual} is not expected here. Possible variants are: ${variants}`;
    };

you write

    const format = R.converge(
      R.unapply(R.join(' ')),
      [
        R.always("Value"),
        R.nthArg(0),
        R.always("is not expected here. Possible variants are:"),
        R.compose(R.join(', '), R.nthArg(1)),
      ]
    );

Argh, what? You’re a cool guy, you’ve solved it. Share the puzzle with others on the code review.

Next, you learn monads and purity. OK, my functions can't have any side effects from now on. They can't refer to this (that's fine), they can't refer to time and random (o-o-ok), they can't refer to anything other than the arguments they are given, not even the global string constants, not even the math Pi. You carry the necessary args, factories, and generators from the outermost function through the nesting chain down to the internals, you explode the signatures, and then you learn the Reader or State monad. Ouch, you infect all your code with sporadic monadic maps and chains, and the bowl of spaghetti is ready!

So, combinators! What funny beasts. Oh, the Y-combinator is not only a startup accelerator but a recursion replacement. Let's use it the next time I come across a problem trivially solvable by recursion or a simple `reduce` call.

Point #2. Functional programming is not about lambda calculus, monads, morphisms, and combinators. It’s about having many small well-defined composable functions without mutations of global state, their arguments, and IO.

In other words, if point-free style helps to communicate better in a particular case, use it. Otherwise, don’t. Don’t use monads because you can, use them when they precisely solve a problem. BTW, do you know that an Array and Promise are monads? If not, it does not stop you from applying them correctly. You should train your intuition to an extent when you understand what monad is required or, better, it is not required at all. It comes with practice, don’t overuse new stuff until you reason about it comfortably.

Alone, switching to small composable functions without side-effects where possible will give you most of the benefits. Start with it.

One aspect of switching to FP style used to annoy me a lot. In classical JS you have at least two options to show an error:

    Return null/undefined instead of a result
    Throw an exception

When you pick up FP, you still have these options and as a bonus get Either and Maybe monads. How should I handle errors now? What should the public API of my lib look like?

Point #3. Don’t be afraid of error handling through Maybes and Eithers. This couple is your best acquisition in the monadic world.

Take a look at the excellent Railway oriented programming pattern for the aha-moment. Use Maybes in your public API and, if you're afraid you won't be understood, provide thin-wrapper satellites with suffixes like `Unsafe`, `Nullable`, `Exc` for consumption by imperative JS.

https://fsharpforfunandprofit.com/rop/
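(Not from the article, which is JS/Ramda; a rough Rust analogue of the railway idea, where Option and Result play the roles of Maybe and Either; the function names here are made up.)

// Railway-style error handling: each step returns a Result, and and_then
// keeps the computation on the success track or short-circuits to the
// failure track.
fn parse_port(s: &str) -> Result<u16, String> {
    s.trim().parse::<u16>().map_err(|e| format!("not a port: {e}"))
}

fn check_range(port: u16) -> Result<u16, String> {
    if port >= 1024 {
        Ok(port)
    } else {
        Err("reserved port".to_string())
    }
}

fn main() {
    match parse_port(" 8080 ").and_then(check_range) {
        Ok(port) => println!("using port {port}"),
        Err(msg) => println!("error: {msg}"),
    }
}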

Functional programming is so different from the mainstream that the mainstream-targeted tools you're using will stop working.

Flow and Typescript fail to work correctly because it’s hard for them to express all that currying and argument polymorphism. Although there’re bindings for Ramda, for example, they often give you false alarms, and when there’s indeed an error, the message is very cryptic and unclear.

Code coverage and breakpoints also break. FP code is more like CSS than JS. Take a look at XOD sources. Does it make much sense to place a breakpoint in CSS and execute it step-by-step? What's the coverage of a CSS file? Of course, the effect is not 100%. At the places where you switch back from declarative to imperative style, these tools still work; but now your code is fragmented from the devtools' point of view and the experience changes wildly.

Point #5. Once you touch FP you will be unhappy and angry. I experienced the same emotion as when I switched from Windows to Linux and understood that both suck and I have no way to undo the knowledge. The same with a switch from a full-blown IDE to Vim. Hope you understand the idea.

Can we take the best of both worlds? Get functional programming without madness and an excellent developer experience at the same time? I think so. There are other JS-targeted languages that are functional from the very beginning: Elm, PureScript, OCaml (BuckleScript), ReasonML. "

---

stephen 1 day ago [-]

...ships in 2020?

I'm impressed with what they're doing (and love the language), and it's hard work, and their speed is faster than, say, Java's dead-pace evolution in the 2000s.

But, poking around at TypeScript, I've been blown away with the MS/TS speed of development. ~2-3 month release cycles, with non-trivial changes to the language (mapped types, conditional types, etc.), that are themselves unique/novel type system features, not like Java finally getting around to copying the obvious/best-practice approach to case/data classes.

Granted, I'm sure the MS/TypeScript budget is huge comparatively...

Seems like that's the "best" (realistic) way for dev tooling to evolve lately: come from, or attach yourself to, a ~top-5 tech company that makes their money somewhere else and can bankroll developer tools/languages (e.g. MS with TS, FB with React/Flow, Google with a myriad of things e.g. Dart & AdWords, Go).

Bringing it back to Scala, seems like they were close to this (flirted with adoption/sponsorship from a few startups like FourSquare, etc.) but, thinking about it now, "software development in the large" is a huge concern for those companies (e.g. anyone with enough extra money to actually pay for language/tooling improvements), and that's never really been Scala's strong suit (compile times, painless compiler upgrades).

reply

munificent 1 day ago [-]

TypeScript has a comparatively easier job because the type system is unsound. They can add new type features without needing to be paranoid about a hole in the system or a usability pitfall because the tools don't rely on types for correctness or optimization, and because users can easily cast to "any" or otherwise work around an annoyance.

When you want the whole system to actually deeply rely on the static type system, it needs to be much more robust, and that takes a lot of effort.

I'm not trying to minimize what the TypeScript folks are doing — it's a really nice, well-designed language. But the complexity required to evolve an optionally-typed language is closer to that of a dynamically-typed one than a statically-typed one.

reply

---

 chatmasta 2 days ago [-]

As someone who hasn't worked with Jupyter beyond a simple iPython shell, this comment sent me down a bit of a rabbit hole. If anyone's interested, here are some links.

Jupyter docs for making a kernel: http://jupyter-client.readthedocs.io/en/latest/kernels.html

List of jupyter kernels: https://github.com/jupyter/jupyter/wiki/Jupyter-kernels

Jupyter seems like a cleanly designed piece of software that is easy to integrate with. I like the use of ZeroMQ for the wire protocol.

reply

---

Lox

Pedagogical language from Crafting Interpreters

[2]

C-style syntax. Dynamic. Automatic memory management.

Types: booleans, numbers, strings, nil

Statements and expressions.

Expressions:

+, -, *, /, also - (as arithmetic negation)

< <= > >= == !=

!, 'and', 'or' ('and' and 'or' are short-circuiting and so are also control flow operators)

statements:

'print'

blocks {}

grouping by parens ()

variable declaration with 'var'

variable assignment with '='

if, while, for

functions. Define functions with 'fun' keyword. Functions are first-class. Closures.

classes with (single-inheritance) subclasses, fields, methods, 'this', 'init' methods, 'super'. The syntax for class declaration is the 'class' keyword. The syntax for construction is to just use the class name, e.g. "var breakfast = Breakfast();". The syntax for subclasses is '<'.

class Brunch < Breakfast { drink() { print "How about a Bloody Mary?"; } }

stdlib: 'print', 'clock'

Lox has statements and expressions, but note that for languages where there are no statements and 'everything is an expression':

" for each “statement-like” construct in the language, you need to decide what value it evaluates to. Some of those are easy:

    An if expression evaluates to the result of whichever branch is chosen. Likewise, a switch or other multi-way branch evaluates to whichever case is picked.
    A variable declaration evaluates to the value of the variable.
    A block evaluates to the result of the last expression in the sequence.

Some get a little stranger. What should a loop evaluate to? A while loop in CoffeeScript evaluates to an array containing each element that the body evaluated to. That can be handy, or a waste of memory if you don't need the array.

You also have to decide how these statement-like expressions compose with other expressions — you have to fit them into the grammar’s precedence table. For example, Ruby allows:

puts 1 + if true then 2 else 3 end + 4

...

Turning every statement into an expression forces you to answer a few hairy questions like that. In return, you eliminate some redundancy. C has both blocks for sequencing statements, and the comma operator for sequencing expressions. It has both the if statement and the ?: conditional operator. If everything was an expression in C, you could unify each of those.

Languages that do away with statements usually also feature implicit returns – a function automatically returns whatever value its body evaluates to without need for some explicit return syntax. For small functions and methods, this is really handy. In fact, many languages that do have statements have added syntax like => to be able to define functions whose body is the result of evaluating a single expression. "
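(Rust, not Lox, happens to be a language of this expression-oriented kind, so here is a tiny sketch of what the quoted choices look like in practice; parentheses added around the if purely for readability.)

fn describe(n: i32) -> &'static str {
    // no explicit return: the function evaluates to its body's last expression
    if n % 2 == 0 { "even" } else { "odd" }
}

fn main() {
    // `if` is an expression, so it composes with arithmetic (cf. the Ruby example)
    let x = 1 + (if true { 2 } else { 3 }) + 4;

    // a block is an expression evaluating to its last expression
    let label = {
        let doubled = x * 2;
        doubled.to_string()
    };

    println!("{x} {label} {}", describe(x));
}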

token types:

" enum TokenType? { Single-character tokens. LEFT_PAREN, RIGHT_PAREN, LEFT_BRACE, RIGHT_BRACE, COMMA, DOT, MINUS, PLUS, SEMICOLON, SLASH, STAR,

  // One or two character tokens.
  BANG, BANG_EQUAL,
  EQUAL, EQUAL_EQUAL,
  GREATER, GREATER_EQUAL,
  LESS, LESS_EQUAL,
  // Literals.
  IDENTIFIER, STRING, NUMBER,
  // Keywords.
  AND, CLASS, ELSE, FALSE, FUN, FOR, IF, NIL, OR,
  PRINT, RETURN, SUPER, THIS, TRUE, VAR, WHILE,
  EOF
} " -- [3]
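(The book's scanner is written in Java; purely as an illustrative sketch in Rust, with made-up names, the single-character tokens above reduce to one match on the next character.)

#[derive(Debug)]
enum TokenType {
    LeftParen, RightParen, LeftBrace, RightBrace,
    Comma, Dot, Minus, Plus, Semicolon, Slash, Star,
}

fn scan_single(c: char) -> Option<TokenType> {
    use TokenType::*;
    match c {
        '(' => Some(LeftParen),
        ')' => Some(RightParen),
        '{' => Some(LeftBrace),
        '}' => Some(RightBrace),
        ',' => Some(Comma),
        '.' => Some(Dot),
        '-' => Some(Minus),
        '+' => Some(Plus),
        ';' => Some(Semicolon),
        '/' => Some(Slash),
        '*' => Some(Star),
        _ => None, // two-char operators, literals and keywords need more lookahead
    }
}

fn main() {
    for c in "(1 + 2) * 3;".chars() {
        if let Some(tok) = scan_single(c) {
            println!("{:?}", tok);
        }
    }
}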

---

https://queue.acm.org/detail.cfm?id=3212479

" A modern Intel processor has up to 180 instructions in flight at a time (in stark contrast to a sequential C abstract machine, which expects each operation to complete before the next one begins). A typical heuristic for C code is that there is a branch, on average, every seven instructions. If you wish to keep such a pipeline full from a single thread, then you must guess the targets of the next 25 branches.

...

On a modern high-end core, the register rename engine is one of the largest consumers of die area and power. To make matters worse, it cannot be turned off or power gated while any instructions are running, which makes it inconvenient in a dark silicon era when transistors are cheap but powered transistors are an expensive resource.

...

Consider another core part of the C abstract machine's memory model: flat memory. This hasn't been true for more than two decades. A modern processor often has three levels of cache in between registers and main memory, which attempt to hide latency.

...

For example, in C, processing a large amount of data means writing a loop that processes each element sequentially. To run this optimally on a modern CPU, the compiler must first determine that the loop iterations are independent. The C restrict keyword can help here. It guarantees that writes through one pointer do not interfere with reads via another (or if they do, that the programmer is happy for the program to give unexpected results). This information is far more limited than in a language such as Fortran, which is a big part of the reason that C has failed to displace Fortran in high-performance computing.

Once the compiler has determined that loop iterations are independent, then the next step is to attempt to vectorize the result, because modern processors get four to eight times the throughput in vector code that they achieve in scalar code. A low-level language for such processors would have native vector types of arbitrary lengths. LLVM IR (intermediate representation) has precisely this, because it is always easier to split a large vector operation into smaller ones than to construct larger vector operations.

Optimizers at this point must fight the C memory layout guarantees. C guarantees that structures with the same prefix can be used interchangeably, and it exposes the offset of structure fields into the language. This means that a compiler is not free to reorder fields or insert padding to improve vectorization (for example, transforming a structure of arrays into an array of structures or vice versa). That's not necessarily a problem for a low-level language, where fine-grained control over data structure layout is a feature, but it does make it harder to make C fast.

C also requires padding at the end of a structure because it guarantees no padding in arrays. Padding is a particularly complex part of the C specification and interacts poorly with other parts of the language. For example, you must be able to compare two structs using a type-oblivious comparison (e.g., memcmp), so a copy of a struct must retain its padding. In some experimentation, a noticeable amount of total runtime on some workloads was found to be spent in copying padding (which is often awkwardly sized and aligned).

...

The second optimization, loop unswitching, transforms a loop containing a conditional into a conditional with a loop in both paths. This changes flow control, contradicting the idea that a programmer knows what code will execute when low-level language code runs. It can also cause significant problems with C's notions of unspecified values and undefined behavior.

In C, a read from an uninitialized variable is an unspecified value and is allowed to be any value each time it is read. This is important, because it allows behavior such as lazy recycling of pages: for example, on FreeBSD the malloc implementation informs the operating system that pages are currently unused, and the operating system uses the first write to a page as the hint that this is no longer true. A read to newly malloced memory may initially read the old value; then the operating system may reuse the underlying physical page; and then on the next write to a different location in the page replace it with a newly zeroed page. The second read from the same location will then give a zero value.

If an unspecified value for flow control is used (for example, using it as the condition in an if statement), then the result is undefined behavior: anything is allowed to happen. Consider the loop-unswitching optimization, this time in the case where the loop ends up being executed zero times. In the original version, the entire body of the loop is dead code. In the unswitched version, there is now a branch on the variable, which may be uninitialized. Some dead code has now been transformed into undefined behavior. This is just one of many optimizations that a close investigation of the C semantics shows to be unsound.

...

on the PDP-11, where each C expression mapped trivially to one or two instructions. Similarly, the compiler performed a straightforward lowering of local variables to stack slots and mapped primitive types to things that the PDP-11 could operate on natively.

...

several issues about the comprehensibility of C. For example, C permits an implementation to insert padding into structures (but not into arrays) to ensure that all fields have a useful alignment for the target. If you zero a structure and then set some of the fields, will the padding bits all be zero? According to the results of the survey, 36 percent were sure that they would be, and 29 percent didn't know. Depending on the compiler (and optimization level), it may or may not be.

This is a fairly trivial example, yet a significant proportion of programmers either believe the wrong thing or are not sure. When you introduce pointers, the semantics of C become a lot more confusing. The BCPL model was fairly simple: values are words. Each word is either some data or the address of some data. Memory is a flat array of storage cells indexed by address.

The C model, in contrast, was intended to allow implementation on a variety of targets, including segmented architectures (where a pointer might be a segment ID and an offset) and even garbage-collected virtual machines. The C specification is careful to restrict valid operations on pointers to avoid problems for such systems. The response to Defect Report 260 included the notion of pointer provenance in the definition of pointer:

    "Implementations are permitted to track the origins of a bit pattern and treat those representing an indeterminate value as distinct from those representing a determined value. They may also treat pointers based on different origins as distinct even though they are bitwise identical."

Unfortunately, the word provenance does not appear in the C11 specification at all, so it is up to compiler writers to decide what it means. GCC (GNU Compiler Collection) and Clang, for example, differ on whether a pointer that is converted to an integer and back retains its provenance through the casts. Compilers are free to determine that two pointers to different malloc results or stack allocations always compare as not-equal, even when a bitwise comparison of the pointers may show them to describe the same address.

These misunderstandings are not purely academic in nature. For example, security vulnerabilities have been observed from signed integer overflow (undefined behavior in C) and from code that dereferenced a pointer before a null check, indicating to the compiler that the pointer could not be null because dereferencing a null pointer is undefined behavior in C and therefore can be assumed not to happen (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-1897).

...

We have a number of examples of designs that have not focused on traditional C code to provide some inspiration. For example, highly multithreaded chips, such as Sun/Oracle's UltraSPARC Tx series, don't require as much cache to keep their execution units full. Research processors have extended this concept to very large numbers of hardware-scheduled threads. The key idea behind these designs is that with enough high-level parallelism, you can suspend the threads that are waiting for data from memory and fill your execution units with instructions from others. The problem with such designs is that C programs tend to have few busy threads.

ARM's SVE (Scalable Vector Extension)—and similar work from Berkeley—provides another glimpse at a better interface between program and hardware. Conventional vector units expose fixed-sized vector operations and expect the compiler to try to map the algorithm to the available unit size. In contrast, the SVE interface expects the programmer to describe the degree of parallelism available and relies on the hardware to map it down to the available number of execution units. Using this from C is complex, because the autovectorizer must infer the available parallelism from loop structures. Generating code for it from a functional-style map operation is trivial: the length of the mapped array is the degree of available parallelism.

Caches are large, but their size isn't the only reason for their complexity. The cache coherency protocol is one of the hardest parts of a modern CPU to make both fast and correct. Most of the complexity involved comes from supporting a language in which data is expected to be both shared and mutable as a matter of course. Consider in contrast an Erlang-style abstract machine, where every object is either thread-local or immutable (Erlang has a simplification of even this, where there is only one mutable object per thread).

...

Immutable objects can simplify caches even more, as well as making several operations even cheaper. Sun Labs' Project Maxwell noted that the objects in the cache and the objects that would be allocated in a young generation are almost the same set. If objects are dead before they need to be evicted from the cache, then never writing them back to main memory can save a lot of power. Project Maxwell proposed a young-generation garbage collector (and allocator) that would run in the cache and allow memory to be recycled quickly. With immutable objects on the heap and a mutable stack, a garbage collector becomes a very simple state machine that is trivial to implement in hardware and allows for more efficient use of a relatively small cache.

A processor designed purely for speed, not for a compromise between speed and C support, would likely support large numbers of threads, have wide vector units, and have a much simpler memory model. "
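(a quick note on the padding paragraph above: Rust with #[repr(C)] follows the same C layout rules, so the effect is easy to see; sizes below assume a typical target where u32 has 4-byte alignment)

// Fields keep declaration order, alignment inserts padding between fields,
// and the struct gets tail padding so that array elements stay aligned.
#[allow(dead_code)]
#[repr(C)]
struct S {
    a: u8,  // 1 byte, then 3 bytes of padding so `b` is 4-aligned
    b: u32, // 4 bytes
    c: u8,  // 1 byte, then 3 bytes of tail padding for the array stride
}

fn main() {
    println!("size  = {}", std::mem::size_of::<S>());  // typically 12, not 6
    println!("align = {}", std::mem::align_of::<S>()); // typically 4
}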

---

notes on the previous:

---

relevant parts of my notes on Hickey's talk [[notes-computers-programming-programmingLanguageDesign-simpleMadeEasy]]:

environmental complexity

stuff like resources, memory, CPU. Inherent in implementation space. e doesn't know a good way to simplify this (you can do stuff like preallocate memory, segment memory, but that's wasteful). The main problem is that the policies around this stuff don't compose; eg you can't say 'i'll just size my threadpool to the number of cores' many times in the same program.

information is simple, don't ruin it

"information: it is simple. The only thing you can possibly do with information is ruin it. :) "

"

" Objects were made to encapsulate IO devices, so there's a screen, but I can't touch the screen, so I have the object...That's all they're good for. They were never supposed to be applied to information. And we apply them to information that's just wrong....It's wrong because it's complex...it ruins your ability to build generic data manipulation things. If you leave data alone, you can build things once that manipulate data, and you can reuse them all over the place, and you know they're right once and you're done.

The other thing about it, which also applies to ORM is that it will tie your logic to representational things, which again tying, complecting, intertwining. So represent data is data. Please start using maps and sets directly. Don't feel like I have to write a class now because I have a new piece of information. That's just silly. "

specific constructs

MORE COMPLICATED vs LESS COMPLICATED:

There are reasons why you might have to use the more complicated things, but boy, there's no reason why you shouldn't try to start with this less complicated ones.

State vs. Values

Why is state complicated? By definition, it couples value with time (note, this is true even in a non-concurrent program). State couples 'everything that touches it'. Relatedly, when you debug a stateful program, you first have to try to reproduce the state that the program was in when the user encountered the bug, which can be difficult.

How to get values in your language: most languages have something like 'final' or 'val'; in some languages it's harder to find aggregate values; you want to look for 'persistent collections'

Objects vs values

Objects couple state, identity, value.

(see also Information is Simple, above)

Methods vs. Functions, Namespaces

Methods couple function and state, and, in some languages, also namespaces.

why namespaces?:

How to get functions in your language: most languages have functions. "If you don't know what they are, they're like stateless methods :) "

How to get namespaces in your language: unfortunately this requires language support ("something you really need the language to do for you, and unfortunately it's not done very well in a lot of places")

vars vs. Managed refs

(although Managed refs are also complex)

refs and vars are both good because they mean that the language is stateless by default and require you to label where you have state.

Clojure and Haskell's references are superior though, because they "compose values and time". These constructs do 2 things: allow you to extract a value, and provide an abstraction of time.

"That's really important, because that's your path back to simplicity; if I have a way to get out of this thing and get a value out, i can continue with my program. ((but, by contrast,)) If I have to pass that variable to somebody else or a reference to something...i'm ((contaminating)) the rest of my system"

"We saw a picture during a keynote yesterday of this amazing memory where you could dereference an address and get an object out -- I want to get one of those computers! The only thing you can ever get out of a memory address is a word, a scalar... recovering a composite object from an address is not something that computers do...variables have the same problem"

How to get managed refs in your language: Clojure/Haskell refs

Inheritance, switch, matching vs. Polymorphism a la carte

Inheritance, by definition, couples types with other types.

Switch or matching couples "multiple who/what pairs"; they couple "multiple pairs of who's going to do something, and what happens... and they do it all in one place, in a closed way -- that's very bad".

e emphasizes that Polymorphism a la carte is very important.

e later distinguishes between 'interfaces' and protocols/typeclasses and implies that protocols/typeclasses are better than interfaces; presumably e would say that (Java) interfaces are polymorphism but not quite 'a la carte'? Perhaps the difference is that a third-party can provide a protocol-extension/typeclass-implementation for a type, whereas (i think) in Java a class must declare its support for an interface in the class definition?

How to get polymorphism a la carte in your language: clojure protocols, haskell type classes
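(not from the talk: Rust traits are in the same family as Haskell type classes here, in that the behavior is declared separately from the data and implementations can be added per type after the fact, subject to the orphan rule)

struct Circle { r: f64 }
struct Square { side: f64 }

// the "what" is declared independently of any concrete type
trait Area {
    fn area(&self) -> f64;
}

// implementations are supplied a la carte, outside the type definitions
impl Area for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}

impl Area for Square {
    fn area(&self) -> f64 { self.side * self.side }
}

fn main() {
    let shapes: Vec<Box<dyn Area>> = vec![
        Box::new(Circle { r: 1.0 }),
        Box::new(Square { side: 2.0 }),
    ];
    for s in &shapes {
        println!("{}", s.area());
    }
}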

Syntax vs. Data

Syntax couples meaning and order.

"syntax...it's...inferior to data in every way"

"Data is actually really simple; there's not a tremendous number of variations in the essential nature of data. There are maps, there are sets, there are linear sequential things." but "We create hundreds of thousands of variations that have nothing to do with the essence of the stuff", "constructs we put around...and globbed on top of data"

also, "same thing for communications; are we all not glad we don't use the unix method of communicating on the web? Any arbitrary command string can be the argument list for your program and any arbitrary set of characters can come out the other end. ((sarcastic)) Let's all write parsers!"

(see also 'information is simple' above)

Imperative loops, fold vs. Set functions

Imperative loops couple 'what' with 'how'. Folds couple the things being folded with their ordering(?)

How to get set functions in your language: libraries
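(Hickey's examples are Clojure; just as a sketch in Rust of the 'what vs how' contrast, using the standard iterator library)

fn main() {
    let xs = vec![1, 2, 3, 4, 5];

    // imperative loop: ordering and mutation are spelled out (the "how")
    let mut evens_doubled = Vec::new();
    for x in &xs {
        if x % 2 == 0 {
            evens_doubled.push(x * 2);
        }
    }

    // set-function style: filter and map state the "what"
    let evens_doubled_2: Vec<i32> = xs.iter().filter(|x| *x % 2 == 0).map(|x| x * 2).collect();

    assert_eq!(evens_doubled, evens_doubled_2);
}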

Actors vs. Queues

Actors couple 'what' with 'who'

How to get queues in your language: libraries

ORM vs. Declarative data manipulation

ORMs couple a whole bunch of things. (see also Information is Simple, above)

How to get declarative data manipulation in your language: SQL/LINQ/Datalog

Conditionals vs. Rules

Conditionals "have a bunch of rules about what our program's supposed to do strewn throughout the program. Can we fix that?" "Instead of embedding a bunch of conditionals in our raw language at every point of decision, it's nice to sort of gather that stuff and put it over some place else"

How to get rules in your language: libraries or Prolog

Inconsistency vs. Consistency

e means consistency in the concurrency/memory ordering consistency/distributed systems sense, not in the sense of a consistency of a design. Eg e mentions 'eventual consistency' as a form of inconsistency.

How to get consistency in your language: transactions/values

a note on paren syntax

" Is a language built all out of parens simple... is it free of interleaving and braiding? And the answer is no; Common Lisp and Scheme are not simple is this sense, in their use of parens. Because the use of parentheses in those languages is overloaded; parens wrap calls, they wrap grouping, they wrap data structures; and that overloading is a form of complexity... We can fix that; we can just add another data structure, it doesn't make Lisp not Lisp to have more data structures. It's still a language defined in terms of its own data structures, but having more datastructures in play means that we can get rid of this overloading in this case... " -- Rich Hickey, https://www.infoq.com/presentations/Simple-Made-Easy 0:26:03. Note: the slides say more specifically how Clojure addresses this by adding another data structure; they say, "Adding a data structure for grouping, e.g. vectors, makes each simpler"


Newsgroups: comp.compilers -- Date: 4 Jan 2003

Other than Smalltalk, Lisp, Forth and ahem Intercal what is the most simple programming language to create a ((relatively) useful) compiler for?

ML ?

It certainly has a number of features that make compiler writing easier.

Probably Forth, as Smalltalk and Lisp rely on a mildly complex run-time environment (with a GC, at least, and with a class library in the case of Smalltalk). I'd estimate 2000 lines for Forth, 4000 for Lisp (of course that's not Common Lisp!), and at least 30000 for Smalltalk and Common Lisp.

This will probably get me flamed mercilessly, but I still have a soft spot in my heart for good old-fashioned BASIC, line numbers and all. I didn't think much of it when I first encountered it, over thirty years ago now, but, with the benefit of a few decades of experience, I have learned to value something that lets me get results, easily, quickly.

Yes, it is ugly, yes, it is primitive, yes, there are better ways to do it. But: for SIMPLE number-crunching, like SIMPLE circuit design, I can get numbers back from BASIC faster than just about anything else out there.

If I had a decent FORTRAN IV compiler, linker, and runtime library, comparable to the old CDC 6600 RUN compiler, or the old Minnesota FORTRAN (MNF) compiler (also on 6600), I'd probably use that. But I don't. If I had something equivalent to the old Turbo PASCAL, say version 2 or version 3, before it became a serious commercial software development package, when it was still just a simple editor and a BLINDINGLY *FAST* compiler that quit and dropped me into the editor at the point of the first error, I'd probably use THAT.

BASIC? e.g. Dartmouth Basic, before it became Visual

Pascal compilers are also pretty straightforward, especially if you compile to some kind of p-code

Since you want simple, I assume you don't care about optimization, so most of the complexity of a production compiler is eliminated right there. If you don't mind user-hostile error recovery, use yacc/lex (or some other compiler generator tool) and you won't have to worry too much about syntax.

But if you want simple, I don't know how you can get much simpler than Forth. It's got a scanner to read a word and a lookup table to find an action to perform (okay, some actions involve mode changes, but still not that hairy). You can even write the kernel in machine language (hand-assembled) if you want a challenge or need to bootstrap bare metal. The rest of Forth can be implemented in Forth.

Lisp also has a pretty simple syntax for parsing, but you have to deal with more complex memory managment. Once you've got a kernel, you can implement a lot of Lisp in Lisp.

These days, from the user's perspective, the complexity and utility of a programming language is mostly in the library. How much of a library are you willing to implement, or can you use some open source library?

Pascal is nice for the programmer, but type-checking takes a lot of work for the compiler-writer (for me, anyway). I like C-variants myself.

VisualWorks' compiler is around 9,000 lines of Smalltalk. The VM is much more, but a VM for the core language with usable performance (e.g. translation to direct threaded code) can easily be written in 15,000 lines of C.

From simplest to most complex:

Wirth's PL/0, Chung and Yuen's Tiny Pascal, Gibson's Tiny-C, Cain's Small-C, Wirth's Oberon-0, van de Snepscheut's version of Pascal-S.

PL/0 is about as simple as you can get, in an imperative language, but it does have (parameterless) subroutines and local variables. P-Code versions can be implemented in about 500 lines of C.

Tiny Pascal - basically, PL/0 with arrays and parameters, simple I/O, in about 600 lines of C. Note that the Tiny Pascal version of Tiny Pascal is self-compiling.

A P-Code version of Small-C (integers, chars, simple pointers and arrays) including interpreter can be implemented in less than 1000 lines of (Small) C. It also is self-compiling.

Pascal-S, with I/O, integers, chars, and bools, arrays and records, can be implemented in about 1500 lines of C, complete with the P-Code interpreter.

> Which of these do you like the most?

Small-C.

The compiler was for a language called something like Tiny-Lisp or Mock-Lisp, and was about 400 lines of Bliss-32 code. One of the programs that I ran was a Tiny-Lisp interpreter (it was really only a dozen or so lines of code - Tiny-Lisp only had a handful of primitives). It was much more primitive than the Tiny Lisp described in http://www.cis.rit.edu/~jerry/Software/lithp/README . https://web.archive.org/web/20050219051229/http://www.cis.rit.edu/~jerry/Software/lithp/README

IIRC, the Bliss compiled code was under 4kb of object code, excluding the terminal i/o routines - there was no VAX/VMS at that point, this was all executing on a bare-bones machine with a DECwriter as a console.

What is the smallest self-hosting language?

> I found myself wondering what the smallest self-hosting language would look like. In the same way that programmers find it fun to write the smallest self-reproducing programs, it might be fun to try and write the smallest self-hosting language.

Um, I think the Turing machine theorists have beat this to death. There's a 3-state, 7 symbol Universal Turing machine. By definition, self hosting.

So I think the discussion needs to focus on what's the smallest

-- IDB

Alex Colvin 2/5/03

What is the smallest self-hosting language?

>> I found myself wondering what the smallest self-hosting language would look like. In the same way that programmers find it fun to write the smallest self-reproducing programs, it might be fun to try and write the smallest self-hosting language.

An old candidate might be Calvin Mooers' TRAC language. http://www.tracfoundation.org/ I believe it was the first language whose name was trademarked. -- mac the naïf [Fortran was a trademark, but IBM gave it up. I wrote a lot of Trac in my youth and don't ever recall seeing Trac in Trac, although it wouldn't be very hard. -John]

The pure untyped lambda calculus is arguably a small language and I find it very expressive as a programming language -- I can write a Turing-machine simulator in about 10 lines of lambda calculus, but I wouldn't want to try writing a lambda calculus evaluator in a Turing machine. And you can write a self-interpreter in the pure lambda calculus very compactly.

The problem boils down to how you represent lambda terms in the lambda calculus. The tempting idea of letting them be represented by themselves and just use the identity function as a self-interpreter doesn't really pan out: You, for example, can't compare represented terms for alpha-equality, so saying that it is a representation is stretching it a bit. But the representation function [·] described below does the trick:

    [M] = \a.\b. M*

where M* stands for the overlined M in the original ASCII-art notation, defined by:

    x*       = x
    (M N)*   = a M* N*
    (\x.M)*  = b (\x.M*)

This allows a self-interpreter to be written as

\m.m (\x.x) (\x.x)

which is hard to beat in any language that doesn't have this built in (like EVAL in LISP). For extra details, see the paper

Torben Æ. Mogensen, Linear-Time Self-Interpretation of the Pure Lambda Calculus, Higher-order and symbolic computation, vol. 13, no. 3, sep. 2000

or the shorter version in Springer LNCS 1755.

Maurice Wilkes describes a set of bootstrapped compilers for "WISP" with the result acceptable by the EDSAC 2 Assembly Routine. I had fun years ago with this. The reference is: M.V. Wilkes, "An Experiment with a Self-compiling Compiler for a Simple List-processing Language", Annual Review in Automatic Programming, vol 4, pp 1-48, 1964. The paper also gives a WISP program for formal differentiation of some simple algebraic expressions.

WISP was certainly interesting and small and real. Many instructors used it to teach language methods. But as I recall it was an interpreter based on a macro processor. It would interpret itself nicely.

Going back to the (not so long ago) 8080 days, the original Small C [Hendrix] would self compile and occupied only a few pages of source code. Even further back, Halstead described an abbreviated version of Nelliac (Pilot) that would self compile and was very small.

---

"Things you may need in a compiler: string manipulation, graph manipulation, garbage collection, reference formantics, recursion, associative arrays, hashing,..." [4]

---

[5] kragensitaker (Reddit comment):

The Monitor

You'd need a minimal "monitor" to start with — something that would let you enter in some binary code on an input device and jump to it. Here's a C version that lets you enter code in octal:

typedef void (*function)();
char program[32];
int main() {
    char *t = program;
    unsigned i, n;
    for (;;) {
        for (i = 3; i; i--) {
            n = getch() - '0';
            if (n > 7) (*(function)program)();
            *t = *t * 8 + n;
        }
        t++;
    }
}

Translate that into 8086 machine code with a BIOS call for getch(), put it on the boot sector of the floppy, and you're golden. GCC compiles it into 12 instructions, plus the function prologue for main(). I think that would be 32 bytes in 16-bit mode. (Maybe the BIOS call would push it a couple of bytes over.) (I don't actually remember if the alt-keypad thing that lets you type arbitrary bytes is in the BIOS. If so, you could probably simplify the minimal monitor program above by removing the loop.)

Traditionally a monitor like this was first built into the hardware, and a little later as software in ROM. If you want to use it for more than a very short period of time, it needs at least a few more features:

    the ability to correct keyboard errors;
    the ability to see what you're typing (or does the BIOS have a call for getche?);
    the ability to display the contents of memory;
    the ability to change the address at which new bytes will be put into memory;
    the ability to install themselves at some interrupt vector so that you can return to them when your program hits an infinite loop;

So your first task would be to write a more full-featured monitor program, on paper if possible — otherwise carve it into the desk surface with some removed part of the computer — and then very carefully type it in. Your second version, with a backspace, might be 40 bytes or so; you now need to type in 120 octal digits without making a single error, followed by some character that isn't an octal digit. If you do this correctly, you are rewarded by seeing your new program come to life!

Your next task is to enhance your monitor program to be truly usable, by adding the rest of the features listed above. And then you want to find a way to write it to the floppy drive.

At this point you are hoping that this is a 5¼" floppy drive so you can cut a couple of holes to make the disk into a "flippy": there's a substantial chance you're going to make a mistake and screw up writing your first boot sector, and if you do that, you're out of luck for the rest of your prison term.

So you write a call to the BIOS disk I/O routine, use it to write your new monitor program to the disk (hopefully on the back side of the disk), hold your breath, and test it.

Now you write another disk I/O routine that checks its argument to make sure it's not writing to the boot sector (or the original boot sector, which is at a different apparent location with the disk backwards) and start your system in earnest.

Edit: fixed two bugs in initial monitor code. Man, I would be so fucked if I were really in this situation.

The Assembler

Your next task is to write an assembler, in memory, using your monitor. It doesn't have to be a fancy assembler with luxuries like multi-character mnemonics, the full instruction set, and stuff like that; it just needs to be an improvement over typing in x86 code in octal. Single-letter case-sensitive mnemonics for 25 or 30 opcodes, plus the ability to calculate jump offsets, are probably plenty to get to the next step. Save your assembler to the floppy in an unused sector. (You should be keeping a map of what's in what sectors, carved into the desk if necessary.)

This is also about the time you want to enhance your monitor program to show you the contents of registers and to be able to single-step.

At this point you have achieved your original goal of being able to implement Tetris or Freecell. The next step after here is roughly as hard as implementing one of these games in this impoverished assembler, so it isn't practical merely as a step toward that goal. But if you want to get to an operating system with a graphical user interface, read on.

A Low-Level Language: Forth

Your next task is to bring up some kind of Forth, using your assembler. You can implement the basic primitives for a fairly reasonable Forth in about 200 instructions and 400 bytes of machine code, but that doesn't give you a text interpreter; that's another few hundred instructions. You can test each routine from your monitor program as you write it. (I have an incomplete token-threaded Forth in 399 bytes of machine code, and a complete self-compiling Forth compiler in 66 lines of code.)

Now you want to enhance your monitor program once more: you'll only be using it at boot time from now on, so you add a command to load and run a sector from elsewhere on the disk, so you can reboot more easily. You're going to be rebooting a lot, because every time you write a program that overwrites the wrong parts of memory, the machine is going to crash.

So at this point you've written somewhere around a thousand lines of code, which sounds like you ought to be able to do it in a day or two, but if you're like me, it's probably really more like two weeks because of the amount of effort involved in figuring out what went wrong each time you have a bug. And you're on the verge of having a usable programming environment: Forth has replaced your primitive assembler and monitor as your user interface of choice, it can pretty easily be extended to load programs from parts of the floppy disk, and now you probably want to implement some kind of minimal filesystem so that you don't accidentally overwrite your Forth operating system. It doesn't have to support fancy features like files that aren't contiguous on disk, files whose length isn't an exact number of sectors, nonsequential access, altering existing files, or multiple directories. It just has to keep you from accidentally overwriting data you meant to keep.

Now you probably want to write some utility programs: something to delete a file, something to defrag the disk, something to make a copy of a file, something to list the files on the disk, something to make a new version of a file. Also, you're probably becoming frustrated with waiting for the floppy disk to read and write data, so you probably want to implement LZW and Huffman coding so that you can compress your files for speed. Also, you're going to have a hard time fitting the source code for an OS with a full-featured GUI on a single floppy disk without compressing it.

You can also very easily write a much more full-featured assembler now as a Forth vocabulary. Implementing your assembler as a Forth vocabulary magically gives your assembler macros and compile-time evaluation. You probably also want to write a disassembler now for debugging purposes.

You probably want a text editor now, with features like insert and delete, instead of poking strings of bytes into memory locations that contain your source code.

Protected Mode and a Memory-Safe Language

By this time, you probably want to be in 32-bit protected mode, unless the machine is so old that it doesn't support it or doesn't have more than a few hundred K of memory. Programming the x86 in 16-bit real mode is a pain; you have these constant tradeoffs between performance and imposing arbitrary limitations on data structure size.

The downside of this is that all the machine code you've written up to now won't work any more. Hopefully you can alter your Forth assembler to generate 32-bit code, and generate a 32-bit version of your Forth from a Forth version of its source code.

Your next problem is probably to implement a memory-safe language, so you can stop spending so much time tracking down pointer errors. This is probably easier to do by implementing a dynamically-typed language like Lua than by implementing a statically-typed language that's advanced enough to be pleasant to use like OCaml. You might be able to do this by implementing a memory-safe vocabulary in Forth, but I've never heard of anyone doing this.

Alternatively, you could implement fancier syntax for an unsafe low-level language first, in the manner of C. You probably want to use PEGs, because you can implement a parser generator for those in under a hundred lines of code, and they're roughly as powerful for commonly-used languages as LALR parsers.

Given the constraint of a single floppy disk, you probably want to compile all of this stuff into memory from a minimal bootstrapping interpreter at boot time, rather than compiling source code and storing the compiled results. (I'm assuming you have several times as much memory as floppy-disk space.)

Graphics

At this point you want to do graphics. Set the graphics mode to 640×480×16 via the BIOS, implement some graphics primitives by writing to the frame buffer, interface to the mouse so you can point at stuff on the screen easily, copy the font out of the VGA ROM, and implement a simple windowing system, a graphical text editor, and your games.

---