proj-plbook-plPartFormal

Table of Contents for Programming Languages: a survey


Part : Formal stuff

Note on academic research: academia is great, but:

upshot: lots of hidden gems, but a huge time commitment to find and read them. Can't trust a paper's claims that this or that is important for you to know.

http://www.cis.upenn.edu/~bcpierce/courses/670Fall04/GreatWorksInPL.shtml

http://lambda-the-ultimate.org/

c2.com

retrospective papers and blog posts by designers of great languages

preferentially read academic works that are designed to be read easily, rather than only to get past reviewers:

review papers
textbooks
talk slides
introductory-ish encyclopedic works

note: if you ARE a programming languages academic, reading only these will be insufficient; you also have to keep up with some current literature, so that you are well-placed to make a new contribution.


todo:

Trevor Jim, What are principal typings and what are they good for? http://web.archive.org/web/20081118065537/http://www.church-project.org/reports/electronic/Jim:MIT-LCS-1995-532.pdf


universal combinatorial logic (not Turing universal):

" Proposition 3 (Post) A necessary and sufficient condition that a set of operations form a basis for the nonzeroary Boolean operations is that it contain operations that are respectively nonaffine, nonmonotone, nonstrict, noncostrict, and nonselfdual " -- http://boole.stanford.edu/cs353/handouts/book3.pdf

NAND and NOR are universal
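A minimal Python sketch of NAND's universality (function names are my own): NOT, AND, and OR can all be built from NAND alone, and checked exhaustively over the truth table.

```python
# Sketch: deriving the other Boolean operations from NAND alone,
# illustrating why {NAND} is a universal basis.

def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)              # NOT a = a NAND a

def and_(a, b):
    return not_(nand(a, b))        # AND = NOT of NAND

def or_(a, b):
    return nand(not_(a), not_(b))  # De Morgan: a OR b = (NOT a) NAND (NOT b)

# Exhaustively check the derived gates against Python's built-ins.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
    assert not_(a) == (not a)
```

The same construction works starting from NOR, by duality.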

---

unconventional computer implementations:

https://news.ycombinator.com/item?id=7824588

---

https://surface.syr.edu/cgi/viewcontent.cgi?article=1012&context=lcsmith_other Definitional interpreters for higher-order programming languages by John C. Reynolds

about meta-circular interpreters, classifying them into four classes, and how to transform them between the classes

Abstract. Higher-order programming languages (i.e., languages in which procedures or labels can occur as values) are usually defined by interpreters that are themselves written in a programming language based on the lambda calculus (i.e., an applicative language such as pure LISP). Examples include McCarthy's definition of LISP, Landin's SECD machine, the Vienna definition of PL/I, Reynolds' definitions of GEDANKEN, and recent unpublished work by L. Morris and C. Wadsworth. Such definitions can be classified according to whether the interpreter contains higher-order functions, and whether the order of application (i.e., call by value versus call by name) in the defined language depends upon the order of application in the defining language. As an example, we consider the definition of a simple applicative programming language by means of an interpreter written in a similar language. Definitions in each of the above classifications are derived from one another by informal but constructive methods. The treatment of imperative features such as jumps and assignment is also discussed.
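A toy illustration of the "higher-order" style the abstract describes, sketched in Python (the AST encoding is my own, not Reynolds' notation): procedures of the defined language are represented as closures of the defining language, so the defined language silently inherits Python's call-by-value order of application.

```python
# A tiny definitional interpreter for an applicative language, in the
# "higher-order" style: defined-language procedures become closures of
# the defining language, so evaluation order is inherited from Python.

def interp(expr, env):
    kind = expr[0]
    if kind == 'var':                 # ('var', name)
        return env[expr[1]]
    if kind == 'lit':                 # ('lit', value)
        return expr[1]
    if kind == 'lam':                 # ('lam', param, body)
        _, param, body = expr
        return lambda arg: interp(body, {**env, param: arg})
    if kind == 'app':                 # ('app', fn, arg)
        f = interp(expr[1], env)
        a = interp(expr[2], env)      # argument evaluated eagerly (CBV)
        return f(a)
    raise ValueError(kind)

# ((lambda x. x) 42)
identity_app = ('app', ('lam', 'x', ('var', 'x')), ('lit', 42))
print(interp(identity_app, {}))       # prints 42
```

Reynolds' transformations (defunctionalization, CPS conversion) exist precisely to remove this dependence on the defining language's higher-order functions and evaluation order.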

---

https://aleksandarmilicevic.github.io/hola/ https://news.ycombinator.com/item?id=19265061

---

about verifying/proving properties of programs:

https://c9x.me/articles/trust/

---

https://infoscience.epfl.ch/record/278576

" The Simple Essence of Algebraic Subtyping: Principal Type Inference with Subtyping Made Easy Parreaux, Lionel

MLsub extends traditional Hindley-Milner type inference with subtyping while preserving compact principal types, an exciting new development. However, its specification in terms of biunification is difficult to understand, relying on the new concepts of bisubstitution and polar types, and making use of advanced notions from abstract algebra. In this paper, we show that these are in fact not essential to understanding the mechanisms at play in MLsub. We propose an alternative algorithm called Simple-sub, which can be implemented efficiently in under 500 lines of code (including parsing, simplification, and pretty-printing), looks more familiar, and is easier to understand. We present an experimental evaluation of Simple-sub against MLsub on a million randomly-generated well-scoped expressions, showing that the two systems agree. The mutable automaton-based implementation of MLsub is quite far from its algebraic specification, leaving a lot of space for errors; in fact, our evaluation uncovered several bugs in it. We sketch more straightforward soundness and completeness arguments for Simple-sub, based on a syntactic specification of the type system. This paper is meant to be light in formalism, rich in insights, and easy to consume for prospective designers of new type systems and programming languages. In particular, no abstract algebra is inflicted on readers. "

---

Uses of ADTs / sum types

"boolean | number | string | Array<JSON> | Object<JSON>" -- [1]
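A sketch of the quoted JSON example as a sum type, in Python (names are mine). Python lacks native ADTs, so the "sum" is a union of cases dispatched by isinstance; any consumer of the type must handle every alternative or fail loudly.

```python
# Sketch: JSON values as a sum type with alternatives
# null | boolean | number | string | Array<JSON> | Object<JSON>.
# A serializer is a total function over the alternatives.

def to_json(value):
    if value is None:
        return "null"
    if isinstance(value, bool):        # must check bool before int!
        return "true" if value else "false"
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, str):
        return '"' + value + '"'
    if isinstance(value, list):        # Array<JSON>
        return "[" + ",".join(to_json(v) for v in value) + "]"
    if isinstance(value, dict):        # Object<JSON>
        return "{" + ",".join('"%s":%s' % (k, to_json(v))
                              for k, v in value.items()) + "}"
    raise TypeError("not a JSON value: %r" % (value,))

print(to_json({"ok": True, "xs": [1, 2]}))  # {"ok":true,"xs":[1,2]}
```

In a language with real sum types (ML, Haskell, Rust) the compiler checks that all alternatives are covered; here that burden falls on the final raise.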

felleisen paper defining expressiveness:

https://homepage.cs.uiowa.edu/~jgmorrs/eecs762f19/papers/felleisen.pdf

related:

https://fexpr.blogspot.com/2013/12/abstractive-power.html

---

Why is the lambda calculus the way it is, where everything is a function? The motivation is just to have a very small mathematical definition, so that you can prove theorems about it without many special cases or much verbosity. The idea of doing everything with functions is perhaps a bug rather than a feature, although of course, once formulated, the lambda calculus is an interesting object of study in its own right.
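To see "everything is functions" concretely: Church encodings represent booleans and numerals purely as functions, and they run fine as ordinary Python closures (a standard construction; the names below are the conventional ones).

```python
# Church encodings: data as pure functions.

TRUE  = lambda t: lambda f: t          # boolean = a two-way selector
FALSE = lambda t: lambda f: f
IF    = lambda b: lambda t: lambda f: b(t)(f)

ZERO = lambda s: lambda z: z           # numeral n = apply s n times to z
SUCC = lambda n: lambda s: lambda z: s(n(s)(z))
ADD  = lambda m: lambda n: lambda s: lambda z: m(s)(n(s)(z))

def to_int(n):                         # decode a Church numeral
    return n(lambda x: x + 1)(0)

two = SUCC(SUCC(ZERO))
print(to_int(ADD(two)(two)))           # prints 4
print(IF(TRUE)("yes")("no"))           # prints yes
```

The cost of the uniformity is obvious here: even 2 + 2 is buried under layers of closures, which is exactly the "bug rather than a feature" trade-off.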


todos

"What inets..."

"It is the optimal runtime for functional programs (and arguably everything). It is basically what you get when you have a graph with arbitrary confluent rewrite rules. I need to write a post someday introducing the subject." [2]

https://github.com/MaiaVictor/inet

" This is a draft of a complete interaction net reducer, i.e., one that allows for arbitrary nodes, not just interaction combinator nodes. I now realize this is necessary if we want to be competitive with modern functional runtimes in performance. We can't afford the extra cost of λ-encoding everything. Compiling pattern-matching functions to interaction net nodes directly will be at least an order of magnitude more efficient in practice. This runtime is, right now, 100x slower than JS in a very primitive test. It is a good start for a JS "interpreter". It performs about 6m rewrites per second. 30m should be achievable by compiling to machine code (wasm, c, llvm). The remaining 20x could come from several micro optimizations, parallelism. Or not. But I have faith. Interaction net reduction is not heavy. The rules are lightweight, the system is memory efficient. It just requires some work. "

(my own thoughts): One way to look at Turing-completeness is that you need: arithmetic, memory, conditionals, and 'loop'. However, you can also replace 'loop' with 'goto'. With just 'loop' (and the others), you can emulate any program with 'goto' by writing an interpreter, and with just 'goto' (and the others), you can make a 'loop'. So, in the context of the others, 'loop' and 'goto' are equivalent. But it seems like there is some other sense in which 'goto' is more powerful than 'loop': you can literally make a 'loop' out of 'goto', whereas emulating 'goto' by writing an entire interpreter seems less direct somehow. What is this other sense, precisely?
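The interpreter trick above can be made concrete in a few lines of Python (this particular program-counter encoding is my own sketch): a single while loop dispatches on a program counter, and each "block" returns the label to jump to next, which is exactly 'goto'.

```python
# Emulating goto with a single while loop: a program is a dict of labeled
# blocks; each block mutates the state and returns the next label (None
# halts). The while loop is a tiny interpreter for goto programs.

def run(program, state):
    pc = 0
    while pc is not None:              # the single loop
        pc = program[pc](state)        # each block returns the next label
    return state

# Compute n! with goto-style control flow.
fact = {
    0: lambda s: 1,                                 # init: fall through
    1: lambda s: 2 if s['n'] > 0 else 3,            # conditional jump
    2: lambda s: (s.update(acc=s['acc'] * s['n'],
                           n=s['n'] - 1), 1)[1],    # loop body, goto 1
    3: lambda s: None,                              # halt
}

state = {'n': 5, 'acc': 1}
run(fact, state)
print(state['acc'])                    # prints 120
```

Note the asymmetry the paragraph is pointing at: going from 'goto' to 'loop' is a local, syntactic rewrite, while going from 'loop' to 'goto' requires this global reification of control state into data (the pc variable), which is a whole-program transformation.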

---

The first section here (THE "FORTRAN FALLACY") is the clearest concise exposition of one way that linear logic can be relevant to programming language design:

Linear Logic and Permutation Stacks--The Forth Shall Be First by Henry G. Baker


https://assets.amazon.science/77/5e/4a7c238f4ce890efdc325df83263/using-lightweight-formal-methods-to-validate-a-key-value-storage-node-in-amazon-s3-2.pdf

---

https://semantic-domain.blogspot.com/2022/09/the-golden-age-of-pl-research.html?m=1


https://github.com/readme/guides/testable-requirements-feedback

---

https://flownet.com/ron/lambda-calculus.html


Links


Chapter ?: CS theory 101
Chapter ?: Models of computation (Turing-equivalent systems) (also, models of concurrent computation)
Chapter ?: Sub-Turing models of computation
Chapter ?: Halting
Chapter ?: Semantics