proj-oot-ootToReads

see also [1]

higher priority / dont forget about these

General

urbit

introductions

interop

toreads unsorted

link Try adding a language with proof solving capabilities. ATS, Agda, Idris, Coq spring to mind.

compilers

PL textbooks and intros and courses

the stuff in [2]: 'Collections of comparisons of programming languages'

PL design / big picture

uncategorized papers

lists

designing and building language tutorials

language implementation

http://www.cs.indiana.edu/~dyb/pubs/nano-jfp.pdf A Nanopass Framework for Compiler Education [PDF] by Sarkar, Waddell, and Dybvig * https://github.com/akeep/nanopass-framework https://github.com/akeep/nanopass-framework/blob/master/doc/user-guide.pdf?raw=true * http://lambda-the-ultimate.org/node/1589

target languages / minimal languages / bytecode implementations

data Lam :: * -> * where
  Lift :: a -> Lam a
  Tup  :: Lam a -> Lam b -> Lam (a, b)
  Lam  :: (Lam a -> Lam b) -> Lam (a -> b)
  App  :: Lam (a -> b) -> Lam a -> Lam b
  Fix  :: Lam (a -> a) -> Lam a "

db queries

graphs

haskelly incl. category theory

algebraic effects

Morte and Om

vms

pythonic

c ish / c y / cy

rusty

lispy

  # The base environment for an Ur-Lisp written in Ruby
  @env = { :label => proc { |(name,val), _| @env[name] = eval(val, @env) },
   :car   => lambda { |(list), _| list[0] },
   :cdr   => lambda { |(list), _| list.drop 1 },
   :cons  => lambda { |(e,cell), _| [e] + cell },
   :eq    => lambda { |(l,r),ctx| eval(l, ctx) == eval(r, ctx) },
   :if    => proc { |(c,t,e),ctx| eval(c, ctx) ? eval(t, ctx) : eval(e, ctx) },
   :atom  => lambda { |(s), _| (s.is_a? Symbol) or (s.is_a? Numeric) },
   :quote => proc { |sexpr, _| sexpr[0] } }"
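The same idea can be mirrored in a few lines of Python. This is a hypothetical sketch (the `ur_eval` function and the list-based s-expression encoding are my own, not from the Ruby source): an environment maps symbols to procedures, and each procedure receives the unevaluated argument list plus the environment, just like the Ruby version.

```python
# Hypothetical Python sketch of the Ur-Lisp base environment above.
# S-expressions are nested Python lists; symbols are strings.

def ur_eval(sexpr, env):
    """Evaluate an s-expression: numbers are self-evaluating, strings
    are symbols looked up in env, lists dispatch on their head."""
    if isinstance(sexpr, (int, float)):
        return sexpr
    if isinstance(sexpr, str):
        return env[sexpr]
    op, *args = sexpr
    return env[op](args, env)

env = {
    "car":   lambda args, env: ur_eval(args[0], env)[0],
    "cdr":   lambda args, env: ur_eval(args[0], env)[1:],
    "cons":  lambda args, env: [ur_eval(args[0], env)] + ur_eval(args[1], env),
    "quote": lambda args, env: args[0],
    "eq":    lambda args, env: ur_eval(args[0], env) == ur_eval(args[1], env),
    "if":    lambda args, env: (ur_eval(args[1], env)
                                if ur_eval(args[0], env)
                                else ur_eval(args[2], env)),
}

print(ur_eval(["car", ["quote", [1, 2, 3]]], env))  # 1
print(ur_eval(["if", ["eq", 1, 1], 42, 0], env))    # 42
```

Note that, as in the Ruby version, special forms like `quote` and `if` work only because the environment entries receive their arguments unevaluated.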

continuations

J

continuations

concurrency (and parallelism, and distributed systems)

paradigms

functional

metaprogramming

macros

theory

DSLs

evaluation strategy

call-by-text and fexprs

types

provers

high assurance and small core stuff from nickpsecurity

math

OOP

" Another key point is his arguing for extreme late-binding. What does that mean? Well, consider this code:

my $order = OrderFactory->fetch(%order_args);
my $invoice = $order->invoice;

If you have multiple "order" classes, you may not know which class you are dealing with, so you can't know, at compile time, which invoice method you're calling. OOP languages generally don't select (bind) the method (invoice) for the invocant ($order) until run time. Otherwise, polymorphism can't work.

But what's extreme late binding? Does the invoice method exist? In a language like Java, that code won't even compile if the method doesn't exist. It might even be dead code that is never called, but you can't compile it if that method isn't there. That's because Java at least checks to ensure that the method exists and can be called.

For many dynamic languages, such as Perl (I ♥ Perl), there's no compilation problem at all because we don't bind the method to the invocant until that code is executed, but you might get a panicked 2AM call that your batch process has failed ... because you might have encapsulation, but not isolation. Oops. This is "extreme" late binding, with virtually no checks (other than syntax) performed until runtime. ... Extreme late-binding is important because Kay argues that it permits you to not commit too early to the "one true way" of solving an issue (and thus makes it easier to change those decisions), but can also allow you to build systems that you can change while they are still running! "

this post goes into that too:

https://softwareengineering.stackexchange.com/questions/301919/object-oriented-late-binding

" “Binding” refers to the act of resolving a method name to a piece of invocable code. Usually, the function call can be resolved at compile time or at link time. An example of a language using static binding is C:

#include <stdio.h>

int foo(int x);

int main(void) {
    printf("%d\n", foo(40));
    return 0;
}

int foo(int x) { return x + 2; }

Here, the call foo(40) can be resolved by the compiler. This early resolution allows certain optimizations such as inlining. The most important advantages are:

    we can do type checking
    we can do optimizations

On the other hand, some languages defer function resolution to the last possible moment. An example is Python, where we can redefine symbols on the fly:

def foo():
    """Call the bar() function. We have no idea what bar is."""
    return bar()

def bar():
    return 42

print(foo())  # bar() is 42, so this prints "42"

# use reflection to overwrite the "bar" variable
locals()["bar"] = lambda: "Hello World"

print(foo())  # bar() was redefined to "Hello World", so it prints that

bar = 42
print(foo())  # throws TypeError: 'int' object is not callable

This is an example of late binding. While it makes rigorous type checking unreasonable (type checking can only be done at runtime), it is far more flexible and allows us to express concepts that cannot be expressed within the confines of static typing or early binding. For example, we can add new functions at runtime.

Method dispatch as commonly implemented in “static” OOP languages is somewhere in between these two extremes: A class declares the type of all supported operations up front, so these are statically known and can be typechecked. We can then build a simple lookup table (vtable) that points to the actual implementation. Each object contains a pointer to a vtable. The type system guarantees that any object we get will have a suitable vtable, but we have no idea at compile time what the value of this lookup table is. Therefore, objects can be used to pass functions around as data (half the reason why OOP and functional programming are equivalent). Vtables can be easily implemented in any language that supports function pointers, such as C.
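The vtable mechanism described above can be sketched in a few lines. This is a hedged illustration (the names `call`, `CIRCLE_VTABLE`, etc. are invented for the example): each "object" is a record carrying a pointer to a per-class table of function references, and every method call goes through that table at run time, mimicking what a compiler emits for virtual calls.

```python
# Hypothetical sketch of vtable-style dynamic dispatch, using dicts of
# function references in place of C function pointers.

def circle_area(self):
    return 3.14159 * self["r"] ** 2

def square_area(self):
    return self["side"] ** 2

# One vtable per "class"; every instance points at its class's vtable.
CIRCLE_VTABLE = {"area": circle_area}
SQUARE_VTABLE = {"area": square_area}

def call(obj, method, *args):
    """Dynamic dispatch: resolve the method through the object's vtable
    at run time; the caller never knows which implementation it gets."""
    return obj["vtable"][method](obj, *args)

c = {"vtable": CIRCLE_VTABLE, "r": 1.0}
s = {"vtable": SQUARE_VTABLE, "side": 3.0}

print(call(c, "area"))  # 3.14159
print(call(s, "area"))  # 9.0
```

The set of method names is fixed when the vtable is built (so it can be checked up front), but the value found in the table is only known at run time, which is exactly the "in between" position the quote describes.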

...

This kind of method lookup is also known as “dynamic dispatch”, and sits somewhere in between early binding and late binding. I consider dynamic method dispatch to be the central defining property of OOP, with anything else (e.g. encapsulation, subtyping, …) being secondary. ... While this is late-ish binding, this is not the “extreme late binding” favoured by Kay. Instead of the conceptual model “method dispatch via function pointers”, he uses “method dispatch via message passing”. This is an important distinction because message passing is far more general. In this model, each object has an inbox where other objects can put messages. The receiving object can then try to interpret that message. The most well-known OOP system is the WWW. Here, messages are HTTP requests, and servers are objects. ... The power of message passing is that it scales very well: no data is shared (only transferred), everything can happen asynchronously, and objects can interpret messages however they like. This makes a message passing OOP system easily extendable. I can send messages that not everyone may understand, and either get back my expected result or an error. The object need not declare up front which messages it will respond to. "
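The message-passing model in the quote can be sketched as follows. This is a minimal, hypothetical example (the `Counter` class and `receive` protocol are invented for illustration): the object itself interprets incoming messages, and an unknown message produces an error value at run time rather than a compile-time failure, since nothing is declared up front.

```python
# Hedged sketch of message-passing dispatch: the object interprets
# messages however it likes; unknown messages yield an error reply.

class Counter:
    def __init__(self):
        self.n = 0

    def receive(self, message, *args):
        """Interpret a message. The set of understood messages is not
        declared anywhere; it is decided here, at run time."""
        if message == "increment":
            self.n += 1
            return self.n
        if message == "value":
            return self.n
        return ("error", "does-not-understand", message)

c = Counter()
c.receive("increment")
c.receive("increment")
print(c.receive("value"))          # 2
print(c.receive("self-destruct"))  # ('error', 'does-not-understand', 'self-destruct')
```

This is the Smalltalk `doesNotUnderstand` flavor of extensibility: senders can speculatively send messages, and the receiver decides whether to honor them.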

frameworks and libs

major languages , applications

other languages

math: Maple Mathcad Maxima Sage

quotes / move to book

learn to code , and ipython notebooks

high integrity

dgemm 19 hours ago

Is there a high level overview somewhere I can read? Don't even know what to google here.

reply

throwawayf2o 18 hours ago

In general, the answer typically involves formal specification and formal methods that check the code against these specifications, combined with testing and coding standards that result in analysable code.

More references:

https://www.cs.umd.edu/~mvz/cmsc630/clarke96formal.pdf

http://research.microsoft.com/en-us/um/people/lamport/tla/fo...

reply

pjmlp 12 hours ago

Search for high integrity software.

For example, with MISRA it may be C, but feels like Ada.

http://www.misra.org.uk/

Or Spark similarly for Ada

http://www.spark-2014.org/

reply

tormeh 18 hours ago

You might want to look into coding standards for C and special languages like Ada (like C, but less writeable, more readable with strong types) and Esterel (deterministic multithread scheduling). Seriously, Esterel is probably the coolest thing you'll read about this week.

There's also various specification languages for multithreaded behaviour, which allows you to analyse your programs behaviour using software tools, for example SPIN[0].

0: http://en.wikipedia.org/wiki/SPIN_model_checker

reply

roma1n 3 hours ago

Yup. How a DO-178-like integrity level is not mandatory for medical devices is troubling.

reply

errors

probabilistic programming

blogs

See also

See also [[ootToReadsAlreadyCopiedToPlBook?]].

See also [[ootToReadsFrameworks?]].

scala

smalltalk

logic

Low-level

concurrency

dataframes

R

dmbarbour's awelon

Overview (read that already). Trying to make programming easy for non-programmers by (Smalltalk-style?) 'IDE is the UI' (influences: Squeak Smalltalk, ToonTalk, LambdaMOO, Morphic, Croquet, Emacs, and HyperCard). Thinks that Smalltalk failed because lack of recent advances in programming language design made it too hard to reuse code. Key innovations include: a streaming bytecode (and user input = streaming code, so that "user's input history can then be mined to develop user-macros, tools, and functions based on example"), "long-running behaviors and policies as accessible objects" (i don't yet understand this) via linear types (i don't yet understand how this relates) and Reactive Demand Programming (haven't read yet). There is a core language, ABC, Awelon Byte Code, and a language above that, AO (and is there a language above that?) and another language/shell script-ish thing above it, Claw. dmbarbour is a Haskell-y guy who is on LTU a lot; he clearly has forgotten more about PLT than i'll ever know. To make awelon bytecode streamable (which apparently implies that you can forget old bytecode after executing it), there are no backwards jumps; instead, loops are done by fixpoint combinators, which i don't understand, but should learn.

data structures

concurrent/distributed

Lists of to-reads

parsing and grammar / syntax

modules

semantics

interop

dbs

bitc

reread http://www.coyotos.org/pipermail/bitc-dev/2012-March/003300.html http://www.coyotos.org/pipermail/bitc-dev/2012-April/003315.html http://lambda-the-ultimate.org/node/4490 bitc is still alive!: http://comments.gmane.org/gmane.os.coyotos.bitc.devel/4745 https://www.bitc-lang.org/node/9

capabilities

I/O

more on epoll vs kqueue vs iocp (ppl prefer iocp or kqueue):

VMs

Contracts preconditions etc for safety

Erlang and Elixir

Design tools

"Long term, we have to get to the point where we ship languages -- and implementations -- with strong, proven foundations. There are promising moves in this direction, both in designed-for-language-designer tools like K framework or Redex, and in the general set of libraries and projects being undertaken in general proof assistants like Isabelle and Coq."

Assembly

Core languages

For OotB

i guess i'm still feeling fairly eager to get-implementin' on an Oot Bytecode interpreter, even though there are many unanswered questions at the Oot Core level, and even though i haven't learned all of the other contender languages yet. What else is likely to make such a big difference to Oot Bytecode that beginning implementation now would be a waste of time? All i can think of is that i should:

Transpiling

OSs

Program transformation

Prolog

normal forms

3-lisp

clockless circuits

eve

design

scripting

https://hackage.haskell.org/package/turtle-1.3.3/docs/Turtle-Tutorial.html

quantum computing

QC: Quil

QC: IBM-Q and QASM

QC: Q#

QC: Liquid (F#)

QC: other

QCL (C-like) QML (Haskell-like) Quipper (Haskell) ProjectQ (C++/Python)

misc discussions on interesting things

https://news.ycombinator.com/item?id=15051645 http://lambda-the-ultimate.org/node/5466 https://news.ycombinator.com/item?id=64225 http://www-cs-students.stanford.edu/~blynn/c/fortran.html also most of the rest of that site : Java vs C Haskell vs C C Cruft C Wishlist Bash vs C Go vs C https://zverok.space/blog/2023-11-10-syntax-sugar3-hash-values-omission.html

Misc

> #(contingent #(interval 3.0255 3.0322)
               (shadow super))

What's so special about this? Well, the fall-time has been updated to a narrower interval... but that last part (shadow and super) are the symbols of the other cells which propagated the information of this updated state. Pretty cool! And no fancy natural language parsing involved. "

What is Radul working on now? In a blog post on March 21, 2016, he was two years into working on a probabilistic programming platform. So, since at least March 21, 2014, he was working on a probabilistic programming platform. On Aug 7, 2017, he posted something called "Musings on Probprog". His Nov 19, 2017 blog post "Compositional statistical unit testing" suggests that he was still working on it. Two more blog posts, on Dec 25, 2017 and Dec 23, 2018, are about probabilistic inference and statistical testing. So, from at least Mar 21, 2014 until approximately Dec 23, 2018 (possibly a little shorter or much longer), he was probably working on a probabilistic programming platform.

His most cited paper appears to be "Automatic differentiation in machine learning: a survey" in 2015, followed by The Art of the Propagator (Jan 26 2009) and his thesis (3 Nov 2009? although the downloadable version appears to be a draft from Sep 2009; oh i see the final thesis is at http://dspace.mit.edu/bitstream/handle/1721.1/49525/MIT-CSAIL-TR-2009-053.pdf?sequence=1 ; but it doesn't have the section on Dataflow that i was looking for, that's only in the draft).

In a paper published Aug 18 2021 on which he was an author ("Getting to the point. index sets and parallelism-preserving autodiff for pointful array programming"), his affiliation is "Google Research". Before that, he had another paper in 2021 on "An adaptive-MCMC scheme for setting trajectory lengths in Hamiltonian Monte Carlo". In 2021, there appears to be exactly one paper on which he was first author, "The Base Measure Problem and its Solution", https://proceedings.mlr.press/v130/radul21a.html , which appears to be research supporting probabilistic programming systems. In 2020, there appears to be exactly one paper on which he was first author, "Automatically batching control-intensive programs for modern accelerators", which I'm guessing could be related to the implementation of probabilistic programming systems with the use of GPUs.

Since the propagator stuff, he's been doing some work on probabilistic programming. https://github.com/ekmett/propagators
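The contingent-interval output quoted above can be illustrated with a toy sketch. This is a hedged, invented example (the `IntervalCell` class and `add_content` method names are mine, loosely echoing The Art of the Propagator): a cell holds an interval belief, merging information from a neighbour can only narrow the interval, and the cell remembers which supporters contributed to its current state.

```python
# Hedged sketch of a propagator cell holding an interval, where new
# information intersects with (never widens) the current belief, and
# supporting cells are tracked, as in the #(contingent ...) output above.

class IntervalCell:
    def __init__(self, lo=float("-inf"), hi=float("inf")):
        self.lo, self.hi = lo, hi
        self.supporters = set()  # which cells contributed this refinement

    def add_content(self, lo, hi, supporter):
        """Intersect the incoming interval with the current one; a
        supporter is recorded only if it actually narrowed the belief."""
        new_lo, new_hi = max(self.lo, lo), min(self.hi, hi)
        if new_lo > new_hi:
            raise ValueError("contradiction between supporters")
        if (new_lo, new_hi) != (self.lo, self.hi):
            self.lo, self.hi = new_lo, new_hi
            self.supporters.add(supporter)

fall_time = IntervalCell()
fall_time.add_content(3.0, 3.1, "shadow")        # estimate from one measurement
fall_time.add_content(3.0255, 3.0322, "super")   # a tighter estimate from another

print((fall_time.lo, fall_time.hi))  # (3.0255, 3.0322)
print(sorted(fall_time.supporters))  # ['shadow', 'super']
```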

IPC

data

tobemined

cmdline and shells

e.g. http://www.softpanorama.org/People/Scripting_giants/scripting_languages_as_vhll.shtml

for LOVM/Lo

for OVM

CPS ILs

general implementation tips

effects

misc/todo

noir_lord 60 days ago [-]

What a beautifully written article with informative links.

http://lucacardelli.name/Papers/TypefulProg.pdf is now next on my list when I finish reading A Philosophy of Software Design (which is brilliant if you haven't seen it).

nickpsecurity 59 days ago [-]

Cardelli's Modula-3, a C++ alternative, also illustrates excellent balance of simplicity, programming in large, compile time, and run time. Expanding on that carefully like with macros, a borrow checker, and LLVM integration would have made for a simpler, safer, systems language. Give it a C-like syntax with C compatibility, too, for adoption.

https://en.m.wikipedia.org/wiki/Modula-3

pjmlp 59 days ago [-]

You mean C# I guess. :)

Actually with .NET Native, the GC improvements in .NET 4.6 (TryStartNoGCRegion() and other companion methods), and the C# 7.x improvements taken from Midori, it is quite close.

nickpsecurity 59 days ago [-]

C# looked a lot more complex than Modula-3 when I last looked at it. The book was thick. They definitely did nice things in C#. I just don't know it's fair to equate it with the simplicity vs get shit done vs small runtime of Modula-3.

I am up for you elaborating a bit on the second sentence since it sounds interesting. Not doing .NET, I don't know what any of those are except the middle one which sounds like SYSTEM/UNSAFE sections.

pjmlp 59 days ago [-]

It is more complex, but Modula-3 isn't that tiny either, around Algol 68/Ada 83 "tiny".

They integrated the improvements from M#/System C# (Midori) into C#.

Namely return ref, ref for local variables, stack allocation for arrays in safe code, spans (slices) across all memory types, allocation free pipelines.

---

https://people.inf.ethz.ch/wirth/CompilerConstruction/index.html

https://people.inf.ethz.ch/wirth/ProjectOberon/index.html

---

https://cygni.se/the-perfect-programming-language/ https://news.ycombinator.com/item?id=21543244

---

i haven't read the HN discussion yet on Notes on a Smaller Rust except i have read the subthread of the first comment: https://news.ycombinator.com/item?id=20465716

i have read the lobsters discussion.

now there is a followup:

https://without.boats/blog/revisiting-a-smaller-rust/

---

scala

memory management

syntax

tooling

npm etc) how is "minimal version selection" working out? ☶ ask go

control

ovm

libraries

related

tobemined

---

"

dunefox 2 days ago [–]

First-class logic/constraint programming like in Flix (https://flix.dev/). This would make complex logic easier and more powerful.

reply "