proj-plbook-plChProofLangs

Table of Contents for Programming Languages: a survey

TODO: this is related to plChLogicLangs.txt

Chapter : Proof assistant languages

Coq

Not Turing-complete (termination condition)
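
In practice this means every recursive definition must be accepted by a termination checker (typically via structural recursion or a well-founded measure), so arbitrary general recursion cannot be written directly in the logic. A rough sketch of the discipline, shown in Lean 4 rather than Coq since the same restriction applies there (my own illustration, not from any source quoted here):

    -- structurally recursive on the list argument: accepted by the termination checker
    def sumList : List Nat → Nat
      | []      => 0
      | x :: xs => x + sumList xs

    -- a definition with no decreasing argument is rejected unless marked `partial`,
    -- which keeps it out of the logical fragment (proofs cannot unfold it)
    partial def spin (n : Nat) : Nat := spin n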

"Contemporary provers (built upon consistent Martin-Löf Type Theory, 1972) like Agda, Coq, Lean, F*, Idris are based on Barendregt and Coquand' CoC? with different flavors of infinity universe hierarchies and Calculus of Inductive Constructions. Some of them are automated and some are trying to be and general purpose programming languages with proving facilities." [1]

Comparative opinions:

Links:

Agda

Not Turing-complete (termination condition) (sorta)

"Contemporary provers (built upon consistent Martin-Löf Type Theory, 1972) like Agda, Coq, Lean, F*, Idris are based on Barendregt and Coquand' CoC? with different flavors of infinity universe hierarchies and Calculus of Inductive Constructions. Some of them are automated and some are trying to be and general purpose programming languages with proving facilities." [3]

Tutorial:

Comparative opinions:

HOL

The original HOL system (HOL88) lives on as HOL4 and HOL Light; Isabelle/HOL, built on the generic Isabelle framework, is a closely related successor.

Isabelle

Comparative opinions:

Sail

ISA specification language

https://github.com/rems-project/sail

can export to Isabelle, HOL4, Coq

ACL2

1996.

Not Turing-complete (termination condition)

Epigram

Not Turing-complete (termination condition)

Charity

Not Turing-complete (termination condition). Categorical semantics? [6]

Twelf

Lean

https://www.nature.com/articles/d41586-021-01627-2

https://xenaproject.wordpress.com/2021/06/05/half-a-year-of-the-liquid-tensor-experiment-amazing-developmen

https://news.ycombinator.com/item?id=27559917

https://www.quantamagazine.org/building-the-mathematical-library-of-the-future-20201001/

https://www.quantamagazine.org/lean-computer-program-confirms-peter-scholze-proof-20210728/

SPIN model checker and Promela

Some interesting constructs

'do' (do-od): a looping construct that repeats until a 'break' (or a jump) exits it. Like some forms of 'switch', it has multiple branches (delimited by '::'), and each branch has a guard (separated from the branch code by '->'). On each iteration, if several branch guards are executable (true), one of those branches is chosen non-deterministically; the 'else' guard is executable only when every other guard is false.

bare expressions for blocking: a bare expression used as a statement, such as 'finish == 2', is executable only when it evaluates to true (nonzero), so the process blocks at that point until the condition holds

'assert': assertions work as usual, but in addition the SPIN model checker verifies that the asserted expression cannot be false in any reachable interleaving

'[]' is 'always': eg '[]!deadlock' means 'it is always true that deadlock is false'

'<>' is 'eventually': eg '[](request-resource -> <>granted-resource)' means 'always (if a resource is requested, eventually it is granted)'

Example

from http://spinroot.com/spin/Doc/p40-ben-ari.pdf (section 7):

byte n = 0, finish = 0;

active [2] proctype P() {
  byte register, counter = 0;
  do :: counter == 10 -> break
     :: else ->
             register = n;
             register++;
             n = register;
             counter++
  od;
  finish++
}

active proctype WaitForFinish() {
  finish == 2;
  printf("n = %d\n", n)
}

This is a program with two simultaneously executing processes running identical code ("active [2] proctype P()") and a third simultaneously executing process ("active proctype WaitForFinish()"). Each P process has access to two globals (n and finish) and two process-local variables (register and counter). Each P process effectively runs a 'for' loop with 10 iterations; on each iteration it loads the current value of the global 'n' into the local 'register', then increments 'register', then stores the result back into global 'n'. Before each process terminates it increments global 'finish' (a single statement, so it executes atomically). Because the load-increment-store sequence is not atomic, increments can be lost when the two processes interleave, so the final value of 'n' can be well below the expected 20; finding such interleavings is exactly what SPIN is for.

WaitForFinish blocks until finish == 2, then prints out the value of global "n" to the console.

Links

Z3

MIT license.

ATS

Idris

https://eb.host.cs.st-andrews.ac.uk/writings/idris-tutorial.pdf

NuPRL

Mizar

1989

Otter

1996

F*

"Contemporary provers (built upon consistent Martin-Löf Type Theory, 1972) like Agda, Coq, Lean, F*, Idris are based on Barendregt and Coquand' CoC? with different flavors of infinity universe hierarchies and Calculus of Inductive Constructions. Some of them are automated and some are trying to be and general purpose programming languages with proving facilities." [7]

LEAN

"Contemporary provers (built upon consistent Martin-Löf Type Theory, 1972) like Agda, Coq, Lean, F*, Idris are based on Barendregt and Coquand' CoC? with different flavors of infinity universe hierarchies and Calculus of Inductive Constructions. Some of them are automated and some are trying to be and general purpose programming languages with proving facilities." [8]

Links:

Metamath

Used in GPT-f:

https://arxiv.org/abs/2009.03393

O-CPS

https://groupoid.space/lang/cps/

HTS: Homotopy Type System for Erlang

https://web.archive.org/web/20200920103810/https://github.com/groupoid/homotopy

OM/EXE

https://web.archive.org/web/20170615064335/https://github.com/groupoid/exe/

It appears this project has been shuttered and replaced with another (presumably similar) one: https://web.archive.org/web/20200920103810/https://github.com/groupoid/homotopy

https://github.com/groupoid/homotopy/wiki/Exe

Overview

http://groupoid.space/intro.htm speaks of untyped lambda calculus and LISP etc., and then ML, "founded merely on algebraic datatypes and algebra on higher terms rather than categorical semantic", and then ML being "fixed with categorical methods" in Charity etc., then HOL/Isabelle. It also mentions other 80s and 90s automated theorem provers including Mizar, ACL2, Otter. It then says:

" MLTT

Contemporary provers (built upon consistent Martin-Löf Type Theory, 1972) like Agda, Coq, Lean, F*, Idris are based on Barendregt and Coquand' CoC with different flavors of infinity universe hierarchies and Calculus of Inductive Constructions. Some of them are automated and some are trying to be and general purpose programming languages with proving facilities. "

and

lists Om/EXE's strengths as:

and the Github page describes it as:

"Groupoid Infinity creates a new programming language with dependent types called EXE with small provable dependent core called OM. OM is an implementation of Calculus of Constructions (CoC?), the pure lambda calculus with dependent types. It can be compiled (code extraction) to bytecode of Erlang virtual machines BEAM and LING. EXE is an implementation of Calculus of Inductive Constructions (CiC?) that lives on top of CoC? OM model. You may think of EXE as AST transformation of higher language (with HITs) to OM. "

the Github page also says:

" Technically it is useful to distinguish different layers of EXE:

    Internal Core with record used as structures, modules, and namespaces for the entire language. It transparently translates to OM as Contexts (named types and terms).
    Metaprogramming Encodings layer which allows expanding macros to Internal Core. Macros are used to encoding programming that hides implementation details of CPS, Parigot or Church encodings. However, our EXE encodings are a bit different encodings with categorical semantics, proven in Lean model.
    End-user Top-level EXE Language with powerful primitives data and codata that uses underlying encoding macros. In the future with inductive-inductive types and HITs. Likely Top Language would be a superset of the Internal Core sharing most of the parser."

OM/EXE language

toread: i think a lot of explanation is in here: https://github.com/groupoid/exe/blob/master/doc/proposal.pdf

https://github.com/groupoid/exe/blob/master/doc/encoding.pdf might also be of use but it looks advanced

related?: https://github.com/drstrangelover/om

[9] " DATE: 19 MAR 2016 Compiler Passes

Logic

Macros

Base Library

"

[10] (the following is mostly direct or paraphrased quotes from that page):

Page sections: Core Types, Inductive, Dependent, Base Library, Exe Prelude.

Legend for the type table: N = not, R = recursive, D = dependent, + = inductive (and, presumably, * = coinductive).

todo my note: 'data' is listed as 'inductive' and 'record' is listed as 'coinductive'. E notes that 'sum' is related to the 'data' declaration. Perhaps 'inductive' and 'coinductive' are somehow related to sum and product types? in https://github.com/groupoid/exe/blob/master/doc/proposal.pdf section 'Inductive Types' it says that indeed "We extend the core PTS language with inductive data and coinductive record definitions" and "we call induction recursive sums" and "such kind of recursion on infinite lists (codata...coinductive types) we can call recursive products".
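
To make the sum/product reading concrete, here is a small Lean 4 sketch (my own illustration, not EXE syntax): an inductive 'data' type is a recursive sum of constructor cases, a 'record' is a product observed through projections, and an infinite stream (codata) can be modelled as something you observe one position at a time.

    -- inductive = "recursive sum": a value is exactly one of the constructor cases
    inductive Tree (α : Type) where
      | leaf : Tree α
      | node : α → Tree α → Tree α → Tree α

    -- record = product: a value is observed through its projections
    structure Point where
      x : Nat
      y : Nat

    -- Lean has no primitive coinductive types; an infinite stream ("recursive
    -- product") can be modelled as a function observed one position at a time
    def InfSeq (α : Type) := Nat → α

    def nats : InfSeq Nat := fun n => n
    #eval nats 5   -- 5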

(constructors are listed below; then sometimes ';' followed by other comments)

todo ((?? not in main list on top, but listed under 'inductive' lower down:))

todo ((?? not in main list on top, but listed under 'logic' lower down:))

NR ND +

R ND +

NR D +

R D +

NR ND *

R ND *

NR D *

R D * todo * GROUPOID: ob, hom; 'hom' has a syntactically similar thingee as in SETOID, except that there's two implications instead of one, and at the right of the second is 'groupoid' (so I can't really interpret this the way I interpreted the rest). Listed under 'logic', not 'coinductive', in the bottom. "Groupoid is a multidimensional generalization of Setoid type, which has equalities on equalities and operations between them. Groupoid was introduced in algebraic topology for the purposes of homotopy theory". Wikipedia says "Group with a partial function replacing the binary operation; Category in which every morphism is invertible. A category of this sort can be viewed as augmented with a unary operation, called inverse by analogy with group theory.", and that setoid is a special case.

todo my note: generally the syntax appears to be similar to Haskell, but the things e lists under 'logic' have things that syntactically appear to be 'constructors' but whose types actually seem to be mathematical rules that may be assumed about this type of object; the implications in these types appear to mean logical implication rather than a constructor for a function. This interpretation might break down at 'Groupoid', however, which has two arrows on the right side instead of one, and which just has 'Groupoid' at the far right. My guess is that this is some sort of deep mathy category theory meta Curry–Howard-correspondence-style stuff in which types are literally being unified with logical statements, and that Groupoid (which, you notice, is also the only recursive and dependent and coinductive thing here, and also has itself on the right-hand side of one of the types of its constructors) is more complicated than the rest, making its signature harder to interpret. Note that the DNS domain name is 'groupoid.space', so groupoids are probably some sort of be-all end-all meta thingee in this system. Indeed, [11] says "...the "higher dimensional analogs of sets" correspond to infinity groupoids and that categories should be considered as higher-dimensional analogs of partially ordered sets.".
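
For what it's worth, the 'ob'/'hom' signature reads like the textbook definition of a groupoid: a category in which every morphism is invertible. A rough Lean 4 sketch of that structure with the equational laws omitted (my own illustration, not the EXE definition):

    structure GroupoidSketch where
      ob   : Type                                         -- objects
      hom  : ob → ob → Type                               -- morphisms / proof-relevant "equalities"
      id   : (a : ob) → hom a a                           -- reflexivity
      comp : {a b c : ob} → hom a b → hom b c → hom a c   -- transitivity / composition
      inv  : {a b : ob} → hom a b → hom b a               -- symmetry / inverse
      -- associativity, unit, and inverse laws omitted in this sketch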

todo my note: why does equality only have the reflexivity property, not symmetry and transitivity? Maybe this is some weird homotopy type theory thing.
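
Likely answer: in MLTT-style theories the identity type is an inductive type whose only constructor is refl; symmetry and transitivity are then derived by induction on equality proofs (the J eliminator) rather than postulated. A Lean 4 sketch of the derivations (my own illustration):

    -- `cases h` performs induction on the equality proof: the only case is rfl
    theorem symm' {α : Type} {a b : α} (h : a = b) : b = a := by
      cases h; rfl

    theorem trans' {α : Type} {a b c : α} (h₁ : a = b) (h₂ : b = c) : a = c := by
      cases h₁; exact h₂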

todo my note: ":=" seems to precede giving a bunch of constructors

todo my note: note that IO, AST, and PROCESS don't have explanatory text at the bottom.

todo there are other pages at the Om/EXE website that i didn't quote here b/c i don't understand them enough; there's one that gives type judgement inference rules, and one called 'Lambek encoding' that gives "Categorical Semantics of Inductive Encodings". (they say "We call such encoding in honor of Lambek lemma that leads us to the equality of (co)initial object and (co)limit in the categories of (co)algebras. ")

http://groupoid.space/exe.htm again, mostly direct quotes and paraphrases:

Syntax Tree

    data Exe: * := (om: Om → Exe) (core: Macro → Exe) (eff: Effect → Exe)

Language Forms

    <> ::= #option
    [] ::= #list
     I ::= #identifier
     U ::= * < #number >
     O ::= I | ( O ) | U | O → O | O O | ∀ ( I : O ) → O | λ ( I : O ) → O
     L ::= I | L I
     A ::= O | A → A | ( L : O )
     F ::= ∅ | F ( I : O ) | ()
     E ::= O | E data L : A := F
             | E record L : A < extend F > := F
             | E let F in E
             | E case E [ | I O → E ]
             | E receive E [ | I O → E ]
             | E spawn E raise L := E
             | E send E to E

Calculus

The type system is completely described by the PTS SAR notation, given by Barendregt. Find more info in Axioms and Inference Rules of underlying Om intermediate language.

   record PTS: * :=
       (S: ∀ (n: nat) → star n)
       (A: ∀ (i: nat) → type i (succ i))
       (R: ∀ (i: nat) → ∀ (j: nat) → pitype i j (max i j))

Macrosystem

Om: Om language is the macrosystem terminal language. Language forms of pure Macro or impure Effect types are expanded to the terminal language. For the details please refer to the Om Assembler description.

Om has constructors: star, var, app, arrow, pi

Macro:

"Macro language subsystem is the core of Exe language and dedicated to compile the prelude described in the Core Types document. Om language is the target language of macroexpansions of data and record inductive constructions, let construction and univariable case pattern matching."

data Macro: record, data, let, case

Effects: "Effect syntax extensions defined basic process calculus axioms, IO and exception handling."

RECEIVE, TRY, RAISE, DO are listed at the top but have no details below.

todo ??: not listed above, but listed below under 'effects': "record Process: (Sigma: *) → (X: *) → * := (action: Sigma → X → GenServer X)"

http://groupoid.space/om.htm (the following is mostly direct or paraphrased quotes from that page):

Calculus of Constructions

The Om language is a dependently typed lambda calculus, an extension of Barendregt's and Coquand's Calculus of Constructions with the predicative hierarchy of indexed universes. There is no fixpoint axiom, so there is no infinite term dependence.

Axioms and Inference Rules

All terms respect the ranking A inside the sequence of universes S, and the complexity of a dependent term equals the maximum complexity of the term and its dependency (rule R). The type system is completely described by the following PTS notation (due to Barendregt):

    S = *{n}, n : nat
    A = *{i} : *{i+1}
    R = *{i} -> *{j} : *{max i j}
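
For intuition, Lean 4's predicative Type hierarchy obeys essentially the same A and R rules (my own illustration, ignoring Lean's impredicative Prop):

    universe i j

    -- A: each universe is a term of the next one, *{i} : *{i+1}
    example : Type (i + 1) := Type i

    -- R: a function space between universes i and j lands in universe max i j
    example (A : Type i) (B : Type j) : Type (max i j) := A → B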

Syntax

The untyped lambda language is the simplest language used in Om to generate backend programs. This subset of Om is used as the output of type erasure.

    I ::= #identifier
    T ::= I | ( T ) | T T | λ I → T

The terms of Om consist of ℕ-indexed stars, variables, applications, lambda-abstractions and universal quantifications (Pi-types). This language is called the Calculus of Constructions and exists in various syntaxes. Om supports Morte's CoC syntax.

   <> ::= #option
    I ::= #identifier
    U ::= * < #number >
    O ::= U | I | ( O ) | O O | λ ( I : O ) → O
                      | O → O | ∀ ( I : O ) → O

AST

  data om: star (nat), var (name), app (om -> om), lambda (name -> om -> om), arrow (om -> om), pi (name -> om -> om)
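
A rough Lean 4 rendering of that AST (my own; I'm assuming names are plain strings) makes the shape of the term language clearer:

    abbrev VarName := String

    inductive OmTerm where
      | star  : Nat → OmTerm                          -- *{n}, an indexed universe
      | var   : VarName → OmTerm                      -- variable
      | app   : OmTerm → OmTerm → OmTerm              -- application
      | lam   : VarName → OmTerm → OmTerm → OmTerm    -- λ (x : A) → body
      | arrow : OmTerm → OmTerm → OmTerm              -- non-dependent A → B
      | pi    : VarName → OmTerm → OmTerm → OmTerm    -- ∀ (x : A) → B

    -- the polymorphic identity λ (a : *{0}) → λ (x : a) → x as a value of this AST
    def idTerm : OmTerm := .lam "a" (.star 0) (.lam "x" (.var "a") (.var "x"))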

HTS, Pure, Infinity, EXE, Om project archeology

I think 'Om' changed its name to 'pure'; https://github.com/groupoid/om redirects to this: https://github.com/groupoid/pure (in the above README, and also in this old fork, you can see the old name Om: https://github.com/drstrangelover/om)

That repo points to this webpage:

https://groupoid.space/pure/

I think the old version of this page is: https://web.archive.org/web/20170102104151/http://groupoid.space:80/om/

The Om README says that this is a low-level intermediate target language for a higher-level language called EXE.

I think 'EXE' has changed its name to 'homotopy' or 'Homotopy type system (HTS)'; https://github.com/groupoid/exe redirects to this:

https://github.com/groupoid/homotopy

which points to this page;

https://groupoid.space/homotopy/

which points to this paper:

https://github.com/groupoid/groupoid.space/blob/master/articles/mltt/mltt.pdf

I think the old version of this page is: https://web.archive.org/web/20161009051404/http://groupoid.space/exe.htm

One of the old pages pointed to: https://web.archive.org/web/20161009032601/http://groupoid.space:80/types.htm

The old repos had some PDF documentation that was not archived. This seems to be present in the version history of the current repo but was deleted in a commit named 'remove Paul Work completely' on Jul 13, 2018 in the Om repo. I'm guessing someone named Paul left the project (or was expelled) and either demanded to rip out their contributions, or the project leader decided they didn't like the way Paul was doing things and ripped it out, or the project leader decided Paul's contributions were not enough to be worth sharing authorship credit, or the project leader couldn't get Paul to affirmatively agree to keep their stuff in and in order to prevent future trouble decided it was safest to rip it out, or some combinations of these. But, for me, the PDF docs were useful, so here is where it was. For Om:

https://raw.githubusercontent.com/groupoid/pure/1133b524a241bf381f10699b9c08d05acf81a99a/doc/om.pdf

(this is from the last version of Om before the 'remove Paul Work completely', which is: https://github.com/groupoid/pure/tree/1133b524a241bf381f10699b9c08d05acf81a99a/doc ; the repo also has latex source)

Unfortunately the 'groupoid/homotopy' repo appears to be a clean break from the old 'groupoid/exe' repo, with an initial commit on Mar 14, 2020, even though 'groupoid/exe' redirects to it.

https://github.com/groupoid/homotopy appears to be, rather, an evolution of https://github.com/groupoid/infinity, as old version of the fork from https://github.com/5HT/hts/ refer to it as 'Infinity Base Library' (https://github.com/5HT/hts/tree/3f8a4fc3262871c0fe50cf4ef592b60a33537b93), although an older version of it refers to it as 'Infinity language': https://github.com/5HT/hts/tree/50890456f030c003f39c04a6a3d26541842300c8

Commit https://github.com/5HT/hts/commit/a3c070de919ba85c9e6014159beac455a8445ee7 is 'cleanup legacy docs from infinity repo' and removes the old PDFs encoding.pdf and identity.pdf:

https://github.com/5HT/hts/blob/26ef1aac59de757d3732a4646d7a9e046cc5bd87/doc/encoding.pdf https://raw.githubusercontent.com/5HT/hts/26ef1aac59de757d3732a4646d7a9e046cc5bd87/doc/encoding.pdf

https://github.com/5HT/hts/blob/26ef1aac59de757d3732a4646d7a9e046cc5bd87/doc/identity.pdf https://raw.githubusercontent.com/5HT/hts/26ef1aac59de757d3732a4646d7a9e046cc5bd87/doc/identity.pdf

There's a lot of old versions of the README in old versions of https://github.com/5HT/hts which help, too.

e.g.

" Infinity Language

The idea of Infinity Language came from the need to unify and arrange different calculi as extensions to the core of the language with dependent types (or MLTT core). While working on distributed systems, data storages and stream processing, we discovered that two core languages: pi calculus and stream calculus were connected and being driven by a language with quantifiers as primitives.

Infinity is a dependently typed language for stream processing, distributed processing and text processing. We use Cubical to prototype the standard library including the theories needed to prove code invariants instead of testing them. Infinity is a source to source translator which lets you use different languages at runtime: Erlang, Rust, Clang, Futhark, Julia, etc.

Infinity language presents a solid and unified way of modeling inter-language computations and inter-system communications within a single language with compact core. We strive for utilizing parallel hardware such as GPU and SSE/AVX SIMD instructions and providing a robust and verified distributed environment with process and channels runtime. ... " -- https://github.com/5HT/hts/tree/82e9e5ac4bdb012bca5529f4ae5402f6f2bb7962

e.g. (another fork)

"

Layered Core

The idea of Infinity Language came from the need to unify and arrange different calculi as extensions to the core of the language with dependent types (or MLTT core). The language has pure functions, infinity number of universes, fixpoint, homotopy interval [0,1], inductive types and pi/sigma as a core.

PTS

Pure Type Systems are type system based only on Pi-type. Infinity number of universes was added to avoid paradoxes. Sigma-type is derivable in PTS.

MLTT

With usage of dependent fibrations and Pi,Sigma-types Martin-Lof introduced a Type Theory. Which is known to be sound and consistent with only one impredicative contractable bottom space and predicative hierarchy of infinitely many spaces. It can be used both as an operational calculus and a logical framework. Inductive data types is an extension of MLTT with recursors for case analysis and induction for reasoning.

HTS

Homotopy Type System is an extension of MLTT with path types, composition on paths, gluening eliminators. It is used to model higher inductive types and stands as a contemporary math foundation.

You can read about Infinity Language layers at http://groupoid.space/mltt/infinity/ " -- https://github.com/adept/infinity

looks like the older EXE stuff started being removed at commit 'merge exe and infinity'

https://github.com/5HT/hts/commit/988f7b8e9093637be8aeab4c1976a01c7b773bbb

the commit immediately before that is:

https://github.com/5HT/hts/tree/535f6cdc50692efca62b8d644eb3faa806566bfc

this contains the old README, and the old docs including proposal.pdf:

https://raw.githubusercontent.com/5HT/hts/535f6cdc50692efca62b8d644eb3faa806566bfc/doc/proposal.pdf

The old 'infinity' page on the groupoid.space website seems somewhat similar to the current https://groupoid.space/homotopy/ page:

https://web.archive.org/web/20190102060830/https://groupoid.space/mltt/infinity/

Pure/Om and Henk and Morte

The docs mention that the 'Pure' (used to be called 'Om') intermediate language, which is sometimes referred to as a sort of assembly language, shares syntax or is similar to Henk and Morte, with Morte being identified as an implementation of Henk.

The Henk paper has a very good introduction to some of this type theory stuff and explains how it relates to programming languages:

https://www.microsoft.com/en-us/research/wp-content/uploads/1997/01/henk.pdf

The Henk paper makes it clear that Henk is a realization of a (set of?) type system(s) called Pure Type Systems (PTS) ( https://en.wikipedia.org/wiki/Pure_type_system ), which is a generalization of the older "lambda cube" ( https://en.wikipedia.org/wiki/Lambda_cube ). This explains the use of the words Pure and PTS in 'Pure'/'Om'.

Other docs explain further:

"We adress the problem of missing a programming language withdependent types for Erlang virtual machines. . Whileother platforms has dependently types languages: Scala (JVM), F*(CLR), Agda and Idris (GHC); Erlang virtual machines (LING andBEAM) lacks support of such languages. We creates a PL based onDependent Type Theory and Categorical Semantics of Lambda Encodings(called EXE), with small core of Pure Type System (called OM) forErlang virtual machines. OM is an implementation of Calculus ofConstructions (CoC?), the pure lambda calculus with dependent typesenriched with infinite universes. EXE is an implementation of Calculusof Inductive Constructions (CiC?) that compiles to OM. The core problemof building a dependently typed language is a semantics of inductivetypes encoding. We propose a system of pluggable encoding backendswhich are based on verified categorical models. ... OM and EXE code can be compiled (by codeextraction and type erasure) to bytecode of Erlang virtual machinesBEAM and LING ... Lambda AssemblerIntermediate? language resemble both the Henk[?]theory of PTS CoC? language and can be tooled with its Morte[?] implementation

...

Exe Language

We extend the core PTS language with inductive data and coinductive record definitions. This constructions is used to model any type in the universe hierarchy. Top level language supports only (co-)inductive definitions that compiles directly to CoC lambda assembler using pluggable encodings. " -- https://raw.githubusercontent.com/5HT/hts/535f6cdc50692efca62b8d644eb3faa806566bfc/doc/proposal.pdf

" From PTS to HTS. We want to have flexible detachable layers on top of PTS core.Then Sigma for proving. Then well-founded trees or polynomial functors as known as dataand record. Then higher path types, interval arithmetic, glue and comp for HIT. Eachlayers is driven by differenth math, the common in only the method – category theory. " -- https://web.archive.org/web/20200920185340/https://raw.githubusercontent.com/5HT/hts/26ef1aac59de757d3732a4646d7a9e046cc5bd87/doc/encoding.pdf

Perhaps the reason they used to use the name 'infinity' is that they use an infinite number of 'universes':

" Our variant of intuitionistic type system has aninfinitehierarchy of universes, thehierarchy isnon-cumulative, the universe of propositions isimpredicativewhileuniverses of values, types and higher types arepredicative. The next paragraphs brieflydescribe the reasons for such design choices.

While there are variants of MLTT with finite number of universes (see for example thelambda cube and PTS formulations of Calculus of Constructions in [?]), most modernprovers have an infinite hierarchy of universes as it’smore expressive. ... The first universe, the universe of propositions, has impredicative quantification whileall other universes are predicative. This setup is known to simplify the type system whilekeeping it free of paradoxes and thussound as a formal logic." -- https://web.archive.org/web/20200920185340/https://raw.githubusercontent.com/5HT/hts/26ef1aac59de757d3732a4646d7a9e046cc5bd87/doc/encoding.pdf
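
Lean 4 makes the same design choice, which gives a quick way to see what the quote means (my own illustration): quantifying over all propositions still yields a Prop (impredicativity), while quantifying over a Type universe pushes the result one level up (predicativity).

    -- impredicative: a quantification over *all* propositions is itself a Prop
    example : Prop := ∀ (p : Prop), p → p

    -- predicative: quantifying over Type 0 lands the Pi type in Type 1
    example : Type 1 := (α : Type 0) → α → α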

MLTT refers to "Martin-Lof Type Theory" -- https://raw.githubusercontent.com/groupoid/groupoid.space/master/articles/mltt/mltt.pdf

" This paper presents the design of theOmlanguage and an implementation of its type checker and bytecode extractorto Erlang. Om is an intermediate language based on a pure type system with the infinite number of universes, so it is known to beconsistent in dependent type theory. ThisOmlanguage? is a core part of the language family for verification purposes. The typechecker is built upon MLTT principles and can be switched between predicative and impredicative hierarchies of universes. Thissystem is expected to be usable as trusted core for certified applications which could be run inside Erlang virtual machines LINGand BEAM. The syntax is compatible with Morte language and supports its base library, however, it extends the indexed universes.We show how to program in this environment and link with Erlang inductive and coinductive free structures. A very basic preludelibrary is shipped as a part of the work with infinite I/O operations which immediately enablesOmfor long living runtime verifiedapplications. We briefly describe the top-level language which compiles to pure type system (PTS) core as the further work. Asthe results, we will show lambda evaluation performance on BEAM virtual machine. PTS approach take its roots starting fromAUTOMATH, MLTT, CoC?, but extracting to performant untyped lambda interpreters is a novel way of cerifying applications. Asa full vision we propose a stack of langauges, whereOmtakes a central part.

Introduction

IEEE standard and ESA regulatory documents define a number of tools and approaches for verification and validation processes. The most advanced techniques involve mathematical languages and notations. The age of verified math was started by de Bruijn's AUTOMATH prover and Martin-Löf[1]'s type theory. Today we have Coq, Agda, Lean, Idris, F* languages which are based on Calculus of Inductive Constructions or CiC[2]. The core of CiC is Calculus of Constructions or CoC[3]. Further development has led to Lambda Cube[4] and Pure Type Systems by Henk[5] and Morte. Pure Type Systems are custom languages based on CoC with single Pi-type and possibly other extensions. Notable extensions are ECC, ECC with Inductive Types[6], K-rules[7]. The main motivation of Pure Type Systems is an easy reasoning about core, strong normalization and trusted external verification due to compact type checkers. A custom type checker can be implemented to run certified programs retrieved over untrusted channels. The applications of such minimal cores are 1) Blockchain smart-contract languages, 2) certified applications kernels, 3) payment processing, etc. ...

System Architecture

Om as a programming language has a core type system, the PTS∞ — the pure type system with the infinite number of universes. This type system represents the core of the language. Higher languages form a set of front-ends to this core. Here is example of possible languages: 1) Language for inductive reasoning, based on CiC with extensions; 2) Homotopy Core with interval [0,1] for proving J and funExt; 3) Stream Calculus for deep stream fusion (Futhark); 4) Pi-calculus for linear types, coinductive reasoning and runtime modeling (Erlang, Ling, Rust). These languages desugar to PTS∞ as an intermediate language before extracting to target language. Not all terms from higher languages could be desugared to PTS. As was shown by Geuvers[8] we cannot build induction principle inside PTS, we need a fixpoint extension to PTS. And also we cannot build the J and funExt terms. But still PTS is very powerful, it's compatible with System F libraries. The properties of that libraries could be proven in higher languages with Induction and/or [0,1] Homotopy Core. Then runtime part could be refined to PTS, extracted to target and run in an environment. We see two levels of extensions to PTS core: 1) Inductive Types support; 2) Homotopy Core with [0,1] and its eliminators. We will touch a bit this topic in the last section of this document

...

Consistent Pure Type System as Intermediate Language

The Om language is a dependently typed lambda calculus PTS∞, an extension of Coquand's Calculus of Constructions[3] with the predicative hierarchy of indexed universes. There is no fixpoint axiom, so there is no infinite term dependence, the theory is fully consistent and has strong normalization property. ...

The Om language is based on Henk languages described first by Erik Meijer and Simon Peyton Jones in 1997[5]. Later on in 2015 Morte implementation of Henk design appeared in Haskell, using the Boehm-Berarducci encoding of non-recursive lambda terms. It is based only on one type constructor Π, its intro λ and apply eliminator, infinite number of universes, and β-reduction. The design of Om language resembles Henk and Morte both in design and in implementation. This language intended to be small, concise, easy provable and able to produce the verifiable piece of code that can be distributed over the networks, compiled at the target platform with a safe linkage

... Om syntax is compatible with CoC presented in Morte and Henk languages. However, it has extension in a part of specifying universe index as a Nat number " -- https://raw.githubusercontent.com/groupoid/pure/1133b524a241bf381f10699b9c08d05acf81a99a/doc/om.pdf

" The PTS∞ programming language is a higher-order dependently typed lambda calculus, an extension of Coquand's Calculus of Constructions with the predicative/impredicative hierarchy of indexed universes. Universes

The infinite hierarchy of universes allows to avoid paradoxes in type theory. " -- https://groupoid.space/pure/

" Exe Language is built on top of CoC? pure type system, predicative universes and inductive constructions you may find in other MLTT provers like Lean, Coq, Idris, Agda, and F*. As algebraїс prover, Exe supports data and record polynomial functors as fundamental language constructions implemented in a form of hygienic macros over the AST. Everything in Exe is encoded as inductive construction: from the equality property up to Erlang VM process, everything is a type that could have constructors and/or projections. Both Exe language and its macrosystem are built using AST types, described in this document.

...

Om

Om language is the macrosystem terminal language. Language forms of pure Macro or inpure Effect types are expanded to the terminal language " -- https://web.archive.org/web/20161009051404/http://groupoid.space/exe.htm

" Infinity Language

The idea of Infinity Language came from the need to unify and arrange different calculi as extensions to the core of the language with dependent types (or MLTT core). While working on distributed systems, data storages and stream processing, we discovered that two core languages: pi calculus and stream calculus were connected and being driven by a language with quantifiers as primitives.

Infinity is a dependently typed language for stream processing, distributed processing and text processing. We use Cubical to prototype the standard library including the theories needed to prove code invariants instead of testing them. Infinity is a source to source translator which lets you use different languages at runtime: Erlang, Rust, Clang, Futhark, Julia, etc.

Infinity language presents a solid and unified way of modeling inter-language computations and inter-system communications within a single language with compact core. We strive for utilizing parallel hardware such as GPU and SSE/AVX SIMD instructions and providing a robust and verified distributed environment with process and channels runtime. ... " -- https://github.com/5HT/hts/tree/82e9e5ac4bdb012bca5529f4ae5402f6f2bb7962

https://www.researchgate.net/profile/Maksym_Sokhatskyi/publication/332514322_Addendum_I_Pure_Type_System_for_Erlang/links/5cb8e9314585156cd7a2580d/Addendum-I-Pure-Type-System-for-Erlang.pdf

See alsos:

Morte: http://www.haskellforall.com/2014/09/morte-intermediate-language-for-super.html

Another doc linked on the groupoid 'homotopy', which is rather hard for me to understand: https://github.com/groupoid/groupoid.space/blob/master/articles/mltt/mltt.pdf

Old homotopy project wiki pages: https://github.com/groupoid/homotopy/wiki

See also https://groupoid.space/lang/cps/

" Designed for L1 caches

The motivation for building an interpreter that can run on L1 cache sizes (limited to 64KB of code and data) is based on success of LuaJIT, V8, HotSpot, and vector languages such as K and J. If we can build a really fast VM for such interpreter (compact aligned bytecode) and make it stick to L1 sizes with enabled AVX instruction set then it can outperform any reasonable alternative.

This approach was chosen for building O-CPS interpreter in Rust. " -- https://groupoid.space/lang/cps/

TLA+

---

KIV

http://www.isse.uni-augsburg.de/en/software/kiv/

"Microsoft's Dafny -- an imperative, non-functional (I think) language -- is a very popular choice in verification challenges (alongside tools like KIV and Why3). " -- [13]

Dafny

Tutorials:

See also Wuffs and SAL

Ott

http://www.cl.cam.ac.uk/~pes20/ott/

Alloy

http://alloytools.org/

Sharing and Learning Alloy on the Web https://lobste.rs/s/xceayc/sharing_learning_alloy_on_web_2019

Statebox

https://statebox.org/

"formally verified process language"

VeriFast

https://github.com/verifast/verifast https://zenodo.org/record/1068185

Ivory

https://ivorylang.org/

Wuffs

https://github.com/google/wuffs/blob/main/doc/wuffs-the-language.md https://github.com/google/wuffs

PomPom

https://github.com/caotic123/PomPom-Language

discussion:

Kind

https://github.com/Kindelia/Kind

(used to be named Formality?)

Discussion:

Core: "The smallest core. Check FormCore?.js or Core.kind. Both are < 1000-LOC complete implementations!"

P

https://github.com/p-org/P

Formal Modeling and Analysis of Distributed Systems

comment on the talk by [14]: "Topic: Testing distributed systems is extremely difficult. Formal methods is more thorough, but programmers have trouble using it. P helps bridge that gap by being an FM that looks like a programming language. Developers model their system as message-passing state machines, which can then be model checked.

Probably the most useful talk for me. I’ve had my eye on P for a while, but never went deep. What surprised me most is that the specifications are also state machines. Unlike most FMs, where the invariant can be a mathematical expression, you have to write an implementation of the invariant. I feel like as an experienced specifier I’d find that extremely limiting, but if it makes things more approachable for programmers that might be a worthwhile tradeoff. The talk certainly inspired me to play more with it.

After the talk I asked him why he named it P. He said it was because it was one of the only one-letter languages remaining. He now regrets the name. "

Proof language Links