proj-plbook-plChModelsOfComputation


Table of Contents for Programming Languages: a survey

Chapter : models of computation

Turing-equivalent systems

http://link.springer.com/chapter/10.1007%2F978-3-540-74913-4_120

" Turing Complete Catalytic Particle Computers

    Anthony M. L. Liekens,
    Chrisantha T. Fernando


The Bare Bones language is a programming language with a minimal set of operations that exhibits universal computation. We present a conceptual framework, Chemical Bare Bones, to construct Bare Bones programs by programming the state transitions of a multi-functional catalytic particle. Molecular counts represent program variables, and are altered by the action of the catalytic particle. Chemical Bare Bones programs have unique properties with respect to correctness and time complexity. The Chemical Bare Bones implementation is naturally suited to parallel computation. Chemical Bare Bones programs are constructed and stochastically modeled to undertake computations such as multiplication. "

von Neumann architectures, von Neumann bottleneck, massively parallel computing, computational architecture of the brain, 100 step rule

what else?

see also https://en.wikipedia.org/wiki/Computability#Formal_models_of_computation , https://en.wikipedia.org/wiki/Turing_machine_equivalents, https://en.wikipedia.org/wiki/Category:Models_of_computation, https://www.google.com/search?client=ubuntu&channel=fs&q=minimal+turing-complete+programming+language&ie=utf-8&oe=utf-8

http://programmers.stackexchange.com/questions/132385/what-makes-a-language-turing-complete

http://cs.stackexchange.com/questions/10197/a-program-that-cannot-be-written-in-simply-typed-lambda-calculus-but-only-in-l?rq=1

" up vote 10 down vote accepted

I always though that μ-recursive functions nailed it. Here is what defines the whole set of computable functions; it is the smallest set of functions containing resp. closed against:

    The constant 0 function
    The successor function
    Selecting parameters
    Function composition
    Primitive Recursion¹
    The μ-operator (look for the smallest x such that...)

Check above link for details; you see that it makes for a very compact programming language. It is also horrible to program in -- no free lunch. If you drop any of those, you will lose full power¹, so it is a minimal set of axioms.

You can translate those quite literally into basic syntactical elements for WHILE programs, namely

    The constant 0
    Incrementation _ + 1
    Variable access x
    Program/statement concatenation _; _
    Countdown loops for ( x to 0 ) do _ end
    While loops while ( x != 0 ) do _ end

Here it is obvious that you can drop 5.; what remains is a minimal set of operations your language has to support if it is to be Turing complete.

¹ You should be able to drop primitive recursion; I am not sure whether that causes technical issues, though. Or, maybe everything breaks as you can not even add two numbers anymore.


Without primitive recursion, how do you define, say, addition? – Gilles♦ Apr 2 '12 at 17:14

The μ-operator is defined in terms of primitive recursion (i.e. you should not have an operator under an operator). So you can't do anything without primitive recursion! Thanks for the comparison with the WHILE language, never thought about that! – Fabio F. Apr 15 '12 at 21:20 " -- http://cs.stackexchange.com/questions/991/are-there-minimum-criteria-for-a-programming-language-being-turing-complete
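To make the quoted basis concrete, here is a minimal sketch of the μ-recursive operators in Python (the names zero, succ, proj, compose, prim_rec, and mu are mine, chosen for illustration):

    # a minimal sketch of the mu-recursive basis; all functions map
    # non-negative integers to non-negative integers

    def zero(*args):
        return 0

    def succ(n):
        return n + 1

    def proj(i):
        # selecting parameters: returns the i-th argument (0-indexed)
        return lambda *args: args[i]

    def compose(h, *gs):
        # function composition: f(x...) = h(g1(x...), ..., gk(x...))
        return lambda *args: h(*(g(*args) for g in gs))

    def prim_rec(f, g):
        # primitive recursion: h(0, x...) = f(x...);
        # h(n+1, x...) = g(n, h(n, x...), x...)
        def h(n, *args):
            acc = f(*args)
            for i in range(n):
                acc = g(i, acc, *args)
            return acc
        return h

    def mu(f):
        # the mu-operator: the smallest n such that f(n, x...) == 0;
        # the search may run forever, which is where Turing power enters
        def h(*args):
            n = 0
            while f(n, *args) != 0:
                n += 1
            return n
        return h

    # addition via primitive recursion:
    # add(0, y) = y; add(n+1, y) = succ(add(n, y))
    add = prim_rec(proj(0), compose(succ, proj(1)))
    assert add(3, 4) == 7

As the answer notes, programming in this basis is painful, but those six operators really are enough.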

" The combinators S and K where, (S x y z)=(x z (y z)) and (K x y)=x are sufficient to express any (closed) lambda term, therefore any computable function. See this wikipedia page for details.

In fact, the lambda term X=λx.((x S) K) is a sufficient basis to express all lambda terms. See later in the same wikipedia page. "

interesting link regarding the SKI combinatory calculus: http://brandon.si/code/do-applicative-functors-generalize-the-s-k-combinators/

this link has an example of the use of the Y combinator: http://raganwald.com/2008/05/narcissism-of-small-code-differences.html

other small basis sets of combinators: https://en.wikipedia.org/wiki/Iota_and_Jot

Elements of models of computation

state (Turing tape, lambda calculus variables, combinatory calculus explicit state argument)

isolated point(s) of execution (Turing head, lambda calc and combinatory calc current reduction, but not in cellular automata)

pointers (or references): (indirect addressing in most assembly languages; self-modifying code in the Parallax Propeller; Turing self-modifying code? lambda calc variables? combinatory calc eval?)

needs: pointers OR self-modifying code OR apply with variables OR eval ?

similarly, needs: goto OR structured programming (see the structured program theorem) (call stack) OR eval OR recursion?

call stack? present in lambda calc certainly. is this merely a convenience elsewhere?

another writeup

todo: this is partially redundant with [1] section 'Tangent: Turing machines do not model interactive computation'

first, an aside; when Turing machines and lambda calculus were defined, there was a lot of batch processing; afaict the main way computers worked was you set up the program, put in your data, ran the program, and got a result out; my point is that the computer didn't need to provide results to the user and then conditionally respond to user input in the middle of the computation; that is to say, it didn't have to do I/O in the middle of the program (i could be wrong about this though). The theory of computation, and the universality of it (the equivalence of Turing machines, lambda calculus, and other models of computation, which implies there is one unified idea of 'computation'), is developed from the point of view of the computer as a function from (all of) its input to (all of) its output. If the user input comes at different times, we can just map that into an array of inputs, where the array dimension represents the time the input came in, and similarly for outputs; but this doesn't solve the problem, because according to the typical definition of computation, the computation doesn't have to start until all of the inputs are in, and doesn't have to provide any output until it terminates.

This detail is easy to miss because the Turing machine formalism is easily adapted to handle I/O: memory-map the I/O, so that reading certain tape locations tells you about the incoming input, and writing to certain tape locations causes output. However, this is a 'bonus' beyond what is required by the standard definition of a universal computer. This distinction (between the requirements for a formal machine model to be merely capable of batch computation, or also capable of interactive computation) becomes more apparent when you look at the lambda calculus model.

In the lambda calculus, functions are just mathematical functions, variables are immutable, and execution proceeds by mathematical substitution. There is no notion of a variable being one thing at one time and another thing later; time is not in the model. To get I/O, people typically introduce 'side effects', meaning that evaluating certain primitive expressions causes output; and also introduce other special expressions for input: when you evaluate these other expressions, the value you get depends on the input (and if you later evaluate the same expression again, you may get a different result if there is different input).

This causes the order of expression evaluation to become important (as it is now situated in time). And it means that you cannot reason about lambda calculus programs with input and side effects the same way you can reason about the corresponding mathematical expressions. For example, mathematical functions have a property called 'referential transparency', which means that if f(x) = a, then wherever you see 'f(x)', you can substitute 'a', and this doesn't change the result; this isn't true in the presence of either side effects, or of functions which can return different things when called multiple times. So these expressions can no longer be simply interpreted as ordinary mathematical functions; to distinguish them from these new possibly-side-effectful pseudo-functions, ordinary mathematical functions are called 'pure functions', defined as functions that have no observable side effects and return the same thing each time you call them; any expression composed only of pure functions has the referential transparency property mentioned above.

There are various ways to attenuate these issues; for example, Haskell's monads. In Haskell, the 'monadic' parts of the program, which contain all I/O instructions, are a wrapper around definitions of pure functions, allowing the programmer to easily see which expressions they can think of as just ordinary math (namely, the pure ones). The monadic part of the program is built from side-effectful and input-reading expressions; however, the monads force the evaluation order of these expressions by introducing data dependencies (e.g. if f(x) is defined as f(x) = g(x, y), and y is defined as y = h(z), and you don't have any other way of computing f(x), then you're going to have to evaluate z first, then use h(z) to evaluate y, and then use g(x,y) to evaluate f(x); monads do something not quite like this but intuitively similar). This is complicated stuff and i haven't explained it fully. But it's good to keep in mind that this complexity is only necessary for I/O, and that when people say that lambda calculus is simple and of equivalent power to Turing machines, they mean lambda calculus without side effects or input expressions, and that it is equivalent to Turing machines only in the sense of the traditional formal definition of computation, which is BATCH computation, not interactive computation.
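A loose sketch of the data-dependency idea in Python (this is only an analogy, not Haskell's actual IO machinery; the names pure and bind are merely suggestive): an I/O 'action' is represented as a thunk, and bind chains actions so that each step consumes the previous step's result, which pins down the evaluation order.

    # an "action" is a zero-argument function (a thunk) that performs
    # I/O when run; building actions does not run them

    def pure(value):
        return lambda: value

    def bind(action, f):
        # run `action`, feed its result to `f`, run the action f returns;
        # this data dependency is what forces the order
        return lambda: f(action())()

    def get_line():
        return lambda: input()

    def put_line(s):
        def run():
            print(s)
        return run

    # analogous to: echo = getLine >>= putLine
    echo = bind(get_line(), put_line)
    # nothing has happened yet; echo() would read a line and print it back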

the key elements are: (1) abstraction: a way for the same code to behave differently depending on its inputs (2) extensible memory: a way to store and retrieve arbitrarily much data (3) conditionals: a way to execute different code depending on the data (4) looping: a way for the same code to be executed arbitrarily many times

some key models of computation are: turing machine, lambda calculus, combinatory calculus, mu-recursion, cellular automata, logic programming, uniform families of logic circuits (logic gates and flip-flops), imperative languages with GOTO or WHILE, discrete-time neural networks (todo: what to do about analog time?)

turing machine (and conventional assembly languages): (1) abstraction is provided by the Turing tape; the Turing head can behave differently depending on what data is on the tape underneath it (2) extensible memory is provided by the Turing tape (3) conditionals are provided because the code is on the tape, and the head can move to different parts of the tape (4) looping is provided because the code is on the tape, and the head can move to different parts of the tape
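A minimal sketch of this in Python (the transition table below is a made-up example that appends a 1 to a unary number; the tape is a dict so it can grow without bound):

    def run_tm(rules, tape, state='A', pos=0, blank='_'):
        tape = dict(enumerate(tape))
        while state != 'HALT':
            symbol = tape.get(pos, blank)
            # abstraction: behavior depends on the symbol under the head
            write, move, state = rules[(state, symbol)]
            tape[pos] = write
            pos += 1 if move == 'R' else -1
        return ''.join(tape[i] for i in sorted(tape))

    # scan right over the 1s, write a 1 on the first blank, halt
    inc = {('A', '1'): ('1', 'R', 'A'),
           ('A', '_'): ('1', 'R', 'HALT')}
    assert run_tm(inc, '111') == '1111'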

lambda calculus: (1) abstraction is provided by variables (and variable substitution during function application) (2) extensible memory is provided by the fact that each single variable can hold an arbitrarily complex function (3) conditionals are provided by having the 'truth value' itself be a function (a "Church boolean") which takes two arguments, such that "False" returns one of those arguments but "True" returns the other one; then "if" can be constructed by just passing the two branches to the predicate value (4) looping is provided by functions that can return their arguments applied to themselves, such as the Y combinator
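A sketch of (3) and (4) in Python: Church booleans, and looping via the Z combinator (the strict-evaluation variant of the Y combinator, needed because Python evaluates arguments eagerly):

    # a Church boolean is a function that chooses between two arguments,
    # so "if" is just function application
    TRUE  = lambda t: lambda f: t
    FALSE = lambda t: lambda f: f
    IF = lambda pred: lambda t: lambda f: pred(t)(f)
    assert IF(TRUE)('then')('else') == 'then'

    # the Z combinator: self-application gives unbounded recursion
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(
                  lambda x: f(lambda v: x(x)(v)))
    # factorial without any named recursion (Python's native if and *
    # are used in the body for brevity)
    fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
    assert fact(5) == 120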

combinatory calculus (and Nock): (1) abstraction is provided by expressions (todo, explain this better) (2) extensible memory is provided by the fact that the 'state' can store an arbitrarily complex expression (3) conditionals are provided by the selection instruction, combined with the ability to store code in the 'state' (todo, explain this better) (4) looping is provided by the Nock instruction, which allows one to store code in the 'state', duplicate it, and execute both copies, with one copy getting the result of executing the other copy; the second copy still has the code in its own 'state' and so can repeat this process (todo, explain this better)

mu-recursion: (1) abstraction is provided by variables (2) extensible memory is provided by the fact that each single variable can hold an arbitrarily large integer (3) conditionals are provided by projection (4) looping is provided by mu
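Continuing the Python sketch of mu-recursion from earlier on this page (this reuses zero, succ, proj, compose, and prim_rec defined there), here is point (3), a conditional built only from projection and primitive recursion:

    # isz(0) = 1, isz(n+1) = 0: an is-zero test
    one = compose(succ, zero)
    isz = prim_rec(one, compose(zero, proj(0)))

    # sel(n, a, b) = a if n == 0 else b: selection via projection
    sel = prim_rec(proj(0), proj(3))
    assert sel(0, 10, 20) == 10 and sel(5, 10, 20) == 20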

cellular automata: (1) abstraction is provided by the transition rules for cells, which can depend on the values of neighboring cells (2) extensible memory is provided by the values of cells (3) conditionals: todo (4) looping: todo
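A sketch of an elementary cellular automaton step in Python; Rule 110 is the standard example of a Turing-complete cell transition rule. Conditionals and looping are not primitives here; in the universality constructions they have to be engineered out of interactions of propagating patterns ('gliders'), which is part of why those two entries are still todo.

    # Rule 110: the next value of a cell is a pure function of the
    # (left, self, right) neighborhood
    RULE_110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
                (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

    def step(cells, rule=RULE_110):
        # fixed-width approximation of an unbounded row (edges read as 0)
        padded = [0] + list(cells) + [0]
        return [rule[tuple(padded[i-1:i+2])]
                for i in range(1, len(padded) - 1)]

    row = [0, 0, 0, 0, 0, 0, 0, 1]
    for _ in range(5):
        print(''.join('#' if c else '.' for c in row))
        row = step(row)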

logic programming: todo

uniform families of logic circuits: (1) abstraction is provided because logic gates and flip-flops each behave differently depending on their inputs (2) extensible memory is provided by the uniform family assumption, that is, the idea that for any given input, there is some member of the logic circuit family that is big enough for the computation required (3) conditionals are provided by logic gates such as AND gates (4) looping is provided by the uniform family assumption
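A small Python sketch of point (3): a 2-to-1 multiplexer, the circuit-level form of a conditional, built from AND/OR/NOT gates.

    AND = lambda a, b: a & b
    OR  = lambda a, b: a | b
    NOT = lambda a: 1 - a

    def mux(sel, a, b):
        # the circuit-level conditional: a if sel == 1 else b
        return OR(AND(sel, a), AND(NOT(sel), b))

    assert mux(1, 0, 1) == 0 and mux(0, 0, 1) == 1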

imperative language with GOTO or WHILE: (1) abstraction is provided by variables (2) extensible memory is provided by arrays (and/or, if the language permits variables to take arbitrarily large values, by that) (3) conditionals are provided as a language primitive (4) looping is provided by GOTO or by WHILE
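A minimal sketch of such a language as a Python interpreter (the tiny instruction set here, INC/DEC plus GOTO and BNZ "branch if not zero", is a made-up example, but BNZ alone gives both (3) and (4)):

    def run(program, env):
        pc = 0                          # program counter
        while 0 <= pc < len(program):
            op, *args = program[pc]
            if op == 'INC':
                env[args[0]] += 1; pc += 1
            elif op == 'DEC':
                env[args[0]] -= 1; pc += 1
            elif op == 'GOTO':
                pc = args[0]
            elif op == 'BNZ':           # conditional: goto if var != 0
                pc = args[1] if env[args[0]] != 0 else pc + 1
        return env

    # add x to y by repeated decrement/increment
    add = [('BNZ', 'x', 2),   # while x != 0:
           ('GOTO', 5),       #   (x == 0: jump past the end, halting)
           ('DEC', 'x'),      #   x -= 1
           ('INC', 'y'),      #   y += 1
           ('GOTO', 0)]
    assert run(add, {'x': 3, 'y': 4})['y'] == 7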

discrete-time neural networks: (emulate logic circuits)
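A one-neuron Python sketch of the emulation direction: a discrete threshold unit with fixed weights computes NAND, and NAND is a universal gate, so layered networks of such units can emulate any logic circuit.

    def neuron(inputs, weights, bias):
        # a discrete-time threshold unit (McCulloch-Pitts style)
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if total > 0 else 0

    NAND = lambda a, b: neuron((a, b), (-2, -2), 3)
    assert [NAND(a, b) for a, b in ((0,0), (0,1), (1,0), (1,1))] == [1, 1, 1, 0]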

todo: what about async time networks (logic circuits, neural nets, etc)?

todo: i feel like each of these may have a unique way of implementing 'execution of data as code'. In the Turing machine, code is literally in the same form as data, but i'm not sure if this is the case for the others; e.g. in imperative languages, the program variables are just integers and possibly strings, so program code must be serialized into these if you are writing a universal interpreter. otoh in Turing machines maybe that is like saying 'the "program code" of a single Turing machine is in its head transition table' so maybe these aren't so different?

mu recursion, lambda calc, and cellular automata are 'weird machines' in that the code is 'hidden' in data which has other obvious semantics (integers, functions, cells), which makes the word 'code' especially apt. By contrast, in Turing machines, there is no obvious prior semantics for tape symbols, so storing a program in this manner seems to make more sense.

Links:

misc

any expression 'with a variable in it' is a function; you 'call' a function by substitution, yielding an expression, which then must be reduced

this is the connection between substitution and function calling. This is why generic/parameterized types are really functions from types to types (because the type parameter is a variable within the type expression; assigning that type variable to a specific type is like calling the function). This is why a function with multiple arguments can be 'curried' into a function that takes one of those arguments and returns another function (it substitutes in the argument it has, and then there are still variables in the expression, so the expression still specifies a function).
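A small Python illustration of this substitution view of currying (the names are illustrative):

    # 'x + y' with both variables free is a two-argument function;
    # substituting x = 3 leaves 'y' free, i.e. a one-argument function
    def add(x):
        return lambda y: x + y

    add3 = add(3)             # partially applied: still a function
    assert add3(4) == 7 and add(1)(2) == 3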

Chapter : models of concurrent computation

pi calc, actors, csp, etc

Links

---

another attempt of mine to provide a cross-model synthesis of the fundamental building blocks of computation:

You gotta have:

zero (that is, at least one constant data literal)

either a conditional (eg 'if' or BNZ), or selection (like indexing into an array, or Nock's index-into-tree)

data composition (eg successor; or ADTs; or cons; in mu-recursion, the composition function also does this, because you have many input arguments to 'h' and you specify a different 'g' function for each of them) or memory write (in a Turing machine, the memory as a whole can form your composite data structure)

data decomposition (eg projection functions as seen in mu-recursion; or indexing into an array) or memory read

program composition: either execution (eg assembly language dispatch, or BASIC, or Forth), where you execute instructions one after another, or function composition (as seen in mu-recursion and continuation-passing style (CPS))

while loops (or unbounded recursion), or both ((bounded recursion) + (unbounded minimization)) (as seen in mu-recursion: primitive recursion + mu)

---

todo: Iota, tag systems and cyclic tag systems:

implementations: https://github.com/tomstuart/computationbook/tree/master/universality_is_everywhere/iota https://github.com/tomstuart/computationbook/tree/master/universality_is_everywhere/tag_systems https://github.com/tomstuart/computationbook/tree/master/universality_is_everywhere/cyclic_tag_systems

from: http://computationbook.com/
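A sketch of a tag system step in Python: each step reads the first symbol, deletes the first m symbols, and appends that symbol's production. The example rules are De Mol's 2-tag system, which simulates the Collatz function on strings a^n (taking a^n to a^(n/2) or a^((3n+1)/2)):

    def run_tag(word, rules, m=2):
        # halts when the word gets shorter than the deletion number m
        while len(word) >= m:
            word = word[m:] + rules[word[0]]
        return word

    collatz = {'a': 'bc', 'b': 'a', 'c': 'aaa'}
    assert run_tag('aaaaa', collatz) == 'a'   # Collatz: 5 -> 8 -> 4 -> 2 -> 1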

---

some fancy encodings to replace the Church encoding when using lambda calculus:

http://maiavictor.github.io/2015/11/18/church-scott-perigot-encodings/

---

todo:

various process calculi, eg the pi-calculus, are Turing-complete alternative models of computation:

"the crucial insight is the modeling of environment bindings – for instance, "x is bound to term M" – as replicating agents that respond to requests for their bindings by sending back a connection to the term M

The features of the π-calculus that make these encodings possible are name-passing and replication (or, equivalently, recursively defined agents). In the absence of replication/recursion, the π-calculus ceases to be Turing-powerful. " -- https://en.wikipedia.org/wiki/%CE%A0-calculus#Turing_completeness

(which others? mb look thru:

https://en.wikipedia.org/wiki/Process_calculus https://en.wikipedia.org/wiki/Actor_model_and_process_calculi_history

)

---