proj-plbook-plChConcurrencyIntro

Table of Contents for Programming Languages: a survey

Chapter : introduction to concurrency

"For the past 30 years, computer performance has been driven by Moore's Law; from now on, it will be driven by Amdahl's Law. Writing code that effectively exploits multiple processors can be very challenging." -- Doron Rajwan

see also "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software", by Herb Sutter

"Concurrency is the next major revolution in how we write software" -- Herb Sutter

"Commonly used programming models are prone to subtle, hard to reproduce bugs, and parallel programs are notoriously hard to test due to data races, non-deterministic interleavings, and complex memory models."


optimistic vs pessimistic concurrency
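A minimal sketch of the contrast (names and the version-number scheme are illustrative, not from any particular library): a pessimistic scheme locks before touching shared state, assuming conflicts will happen; an optimistic scheme works on a snapshot and validates, at commit time, that no one else changed the state, retrying on conflict.

```python
import threading

lock = threading.Lock()
balance = 100
version = 0   # incremented on every committed update (illustrative scheme)

def withdraw_pessimistic(amount):
    """Pessimistic: exclude all other threads for the whole operation."""
    global balance
    with lock:
        if balance >= amount:
            balance -= amount
            return True
        return False

def withdraw_optimistic(amount):
    """Optimistic: compute on a snapshot, then validate-and-commit; retry on conflict."""
    global balance, version
    while True:
        seen_version, seen_balance = version, balance   # read a snapshot
        if seen_balance < amount:
            return False
        new_balance = seen_balance - amount             # work outside the lock
        with lock:                                      # short critical section to commit
            if version == seen_version:                 # validate: nobody else committed
                balance, version = new_balance, version + 1
                return True
        # conflict detected: loop and retry from a fresh snapshot

assert withdraw_pessimistic(30)   # balance: 100 -> 70
assert withdraw_optimistic(30)    # balance: 70 -> 40
assert balance == 40
```

The optimistic version holds the lock only for the validate-and-commit step, which pays off when conflicts are rare; under heavy contention the retries make the pessimistic version cheaper.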

memory models (architecture)

(contrast with programming language 'memory models', which we discuss later under the name 'memory order')

shared memory vs "shared nothing"

shared memory

NUMA

PGAS

(mb should put this one into a separate section about non-uniform memory models)

https://en.wikipedia.org/wiki/Partitioned_global_address_space

deterministic paradigms

nondeterministic paradigms

asynchronous

synchronous

note: sometimes the words 'synchronous' and 'synchronization' are used to mean 'two threads observe the same ordering of some events', rather than 'two threads are doing the same thing at the same time'.

composable

non-composable

defn data race

when the final result of the computation may differ depending on which ordering occurs, out of a nondeterministic set of possible orderings of events? todo is that right? (this sounds more like the general notion of a 'race condition'; the defns below are narrower)

e.g. two threads try to increment a counter simultaneously. Each reads the old value before the other has written its update back, so one update gets lost.
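The lost-update interleaving can be shown deterministically by writing out each thread's read-modify-write steps by hand (a simulation of one bad schedule, not actual threads):

```python
# Each increment is really three steps: read, add, write.
# This is the interleaving where both threads read before either writes.
counter = 0

a = counter          # thread A reads 0
b = counter          # thread B reads 0, before A writes back
counter = a + 1      # thread A writes 1
counter = b + 1      # thread B writes 1, overwriting A's update

assert counter == 1  # two increments happened, but the counter only shows one
```

With real threads the same loss occurs only on some schedules, which is exactly why such bugs are hard to reproduce.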

similar example: two ATMs try to withdraw from the same bank account at the same time. both succeed; 'double spend'; global invariant not maintained.

or mb defn is: two or more operations access one memory location concurrently, and at least one of them is a write

" assume the language allows distinguishing between synchronization and ordinary (non-synchronization or data) operations (see below). We say that two memory operations conflict if they access the same memory location (e.g., variable or array element), and at least one is a write. We say that a program (on a particular input) allows a data race if it has a sequentially consistent execution (i.e., a program-ordered interleaving of operations of the individual threads) in which two conflicting ordinary operations execute “simultaneously”. For our purposes, two operations execute “simultaneously” if they occur next to each other in the interleaving and correspond to different threads. Since these operations occur adjacently in the interleaving, we know that they could equally well have occurred in the opposite order; there are no intervening operations to enforce the order.... it is also possible to define data races as conflicting accesses not ordered by synchronization, as is done in Java. These definitions are essentially equivalent." -- Adve, Boehm. Memory Models: A Case for Rethinking Parallel Languages and Hardware

" First of all, let’s make sure we know what we’re talking about. In current usage a data race is synonymous with a low-level data race, as opposed to a high-level race that involves either multiple memory locations, or multiple accesses per thread. Everybody agrees on the meaning of data conflict, which is multiple threads accessing the same memory location, at least one of them through a write. But a data conflict is not necessarily a data race. In order for it to become a race, one more condition must be true: the access has to be “simultaneous.” ... In fact, most languages try to provide the so called DRF (Data Race Free) guarantee, which states that all executions of data-race-free programs are sequentially consistent. Don’t be alarmed by the apparent circularity of the argument: you start with sequentially consistent executions to prove data-race freedom and, if you don’t find any data races, you conclude that all executions are sequentially consistent. But if you do find a data race this way, then you know that non-sequentially-consistent executions are also possible. " -- http://corensic.wordpress.com/category/memory-model/

defn synchronization

to prevent a data race, you must 'synchronize': use some mechanism to constrain the nondeterminism in the possible orderings of events (possibly all the way to determinism)

to implement synchronization over shared memory you need some sort of atomic read-modify-write primitive; to solve consensus among arbitrarily many threads, it must be one with infinite consensus number, such as compare-and-swap (see atomics, above)
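A sketch of how a read-modify-write primitive fixes the lost-update problem. Here compare-and-swap is simulated with a lock (a real implementation would use a hardware instruction or e.g. C++'s `std::atomic::compare_exchange_weak`); the class and function names are illustrative:

```python
import threading

class SimulatedCAS:
    """Compare-and-swap simulated with a lock; hardware provides this as a
    single atomic instruction (e.g. x86 CMPXCHG)."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        # Atomically: if the value is still `expected`, replace it with `new`.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

counter = SimulatedCAS(0)

def increment_n(n):
    for _ in range(n):
        while True:                       # retry loop: reread and try again on conflict
            old = counter.load()
            if counter.compare_and_swap(old, old + 1):
                break

threads = [threading.Thread(target=increment_n, args=(10000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
assert counter.load() == 20000            # no updates lost
```

Because the read and the conditional write happen atomically, a stale read is detected (the CAS fails) instead of silently overwriting another thread's update.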

definition liveness

threads should continue to make forward progress, not get 'stuck' (e.g. in deadlock, livelock, or starvation)
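A small sketch of a classic liveness hazard and its standard fix (illustrative names; not from any particular source): if thread 1 acquired lock_a then lock_b while thread 2 acquired lock_b then lock_a, each could wait forever for the lock the other holds, a deadlock. Acquiring locks in one global order avoids it:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(results, i):
    # Both threads acquire in the same global order (lock_a, then lock_b),
    # so neither can end up holding one lock while waiting on the other.
    with lock_a:
        with lock_b:
            results[i] = True

results = [False, False]
threads = [threading.Thread(target=worker, args=(results, i)) for i in range(2)]
for t in threads: t.start()
for t in threads: t.join()
assert results == [True, True]   # both threads made progress: liveness preserved
```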

(todo group under above)

Classification of Designs, "Computer Architecture" Chapter 1, M. Zargham

(todo which are built in terms of the others)

(todo which are more or less efficient)