proj-jasper-jasperConcurrencyNotes2

https://www.google.com/search?q=concurrency+primitives

https://www.google.com/search?q=concurrency+features

https://www.google.com/search?q=four+concurrency+primitives+for+haskell

https://en.wikipedia.org/wiki/Concurrent_computing

http://www.haskell.org/haskellwiki/Lightweight_concurrency

http://www.haskell.org/haskellwiki/Concurrency

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.4173 Four Concurrency Primitives for Haskell (1995) by Enno Scholz

https://www.google.com/search?q=small+set+of+concurrency+primitives

http://research.microsoft.com/en-us/um/people/simonpj/papers/lw-conc/

http://julia.readthedocs.org/en/latest/manual/parallel-computing/

" Parallel programming in Julia is built on two primitives: remote references and remote calls. A remote reference is an object that can be used from any process to refer to an object stored on a particular process. A remote call is a request by one process to call a certain function on certain arguments on another (possibly the same) process. A remote call returns a remote reference to its result. Remote calls return immediately; the process that made the call proceeds to its next operation while the remote call happens somewhere else. You can wait for a remote call to finish by calling wait on its remote reference, and you can obtain the full value of the result using fetch. You can store a value to a remote reference using put. "

http://davidad.github.io/blog/2014/03/23/concurrency-primitives-in-intel-64-assembly/

---

http://berb.github.io/diploma-thesis/community/050_index.html

---

http://blog.ezyang.com/2014/01/so-you-want-to-add-a-new-concurrency-primitive-to-ghc/

---

http://www.cl.cam.ac.uk/~so294/documents/ec209.pdf

Relaxed memory models must be rigorous

---

http://www.cs-lab.org/historical_beam_instruction_set.html

---

http://www.cs-lab.org/historical_beam_instruction_set.html#2.9%20Inter-process%20Communication

---

" Concurrency is concerned with controlling non-determinism, which can arise in all sorts of situations having nothing to do with parallelism. Process calculi, for example, are best viewed as expressing substructural composition of programs, and have very little to do with parallel computing. (See my PFPL and Rob Simmons’ forthcoming Ph.D. dissertation for more on this perspective.) " -- http://existentialtype.wordpress.com/2012/08/26/yet-another-reason-not-to-be-lazy-or-imperative/

---

" ppgallardo says: May 2, 2011 at 6:37 am

Dear Prof. Harper,

According to your own comments:

I simply meant that it is not clear whether monadic isolation of effects (esp store effects) is a good idea, because it precludes benign effects.

… for the sake of parallelism: you can’t evaluate a Sequence.map in parallel unless you know that the function being mapped is effect-free.

It seems that there is a conflict here. You can either have effects isolated by default (Haskell), which is good for parallelism, but requires some mechanism like unsafePerformIO to use begin effects, or you can have unsafe free access to effects by default (ML), which is good for begin(safe) effects, but you would need your own effect system to use parallelism. I don’t see clearly that this second approach is a better one.

    ppgallardo says: May 2, 2011 at 10:04 am
    Sorry, I meant benign effects
    Abstract Type says: May 2, 2011 at 11:30 am
    Exceptions are perfectly compatible with parallelism; it is only storage effects (which includes I/O) that creates a problem. If we isolate storage effects using a re-organized library in the manner I described, we get what we need for parallelism. I consider this significant, but you may not. The concept of purity is not entirely clear, it seems to me.
    neelk says: May 3, 2011 at 4:00 am
    When you say exceptions are compatible with parallelism, do you mean exceptions with or without handlers?
    In the absence of handlers you’re obviously correct (and it synergizes beautifully with strictness), but if there’s a catch-form then it seems to me that the naive parallel semantics becomes nondeterministic.
    Using par(e,e') to indicate a parallel tuple, consider the expression:
      par(raise (Error 1), raise (Error 2))
      handle 
        Error n => n 
    This seems like it may give different answers depending on the scheduling of the left and right components of the pair.
    Abstract Type says: May 3, 2011 at 3:48 pm
    With handlers, of course, otherwise they wouldn’t be called exceptions! Your example shows only that the join-point logic is more subtle than you are imagining it to be. The crucial point is to ensure that parallelism does not change the semantics of the program. So, in this case, we must ensure that the leftmost exception is the one that is propagated, if any, because that is what would happen in the sequential case. The join point logic ensures that this is the case; I won’t bother to describe here how that is done, it’s not hard to imagine.
    Interestingly, we can also admit other effects, such as printing, by the same means: suspend the output until the join-point is reached, then print in left-to-right order. Input doesn’t seem susceptible to this kind of treatment, but random seeds easily are, because they can be split. We rely on this to implement treaps in parallel (though there are other means based on fingerprinting that avoid the need for random numbers).
    Crucially, the forms of parallelism we consider always have a well-defined join point. Without that, I cannot envision a way to admit effects, even exceptions, in parallel without affecting the sequential meaning. So, for example, parallel futures are out of the question because there is no well-defined join point. Indeed, this is precisely the same reason why it is so awkward to manage exceptions in a lazy language, among other things.
    It seems that I ought to have explained my motivations for this post more thoroughly up front. It seemed that I could just break out the one point, re-defining the interfaces of some of the basis libraries, to get the isolation of effects that I want to ensure absence of race conditions in parallel. My mistake."
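
The join-point discipline described above (always propagate the leftmost exception, so parallel evaluation cannot be observed) can be sketched in Haskell. This is just an illustration using the async package, not Harper's implementation: both sides run in parallel, but the join point always inspects the left result first.

    import Control.Concurrent.Async (withAsync, waitCatch)
    import Control.Exception (throwIO)

    -- Sketch: a parallel pair whose join point re-raises the *leftmost*
    -- exception, matching what sequential left-to-right evaluation would do.
    parPair :: IO a -> IO b -> IO (a, b)
    parPair left right =
      withAsync left $ \la ->
        withAsync right $ \ra -> do
          lr <- waitCatch la             -- both sides finish before the join decides
          rr <- waitCatch ra
          case (lr, rr) of
            (Left e, _)        -> throwIO e   -- leftmost exception wins
            (_, Left e)        -> throwIO e
            (Right a, Right b) -> return (a, b)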

---

http://lambda-the-ultimate.org/node/1615 Event-Based Programming without Inversion of Control. Philipp Haller and Martin Odersky.

http://lambda-the-ultimate.org/node/1617#comment-19873

Still has limitations

The Scala paper acknowledges some limitations:

    To conclude, managing information about statements following a call to receive would require changes either to the compiler or the VM. Following our rationale for a library-based approach, we want to avoid those changes.

If you want statements to follow a receive, you have to manually-CPS them yourself, possibly duplicating code between different branches of the receive. Moreover, any function that calls a function that uses receive also cannot contain statements after the receive, and so on, recursively. Scala's type system will enforce that for you, but that doesn't stop it from being annoying.

For me, that's a dealbreaker. I could just as easily use Java and the Proactor pattern. I use threads so that I can write my control flow in direct-style; if I wanted to CPS things I can do so in other languages.

Now, it's certainly possible to get around this: just write a VM with first-class continuations. But then you start running into other problems, like first-class continuations not playing nicely with a C# or Java FFI (which is why Scala doesn't support them). Beware the Turing Tarpit. Once you've generalized your VM to fit every conceivable language, its functionality is probably so general that it becomes faster for a language designer to write his own VM than to target yours. By Jonathan Tang at Wed, 2006-07-19 00:50
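
With lightweight threads, by contrast, the receive can simply block and ordinary statements follow it in direct style, which is exactly what the comment above is asking for. A minimal Haskell sketch using forkIO and a Chan from base (the mailbox here is just illustrative):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (newChan, readChan, writeChan)

    main :: IO ()
    main = do
      mailbox <- newChan
      _ <- forkIO (writeChan mailbox "hello")   -- another thread sends a message
      msg <- readChan mailbox                   -- 'receive': just block this lightweight thread
      putStrLn ("got: " ++ msg)                 -- statements after the receive, no manual CPS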

---

dons, 13 points, 7 years ago

Right! Here's what's happening:

    Data Parallel Haskell on the GPU
    SSE for GHC

So that gives us 4 kinds of parallelism to play with:

    explicit threads (forkIO)
    implicit threads (par)
    transparent parallelism on multicore (DPH-based nested parallel arrays)
    GPU-based (nested) parallel arrays (Sean's DPH + GPU) ongoing

The GPU is really the "FP chip that never was", with its hardware supported folds, scans and maps.

Btw, Microsoft is helping fund the Data Parallel Haskell work. I guess they really need a solution for massively multicore programming :-)
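
For reference, a tiny sketch of the first two kinds in that list, explicit forkIO threads and implicit par sparks (Control.Parallel is from the 'parallel' package; compile with -threaded to actually use multiple cores):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
    import Control.Parallel (par, pseq)   -- from the 'parallel' package

    -- Explicit threads: fork a lightweight thread, join on an MVar.
    explicitSum :: IO Int
    explicitSum = do
      box <- newEmptyMVar
      _ <- forkIO (putMVar box (sum [1 .. 1000000]))
      takeMVar box

    -- Implicit threads: 'par' sparks a pure computation that the runtime may
    -- evaluate in parallel; 'pseq' fixes the evaluation order of the rest.
    implicitSum :: Int
    implicitSum =
      let a = sum [1 .. 1000000]
          b = sum [1000001 .. 2000000]
      in a `par` (b `pseq` (a + b))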

'join' functions with arguments from multiple threads are natural if you allow 'shared memory' partially applied functions and closures
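
One way to read that note, sketched with MVars (joinWith/giveA/giveB are made-up names for illustration): each thread supplies one argument, and the joining function fires once both have arrived, the partially applied function living in the shared closure.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

    -- Sketch: a two-argument join whose arguments come from different threads.
    joinWith :: (a -> b -> c) -> IO (a -> IO (), b -> IO (), IO c)
    joinWith f = do
      ma <- newEmptyMVar
      mb <- newEmptyMVar
      let supplyA = putMVar ma
          supplyB = putMVar mb
          result  = f <$> takeMVar ma <*> takeMVar mb
      return (supplyA, supplyB, result)

    main :: IO ()
    main = do
      (giveA, giveB, joined) <- joinWith (+)
      _ <- forkIO (giveA (1 :: Int))   -- one thread supplies the first argument
      _ <- forkIO (giveB 2)            -- another thread supplies the second
      print =<< joined                 -- the join point sees both: prints 3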


kyrra 1 day ago

Single-threaded Go programs also run faster for some (many?) cases. The runtime is able to bypass a number of locks when running in a single-threaded mode. And depending on the profile of your code, turning on multiple threads for a Go program can slow it down even if you use channels (as moving data from one CPU core to another can be much slower than just doing a context switch in a single core).
