proj-plbook-plChConcurrencyLanguagesAndLibraries

Table of Contents for Programming Languages: a survey

Chapter : languages and libraries

Popular:

Languages I've heard about in multiple places todo:

GPGPU-focused:

Others (for me to look at todo):

" here are already a large num- ber of research and commercial projects developing new disciplined parallel programming models for determinis- tic and non-deterministic algorithms [5]; e.g., Ct [24], CnC? [17], Cilk++ [11], Galois [33], SharC? [7], Kendo [44], Prometheus [6], Grace [10], Axum [26], and DPJ [14]. Most of these, including all but one of the commercial sys- tems, guarantee the absence of data races for programs that type-check, satisfying the first requirement of our work im- mediately. Moreover, most of these also enforce a require- ment of structured parallel control (e.g., a nested fork join model, pipelining, etc.), which is much easier to reason about than arbitrary (unstructured) thread synchronization. "

" 7. Related Work Type and Effect Systems: Several researchers have described ef- fect systems for enforcing a locking discipline in nondeter ministic programs that prevents data races and deadlocks [5, 20, 34] o r guar- antees isolation for critical sections [29]. Matsakis et al . [41] have recently proposed a type system that guarantees race-freed om for locks and other synchronization constructs using a constru ct called an “interval” for expressing parallelism. While there is so me over- lap with our work in the guarantees provided (race freedom, d ead- lock freedom, and isolation), the mechanisms are very diffe rent (ex- plicit synchronization vs. atomic statements supported by STM). Further, these systems do not provide determinism by defaul t. Fi- nally, there is no other effect system we know of that provide s both race freedom and strong isolation together ...

Beckman et al. [13] show how to use access permissions to remove STM synchronization overhead. While the goals are the same as ours, the mechanisms are different (alias control vs. type and effect annotations). The two mechanisms have different tradeoffs in expressivity and power: for example, Beckman et al.’s method can eliminate write barriers only if an object is accessed through a unique reference, whereas our system can eliminate barriers for access through shared references, so long as the access does not cause interfering effects. However, alias restrictions can express some patterns (such as permuting unique references in a data structure) that our system cannot. As future work, it would be interesting to explore these tradeoffs further ...

Nondeterministic Parallel Programming: Several research efforts are developing parallel models for nondeterministic codes with irregular data access patterns, such as Delaunay mesh refinement. Galois [36] provides a form of isolation, but with iterations of parallel loops (instead of atomic statements) as the isolated computations. Concurrency is increased by detecting conflicts at the level of method calls, instead of reads and writes, and using semantic commutativity properties. Lublinerman et al. [39] have proposed object assemblies as an alternative model for expressing irregular, graph-based computations ...

Kulkarni et al. [35] have recently proposed task types as a way of enforcing a property they call pervasive atomicity. This work shares with ours the broad goal of reducing the number of concurrent interleavings the programmer must consider. However, Kulkarni et al. adopt an actor-inspired approach, in which data is non-shared by default, and sharing must occur through special “task objects.” This is in contrast to our approach of allowing familiar shared-memory patterns of programming, but using effect annotations to enforce safety properties. Finally, none of the work discussed above provides any deterministic-by-default guarantee. "
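To make the recurring contrast in these excerpts concrete (explicit synchronization vs. atomic statements), here is a generic C++ sketch, not taken from any of the cited papers: the lock-based version names a mutex the programmer must remember to acquire, while the atomic-statement version (shown as a comment, in the syntax of GCC's experimental -fgnu-tm transactional-memory extension) leaves isolation to the runtime, e.g. an STM.

    // Explicit synchronization vs. an atomic statement (illustrative sketch;
    // Account/deposit are made-up names).
    #include <mutex>

    struct Account {
        long balance = 0;
        std::mutex m;   // explicit synchronization: the lock is part of the data
    };

    void deposit(Account& a, long amount) {
        std::lock_guard<std::mutex> g(a.m);   // programmer must pick the right lock
        a.balance += amount;                  // critical section
    }

    // With an atomic statement (e.g. GCC's -fgnu-tm extension, or atomic
    // blocks backed by an STM) there is no named lock to misuse; the
    // system guarantees isolation of the block:
    //
    //     __transaction_atomic { a.balance += amount; }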

---

Some Intel-related parallel libraries and similar:

(maybe toread: https://www.researchgate.net/publication/255791855_A_Comparative_Study_and_Evaluation_of_Parallel_Programming_Models_for_Shared-Memory_Parallel_Architectures )

---

Intel Threaded Building Blocks (TBB)

https://en.wikipedia.org/wiki/Threading_Building_Blocks https://www.threadingbuildingblocks.org/

" Library contents

TBB is a collection of components for parallel programming:

    Basic algorithms: parallel_for, parallel_reduce, parallel_scan
    Advanced algorithms: parallel_while, parallel_do, parallel_pipeline, parallel_sort
    Containers: concurrent_queue, concurrent_priority_queue, concurrent_vector, concurrent_hash_map
    Memory allocation: scalable_malloc, scalable_free, scalable_realloc, scalable_calloc, scalable_allocator, cache_aligned_allocator
    Mutual exclusion: mutex, spin_mutex, queuing_mutex, spin_rw_mutex, queuing_rw_mutex, recursive_mutex
    Atomic operations: fetch_and_add, fetch_and_increment, fetch_and_decrement, compare_and_swap, fetch_and_store
    Timing: portable fine grained global time stamp
    Task scheduler: direct access to control the creation and activation of tasks" -- [8]
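To make the "basic algorithms" entries above concrete, here is a minimal sketch using TBB's parallel_for and parallel_reduce over a blocked_range (a hedged example: header paths follow the classic tbb/ layout, which oneTBB still ships as compatibility headers; the data and sizes are made up):

    // TBB sketch: double every element with parallel_for, then sum with
    // parallel_reduce. TBB splits the blocked_range into chunks and runs
    // them on its worker threads; partial sums are combined by the join
    // functor.
    #include <tbb/blocked_range.h>
    #include <tbb/parallel_for.h>
    #include <tbb/parallel_reduce.h>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<double> x(1000000, 1.0);

        tbb::parallel_for(tbb::blocked_range<size_t>(0, x.size()),
            [&](const tbb::blocked_range<size_t>& r) {
                for (size_t i = r.begin(); i != r.end(); ++i)
                    x[i] *= 2.0;
            });

        double total = tbb::parallel_reduce(
            tbb::blocked_range<size_t>(0, x.size()), 0.0,
            [&](const tbb::blocked_range<size_t>& r, double acc) {
                for (size_t i = r.begin(); i != r.end(); ++i)
                    acc += x[i];
                return acc;
            },
            [](double a, double b) { return a + b; });   // join

        std::printf("total = %f\n", total);   // expect 2000000
        return 0;
    }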

---

Intel Array Building Blocks

Chapter 8: Statements

    Function Execution Semantics
    Basic Block Statements
    Notational Conventions for Statements
    Elementwise Statements
    Function Call Statements
    Reordering Statements: gather, scatter, pack, unpack, shuffle, unshuffle, repeat, distribute, repeat_row, repeat_col, repeat_page, transpose, swap_row, swap_col, swap_page, shift_constant, shift_constant_reverse, shift_clamp, shift_clamp_reverse, rotate, rotate_reverse, reverse
    Facility Statements: const_vector, bitwise_cast, cast, cat, extract, replace, replace_row, replace_col, replace_page, replace_element, section, index, extract_row, extract_col, extract_page, get_elt_coord, get_neighbor, expect_size, mask, sort, sort_rank, alloc, length, get_nrows, get_ncols, get_npages
    Nesting Statements
    Collective Statements: Reduction Statements, Scan Statements, Merge Statements
    Control Flow Statements: if Statements; Loops (for Loops, while Loops, do Loops); break Statements; continue Statements; return Statements
    Special Statements: when Statements

---

MPI

https://www.open-mpi.org/ https://en.wikipedia.org/wiki/Message_Passing_Interface https://www.mpi-forum.org/
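A minimal MPI program, using the standard C API from C++ (this is the API the Open MPI link above implements; build/run commands are the typical ones, e.g. mpicxx hello.cpp && mpirun -np 2 ./a.out, and the payload is made up):

    // Minimal MPI sketch: the usual init/rank/size/finalize skeleton,
    // plus one point-to-point message from rank 0 to rank 1.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // which process am I?
        MPI_Comm_size(MPI_COMM_WORLD, &size);   // how many processes total?

        if (rank == 0 && size > 1) {
            int payload = 42;
            MPI_Send(&payload, 1, MPI_INT, /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int payload;
            MPI_Recv(&payload, 1, MPI_INT, /*source=*/0, /*tag=*/0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            std::printf("rank 1 of %d received %d\n", size, payload);
        }

        MPI_Finalize();
        return 0;
    }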

discussion:

---

Links: