proj-oot-ootConcurrencyNotes8

https://tokio.rs/blog/2019-10-scheduler/ https://news.ycombinator.com/item?id=21249708

---

https://blog.rust-lang.org/2019/11/07/Async-await-stable.html https://github.com/rust-lang/async-book https://rust-lang.github.io/compiler-team/working-groups/async-await/

" async fn first_function() -> u32 { .. } ... let result: u32 = future.await; ... This example shows the first difference between Rust and other languages: we write future.await instead of await future. This syntax integrates better with Rust's ? operator for propagating errors (which, after all, are very common in I/O). You can simply write future.await? to await the result of a future and propagate errors. It also has the advantage of making method chaining painless. Zero-cost futures

The other difference between Rust futures and futures in JS and C# is that they are based on a "poll" model, which makes them zero cost. In other languages, invoking an async function immediately creates a future and schedules it for execution: awaiting the future isn't necessary for it to execute. But this implies some overhead for each future that is created.

In contrast, in Rust, calling an async function does not do any scheduling in and of itself, which means that we can compose a complex nest of futures without incurring a per-future cost. As an end-user, though, the main thing you'll notice is that futures feel "lazy": they don't do anything until you await them. " -- https://blog.rust-lang.org/2019/11/07/Async-await-stable.html
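The "lazy futures" point above can be demonstrated without any external runtime. Below is a minimal sketch (not from the blog post): a hand-rolled no-op waker and a trivial `block_on` executor, used to show that calling an async fn does nothing until the future is polled. The `RAN` flag and function names are illustrative; the executor only suits futures that complete without genuinely waiting.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A no-op waker: sufficient for polling futures that never park.
fn noop_waker() -> Waker {
    const VTABLE: RawWakerVTable = RawWakerVTable::new(
        |_| RawWaker::new(std::ptr::null(), &VTABLE), // clone
        |_| {},                                       // wake
        |_| {},                                       // wake_by_ref
        |_| {},                                       // drop
    );
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Minimal executor: poll the future to completion on this thread.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is a local and is never moved after being pinned.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

static RAN: AtomicBool = AtomicBool::new(false);

async fn first_function() -> u32 {
    RAN.store(true, Ordering::SeqCst);
    42
}

fn main() {
    // Calling an async fn does no work: it only builds a state machine.
    let future = first_function();
    assert!(!RAN.load(Ordering::SeqCst)); // the body has not run yet
    // Only polling (here via our executor) actually runs the body.
    let result: u32 = block_on(future);
    assert!(RAN.load(Ordering::SeqCst));
    println!("{}", result);
}
```

This is exactly the "poll model" the post describes: the future is inert data until an executor drives it, which is why composing futures adds no per-future scheduling cost.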

Dowwie 6 hours ago [-]

This is a major milestone for Rust usability and developer productivity.

It was really hard to build asynchronous code until now. You had to clone objects used within futures. You had to chain asynchronous calls together. You had to bend over backwards to support conditional returns. Error messages weren't very explanatory. You had limited access to documentation and tutorials to figure everything out. It was a process of walking over hot coals before becoming productive with asynchronous Rust.

fooyc 5 hours ago [-]

This is a big improvement, however this is still explicit/userland asynchronous programming: If anything down the callstack is synchronous, it blocks everything. This requires every component of a program, including every dependency, to be specifically designed for this kind of concurrency.

Async I/O gives awesome performance, but further abstractions would make it easier and less risky to use. Designing everything around the fact that a program uses async I/O, including things that have nothing to do with I/O, is crazy.

Programming languages have the power to implement concurrency patterns that offer the same kind of performances, without the hassle.


brandonbloom 2 hours ago [-]

Yup. See https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/ for a pretty good explanation of the async/await problem.

...

An alternative design would have kept await, but supported async not as a function-modifier, but as an expression modifier. Unfortunately, as the async compiler transform is a static property of a function, this would break separate compilation. That said, I have to wonder if the continuously maturing specialization machinery in Rustc could have been used to address this. IIUC, they already have generic code that is compiled as an AST and specialized to machine code upon use, then memoized for future use. They could specialize functions by their async usage and whether or not any higher-order arguments are themselves async. It might balloon worst-case compilation time by ~2X for async/non-async uses (more for HOF), but that's probably better than ballooning user-written code by 2X.

ameixaseca 1 hour ago [-]

Goroutines add a runtime requirement, so Rust cannot just take the same approach as Go: in Rust, you only have a scheduler if you instantiate one. And having a runtime or not has nothing to do with the C abstract machine, but with precise control over what the code does.

For instance, Go does not give you the option to handle this yourself: you cannot write a library for implementing goroutines or a scheduler, since it's embedded in every program. That's why it's called a runtime. In Rust, every bit of async implementation (futures, schedulers, etc.) is a library, with some language support for easing type inference and declarations. This should already tell you why they took this approach.

Regarding async/await and function colors (from the article you posted), I would much prefer Rust to use an effect system for implementing this. However, since effect systems are still very much a research topic and there is no major language pushing in this direction (maybe OCaml in a few years?), it seems like a long shot for now.


MuffinFlavored 5 hours ago [-]

For JavaScript developers expecting to jump over to Rust and be productive now that async/await is stable:

I'm pretty sure the state of affairs for async programming is still a bit "different" in Rust land. Don't you need to spawn async tasks into an executor, etc.?

Coming from JavaScript, the built-in event loop handles all of that. In Rust, the "event loop" so to speak is typically a third-party library/package, not something provided by the language/standard itself.


steveklabnik 5 hours ago [-]

> I'm pretty sure the state of affairs for async programming is still a bit "different" in Rust land.

There are some differences, yes.

> Don't you need to spawn async tasks into an executor, etc.?

Correct, though many executors have added attributes you can tack onto main that do this for you via macro magic, so it'll feel a bit closer to JS.


ComputerGuru 7 hours ago [-]

I’ve been playing with async/await in a polar-opposite vertical from its typical use case (high-TPS web backends) and believe this was the missing piece to further unlock great ergonomics and productivity gains for systems development: embedded no_std.

Async/await lets you write non-blocking but highly interleaved firmware/apps in allocation-free, single-threaded environments (bare-metal programming without an OS). The abstractions around stack snapshots allow seamless coroutines, and I believe this will make Rust pretty much the easiest low-level platform to develop for.


vanderZwan 7 hours ago [-]

Have you ever heard of Esterel or Céu? They follow the synchronous concurrency paradigm, which apparently has specific trade-offs that give it great advantages on embedded: IIRC the memory overhead per Céu "trail" is much lower (on the order of bytes) than for async tasks, fibers, or whatnot, but computationally it scales worse with the number of trails.

Céu is the more recent of the two, a research language designed with embedded systems in mind, with the PhD theses to show for it [2][3].

I wish other languages would adopt ideas from Céu. I have a feeling that if there was a language that supports both kinds of concurrency and allows for the GALS approach (globally asynchronous (meaning threads in this context), locally synchronous) you would have something really powerful on your hands.

EDIT: Er... sorry, this may have been a bit of an inappropriate comment, shifting the focus away from the Rust celebration. I'm really happy for Rust for finally landing this! (but could you pretty please start experimenting with synchronous concurrency too? ;) )

[0] http://ceu-lang.org/

[1] https://en.wikipedia.org/wiki/Esterel

[2] http://ceu-lang.org/chico/ceu_phd.pdf

[3] http://sunsite.informatik.rwth-aachen.de/Publications/AIB/20...

Kaladin 6 hours ago [-]

Are there any resources you could point to for learning more about async programming?


Dowwie 6 hours ago [-]

https://rust-lang.github.io/async-book

https://book.async.rs

https://tokio.rs


---

5.2 Using virtual machine abstraction to support concurrency

One abstract extension to a VM that was found to provide great overall value to the system is nonpreemptive, user-level concurrency. As shown in Fig. 11, a VM that maintains its own concurrency has advantages over a real-time operating system. Switching contexts in a VM can be done in one virtual instruction, whereas a native RTOS-driven system requires saving and restoring contexts, which may involve dozens of variables and the saving of a task state. Also, a concurrent VM is more controlled than a real-time operating system task switch, which can occur at almost any time, requiring critical regions of code to protect against race conditions and errors. ...

5.2.1 Implementing a task instruction to support concurrency

New instructions were added to the Pascal Virtual Machine to support multitasking and message passing as high-level abstract instructions. A new task instruction was added to the interpreter to support multitasking, along with the necessary Pascal language extension, as follows:

   status := task(function);
     function:
       TASKINIT=0: status: 1=parent, 2=child
       TASKPEND=1: status: is new task ID
       TASKWHO=2: status: is my task ID
       TASKKILL=3: no status

With this new special function call, new tasks could be created using the subfunction TASKINIT, which performs a similar function to the UNIX fork() command within a single P-Code instruction. Modeling task creation after the UNIX fork command was a matter of convenience for this research, although it does have some advantages.

...

The interpreter was modified to support up to five tasks (an arbitrary choice that depends on the memory available). When a new task is created, an exact duplicate of the working stack of the parent task is created and a new entry is made in a TaskState table for the new child task. The return value from this function call indicates whether the currently executing task is still the parent task (1) or the child task (0) that is an exact copy. The return value can then be used to cause each of the tasks to follow their own execution path depending on whether it is the parent or the child task. As shown in Fig. 12, when a new task is created only four values need to be remembered to save and restore the task; namely the PC, SP, MP, and EP registers. PC is the program counter, which is saved to remember what instruction this task was executing when it was suspended so that it can pick up where it left off when it starts running again. SP is the stack pointer, which indicates where the current top of the stack is for this task. Since each new task has its own copy of the stack, each task must remember where that is in memory for its own stack. MP is the mark pointer, which indicates where the base of the current stack frame is. EP is the extreme pointer, which indicates the top of the current stack frame so that the VM knows where to put the next stack frame if another procedure or function is called from the current context. There is no need to save any working registers because the PMachine is stack based and nothing else is stored in registers between instructions besides what is on the stack, making task switching relatively fast and efficient.

Another subfunction, TASKPEND, allows the currently executing task to voluntarily give up the virtual processor and allow a different task to run. ...
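The four-register task switch described above can be sketched in a few lines. This is a hypothetical Rust model of the paper's TaskState table, not the actual P-Machine source: a task is fully described by its PC/SP/MP/EP registers, so TASKPEND is just a save of the running task's registers plus a round-robin load of the next task's.

```rust
// Hypothetical model of the paper's per-task saved state: four VM
// registers are all that is needed to suspend and resume a task.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct TaskState {
    pc: usize, // program counter: next P-code instruction to execute
    sp: usize, // stack pointer: top of this task's private stack copy
    mp: usize, // mark pointer: base of the current stack frame
    ep: usize, // extreme pointer: top of the current stack frame
}

struct Vm {
    tasks: Vec<TaskState>, // the TaskState table (the paper caps it at 5)
    current: usize,        // index of the running task
}

impl Vm {
    // TASKPEND: the running task voluntarily yields the virtual
    // processor; its registers are saved in the table and the next
    // task's registers are returned for the interpreter to load.
    fn task_pend(&mut self, running: TaskState) -> TaskState {
        self.tasks[self.current] = running;
        self.current = (self.current + 1) % self.tasks.len();
        self.tasks[self.current]
    }
}

fn main() {
    let t0 = TaskState { pc: 10, sp: 100, mp: 90, ep: 120 };
    let t1 = TaskState { pc: 0, sp: 500, mp: 490, ep: 520 };
    let mut vm = Vm { tasks: vec![t0, t1], current: 0 };
    // Task 0 has advanced to pc=11, then pends: task 1 resumes.
    let resumed = vm.task_pend(TaskState { pc: 11, ..t0 });
    assert_eq!(resumed, t1);
    assert_eq!(vm.tasks[0].pc, 11); // task 0's progress was saved
    println!("running task {}", vm.current);
}
```

The brevity of `task_pend` is the paper's point: because nothing lives in working registers between instructions, a context switch is only a handful of assignments, versus the dozens of values an RTOS must save.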

TASKKILL can be called for a task to permanently kill itself and allow other tasks to run instead.

5.3 Implementing a mail instruction to support concurrency

A second instruction was implemented to support message passing between tasks in the VM. The mail() instruction was created to handle this functionality:

   Status := mail(function, parameter1, parameter2, parameter3);
   Function:
   MAILSEND=0
     parameter1: destination task ID
     parameter2: length of message to send
     parameter3: buffer address of message to send

   MAILRCV=1
     parameter1: my task ID
     parameter2: size of message buffer for receiving
     parameter3: buffer address where to put received message

... In this implementation, a call by a task to receive mail will cause the task to pend if no mail is available. The task then becomes ready to run again when mail arrives. The task that sent the mail continues to run until it voluntarily gives up the processor or pends on its own mailbox. This implementation was chosen because the native operating system used in the timing comparison performs this way, allowing the timing comparison to be fair. Another, faster implementation might be to allow the receiving task to wake up immediately, which would make the system appear to switch tasks faster, but that is not commonly how embedded operating systems work.
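The mailbox semantics above (receive pends on empty, send does not preempt the sender) can be sketched as follows. This is an illustrative Rust model, not the paper's Pascal implementation; the type and function names are invented for the sketch.

```rust
use std::collections::VecDeque;

// Hypothetical per-task mailbox modeling the mail() instruction:
// MAILSEND queues a message and the sender keeps the processor;
// MAILRCV pends the caller when the mailbox is empty, and the task
// becomes ready again only when a message arrives.
#[derive(Default)]
struct Mailbox {
    queue: VecDeque<Vec<u8>>, // messages waiting to be received
}

#[derive(Debug, PartialEq)]
enum Recv {
    Message(Vec<u8>), // mail was waiting: the task keeps running
    Pend,             // no mail: the task blocks until a send arrives
}

impl Mailbox {
    // MAILSEND: copy the message into the destination task's mailbox.
    fn send(&mut self, msg: &[u8]) {
        self.queue.push_back(msg.to_vec());
    }

    // MAILRCV: deliver the oldest message, or pend if none is waiting.
    fn recv(&mut self) -> Recv {
        match self.queue.pop_front() {
            Some(m) => Recv::Message(m),
            None => Recv::Pend,
        }
    }
}

fn main() {
    let mut inbox = Mailbox::default();
    assert_eq!(inbox.recv(), Recv::Pend); // empty: receiver would block
    inbox.send(b"ping");
    assert_eq!(inbox.recv(), Recv::Message(b"ping".to_vec()));
}
```

Note the design choice the paper calls out: `send` only enqueues, so the receiving task is marked ready but does not run until the sender yields, matching the native OS used in the timing comparison.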

...

Others showed that VMs are capable of doing things that native machine hardware is not effective at doing, by implementing VMs that do not emulate real machines. The Esterel and Harrison VMs are good examples of this [10,41]. Additionally, other research showed that abstraction is a way to increase performance in a VM, as shown by the Carbayonia and Hume VMs [18,19].

10. Craig ID (2005) Virtual Machines. Springer, New York
41. Plummer B, Khajanchi M, Edwards SA (2006) An Esterel Virtual Machine