notes-computer-programming-programmingLanguageDesign-prosAndCons-coroutine


i finally found an explanation that makes sense:

https://news.ycombinator.com/item?id=5401324

" fzzzy 1 day ago

From a certain perspective it is a rational decision. Because the CPython API relies so heavily on the C stack, either some platform-specific assembly is required to slice up the C stack to implement green threads, or the entire CPython API would have to be redesigned to not keep the Python stack state on the C stack.

Way back in the day [1] the proposal for merging Stackless into mainline Python involved removing Python's stack state from the C stack. However there are complications with calling from C extensions back into Python that ultimately killed this approach.

After this Stackless evolved to be a much less modified fork of the Python codebase with a bit of platform specific assembly that performed "stack slicing". Basically when a coro starts, the contents of the stack pointer register are recorded, and when a coro wishes to switch, the slice of the stack from the recorded stack pointer value to the current stack pointer value is copied off onto the heap. The stack pointer is then adjusted back down to the saved value and another task can run in that same stack space, or a stack slice that was stored on the heap previously can be copied back onto the stack and the stack pointer adjusted so that the task resumes where it left off.

Then around 2005 the Stackless stack slicing assembly was ported into a CPython extension as part of py.lib. This was known as greenlet. Unfortunately all the original codespeak.net py.lib pages are 404 now, but here's a blog post from around that time that talks about it [2].

Finally the relevant parts of greenlet were extracted from py.lib into a standalone greenlet module, and eventlet, gevent, et cetera grew up around this packaging of the Stackless stack slicing code.

So you see, using the Stackless strategy in mainline python would have either required breaking a bunch of existing C extensions and placing limitations on how C extensions could call back into Python, or custom low level stack slicing assembly that has to be maintained for each processor architecture. CPython does not contain any assembly, only portable C, so using greenlet in core would mean that CPython itself would become less portable.

Generators, on the other hand, get around the issue of CPython's dependence on the C stack by unwinding both the C and Python stack on yield. The C and Python stack state is lost, but a program counter state is kept so that the next time the generator is called, execution resumes in the middle of the function instead of the beginning.

There are problems with this approach; the previous stack state is lost, so stack traces have less information in them; the entire call stack must be unwound back up to the main loop instead of a deeply nested call being able to switch without the callers being aware that the switch is happening; and special syntax (yield or yield from) must be explicitly used to call out a switch.

But at least generators don't require breaking changes to the CPython API or non-portable stack slicing assembly. So maybe now you can see why Guido prefers it.

Myself, I decided that the advantages of transparent stack switching and interoperability outweighed the disadvantages of relying on non-portable stack slicing assembly. However Guido just sees things in a different light, and I understand his perspective.

  [1] http://www.python.org/dev/peps/pep-0219/
  [2] http://agiletesting.blogspot.com/2005/07/py-lib-gems-greenlets-and-pyxml.html"

thank you, fzzzy!!!
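
(my own minimal sketch of the "transparent stack switching" fzzzy describes, assuming the standalone greenlet package is installed; the function names are made up. h() suspends the whole f -> g -> h call chain without f or g containing any yield-like syntax, which is exactly what plain generators cannot do:)

  import greenlet

  def h():
      # transparent switch: jump back to the main greenlet; the whole
      # f -> g -> h C/Python stack is saved by the stack-slicing code
      # and restored on the next task.switch()
      main.switch("partial result")
      print("h resumed")

  def g():
      h()

  def f():
      g()

  main = greenlet.getcurrent()
  task = greenlet.greenlet(f)
  print(task.switch())   # prints "partial result"
  task.switch()          # resumes inside h(), prints "h resumed"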

--- random unrelated (fzzzy made this):

http://wiki.secondlife.com/wiki/Mulib/Documentation

---

" greenlet 0.4.0 Downloads ↓

Lightweight in-process concurrent programming

The greenlet package is a spin-off of Stackless, a version of CPython that supports micro-threads called "tasklets". Tasklets run pseudo-concurrently (typically in a single or a few OS-level threads) and are synchronized with data exchanges on "channels".

A "greenlet", on the other hand, is a still more primitive notion of micro-thread with no implicit scheduling; coroutines, in other words. This is useful when you want to control exactly when your code runs. You can build custom scheduled micro-threads on top of greenlet; however, it seems that greenlets are useful on their own as a way to make advanced control flow structures. For example, we can recreate generators; the difference with Python's own generators is that our generators can call nested functions and the nested functions can yield values too. Additionally, you don't need a "yield" keyword. See the example in tests/test_generator.py.

Greenlets are provided as a C extension module for the regular unmodified interpreter.

Greenlets are lightweight coroutines for in-process concurrent programming. "
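
(a rough sketch of my own, not the actual tests/test_generator.py code, of what "our generators can call nested functions and the nested functions can yield values too" looks like on top of the greenlet API:)

  import greenlet

  class GGenerator:
      """Generator-like object built on greenlets: any function in the
      call chain may emit a value, not just the top-level body."""
      def __init__(self, func):
          self.parent = greenlet.getcurrent()
          self.child = greenlet.greenlet(lambda: func(self))

      def emit(self, value):
          # callable from any nesting depth inside func
          self.parent.switch(value)

      def __iter__(self):
          return self

      def __next__(self):
          self.parent = greenlet.getcurrent()
          value = self.child.switch()
          if self.child.dead:
              raise StopIteration
          return value

  def helper(gen, n):
      for i in range(n):
          gen.emit(i)        # "yields" from a nested call, no yield keyword

  def producer(gen):
      helper(gen, 3)         # the nesting is invisible to the consumer

  for x in GGenerator(producer):
      print(x)               # 0, 1, 2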

what i think they mean, with 'yield', by "the entire call stack must be unwound back up to the main loop instead of a deeply nested call being able to switch without the callers being aware that the switch is happening":

if from f() i call g(), and g() yields, then control returns to f().

so if you want g() to be able to say 'i'm not done yet with what f() wants me to do, but i want to yield control temporarily to other greenthreads', then you have to use a trampoline: http://stackoverflow.com/a/8088242/171761

not 100% sure about this but that's my current impression.
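
(a hedged sketch of that trampoline idea with plain generators; names are mine, and it uses the 'yield from' delegation syntax from Python 3.3+. every frame between the scheduler and the code that wants to switch must itself be a generator and re-yield:)

  def g():
      print("g: part 1")
      yield              # hand control back to the scheduler
      print("g: part 2")

  def f():
      # f cannot just call g(); it has to delegate so that g's yields
      # propagate up through f to the scheduler
      yield from g()

  def scheduler(tasks):
      # run the tasks a step at a time, round-robin
      tasks = list(tasks)
      while tasks:
          task = tasks.pop(0)
          try:
              next(task)
              tasks.append(task)
          except StopIteration:
              pass

  scheduler([f(), f()])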


http://lambda-the-ultimate.org/node/2868

---

https://news.ycombinator.com/item?id=2101210

---

http://lambda-the-ultimate.org/node/4592

hmm, makes the interesting point that you need to ask the programmer to distinguish between a temporary yielding of control (to allow, e.g., nonblocking I/O), and an actual non-local exit, to allow dynamic-unwind-like-things (e.g. Python's try/finally blocks) to know when they need to run the finalizer. in general, of course, a coroutine might never have control returned to it, so when is the finalizer actually run?
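
one concrete data point for Python generators: close() injects a GeneratorExit exception at the suspension point, so the finally block runs as a non-local exit rather than a resume; if you never resume or close the generator, CPython only runs the finally when the object is garbage-collected, which may be arbitrarily late. a small sketch (names are mine):

  def coro():
      try:
          print("working")
          yield            # temporary yielding of control
          print("resumed")
      finally:
          print("finalizer ran")

  c = coro()
  next(c)      # prints "working", suspends at the yield
  c.close()    # prints "finalizer ran" -- a non-local exit, not a resume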

---

" For those that already know something about coroutines, it is important to clarify some concepts before we go on. Lua offers what I call asymmetric coroutines. That means that it has a function to suspend the execution of a coroutine and a different function to resume a suspended coroutine. Some other languages offer symmetric coroutines, where there is only one function to transfer control from any coroutine to another.

Some people call asymmetric coroutine semi-coroutines (because they are not symmetrical, they are not really co). However, other people use the same term semi-coroutine to denote a restricted implementation of coroutines, where a coroutine can only suspend its execution when it is not inside any auxiliary function, that is, when it has no pending calls in its control stack. In other words, only the main body of such semi-coroutines can yield. A generator in Python is an example of this meaning of semi-coroutines.

Unlike the difference between symmetric and asymmetric coroutines, the difference between coroutines and generators (as presented in Python) is a deep one; generators are simply not powerful enough to implement several interesting constructions that we can write with true coroutines. Lua offers true, asymmetric coroutines. Those that prefer symmetric coroutines can implement them on top of the asymmetric facilities of Lua. It is an easy task. (Basically, each transfer does a yield followed by a resume.) "
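
(a hedged Python sketch of that last point, with the asymmetric primitives being yield (suspend) and next/send (resume): each coroutine "transfers" by yielding the name of the target, and a tiny dispatcher performs the resume. names and structure are mine:)

  def ping():
      for _ in range(3):
          print("ping")
          yield "pong"     # transfer control to "pong"

  def pong():
      while True:
          print("pong")
          yield "ping"     # transfer control back to "ping"

  coros = {"ping": ping(), "pong": pong()}
  current = "ping"
  for _ in range(6):
      # the dispatcher: resume whichever coroutine was transferred to
      current = next(coros[current])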

http://www.crystalclearsoftware.com/soc/coroutine/coroutine/symmetric_coroutines.html


https://news.ycombinator.com/item?id=2513233

" aliguori 686 days ago

Most C++ implementations throw their hands up at setjmp/longjmp/setcontext/makecontext.

Even if it appears to work, it's a dangerous set of routines to use in C++.


pja 686 days ago

Yeah, this is going to mix very badly with exceptions.

Also, a little rootling around on the web reveals the following statement in IBM's z/OS docs:

"Do not issue getcontext() in a C++ constructor or destructor, since the saved context would not be usable in a subsequent setcontext() or swapcontext() after the constructor or destructor returns."


ncarlson 686 days ago

That's how I've always felt. Coroutines are fun to play with in C++, but I'd never be caught dead using them in production code. "

http://en.wikipedia.org/wiki/Setcontext

---

" search

In computer science, a fiber is a particularly lightweight thread of execution.

Like threads, fibers share address space. However, fibers use co-operative multitasking while threads use pre-emptive multitasking. Threads often depend on the kernel's thread scheduler to preempt a busy thread and resume another thread; fibers yield themselves to run another fiber while executing. The article on threads contains more on the distinction between threads and fibers.

Fibers and coroutines

Fibers describe essentially the same concept as coroutines. The distinction, if there is any, is that coroutines are a language-level construct, a form of control flow, while fibers are a systems-level construct, viewed as threads that happen not to run concurrently. Priority is contentious; fibers may be viewed as an implementation of coroutines,[1] or as a substrate on which to implement coroutines.[2] "


"