proj-oot-ootControlNotes2

"I would encourage anyone exploring the design for a different language’s control flow effects to seriously consider the impact of TCP-preserving closures. I think a combination of explicit effect operators, closures which properly preserve the control flow environment, and an ownership/borrowing system to manage some memory and IO effects, would be the ideal type system pattern for a concurrency-forward imperative programming language."

-- https://without.boats/blog/the-problem-of-effects/

(kinda dense and i don't fully understand it, recommend rereading that entire blog post)

---

Sophistifunk 1 month ago:

What is meant by “control-flow-capturing closures” in this article?

    kornel 1 month ago:

Ability to use break and continue inside the closure to affect outer loops.

    soc edited 1 month ago:

An alternative is to remove break, continue, return and throw altogether and benefit from control-flow that is vastly easier to follow due to all the things that cannot happen.

It’s a bit like not having null as an accepted value for references: null doubles the number of branches you have to consider every time you touch a reference.

--- https://lobste.rs/s/sqdqtj/revisiting_smaller_rust#c_taup9x
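
(untested OCaml sketch of the thing closures can't do: inside List.iter there is no break/continue that reaches the outer loop, so the idiomatic escape is a local exception. names mine:)

    (* "break" out of an interior iterator with an exception, since the
       closure cannot affect the enclosing control flow directly *)
    exception Found of int

    let first_negative xs =
      match List.iter (fun x -> if x < 0 then raise (Found x)) xs with
      | () -> None
      | exception Found x -> Some x

    (* first_negative [3; -1; 4] = Some (-1) *)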

---

zachrose 3 days ago:

Programming languages with built-in inversion of control by way of an effects system, like in Elm.

jgwil2 2 days ago:

Could you elaborate on that a little?

zachrose 1 day ago:

Elm is a purely functional language, meaning all you write are the insides of functions that take a value and return a value. There's no imperative way to "do anything" like make an HTTP request; all you can do is return a "command" value from a function, and the core libraries will do it.

This means that the core libraries define the signatures of the functions that an Elm application must implement. (All Elm apps "work the same way" in that sense.) It also means that you never have to mock or stub anything in a test. There are also great implications for performance and maintainability.
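
(to remember the shape of this: an untested OCaml sketch of the same idea, with made-up names, not Elm's API. update never performs IO; it returns a command value, and only the runtime interprets commands:)

    type msg = Clicked | Got_response of string
    type cmd = No_cmd | Http_get of string            (* the effect, as data *)
    type model = { log : string list }

    (* pure: describes the HTTP request instead of performing it *)
    let update model = function
      | Clicked -> (model, Http_get "http://example.com")
      | Got_response body -> ({ log = body :: model.log }, No_cmd)

    (* only the runtime turns command values into real IO *)
    let run_cmd = function
      | No_cmd -> ()
      | Http_get url -> Printf.printf "(the runtime would fetch %s here)\n" url

(which is also why no mocking is needed: update is a pure function, and a test can assert directly on the cmd value it returns)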

---

WalterBright 17 hours ago, on: How C++ Resolves a Function Call

I was able to use my experience implementing C++ name lookup rules to make a less complex scheme for D:

There's no notion of a "primary" template. Templates are looked up like functions are.

C++ has rather complex function overloading rules that differ substantially from template overloading rules. The former is a hierarchical list of rules, the latter is based on partial ordering. D uses partial ordering for both.

D doesn't need argument dependent lookup, because it has modules.

D lookup sees all the declarations in a module at once, not just ones prior to the point of use.

The end result works pretty much the same, and hardly anyone notices the difference. Except for that last point - in C and C++, the order of declarations is inverted. The private functions come first, the public ones last, because lookup is dependent on the lexical order. D code is written with the public ones first, private ones last.
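
(for contrast, OCaml sits on the C side of that last point: top-level lookup within a module is lexical, so private helpers come first unless definitions are explicitly tied together. untested:)

    (* lexical-order lookup forces the private detail to the top... *)
    let helper x = x * 2
    let public_api x = helper x + 1

    (* ...unless the definitions are made mutually visible *)
    let rec public_api' x = helper' x + 1
    and helper' x = x * 2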

---

" Returning Values from Loops

One of the uses of a loop is to retry an operation you know might fail, such as checking whether a thread has completed its job. However, you might need to pass the result of that operation to the rest of your code. To do this, you can add the value you want returned after the break expression you use to stop the loop; that value will be returned out of the loop so you can use it, as shown here:

fn main() {
    let mut counter = 0;

    let result = loop {
        counter += 1;
        if counter == 10 {
            break counter * 2;
        }
    };

    println!("The result is {}", result);
}
"

---

Icon's generators combined with goal-directed execution lead to some admirably concise code, e.g.:

while write(read())

s := "All the world's a stage. And all the men and women merely players"

every write(find("the", s))

every write(someFunction(i to j))

-- https://en.wikipedia.org/wiki/Icon_(programming_language) (210419 version)
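
(a rough, untested OCaml analogue of the last one, with Seq standing in for the generator; some_function is a placeholder just like in the Icon line:)

    (* every write(someFunction(i to j)): "i to j" is a lazy generator
       and "every" drains it *)
    let every_write some_function i j =
      Seq.ints i
      |> Seq.take_while (fun x -> x <= j)
      |> Seq.iter (fun x -> Printf.printf "%d\n" (some_function x))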

---

https://www.reddit.com/r/ProgrammingLanguages/comments/ojgr01/a_better_name_for_monad/

---

i asked a question once about the uses of irreducible control flow, which went unanswered. I still have the question though:

https://cs.stackexchange.com/questions/127891/uses-of-irreducible-control-flow

---

"It’s depressing how many features have died in python-ideas because it takes more than a few seconds for an established programmer to grok them. Function composition comes to mind." [1]

---

"I think Python might be too complicated for pattern matching. The mechanism they’ve settled on is pretty gnarly. I wrote a thing for pattern matching regexps to see how it’d turn out (admittedly against an early version of the PEP; I haven’t checked it against the current state) and I think the results speak for themselves."

"

The part that does feel very Pythonic is that destructuring/unpacking is already pretty pervasive in Python. Not only for basic assignments, but also integrated into control flow constructs. For example, it’s idiomatic to do something like:

for key, val in some_dictionary.items():
    # ...

Rather than:

for item in some_dictionary.items():
    key, val = item
    # ...

Or something even worse, like explicit item[0] and item[1]. So the lack of a conditional-with-destructuring, the way we already have foreach-with-destructuring, did seem like a real gap to me, making you have to write the moral equivalent of code that looks more like the 2nd case than the 1st. That hole is now filled by pattern matching. But I agree there are pitfalls around how all these features interact. "
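
(the ML comparison point, since destructuring there is pervasive in every binding form; an untested OCaml sketch of both shapes:)

    let some_dictionary = [("a", 1); ("b", 2)]

    (* foreach-with-destructuring, like "for key, val in ..." *)
    let () =
      List.iter
        (fun (key, value) -> Printf.printf "%s=%d\n" key value)
        some_dictionary

    (* conditional-with-destructuring, the gap pattern matching fills *)
    let describe = function
      | [] -> "empty"
      | [(key, _)] -> "one entry: " ^ key
      | _ -> "several entries"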

---

ubernostrum 12 hours ago:

Ah yes, Python is famously closed-minded and hateful toward useful features. For example, they’d never adopt something like, say, list comprehensions. The language’s leaders are far too closed-minded, and dogmatically unwilling to ever consider superior ideas, to pick up something like that. Same for any sort of ability to work with lazy iterables, or do useful combinatoric work with them. That’s something that definitely will never be adopted into Python due to the closed-mindedness of its leaders. And don’t get me started on basic FP building blocks like map and folds. It’s well known that Guido hates them so much that they’re permanently forbidden from ever being in the language!

(the fact that Python is not Lisp was always unforgivable to many people; the fact that it is not Haskell has now apparently overtaken that on the list of irredeemable sins; yet somehow we Python programmers continue to get useful work done and shrug off the sneers and insults of our self-proclaimed betters much as we always have)

    gasche 11 hours ago:

It is well-documented that Guido van Rossum planned to remove lambda from Python 3. (For the record, I agree that map and filter on lists are much less useful in the presence of list comprehensions.) It is also well-documented that recursion is severely limited in Python, making many elegant definitions impractical.

---

effect handlers in Retrofitting Effect Handlers onto OCaml (the paper cited in https://discuss.ocaml.org/t/the-road-to-ocaml-5-0/8584 ) are reminiscent of my old idea of the lock-and-key-biological-receptor-like-generalized-exception-like thingee. So let's have effect handlers.
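
(a minimal OCaml 5 sketch so i remember the shape: performing an effect looks like raising an exception, but the handler also receives the continuation and can resume the computation:)

    open Effect
    open Effect.Deep

    type _ Effect.t += Ask : int Effect.t     (* a typed request, receptor-style *)

    let computation () = perform Ask + perform Ask

    let () =
      let result =
        match_with computation ()
          { retc = (fun v -> v);               (* normal return *)
            exnc = raise;                      (* exceptions pass through *)
            effc = (fun (type a) (eff : a Effect.t) ->
              match eff with
              | Ask -> Some (fun (k : (a, _) continuation) ->
                  continue k 21)               (* answer the request, resume *)
              | _ -> None) }
      in
      Printf.printf "%d\n" result              (* prints 42 *)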

---

comment on https://lobste.rs/s/sgn2dk/goto :

" FeepingCreature edited 4 days ago:

It seems that all these cases, possibly except the retry loop, can be handled cleanly with these two language features: as the article notes, labelled loops

label: for (int i ...) {
    continue label;
    break label;
}

and scope guards.

void *mem = malloc(16);
scope(exit) free(mem); // "free" will be called if we return from here

These also avoid the unclear interaction of goto with variable declarations.

(Of course, they aren’t in C. This is why you should use D in -betterC mode :p)

" -- https://lobste.rs/s/sgn2dk/goto#c_lztnqk

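(OCaml's nearest built-in to the scope guard is Fun.protect: the finally closure runs whether the body returns normally or raises. untested sketch:)

    let with_file path f =
      let ic = open_in path in
      Fun.protect ~finally:(fun () -> close_in ic)
        (fun () -> f ic)
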
---

https://arxiv.org/abs/2108.11155

Latent Effects for Reusable Language Components, by Birthe van den Berg, Tom Schrijvers, Casper Bach Poulsen, Nicolas Wu:

    The development of programming languages can be quite complicated and costly. Hence, much effort has been devoted to the modular definition of language features that can be reused in various combinations to define new languages and experiment with their semantics. A notable outcome of these efforts is the algebra-based "datatypes à la carte" (DTC) approach. When combined with algebraic effects, DTC can model a wide range of common language features. Unfortunately, the current state of the art does not cover modular definitions of advanced control-flow mechanisms that defer execution to an appropriate point, such as call-by-name and call-by-need evaluation, as well as (multi-)staging. This paper defines latent effects, a generic class of such control-flow mechanisms. We demonstrate how function abstractions, lazy computations and a MetaML-like staging can all be expressed in a modular fashion using latent effects, and how they can be combined in various ways to obtain complex semantics. We provide a full Haskell implementation of our effects and handlers with a range of examples.

Looks like a nice generalization of the basic approach taken by algebraic effects to more subtle contexts. Algebraic effects have been discussed here on LtU many times. I think this description from section 2.3 is a pretty good overview of their approach:

    LE&H is based on a different, more sophisticated structure than AE&H’s free monad. This structure supports non-atomic operations (e.g., function abstraction, thunking, quoting) that contain or delimit computations whose execution may be deferred. Also, the layered handling is different. The idea is still the same, to replace bit by bit the structure of the tree by its meaning. Yet, while AE&H grows the meaning around the shrinking tree, LE&H grows little “pockets of meaning” around the individual nodes remaining in the tree, and not just around the root. The latter supports deferred effects because later handlers can still re-arrange the semantic pockets created by earlier handlers.

By naasking at 2021-10-14 14:02


Tackling the Awkward Squad for Reactive Programming

https://2020.ecoop.org/details/ecoop-2020-papers/19/Tackling-the-Awkward-Squad-for-Reactive-Programming-The-Actor-Reactor-Model

Sam Van den Vonder, Thierry Renaux, Bjarno Oeyen, Joeri De Koster, Wolfgang De Meuter

Reactive programming is a programming paradigm whereby programs are internally represented by a dependency graph, which is used to automatically (re)compute parts of a program whenever its input changes. In practice reactive programming can only be used for some parts of an application: a reactive program is usually embedded in an application that is still written in ordinary imperative languages such as JavaScript or Scala. In this paper we investigate this embedding and we distill “the awkward squad for reactive programming” as 3 concerns that are essential for real-world software development, but that do not fit within reactive programming. They are related to long lasting computations, side-effects, and the coordination between imperative and reactive code. To solve these issues we design a new programming model called the Actor-Reactor Model in which programs are split up in a number of actors and reactors. Actors and reactors enforce a strict separation of imperative and reactive code, and they can be composed via a number of composition operators that make use of data streams. We demonstrate the model via our own implementation in a language called Stella.

By raould at 2020-09-15 17:48


---

should probably have algebraic effects like https://antelang.org/docs/language/#algebraic-effects

---

" One of the reasons that I liked Objective-C (and other Smalltalk-derived languages) is that they provide a really nice pattern for implementing state machines. You define a class with one subclass per state and one selector per message and then swizzle the isa pointer on each state transition, so each subclass just has the methods implemented for the valid transitions from that state and you get an exception if you try to perform an invalid transition (or your superclass implements default behaviours that transition to an invalid state and do some error recovery).

You can do this in languages like C++ with a delegate object and Boost also has some nice libraries that let you implement states with one class per state and static checks that impossible transitions can’t occur.

C programmers typically implement FSMs with an explicit state enum and a load of switch statements, but pretty much every other language I’ve used (systems languages, application languages and hardware languages) either has nice design patterns or libraries with nice DSLs for implementing them. " -- [2]
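
(the ML flavour of the same pattern, for contrast with enum+switch: one variant per state, every legal transition listed in one place, everything else rejected, like the exception-on-invalid-transition behaviour above. untested sketch with made-up states:)

    type state = Idle | Running | Paused
    type event = Start | Pause | Stop

    let step state event =
      match state, event with
      | Idle, Start -> Running
      | Running, Pause -> Paused
      | Paused, Start -> Running
      | (Running | Paused), Stop -> Idle
      | _, _ -> failwith "invalid transition"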

---

eatonphil 25 hours ago:

How does Zig’s defer work differently than Go’s?

    andrewrk 24 hours ago:

In Go, deferred function calls inside loops will execute at the end of the function rather than the end of the scope.

    eatonphil 24 hours ago:

Oh I didn’t realize Zig had block scoped defer. I assumed they were like Go. Awesome! Yeah that’s a huge pain with Go.
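
(in OCaml terms the difference is just where the cleanup is attached: Fun.protect around the loop body gives the Zig behaviour, releasing per iteration rather than at function exit. untested:)

    let () =
      for i = 1 to 3 do
        Fun.protect ~finally:(fun () -> Printf.printf "released %d\n" i)
          (fun () -> Printf.printf "using %d\n" i)
      done
    (* prints a using/released pair per iteration, not three releases at the end *)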

---

amw-zero 4 hours ago:

I am calling it now - algebraic effects are the “next big thing.” The syntax for it in OCaml 5 is a little clunky, precisely because there’s actually not much syntactic support for it yet and it’s just implemented as a library. But, if you look at languages like Eff or Koka where the syntax is more integrated, the whole concept is very lightweight and solves a ton of problems.

---

thequux 41 minutes ago:

I’ve wanted to see a language with a 2-armed while statement for a while.

In C-ish syntax, we have the classic while loop: while (condition) { things(); } and the do-while loop do { things(); } while(condition);, depending on whether to check the condition at the beginning or the end.

However, I’d love to see a combination of the two: do { some_things(); } while(condition) { other_things();}, which would have similar semantics to while(true) { some_things(); if(!condition) break; other_things();}.

The two-armed while has the benefit of making it clearer where the loop condition is tested and satisfying colleagues that are picky about while loops that appear infinite, while nicely unifying the two types of while loop and being completely backwards compatible with traditional C syntax. The only cost is that you’d lose the ability to emit a diagnostic if you forget the ; after a do-while loop.

(for the nitpickers, yes, Forth has this in the form of its BEGIN WHILE REPEAT construct. However, Forth tends to be a difficult sell in most development shops)

    Moonchild 21 minutes ago:

        Common Lisp has this, e.g.:

        (loop
          with x = 3
          do (format t "a~%")
          until (zerop (decf x))
          do (format t "b~%"))

        This prints a b a b a.
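
(the two-armed while falls out of a plain recursive loop; an untested OCaml sketch of do { A } while (c) { B }, End_of_file handling omitted:)

    (* first arm always runs; second arm runs only while the condition holds *)
    let rec two_armed_while () =
      let line = read_line () in            (* do { some_things() } *)
      if line <> "quit" then begin          (* while (condition) *)
        print_endline ("got: " ^ line);     (* { other_things() } *)
        two_armed_while ()
      end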

---

https://news.ycombinator.com/item?id=24963243

"

curryhoward on Nov 1, 2020, on: Algebraic Effects for React Developers

One important difference between algebraic effects and resumable exceptions is that with algebraic effects, the handler gets access to a reified version of the current continuation. The handler can then use that continuation to resume an arbitrary number of times, or not at all. For example, the handler could resume once for each item in a list, essentially using the continuation to map the list to a new list. The handler can even return the continuation to be invoked later, after the handler has finished. This is very unlike the restricted kind of resumable exceptions that you see in hardware.

Algebraic effects are a structured form of delimited continuations, for people who know about that concept. They give you nearly the full power of monads.

dan-robertson on Nov 1, 2020:

I feel like the reification (or in particular the performance characteristics thereof) is more important than the repeatability.

I think delimited continuations are a much more accurate description of what algebraic effects do. There are implementations of delimited continuations in Common Lisp based on the condition system but if you have too many continuations (or if you switch between them too often) you will blow up the stack. Other implementations (of async rather than continuations) use macros to translate source to continuation passing style and then rely on the compiler being clever, which sbcl is ok at.

Note that you can implement resumable exceptions so long as you have a reliable way to do nonlocal transfers of control (eg return-from or go in CL; or some kind of exception which can only be caught by a try that you write and not by someone else’s, though the finally would still run), and some dynamically scoped list of first-class functions. Implementing delimited continuations is much more involved. "

---

"Here are just a few of the places I literally didn't want, and/or currently don't like, what Rust wound up with:... Exterior iteration. Iteration used to be by stack / non-escaping coroutines, which we also called "interior" iteration, as opposed to "exterior" iteration by pointer-like things that live in variables you advance. Such coroutines are now finally supported by LLVM (they weren't at the time) and are actually a fairly old and reliable mechanism for a linking-friendly, not-having-to-inline-tons-of-library-code abstraction for iteration. They're in, like, BLISS and Modula-2 and such. Really normal thing to have, early Rust had them, and they got ripped out for a bunch of reasons that, again, mostly just form "an argument I lost" rather than anything I disagree with today. I wish Rust still had them. Maybe someday it will!" -- https://graydon2.dreamwidth.org/307291.html

https://langdev.stackexchange.com/questions/2670/what-are-interior-iteration-and-exterior-iteratio

https://types.pl/@graydon/109995318136950657

"For example, JavaScript includes both a forEach method (internal iteration) and an iterator protocol (external iteration); the latter is what is wrapped up by the built-in for ... of foreach syntax."
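
(the distinction in OCaml terms, untested: List.iter is interior iteration, the collection drives and calls the closure back; Seq.uncons is exterior, the caller holds a cursor in a variable and advances it:)

    (* interior: the collection runs the loop *)
    let () = List.iter print_endline ["a"; "b"; "c"]

    (* exterior: the caller advances the cursor *)
    let () =
      let rec drain s =
        match Seq.uncons s with
        | Some (x, rest) -> print_endline x; drain rest
        | None -> ()
      in
      drain (List.to_seq ["a"; "b"; "c"])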

---

" icefox edited 2 days ago


This is a lovely write-up. It brings together a lot of my own experiences with trying to play with coroutines for Garnet; the Hard Problem is basically “what do you do with your stack, when you can stop and resume your code flow multiple times?” It’s a tough problem, and I’m not totally convinced that Rust-style async is the best solution, but I’m a lot more convinced than I used to be that it’s a reasonable local minimum. This article really does a good job explaining all the moving parts involved: internal vs external iterators, green threads, blocking vs. nonblocking I/O, continuations vs polling, etc. They’re all very intimately related! ...

I like how you put “what do you do with your stack when you stop and resume your code flow multiple times”. I learned from Marce Coll’s blog post on coroutines that the RSP register is just a number and can be swapped at runtime. To your list of related things (iterators, green threads, blocking/nonblocking I/O, continuations, polling) I shall add another, which is “closures”: if you think of the coroutine as owning its own stack, the RSP is switched to that stack at yield time and the closure has its associated storage. (What’s the link between a closure and a coroutine?) I’m working on a state machine syntax called statelines, which is meant to efficiently compose dispatch events thrown from any context, from any thread and any coroutine.

If you think of a tail recursive function (or even a non-tail call that shares the same stack context but is a function at a different address that you JMP to), it’s just a form of closure.

    icefox edited 2 days ago:
        I shall add another thing which is “closures”.
    Yes! Thank you. I keep forgetting closures are a problem ‘cause I’m so used to how Rust solves that problem: in the common cases its non-escaping closures Just Work. However that solution makes a lot of the more interesting uses of closures not worth the trouble. If you want your closure to escape then you have to box it and possibly box/refcount some of its environment too, and it becomes a pain in the ass and it’s easier to do whatever you’re doing some other way.
    But yeah, a coroutine is at its heart a jump and a stack switch – ie, saving the old value of the stack pointer and loading a different one. You can store the old stack pointer value in a struct or global somewhere, or on its own dedicated stack elsewhere in the runtime, or whatever, and you get symmetric and asymmetric coroutines, but the coroutine itself is still a jump and a stack switch. The issue then becomes one of memory management: with normal function calls without recursion the stack’s memory management is entirely linear and can be handled completely at compile-time; the compiler can infallibly figure out when to allocate and free a stack frame. Adding non-tail recursion adds loops to the linear control flow and so you need some runtime memory management (ie, the call stack itself) and can get runtime memory errors, ie stack overflow. But that’s a pretty small special case that’s not usually too problematic in practice.
    But adding coroutines breaks this by turning the straight-line memory management algorithm of “enter function, push stack frame, leave function, pop stack frame” into one that might need to handle a branching data structure instead of just a stack. Non-escaping coroutines can make your stack look more like a tree, where each coroutine can borrow things higher in the tree than its own location but can’t borrow things below it or across branches, unless the coroutine moves… And escaping coroutines can basically borrow or own data from wherever they want and you have to figure it out at runtime. Hence why I feel like having a GC is necessary to get the most juice out of coroutines. Being able to move bits of your stack around and rewrite the pointers to it at runtime is basically required to make it safe, ‘cause you can’t always solve the ownership problem at compile time.
    And if you allow coroutines to recurse or mutually recurse in unbounded ways, they become stackful; instead of the compiler being able to figure out how to turn your coroutine’s environment into a single fixed-size struct (like it can for a tail-recursive function to turn it into a closure!), each coroutine needs to be able to allocate its own stack at runtime, and all the issues involved with trying to understand it at compile-time expand exponentially.
    And for using coros to make arbitrary state machines, as far as I’ve experimented, you really want to have escaping coro’s, and stackful ones can be convenient. Hence why I wonder if a different way of modeling state machines might be more convenient. I’d love to see what you come up with, when it’s in a state to share!
    Someday I need to really write all this down. I have fragments, but to really explain what’s in my head I’d need to spend like 15 hours making diagrams."

-- https://lobste.rs/s/6fjkeh/why_async_rust#c_zfzfer
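
(relatedly: OCaml 5 effect handlers are exactly this jump-plus-stack-switch; each handled computation runs on its own runtime fiber, and continue switches back to it. untested sketch of a yielding producer:)

    open Effect
    open Effect.Deep

    type _ Effect.t += Yield : int -> unit Effect.t

    let yield v = perform (Yield v)

    (* the handler is the consumer; [continue k ()] jumps back to the
       producer's stack right after the yield *)
    let run producer =
      try_with producer ()
        { effc = (fun (type a) (eff : a Effect.t) ->
            match eff with
            | Yield v -> Some (fun (k : (a, _) continuation) ->
                Printf.printf "got %d\n" v;
                continue k ())
            | _ -> None) }

    let () = run (fun () -> yield 1; yield 2; yield 3)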

---