--
consider a strictness analyzer which is conservative in that it marks functions as 'maybe strict in argument 1' when it's not sure, and values as 'maybe infinite' when not sure. these can be overridden by declarations
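to make that concrete, here's a minimal haskell sketch of what the analysis verdicts (and the overriding declarations) might look like; these type and constructor names are invented for illustration only:

data ArgStrictness = DefinitelyStrict | MaybeStrict    -- conservative verdict per function argument
data Finiteness    = DefinitelyFinite | MaybeInfinite  -- conservative verdict per value
-- a programmer declaration would simply replace a 'Maybe...' verdict with a 'Definitely...' one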
--
also maybe it notices when it is recursing into a function and if so, strictifies that.
sort of like that haskell library that provides lists that evaluate the whole list when you evaluate any element of it.
but not quite. b/c we still want to allow infinite lists. more like codata: if you find yourself recursing WITHOUT producing any output, THEN strictify.
note that 'recursing' has to include MUTUAL recursion, so that the programmer can choose to divide a big recursive function into multiple functions that call each other
so, in the example with foldl in http://www.haskell.org/haskellwiki/Stack_overflow , we would notice that we are building up a thunk by recursing between + and foldl, without producing any output (this is not codata-like guarded recursion). so we strictify the thunk.
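for reference, a tiny haskell version of that Stack_overflow example (sum0 is just a local name for the wiki's 'sum'):

sum0 :: [Integer] -> Integer
sum0 = foldl (+) 0
-- sum0 [1..1000000]
--   = foldl (+) (0+1) [2..1000000]
--   = foldl (+) ((0+1)+2) [3..1000000]
--   = ...   -- the recursion between foldl and (+) builds an ever-growing thunk and produces no output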
the example with foldr is tougher. looking at the expansion which is explicitly shown in http://www.haskell.org/haskellwiki/Foldr_Foldl_Foldl%27, we see that we actually can't reduce this without knowing that + is associative. So this really falls into the 'almost tail recursive' detection scenario.
however, the strictness analysis should be able to be given "sum1 = foldr (+) 0", and the knowledge that + is associative, and from this derive that sum1 will hang if applied to codata.
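a sketch of that derivation on the concrete definition (nothing here beyond standard haskell):

sum1 :: [Integer] -> Integer
sum1 = foldr (+) 0
-- sum1 [1..] = 1 + (2 + (3 + ...))
-- since (+) is strict in both arguments, no output is ever produced, so sum1 hangs on codata;
-- this is the fact the analysis should be able to derive for sum1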
--
so it seems that data and codata might be just the language we need to talk about the sort of strictness analysis that i want.
btw the terms 'data' and 'codata' are confusing because 'data' already has another definition. when i write about 'data', you don't know if i mean the usual definition of the term, which includes all codata, or the data/codata definition, which excludes some codata. so i'd like to call it finite data vs. infinite data instead of data vs. codata.
so, note that finite data is a value for which, if a substructurally recursive function is strictly (or lazily) applied to it, that function will terminate. infinite data is a value for which, if a guarded recursive function is lazily applied to it, that function will terminate and produce at least one next element.
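to pin down the two definitions, a small haskell illustration (standard examples, nothing specific to this proposal):

-- substructural recursion: each recursive call is on a strict sub-part of the input,
-- so the function terminates on any finite list
len :: [a] -> Integer
len []     = 0
len (_:xs) = 1 + len xs

-- guarded recursion: each recursive call sits under a constructor, so when applied lazily
-- to infinite data it always produces at least the next element before recursing
ones :: [Integer]
ones = 1 : ones

mapStream :: (a -> b) -> [a] -> [b]
mapStream _ []     = []
mapStream f (x:xs) = f x : mapStream f xs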
the traditional use seems to be to restrict the language in order to prove termination. but in this case we want the compiler to hypothetically consider strictification and decide if there's any chance that strictification would convert a terminating program into a non-terminating one.
if the program was already non-terminating, then we don't care (it's the programmer's fault). so if a programmer calls a non-substructurally recursive function on finite data, that's fine, and if they lazily call a non-guarded recursive function on infinite data, that's fine.
what we are considering is strictification. if you strictify something which is applied to finite data, it'll still terminate, it might just be less efficient. but if you strictify something which is applied to infinite data which is not necessarily finite data, you may turn something terminating into something nonterminating. so that's what we want to watch out for.
so, we want to identify things which may not be finite data and which may be infinite data, and avoid strictifying them.
the conservative approach is to assume that anything might be infinite data, and so not strictify unless the programmer declares something as finite data, or unless we infer that. This is similar to my idea that laziness is the default but that optionally 'eagerness' could be attached to values.
however, we could be more aggressive. as noted above, if you have a function that is not terminating on finite data, and furthermore not even a single lazy step terminates on infinite data, then we don't care because the programmer made a mistake. if we have an arbitrary function being applied to arbitrary data, the compiler can reason:
so the only cases in which strictification is forbidden is when (a) the argument is infinite data, and (b) the function operating on it lazily terminates on infinite data.
the conservative thing to do is to assume that any data that we don't understand must be infinite data, and any function that we don't understand must lazily terminate on infinite data.
but we could get more aggressive by prohibiting either undeclared infinite data (e.g. forcing possibly lazy values to be explicitly declared as 'lazy', except when this is inferred), or undeclared functions that terminate on infinite data (e.g. forcing functions that can safely operate on lazy data structures to be declared as such, except when this is inferred). if we did either of these, then the compiler need only check if it can infer either that the inputs are infinite data or that the function is safe on infinite data, and if neither are true, then it can strictify.
(and by strictify, we might mean strictify inputs, or we might mean only strictify if the function finds that it is (possibly mutually) recursively calling itself before producing any output)
note that if a function finds that it is (possibly mutually) recursively calling itself before producing any output, then it must not be guarded recursive.
so how about this. if a function finds that it is (possibly mutually) recursively calling itself before producing any output, then unless it is specially declared as safe for infinite data, we strictify.
the only time this causes a problem is if we have some weird function which recurses on itself more than once before producing output, yet is still safe for infinite data. this is the case in which the programmer is required to declare.
this could be annoying because if the programmer forgets to declare this for such a function, and that function is run on finite data, it will be strictified and everything will seem fine, but then a bug will only crop up when it is run on infinite data. Worse, if that function is in library code, now the library can't be used with infinite data.
so maybe we should require the programmer to declare for EVERY function either 'safe for infinite data' or 'unsafe for infinite data', except when we can infer this at compile time. this would mean automatically identifying guarded recursion, and forcing a declaration in all other cases.
another approach would be to provide both a facility to declare infinite data, and a facility to declare functions safe for infinite data (and therefore unsafe for strictification on infinite data). Now if the library writer forgets to declare their function safe for infinite data, the user can prevent autostrictification by passing in something declared as infinite data.
so, the proposal is as follows:
1) provide a facility to declare that certain inputs to certain functions are cool with infinite data, even though the function may not be guarded recursive
2) provide a facility to declare that certain outputs from certain functions will only yield infinite data if the function is given infinite data input
3) provide a facility to declare that certain outputs from certain functions will always yield finite data, even when the function's inputs are infinite data (e.g. the Haskell function 'take'). This implies (2).
4) provide a facility to declare that certain values are eager data, or that they are lazy data
and then during compile time:
1) infer substructural recursion. Mark such functions as 'only yields infinite data if given infinite data'. (this covers 'sum'; see the sketch after this list)
2) infer which functions only recur on some of their arguments, and mark them as 'always yields finite data' and 'only yields infinite data if given infinite data' in the other arguments (this covers 'take')
3) transitively dataflow-trace eager data through functions that 'only yield infinite data if given infinite data', to mark their outputs as eager when only eager data is provided
4) transitively dataflow-trace lazy data through functions that are not 'always yields finite data', to mark their outputs as lazy when lazy data is provided
5) if any value is marked both 'eager' and 'lazy', raise a compile-time error
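as a concrete illustration of what (1) and (2) would classify (standard haskell; primed names just to avoid clashing with the Prelude):

-- substructural recursion: the recursive call is on a sub-part of the input, so the result can only
-- fail to be finite if the input itself is infinite ('only yields infinite data if given infinite data')
sum' :: [Integer] -> Integer
sum' []     = 0
sum' (x:xs) = x + sum' xs

-- recursion bounded by the Int argument: the output is finite even on an infinite input list
-- ('always yields finite data' in the list argument)
take' :: Int -> [a] -> [a]
take' 0 _      = []
take' _ []     = []
take' n (x:xs) = x : take' (n-1) xs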
and then during runtime:
1) upon exiting a function or calling into a function, for those outputs or parameter values that are marked eager, strictify
2) upon exiting a function, check to see if we are stuck in a mutual recursion loop that has not produced any lazy output in an entire pass. If so, then unless either (a) the data is marked 'lazy', or (b) the data is passing into function inputs marked 'cool with infinite data', strictify.
i think this would allow the runtime to convert foldl to foldl' automatically, while still permitting codata-safe but non-guarded recursive functions (like interpreters), still allowing a client to make such a library function work if the original author forgot to declare it as such (by passing in 'lazy' values), and still allowing stream fusion (because the auto-strictification only occurs when the runtime detects a loop that is not producing lazy output).
now, we don't really need declarations (2) and (3), because if we miss some of these, we only under-propagate the eager attribute and over-propagate the lazy attribute. This just reduces the power of the user to use eager data to overcome laziness (because we're being overly 'safe' about not letting the user force strictness through things that we worry may be like 'iterate'), and increases the clumsiness of the 'lazy' attribute (because it will not be removed through some things that are undetectably like 'take').
also, the 'cool with infinite data' is really about the path between certain inputs and certain arguments to recursive calls, but we might want to attach it just to the function as a whole, b/c that might be simpler.
also, i guess 'cool with infinite data' applies to foldr in general (e.g. with a sufficiently lazy combining function), but not to the foldr inside 'sum1 = foldr (+) 0', so we need to be able to add that modifier on a per-call basis.
so the streamlined proposal is:
1) you can declare functions as 'lazyrec', meaning that you don't want autostrictification within them
2) when calling functions, you can call them thru the functor 'lazyrec', which is like declaring them 'lazyrec' just for that call
3) you can declare values as 'lazy', meaning that you don't want autostrictification applied to them or their heirs
4) you can declare values as 'eager', meaning that you want strictification applied to them and their heirs
during compile time:
1) infer substructural recursion. Mark such functions as 'only yields infinite data if given infinite data'. (this covers 'sum')
2) infer which functions only recur on some of their arguments, and mark them as 'always yields finite data' and 'only yields infinite data if given infinite data' in the other arguments (this covers 'take')
3) transitively dataflow-trace eager data through functions that 'only yield infinite data if given infinite data', to mark their outputs as eager when only eager data is provided
4) transitively dataflow-trace lazy data through functions that are not 'always yields finite data', to mark their outputs as lazy when lazy data is provided
5) if any value is marked both 'eager' and 'lazy', raise a compile-time error
and then during runtime:
1) upon exiting a function or calling into a function, for those outputs or parameter values that are marked eager, strictify
2) upon exiting a function, check to see if we are stuck in a mutual recursion loop that has not produced any lazy output in an entire pass. If so, then unless either (a) the data is marked 'lazy', or (b) the function at the top of the loop is marked 'lazyrec', strictify.
---
http://www.cs.cmu.edu/~me/150/resources/lectures/23/LazyEvaluation.pdf
---
If you haven't read it, check out Why Functional Programming Matters. By Ehud Lamm at Wed, 2007-05-30 18:05
Why Functional Programming Matters by John Hughes (1984)
Let me reemphasize this. This is one of the best motivating papers for lazy evaluation and higher order functions even 23 years after its creation. The most significant feature missing from its treatment of laziness is circular programming, a.k.a. tying the knot. By Derek Elkins at Wed, 2007-05-30 22:15
http://www.haskell.org/haskellwiki/Tying_the_Knot
--
i think this is probably not a good idea, but just putting it out there: should we be lazy only across fn boundaries?
--
mb have laziness only for referentially transparent things, and have strictness for non-referentially transparent things, including not only I/O to the outside world, but also accesses to aliased reference variables.
(purely stochastic functions are technically non-referentially transparent but we'll treat them as if they were; what we are really concerned about is spooky action at a distance, e.g. aliasing)
if you do this, though, then what about accesses to aliased reference variables which are 'within one of the current monads'? this should logically be done lazily, because accesses to something 'within one of the current monads' shouldn't 'count' as aliased references, right? but in that case, since we may have multiple (commutative) monads, this doesn't jibe with the 'one receiver' for non-referentially transparent things (although maybe it does, since each such thing's 'receiver' just means either that the thing itself is accessed only via an API/set of methods, or that accesses to it are syntactic sugar for calls to methods of its monad?).
--
if we do something like that, must make sure that stuff like backtracking monads ( http://www.randomhacks.net/2007/03/12/monads-in-15-minutes/ ) still exist/work
--
an alternative to strict and lazy would be 'strict unless', where everything is strict except when the value is marked as infinite, or when the evaluation of the value would produce (non-undoable?) side effects. this would necessitate that all potentially infinite values be marked (or, perhaps better, that all definitely finite values be marked; this would give the same semantics as laziness). this would block lazy stream fusion, tho, i think
---
laziness in clojure vs haskell:
http://debasishg.blogspot.com/2010/05/laziness-in-clojure-some-thoughts.html
" Clojure is not as lazy as Haskell. And hence the benefits are also not as pervasive. Haskell being lazy by default, the compiler can afford to make aggressive optimizations like reordering of operations and transformations that Clojure can't. With Haskell's purity that guarantees absence of side-effects, deforestation optimizations like stream fusion generates tight loops and minimizes heap allocations. But I hear that Clojure 1.2 will have some more compiler level optimizations centered around laziness of its sequences. "
" Being interoperable with Java is one of the benefits of Clojure. However you need to be aware of the pitfalls of using Java's data structures with the lazy paradigm of Clojure. Consider the following example where I put all accounts in a java.util.concurrent.LinkedBlockingQueue?.
(import '(java.util.concurrent LinkedBlockingQueue))
(def myq (new LinkedBlockingQueue))
(doto myq (.put acc-1) (.put acc-2) (.put acc-3))
Now consider the following snippet that does some stuff on the queue ..
(let [savings-accounts (filter #(= (:type %) ::savings) myq)]
  (.clear myq)
  (.addAll myq savings-accounts))
Should work .. right ? Doesn't ! filter is lazy and hence savings-accounts is empty within the let-block. Then we clear myq and when we do an addAll, it fails since savings-accounts is still empty. The solution is to use a doall, that blows away the laziness and realizes the filtered sequence ..
(let [savings-accounts (doall (filter #(= (:type %) ::savings) myq))]
  (.clear myq)
  (.addAll myq savings-accounts))
Of course laziness in Clojure sequences is something that adds power to your abstractions. However you need to be careful on two grounds :
Clojure as a language is not lazy by default in totality (unlike Haskell) and hence laziness may get mixed up with strict evaluation leading to surprising and unoptimized consequences.
Clojure interoperates with Java, which has mutable data structures and strict evaluation. Like the situation I described above with LinkedBlockingQueue, sometimes it's always safe to bite the bullet and do things the Java way."
---
" Update: It seems worthwhile to point out that memoization to create a persistent from an ephemeral data structure is a premier example of a benign effect, the use of state to evince functional behavior. But benign effects are not programmable in Haskell, because of the segregation of effects into the IO monad. "
---
http://existentialtype.wordpress.com/2011/04/24/the-real-point-of-laziness/
---
maybe have strictness-by-default within modules, but lazy-by-default between modules (e.g. upon values deriving from module inputs). So e.g. if you have
any p = or . map p
if the function 'any' is exposed to other modules, then this'll work as desired (both 'map' and 'or' will be lazy because their input is from another module, and '.' will be lazy because it's operating on an input to this function, and this function is exposed to other modules)
if it was instead a module-internal function, then you'd have to make the '.' explicitly lazy:
any p = or lazy. map p
hmm... alternately, maybe lazy-by-default for point-free functions?
any p = or . map p
would be lazy, but
any p x = or (map p x)
would not?
---
guess: memory consumption of laziness is really only a problem when you have a recursive stack of thunks, by which i mean a thunk of a thunk of a thunk etc where the same subroutine generated many of those thunks. so idea: tag each thunk with a graph of all subroutines that generated any thunks within it (i say graph so you can discriminate vertical stacking from mere horizontal reoccurrence), and if the subroutine on top (in root/head position) is part of a vertical stack in which the same subroutine occurred n times, strictify. not quite; we want to only strictify if the (effective?) control flow graph involved this guy calling itself, not just if the computation has a lot of values 'wrapped' by this guy. e.g. map f: map is never gonna recursively call itself, so even if there are 3 other maps vertically inside f, map is not the problem. and fold can have a memory leak without thunking over itself, right? so what we want is to annotate each thunk with the point of control in the program source from which it was generated, rather than with the contents of the thunk
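a very rough sketch of what 'tag each thunk with the point of control that generated it' might look like as a data structure; every name here is invented for illustration:

type SourcePoint = String    -- stands in for an identifier of a point of control in the program source

data Thunk a = Thunk
  { origin          :: SourcePoint  -- which point in the source generated this thunk
  , sameOriginDepth :: Int          -- how many thunks from the same origin are stacked vertically beneath it
  , body            :: () -> a
  }

-- runtime rule (sketch): when sameOriginDepth exceeds some threshold n, strictify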
strict and lazy annotations; otherwise the compiler and/or runtime decides. need bytecode for 'runtime decides'. constraints on the compiler: respect annotations; if unannotated, nothing which takes a designated-lazy value is strict; and if unannotated and provably cannot take a designated-lazy value, then must be strict if the control flow generating the recursion is not fixed?
note: compiler/runtime can trivially satisfy these by ??? no it can't
should have a 'must be lazy' annotation and a 'must be strict' annotation. Also, values may be annotated as 'lazy' (wait, is this any different from annotating expressions? i think no, because thunks contain expressions; if the head of the thunk has a 'lazy' annotation that's the same, right?). if neither are present, the laziness/strictness is undefined (the compiler/interpreter/runtime can choose), subject to the following guarantees:
bytecodes:
should also have a 'lazy unless proven unnecessary' annotation, meaning, lazy unless the compiler can prove that no infinite data structure is generated
maybe in fact just have four annotations:
and the default is the same as strict-unless-provably-constant-control-flow-generating-thunk? but now there's a language requirement for the programmer to tag their fibonacci generators as 'lazy'..
ok this is getting complicated, let's go back and look at what we know about the design space:
---
the design space:
if we never say 'you decide', then dynamic analysis is not allowed.
if the compiler never says to the runtime 'you decide, and i'm not promising that either one is safe', then you could do without a runtime by defaulting to either lazy or strict depending on which one the compiler guarantees is safe.
by contrast, if the compiler can't make either guarantee, and sometimes says that last thing, then whatever guarantees the language wants to provide must be guaranteed by the runtime. If these involve maintaining metadata about stuff, then the runtime will have to maintain that.
in any one case, the choice between 'strict' or 'lazy' might:
(1) have exactly one 'crashing' choice, in that one choice requires infinite time or infinite memory and the other choice requires neither infinite time nor infinite memory,
(2) have no crashing choice, and have at least one 'right' choice, in that one choice takes finite time and finite memory and no more time and no more memory than the other choice would,
(3) be a tradeoff, meaning that one choice takes more time and the other takes more memory, but in both cases the amount of time or memory required is finite,
(4) have no possible solution, in that both choices take either an infinite amount of time or an infinite amount of memory.
Assuming the programmer didn't choose strict or lazy explicitly, the language must choose (at compile time or at runtime).
now some hypotheses:
---
so now back to ideas:
note that in that example, we see the maps stacked lexically in the source code
so in that example we could avoid unnecessary laziness by, at compile time, setting n to be greater than the number of lexically stacked calls to the same function. But in general, we might pull out a subexpression into a separate function, even one in a separate module.
however, you could just count the total number of occurrences of each function in the source.
but, you could do e.g. "f (\z -> g(h))"; now if f = g = h then you have a depth-3 stack of the same function, even though each name only appears once.
still, you could just count the greatest observed lexical depth in the source.
but, you could have f(x) = x + 1; g(x) = f(f(x)); h(x) = g(g(x)). h(x) = g(g(x)) = g(f(f(x))) = f(f(f(f(x)))).
so, it's the sum of all lexical depths observed over the whole program.
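spelling out that example in haskell, to show that the dynamic stacking depth is bounded by the sum of the lexical depths:

f, g, h :: Int -> Int
f x = x + 1
g x = f (f x)    -- f lexically nested to depth 2
h x = g (g x)    -- g lexically nested to depth 2
-- dynamically, h x = g (f (f x)) = f (f (f (f x))): depth 4 = 2 + 2, the sum of the lexical depths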
note that an interpreter might 'benignly' exceed that (e.g. their runtime recursion through the same piece of source code in a thunk might exceed the sum of all lexical depths observed over the whole program); but, that makes sense, that's just the sort of thing we're trying to catch, because we can't make guarantees about an interpreter's benignness; so a programmer writing such an interpreter who wants voluntary strictification must add strictness annotations.
so you get the idea:
let's think about both simplifying proposals:
i dunno if those are kosher, but if they are, that leads to relatively simple guarantees and implementation:
guarantee:
note: to handle the foldl case as well as the foldr case, we do this with closures as well as thunks (if they are different), see http://www.haskell.org/haskellwiki/Foldr_Foldl_Foldl%27
so, imagine the foldr and foldl case in http://www.haskell.org/haskellwiki/Foldr_Foldl_Foldl%27 . We still seem to be stymied; without knowing that + is associative, even when the runtime notices that our chain is too long, it can't do anything about it. In the foldr case, we have something of the form
1 + (2 + (3 + (4 + (foldr (+) 0 [5..1000000]))))
the point is, you can't just 'reduce' thunks on demand without associativity.
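to see why, compare what an associativity-aware reducer could do with that chain (plain haskell arithmetic):

-- with an associative (+), the known prefix can be collapsed without touching the unknown tail r:
--   1 + (2 + (3 + (4 + r)))  ==  (((1 + 2) + 3) + 4) + r  ==  10 + r
collapsedPrefix :: Integer
collapsedPrefix = ((1 + 2) + 3) + 4   -- evaluates to 10
-- with a non-associative operator no such partial reduction is valid until r itself is known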
but in the foldl case, we have something of the form
> foldl f z []     = z
> foldl f z (x:xs) = let z' = z `f` x
>                    in foldl f z' xs
let z1 = 0 + 1
    z2 = z1 + 2
    z3 = z2 + 3
    z4 = z3 + 4
    ...
in (((z999996 + 999997) + 999998) + 999999) + 1000000
By comparison, foldl' goes:
> foldl' f z []     = z
> foldl' f z (x:xs) = let z' = z `f` x
>                     in seq z' $ foldl' f z' xs
> sum3 = foldl' (+) 0
> try3 = sum3 veryBigList
foldl' (+) 0 [1..1000000] -->
foldl' (+) 1 [2..1000000] -->
foldl' (+) 3 [3..1000000] -->
foldl' (+) 6 [4..1000000] -->
foldl' (+) 10 [5..1000000] -->
so, here we should be able to figure out that we can reduce z1 when the chain gets too long
however, does this have problems with the same cases that foldl' does?
" if the combining function is lazy in its first argument, foldl may happily return a result where foldl' hits an exception:
> (?) :: Int -> Int -> Int
> _ ? 0 = 0
> x ? y = x*y
>
> list :: [Int]
> list = [2,3,undefined,5,0]
>
> okey = foldl (?) 1 list
>
> boom = foldl' (?) 1 list
Let's see what happens:
okey -->
foldl (?) 1 [2,3,undefined,5,0] -->
foldl (?) (1 ? 2) [3,undefined,5,0] -->
foldl (?) ((1 ? 2) ? 3) [undefined,5,0] -->
foldl (?) (((1 ? 2) ? 3) ? undefined) [5,0] -->
foldl (?) ((((1 ? 2) ? 3) ? undefined) ? 5) [0] -->
foldl (?) (((((1 ? 2) ? 3) ? undefined) ? 5) ? 0) [] -->
((((1 ? 2) ? 3) ? undefined) ? 5) ? 0 -->
0
boom -->
foldl' (?) 1 [2,3,undefined,5,0] -->
    1 ? 2 --> 2
foldl' (?) 2 [3,undefined,5,0] -->
    2 ? 3 --> 6
foldl' (?) 6 [undefined,5,0] -->
    6 ? undefined -->
* Exception: Prelude.undefined "
if you treat 'undefined' as an immediate error, yes. However, if you treat 'undefined' as a (runtime detectable) shorthand for an infinite recursion, then this would merely cause the voluntary strictification to abort when 'undefined' is reached.
So in conclusion, (assuming associativity is markable and is used by the voluntary strictification) this approach still has trouble with foldr with non-associative operators, but otherwise seems good. One could imagine that a sufficiently smart compiler could statically identify foldr-like situations used with operators that haven't been declared associative, and throw a warning.
---
although thunks are made of expressions, and the laziness/strictness annotations therefore correspond to places in the source code, there may be times (such as in an interpreter) when the programmer wants to dynamically say whether a given thunk should be treated strictly or not. So perhaps we should allow dynamic modification of 'treat the head of this thunk lazily or strictly'.
note that the language should also have a 'recursive strictify' bit, e.g. if you encounter that in a thunk, not only do you evaluate the head one more time, but you continue recursively evaluating the result until you can't anymore.
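for comparison, haskell already exposes this kind of 'recursive strictify' as force/deepseq from the deepseq package; a minimal sketch of the difference from plain seq:

import Control.DeepSeq (force)

shallow :: [Int]
shallow = let xs = map (+1) [1,2,3] in xs `seq` xs   -- seq evaluates only to weak head normal form (the outermost (:))

deep :: [Int]
deep = let xs = map (+1) [1,2,3] in force xs         -- force keeps evaluating recursively, all the way to normal form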
---
toread taming laziness http://iris.lib.neu.edu/cgi/viewcontent.cgi?article=1037&context=comp_sci_diss
"On the relationship between laziness and strictness", Stephen Chang
summary: Looks like a good paper (it's a dissertation, actually) for its literature refs, but ultimately it is concerned with efficiency and presents a static analysis, whereas i'm mainly concerned with the minimal/simplest things that could be done to ensure (or at least make likely) that the program does not unexpectedly hang by trying to strictify an infinite data structure, but also does not unexpectedly run out of memory by lazily creating a long chain of thunks that could have been evaluated. Also, this dissertation is trying to make initially strict programs more efficient by automatically adding laziness, which i guess means it isn't considering programs which have infinite data structures to start with ("This dissertation tackles the laziness problem from the relatively-unexplored, strict-evaluation-first end of the spectrum. We are not aware of any work that looks to automatically insert laziness into strict programs".)
more comments: Is the thing i proposed above similar to 'stingy evaluation' (section 2.2.1, Subtracting Laziness, copied below)? Or maybe it's more like "Eager Haskell: resource-bounded execution yields efficient iteration", whose abstract reads 'The advantages of the Haskell programming language are rooted in its clean equational semantics. Those advantages evaporate as soon as programmers try to write simple iterative computations and discover that their code must be annotated with calls to seq in order to overcome space leaks introduced by lazy evaluation. The Eager Haskell compiler executes Haskell programs eagerly by default, i.e., bindings and function arguments are evaluated before bodies. When resource bounds are exceeded, computation falls back and is restarted lazily. By using a hybrid of eager and lazy evaluation, we preserve the semantics of Haskell and yet permit efficient iteration.'. Also, 'lenient evaluation' looks pretty cool, especially in the context of Oot's 'massive parallel' use-case/assumptions.
2.2.1 Subtracting Laziness

The most well-known technique to remove laziness from lazy programs is strictness analysis [10, 16, 37, 39, 42, 57, 79, 84], which statically removes laziness by estimating which thunks would be forced during evaluation without introducing non-termination. Others have introduced additional static techniques that build on strictness analysis to remove even more laziness. Faxén [23] uses a whole-program analysis that utilizes labeled types and type inference [22] to identify “cheap eagerness.” The analysis maps labeled expressions to “costs” and expressions with low enough cost are evaluated strictly. Boquist [9] uses an analysis to identify promises that can be removed in case expressions.

Researchers have also explored the idea of combining strict and lazy evaluation in a dynamic setting. In “stingy” evaluation [77] expressions are evaluated strictly, but only for a short period of time, thus yielding only a small occasional improvement. Maessen [51] and Ennals and Peyton Jones [20] additionally introduced similar strict-by-default mechanisms. In both their systems, programs are evaluated eagerly but various runtime heuristics determine when to suspend the computation to preserve lazy semantics.

Id [60] and pH [1] are two languages that try to exploit parallelism in programs using a technique called lenient evaluation, which is a blend of lazy and strict evaluation. To leniently evaluate a function call, the arguments and function body are eagerly evaluated at the same time. When a computation needs the value from another computation, it blocks if the other value is not ready. If some computation, a function argument for example, is not used, then the program never waits for its completion, so it does not affect evaluation of the final answer, even if that computation never terminates. Therefore, though function calls are evaluated eagerly, lenient evaluation ultimately implements a non-strict semantics.

The previous techniques experienced varying degrees of success. Faxén’s cheap eagerness seems to gain the most improvement with regard to fixing space leaks during tail-recursive iterations. The dynamic strategies [20, 51] successfully meet their goal of efficient iteration while still handling infinite data structures (i.e., non-termination) with a reasonable amount of overhead. But these strategies sometimes remove too much laziness, for example where laziness is beneficial in finite programs [62]. Strictness analysis is the sole technique that sees practical use today, though it too struggles to handle tail-recursive iteration and leaves too much laziness in the program. Overall, it seems that the prior strategies have not quite found an ideal balance of lazy and strict, as real-world lazy programmers still resort to manually annotating their programs with strictness annotations such as seq [63] and bang (!)
cites:
skimmed. a clear, easy to read paper. strictness analysis seems to be a compile-time analysis that figures out which functions are strict in which arguments by starting out knowing how a basis set of functions are strict and then computing how functions built from them are strict. There are some wrinkles when it comes to recursion, etc, and this paper describes algorithms for dealing with these cases too.
[16] Chris Clack and Simon L. Peyton Jones. Strictness analysis—a practical approach. In FPCA '85
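to make the idea concrete, the sort of facts such an analysis derives (standard textbook examples, not taken from that paper):

-- given that (+) is strict in both arguments, the analysis derives that f is strict in x and in y:
f :: Int -> Int -> Int
f x y = x + y

-- g is strict in c, but only 'maybe strict' in x, since x is not needed when c is False:
g :: Bool -> Int -> Int
g c x = if c then x else 0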
looked at the abstract and at talk slides (see below), looks like this is about doing speculative execution to improve efficiency and then doing VM hotspot-like runtime tuning to stop speculatively executing expressions that in the past didn't help much, so not what i'm looking for:
[20] Robert Ennals and Simon Peyton Jones. Optimistic evaluation: an adaptive evaluation strategy for non-strict programs. In ICFP '03
looked at the first page of this paper; looks like it is about using global static flow analysis to increase efficiency, not what i'm looking for:
[23] Karl-Filip Faxén. Cheap eagerness: speculative evaluation in a lazy functional language. In ICFP '00
toread, may be good: [51] Jan-Willem Maessen. Eager Haskell: resource-bounded execution yields efficient iteration. In Proceedings of the ACM SIGPLAN Workshop on Haskell, pages 38–50, 200
the "cheap eagerness" paper cites Mycroft "the theory and practice of transforming call-by-need into call-by-value" and says it requires global compiler analysis, so it's probably not what i'm looking for, although this is a different paper. [ 57 ] A Mycroft. Abstract interpretation and optimising transformations for applica- tive programs . PhD? thesis, University of Edinburgh, 1981 .
great read, but not relevant to my current question (talks about times where laziness is and isn't useful in R):
[55] Floréal Morandat, Brandon Hill, Leo Osvald, and Jan Vitek. Evaluating the design of the R language. In ECOOP '12
skimmed, doesn't seem to have much for me, the later paper "Lenient evaluation is neither strict nor lazy" is more up my alley:
[69] Klaus E. Schauser and Seth C. Goldstein. How much non-strictness do lenient programs require? In FPCA '95
weird edge case? in any case talking about adding lazy to strict, so probably not what i want:
[80] Philip Wadler, Walid Taha, and David MacQueen