kccqzy 1 day ago [-]
One of the coolest things I've seen is to use Prolog with CLP(FD) to solve the 7-11 problem. The problem basically says the sum of the prices of four items is $7.11, and the product is $7.11 too (no rounding); find the prices of these four items.
This can be solved in two lines of code that give the (unique) solution in a second. Not even my expensive Mathematica can do this!
?- use_module(library(clpfd)).
true.
?- Vs = [A,B,C,D], Vs ins 1..711, A * B * C * D #= 711000000, A + B + C + D #= 711, A #>= B, B #>= C, C #>= D, labeling([ff, down], Vs).
Vs = [316, 150, 125, 120],
A = 316,
B = 150,
C = 125,
D = 120 ;
false.
Another great thing Prolog is good at is type inference. After all, type inference, in its simplest form, is just syntax-directed constraint generation and then using unification to solve the constraints—exactly what Prolog gives you by default. You can write a type inference engine for the simply typed lambda calculus in Prolog in 10 minutes. Google used to have a project to do type inference for Python written in Prolog, although they've since [switched away](https://github.com/google/pytype).
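The "10 minutes" claim is plausible even outside Prolog, since unification is a small algorithm. Here is a rough Python sketch (the term and type encodings are invented for the example, not any real API) of syntax-directed constraint generation plus unification for the simply typed lambda calculus:

```python
# Sketch of type inference for the simply typed lambda calculus:
# walk the syntax generating constraints, solve them by unification.
# The term/type encodings here are invented for the example.

def walk(t, subst):
    # Follow substitution chains for type variables (plain strings).
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    # Types are variable strings or ('->', arg_type, result_type).
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str):
        return {**subst, t1: t2}
    if isinstance(t2, str):
        return {**subst, t2: t1}
    if t1[0] == '->' and t2[0] == '->':
        return unify(t1[2], t2[2], unify(t1[1], t2[1], subst))
    raise TypeError(f"cannot unify {t1} and {t2}")

fresh = iter(f"t{i}" for i in range(10**6))   # fresh type variables

def infer(term, env, subst):
    # term is ('var', x) | ('lam', x, body) | ('app', f, arg)
    if term[0] == 'var':
        return env[term[1]], subst
    if term[0] == 'lam':
        a = next(fresh)
        body_t, s = infer(term[2], {**env, term[1]: a}, subst)
        return ('->', a, body_t), s
    if term[0] == 'app':
        f_t, s = infer(term[1], env, subst)
        arg_t, s = infer(term[2], env, s)
        r = next(fresh)
        return r, unify(f_t, ('->', arg_t, r), s)

def resolve(t, subst):
    # Apply the final substitution to get the inferred type.
    t = walk(t, subst)
    if isinstance(t, tuple) and t[0] == '->':
        return ('->', resolve(t[1], subst), resolve(t[2], subst))
    return t

# The identity function \x. x infers to a -> a:
t, s = infer(('lam', 'x', ('var', 'x')), {}, {})
print(resolve(t, s))   # ('->', 't0', 't0')
```

In Prolog the unify/walk half comes for free from `=`, which is why the same engine fits in a handful of clauses.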
reply
triska 1 day ago [-]
Hah! I used exactly this example to motivate the arbitrary precision CLP(FD) implementation, published as The Finite Domain Constraint Solver of SWI-Prolog, FLOPS 2012, LNCS 7294:
https://www.metalevel.at/swiclpfd.pdf
The observations on type inference are also spot on. The project you mentioned was previously discussed on HN, but the page is no longer available:
https://news.ycombinator.com/item?id=12108041
reply
---
" if Prolog had been easier to understand – perhaps with some stronger typing and some greater degree of declarativeness (such as can be found in some experimental descendants of Prolog such as Goedel) it might have survived.
Then again, perhaps not – Ada, after all, is pretty much dead too and it had none of these problems. "
---
---
paganel 2 hours ago [-]
> in a language frowned upon by many programmers today.
Adding to the list, lambda-the-ultimate.org itself is written in Drupal/PHP.
reply
---
buserror 8 hours ago | parent | on: Adobe Photoshop 1.0.1 Source Code (2013)
everything was written in pascal back then, not just Photoshop. I spent years writing pascal as a professional mac developer before Lightspeed C (which became THINK, then Symantec C later on) appeared... Then I never looked back.
MPW (Macintosh Programmer's Workshop) was pretty awesome, if slow as hell -- Turbo Pascal was a lot better to work with -- the only issue is that it 'tokenized' your source code, so it wasn't plain text anymore...
I still miss these 'one pass' compilers; I think it peaked with the MetroWerks toolchain (which kept a Pascal plugin for a long time!) that was IMO the best compiler/linker/debugger toolset ever made.
((my note: MetroWerks made CodeWarrior; i briefly googled it with terms like 'single pass' and 'one pass' and couldn't find anything))
---
" Some features of the language were omitted because of time but the omitted features do not directly relate to the analysis that is done to validate this work. The main features that are not supported yet are inheritance and exceptions. " -- [1]
---
chubot 23 days ago
| parent [-] | on: Python startup time: milliseconds matter |
This is disappointing to me too, but I think there are some problems baked in to the language that make it hard.
The import code in CPython was a mess, which was apparently cleaned up by importlib in Python 3, through tremendous effort. But unfortunately I think importlib made things slower?
I recall a PyCon talk where as of 3.6, essentially everything about Python 3 is now faster than Python 2, EXCEPT startup time!
This is a shame, because I would have switched to Python 3 for startup time ALONE. (As of now, most of my code and that of my former employer is Python 2.) That would have been the perfect time to address startup time, because getting a 2x-10x improvement (which is what's needed) requires breaking changes.
I don't think there's a lack of interest in the broader Python community, but there might be a lack of interest/manpower in the core team, which leads to the situation wonderfully summarized in the recent xkcd:
FWIW I was the one who sent a patch to let Python run a .zip file back in 2007 or so, for Python 2.6 I think. This was roughly based on what we did at Google for self-contained applications. A core team member did a cleaner version of my patch, although this meant it was undocumented until Python 3.5 or so:
https://docs.python.org/3/library/zipapp.html
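For what it's worth, the zipapp module that grew out of that patch can also be driven programmatically; a minimal sketch (the "myapp" directory and its contents are invented for the example):

```python
import os
import subprocess
import sys
import tempfile
import zipapp

# Build a tiny app directory, pack it with the stdlib zipapp module,
# and run the resulting .pyz with the interpreter. The app itself
# (a "myapp" package printing a greeting) is invented for the example.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "myapp")
    os.mkdir(src)
    with open(os.path.join(src, "__main__.py"), "w") as f:
        f.write("print('hello from a zip')\n")
    pyz = os.path.join(d, "myapp.pyz")
    zipapp.create_archive(src, pyz)
    proc = subprocess.run([sys.executable, pyz],
                          capture_output=True, text=True)
    result = proc.stdout.strip()
print(result)   # hello from a zip
```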
The .zip support at runtime was a start, but it's really the tooling that's a problem. And it's really the language that inhibits tooling.
Also, even if you distributed self-contained applications, the startup time is not great. It's improved a bit because you're "statting" a zip file rather than making syscalls, but it's still not great.
In other words, I have wondered about this "failure" for over a decade myself, and even tried to do something about it. I think the problem is that there are multiple parts to the solution, the responsibility for these parts is distributed. I hate to throw everything on the core team, but module systems and packaging are definitely a case where "distributed innovation" doesn't work. There has to be a central team setting standards that everyone else follows.
Also, it's not a trivial problem. Go is a static language and is doing better in this regard, but still people complain about packaging. (vgo is coming out after nearly a decade, etc.)
I should also add that while I think Python packaging is in the category of "barely works", I would say the same is true of Debian. And Debian is arguably the most popular Linux package manager. They're cases of "failure by success".
...
FWIW I think importing is heavily bottlenecked by I/O, in particular stat() of tons of "useless" files. In theory the C to Python change shouldn't have affected it much. But I haven't looked into it more deeply than that.
chubot 23 days ago [-]
EDIT: I should also add that the length of PYTHONPATH as constructed by many package managers is a huge problem. You're doing O(m*n) stat()s -- random disk access -- which is the slowest thing your computer can do.
m is the number of libraries you're importing, and n is the length of the PYTHONPATH.
So it gets really bad, and it's not just one person's "fault". It's a collusion between the Python interpreter's import logic and how package managers use it.
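A toy model of that search loop makes the O(m*n) arithmetic concrete (this is an illustration, not CPython's actual finder):

```python
import os

def find_module(name, search_path):
    """Toy model of path-based import: one existence check (a stat())
    per candidate file in each path entry until the module is found."""
    checks = 0
    for entry in search_path:
        for candidate in (os.path.join(entry, name, "__init__.py"),
                          os.path.join(entry, name + ".py")):
            checks += 1                  # each check is a disk stat()
            if os.path.exists(candidate):
                return candidate, checks
    return None, checks

# With m imports and n path entries, every miss costs about 2*n checks,
# so total work grows as O(m*n). A hypothetical 50-entry PYTHONPATH:
path = [f"/site-packages/pkg{i}" for i in range(50)]
_, cost = find_module("requests", path)
print(cost)   # 100 checks for a single unlucky import
```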
---
https://andrewkelley.me/post/full-time-zig.html
" Here are some of the bigger items that are coming up now that I have more time:
remove more sigils from the language
add tuples and remove var args
self-hosted compiler
http server and http client based on async/await
decentralized package manager
generate html documentation
hot code swapping"
---
regarding removing sigils (see previous section), e says:
" andrewrk commented 12 days ago
Replace a ?? b binary operator with a ifnull b. After this change, all control flow occurs with keywords only.
other suggestions instead of ifnull are welcome. ifnull is nice because it is 2 lowercase words squished together so nobody will miss the identifier name.
Remove ??x prefix operator. Replace it with x.?. After this change and #770, this symmetry exists:
*T is used to make a pointer type, and x.* derefs
?T is used to make a nullable type, and x.? de-nullifies
Both operations have assertions that are potentially unsafe - .* asserts the pointer is valid among other things and .? asserts the value is non-null. Anecdotally, this will look a lot better syntactically, especially with C-translated code. "
---
" a lot of people using it do not have a software development background and do not care that the language is not elegantly designed: they just want to get analytical work done. In that respect, R is far, far superior to Python. Even something as simple as installing a library is a conceptual leap for these people (why wouldn't the software just come with everything needed to work?). Have you ever tried explaining the various python package and environment management options to someone with a background in Excel/SQL? Just getting a basic environment set up can be days of frustrating effort (though Anaconda is getting better with this). Compared to R, where you install RStudio and are off to the races, with a helpful package installation GUI. Another great example: in R, data types are pretty fungible, everything is a vector, coercing things generally "just works". In pandas, it can be very confusing that you need to explicitly turn a 1x1 dataframe into a scalar value. Same thing with Python vs R datetimes. "
---
bokstavkjeks 2 days ago [-]
It's also worth noting that R becomes much more pleasurable with the Tidyverse libraries. The pipe alone makes everything more readable.
I'm also coming from more of an office setting where everything is in Excel. I've used R to reorganize and tidy up Excel files a lot. Ggplot2 (part of the Tidyverse) is also fantastic for plotting, the grammar of graphics makes it really easy to make nice and slightly complex graphs. Compared to my Matplotlib experiences, it's night and day.
...
That said, if anyone's interested in learning R from a beginner's level, I can recommend the book R for Data Science. It's available freely at http://r4ds.had.co.nz/ and the author also wrote ggplot2, RStudio, and several of the other Tidyverse libraries.
riskneutral 2 days ago [-]
Tidy features (like pipes) are detrimental to performance. The best things R has going for it are data.table, ggplot, stringr, RMarkdown, RStudio, and the massive, unmatched breadth and depth of special-purpose statistics libraries. Combined, this is a formidable and highly performant toolset for data analytics workflows, and I can say with some certainty that even though “base Python” might look prettier than “base R,” the combination of Python and NumPy is not necessarily more powerful or even a more elegant syntax. The data.table syntax is quite convenient and powerful, even if it does not produce the same “warm fuzzy” feeling that pipes might. NumPy syntax is just as clunky as anything in R, if not worse, largely because NumPy was not part of the base Python design (as opposed to languages like R and MATLAB that were designed for data frames and matrices).
What is probably not a good idea (which the article unfortunately does) is to introduce people to R by talking about data.frame without mentioning data.table. Just as an example, the article mentions read.table, which is a very old R function which will be very slow on large files. The right answer is to use fread and data.table, and if you are new to R then get the hang of these early on so that you don’t waste a lot of time using older, essentially obsolete parts of the language.
reply
roenxi 2 days ago [-]
> Tidy features (like pipes) are detrimental to performance.
Detrimental to the runtime performance; if you happen to be reading and processing tabular data from a csv (which is all I've ever used R for, I must admit), then you get real performance gains as a programmer. For one thing, it allows a functional style where it is much harder to introduce bugs. If someone is trying to write performant code they should be using a language with actual data structures (and maybe one that is a bit easier to parallelize than R). The vast bulk of the work done in R is not going to be time sensitive but is going to be very vulnerable to small bugs corrupting data values.
Tidyverse, and really anything that Hadley Wickham is involved in, should be the starting point for everyone who learns R in 2018.
> languages like R and MATLAB that were designed for data frames and matrices
Personal bugbear; the vast majority of data I've used in R has been 2-dimensional, often read directly out of a relational database. It makes a lot of sense why the data structures are as they are (language designed a long time ago in a RAM-lite environment), but it is just so unpleasant to work with them. R would be vastly improved by /single/ standard "2d data" class with some specific methods for "all the data is numeric so you can matrix multiply" and "attach metadata to a 2d structure".
There are 3 different data structures used in practice amongst the R libraries (matrix, list-of-lists, data.frame). Figuring out what a given function returns and how to access element [i,j] is just an exercise in frustration. I'm not saying a programmer can't do what I want, but I am saying that R promotes a complicated hop-step-jump approach to working with 2d data that isn't helpful to anyone - especially non-computer engineers.
reply
rpier001 2 days ago [-]
I think what you're saying is mostly on point. I wanted to share a couple possible balms for your bugbears.
For attach metadata to an anything, why not use attributes()/attr() or the tidy equivs? Isn't that what it is for?
It might not make you feel much better, but data.frame is just a special list, cf. is.list(data.frame()). So, if you don't want to use the convenience layers for data.frame you can just pretend it is a list and reduce the ways of accessing data structures by one.
You can paper over the distinction between data.frames and matrices if it comes up for you often enough. E.g.
    `%matrix_mult%` <- function(x, y) {
      if ("data.frame" %in% class(x)) {
        x <- as.matrix(x)
        stopifnot(all(is.numeric(x)))
      }
      if ("data.frame" %in% class(y)) {
        y <- as.matrix(y)
        stopifnot(all(is.numeric(y)))
      }
      stopifnot(dim(x)[2] == dim(y)[1])
      x %*% y
    }
d1 %matrix_mult% d2
... but I'll grant that isn't the language default.
reply
roenxi 1 day ago [-]
...
For my own work I just use tidyverse for everything. It solves all my complaints, mainly by replacing apply() with mutate(), data.frame with tibble and getting access to the relational join commands from dplyr. I'm cool with the fact my complaints are ultimately petty.
> For attach metadata to an anything, why not use attributes()/attr() or the tidy equivs? Isn't that what it is for?
I've never met attr before, and so am unaware of any library that uses attr to expose data to me. The usual standard as far as I can tell is to return a list.
> It might not make you feel much better, but data.frame is just a special list, c.f. is.list(data.frame()). So, if you don't want to use the convenience layers for data.frame you can just pretend it is a list and reduce the ways of accessing data structures by one.
Well, I could. But data frames have the relational model embedded into them, so all the libraries that deal with relational data use data frames or some derivative. I need that model too, most of my data is relational.
The issue is that sometimes base R decides that since the data might not be relational any more it needs to change the data structure. Famously happens in apply() returning a pure list, or dat[x, y] sometimes being a data frame or sometimes a vector depending on the value of y. It has been a while since I've run in to any of this, because as mentioned most of it was fixed up in the Tidyverse verbs and tibble (with things like its list-column thing).
> `%matrix_mult%` <- function(x,y) { if("data.frame" %in% class(x)) { x <- as.matrix(x) stopifnot(all(is.numeric(x))) } if("data.frame" %in% class(y)) { y <- as.matrix(y) stopifnot(all(is.numeric(y))) } stopifnot(dim(x)[2] == dim(y)[1]) x %*% y }
I have got absolutely no idea what that does in all possible edge cases, and to be honest the problem it solves isn't one I confront often enough to look into it.
It just bugs me that I have to use as.matrix() to tell R that my 2d data is all made up of integers, when it already knows it is 2d data (because it is a data frame) and that it is made up of integers (because data frame is a list of vectors, which can be checked to be integer vectors). I don't instinctively see why it can't be something handled in the background of the data.frame code, which already has a concept of row and column number. Having a purpose-built data type only makes sense to me in the context that at one point they used it to gain memory efficiencies.
I mean, on the surface
    data %>% select(-date) %>% foreign_function()

and

    data %>% select(-date) %>% as.matrix %>% foreign_function()
look really similar, but changing data types half way through is actually adding a lot of cognitive load to that one-liner, because now I have to start thinking about converting data structures in the middle of what was previously high-level data manipulation. And you get situations that really are just weird and frustrating to work through, eg, [1].
[1] https://emilkirkegaard.dk/en/?p=5412
reply
rpier001 1 day ago [-]
scale() for example uses attributes to hold on to the parameters used for scaling. Most packages that use attributes provide accessor functions so that the useR doesn't need to concern themselves with how the metadata are stored. I'll grant that people do tend to use lists because the access semantics are easier.
reply
com2kid 2 days ago [-]
> Tidy features (like pipes) are detrimental to performance.
But they are some absolutely amazing features to use. After helping my wife learn R, and learning about all the dypler features, going back to other languages sucked. C#'s LINQ is about as close as I can get to dypler like features in a main stream language.
Of course R's data tables and data frames are what enable dypler to do its magic, but wow what magic it is.
reply
wodenokoto 1 day ago [-]
I think your autocorrect mangled up your `dplyer`s :)
reply
curiousgal 2 days ago [-]
From your experience what makes data.table so useful?
reply
vijucat 2 days ago [-]
Answering questions in a rapid, interactive way (while using C under the hood to be efficient enough that one can run it on millions of rows):
> dt[grepl('Merc', name), list(.N, median(hp))]
   N  V2
1: 7 123

> dcast(dt, cyl + carb ~ ., value.var=c("hp", "wt"), fun.aggregate=list(mean, length))
cyl carb hp_mean wt_mean hp_length wt_length
1: 4 1 77.4 2.151000 5 5
2: 4 2 87.0 2.398000 6 6
3: 6 1 107.5 3.337500 2 2
4: 6 4 116.5 3.093750 4 4
5: 6 6 175.0 2.770000 1 1
6: 8 2 162.5 3.560000 4 4
7: 8 3 180.0 3.860000 3 3
8: 8 4 234.0 4.433167 6 6
9: 8 8 335.0 3.570000 1 1
I used slightly verbose syntax so that it is (hopefully) clear even to non-R users.
You can see that the interactivity is great at helping you compose answers step-by-step, molding the data as you go, especially when you combine with tools like plot.ly to also visualize results.
int_19h 1 day ago [-]
What a lot of people don't get is that this kind of code is what R is optimized for, not general purpose programming (even though it can totally do it). While I don't use R myself, I did work on R tooling, and saw plenty of real world scripts - and most of them looked like what you posted, just with a lot more lines, and (if you're lucky) comments - but very little structure.
I still think R has an atrocious design as a programming language (although it also has its beautiful side - like when you discover that literally everything in the language is a function call, even all the control structures and function definitions!). It can be optimized for this sort of thing, while still having a more regular syntax and fewer gotchas. The problem is that in its niche, it's already "good enough", and it is entrenched through libraries and existing code - so any contender can't just be better, it has to be much better.
reply
extr 2 days ago [-]
Completely agree. dplyr is nice enough but the verbose style gets old fast when you're trying to use it in an interactive fashion. imo data.table is the fastest way to explore data across any language, period.
reply
riskneutral 1 day ago [-]
I strongly agree, having worked quite a bit in several languages including Python/NumPy/Pandas, MATLAB, C, C++, C#, even Perl ... I am not sure about Julia, but last time I looked at it, the language designers seemed to be coming from a MATLAB type domain (number crunching) as opposed to an R type domain (data crunching), and so Julia seemed to have a solid matrix/vector type system and syntax, but was missing a data.table style type system / syntax.
reply
ChrisRackauckas 1 day ago [-]
Julia v0.7-alpha dropped and it has a new system for missing data handling. JuliaDB and DataFrames are two tabular data stores (the first of which is parallel and allows out-of-core processing for big data). This has changed pretty dramatically over the last year.
reply
maxander 2 days ago [-]
No, you are wrong. R is terrible, and especially so for non-professional programmers, and it is an absolute disaster for the applications where it routinely gets used, namely statistics for scientific applications. The reason is its strong tendency to fail silently (and, with RStudio, to frequently keep going even when it does fail.) As a result, people get garbage results without realizing, and if they're unlucky, these results are similar enough to real results that they get put somewhere important. Source: I'm a CS grad working with biologists; I've corrected errors in the R code of PhD'd statisticians, in "serious" contexts.
rpier001 2 days ago [-]
many of the 'silent' failures are easily configured away (some examples, https://github.com/hadley/strict).
zrobotics 2 days ago [-]
While I kind of want to agree with you, I just don't see a better alternative. Do you really want biochemists to have to deal with the horrors of C compilation? In production code I'm very glad my makefile tells clang to fail on absolutely anything, but is that the best we can do? Other commenters have pointed out ways to avoid dangerous things like integer division, but if you think R is hostile then please offer a tenable alternative. The only ones I can think of are python and Matlab, and both are even worse for the intended use.
Yes, R is not my preferred language for anything heavy-duty, but I would guess ~95% of R usage is on datasets small enough to open in excel, and that is where the language truly shines (aside from being fairly friendly to non-programmers).
So yes, there are some problems with R, but what are your proposed improvements? Because if I have to analyze a .csv quickly, I'm going for R most of the time.
reply
teekert 1 day ago [-]
Python 3: Pandas and Seaborn?
I have very quick flows for data processing: load data, make long form, add metadata as categories, plot many things with Seaborn one-liners. I use JupyterLab and treat it like a full lab notebook, including headers, introduction, conclusion, discussion. Works very well for me.
reply
yread 1 day ago [-]
> install RStudio and are off to the races, with a helpful package installation GUI.
Unless the package needs a native component like libcurl of a particular version, in which case it can turn into a couple of hours of blindly trying everything you can think of.
> Another great example: in R, data types are pretty fungible, everything is a vector,
Unless it's a dataframe or factor or string or s3, s4 or s5 or a couple of other things.
And the documentation will tell you the reference paper that you can read and some completely impractical example.
Ugh, feels better now, sorry for the rant.
reply
---
YeGoblynQueenne 1 day ago [-]
Well, the "=" operator is easy to type since it's on pretty much every keyboard and probably has been forever. And it's faster than typing a two-symbol operator like := or <-.
If you want "=" to (more or less) mean what it used to mean "long before computer programming", try Prolog.
?- a = a.
true.
?- a = b.
false.
?- A = a.
A = a.
?- A = a, A = b.
false.
?- A = a, A = B.
A = B, B = a.
?- A == B.
false.
?- A = B, A == B.
A = B.
Nice? Most programmers would pull their hair from the root at all this :)
(hint: "=" is not assignment, neither is it equality and "==" is just a stricter version thereof).
reply
---
" The Linux people ran into a similar problem in 2016. Consider the following code:
extern int _start[];
extern int _end[];

void foo(void) {
    for (int *i = _start; i != _end; ++i) {
        /* ... */
    }
}
The symbols _start and _end are used to span a memory region. Since the symbols are externalized, the compiler does not know where the arrays are actually allocated in memory. Therefore, the compiler must be conservative at this point and assume that they may be allocated next to each other in the address space. Unfortunately GCC compiled the loop condition into the constant true, turning the loop into an endless loop, as described in this LKML post where they make use of a similar code snippet. It looks like GCC changed its behavior in response to this problem; at least I couldn't reproduce the behavior with GCC version 7.3.1 on x86_64 Linux. "
---
" in Ada where you can't just take the address of anything you want, you need to declare it as aliased to begin with). In fact Ada tries hard to make pointers redundant which is a great thing. This and range types and bound check arrays( Ada has a lot lot of great ideas, and "C style" Ada is a treat to use).
So a C replacement should keep the procedural nature of C, with closures perhaps, but not go further in terms of paradigm. "
tzs 2 days ago [-]
Note that if the two pointers are passed to a function, and the comparison is done in the function, the results are different:
#include <stdio.h>

void pcmp(int *p, int *q)
{
    printf("%p %p %d\n", (void *)p, (void *)q, p == q);
}

int main(void)
{
    int a, b;
    int *p = &a;
    int *q = &b + 1;
    printf("%p %p %d\n", (void *)p, (void *)q, p == q);
    pcmp(p, q);
    return 0;
}

That is giving me:

0x7ffebac1483c 0x7ffebac1483c 0
0x7ffebac1483c 0x7ffebac1483c 1

That's compiled with '-std=c11 -O1' as in the article. The result is the same if pcmp is moved into a separate file so that when compiling it the compiler has no knowledge of the origins of the two pointers.
I don't like this at all. It bugs me that I can get different results comparing two pointers depending on where I happen to do the comparison.
reply
JoeAltmaier? 2 days ago [-]
Yes, I agree. It makes far more sense for a language to guarantee that two values are compared using a metric that is invariant. Especially in a language like C, where we expect it to be a very thin abstraction over the machine.
reply
---
on Python:
sametmax 1 day ago [-]
It's going to be very interesting to see if things like:
that were BDFL-blocked will go back to being debated on the mailing list in the next months.
And if yes, will the community stand by its roots or create a new era?
The consequences of which we will only really see in 10 years.
Guido has done an incredible job at being the bogeyman, keeping the language simple and readable. It's a hard job.
Can we pull it off ?
reply
stormbeta 1 day ago [-]
The big one I want to see, because it's one of my biggest frustrations with Python, is to finally make lambdas work like every other contemporary language instead of being inexplicably limited to a single expression simply because Guido couldn't come up with a syntax that agreed with him.
There are so many cases (arguably including the problem this PEP was designed to solve!) where having a real inline closure is just far more readable than having to arbitrarily break every single thing that happens to need 2+ expressions out into named blocks out of sequence.
Other things in Python are either simply a result of the language's age and history, or have real technical pros and cons, but that one irks me because it's an artificial limitation chosen for no reason except aesthetics.
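For the record, the single-expression limitation looks like this in practice (clamp is just a made-up example):

```python
# Allowed: a lambda body is a single expression.
double = lambda x: x * 2

# Not allowed: statement syntax inside a lambda is a SyntaxError, e.g.
#     safe = lambda x: try: 1 / x except ZeroDivisionError: 0
# The forced workaround is to hoist the logic into a named function,
# even when it is used exactly once and inline would read better:
def clamp(x):
    if x < 0:
        return 0
    return x

print(double(3))   # 6
print(clamp(-5))   # 0
```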
reply
---
https://yarchive.net/comp/linux/everything_is_file.html
first few comments are a good read: https://news.ycombinator.com/item?id=17531875
---
" Dependencies (coupling) is an important concern to address, but it's only 1 of 4 criteria that I consider and it's not the most important one. I try to optimize my code around reducing state, coupling, complexity and code, in that order. I'm willing to add increased coupling if it makes my code more stateless. I'm willing to make it more complex if it reduces coupling. And I'm willing to duplicate code if it makes the code less complex. Only if it doesn't increase state, coupling or complexity do I dedup code. "
---
on Python:
riazrizvi 1 day ago [-]
The thing I love most about the language is its conciseness, on several levels.
It's syntactically concise because of its use of whitespace and indentation instead of curly braces.
Expressions are often concise because of things like list slicing and list expressions.
It can be linguistically concise, because it is so easy to fiddle with the data model of your classes, to customize their behavior at a deeper level. For example, it is so quick to design a suite of classes that have algebraic operators, which leads to elegant mathematical expressions of your custom GUI objects for example. But you can also change the way classes are instantiated by overriding Class.__new__ which I've used to make powerful libraries that are a joy to use.
Further elegance can be added through creative use of the flexible argument passing machinery, and the lovely way that functions are also objects.
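A small made-up example of the kind of data-model fiddling described above: algebraic operators via dunder methods (Vec is not a real library class):

```python
class Vec:
    """Tiny 2-D vector with algebraic operators via dunder methods."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __add__(self, other):
        return Vec(self.x + other.x, self.y + other.y)
    def __mul__(self, k):          # scalar multiplication
        return Vec(self.x * k, self.y * k)
    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)
    def __repr__(self):
        return f"Vec({self.x}, {self.y})"

# Algebraic expressions now read like math:
assert Vec(1, 2) + Vec(3, 4) == Vec(4, 6)
assert Vec(1, 2) * 3 == Vec(3, 6)
```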
Your application architecture tends to be concise because there are so many existing libraries, you can often design your application at a more interesting higher level. For example, you don't have to worry about the size of int's.
My big regret with Python 3.0 is that the scoping rules in list expressions changed to prohibit modification of variables in surrounding scope. This degraded the utility of list expressions. The work around is so long-winded.
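Concretely, the scoping change looks like this (a minimal illustration; the assignment-expression note at the end applies from 3.8 on):

```python
x = 99
squares = [x * x for x in range(4)]

# Python 2: the comprehension's x leaked into this scope, so x became 3.
# Python 3: the comprehension has its own scope, so x is untouched.
print(x)         # 99 on Python 3
print(squares)   # [0, 1, 4, 9]

# Since 3.8, an assignment expression inside a comprehension does bind
# in the enclosing scope, partially restoring the old trick:
last = None
last_values = [last := n for n in range(4)]
print(last)      # 3
```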
Besides the odd gripe here and there, the language really gives me a warm glow.
reply
---
small program:
https://www.quaxio.com/bootable_cd_retro_game_tweet/
---
doomjunky 1 hour ago [-]
Functional programming!
Functional programming languages have several classic features that are now gradually being adopted by non-FP languages.
Lambda expressions [1] are one such feature, originating from FP languages such as Standard ML (1984) or Haskell (1990), and now implemented in C# 3.0 (2007), C++11 (2011), Java 8 (2014) and even JavaScript.