proj-oot-old-150618-ootLibrariesNotes2

i'm not quite sure how to divide up the core/bundled/recommended libraries.

At one extreme, we have the core language within Oot (this defines the standard AST that macros operate upon, i guess).

Then we have the minimal profile for Oot, eg a very small version of Oot without many libraries, just enough to define the language itself. This is what you'd run on a massively parallel machine where each CPU has little more than 32k (or whatever) of local memory (although couldn't we put the other libraries in shared global memory there?).

Then we have the standard profile, ie everything that is imported by default on PCs.

Then we have the distribution, ie everything that is shipped/distributed. All of this is released and versioned together.

Then we have the 'blessed' libraries, ie things that are screened for quality and usefulness by the Oot team, but not as carefully as the distributed libraries; and which are released and versioned separately from the distribution.

Then we have the 'canonicalized' library set, ie the libraries that the wider community regards as the 'canonical' way to do various tasks.

And then we have everything else that is being tracked by Oot's official packaging system/CPAN analog (is this confined to open source?).

Questions: Should all of these things be considered separately?

which set of library functions should a 'typical Oot user' be assumed to be familiar with (i'm guessing: the minimal profile, the standard profile, or the distribution)? I'm leaning towards saying they should know the minimal profile cold, they should be familiar with every function in the standard profile, and they should be aware of the general capabilities of every library in the distribution.

if a blessed library becomes orphaned, should the core team try to adopt it? (i'm guessing, yes, but they should feel free to cut it if they don't have the manpower)

should we even have blessed libraries? (the motivation for this is to avoid bloating the distribution, but also possibly to provide a way for a library to become 'standard' even if an outsider wants to use their own release system for it)

what are the counterparts of these things in other languages? haskell: i feel like the 'minimal profile' is like the Haskell Prelude. what does GHC import by default (corresponding to our 'standard profile')? here's hawk's: https://github.com/ssadler/hawk/issues/35 . is haskell.base part of the GHC distribution? actually Haskell 2010 specifies a set of libraries, so i guess these would be the std. profile: http://www.haskell.org/onlinereport/haskell2010/haskellpa2.html#x20-192000II

---

http://www.haskell.org/haskellwiki/Functor-Applicative-Monad_Proposal (called the AMP in other places). related to the idea of adding Foldable/Traversable to the Prelude.

---

http://www.haskell.org/haskellwiki/Class_system_extension_proposal

https://ghc.haskell.org/trac/ghc/wiki/InstanceTemplates

---

" a design that makes it easier for new releases to make backwards incompatible changes. One approach to this could be at the package level the way that base-compat operates. Another approach that could be useful to library authors is incorporate versioning at the module level.

Something to keep in mind though is that because the new Prelude needs to try to work with the old Prelude, there are not that many options in the design space. classy-prelude has had the luxury of being able to re-think every Haskell wart. So it was able to remove all partial functions and use Text instead of String in many places. But that process is very difficult for the actual Prelude, which is severely constrained. "

---

" Wednesday, October 01, 2014 Why Traversable/Foldable should not be in the Prelude

Summary: For GHC 7.10, Traversable and Foldable are going to be in the Prelude. I missed the original discussion, but I suspect it's a bad idea.

Types are how Haskell programmers communicate their intentions to each other. Currently, the Haskell Prelude contains:

mapM :: Monad m => (a -> m b) -> [a] -> m [b]

As of GHC 7.10, as part of something known as the Burning Bridges Proposal (ticket, discussion, I can't actually find a full proposal...), that will become:

mapM :: (Traversable t, Monad m) => (a -> m b) -> t a -> m (t b)

Surely that's a good thing? Aren't more general types always better? Isn't the Prelude an archaic beast from the time before? I'd argue functions which are highly polymorphic are hard to use, and hard to think about, especially for beginners. I'd also argue the Prelude is remarkably well designed, not perfect, but quite an impressive feat.

What makes a type signature complex?

I've been thinking recently about what makes type signatures complex, both to practitioners, and to relative beginners. My rough metric is:

    Fully concrete types are usually simple, as long as they aren't too long. The longer a type gets, the more complex it gets.
    Types with functions in them aren't too bad (order-1 types), but as you go up to order-2 types things start to get more complex.
    Fully polymorphic functions can be simpler than concrete functions, since they declare what you don't need to worry about.
    Functions with type classes are more complex, since you need to read the type signature while looking at the context, and need to know each class being used.
    Simple type classes (Eq, Show) aren't too bad, but custom type classes impose more of a burden.
    As you add more type classes, the complexity grows faster than linearly. Three type classes are not three times as complex as one, but quite a bit harder.
    Higher kinded type classes are significantly more complex than kind * type classes, e.g. Monad, Functor. The reason is that instead of having a hole you fill in, you now have a hole which itself has a hole.
    The higher-kinded type classes Monad and Functor aren't as bad as the others, since Functor is really the "simplest" higher-kinded type class, and Monad is required knowledge for IO.
    As you have more higher kinded type classes, the complexity burden grows even worse than for kind * type classes. Two is significantly more complex than one.

By that metric, the old mapM isn't too bad, but the new mapM is quite complex. It has two higher-kinded type classes, and one of them is not one of the common ones. I appreciate that making Foldable and Traversable key to Haskell will probably lead to them being more used, but now all beginners are going to have to wade through the Monad tutorial, their Foldable tutorial and their Traversable tutorial before they start programming (or just give up).

Why generality hurts

There are two main reasons why generality hurts:

Reading type signatures becomes difficult/impossible. We already have that problem with the Control.Arrow module, which (as far as most people use it), is just a pile of tuple combinators. But unlike other tuple combinators, these are ones whose type signature can't be understood. When I want to use &&& or *** I just pick randomly, see if it type checks, then try again. When other people I know want to use these functions they just use an explicit lambda. No one thinks of referring to the documentation, since the documentation presents a unification problem (which most of the people I know could solve), not an intuition.

Reading code becomes difficult. Haskell is brilliant for letting you write a composable pipeline of code that takes some input, does some processing, and produces some output. But that only works if you have enough concrete pieces in each function to read each piece in isolation. As an example:

test = foo . mapM baz . bar

Using the current mapM definition I can, in a fraction of a second, know the approximate shape of what foo consumes, and what bar produces. With the new mapM I don't, and have to keep more context in my head to reason about the code.

Who it hurts

Generality of this nature tends to hurt two types of people:

Beginners are hurt because they need to know more concepts just to get going. As a beginner I read through Data.List regularly to build up weapons in my arsenal to attack larger problems. The new Data.List will be generalised, and reading it won't give the insights I enjoyed. Maybe the beginner can instantiate all Foldable things to [], but that adds a mental burden to exactly those people who can bear it least.

Practitioners, those who are paid to code for a living, will have greater problems with maintenance. This isn't an unsubstantiated guess... I have taken over a project which made extensive use of the generalised traverse and sequence functions. Yes, the code was concise, but it was read-only, and even then, required me to "trust" that the compiler and libraries snapped together properly.

Who it benefits

The benefit probably comes from those who are already using the Applicative/Traversable classes regularly. For these people, they can probably avoid an import Prelude(). I am not against ever changing the Prelude, but I do think that for changes of this magnitude the ideas should probably be prototyped as a separate package, widely accepted, and only then should significant surgery be attempted on the Prelude. The classy-prelude work has gone in that direction, and I wish them luck, but the significant changes they've already iterated through suggest the design space is quite large.

Concluding remarks

I realise that I got to this discussion late, perhaps too late to expect my viewpoint to count. But I'd like to leave by reproducing Henning Thielemann's email on the subject:

        David Luposchainsky wrote:
        +1. I think the Prelude should be a general module of the most commonly
        needed functions, which (generalized) folds and traversals are certainly
        part of; right now it feels more like a beginner module at times.
    It is certainly a kind of beginner module, but that's good. Experts know
    how to import. Putting the most general functions into Prelude does not
    work because:
    1. There are often multiple sensible generalizations of a Prelude
    function.
    2. You have to add more type annotations since types cannot be inferred
    from the functions.
    There is simply no need to change Prelude and all packages that rely on
    specific types. Just don't be lazy and import the stuff you need!
    I should change my vote to:
    -10"

-- http://neilmitchell.blogspot.co.uk/2014/10/why-traversablefoldable-should-not-be.html

(already added this link to plbook)

--

[–]yitz 6 points 1 day ago

    proposal made by Simon Marlow a year and a half ago that if you import Prelude.Foo then NoImplicitPrelude would get set automatically. This would make alternate preludes easier for folks to push.

That is a really nice idea.

[–]WilliamDhalgren 3 points 1 day ago*

right.

If I'm reading that design correctly, the leaf class with InstanceTemplates still needs to be coded for the true hierarchy above it, with a "default instance <classname>" for each superclass template it inherits. The example given has Monad decl still conscious of the default Functor instance in Applicative.

and still gets warnings for any generated instances unless doing a "deriving <classnames>" for all actual classes on the final datatype.

IMHO not as scalable as McBride's proposals, where final instances apparently freely mix declarations from all intrinsically-declared superclasses.

There you only get warnings if pre-empting with explicitly created instances, allowable for a transitional period with a PRE-EMPT pragma, or an error otherwise, without excluding these explicitly from being generated.

[–]edwardkmett 6 points 1 day ago

When I last spoke with Richard we'd talked about including such a component in the proposal. I'm unsure if its absence is an act of omission or commission.

The ability to split a class is particularly dear to me, if we ever want to have the ability to refine our class hierarchies without doing so on the back of every user.

---

"I think a more promising solution to the problem of generic type complexity is making specialization of type signatures easier in code, documentation, and compiler error messages."

---

"libraries in haskell that stood out as being unique":

http://www.reddit.com/r/haskell/comments/1k3fq7/what_are_some_killer_libraries_and_frameworks/

(already added this link to plbook)

summary:

the main ones:

runner-ups:

others:

---

confusing error in Python numpy: if you use an array where a scalar is expected, you get:

"TypeError?: only length-1 arrays can be converted to Python scalars"

of course, you weren't trying to convert anything
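
a minimal sketch of how this comes up (assuming numpy; the exact message wording varies by numpy version):

  import math
  import numpy as np

  a = np.array([1.0, 2.0])
  # math.sqrt expects a scalar; handing it a length-2 array raises
  # "TypeError: only length-1 arrays can be converted to Python scalars"
  # (newer numpy versions say "only size-1 arrays ..."), even though
  # you never asked for a conversion.
  try:
      math.sqrt(a)
  except TypeError as e:
      print(e)

  # a length-1 array, by contrast, converts silently:
  print(math.sqrt(np.array([4.0])))  # 2.0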

---

max and nanmax
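
for reference, the numpy behavior this pairing is about (np.max propagates NaN, np.nanmax ignores it):

  import numpy as np

  a = np.array([1.0, np.nan, 3.0])
  print(np.max(a))     # nan -- a single NaN poisons the ordinary max
  print(np.nanmax(a))  # 3.0 -- nanmax skips NaN entries
  # the same pairing exists for min/nanmin, sum/nansum, mean/nanmean, ...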

---

need a find_first!

http://numpy-discussion.10968.n7.nabble.com/Implementing-a-quot-find-first-quot-style-function-td33085.html
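
a rough sketch of the kind of find_first being asked for (the name and helper are made up, not a numpy API). np.argmax over a boolean mask gives the index of the first True, but it still evaluates the predicate over the whole array and returns 0 when nothing matches, which is exactly the inefficiency the linked thread complains about:

  import numpy as np

  def find_first(a, predicate):
      """Index of the first element satisfying predicate, or -1 (hypothetical helper)."""
      mask = predicate(a)
      idx = int(np.argmax(mask))   # index of the first True, or 0 if there is none
      return idx if mask[idx] else -1

  a = np.array([3, 1, 4, 1, 5, 9])
  print(find_first(a, lambda x: x > 4))   # 4  (the value 5)
  print(find_first(a, lambda x: x > 99))  # -1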

---

numpy 'take' (axis-wise array indexing by array)
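
example of what this refers to (index an array by an array of indices along a chosen axis):

  import numpy as np

  a = np.array([[10, 11, 12],
                [20, 21, 22]])
  # pick columns 2 and 0, in that order, from every row
  print(np.take(a, [2, 0], axis=1))
  # [[12 10]
  #  [22 20]]
  # roughly equivalent fancy indexing:
  print(a[:, [2, 0]])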

---

http://www.yukinishijima.net/2014/10/21/did-you-mean-experience-in-ruby.html

---

i always have to look up how to convert between epoch time and Python datetime objects:

http://partiallyattended.com/2011/10/13/managing-unix-time-in-python/
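
for the record, the conversions in question, standard library only (the naive-local vs UTC distinction is the part that always trips me up):

  import calendar, datetime, time

  epoch = 1397488824  # some Unix timestamp (seconds since 1970-01-01 UTC)

  # epoch -> datetime
  local_dt = datetime.datetime.fromtimestamp(epoch)   # naive, local time zone
  utc_dt = datetime.datetime.utcfromtimestamp(epoch)  # naive, UTC

  # datetime -> epoch
  print(time.mktime(local_dt.timetuple()))    # interprets the tuple as local time
  print(calendar.timegm(utc_dt.timetuple()))  # interprets the tuple as UTC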

---

.NET immutable collections

http://msdn.microsoft.com/en-us/library/dn385366%28v=vs.110%29.aspx

--

it's confusing how in Python, datetime.timedelta(100) is 100 days, but datetime.timedelta(0, 100) is 100 seconds. datetime.timedelta(100) should be 100 seconds.
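
the confusing signature, spelled out (keyword arguments avoid the trap):

  import datetime

  print(datetime.timedelta(100))          # 100 days, 0:00:00  -- first positional arg is days
  print(datetime.timedelta(0, 100))       # 0:01:40            -- second positional arg is seconds
  print(datetime.timedelta(seconds=100))  # 0:01:40            -- much clearer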

--

need library fn to quickly sanitize strings for use in filenames, shell commands, etc

  re.sub(r'[^a-zA-Z0-9_]', '', x) might be enough
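
a minimal sketch of such a helper built on that regex (the name 'sanitize' is made up). whitelisting like this is fine for building file names; for shell commands, real quoting (e.g. pipes.quote / shlex.quote) is the safer tool:

  import re

  def sanitize(s):
      """Keep only [a-zA-Z0-9_]; drop everything else (hypothetical helper)."""
      return re.sub(r'[^a-zA-Z0-9_]', '', s)

  print(sanitize('my file: draft (2).txt'))  # 'myfiledraft2txt'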

--

https://lodash.com/?v3

---

python requests

---

perl6's roll, join, pick, say, as in [1]

---

validation libraries

for example (i dunno if this one is popular/any good, i just saw it in a random search for something else):

https://pypi.python.org/pypi/good/0.0.1-0

---

we discuss this in [2], but here i note that it also serves as a list of useful library functions:

http://t-a-w.blogspot.com/2010/07/arrays-are-not-integer-indexed-hashes.html

---

https://msdn.microsoft.com/en-us/library/aa287104%28v=vs.71%29.aspx

An Extensive Examination of Data Structures
Part 1: An Introduction to Data Structures
Part 2: The Queue, Stack, and Hashtable
Part 3: Binary Trees and BSTs
Part 4: Building a Better Binary Search Tree
Part 5: From Trees to Graphs
Part 6: Efficiently Representing Sets

---

on arrays/lists:

--

"Don't steal good names from the user. Avoid giving a package a name that is commonly used in client code. For example, the buffered I/O package is called bufio, not buf, since buf is a good variable name for a buffer." -- http://blog.golang.org/package-names

"Avoid stutter. Since client code uses the package name as a prefix when referring to the package contents, the names for those contents need not repeat the package name. The HTTP server provided by the http package is called Server, not HTTPServer. Client code refers to this type as http.Server, so there is no ambiguity." -- http://blog.golang.org/package-names

(http://blog.golang.org/package-names probably has other useful tips too, i probably should read it)

--

this looks really cool:

https://github.com/gizak/termui https://news.ycombinator.com/item?id=9276188

--

https://news.ycombinator.com/item?id=9280813

--

np.asanyarray:
  File: /usr/lib/python2.7/dist-packages/numpy/core/numeric.py
  Definition: asanyarray(a, dtype=None, order=None)

Convert the input to an ndarray, but pass ndarray subclasses through.
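
the difference in one example (np.asarray strips ndarray subclasses, np.asanyarray passes them through):

  import numpy as np

  m = np.matrix([[1, 2], [3, 4]])  # np.matrix is an ndarray subclass
  print(type(np.asarray(m)))       # <class 'numpy.ndarray'> -- subclass stripped
  print(type(np.asanyarray(m)))    # <class 'numpy.matrix'>  -- subclass preserved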

--

griddata, meshgrid, transpose, loadtxt, imload (imread), imsave, plot, scatter. what else? look at everything i used in bshanks_thesis/__init__.py; also look at the explicitly imported stuff in atr. Also look at those matlab<-->numpy cheat sheets. Also julia, and that clojure matrix math lib (incanter)

--

use redis data structs and ops for libs, eg "lists, sets, sorted sets, hash tables, pub/sub, hyperloglog, and scripting (lua) support." -- https://news.ycombinator.com/item?id=9304718

eg:
"Simple values or data structures by keys but complex operations like ZREVRANGEBYSCORE.
INCR & co (good for rate limiting or statistics)
Bit operations (for example to implement bloom filters)
Has sets (also union/diff/inter)
Has lists (also a queue; blocking pop)
Has hashes (objects of multiple fields)
Sorted sets (high score table, good for range queries)
Lua scripting capabilities (!)
Has transactions (!)
Values can be set to expire (as in a cache)
Pub/Sub lets one implement messaging"

note: redis 'hyperloglog' is a constant-space, linear-time algorithm for approximately counting uniques in some set in an online (as opposed to batch) manner (as opposed to the naive algorithm, which is linear in space rather than constant). There are 3 operations for it (the Redis commands are PFADD, PFCOUNT, PFMERGE); roughly:

  PFADD(hll, element) -> hll'    (add an element)
  PFCOUNT(hll) -> count          (approximate number of distinct elements added)
  PFMERGE(hll1, hll2) -> hll'    (union of two counters)

i've written the above signatures as if these are non-variadic fns operating on immutable data and returning updated state when necessary, but in Redis, PFADD and PFMERGE are actually variadic (PFADD var element .. element, PFMERGE dst src src .. src), and PFADD and PFMERGE are mutating (PFADD mutates 'var', and PFMERGE takes an additional parameter, 'dst', which it mutates).
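
for comparison, the mutating/variadic form as exposed by the redis-py client (a sketch; assumes a Redis server on localhost):

  import redis

  r = redis.StrictRedis(host='localhost', port=6379)

  r.pfadd('visitors:today', 'alice', 'bob', 'carol')  # variadic; mutates the key
  r.pfadd('visitors:yesterday', 'bob', 'dave')
  print(r.pfcount('visitors:today'))                  # ~3 (approximate distinct count)

  r.pfmerge('visitors:all', 'visitors:today', 'visitors:yesterday')  # mutates 'visitors:all'
  print(r.pfcount('visitors:all'))                    # ~4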

--

http://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis

--

Lodash (the new underscore, apparently)

--

good for quick tests:

the constant: array([[1,2], [3,4]])

also the magic(3) constant (the 3x3 magic number array)

--

isprime

prime factorization
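
a naive trial-division sketch of both (fine for small inputs; a real library would do better, e.g. sympy.isprime / sympy.factorint):

  def isprime(n):
      """Trial division; adequate for small n only."""
      if n < 2:
          return False
      d = 2
      while d * d <= n:
          if n % d == 0:
              return False
          d += 1
      return True

  def prime_factorization(n):
      """Prime factors of n with multiplicity, e.g. 12 -> [2, 2, 3]."""
      factors, d = [], 2
      while d * d <= n:
          while n % d == 0:
              factors.append(d)
              n //= d
          d += 1
      if n > 1:
          factors.append(n)
      return factors

  print(isprime(97))               # True
  print(prime_factorization(360))  # [2, 2, 2, 3, 3, 5]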

--

https://github.com/mahmoud/boltons

https://news.ycombinator.com/item?id=9350562

https://github.com/cool-RR/python_toolbox

https://news.ycombinator.com/item?id=9352253

---

javascript arrays have a 'map' method that is like map-enumerate, passing both an item and its index:

" here's the view m("table", [ todo.vm.list.map(function(task, index) { return m("tr", [ m("td", [ m("input[type=checkbox]") ]), m("td", task.description()), ]) }) ]) " -- http://lhorie.github.io/mithril/getting-started.html

--

"matplotlib works with a number of user interface toolkits (wxpython, tkinter, qt4, gtk, and macosx)"

--

In [829]: vstack([[1,2], [3,4]])
Out[829]:
array([[1, 2],
       [3, 4]])

In [830]: vstack([[1,2], [[3,4], [5,6]]])
Out[830]:
array([[1, 2],
       [3, 4],
       [5, 6]])

---

complaints about golang's std lib:

 TheDong 13 hours ago

Consistent standard library?

It's a crapshoot what will be an interface and what will be a struct, which is one of the fundamental features of the language.

Returned error values are sometimes errors.New (e.g. io.EOF) and sometimes random fmt.Errorf strings that couldn't possibly be handled (e.g. all of tls.go basically), sometimes actual structs (e.g. PathError), and sometimes panics.

If you can't even get error handling right and consistent, what claim do you have to consistency? At least in java it's pretty much all exceptions.

The math library, as mentioned above, is pretty darn iffy.

The existence of both 'path' and 'filepath' is possibly a mistake.

The 'heap' and 'list' in collections both seem like they should be similar, but 'heap' operates on a thing that fulfills its interface and list operates directly on arbitrary objects without some interface needed.

Due to the lack of 'protected', the std library is littered with exported fields you shouldn't use (e.g. gob.CommonType and template.Template.*parse.Tree and so on).

Sure, the library is consistent in many places because the language is so small there's little else it can do but be consistent, but the fact that it falls flat in error handling and has visible warts makes me feel that it's a very broad std library, but far less consistent than, say, Rust's (where everything returns consistent Optional error types) or Java's or Smalltalks. Sure, it's more consistent than the clusterfuck that is ruby and/or python or javascript or (heaven forbid) c/c++, but by no means is being better than any of those an achievement in this category.

chimeracoder 12 hours ago

> Returned error values are sometimes errors.New (e.g. io.EOF) and sometimes random fmt.Errorf strings that couldn't possibly be handled (e.g. all of tls.go basically), sometimes actual structs (e.g. PathError),

This is the advantage of having errors be interfaces rather than concrete values. Once you learn how to use it, it's a strength. The usage of structs that satisfy the interface (as in PathError) is clearly documented and completely predictable. If your concern is that a certain package could use a struct satisfying the interface and does not, well, one advantage of errors as interfaces is that you could propose this change without breaking the backwards compatibility guarantee.

It's really important to keep in mind that errors are adopted from error values in C, which means that their idioms and usage is inspired by that rather than by exception handling. Many new Go programmers (including me, when I started) were more familiar with exception handling in higher-level languages than error handling in idiomatic C, so it does take some getting used to.

> and sometimes panics.

Do you have any examples of the standard library using panics to signal handle-able errors (not counting cases in which the panic is recovered at the top-level in the library and never visible to the caller)?

> The math library, as mentioned above, is pretty darn iffy.

The only complaint I have had about package "math" is that all operations are defined on float64s, which can be annoying, but is far better than any alternative. That's an annoyance, but it's certainly not an inconsistency in the library.

Do you have any other examples of inconsistencies in the math package?

jbooth 10 hours ago

Yeah, but with C's old-school int values for errors, you can always make an equality comparison. The error interface in Go doesn't specify equality, so you can do "if err == io.EOF" in some cases and you're up a creek in other cases. Sure you can do if err.Error() == "String I'm expecting", but as the parent said, fmt.Errorf can easily make that impossible.

 TheDong 10 hours ago

No, the advantage of having an interface is everything can return an error I can typeswitch on. Unfortunately, you can't (in a backwards compatible way) change io.EOF or any of the other 'constant' errors because there's so much code doing 'if err != io.EOF' which now breaks. In addition, it's backwards incompatible due to anyone doing 'reflect.TypeOf' which I guess you could argue is fine to break.

Speaking of reflection, there's a metric TON of runtime panics in the reflect package. Hell, I'm not sure there's a method in there that can't panic.

No doubt, however, you'll say that's cheating and the expected behavior of that package, so how about NewTimer? (playground https://play.golang.org/p/jDKniK3aqa ). It does not give me an opportunity to recover from bad input by passing an error out, it just panics. It is also not documented that it panics afaik and other functions in time (time.Sleep) take negative durations and handle them just fine. This is definitely inconsistent with other std library functions that can take bad input and return errors.

I also think it's disingenuous to say "Keep in mind errors are from C and at least they're better than C"... the comment I'm responding to is a comment regarding go std library being one of the most consistent (implicit compared to ANY LANGUAGE) so I may bring in anything I wish, and sure Go's errors are a step up from C, but they're awful compared to other languages. They're basically passing around strings with no builtin way to create a trace of errors to return up the stack to maybe be printed, no default way to create an optional type with an error value, and no consistent way to determine which of N errors it was that was returned from a function without using a regexp 90% of the time because that damned fmt.Errorf / errors.New... And yes, using a Regexp in your error handling IS a problem.

> defined on float64, which can be annoying, but is far better than any alternative

Funnily enough, the 'rand' library implements everything by just having multiple functions and postfixing them with types (rand.Uint32, rand.Int, rand.Float32) and strconv does something vaguely similar.

Whether one or the other is better, that's another inconsistency in the stdlibrary; do you have many functions per type or only one function for one type? I have no clue how Go could have abused you such that you are able to conceive of not having a builtin int64 compatible 'min' function as a good thing.

Actually, perhaps your post is simply a plea for help as Go has tied you up and holds a gun to your head, in which case I suppose we should mount a rescue mission forthwith! Just let me write a quick timer to keep track of our progress... -1 you s.. err panic

comex 39 minutes ago

I don't know about consistency within the standard library, but NewTimer's behavior definitely sounds consistent with the concept of distinguishing programming errors from exceptional conditions. Passing a negative number as a duration could either be defined behavior (equivalent to zero for programmer convenience and consistency with the principle that timers can fire late) or a programming error (should've checked that whatever you were subtracting to get that duration isn't backwards), but it's not an exceptional condition that can't be predicted in advance. Indeed, checking for a negative number, which is rarely necessary, is no more code than the 'if err' thing, and if it isn't necessary in a particular use of the function, you have the luxury of knowing it can't fail, and you don't need to figure out what will happen if it does.

(I bet you'd be unhappy at Rust irrecoverably panicking the thread for simply performing addition, if the result overflows!)

tomjakubowski 16 days ago

There is a third way. See Haskell's Either l r, or Rust's Result<T, E> types. See http://lucumr.pocoo.org/2014/10/16/on-error-handling/ and http://lucumr.pocoo.org/2014/11/6/error-handling-in-rust/

I don't know why so often, the immediate reaction to legitimate criticism of this wart in Go is to argue against exceptions, even when nobody has even brought up exceptions as an alternative.

vowelless 10 hours ago

> The math library, as mentioned above, is pretty darn iffy.

My biggest disappointments early on was not even having a Max/Min function for ints.

whateveracct 10 hours ago

But there is for float64 ;)

I think this is a classic case of Go's need for "simplicity" hamstringing the language. When handling min/max, they had a few options:

1) Treat numbers as a special case and have a polymorphic min/max that operates on any number type. This is out of the question because it is obtuse and irregular and neither of those things are the "Go way"

2) Properly abstract over numbers somehow. This can be weakly done using Go interfaces but 1) it would require >, < etc to all be methods on the number types. But then Go would need operator overloading and that's not the "Go way" 2) using an interface is actually bad anyways because it doesn't make guarantees that input type =:= output type. To do that you need proper generics and then use a common subtype or take it further and use a Numeric typeclass. This is way too complicated and not the "Go way"

3) Write a min/max for every number type. Due to lack of overloading, each would be named accordingly (minInt64 etc). This is ugly and definitely not the "Go way"

4) Just use the most "general" number type and write min/max for that. You can just cast on the way in and out so this "works". It doesn't require API bloat or more language features, so it's the "Go way"

aikah 10 hours ago

yet the "go way" makes computing really tough :

        x := 1
	y := 2.0
	z := x + y

Error : invalid operation: x + y (mismatched types int and float64)

So much that devs end up using float64 everywhere ...

coldtea 14 hours ago

It's quite good, but e.g. Java SDK has stuff that runs circles around Go's standard library regarding breadth and maturity, especially stuff added since 1.4 (nio, etc).

And some aspects of the Go SDK are horrible in practical use. Case in point, most things math related.

krylon 9 hours ago

Well, yes. But Java's standard library is also huge and full of deprecated stuff. I have not done a lot of Java programming, but when I did play around with Java, I found myself spending most of the time actually browsing through the standard library's documentation (which is, to be fair, really, really good) looking for stuff.

Also, Java has a head start of nearly 15 years on Go, and the Java community is (or used to be, at least) pretty huge.

Not that this invalidates your point.

 nevergo 11 hours ago

Go has the poorest std lib I've ever seen.

andrewchambers 5 hours ago

Compared with? My reference point is python, java, clojure, c, c++. Python is huge and useful but messy and inconsistent. Java is huge and doesn't compose well. Clojure is underdocumented and hardly batteries included. C and C++ can't do anything useful without a bunch of extra libraries. From what I can see, C# is like java in this regard.

Go is a small incredibly useful and composable set of libraries that handles a vast amount of cases with a small amount of code.

 curun1r 8 hours ago

> Can you think of some aspect that was especially well thought-out?

Go routines, channels and select. With very little else, it's possible to write some very useful code in a way that's concise, elegant and easy to reason about.

coldtea 8 hours ago

Only compared to something like C or explicit old-skool Java like threading. A lot of modern languages have CSP built-in or as a lib (Java, Scala, Haskell, Clojure, C++, Ada, Erlang, heck even Rust).

And Go is not quite expressive enough to address higher level, but common, constructs ( https://gist.github.com/kachayev/21e7fe149bc5ae0bd878 ) in a concise and elegant way.

And if you access anything outside of channel provided stuff in Golang, you have to bring your own safety.

---

https://github.com/aturon/rfcs/blob/collections-conventions/text/0000-collection-conventions.md

--

http://www.jsgraphs.com/

other suggestions: https://news.ycombinator.com/item?id=9583384

http://c3js.org/ recc. by https://news.ycombinator.com/item?id=9585859

http://ecomfe.github.io/echarts/index-en.html recc. by https://news.ycombinator.com/item?id=9584899

---

needs an isiterable (Python somehow left this out!)
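
the usual workarounds, as a sketch (duck-type with iter(), or check against collections.abc.Iterable; note that strings count as iterable either way):

  from collections.abc import Iterable  # 'collections.Iterable' on Python 2

  def isiterable(x):
      """True if x can be iterated over (hypothetical helper; includes strings)."""
      try:
          iter(x)
          return True
      except TypeError:
          return False

  print(isiterable([1, 2, 3]))         # True
  print(isiterable(42))                # False
  print(isiterable('abc'))             # True -- often not what you want
  print(isinstance({1, 2}, Iterable))  # True -- the ABC-based alternative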

---

in Python, the syntax: print '.', is very convenient, but often what you actually have to do is:

  sys.stdout.write('.')
  sys.stdout.flush()
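
a tiny hypothetical helper that wraps that boilerplate (write without a newline, flush immediately so progress dots actually appear):

  import sys

  def put(s='.'):
      """Write s with no trailing newline and flush right away (hypothetical helper)."""
      sys.stdout.write(s)
      sys.stdout.flush()

  for _ in range(5):
      put()      # prints '.....' incrementally
  put('\n')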

---

should have something like python.requests but which makes it easy to:

---

another contender to lodash and underscore (and ramda?) for a js library, this one claims to be more functional:

https://github.com/jussi-kalliokoski/trine

discussion: https://news.ycombinator.com/item?id=9699061

---

http://underscorejs.org/ https://lodash.com/ https://github.com/ramda/ramda

---