proj-oot-ootNotes23

golang has:

select {
case ch1 <- 0: print(0)
case ch1 <- 1: print(1)
case ch2 <- 2: print(2)
}

where the case is randomly chosen amongst those that are ready. For example, if there is someone else at the other end of ch1 reading it, but no one reading from ch2, then either the 0 case or the 1 case will be chosen, never the 2 case.

thx [1]

---

The thing to realise about Clojure is that it isn't developed in the open like Python. It is a language controlled very tightly by Rich Hickey and Cognitect. Major work is done in secret (transducers, reducers, spec), then announced to the world with a "Here it is!", and only then are suggestions taken. This mostly goes well, though it took a lot of outside persuasion to accept that Feature Expressions weren't the best idea, and to come up with Reader Conditionals instead.

---

why not clojure?

---

"

kentonv 132 days ago

parent [-]on: E Programming Language: Write Secure Distributed S...

...

Cap'n Proto is very much E's distributed programming concepts "ported" to other languages, and Mark helped me get the details right.

Interestingly, E takes a very different approach from standard distributed systems practices today. Most distributed programming today emphasizes stateless servers performing idempotent operations on a monolithic datastore. Stateful servers are considered too hard to get right. What E did is actually provide the vocabulary needed to be able to reason about stateful servers, so you could get them right. Stateful servers are able to achieve massively better performance, especially in terms of latency, because they don't need to hit persistent storage for every single operation. You obviously need some way to deal with machine and network failures, but E provides that. "

---

this is confusing in Python: 'namedtuple' returns a class, but if you forget that and think it returns an instance, some things appear to work:

import collections

n = collections.namedtuple('my_named_tuple_class', 'x')  # n is a class, not an instance
n.x = 3  # "works": sets a class attribute, clobbering the 'x' descriptor
n.x = 5  # still no error to hint at the mistake
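to make the contrast concrete, here's a sketch (the 'Point' name is just for illustration):

```python
import collections

Point = collections.namedtuple('Point', 'x')  # Point is a class, not an instance

p = Point(x=3)      # correct usage: instantiate it first
assert p.x == 3
try:
    p.x = 5         # real instances are immutable tuples
except AttributeError:
    print("can't assign to a namedtuple field")

# assigning on the class itself silently succeeds, though -- it just
# replaces the 'x' descriptor with a plain class attribute, which is
# why the mistake above goes unnoticed:
Point.x = 5
print(Point.x)  # 5
```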

---

interestingly nim has exploratory numerical computing stuff, e.g. https://github.com/johnnovak/johnnovak.site/blob/master/blog/files/2016-09-21/src/gammatransfer.nim which i think makes the graphs on page http://blog.johnnovak.net/2016/09/21/what-every-coder-should-know-about-gamma/

---

this article is mostly a long list of complaints that JS is not statically typed:

https://arielelkin.github.io/articles/why-im-not-a-react-native-developer

the complaints/wishlist items are common ones but i'll still make a list b/c it helps us remember some stuff we want in oot:

class Rectangle {
  constructor (width, height) {
    this.width = width
    this.height = height
  }
  area () {
    return width * height
  }
}

class Square extends Rectangle {
  constructor(){
    super() 
  }
}

var myRect = new Square()

console.log(myRect.area()) // ReferenceError: width is not defined
if (1 > 2)
  console.log("ha"); console.log("he")
  console.log("hi")
  console.log("ho")

// only console.log("ha") is guarded by the if; "he", "hi" and "ho" always run

e likes JS with Flow okay but wishes it were part of JS itself, so that the ecosystem would use it more.

---

a security problem in JS: autoconversion makes it easy to obfuscate code

http://jazcash.com/a-javascript-journey-with-only-six-characters/

---

on some good things about PHP:

" ...state. Every web request starts from a completely blank slate. Its namespace and globals are uninitialized, except for the standard globals, functions and classes that provide primitive functionality and life support. ... concurrency. An individual web request runs in a single PHP thread. ... Finally, the fact that PHP programs operate at a request level means that programmer workflow is fast and efficient, and stays fast as the application changes. Many developer productivity languages claim this, but if they do not reset state for each request, and the main event loop shares program-level state with requests, they almost invariably have some startup time. For a typical Python application server, e.g., the debugging cycle will look something like “think; edit; restart the server...I claim that PHP’s simpler “think; edit; reload the page” cycle makes developers more productive. "

---

" the Crystal[1] programming language, which has looked closely on mainstream programming languages and adopted the good parts of them, like Go's concurrency model, Node's asynchronous I/O, Ruby's clean & straight-forward syntax and object-orientation model and the performance and interoperability with C."

https://crystal-lang.org/

---

"...numeric computation is a potential domain of interest, and its one where "const generics" would be a great help" -- [2]

what are "const generics"? https://github.com/rust-lang/rfcs/issues/1038 suggests they are just things like SmallVec<32>, where a numeric constant is hardcoded into the type

---

Rust's proposed 2017 roadmap's non-goals [3] are pretty interesting in that they are diametrically opposed to my thoughts for Oot (where there is always an oot-next):

" Non-goals

Finally, it's important that the roadmap "have teeth": we should be focusing on the goals, and avoid getting distracted by other improvements that, whatever their appeal, could sap bandwidth and our ability to ship what we believe is most important in 2017.

To that end, it's worth making some explicit non-goals, to set expectations and short-circuit discussions:

    No major new language features, except in service of one of the goals. Cases that have a very strong impact on the "areas of support" may be considered case-by-case.
    No major expansions to std, except in service of one of the goals. Cases that have a very strong impact on the "areas of support" may be considered case-by-case.
    No Rust 2.0. In particular, no changes to the language or std that could be perceived as "major breaking changes". We need to be doing everything we can to foster maturity in Rust, both in reality and in perception, and ongoing stability is an important part of that story."

---

on Swift's unacceptably long compile times being caused by a type inference algorithm with bad time complexity:

mahyarm 7 hours ago [-]

Most of the compile time issues comes from type inference (exponential algo!), a really simple file based incremental compilation logic system and generic / struct specialization. If you could make a version of swift that turns off those two features and improves incremental compilation then it would probably be pretty good.

Compile speeds go into the seconds when you try simple things like append 10 array variables with a + operator because of type inference!!

Swift is a nice language otherwise. Most iOS devs & tooling are small teams creating projects that are around 30-50 kloc total. It works ok then, but when you get to 100kloc sizes, all of that fancy stuff is a relative waste in comparison to compile speed and xcode's indexer crashing and dying all the time.

Backend server projects usually have codebases that are far larger than clients after a while, so I worry about using it in the backend like that.

reply

tspike 40 minutes ago [-]

Some anecdata: I'm working on a 100k+ LOC Swift project and I can confirm, compilation is a dog and much of my day is spent getting myself back on track after waiting for compiles to finish.

That said, I'm in love with the language and I'd rather wait for a compile to finish than have a server blow up because of a null pointer exception. After working on a large Python backend, it's been really nice having a compiler to watch my back.

reply

melling 10 hours ago [-]

I don't think it's 10x slower but it's definitely slower. If you have lots of "complicated" type inferencing the compiler does have problems:

https://spin.atomicobject.com/2016/04/26/swift-long-compile-...

https://thatthinginswift.com/debug-long-compile-times-swift/

reply

...

pcwalton 8 hours ago [-]

I wouldn't say Swift is simpler than Rust. The complexity is just in different places. For instance, Swift leans on OO quite heavily (as it has to for compatibility with Objective-C), and as a result its typechecker is quite a bit more complex: it's a full constraint solver as opposed to Rust's typechecker, which has a much simpler "expansion/contraction" heuristic. Of course, on the other hand, Rust has the whole lifetime system that complicates things.

reply


Gankro 2 hours ago [-]

Yeah totally agreed. Swift is rife with way more special behaviours/magic to try to smooth over sloppy programming and just make stuff work. There's even a standard term for pushing a problem into the compiler: compiler heroics. Where heroics fall down, the language needs extra annotations to help the compiler out (@escaping closures is the most obvious).

Swift also has a lot of slightly more ergonomic but otherwise largely unnecessary features: initializers vs static funcs, guards vs ifs, throws vs returning an enum, fileprivate vs private.

Moving to working on Swift from Rust has led to me really appreciating how simple a lot of Rust really is. There's a lot of "oh it's literally just X".

But the lifetime thing is a huge empowerer of Rust being simpler everywhere else. It means a lot more stuff can be "just X", because X can be made safe.

reply

---

pjmlp 10 hours ago [-]

OS/2 had its own version of COM, called SOM.

The best thing about it was that it also supported Smalltalk-style metaclasses. So one could go crazy with OO metaprogramming.

Sadly it died with OS/2.

Then there was the other multi-platform OO ABI project from Apple, Sun and IBM, Taligent. Also not that successful.

COM is ok to work with, when done from .NET or now UWP point of view. C++/CX and the recently announced C++/WinRT are still ok, even if a bit more wordy.

The real pain starts increasingly with C++ WTL, C++ ATL, C++ MFC or for the real masochists pure C with COM related macros and the COM IDL compiler.

Then there was that other thing called CORBA, which makes the initial versions of Java EE feel like being in heaven.

reply

rpeden 10 hours ago [-]

I've mostly used COM from .NET, and as you said, it's fairly painless.

At one point, we were having issues with a COM DLL we were using from .NET, so I thought it might be fun and useful to dive in and learn more about COM. I bought Don Box's COM book, and tackled COM in Plain C: http://www.codeproject.com/Articles/13601/COM-in-plain-C

It was somewhat less fun than I'd hoped it would be.

XPCOM seems to be (somewhat) alive and kicking: https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XPCOM ; Firefox and VirtualBox are the only apps I've seen that use it, though I imagine at least a few others do as well.

michaelsbradley 12 hours ago [-]

OPC UA (IEC 62541) provides a robust platform-neutral information model, and multiple options for transport. Similarly, it specifies events, subscriptions, discovery and several higher-level abstractions (e.g. alarms and programs). In some sense, it's a cross-platform successor to COM/DCOM.

Some open source implementations are incomplete, but work continues.

https://en.wikipedia.org/wiki/OPC_Unified_Architecture

https://github.com/open62541/open62541

https://github.com/OPCFoundation

You may also find Woopsa interesting, depending on your use cases.

http://www.woopsa.org/

reply

---

JSON parsing spec details:

http://seriot.ch/parsing_json.html

---

" Like the numerical for loop, the generic for loop behaves a little differently in Lua 5.1 compared to Lua 5.0.2. In the following example:

local a = {[1]=2,[2]=4,[3]=8}
local b = {}
for i,v in pairs(a) do b[i] = function() return v end end
print(b[1](), b[2](), b[3]())

Lua 5.0.2 will print out 3 nils, while Lua 5.1 will print out 2, 4 and 8. In Lua 5.0.2, the scope of the external iterator variables encloses the for loop, resulting in the creation of a single upvalue. In Lua 5.1, the iterator variables are truly local to the loop, resulting in the creation of separate upvalues. "
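python has a closely-related gotcha, btw: the loop variable is one shared binding, so closures created in a loop all see its final value (quick sketch):

```python
# every lambda closes over the same variable v, not over its value
# at creation time, so all three return the final value:
fns = [lambda: v for v in [2, 4, 8]]
print([f() for f in fns])  # [8, 8, 8]

# the usual workaround binds the current value as a default argument:
fns = [lambda v=v: v for v in [2, 4, 8]]
print([f() for f in fns])  # [2, 4, 8]
```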

---

Eve looks pretty cool

---

http://stackoverflow.com/questions/tagged/programming-languages http://stackoverflow.com/questions/tagged/language-design http://softwareengineering.stackexchange.com/questions/tagged/programming-languages http://softwareengineering.stackexchange.com/questions/tagged/programming-languages+language-design

---

Python "a.b".split(".") vs ".".join(["a","b"]) (compare to Ruby: "a.b".split(".") and ["a", "b"].join(".") )

https://news.ycombinator.com/item?id=12838376 claims there is a good reason
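the usual rationale, fwiw: join lives on the separator string because that way it works on any iterable of strings, not just lists (quick check):

```python
print("a.b".split("."))           # ['a', 'b']
print(".".join(["a", "b"]))       # a.b

# join accepts any iterable of strings, which a list method couldn't:
print(".".join(("a", "b")))       # a.b (tuple)
print(".".join(c for c in "ab"))  # a.b (generator)
```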

---

repsilat 8 hours ago [-]

> but why?

In Python (almost?) any unqualified identifier you see in an expression is either a builtin function or it's defined/imported somewhere else in the file. I find Ruby a little stressful by comparison (without even getting into the awful cultural approval of defining the names of things procedurally, ensuring you'll never find where they came from...)

The lack of parens on function calls also adds uncertainty for me. I know in Python you can overload `__getattr__` and introduce just as much magic, but for the most part I can be confident that `a.b` doesn't do anything too crazy. That's the general trend for me -- Python is almost relentlessly boring, with a few little surprises that stick out mostly because everything else is so plain and sensible. Ruby is just a little crazier everywhere, partly because the language is a bit more eccentric and partly because the people who use it are all Ruby programmers :-)

reply

pmontra 7 hours ago [-]

My why was about why one should design a language in that way and not think harder and make it look better, but in the early days of Python there weren't many other OO languages around, so I can understand that it could have been natural to go somewhat low level and mimic C (even with all those __s). Maybe some language went more high level and died because of that. Python passed the test of time so that (self): probably was a good idea at the end of the 80s / early 90s.

In the case of Ruby, you can't name a function (which is a method) without executing it. That's why () don't matter much. The optional () also make Ruby a good language to write DSLs. By the way, if you want to get a reference to a method, you must prepend it with a &, pretty much like in C. Ha! :-) This demonstrates that every language has its quirks. Or you can call a method by sending a message to its object like object.send(:method) using a symbol named after the method. That's more or less a reference to it, which can be metaprogrammed because symbols can be built from strings ("something".to_sym). Is that the "defining the names of things procedurally" you don't like? On the other side, I find stressful that in Python you have to enumerate all your imports, like in Java. It's the same in Ruby, but I'm almost always programming in Rails and it auto imports everything. All those imports in Django and Web2py are tiresome. I got naming clashes with Rails only a couple of times in 10 years but I missed imports many times in Django yesterday.

I learned Python after Ruby. My trajectory was BASIC in the 80s, Pascal, C, Perl, a little TCL, Java and JavaScript since their beginning, Ruby since Rails, a little Python and PHP a few years later, much less Perl now and almost nothing of what preceded it, some Elixir. I keep using Ruby and JS, I'm using Python now. I insist that compared to Pascal, Java and Ruby, Python looks illogical and unnecessarily complex, but I can understand why people with a different history can feel like Ruby is eccentric. I remember when I demoed it to some PHP developer many years ago, he said it was like writing in English, which was surprising because it is not how it looks to me, but it felt flattering.

reply

cma 6 hours ago [-]

I like the __'s, when listing methods you can immediately scan it and ignore all the built-in stuff if you want, to see what is special about this object.

In order to get "split" in ruby for a sequence, at least a one time, hopefully cleaned up by now, you end up mixing in some huge number of methods and made any method list in the console impossible to read.

reply

---

this looks really cool!!

https://github.com/hchasestevens/astpath/blob/master/README.md

---

" This quirk — that C regards all non-zero integers as true — is generally regarded as a mistake. C introduced it because machine languages rarely have direct support for Boolean values; instead, they typically expect you to accomplish such tests by comparing to zero. But compilers have improved beyond the point they were at when C was invented, and they can now easily translate Boolean comparisons to efficient machine code. What's more, this design of C leads to confusing programs, so most expert C programmers eschew the shortcut, preferring instead to explicitly compare to zero as a matter of good programming style. But such avoidance doesn't fix the fact that this language quirk often leads to program errors. Most newer languages choose to have a special type associated with Boolean values. (Python has its own Boolean type, but it also treats 0 as false for if statements.) " -- http://www.toves.org/books/cpy/
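to spell out Python's position from that parenthetical: bool is a distinct type (a subclass of int, for historical reasons), and if-statements still apply truthiness to arbitrary objects:

```python
assert isinstance(True, bool)
assert issubclass(bool, int)     # historical: True == 1, False == 0
assert True == 1 and False == 0

# 0, empty containers and None are all falsy in an if-statement:
for obj in (0, 0.0, "", [], {}, None):
    if obj:
        raise AssertionError("unexpectedly truthy")
print("all falsy")
```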

---

security:

" It's good to see classic ISAs moving away from memory protection 'rings' towards arbitrary 'zones', even if retrofitting it (e.g. SMEP/SMAP) gives horrendous APIs and a nightmare to keep checked and balanced! ;)

The Mill comes at this from the other direction, starting with 'zones' (termed "turfs" in Mill jargon) and emulating 'rings' (if your kernel wants that) with overlapping access rights between turfs.

On the Mill you can have lots of turfs that may or may not have disjoint memory access, and you move between turfs synchronously with a special kind of indirect function call termed a "portal". There are provisions for passing across specific transient rights to memory in these calls, so you can pass a pointer to a buffer and other aspects that facilitate the 'usercopy()' mentioned in the article but with full hardware rather than software protection.

We have tightened the portal/turf concept extensively since the Security talk http://millcomputing.com/docs/#security but it does give a gentle high-level intro to turfs and portals.

These days, we have facilities for passing buffers without exposing memory pointers and other niceties to make it easy to write correct yet efficient code. They can now all be made public but oh so little time, and I'm hoping to get a white paper out about it by the end of this month. Watch this space ;)

Happy to elaborate if anyone has Mill or general questions :)

PS an example of 'zoning' is http://elfbac.org/ , which is not getting enough attention. It's another way to facilitate memory separation, albeit by abusing the classic MMU and with inherent runtime cost. Elfbac is userspace, but the hardware could be abused to protect kernels on classic CPUs too. Well worth everyone reading :) "

" ELFbac policy captures the programmer's intention for code and data sections: for code units not meant to access certain data sections (or pages), access is trapped. Whenever a code section has exclusive relationships with some data sections (such as between a cryptographic library and crypto keys or certificates), or may only access a data section in a particular phase of the process' runtime (such as initialization, authentication or data handling), these relationships are enforced. Standard ELF ABI already provides over 30 semantically and intentionally different kinds of sections for the runtime, and the programmer may create custom sections (e.g., the GCC toolchain creates custom sections with the attribute __section__(name), a GNU extension). "

http://elfbac.org/

---

fizzbatter 7 days ago [-]

1. Channels without panics. Channels are awesome, but Go's design of them means that you have to learn special ways to design your usage of channels so that it does not crash your program. This is asinine to me. So much type safety exists in Go, and yet it's so easy for a developer to trip over channels and crash their program.

2. Proper error handling. I love error checking - i love it in Rust especially. It's very explicit, and most importantly, very easy to check the type of things. Recently i was reading an article about Go errors[2] and it made me realize how messy Go errors are. There are many (three in that article) ways to design your error usage, and worst of all your design doesn't matter because you have to adapt to the errors returned by others. There is no sane standard in use that accounts for the common needs of error checking.

3. Package management. It's a common complaint, i know. But Rust & Cargo is so excellently crafted.. Go just got it wrong. Recently i've been using Glide, and while it's a great tool, there is only so much it can do. It's a tool built to try and put some sanity in a world where there is next to no standardization. We need a Go package manager.. hell, just copy Cargo.

4. Enums. My god, Enums. Such a simple feature, but so insanely welcome and useful in Rust.

You'll note that i didn't list Generics. I know that's high on peoples list, but not mine. To each their own.. please don't start a holy war. (this is likely due to me using Go for ~4 years. I'm quite comfortable without Generics)

[2]: http://dave.cheney.net/2016/04/27/dont-just-check-errors-han...

---

quotemstr 10 days ago [-]

> But empirically, it doesn't happen as often.

It's sad that people use choice-of-language as a proxy for choice-of-execution-strategy (interpreted? JITed?), choice-of-allocation-strategy, choice-of-linking-strategy, choice-of-packaging, and so on. All of these factors should be orthogonal. By linking them, we create a lot of inefficiency by fragmenting our efforts.

AFAICT, C++ is the only language that's really been successful at being multi-paradigm.

reply

(bayle: at least for Oot, i disagree with this (Oot is going to be opinionated along those dimensions), but it gives a list of things to think about in a language)

---

http://pact-language.readthedocs.io/en/latest/

---

random, someone else's idea for a dynamic context variable in Python: https://github.com/mitsuhiko/python-logical-call-context/blob/master/README.md

---

hey, Python protects class-instance variables prefixed with __ — but it's name mangling (__x becomes _ClassName__x), not real enforcement, and it isn't new in Python 3!
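what actually happens is name mangling: __x inside class C is rewritten to _C__x, so it is still reachable from outside (the 'Account' name below is just for illustration):

```python
class Account:
    def __init__(self):
        self.__balance = 100  # stored as _Account__balance

a = Account()
try:
    a.__balance  # no mangling outside the class body -> AttributeError
except AttributeError:
    print("no attribute __balance")

print(a._Account__balance)  # 100 -- mangled name works; not real privacy
```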

---

random fun language:

https://states-language.net/spec.html

see also https://news.ycombinator.com/item?id=13094107

---

what data is stored for a process?:

https://drawings.jvns.ca/process/

---

https://drawings.jvns.ca/distributed-systems/

---

some random problems that tend to occur in distributed systems: http://wiki.c2.com/?DistributedTransactionsAreEvil

---

" Erlang was built as a concurrent language for fault tolerance, not performance. The best example of a trade-off I can think of is the preemptive scheduling: Erlang switches between processes very often, which has a cost in performance but ensures the system stays responsive even if some tasks are doing intensive work (or even stuck in an infinite loop). Every function call, allocation, etc. increments a counter which is used to decide when a process is scheduled out, so that's significant overhead you pay to get a more predictable system.

Some other examples are live code loading, which means you can upgrade a system without restarting or interrupting it, the extensive tracing and statistics, etc. All these little things make it very nice to build and maintain a long-running service, but add up to make the VM slower. "-- https://news.ycombinator.com/item?id=13080920

---

deniska 2 days ago [-]

Be careful with named tuples:

    >>> from collections import namedtuple
    >>> Time = namedtuple('Time', ['hours', 'minutes'])
    >>> Point = namedtuple('Point', ['x', 'y'])
    >>> t = Time(hours=7, minutes=40)
    >>> p = Point(x=7, y=40)
    >>> p == t
    True

They're still tuples. They're meant more as a tool for making an already existing tuple mess slightly more manageable rather than creating a new mess.

For making plain data objects take less boilerplate to create have a look at attrs: https://attrs.readthedocs.io

reply

SEJeff 2 days ago [-]

Strong +1 for attrs. It is a really nice library and quite easy to use.

reply

---

http://www.w3schools.com/js/js_statements.asp thinks the most important js statements are:

break continue debugger do ... while for function if ... else return switch try ... catch var

---

http://www.w3schools.com/js/default.asp is probably a good tutorial to model ours after, esp. the first few sections, although they are targeted a little more towards people new to programming than i am aiming for. The official Python tutorial is, of course, the other one to model on.

---

"He defines 5 primitive operations (atom, eq, cons, car, and cdr) along with a conditional expression. It also assumes the ability to define functions" [4]

"`lambda` and function application alone are Turing-complete, " -- [5]

https://dev.to/ericnormand/the-idea-of-lisp has a list of cool things about lisp (note that it also has some inaccuracies, debunked in [6]):

---

Animats 9 hours ago

parent flag favorite on: Four years with Rust

Rust as a language is now realizing the benefits of borrow checking. As the article points out, the syntax doesn't have to distinguish between move and assign. The borrow checker will catch a reuse of something already moved away. This turns out to be effective enough in practice that the syntax distinction isn't necessary. That wasn't obvious up front.

Not having exceptions tends to generate workarounds which are uglier than having exceptions. Rust seems to be digging itself out of that hole successfully. Early error handling required extremely verbose code. The "try!()" thing was a hack, because an expression-valued macro that sometimes does an invisible external return is kind of strange. Any macro can potentially return, which is troublesome. The "?" operator looks cleaner; it's known to affect control flow, and it's part of the language, so you know it does that. Once "?" is in, it's probably bad form for an expression-valued macro to do a return.

There's still too much that has to be done with unsafe code. But the unsafe situations are starting to form patterns. Two known unsafe patterns are backpointers and partially initialized arrays. (The latter comes up with collections that can grow.) Those are situations where there's an invariant, and the invariant is momentarily broken, then restored. There's no way to talk about that in the language. Maybe there should be. More study of what really needs to be unsafe is needed. -- https://news.ycombinator.com/item?id=13232002

---

"That said, there are actually drawbacks of Rust compared with Go, IMHO. When facing a moderately large project written by others, the ergonomics for diving into the project is not as smooth as Go. There is no good full-source code indexer like cscope/GNU Global/Guru for symbol navigation across multiple dependent projects. Full text searching with grep/ack does not fill the gap well either since many symbols, with their different scopes/paths, are allowed to have the same identifier without explicitly specifying the full path. That makes troubleshooting/tracing a large, unfamiliar codebase quite daunting compared with Go. "

---

hardwaresofton 9 days ago [-]

The Haskell community is incredibly blessed to have Stephen Diehl around. This is an excellent write up on so many of the things that have happened in the Haskell community this year, not to speak of his other written guides and in-depth explanations which are amazing.

reply

fegu 9 days ago [-]

I came here after reading the article to say something like this. I try to follow the Haskell community, reading articles etc, but I had missed a lot of this stuff.

reply

TheAceOfHearts 9 days ago [-]

I started learning Haskell this year.

One of the small bumps I had was getting my environment setup. Based on my experiences with Ruby and Node, I knew I'd want to have a tool for managing the language's version and dependencies per-project, so I ended up going with stack [0]. Arriving at that decision required a bit more reading than with other languages. Additionally, while setting up stack, I thought their docs were too long. They'd benefit from being broken up into more pages, instead of pushing so much all at once. With that said, the information presented in the docs is actually quite clear and well written.

Looking at the Downloads section [1] on the Haskell website, it looks like they've improved the docs since I last visited, but it's still a bit confusing. What's the point of Haskell Platform? It looks like it includes stack, which already covers all my requirements. Maybe it'd be useful to include a "why" section for each choice, to provide some examples of scenarios in which you might go with one choice over the other. Telling me what I'm getting doesn't give me any meaningful information if I don't know why I'd want that in the first place. I think there's too much information up-front, even though people landing there probably aren't equipped to make use of it. Why would someone pick the Haskell Platform option or the minimal install option?

While reading Learn You a Haskell, I used Haskell for Mac [2] for poking around. It's pretty great, although I didn't end up purchasing it, as I'm not doing anything that would benefit from using it.

Something I liked about Elixir is that you can just read their getting started docs and pick up Phoenix framework to get a web app up and running. That gives you a nice base on which to gradually build upon as you learn. Does anyone have a similar suggestion for Haskell?

[0] https://docs.haskellstack.org/en/stable/README/

[1] https://www.haskell.org/downloads

[2] http://haskellformac.com/

reply

bojo 9 days ago [-]

Environment setup has been heavily discussed[1] in the community and is definitely a pain point when getting started with Haskell. Unfortunately there are several sides with differing opinions on how to bootstrap a user and get them using Haskell, although the new Downloads page seems to be a good first step towards achieving a more unified goal.

The opinions kind of break down as:

1. Simply get GHC on their machine - Typically academic instructors trying to get their students started with low overhead. They simply want GHC and GHCi available so an assignment can compile, or code can be tested in the repl.

2. Install Haskell Platform - This has been strongly supported by what I would call the older Haskell community, although admittedly share your confusion as to its point. This seems to target point-and-click users and give them every tool in the ecosystem so that when they reference the random blogs/documentation out there they aren't confused by missing a command.

3. Use Stack - This has been heavily pushed by people trying to help Haskell grow in the programming industry. It's a fantastic tool and I recommend everyone use it if they plan on doing complex projects, especially in a team setting.

My hope is that over the next year or so we'll see more stack adoption, ideally because there are more consistent tutorials/guides/documentation published which target it as the basic tool to start developing Haskell in.

[1] https://www.reddit.com/r/haskell/comments/50prvg/haskellcomm...

reply

---

dllthomas 9 days ago [-]

I think types are a bit mediocre as documentation. But as you say, mediocre correct documentation beats the pants off of any sort of incorrect documentation.

Moreover, it's documentation that's immediately available for the expression I just assembled at the repl!

None of which is to say most of us wouldn't love some good, correct documentation. But that takes a lot of effort, and I can see why priorities wind up being elsewhere, given that it's a smaller improvement over what's available for free than in other languages.

Worked examples are actually low-hanging fruit that we really should be making available, given that we can mechanically ensure correctness. Come to think of it, I wonder if this could just be a matter of surfacing existing end-to-end tests somewhere visible.

reply

bojo 9 days ago [-]

I'm also in the "I think there is plenty of documentation" camp, so this topic tends to confuse me when it comes up.

I suppose what people who are curious about Haskell find lacking are definitive language guides like the Rust Book, Effective Go, and the like?

reply

zoul 9 days ago [-]

I am one of the people who are interested in Haskell and find its documentation lacking. I don’t miss comprehensive language books, I miss module documentation. It almost looks as if the Haskell community has something against examples in documentation. I stress that the types are not enough. Contrast that with Elm, where I was able to write a working app in a day or so. But Elm is written off quite harshly in the post, so I am again left with the feeling that Haskell may not be the language meant for people like me.

reply

---

some elements of a formatting convention:

"spacing, variable naming conventions, line endings, spaces instead of tabs"

---

some C safety tips

https://github.com/dovecot/core/blob/master/doc/securecoding.txt

---

on the difficulty of making an immutable language on top of RPython (eg the language Pixie, a functional Lisp in RPython):

_halgari 18 hours ago [-]

I'm the original author of pixie, and yeah, I'm a bit surprised to see this hit HN today.

It should be mentioned that I put about a year of work into this language, and then moved on about a year or so ago. One of the biggest reasons for my doing so is that I accomplished what I was looking for: a fast lisp that favored immutability and was built on the RPython toolchain (same as PyPy). But in the end the lack of supporting libraries and ecosystem became a battle I no longer wanted to fight.

Another goal I had was to see how far I could push immutability into the JIT. I learned a lot along the way, but it turns out that the RPython JITs aren't really that happy with VMs that are 99.99% pure. At one point I had an almost 100% immutable VM running for Pixie... as in each instruction executed created a new instance of the VM. It worked, but the JIT generated by RPython wasn't exactly happy with that execution model. There was so much noise in the maintenance of the immutable structures that the JIT couldn't figure out how to remove them all, and even when it could the JIT pauses were too high.
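
As a toy illustration of that execution model (my own Python sketch, not Pixie's actual RPython code): every instruction returns a fresh VM value instead of mutating in place, and the resulting allocation churn is exactly what the JIT then has to eliminate.

```python
from typing import NamedTuple

class VM(NamedTuple):
    """Immutable VM state: an operand stack and a program counter."""
    stack: tuple
    pc: int

def push(vm: VM, value) -> VM:
    # Every instruction builds a brand-new VM rather than mutating the old one.
    return VM(stack=vm.stack + (value,), pc=vm.pc + 1)

def add(vm: VM) -> VM:
    *rest, a, b = vm.stack
    return VM(stack=tuple(rest) + (a + b,), pc=vm.pc + 1)

vm0 = VM(stack=(), pc=0)
vm3 = add(push(push(vm0, 2), 3))
# vm0 is untouched; vm3 is a distinct snapshot with 5 on the stack.
```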

So anyways, after pouring 4 hours a day of my spare time into Pixie for a full year, I needed to move on.

Some other developers have commit rights and have pushed it along a bit, but I think it's somewhat of a language looking for a use case.

And these days ClojureScript on Node.js could probably be made to handle most people's needs.

reply

samth 17 hours ago [-]

As a somewhat different data point, we've been developing pycket, an implementation of Racket on top of rpython, for the past 3 years, and while it faces many of the same challenges, we've been very happy with the results. The JIT can remove almost all of the intermediate data structures caused by the functional nature of the language, and we support tail calls and first-class continuations. Overall, pycket is almost as fast as Chez Scheme on average, and faster than every other Scheme system we've compared with.

reply

_halgari 14 hours ago [-]

Yes! Pycket is a great language, I used your paper as a reference more than once while working on Pixie.

reply

---

mikaelj 5 hours ago [-]

As always, for all these small languages (MicroPython comes to mind, but even "real-world" languages such as Lua) - unless they grow a debugger, they'll always be silly toy languages nobody can use for serious work.

I'd settle for a gdb backend, really. But printf-debugging is unacceptable.

reply

hkjgkjy 5 hours ago

parent [-] on: Pixie – A small, fast, native Lisp

Clojure developer here, was interested in Pixie since you announced it.

The only reason why I have not tried it out yet is

It's lazy and dumb, but that is the truth. So for me the only reason for not being a Pixie developer full time are those 2 UX things.

Thanks for your work and keep it up :-).

---

reply

grayrest 20 hours ago [-]

If this (((startup time))) is a major concern for you, there are several clojurescript projects--Planck and Lumo--that have very fast startup. JVM Clojure is a bit faster than jsc/node clojurescript (~2x IIRC) but that shouldn't matter much for scripting/automation and lambda functions or equivalent on other clouds.

reply

---

" Don’t declare variables to be instances of particular concrete classes. Instead, commit only to an interface defined by an abstract class.

This point is profound and if it isn’t already something you religiously practice, I suggest you do some more research on this topic. Coupling between types directly is the hardest, most pernicious form of coupling you can have and thus will cause considerable pain later.

Consider this code example:

public string GetLastUsername() {
    return new UserReportingService().GetLastUser().Name;
}

As you can see, our class is directly new()’ing up a UserReportingService. If UserReportingService changes, even slightly, so must our class. Changes become more difficult now and have wider-sweeping ramifications. We have now just made our design more brittle and therefore, costly to change. Our future selves will regret this decision. Put plainly, the “new” keyword (when used against non-framework/core-library types) is potentially one of the most dangerous and costly keywords in the entire language — almost as bad as “goto” (or “on error resume next” for the VB/VBScript veterans out there).

What, then, can a good developer do to avoid this? Extract an interface from UserReportingService (-> IUserReportingService) and couple to that. But we still have the problem that if my class can’t reference UserReportingService directly, where will the reference to IUserReportingService come from? Who will create it? And once it’s created, how will my object receive it? This last question is the basis for the Dependency Inversion principle. Typically, dependencies are injected through your class’ constructor or via setter methods (or properties in C#).

...

Creational Patterns Considered Obsolete

the Abstract Factory and Builder pattern implementations, to name two, became increasingly complicated and convoluted ... Inversion of Control Containers

To combat the increasing complexity and burden of managing factories, builders, etc, the Inversion of Control Container was invented. It is, in a sense, an intelligent and flexible amalgamation of all the creational patterns. It is an Abstract Factory, a Builder, has Factory Methods, can manage Singleton instances, and provides Prototype capabilities. It turns out that even in small systems, you need all of these patterns in some measure or another. As people turned more and more of their designs over to interface-dependencies, dependency inversion and injection, and inversion of control, they rediscovered a new power that was there all along, but not as easy to pull off: composition.

" -- [7]

dunno what they mean by IoC container. https://martinfowler.com/articles/injection.html suggests that what they really mean is dependency injection? The thing being 'contained' by the 'IoC container' appears to just be what i'd call the 'plugin registry' of what to inject in this particular program.
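
As a rough illustration of that 'plugin registry' reading (a toy container of my own, not any real IoC library): the container maps names to factories and does all the constructing, so application classes never new() their dependencies directly.

```python
class Container:
    """Toy IoC container: maps names to factories and wires them together."""
    def __init__(self):
        self._factories = {}

    def register(self, name, factory):
        # Each factory receives the container so it can resolve its own deps.
        self._factories[name] = factory

    def resolve(self, name):
        return self._factories[name](self)

class Engine:
    pass

class Car:
    def __init__(self, engine):
        self.engine = engine

container = Container()
container.register("engine", lambda c: Engine())
container.register("car", lambda c: Car(c.resolve("engine")))

car = container.resolve("car")  # the container does all the new()-ing
```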

---

https://martinfowler.com/articles/injection.html talks about different mechanisms for dependency injection, and a similar pattern called service locator. Dependency injection is when you have some class A that calls (and possibly even constructs) class B, but you want to generalize it so that class A can use any class in place of class B; so what you do is you have class A take the class B as a parameter in some form or another, and then you have some other registry that provides the 'B' parameter value. In dependency injection the registry calls class A in order to 'inject' the 'B' parameter value. Since the registry calls class A (rather than class A calling the registry), this is an instance of 'inversion of control'; so 'dependency injection' is a special case of 'inversion of control' in which the purpose of the inversion of control is to provide the 'B' parameter value (the 'dependency'). The methods described by the article are constructor injection, setter injection, and interface injection.

The article then compares these to service locators. With service locators, A has a reference to the registry and calls the registry to get B. An advantage of service locators is that they are conceptually simpler. A disadvantage is that, in order to see what dependencies A has, you have to look for all calls to the registry, rather than just looking at static metadata like constructor parameters/setters defined/interfaces implemented. Another disadvantage is that, with service locators, assuming that one big service locator is used for many services throughout a program, the service locator could be too 'fat' and hard to mock for testing of A, whereas with dependency injection the person writing A can control the complexity of everything needed for testing, although if the designer of the service locator pays attention to this it can be avoided.
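
A toy sketch of that contrast (my own code, not Fowler's): with a locator the dependency is hidden in the method body, while with injection it is visible in the constructor signature.

```python
_registry = {}

def register(name, service):
    _registry[name] = service

def locate(name):
    return _registry[name]

class StubMailer:
    def send(self, msg):
        return "sent: " + msg

class LocatorStyle:
    # Dependency is hidden: you must read the body to learn it needs "mailer".
    def notify(self, msg):
        return locate("mailer").send(msg)

class InjectedStyle:
    # Dependency is visible in the constructor signature, and trivially mocked.
    def __init__(self, mailer):
        self._mailer = mailer

    def notify(self, msg):
        return self._mailer.send(msg)

register("mailer", StubMailer())
```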

Fowler prefers Service Locators because they are simpler, except when building classes to be used in multiple applications, in which case he prefers Dependency Injection. Within Dependency Injection, he prefers Constructor Injection, because it's simpler, but Setter Injection if there is some reason to use it.

---

" Many forms of ceremony come from unnecessary special cases or limitations at the language level, e.g.

factory patterns (Java)

dependency injection (Java)

getters and setters (Java)

annotations (Java)

verbose exception handling (Java)

special syntax for class variables (Ruby)

special syntax for instance variables (Ruby)

special syntax for the first block argument (Ruby) " -- [8]

---

http://www.slideshare.net/extempore/keynote-pnw-scala-2013

---

https://blog.golang.org/open-source

" In Go, we have explicitly tried not to solve everything. Instead, we've tried to do just enough that you can build your own custom solutions easily. ... In general, if we have 100 things we want Go to do well, we can't make 100 separate changes. Instead, we try to research and understand the design space and then identify a few changes that work well together and that enable maybe 90 of those things. We're willing to sacrifice the remaining 10 to avoid bloating the language ... Next, channels and goroutines. How should we structure and coordinate concurrent and parallel computations? Mutexes and condition variables are very general but so low-level that they're difficult to use correctly. Parallel execution frameworks like OpenMP? are so high-level that they can only be used to solve a narrow range of problems. Channels and goroutines sit between these two extremes ... Next, types and interfaces. Having static types enables useful compile-time checking, something lacking in dynamically-typed languages like Python or Ruby. At the same time, Go's static typing avoids much of the repetition of traditional statically typed languages, making it feel more lightweight, more like the dynamically-typed languages. ... Go's interfaces are a key part of that. In particular, omitting the ``implements declarations of Java or other languages with static hierarchy makes interfaces lighter weight and more flexible. Not having that rigid hierarchy enables idioms such as test interfaces that describe existing, unrelated production implementations. ... Go's testing package is not meant to address every possible facet of these topics. Instead, it is meant to provide the basic concepts necessary for most higher-level tooling. Packages have test cases that pass, fail, or are skipped. Packages have benchmarks that run and can be measured by various metrics. ... Next, refactoring and program analysis. 
Because Go is for large code bases, we knew it would need to support automatic maintenance and updating of source code. We also knew that this topic was too large to build in directly. But we knew one thing that we had to do. In our experience attempting automated program changes in other settings, the most significant barrier we hit was actually writing the modified program out in a format that developers can accept.

In other languages, it's common for different teams to use different formatting conventions...gofmt... Gofmt enabled gofix, goimports, eg, and other tools. ... Last, building and sharing software. In the run up to Go 1, we built goinstall, which became what we all know as "go get". That tool defined a standard zero-configuration way to resolve import paths on sites like github.com, and later a way to resolve paths on other sites by making HTTP requests. This agreed-upon resolution algorithm enabled other tools that work in terms of those paths, most notably Gary Burd's creation of godoc.org. In case you haven't used it, you go to godoc.org/the-import-path for any valid "go get" import path, and the web site will fetch the code and show you the documentation for it. A nice side effect of this has been that godoc.org serves as a rough master list of the Go packages publicly available. All we did was give import paths a clear meaning. ... You'll notice that many of these tooling examples are about establishing a shared convention.

---

"

neilparikh 17 hours ago [-]

> Awesome progress that brought us Java, SOAP, C++, Javascript-on-the-server, and a slew of other tech some of us want to stay far, far away from.

When people talk about progress from PL design, they aren't talking about any of those things. If you notice, all of those things were made in industry, not in PL research/academia. Not to mention that those languages are also ones that ignored the PL design progress! (although C++ finally seems to be adding some ideas from PL research in C++17)

They're talking about things like parametric polymorphism, dependent types, modules, macros etc. (I've mostly been reading about work in the types/ML family languages, but I'm sure there's progress been made outside that as well)

reply

josch 1 day ago [-]

I guess Scala being "the most expressive typed language on the JVM" is true in the sense that it has a ton of features (OOP, FP, Exceptions, null Java backwards compatibility, etc.), but that's just too many features for a coherent language.

reply

oelang 1 day ago [-]

It's funny that both Java and C# seem to be picking up many Scala features in their latest and future versions, like traits, lambdas, tuples, pattern matching, case classes, closed hierarchies, declarative generics... Your language isn't incoherent if you can build features on top of each other, which is exactly what Scala does.

reply

"

---

Elixir does something like my 'local mutation only' thing: " A Note About Immutable Variables

Elixir is a functional language which means that variables can’t change value. You can, however, reuse variable names. If you set x = 2 and then set x = 3, Elixir doesn’t mind.

iex> x = 2
2
iex> y = 3
3
iex> x = y
3
iex> x
3
iex> ^x = 2  # force no reusing of variables
** (MatchError) no match of right hand side value: 2

Behind the scenes, though, there’s still a version of x that is set to 2. For instance, if you pass x to another function that’s being run concurrently and then change x, the concurrent function will still have the original value of x. You can force variables to not be reused by using ^, but ^x = 3 would still match because x actually is 3. "
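
A rough Python analogue of that last point (my own sketch, not Elixir): a value already handed to another thread is unaffected when the name is later rebound.

```python
import threading

results = []

def worker(value):
    # Sees the value as it was when the thread was constructed.
    results.append(value)

x = 2
t = threading.Thread(target=worker, args=(x,))  # x's current value captured here
x = 3  # rebinds the name; the thread still holds the original 2
t.start()
t.join()
# results holds [2], not [3]
```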

---

hot reloading in Erlang/BEAM/Elixir:

" Hot reloading

Elixir also comes with another deployment option that the BEAM makes possible. It’s a little bit more complex, but certain types of applications can greatly benefit from it. It’s called a “hot reload” or “hot upgrade.”

Distillery goes out of its way to make this easy by just letting you tack on the --upgrade flag to your release build command, but that still doesn’t mean you should always use it.

Before talking about when you’d use it, we need to understand what it does.

Erlang was developed to power phone systems (OTP stands for Open Telecom Platform) and currently powers about half of them on the planet. It was designed to never go down, which is a complicated problem for deployments when you have active phone calls running through the system.

How do you deploy without disconnecting everybody on the system? Do you stop new traffic from coming to that server and then politely wait for every single call to finish?

The answer is no, and that’s where hot reloads come in.

Because of the heap isolation between processes, a release upgrade can be deployed without interrupting existing processes. The processes that aren’t actively running can be replaced, new processes can be deployed side by side with currently running processes while absorbing the new traffic, and the running processes can keep on trucking until they finish their individual jobs.

That allows you to deploy a system upgrade with millions of calls and let the existing calls finish on their own time without interruption. Imagine replacing a bunch of bubbles in the sky with new bubbles... that’s basically how hot reloading works; the old bubbles hang around until they pop.

Understanding that, we can potentially see some scenarios where it might come in handy:

    A chat system utilizing websockets where users are connected to specific machines
    A job server where you may need to deploy an update without interrupting jobs in progress
    A CDN with huge transfers in progress on a slow connection next to small web requests

For websockets specifically, this allows deployments to a machine that might have millions of active connections without immediately bombarding the server with millions of reconnection attempts or losing any in-progress messages. That’s why WhatsApp is built on Erlang, by the way. Hot reloading has been used to deploy updates to flight computers while planes were in the air.

The drawback is that hot reloading is more complicated if you need to roll back. You probably shouldn’t use it unless you have a scenario where you really do need it. Knowing you have the option is nice. " -- [9]

" Clustering is the same way; you don’t always need it, but it’s indispensable when you do. Clustering and hot reloads go hand in hand with a distributed system. " (by clustering they mean that since data in Erlang is immutable, you can more easily do RPC) -- [10]

---