notes-computer-programming-programmingLanguageDesign-prosAndCons-haskellProsAndCons

my personal opinion: there's a lot to like about Haskell. I list some of the cons at [1].


toread:

http://news.ycombinator.com/item?id=4721550

http://neugierig.org/software/blog/2011/10/why-not-haskell.html

---

"

> Haskell and Ruby have continuations

I wonder where this meme of Haskell having continuations started. It pops up here and there, like some mystical incantation that makes the speaker closer to the gods of programming languages.

Haskell has precisely the same continuations JS has, it's just capable of abstracting CPS away. Somewhat.

See http://hackage.haskell.org/pac... . "

"

DanWaterworth 4 days ago

Haskell doesn't have first-class continuations, it has syntactic sugar that allows you to write continuation passing code in a direct style.

"
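
to pin down what that syntactic sugar is: the standard Cont monad lets you write CPS code in direct style. a minimal sketch (toy functions of my own, using Control.Monad.Cont from mtl, not code from the thread):

    import Control.Monad (when)
    import Control.Monad.Cont (Cont, runCont, callCC)

    -- CPS code written in direct style: the monad hides the continuations.
    addCC :: Int -> Int -> Cont r Int
    addCC x y = return (x + y)

    -- callCC captures an escape continuation, but only within this
    -- computation; there is no program-wide call/cc as in Scheme.
    safeDiv :: Int -> Int -> Cont r (Maybe Int)
    safeDiv _ 0 = return Nothing
    safeDiv x y = callCC $ \exit -> do
        when (y == 1) (exit (Just x))  -- early exit via the captured continuation
        return (Just (x `div` y))

    main :: IO ()
    main = do
        print (runCont (addCC 2 3) id)    -- 5
        print (runCont (safeDiv 9 3) id)  -- Just 3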

--- http://www.quora.com/Haskell/What-are-the-main-weaknesses-of-Haskell-as-a-programming-language

" What are the main weaknesses of Haskell as a programming language?

Jesse Tov, writes Haskell all day.

Haskell’s main technical weakness compared to popular, mainstream languages is probably the lack of a comprehensible cost model. Essentially, laziness makes it difficult to reason about when computation happens, and it can result in serious space leaks. While GHC can produce very efficient code, convincing it to do so in cases where that isn’t the default behavior is a dark art with few real masters. I’ve been writing Haskell for over a decade and using it as my main language at work for at least five years, and I’ve never learned to do this beyond simple cases—mainly because I don’t write code that needs to run efficiently.

There are some other things that are easy in many other languages but difficult in Haskell. For example, purity makes it slightly painful to discover and propagate run-time configuration information. In most languages it’s easy to read a configuration file at startup and store the result in a global variable, but in Haskell that requires dirty tricks.

Haskell has some other weaknesses compared to other academic/research languages. For example, its module system (if you can even call it that) is clearly inferior to Standard ML’s and OCaml’s module systems. Its compile-time meta-programming facility, Template Haskell, is occasionally useful, but it’s a poor substitute for a real, state-of-the-art macro system. And of course, there are many things that some languages can do but others can’t. For example, if you want to prove your programs correct, Haskell’s type system, even with recent extensions in that direction, won’t let you prove as much as Coq or Agda—but that’s asking a language to do something that it wasn’t designed for, so I don’t really consider it a valid criticism.

From a theoretical perspective, some people have complained that call-by-need makes Haskell harder to reason about. It’s true that Haskell’s function space is a bit messier than what you get in a language that’s both pure and total, but I don’t buy that it’s worse than your standard impure, call-by-value language—it’s just different.

Finally, Haskell does suffer some social weaknesses by being weird and researchy. Hiring large numbers of Haskell programmers is going to be difficult (though the ones you do find are likely to be very good). Because it’s so different from the languages that most people know, it presents a potentially steep learning curve. And of course, the community, while helpful and prodigious, is tiny compared to more popular languages. The variety and quality of libraries available on Hackage is impressive given the number of Haskell programmers out there, but it cannot compete with Java for mindshare. Finally, because Haskell is still very research-oriented, it changes quickly and sometimes unpredictably. I’ve found that my Haskell code breaks and needs to be updated not only for major GHC version increments but often for minor versions as well. This is annoying, but it’s a price I’m willing to pay to program in a beautiful language with top-notch abstraction facilities.
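
the "dirty trick" usually meant here is a top-level IORef made with unsafePerformIO. a sketch of the idiom (my own example, not from the answer):

    import Data.IORef (IORef, newIORef, readIORef, writeIORef)
    import System.IO.Unsafe (unsafePerformIO)

    data Config = Config { verbose :: Bool }

    -- The trick: a global mutable cell created outside IO.
    -- NOINLINE is required, or GHC may duplicate the cell.
    {-# NOINLINE configRef #-}
    configRef :: IORef Config
    configRef = unsafePerformIO (newIORef (Config False))

    main :: IO ()
    main = do
        -- pretend this value was parsed from a config file at startup
        writeIORef configRef (Config True)
        cfg <- readIORef configRef
        print (verbose cfg)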

--

http://www.yesodweb.com/blog/2011/09/limitations-of-haskell

---

 "Namespace clashing, particularly for record fields

    data Record = Record { a :: String }
    data RecordClash = RecordClash { a :: String }

Compiling this file results in:

    record.hs:2:34:
        Multiple declarations of `Main.a'
        Declared at: record.hs:1:24
                     record.hs:2:34 "

solution ideas:

http://ghc.haskell.org/trac/haskell-prime/wiki/TypeDirectedNameResolution

however a comment on http://www.yesodweb.com/blog/2011/09/limitations-of-haskell notes that typeclasses are a better solution (so we should have everythingIsATypeclass, just as I've been saying for jasper)
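
a sketch of the typeclass approach (toy example of mine, not from the comment): one class per field name, one instance per record:

    data Person  = Person  { personName  :: String }
    data Company = Company { companyName :: String }

    class HasName a where
        name :: a -> String

    instance HasName Person  where name = personName
    instance HasName Company where name = companyName

    main :: IO ()
    main = do
        putStrLn (name (Person "Ada"))   -- no clash: `name` is overloaded
        putStrLn (name (Company "Acme"))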

---

history via Simon Peyton Jones

http://research.microsoft.com/en-us/um/people/simonpj/papers/haskell-retrospective/index.htm

http://research.microsoft.com/en-us/um/people/simonpj/papers/history-of-haskell/

typeclasses, also with a useful intro to how the compiler implements typeclasses http://research.microsoft.com/en-us/um/people/simonpj/papers/haskell-retrospective/ECOOP-July09.pdf
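
the gist of the dictionary translation those slides cover, in miniature (a standard sketch of mine, not code from the slides): a class becomes a record of functions, an instance becomes a value of that record, and a constraint becomes an extra argument:

    -- class MyShow a where myshow :: a -> String
    data MyShowDict a = MyShowDict { myshowD :: a -> String }

    -- instance MyShow Bool where ...
    showBoolDict :: MyShowDict Bool
    showBoolDict = MyShowDict (\b -> if b then "True" else "False")

    -- f :: MyShow a => a -> String  becomes a function taking a dictionary
    f :: MyShowDict a -> a -> String
    f d x = "value: " ++ myshowD d x

    main :: IO ()
    main = putStrLn (f showBoolDict True)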

vs erlang http://research.microsoft.com/en-us/um/people/simonpj/papers/haskell-retrospective/Haskell-Erlang-Jun09.pdf

http://research.microsoft.com/en-us/um/people/simonpj/papers/haskell-retrospective/HaskellRetrospective-2.pdf

---

" The second problem with Monads is related to their great strength - they are synonymous with Domain Specific Languages. The mission statement for domain specific languages is stupendous - don't write a complex program to solve your problem, write a simple program in a programming language you have designed solely for that task. I've already mentioned the best use of DSL/Monads - Haskell's parsec module. With Parsec the Haskell function to parse a file is identical to the Backus Naur Form description of the parse grammar - how much clearer could it be? They say that imitation is the highest form of flattery, and every parser function I have written outside of Haskell since meeting Parsec has been a shoddy facsimile of Parsec in the chosen language. The success of Parsec and its ilk has filled Hackage (the Haskell module repository) with hundreds of DSLs covering any task you care to mention.

Yes, literally hundreds of them. Hundreds of little programming languages, one for BNF parsing, one for parsing xml, one for creating PDFs, all perfectly suited to their task. Each is different, and each has its own learning curve. Consider a common task such as parsing an XML file, mutating it according to some JSON you pulled out of a web API and then writing to a PDF. In Ruby or a similar object oriented language you expect to find three APIs/gems, all with a similar object oriented syntax, but for three Haskell DSLs designed for three different tasks to share syntax implies that their authors failed to optimise them for those tasks, hence instead of five minutes with API documentation you have hours of DSL tutorials ahead of you before you can begin work. It is this subtle difference that plants Haskell on the academic side of the town and gown divide. With Haskell the focus is on the language, not the problems you are solving with the language. "
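
for a taste of the Parsec point, a tiny parser next to its grammar (my own toy example with the parsec package, not from the quoted article):

    import Text.Parsec
    import Text.Parsec.String (Parser)

    -- number ::= digit+
    -- pair   ::= '(' number ',' number ')'
    number :: Parser Int
    number = read <$> many1 digit

    pair :: Parser (Int, Int)
    pair = do
        _ <- char '('
        x <- number
        _ <- char ','
        y <- number
        _ <- char ')'
        return (x, y)

    main :: IO ()
    main = print (parse pair "" "(12,34)")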

---

probs with laziness, esp. lazy I/O: http://users.aber.ac.uk/afc/stricthaskell.html

--

"

In Haskell it’s easy to parallelize code, but performance is not only about using your CPU cores. And it’s hard to write code with performance comparable to C++ code.

Today CPU silicon is invested principally in: cache.

You solve one problem, but you regress in others: memory usage.

Haskell promotes a coding style where your data is sparse (terrible data locality), and even basic data types are boxed.

Use a vector? Not idiomatic Haskell! You have to use some tree! So your code will execute on all your cores, but all your cores will spend the majority of their time on frontend stalls. Not a great speedup. "

--

If people write custom allocators, it is not because they are fools.

And bad cache usage in a multi-threaded environment is even worse than in a single-threaded environment: the MESI protocol will take additional time.

And in all of this, not a word on NUMA architectures, which will probably be predominant in the future. How can Haskell be made NUMA-aware?

You are making it sound like “we don’t need to worry about low-level details”… where is my API for CPU affinity? When a thread is scheduled onto another core, the data set has to be read twice from memory. This matters in high-performance code.

If the OS exposes APIs like that, it is not because its designers are fools. "

" Bartosz Milewski Says:

September 19, 2013 at 6:24 pm

@Nicola: I really recommend Simon Marlow’s book. It will answer your questions. Locality and unboxing is very important indeed and, at least for matrix calculations, is provided by the Repa library in Haskell. GPU code is produced by the Accelerate library. On GPUs you usually have to copy your data to private GPU memory, and Accelerate does it for you. These libraries are all about scalable performance. Parallel programming is pretty useless if it can’t speed programs up. "

" Edward Kmett Says:

September 19, 2013 at 8:17 pm

Nicola,

I’ve been slowly building up a set of cache-oblivious yet still purely functional data structures in Haskell. Ryan Newton has been working towards NUMA-awareness. I have a decent high performance lock-free work-stealing deque and tbb-style task monad. With GHC 7.8 we’re getting primops for working with SIMD, prefetching, etc. and filling in other gaps in the high-performance spectrum.

Moreover I would challenge that the ‘vector isn’t idiomatic haskell’ argument is a few years old. Nowadays vector, repa, and accelerate are all pretty well en-meshed in the culture.

That said, CPU affinity _is_ still something we don’t have a story for. It bites me too. HECs don’t tend to move around, but that isn’t the best guarantee.

I won’t lie and say we have all of these things today, but we’re not ignoring them. Haskell is continuing to evolve and borrow good ideas. "
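
for reference, the unboxed-vector style Kmett points at looks like this (a sketch of mine using the vector package):

    import qualified Data.Vector.Unboxed as U

    main :: IO ()
    main = do
        -- contiguous, unboxed storage: good data locality, no pointer chasing
        let v = U.generate 1000000 fromIntegral :: U.Vector Double
        print (U.sum (U.map (* 2) v))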

--

" thirsteh 36 days ago

link parent flag

This is highly anecdotal, but I've built and been part of very practical/non-theoretical, large Haskell projects (100k+ lines, which is a lot for Haskell). The only big complaint I have is that it's somewhat hard to do loose coupling, i.e. for something somewhere to reference a type without either redeclaring the type (when that's possible), or having a huge, centralized Types.hs that declares all the types that are used in different places (to avoid cyclic imports.) (Contrast with e.g. Go or ML where you have interfaces/modules without 'implements'.)

This isn't unique to Haskell by any means, but it's the only real complaint I have about Haskell as a language for non-toy projects. The benefits definitely make it my go-to language. It's hard to list them all, but by far the nicest feeling is the correctness: when your code compiles, 60% of the time your program works every time. (Not to imply that tests aren't necessary--QuickCheck is great for that.) It's an otherworldly feeling to write a program not in terms of what to do, but what kinds of filters you want to put on something, and have it just work (and either stay working, or break future compiles if something's changed!) after compiling 10 lines of code, when you would have written at least 50-70 and had to debug it in almost any other language.

Edit: I'll add another complaint: Haskell is like C++ in that it's incredibly easy for a codebase to become completely unmanageable if your team doesn't have a common style/discipline. Go is a nicer language for "average"/"enterprise" teamwork, I think, since it almost forces you to write programs in a way everyone will understand. If you're in a team with good programmers that you trust not to abuse the language, this is a non-issue.

Edit: Okay, another one: If you change your Types.hs, the recompilation can take a long time in a large codebase, similar to C++. But GHC/Cabal keep getting faster.

Think that's it.

evincarofautumn 36 days ago

Have you tried .hs-boot files for separate compilation with circular dependencies? It’s not bad, but it does lead to some extra crapwork when changing interfaces, much like with C++ headers.
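
for reference, the .hs-boot arrangement looks roughly like this (three files; module names are my own illustration, not from the thread):

    -- A.hs-boot: a minimal "forward declaration" of module A
    module A where
    data T

    -- B.hs: imports A through the boot file, breaking the import cycle
    module B where
    import {-# SOURCE #-} A (T)
    wrap :: T -> [T]
    wrap t = [t]

    -- A.hs: the real module, free to import B
    module A where
    import B (wrap)
    data T = T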


thirsteh 36 days ago

I haven't, and it does look pretty cumbersome. So far a couple of Types.hs, i.e. for functionally separate components, have worked out fine, but I'll check it out if that becomes unwieldy.


Dewie 36 days ago

How long does it take to compile the whole codebase? If you know.


thirsteh 36 days ago

My biggest project probably takes about 5-7 minutes to compile in dev mode (no optimizations), and about 15-20 minutes with -fllvm and -O2 on a not-super-beefy i7 laptop (I haven't really timed it recently.)

It's not noticeable in general, since only the Types.hs affect many different files, and they're rarely changed. Also, 99% of the compiles are partial, i.e. only the files that have changed (or dependencies of them) are recompiled. Partial compiles don't feel slower than ones on my smaller Haskell projects.

Is it annoying compared to e.g. Go? Definitely. But it isn't ruining my life. And there's a lot of optimization, fusion particularly, that goes on in the background.


rybosome 36 days ago

You frequently mention nice things about Go in comparison to Haskell. I'm learning Go at the moment, and must admit that I find myself struggling to stay motivated. Given your background in FP (and apparent enjoyment), what do you find appealing about Go?


thirsteh 36 days ago

I like both Go and Haskell. They are wildly different languages, of course (even though some things are at least a little bit similar, like typeclasses and interfaces, cabal and go get, type signatures/inference, 'forkIO' and 'go', etc.)

If it's just me making something, 99% of the time I'll pick Haskell, unless I know everything that I'm going to do is mutate a hash table or array, in which case I use Go. (Not that that's not doable in Haskell, it's just not as intuitive/easy to do efficiently. Keep in mind that I said "if it's the only thing"--Haskell's downsides in this area aren't significant if you're also doing other things, and especially so if just some of those things are pure/don't have side effects.)

The reason why I say "if it's just me" is that Go is much nicer to use in normal teams. To me, it's a Java/Python/Ruby/JS competitor. Let's face it, a lot of enterprise teams aren't as disciplined or as good at/interested in programming as they could be. For this, Go is perfect. You could read that as "Go is for average programmers", and that is true--in a good way. It's extremely easy to pick up for new members of the team (Haskell is very hard, I have to admit!), everybody can collaborate without asking a lot of questions, because of 'go fmt' there's never any indent wars, and it's a snap to compile and deploy binaries.

If I have to do anything like map/reduce/filter, really any kind of operation on a set, I hate using Go. But it's not Go that I hate; it's most imperative languages.

...

When friends ask me which language to look at, I say "get to know Python, then learn Go" nowadays, mainly because pointers and pass-by-value can be a little hard to understand. Go is a great Python replacement. Web applications and APIs/backends I've particularly enjoyed implementing in it. Haskell is for when you've spent so much time coding in imperative languages that it's all boring, and you want to step into an interesting, but extremely frustrating (at first) new world.

A lot of people say e.g. Haskell would be as easy to pick up if it was your first language. I don't think that's true, if only because of the amount of syntax you have to learn--Scheme is probably better--but I do wish I could go back and try.

wtetzner 36 days ago

Any reason not to use, say, OCaml when working on a team? Its module system seems like it would make it well-suited to working with other people.


thirsteh 36 days ago

Or F#. No particular reason, I/we just don't like them as much. (Haskell has many other unique qualities, e.g. proper STM.)

---

"

If a language cannot define a lazy function it simply lacks in abstraction power. I think any kind of fragment of the language should be nameable and reusable. (Haskell lacks this ability for patterns and contexts; a wart in the design.) So if you notice a repeated expression pattern, like

    if c then t else False

and cannot give this a name, like

    and c t = if c then t else False

and then use it with the same effect as the original expression, well, then your language is lacking.

For some language constructs the solution adopted by Smalltalk (and later Ruby), i.e., a very lightweight way of constructing closures, is acceptable. So, for instance, I could accept writing

    ... myAnd x {y} ...

(In SML you could make something using functors, but it's just too ugly to contemplate.) "
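
Haskell itself passes this test thanks to laziness; a two-line demonstration (mine, not from the quote):

    -- Because arguments are lazy, myAnd short-circuits exactly like the
    -- inline if-expression: `t` is never evaluated when c is False.
    myAnd :: Bool -> Bool -> Bool
    myAnd c t = if c then t else False

    main :: IO ()
    main = do
        print (myAnd False undefined)  -- False; `undefined` is never forced
        print (myAnd True True)        -- True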

-- " Cyclic data structures This is related to the last point.

Sometimes you really want cyclic data structures. Examples are the Haskell data types in Data.Data that describe data types and constructors. A data type descriptor needs to contain a list of its constructors and a constructor descriptor needs to contain the data type descriptor. In Haskell this can be described very naturally by having the two descriptors reference each other. In SML this is not possible. You will have to break the cycle by something like a reference (or a function). In OCaml you can define cyclic data structures in a similar fashion to Haskell, so this isn't really a problem with strict languages, but rather a feature that you can have if you like. Of course, being able to define cyclic data leads to non-standard elements in your data types, like

    data Nat = Zero | Succ Nat
    omega :: Nat
    omega = Succ omega

So having the ability to define cyclic data structures is a double edged sword. I find the lack of a simple way to define cyclic data a minor nuisance only. "
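
the Data.Data situation described above, in miniature (my own names, not the real Data.Data types); laziness ties the knot:

    data DataType = DataType { typeName :: String, constrs :: [Constr] }
    data Constr   = Constr   { constrName :: String, ofType :: DataType }

    nat :: DataType
    nat = DataType "Nat" [zeroC, succC]  -- refers to the constructors...

    zeroC, succC :: Constr
    zeroC = Constr "Zero" nat            -- ...which refer back to the type
    succC = Constr "Succ" nat

    main :: IO ()
    main = putStrLn (typeName (ofType zeroC))  -- "Nat": follows the cycle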

--

"

orclev 36 days ago

I've been trying to come up with a general way to rank Haskell experience, and so far I've got:

Beginner: Can read the basic syntax, knows about e.g. Monads and folds

Intermediate: Understands and can use the more advanced classes like Applicative, Arrow, lenses, and how to use things like fix

Advanced: ??? Makes new powerful classes/libraries like Pipe, Conduit, or Netwire? Can easily translate a problem domain into a concise and elegant type representation.

Maybe there needs to be some more layers in there, I don't know... also the advanced level feels weak to me. I'm still somewhere between beginner and intermediate myself.


sseveran 36 days ago

Well I don't claim to understand the mathematical basis of Monads, and I don't use arrows and just started using applicative more often. Lenses rock and I have used those for a long time. I do build my own Monad Transformers so maybe I am still a beginner with intermediate tendencies. :-) "

--

" But nobody has successfully made functional code approachable either, because too often it's contaminated by syntactic sugar that's too symbolic, various handcuffs that make even the simplest tasks arduous (poor handling of global state), or learning curves that can't be scaled when it comes to interacting with real-world time-based data (monads etc). " --- " No, you're definitely not an idiot. Haskell's type system is a completely different language, semantically speaking, from its language of terms. This type language is much more like a logic programming language than a functional one. Type variables are unified by the type inference engine, a concept with which most people are unfamiliar. Type constructors are easily mistaken for data constructors (they often use the same name). A lot of Haskell tutorials gloss over these details which is unfortunate. "

--

boothead 2 days ago

How much interest would there be in a 0 to full Haskell development environment set of ansible scripts (and or Vagrantfile)? I'm working on a start up using Haskell at the moment and I've been capturing all of my set up in this way. If folks are interested I can make some of this stuff available.

jcurbo 2 days ago

Yes, but I'd be more interested in how you put it together than the end product. A good blog post breaking your process and components down would be fantastic.

Keyframe 2 days ago

What do you install and set up apart from ghc/ghci?

boothead 2 days ago

Emacs config. Latest versions of cabal and cabal-install. A workaround for the brokenness of the Haskell platform on ubuntu 13.04. Threadscope. Proper cabal setup for testing. Jenkins setup for CI from the start.

Also investigating dockerizing Haskell services too.

Peaker 2 days ago

Do you have a good ~/.cabal/config auto-installed with library profiling enabled?

boothead 2 days ago

Yes, but I have moved to doing most stuff in sandboxes now and cabal repl is not playing nice with profiling at the moment.

Keyframe 2 days ago

I'd be more than interested to see what you got. I want to introduce Haskell to our production soon, but am still not sure about the whole process of managing it at scale. This would definitely help as an info if nothing else.

dmmalam 2 days ago

this would be very interesting

harichinnan 2 days ago

Up voted. Please post a link.

nbouscal 2 days ago

hdevtools, hoogle, hlint

boothead 2 days ago

Yep, those are all installed as part of my emacs set up.


---

freyrs3 2 days ago

Ok, here's a book on doing Natural Language Processing with Haskell: http://nlpwp.org/book/index.xhtml


--

http://www.willamette.edu/~fruehr/haskell/evolution.html

--

"

barrkel 6 hours ago

Java classes have to be declared to implement an interface. Haskell types don't need to be declared to "implement" a typeclass, since the typeclass instance is declared separately. You don't need to modify the original definition in order to package up a value + functions into an existential; to me, that's the essential advantage of duck typing for larger projects, where you don't have the ability to freely modify any and all source included.

"
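
a small illustration of barrkel's point (toy class of mine): the instance is added after the fact, without touching the type's definition:

    class Describable a where
        describe :: a -> String

    -- Pretend Bool came from a library we cannot modify: we can still
    -- give it an instance, separately from its definition.
    instance Describable Bool where
        describe True  = "yes"
        describe False = "no"

    main :: IO ()
    main = putStrLn (describe True)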

--

" [1] I tried Haskell and Elixir as candidates, but nested data holders with multiply circular references appear to be problematic to deal with in functional languages. Immutable data presents interesting challenges when it comes to cyclic graphs! The solutions suggested by the respective communities involved considerable boiler-plate. More importantly, the resulting code lost direct correspondence with the problem's structural elements. Eventually, I abandoned that approach.↩ "

asdasf 1 hour ago

His dismissal of haskell is rather misinformed. Haskell has no such problems with cyclic data structures. And this:

>Except that abstracting things into functions that take the `old' graph as input, and answer the `new' graph as output are not very convenient or easy.

Is just crazy. Yes, it is very convenient and easy: http://hackage.haskell.org/package/mtl-2.0.1.0/docs/Control-Monad-State-Lazy.html

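
what the linked Control.Monad.State approach looks like in miniature (toy edge-list graph of mine, not the commenter's code):

    import Control.Monad.State (State, modify, execState)

    type Graph = [(Int, Int)]  -- edge list

    addEdge :: Int -> Int -> State Graph ()
    addEdge a b = modify ((a, b) :)  -- "new" graph from "old", threaded implicitly

    build :: State Graph ()
    build = do
        addEdge 1 2
        addEdge 2 3

    main :: IO ()
    main = print (execState build [])  -- [(2,3),(1,2)]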

--

"

signa11 8 hours ago

probably this: http://research.swtch.com/generic ? go had 3 options for implementing generics:

1. leave generics out : the c way (slows programmer)

2. do it at compile time: c++ way (slows compilation)

3. box/unbox everything implicitly: java way (poor perf)


"

"

SideburnsOfDoom 7 hours ago

Hm. Which of these does C# use? Which does Haskell use?

pjmlp 7 hours ago

Regarding C# it depends on which compiler you talk about.

Microsoft's current JIT/NGEN generates a single version for reference types and a separate version for each value type. So a mix of the Java/C++ approaches.

Not sure what the new JIT/NGEN compilers, like RyuJIT, will use.

I also don't know what approaches are taken by Mono, Bartok and IL2CPU, but they might be similar.

SideburnsOfDoom 5 hours ago

Yep, now I can see why this trilemma is "incomplete at best" - C# and others use what can be described as a mixture of these three approaches.

I was going to say that the c# compiler is fast enough despite this, but then I remembered that one of go's selling points is that the go compiler is blindingly fast compared to languages such as c#. Perhaps maintaining that performance with generics is a real issue.

pjmlp 3 hours ago

Although any language with modules is fast enough, compared with C and C++.

Many old timers like myself can remember the blazing fast compile times of Modula-2 and Turbo Pascal compilers in MS-DOS systems.

Go compilers also lack a strong optimizer, which tends to slow down compilation.

dbattaglia 7 hours ago

I believe C# does it at runtime for reference types, and compile time for value types, so that you don't eat the cost of boxing at runtime. It doesn't make sense to make separate compile time generic types for reference types in C# since the type is basically constrained to using system.object members anyway (unless you add type constraints to the generic type class definition).

thirsteh 5 hours ago

Haskell: A mix of 2 and 3.
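
roughly what "a mix of 2 and 3" means for GHC (a sketch of mine, not from the thread): by default you get dictionary passing at run time, and a SPECIALIZE pragma requests a compile-time monomorphic copy:

    -- By default a polymorphic function receives a class dictionary at
    -- run time (in the spirit of option 3).
    square :: Num a => a -> a
    square x = x * x
    -- This asks GHC for a dictionary-free copy, C++-template style (option 2).
    {-# SPECIALIZE square :: Int -> Int #-}

    main :: IO ()
    main = do
        print (square (3 :: Int))       -- can use the specialized copy
        print (square (2.5 :: Double))  -- uses the generic version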

arnehormann 6 hours ago

Also look at a list of proposed ways to add generics (and why they didn't make it): https://github.com/remyoudompheng/go-misc/blob/master/generi...

"

https://github.com/remyoudompheng/go-misc/blob/master/generics/generics.markdown

--

" Records

Haskell is supposed to be about declarative programming. Haskell programs should look like specifications. Haskell is supposed to be succinct and Succinctness is Power, according to Paul Graham.

One area where this breaks down quickly is records.

Compare Erlang

    -record(pot, {
      profit = 0,
      amounts = []
     }).

with Haskell

    data Pot = Pot
        {
         pProfit :: !Word64,
         pAmounts :: ![Word64] -- Word16/
        } deriving (Show, Typeable)
    mkPot :: Pot
    mkPot =
        Pot
        {
         pProfit = 333,
         pAmounts = []
        }

The Haskell version requires twice as many lines of code just to initialize the structures with meaningful defaults. I have 164 records in the program, some of which are rather large. Renaming and using the Haskell accessor functions gets rather tedious after a while and there’s nothing elegant in having to explain to the customer how xyFoo is really different from zFoo when they really mean the same thing. This might seem like no big deal but, again, I have a lot of records.

I tried creating classes for each “kind” of field and I tried using HList to put these fields together into records. This seems like 1) a terrible hack compared to the Erlang version and 2) not very efficient. I did not get to measuring efficiency with a profiler but I did have GHC run out of memory trying to compile my HList code. SPJ fixed this but I decided not to take it further.

" -- http://wagerlabs.com/blog/2006/01/01/haskell-vs-erlang-reloaded/

--

" Speaking of monads… There’s not a lot of beauty in this:

    type ScriptState b = ErrorT String (StateT (World b) IO)
    type ScriptResult b = IO (Either String (), World b)
    type Dispatcher b = Event -> (ScriptState b) Status

    data Status
        = Start
        | Eat !(Maybe Event)
        | Skip
        deriving Show

    instance Show (String, Dispatcher b) where
        show (tag, _) = show tag

    runScript :: World b -> (ScriptState b) () -> ScriptResult b
    runScript world = flip runStateT world . runErrorT

and then this:

    withFilter :: Event -> (Event -> (ScriptState b) ()) -> (ScriptState b) ()
    withFilter event fun = do
        w <- get
        let p = trace_filter w
        unless (p event) $ fun event

In fact, I still don’t have anywhere near in-depth knowledge of how to write my own monad.

Erlang is free of side effects (built-in functions aside) just like Haskell, but pattern-matching becomes far easier and the code becomes much smaller when you don’t have to deal with static typing. To wit:

    %%% Matching on a tuple
    handshake(Bot, _, {udp, _, _, ?SRV_PORT, 1}) ->
        bot:trace(Bot, 85, "handshake: ~w: Error: ~w~n", [?LINE, Code]),
        erlang:error({handshake_error, Code});
    % Matching on a tuple of a different size (records are tuples in Erlang)
    handshake(Bot, [_, _, Event], #srvhandshake{}) ->
        Bot1 = bot:pop(Bot),
        bot:post(Bot1, Event),
        {eat, Bot1};
    % Yet another tuple
    handshake(Bot, Args, X = {tcp_closed, _}) ->
        bot:trace(Bot, 85, "Connection closed during handshake, retrying"),
        Bot1 = retry(Bot, Args, X),
        {eat, Bot1};

"

-- http://wagerlabs.com/blog/2006/01/01/haskell-vs-erlang-reloaded/

--

" Concurrency

Concurrency in Haskell deserves a praise, specially when used together with STM. Threads are lightweight (1024 bytes on the heap) and easy to launch and STM is a beautiful thing. Nothing beats being able to just send yourself a message, though. This is something that you can easily do with Erlang.

Erlang processes (327 bytes starting up, including heap) come with a message queue and you retrieve messages with “selective receive” that uses the same pattern-matching facilities as everything else.

    %%% Dispatch event
    run(_, {keepgoing, Bot}) when is_record(Bot, bot) ->
        receive
            {tcp, _, Packet} ->
                Event = unpickle(Bot, Packet),
                run(Bot, handle(Bot, Event));
            {script, Event} ->
                case Event of
                    {tables, [H|T]} ->
                        trace(Bot, 95, "Event: {tables, [~w, ~w more]}", [H, length(T)]);
                    _ ->
                        trace(Bot, 95, "Event: ~p", [Event])
                end,
                run(Bot, handle(Bot, Event));
            Any ->
                run(Bot, handle(Bot, Any))
        end;

This code just works. It collects network messages, events, timer events, you name it. Posting an event is also easy.

    post(Bot, Event) ->
        self() ! {script, Event}.

I tried implementing this scheme using STM.TChan but failed. The best example of this is my logger. The most natural way to implement logging seemed to be by reading from a TChan in a loop and printing out the messages. I launched several thousand threads, all logging to the single TChan. Bummer, I think I ran out of memory.

Follow-up discussions on Haskell-Cafe narrowed the issue down to the logger thread not being able to keep up. I took this for granted and implemented a single-slot logger. This worked and reduced memory consumption drastically but I believe introduced locking delays in other places since threads could only log sequentially.

Erlang provides the disk_log module that logs to disk anything sent to the logger process. The logger can be located anywhere on a network of Erlang nodes (physical machines or VMs) but I’m using a local logger without any major problems so far.

Could the difference be due to differences in the scheduler implementation?

The Erlang version of my code has a separate socket reader process that sends incoming packets as messages to the process that opened the socket. This is the standard way of doing things in Erlang. Network packets get collected in the same message queue as everything else. It’s the natural way and the right way.

I tried to do the same with Haskell by attaching a TChan mailbox to my threads. Big bummer, I quickly ran out of memory. The socket readers were quick to post messages to the TChan but the threads reading from it apparently weren’t quick enough. This is my unscientific take on it.

Moving to single-slot mailboxes did wonders to lower memory consumption but introduced other problems since I could no longer send a message to myself from the poker bot thread. The socket reader would stick a packet into a TMVar and then the poker bot code would try to stick one in and block. This caused a deadlock since the bot code would never finish to let the thread loop empty the TMVar.

I ended up creating a bunch of single-slot mailboxes, one for the socket reader, one for messages posted from the poker bot code, one for outside messages like “quit now”, etc. Thanks to STM the code to read any available messages was elegant and probably efficient too but overall the approach looks hackish.

    fetch :: (ScriptState b) (Integer, Integer, Integer, (Event))
    fetch = do
        w <- get
        liftIO $ atomically $
            readQ (killbox w)
            `orElse` readQ (scriptbox w)
            `orElse` readQ (timerbox w)
            `orElse` readQ (netbox w)

I had to replace this code with some other hack to be able to run retainer profiling since it does not work with STM.

I also had issues with asynchronous exceptions (killThread blocking?), including crashes with a threaded runtime. "

-- http://wagerlabs.com/blog/2006/01/01/haskell-vs-erlang-reloaded/
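
for reference, the TChan-as-mailbox pattern discussed above, at its simplest (my own minimal example, not the post's code):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.STM (atomically, newTChanIO, readTChan, writeTChan)

    main :: IO ()
    main = do
        box <- newTChanIO  -- a transactional channel, used here as a mailbox
        _ <- forkIO $ atomically (writeTChan box "hello from a lightweight thread")
        msg <- atomically (readTChan box)  -- blocks until a message arrives
        putStrLn msg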

--

" Serialization

This horse has been beaten to death by now. I would just say that thinking of Haskell binary IO and serialization makes me cringe. Binary IO is so damn easy and efficient with Erlang that I look forward to it. Specially after I wrote the Erlang version of the Pickler Combinators. Please refer to Bit Syntax for more information. I would give an arm and a leg to stick to binary IO in Erlang rather than process XML or other textual messages, just because it’s so easy.

With Haskell I tried reading network packets as a list of bytes which was elegant but not very efficient. I also tried serialization based on Ptr Word8 and IOUArray. I don’t think there’s a lot of difference between the two efficiency-wise. allocaBytes is implemented on top of byte arrays, for example.

    allocaBytes :: Int -> (Ptr a -> IO b) -> IO b
    allocaBytes (I# size) action = IO $ \ s ->
        case newPinnedByteArray# size s      of { (# s, mbarr# #) ->
        case unsafeFreezeByteArray# mbarr# s of { (# s, barr#  #) ->
        let addr = Ptr (byteArrayContents# barr#) in
        case action addr of { IO action ->
        case action s    of { (# s, r #) ->
        case touch# barr# s of { s ->
        (# s, r #)
        }}}}}

I would have preferred serialization on top of byte arrays since you can inspect them and see the data. There’s no version of Storable for arrays, though. Not unless you use a Storable Array and then it can only be an array of that instance of Storable. "

-- http://wagerlabs.com/blog/2006/01/01/haskell-vs-erlang-reloaded/

--

" Inspecting the environment

Erlang has plenty of tools to inspect your environment. You can get the number of processes running, a list of process ids, the state of each process, etc. This is very convenient for debugging.

Other libraries

I can log any Erlang term to disk, store it in a database, etc. This makes my life significantly easier.

Conclusion

I was able to finish the Erlang version 10 times faster and with 1/2 the code. Even if I cut the 10-11 weeks spent on the Haskell version in half to account for the learning curve, I would still come out way ahead with Erlang.

This is due to language issues where static typing and records are working against me. This is also due to the many GHC/runtime issues that I stumbled upon, specially with regards to concurrency, networking and binary IO. Last but not least, this is due to the much better library support on the Erlang side. "

-- http://wagerlabs.com/blog/2006/01/01/haskell-vs-erlang-reloaded/

--

haskell from a scala POV, toread:

https://github.com/pchiusano/fpinscala/wiki/A-brief-introduction-to-Haskell,-and-why-it-matters

--

http://www.haskellforall.com/2014/03/introductions-to-advanced-haskell-topics.html

---

http://engineering.imvu.com/2014/03/24/what-its-like-to-use-haskell/

---

haxl library:

https://code.facebook.com/projects/854888367872565/haxl/ https://news.ycombinator.com/item?id=7873933

awda 9 hours ago

Why Haskell?

lbrandy 9 hours ago

Ah, my favorite question.

We previously had a custom DSL and it outgrew its DSL-ness. The DSL was really good at one thing (implicit concurrency and scheduling io), and bad at everything else (cpu, memory, debugging, tooling). The predecessor was wildly successful and created new problems. Once all those secondary concerns became first order, we didn't want to start building all this ecosystem stuff for our homemade DSL. We needed to go from DSL to, ya know, an L. So the question is which...

If you understand the central idea of Haxl, I don't know of any other language that would let you do what Haxl in Haskell does. The built in language support for building DSLs (hijacking the operators including applicative/monadic operations) -really- shines in this case. I would -love- to see haxl-like implicit concurrency in other languages that feel as natural and concise. Consider that a challenge. I thought about trying to do it in C++ for edification/pedagogical purposes but it's an absolutely brutal mess of templates and hackery. There may be a better way, though.

msie 4 hours ago

Did you have a problem with namespace collisions of identical field names in records? Is that much of a problem in Haskell? How did you deal with them? Thanks.

thoughtpolice 4 hours ago

To be completely honest, the namespace/module situation with Haskell could certainly be a _ton_ better, but after 8 years of it I can't ever remember a time when it was ever at the top of my mind as game-breaking. Occasionally quite annoying? Yes, most definitely. But I'd say there are many more annoying things day to day, and in any case, it is certainly a tradeoff I'll put up with for the returns.

That said, GHC 7.10 will ship with a new extension called `OverloadedRecordFields` that will allow you to have identical record field names for different datatypes. Yay!


--

pornel 20 hours ago

Rust has `unsafe` blocks in which you're allowed to do all nasty hacks you want.

Rust actually isn't that complicated. Don't get discouraged by comparisons to Haskell — it's still a C-family language where you can play with pointers and mutable state.

To me Rust still feels like a "small" language (similar size as Go or ObjC, not even scratching complexity of C++). It's mostly just functions + structs + enums, but they're more flexible and can be combined to be more powerful than in C.

thinkpad20 19 hours ago

I think comparisons to Haskell are not too far off the mark. Haskell is not that complicated of a language either. You can do all sorts of complicated things with it, but the language itself is relatively simple. It just has a lot of stuff that you're likely to have never seen before (extensive use of higher-order functions, higher-kinded types, etc), and its type system produces error messages that seem obscure from the outside. Similarly Rust can have some rather obscure error messages that you're probably not going to have seen before during compilation - lifetime specifiers, errors about using things in the wrong contexts, heck, even "Bare str is not a type (what?)"

I'm much more familiar with Haskell than Rust, but having played around with Rust I think they're on a par with each other in terms of difficulty, depending on your background.


--

tel 6 days ago

It would be interesting to see what it takes to get this functionality (at least locally) into Haskell given that there are asynchronous exceptions... even ones with arguably greater safety than their Erlang compatriots.

A similar effort is happening with the Cloud Haskell program, but my understanding is that they're pouring a lot of effort into transmitting arbitrary Haskell functions over the wire between computers. This is pretty unnecessary for supervision and "Let it Crash"-style error handling alone.

jeremyjh 6 days ago

That is a core primitive in Cloud Haskell but not the only one - you also send and receive data as in Erlang. distributed-process-platform provides Erlang style supervisors, gen_server, etc. It's all there and very much captures the features and spirit of OTP in a powerful static type system.

tel 5 days ago

I think I just overemphasized the importance of a particular paper on it that I read.

thinkpad20 6 days ago

Oh, that's a bit disappointing. I thought Cloud Haskell was focused on developing actor-pattern abstractions like lightweight, concurrent threads and fault tolerance. Serializing functions seems like a real red herring.

jeremyjh 6 days ago

You can totally do that in Cloud Haskell - serializing functions can be useful too but it's not the draw for me.

exo762 5 days ago

Somewhat different, but there is an attempt to implement BEAM in Haskell: https://github.com/gleber/erlhask

This problem (sync/async) is one of the things that is being solved in this project.


---

"Rust is inspiring for many reasons. The biggest reason I like it is because it's practical. I tried Haskell, I tried Erlang and neither of those languages spoke "I am a practical language" to me. I know there are many programmers that adore them, but they are not for me. Even if I could love those languages, other programmers would never do and that takes a lot of enjoyment away. " -- http://lucumr.pocoo.org/2014/10/1/a-fresh-look-at-rust/

---

dbaupp 12 hours ago

It can take some wrangling to avoid space leaks and you often have to be careful about strictness (especially when a small change in the code causes the compiler to miss one of the big optimisations), while languages like Rust and C++ put more control into the hands of the programmer and thus achieve high performance more naturally.

As an example of this, consider the benchmarks game[1]. The Rust and Haskell are on par[2] for the most part.

Except, the Haskell programs are having to pull out all stops to reach this level, they essentially all have a ton of strictness annotations, and are doing manual pointer manipulations and even manual allocations. Of the fastest Haskell solutions (i.e. the ones linked on [2]) I count two that don't use a `Foreign.*` module (pidigits and binary-trees) and only one that uses no !'s (pidigits).

On the other hand, none of the Rust programs use any `unsafe` at all (that is, they are guaranteed to be memory safe, e.g. no risk of out-of-bounds accesses or dangling pointers), and are generally not particularly optimised (e.g. the Rust program that is 85 times slower than the Haskell just appears to have been written without thinking about performance at all... it does a whole formatted printing call for each and every character in the output! The version in-tree[3] is at least 50 times faster and still uses no unsafe code).

(I say this as someone who likes Haskell: I'm the person with the most votes for answers in the [rust] tag on StackOverflow, but I've got even more votes than that for my answers on the [haskell] tag.)

[1]: http://benchmarksgame.alioth.debian.org/

[2]: http://benchmarksgame.alioth.debian.org/u64/benchmark.php?te...

[3]: https://github.com/rust-lang/rust/blob/master/src/test/bench...


---

" As for Haskell the language vs others in general, I personally like the combination of high abstraction, static typing and raw speed. Besides (like lisp?) it seems to be a productivity amplifier for very small teams of very sharp people, which is my situation." -- https://news.ycombinator.com/item?id=1924684

---

http://bm380.user.srcf.net/cgi-bin/stepeval.cgi

---

http://www.i-programmer.info/professional-programmer/i-programmer/3892-the-functional-view-of-the-new-languages.html

" One of the negative things I pointed out previously was the dreadful record syntax. Accessing (nested) fields is a nightmare and updating them even more. Not even lenses come close to the syntactic ease of field manipulation of conventional programming languages in my opinion. What’s even more tragic: everybody in the Haskell community is aware of this but despite endless discussions for years a solution is not to be seen on the horizon yet.

Lazy evaluation by default caused problems in every single project thus far - and they were not particularly large projects nor were they particularly complicated. Sometimes I searched eight hours for the source of a space leak. Other times I had to really dig around the internet to learn how to force a computation so that I could measure the time of said computation.

Each time I increased my “defeating-laziness-skills”, for example how to use the profiling tools and how to use deepseq, but for the next “lazy” challenge my skills weren’t good enough and I had to play the role of a detective again, which in fact wasn’t as fun as it sounds. And even though I acknowledge that there are theoretical advantages that are allowed by laziness, personally, I find it to cost more than it helps and I find it telling that even the brightest Haskell hackers struggle so much with space leaks. One of Haskell’s creators has actually released slides with the title “The next Haskell will be strict“. At any rate, I’m not the only person that dislikes laziness by default and there is a lot more that could be said about this topic. Read the paper “Why functional programming matters” to see some semantic advantages of laziness.

I don’t have the source for the following paraphrased quote but I’ve read it a couple of times in the Haskell mailing list:

    Haskell makes some easy things harder and some difficult things easier

In some scenarios I wished for the good old for loop instead of a fold, which is arguably as “low-level” as a for loop, or some not-so-clear combination of some local or inline function definitions and compositions of several combinators. Now I know you can mimic for and while loops in Haskell, but then you are in monadic land and anyhow - they are syntactically and semantically not quite the same.

What this point actually boils down to is that I want a language where immutability as well as mutability are treated like first class citizens. Let’s be honest: if you want to do some things that are easy in conventional, imperative or “unpure” languages, prepare for a syntactic and semantic overhead of concepts in Haskell. It’s not that it’s too hard to understand how to do it in Haskell, but rather too cumbersome in my opinion.

Here’s a sentiment that I stumbled upon in one of the soon-to-be-presented languages (Disciple) and that I share fully:

We still think the Monad type class is useful, and we support it as well. However, when writing Haskell programs we’ve found that most uses of Monad are for IO, and to manage the internal state of the program. In Disciple, we use effects for state-based functions because it’s more convenient, and reserve Monad for things that definitely want a non-standard notion of sequence, like parser combinators.

Source

There are other problems that could be talked about:

    the overwhelmingness and confusion of new concepts: conduits, iterators, enumerators, etc
    package dependency hell (which I have experienced fully)
    standard libraries could be more consistent (e.g. only map, not fmap…)
    …"

---

http://existentialtype.wordpress.com/tag/haskell/

http://existentialtype.wordpress.com/?s=haskell

http://existentialtype.wordpress.com/page/2/?s=haskell

" It should have a natural parallel cost model based on the dependencies among computations so that one may talk about parallel algorithms and their complexity as naturally as one currently discusses sequential algorithms. Imperative-only programming languages fail this criterion miserably, because there are too many implicit dependencies among the components of a computation, with the result that sequentiality is forced on you by the artificial constraints of imperative thinking. ... Unfortunately, Haskell loses on the cost model, about which more in another post " -- http://existentialtype.wordpress.com/2011/03/16/what-is-a-functional-language/

" Finally, this article explains why Haskell is not suitable for my purposes: the principle of induction is never valid for a Haskell program! The problem is that Haskell is a lazy language, rather than a language with laziness. It forces on you the existence of “undefined” values of every type, invalidating the bedrock principle of proof by induction. Put in other terms suggested to me by John Launchbury, Haskell has a paucity of types, lacking even the bog standard type of natural numbers. Imagine! " -- http://existentialtype.wordpress.com/2011/03/21/the-dog-that-didnt-bark/ note: "my purposes" here appears to be to teach classes where people write proofs about program behavior

" Haskell is, in my view, the world’s best imperative programming language, and second-best functional language, but that’s a subject for another post " -- http://existentialtype.wordpress.com/2011/03/16/what-is-a-functional-language/

" Haskell provides better support for parallelism (by undoing its unfortunate commitment to laziness, which results in an unusable cost model for both time and, especially, space), but wasn’t suitable because of its lack of support for modularity. " -- http://existentialtype.wordpress.com/2011/04/16/modules-matter-most/

" In Haskell you have type classes, which are unaccountably popular (perhaps because it’s the first thing many people learn). There are two fundamental problems with type classes. The first is that they insist that a type can implement a type class in exactly one way. For example, according to the philosophy of type classes, the integers can be ordered in precisely one way (the usual ordering), but obviously there are many orderings (say, by divisibility) of interest. The second is that they confound two separate issues: specifying how a type implements a type class and specifying when such a specification should be used during type inference. As a consequence, using type classes is, in Greg Morrisett’s term, like steering the Queen Mary: you have to get this hulking mass pointed in the right direction so that the inference mechanism resolves things the way you want it to. In F# the designers started with the right thing (Caml) and eliminated the very thing that matters the most about ML, it’s module system! Instead the F# designers added a bunch of object-oriented concepts (for the sake of compatibility with .net and with the mindset of MS developers), and tricked up the language with features that are more readily, and flexibly, provided by the module system. " -- http://existentialtype.wordpress.com/2011/04/16/modules-matter-most/

" It is monumentally difficult to reason about the time, and especially space, usage of a Haskell program. Worse, parallelism arises naturally in an eager, not a lazy, language—for example, computing every element of a finite sequence is fundamental to parallel computing, yet is not compatible with the ideology of laziness, which specifies that we should only compute those elements that are required later. "

" My point is that the ML module system can be deployed by you to impose the sorts of effect segregation imposed on you by default in Haskell. There is nothing special about Haskell that makes this possible, and nothing special about ML that inhibits it. It’s all a mode of use of modules.

So why don’t we do this by default? Because it’s not such a great idea. Yes, I know it sounds wonderful at first, but then you realize that it’s pretty horrible. Once you’re in the IO monad, you’re stuck there forever, and are reduced to Algol-style imperative programming. You cannot easily convert between functional and monadic style without a radical restructuring of code. And you inevitably need unsafePerformIO to get anything serious done. In practical terms, you are deprived of the useful concept of a benign effect, and that just stinks! " -- http://existentialtype.wordpress.com/2011/05/01/of-course-ml-has-monads/


there is or used to be some sort of problem in Haskell with exceptions and the type system that can cause a compiler crash:

http://existentialtype.wordpress.com/2012/08/14/haskell-is-exceptionally-unsafe/

comments say this is merely due to 'lying' in the Typeable instance

---

" It’s quite obvious to me that the treatment of exceptions in Haskell is wrong. Setting aside the example I gave before of an outright unsoundness, exceptions in Haskell are nevertheless done improperly, even if they happen to be sound. One reason is that the current formulation is not stable under seemingly mild extensions to Haskell that one might well want to consider, notably any form of parameterized module or any form of shadowing of exception declarations. For me this is enough to declare the whole thing wrong, but as it happens Haskell is too feeble to allow full counterexamples to be formulated, so one may still claim that what is there now is ok … for now." -- http://existentialtype.wordpress.com/2012/12/03/exceptions-are-shared-secrets/

---

" eyall says: May 2, 2011 at 12:12 pm ... GHC is indeed written in an imperative style, and some in the Haskell community voice concern over that — but of course that is a choice of style that could go either way. There are plenty of functional-style Haskell compilers.

Yin Wang says: May 3, 2011 at 11:31 pm

You made good points. I looked at the source code of JHC, and it looks a lot more functional than that of GHC. Thanks, eyall.

" -- http://existentialtype.wordpress.com/2011/05/01/of-course-ml-has-monads/#comment-859

---

" it is not clear whether monadic isolation of effects (esp store effects) is a good idea, because it precludes benign effects." -- http://existentialtype.wordpress.com/2011/05/01/of-course-ml-has-monads/#comment-830

---

" In Haskell you have type classes, which are unaccountably popular (perhaps because it’s the first thing many people learn). There are two fundamental problems with type classes. The first is that they insist that a type can implement a type class in exactly one way. For example, according to the philosophy of type classes, the integers can be ordered in precisely one way (the usual ordering), but obviously there are many orderings (say, by divisibility) of interest.

...

psteckler says: April 17, 2011 at 6:45 am

Another excellent post.

I hadn’t before thought about the limitations of type classes you mention. Yes, you might want Int to be an instance of Ord in arbitrarily many ways. Like the Model T, you can choose any color, as long as it’s black.

schoenfinkel says: April 17, 2011 at 5:05 pm

    you can choose any color, as long as it's black

This particular objection has to do with a particular concrete type though, and for that reason seems kind of weak. Getting new class instances is a standard use of the newtype keyword, explained a few pages into the average Haskell tutorial:

    newtype Backward = B {b :: Int} deriving Eq
    instance Ord Backward where (<=) = (>=) `on` b
    -- ghci> B 1 < B 0
    -- True

At the other extreme, though, the great type classes, like Functor and Monad, attach not to concrete types but to things of kind (* -> *), and when you look into them, it seems there usually aren't too many 'colors' available. How many ways can [] or Maybe be made a Functor or Monad? One great class in that family, Applicative, does admit more than one list instance; thus the introduction of newtype ZipList a = ZipList [a] in Control.Applicative.

I don't mean to be objecting to the general point of the essay, which most Haskell users accept.

Abstract Type says: April 17, 2011 at 9:22 pm

I know about newtype, of course. But, as the name implies, it's a new type, it's not int.

schoenfinkel says: April 17, 2011 at 5:08 pm

Sorry, got my code tags mixed up there.

schoenfinkel says: April 17, 2011 at 5:34 pm

Here’s more or less what I intended, if you care to replace this and the above mess. http://hpaste.org/raw/45786/type

" -- http://existentialtype.wordpress.com/2011/04/16/modules-matter-most/

" I also think you’re unfair to type classes. You’re right that they are not completely satisfying as a modularity tool, but your presentation make them sound bad in all aspects, which is certainly not true. The limitation of only having one instance per type may be a strong one, but it allows for a level of impliciteness that is just nice. There is a reason why, for example, monads are relatively nice to use in Haskell, while using monads represented as modules in a SML/OCaml programs is a real pain. It’s a fact that type-classes are widely adopted and used in the Haskell circles, while modules/functors are only used for relatively coarse-gained modularity in the ML community. It should tell you something useful about those two features: they’re something that current modules miss (or maybe a trade-off between flexibility and implicitness that plays against modules for “modularity in the small”), and it’s dishonest and rude to explain the adoption difference by “people don’t know any better”. " -- http://existentialtype.wordpress.com/2011/04/16/modules-matter-most/#comment-735

"

State related Problems

As I already mentioned, the lion’s share of problems/bugs were state/identity related in both the Lua & Java version. Other problems were logical errors or small slips that could be fixed immediately. If your reasoning is wrong then the programming language can’t help you since it can’t guess what you really want to do. The situation is different with state/identity related bugs so I want to concentrate on these.

I pointed out the slight annoyance of not being able to freely rearrange the order of definitions in a Lua source file. There is another reason why I don’t like it in general: you must specify an order whether you care about it or not. So when refactoring code and shuffling it around I sometimes found myself changing the order in such a way that some variables were not reasonably initialized, which led to strange behaviour but no error messages.

Even if Lua has only one mutable type - the table - I often found myself subtly overlooking this fact and thus creating weird side effects. I’m going to provide one concrete example:

--...
local function _createAllCodes(codes, code, length)
    if length == 0 then
        codes[#codes+1] = {unpack(code)}
    else
        for i=0,7 do
            code[length] = T[i]
            _createAllCodes(codes, code, length-1)
        end
    end
end

--- Create a list of all possible codes with a length of 'length'.
local function createAllCodes(length)
    local codes = {}
    _createAllCodes(codes, {}, length)
    return codes
end
--...

The line codes[#codes+1] = {unpack(code)} used to be codes[#codes+1] = code. The former version reused references whereas the new version created new references. When I used the first version I couldn’t understand why the program was not working correctly until I, through a lot of tests and print statements, narrowed down the problem. There were a lot of problems like this one.

Much the same applies to the Java version. Here is an example:

...
public Code(T... t) {
    this.code = t.clone();
}
...

The line this.code = t.clone(); used to be this.code = t; and, like the Lua example, led to weird behaviour.

It says a lot about such state/identity related problems when neither I nor six additional eyes could find them. As I mentioned, the initial version was created in, I guess you could say, “pair programming style”. I agreed with a fellow student on the general design of the code and he sat next to me when I coded. I said out loud every function I wanted to create and how I was going to write it. He critically verified that I really did do what I said and whether it made sense at all to do it this way. Still, it turned out that there were bugs. When I explained to the others that there must be a bug in the code we went on a long bug search. Eventually, one of them told me that he had found it. I was interested in why his fix worked. He couldn’t explain it. He just changed some lines and the code seemed to work. I, too, couldn’t understand why this particular fix worked. I repeat: it says a lot about such state/identity related problems when you often can’t explain why a “fix” fixes the code.

Even if you pretend that you are aware of using mutable objects, there will be one short moment of inattention and a subtle state related bug will sneak in. And boy, are they hard to find! On top of that, they have very weird and often not reproducible effects, as I described earlier. I know someone who, I think, would argue that it is the programmer’s fault, that the programmer’s inability leads to this kind of bug and it is not the language’s fault at all - the language is good enough. Well, I don’t share this opinion.

Now to a selection of some anecdotes that are only tangentially related to the simulation but all have something to do with state related bugs. As a matter of fact, I wanted to write down six anecdotes but I decided that two are enough to make the point:

As a task of a university course I had to write a Java program that finds the shortest route (of cities) from one city to any other using Dijkstra’s Shortest Path Algorithm. We were given the desired output for one city as the starting point and we were given a file with the nodes and edges that were needed to solve this task. I finished the task and ran the file. To my surprise the output was wrong! I didn’t want to believe it so I ran the file again and, voila, the output was correct!? Why? I didn’t change anything! After some tests I observed that, whenever I ran the file after not having run it for some time, the wrong output was produced. However, when running the file quickly in succession, apart from the first time the correct output was produced. I mean, what is this!? What does such a behaviour tell me? In fact, my code was correct all along since later it turned out that there was a wrong value for an edge in the given file… But how could this weird behaviour tell you such a thing? If the program had always produced the wrong output I would have been okay with it, but it didn’t.

As you know, the simulation actually has been an auxiliary program for the greater university task of making a driving robot play mastermind. Well, nearing the end of the project’s deadline we had finished the program and we made a test run with the robot. We knew the secret code and the algorithm, so we knew which buttons the robot had to approach and in which order. Everything seemed to be great until the dreaded null-pointer exception beep destroyed our confidence shortly before the robot would have entered the correct sequence! We couldn’t understand it and the code didn’t give any clues either. In fact, this exception never happened again. We weren’t the only victims of null-pointer exceptions. All the time the other groups encountered them - even in the final run. If there are possibilities and programming languages that allow us to be freed from “the null-pointer exception”, why then do we still hold on to it by not changing our tools?

To sum it up, I wanted to stress that bug hunting, for this simulation and in general, is a dominant but bad and hated part of programming. Every means to reduce bug hunting should be encouraged.

Haskell

If you are not familiar with Haskell have a look at a deliberately uncritical article about Haskell’s virtues ( http://web.archive.org/web/20120402183419/http://www.haskell.org/haskellwiki/Why_Haskell_matters ). Haskell’s most valuable virtue for this task is that apart from being functional Haskell is “purely functional” and useful - right now - for practical tasks. That means that you must be very explicit in your code whenever you use effects, and therefore you are always aware whether your change can alter effects or not. I find this approach very good and I felt that you start to think more precisely about problems because you try to reduce the “state variables” to the absolute minimum. Now to the real experiences with writing the simulation in Haskell.

You should know that this has been my first “real” or rather independent Haskell project apart from small exercises. As the saying goes, the devil is in the details, and even though the small exercises were very naturally solved in Haskell, this particular task highlighted some problems with Haskell, especially regarding laziness. To give you an example of Haskell’s beautiful side take a look at this definition:

-- | Create a list of all possible codes with a specific length.
createAllCodes :: Int -> Codes
createAllCodes 0      = [[]]
createAllCodes length = [ c:r | c <- buttons, r <- createAllCodes (length-1) ]

I think the code is pretty self-explanatory if you know Haskell a bit. The only thing you need to know is that buttons is a list of, let’s say, the colors in mastermind. At the same time there are some, I’d say, quite ugly definitions in my code. Somehow, however, I suspect that they could be rewritten in a better style if I were more experienced.

Generally, I like Haskell’s strong type system. Even though sometimes I couldn’t understand the error messages immediately, the type system often unveiled bugs. Furthermore, adding type definitions makes the code self-documenting.

As mentioned, I was/am a Haskell beginner. Nonetheless, I had hoped writing the simulation in Haskell would have gone more smoothly. E.g. I did not fully understand the State Monad when I first used it. See also Put and the State Monad. Also, I had some problems utilizing the functional paradigm, which was new to me. See also a question of mine on stackoverflow.

On the plus side I had far fewer bugs. Actually there were only two off-by-one runtime errors when using head and !! and a devious infinite mutual recursive loop. Now is a great time to say this: Haskell & its type system is no magic dust. It can help you extensively but it cannot (yet) do everything. There are, however, extensions to Haskell like dependent types which I think could have prevented the off-by-one errors by specifying at compile time, e.g., that a list cannot be empty or that two lists given to a function must have the same length. You could say that in the future there will be even more possibilities where the type system can help you write better/more correct code. Now that I think about it, I did a wrong conversion from seconds to milliseconds in both the Java & Haskell version (maybe even in the Lua version). This could have been prevented by associating units with types, but perhaps it would have been overkill. As you can see, I can count the bugs in the Haskell version on one hand. Yet, there were other annoyances. Let’s have a look at Haskell’s warts.
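
(On the off-by-one point: today's base already offers a milder remedy than full dependent types. A minimal sketch using Data.List.NonEmpty, a list type that cannot be empty, so taking its head is total; firstCode is an invented name:)

  import qualified Data.List.NonEmpty as NE
  import Data.List.NonEmpty (NonEmpty(..))

  firstCode :: NE.NonEmpty [Int] -> [Int]
  firstCode = NE.head   -- cannot crash: the empty case is unrepresentable

  -- ghci> firstCode ([1,2] :| [[3,4]])
  -- [1,2]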

First of all, the record syntax is tedious, very tedious. There are extensions that make it easier, like RecordWildCards, but still. Granted, implicit conversion of numbers is not a good idea, but on the other hand explicit conversion led to fromIntegral noise in my code. Sometimes, it seems, straight imperative programming is the best way to write a function, but then the code looks kind of ugly. The dreaded type related error messages of Haskell are not that bad, I’d say. Yet sometimes you don’t understand a word of them. I guess this is the price you pay for an expressive but abstract & general type system. There is indeed some boilerplate, especially regarding the update of fields in state records. Whereas in imperative languages you can write something like variable += 1 and it is perfectly expressive, in Haskell, due to the cumbersome syntax for field updates and immutable variables, it is too much line noise if state updates/queries are not put in their own helper functions. Laziness was a real problem for showing progress and measuring the time of the computation. Granted, here it was a rather artificial requirement. If I really wanted to measure the performance I would have used profiling tools. But the point is, again, to compare (almost) the same implementation in different programming languages.
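
(To make the variable += 1 comparison concrete, here is the kind of field update the author means, with a hypothetical SimState record; in an imperative language the body of tick would just be state.step += 1:)

  data SimState = SimState { step :: Int, score :: Int }

  tick :: SimState -> SimState
  tick s = s { step = step s + 1 }   -- record-update syntax for a single increment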

And although the above paragraph sounds like I am very nitpicky regarding programming languages (which I am) and Haskell, Haskell was damn fun because it allowed me to be very expressive & precise and it gave me a feeling of safety about my program. I understand that Haskell has its weak points, too, and I really felt them and now know (better than before) how to deal with them. Plus, there is so much more to come from Haskell et al., in particular regarding the innovation of type systems, that everything can only become better.

Conclusion

The point I wanted to make with this experience report is that I believe the reduction of effects/state to a local and manageable minimum helps writing good & correct software. Even more so if the programming language provides syntactical and semantic means to achieve the separation of effects. I would even go so far as to say that this separation (perhaps hand in hand with laziness) is in some ways similar to the introduction of garbage collection in mainstream languages. Where garbage collection frees the programmer from thinking about managing memory, separation & laziness free the programmer from reasoning about all the effects a piece of (pure) code can have and the order in which something is declared. As with garbage collection this does not come without tradeoffs, but I think the tradeoffs are worth it. " -- http://web.archive.org/web/20120402183419/http://www.eugenkiss.com/blog/2011-05-experience-report-a-mastermind-simulation-in-lua-java-and-haskell.html

" Lazy bindings I like to tell Haskell beginners that any subexpression can be named and "pulled out", modulo name capture, of course. ... any kind of fragment of the language should be nameable and reusable. (Haskell lacks this ability for patterns and contexts; a wart in the design.)" -- http://augustss.blogspot.com/2011/05/more-points-for-lazy-evaluation-in.html

---

" Due to the problems I observed with Haskell, I finally understood the appeal of languages like OCaml, Scala or F#.

Broadly speaking, they are similar to Haskell in that they emphasize functional paradigms, but they are strict, have first class support for mutability (apart from immutability) and imperative programming paradigms. Of course each of them comes with its own baggage of new concepts and ecosystem-wise there are quite a lot of differences.

So far I’ve tried to play a bit with Scala again, but I can’t really put my finger on it - I just can’t like the language. It’s probably because Scala seems to be even quite a bit more complex than Haskell and there seem to be so many ways to achieve a task. F# seems to be too .NET centric and OCaml seems to have its share of its own problems, but that doesn’t mean that I don’t want to check it out someday. " -- http://www.i-programmer.info/professional-programmer/i-programmer/3892-the-functional-view-of-the-new-languages.html?start=1

" I first started with Haskell. Algebraic datatypes, immutability, the help of the type system, automatically derived type classes, type classes in general and some syntactic virtues of Haskell made the beginning of the project really pleasant. However, when it came to state shuffling, monadic code paired with the unforgiving record (updating) syntax I soon felt that the elegance was gone.

Next in line was Java. Ironically, using the “unsexy” Java language I almost fully implemented my project, which hadn’t been the case for the Haskell version, although I really felt that missing support for algebraic datatypes, immutability, null pointer safety, first class functions and closures, conciseness, and the interlocking of namespaces, classes and files made a lot of things unnecessarily cumbersome and unclear. For instance, I had to manually provide copy, equals and hash methods, I had to make silly classes like Pair due to a lack of tuples, I had to create classes where a simple closure would have sufficed, etc.

Still, the uncumbersome way of writing down control flow, post-fix accessing of methods/fields on a class, great IDE support and first class field manipulation support made up for the weaknesses - that is, just enough that I still had motivation to almost finish the project. "


" Writing a Haskell compiler is a big undertaking, and the work required to compile a moderate number of programs from Hackage is immense. Too many libraries rely on something GHC specific - either language or runtime features." -- http://yhc06.blogspot.com/2011/04/yhc-is-dead.html

---

http://flyingfrogblog.blogspot.com/2011/01/io-throughput-haskell-vs-f.html

---

http://stackoverflow.com/questions/3429634/foldl-is-tail-recursive-so-how-come-foldr-runs-faster-than-foldl

---

" You can easily obtain speed in OCaml and Haskell. The question is at what cost? Usually when I see fast "Haskell" programs, what is really happening is a grand massaging of the language to make it produce the "right" machine code output getting rid of all the overhead tied to the language. This is deeply unfortunate because it makes the impression that your implementation is fast. " -- http://www.reddit.com/r/programming/comments/pylx/ask_reddit_why_is_ocaml_faster_than_haskell

---

http://www.yesodweb.com/blog/2014/10/classy-base-prelude

October 6, 2014 Greg Weber


Haskell's Prelude is changing to favor using Foldable/Traversable instead of just lists. Many Haskellers are concerned that upcoming changes to the Prelude could

    break existing code
    make maintaining code more difficult
    decrease beginner friendliness

Let's discuss these concerns.

---

"

Q: What's the point of map in Haskell, when there is fmap?

Everywhere I've tried using map, fmap has worked as well. Why did the creators of Haskell feel the need for a map function? Couldn't it just be what is currently known as fmap and fmap could be removed from the language?

A:

Historical reasons.

First came map, because, hey, there were lists.

Then someone said: "Let there be functors!". And was somewhat miffed, b/c map was already taken. So they said "screw it, call it fmap."

And it was so.

Then Functor became a part of the standard library, and everybody said "this fmap name is lame, but we don't want to change the name of map, because that might break stuff."

So they did not.

Edit: Or, as the case actually is, I'm wrong: see augustss's comment below.

-- rampion

That's not actually how it happens. What happened was that the type of map was generalized to cover Functor in Haskell 1.3. I.e., in Haskell 1.3 fmap was called map. This change was then reverted in Haskell 1.4 and fmap was introduced. The reason for this change was pedagogical; when teaching Haskell to beginners the very general type of map made error messages more difficult to understand. In my opinion this wasn't the right way to solve the problem. – augustss Jul 26 '11 at 8:47 ... augustss is Lennart Augustsson, who for all practical purposes has been part of the Haskell community since before Haskell existed, cf. A History of Haskell " -- http://stackoverflow.com/questions/6824255/whats-the-point-of-map-in-haskell-when-there-is-fmap

---

http://www.haskell.org/haskellwiki/Foldable_and_Traversable

---

http://stackoverflow.com/questions/3529439/haskell-coding-style-map-fmap-or

these 3 are all the same:

map toLower "FOO"

fmap toLower "FOO"

toLower <$> "FOO"

... <$> is the same as `fmap` ... map is just a less general form of fmap?
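
the types spell out the "less general" claim (signatures as in the Prelude and Data.Functor; toLower comes from Data.Char):

  map   ::              (a -> b) -> [a] -> [b]
  fmap  :: Functor f => (a -> b) -> f a -> f b
  (<$>) :: Functor f => (a -> b) -> f a -> f b   -- operator spelling of fmap

since String is [Char], all three accept toLower :: Char -> Char applied to "FOO".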

---

http://www.haskell.org/haskellwiki/Functor-Applicative-Monad_Proposal (called the AMP in other places). Related to the idea of adding Foldable/Traversable to the Prelude.

---

"classy-prelude has had the luxury of being able to re-think every Haskell wart"

https://hackage.haskell.org/package/classy-prelude

---

" Why change? Module qualified names and generic code.

The motivation for classy-prelude was to confront one of Haskell's most glaring warts: name-spacing and the need to qualify functions. We could certainly have our IDE automatically write import statements, but we still end up needing to use module qualified names. This isn't really an acceptable way to program. I have not seen another language where this extra line noise is considered good style. For Haskell to move forward and be as convenient to use as other programming languages, there are 2 solutions I know of.

    change the language
    make it convenient to write generic code

Changing the language so that module qualification is not needed is arguably a much better approach. This is the case in Object-Oriented languages, and possible in languages very similar to Haskell such as Frege that figure out how to disambiguate a function based on the data type being used. I think this would be a great change to Haskell, but the idea was rejected by Simon Peyton Jones himself during the discussion on fixing Haskell records because it is not compatible with how Haskell's type system operates today. Simon did propose Type directed name resolution, which I always thought was a great idea, but that proposal was not able to get off the ground in part because changing Haskell's dot operator proved too controversial.

So the only practical option I know of is to focus on #2. Being able to write generic code is an important issue in and of itself. Programmers in most other mainstream languages write code that operates on multiple data structures of a particular shape, but Haskell programmers are still specializing a lot of their interfaces. " -- http://www.yesodweb.com/blog/2014/10/classy-base-prelude

---

" Lists are holding Haskell back

It is taken by many to be a truism that programming everything with lists makes things simpler or at least easier for new Haskell programmers. I have found this statement to be no different than 99% of things given the glorious "simple" label: the "simplicity" is not extensible, does not even live up to its original use case, and ends up creating its own incidental complexity.

I used to frequently warp the functions I wrote to fit the mold of Haskell's list. Now that I use classy-prelude I think about the data structure that is needed. Or often I start with a list, eventually discover that something such as appending is needed, and I am able to quickly change the function to operate on a different data structure.

Using an associative list is an extreme example of using the wrong data structure where lookup is O(n) instead of constant or O(log(n)). But by warping a function I am really talking about writing a function in a way to reduce list appends or doing a double reverse instead of using a more natural DList or a Seq. This warping process probably involves performing recursion by hand instead of re-using higher-order functions. As a library developer, I would like to start exposing interfaces that allow my users to use different data structures, but I know that it is also going to cause some inconvenience because of the current state of the Prelude.
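
(The associative-list point in code, assuming the standard containers package; same data, very different lookup costs:)

  import qualified Data.Map as Map

  pairs :: [(String, Int)]
  pairs = [("one", 1), ("two", 2), ("three", 3)]

  viaList :: Maybe Int
  viaList = lookup "three" pairs                     -- Prelude lookup: O(n) scan

  viaMap :: Maybe Int
  viaMap = Map.lookup "three" (Map.fromList pairs)   -- balanced tree: O(log n)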

Neil writes that he had an opposite experience:

I have taken over a project which made extensive use of the generalised traverse and sequence functions. Yes, the code was concise, but it was read-only, and even then, required me to "trust" that the compiler and libraries snapped together properly.

This kind of report is very worrying and it is something we should take very seriously. And you certainly cannot tell someone that their actual experience was wrong. However, it is human nature to over-generalize our experiences just as it was the nature of the code author in question to over-generalize functions. In order to have a productive discussion about this, we need to see (at least a sample or example of) the code in question. Otherwise we are only left guessing at what mistakes the author made.

In general I would suggest specializing your application code to lists or other specific structures (this can always be done with type signatures) until there is a need for more abstraction, and that could be a big part of the problem in this case. " -- http://www.yesodweb.com/blog/2014/10/classy-base-prelude

---

" It isn't Beginner vs. Expert anyways

The most up-voted comment on Reddit states:

What other languages have different standard libraries for people learning and people not learning? What could be a greater way to confuse learners, waste their time and make them think this language is a joke than presenting them with n different standard libraries?

I will add my own assertion here: Haskell is confusing today because the Prelude is in a backward state that no longer reflects several important best practices (for example, Neil had to create the Safe package!) and it does not hold up once you write more than a trivial amount of code in your module.

We also need to keep in mind that using Haskell can be difficult for beginners precisely for some of the same reasons that it is painful for experts. And the same reason these new changes will be more difficult for beginners (mental overhead of using the Foldable/Traversable abstraction instead of just lists) will also create difficulties for non-beginners.

So the changes to the Prelude are going to make some aspects better for beginners or existing users and others harder.

If we really want to improve Haskell for beginners we need to stop creating a false dichotomy between beginner and expert. We also need to empower committees to make forward progress rather than letting minority objections stall all forward progress.

Improving the library process

Some have expressed being surprised to learn about what is going on in the Haskell libraries committee at a late stage. On the other hand, I doubt that hearing more objections earlier would actually be helpful, because the libraries process has not learned from GHC.

Take a look at the extensive documentation around proposed changes to improve Haskell's record system. Creating a solution to Haskell's problem with records was a very difficult process. There were several designs that looked good in a rough sketch form, but that had issues when explored in thorough detail on the wiki. More importantly, the wiki helped summarize and explain a discussion that was extremely laborious to read and impossible to understand by looking through a mail list.

Before creating a non-minor change to GHC, there is a convention of creating a wiki page (certainly it isn't always done). At a minimum there is a Trac ticket that can serve a somewhat similar purpose.

My suggestion is that the libraries process use the existing GHC or Haskell wiki to create a page for every non-minor change. The page for Foldable/Traversable would explain

    what is being changed
    which changes create a breakage
    how type errors have changed
    how library code is affected
    how user code is affected
    best practices for using Foldable/Traversable 

Right now we are stuck in a loop of repeating the same points that were already made in the original discussion of the proposal. Given a wiki page, Neil and others could point out the down-sides of the proposal with actual code and have their voice heard in a productive way that builds up our body of knowledge. " -- http://www.yesodweb.com/blog/2014/10/classy-base-prelude

---

" Wednesday, October 01, 2014 Why Traversable/Foldable should not be in the Prelude

Summary: For GHC 7.10, Traversable and Foldable are going to be in the Prelude. I missed the original discussion, but I suspect it's a bad idea.

Types are how Haskell programmers communicate their intentions to each other. Currently, the Haskell Prelude contains:

mapM :: Monad m => (a -> m b) -> [a] -> m [b]

As of GHC 7.10, as part of something known as the Burning Bridges Proposal (ticket, discussion, I can't actually find a full proposal...), that will become:

mapM :: (Traversable t, Monad m) => (a -> m b) -> t a -> m (t b)

Surely that's a good thing? Aren't more general types always better? Isn't the Prelude an archaic beast from the time before? I'd argue functions which are highly polymorphic are hard to use, and hard to think about, especially for beginners. I'd also argue the Prelude is remarkably well designed, not perfect, but quite an impressive feat.

What makes a type signature complex?

I've been thinking recently about what makes type signatures complex, both to practitioners, and to relative beginners. My rough metric is:

    Fully concrete types are usually simple, as long as they aren't too long. The longer a type gets, the more complex it gets.
    Types with functions in them aren't too bad (order-1 types), but as you go up to order-2 types things start to get more complex.
    Fully polymorphic functions can be simpler than concrete functions, since they declare what you don't need to worry about.
    Functions with type classes are more complex, since you need to read the type signature while looking at the context, and need to know each class being used.
    Simple type classes (Eq, Show) aren't too bad, but custom type classes impose more of a burden.
    As you add more type classes, the complexity grows faster than linearly. Three type classes are not three times as complex as one, but quite a bit harder.
    Higher kinded type classes are significantly more complex than kind * type classes, e.g. Monad, Functor. The reason is that instead of having a hole you fill in, you now have a hole which itself has a hole.
    The higher-kinded type classes Monad and Functor aren't as bad as the others, since Functor is really the "simplest" higher-kinded type class, and Monad is required knowledge for IO.
    As you have more higher kinded type classes, the complexity burden grows even worse than for kind * type classes. Two is significantly more complex than one.

By that metric, the old mapM isn't too bad, but the new mapM is quite complex. It has two higher-kinded type classes, and one of them is not one of the common ones. I appreciate that making Foldable and Traversable key to Haskell will probably lead to them being more used, but now all beginners are going to have to wade through the Monad tutorial, their Foldable tutorial and their Traversable tutorial before they start programming (or just give up).

Why generality hurts

There are two main reasons why generality hurts:

Reading type signatures becomes difficult/impossible. We already have that problem with the Control.Arrow module, which (as far as most people use it) is just a pile of tuple combinators. But unlike other tuple combinators, these are ones whose type signature can't be understood. When I want to use &&& or *** I just pick randomly, see if it type checks, then try again. When other people I know want to use these functions, they just use an explicit lambda. No one thinks of referring to the documentation, since the documentation presents a unification problem (which most of the people I know could solve), not an intuition.

Reading code becomes difficult. Haskell is brilliant for letting you write a composable pipeline of code that takes some input, does some processing, and produces some output. But that only works if you have enough concrete pieces in each function to read each piece in isolation. As an example:

test = foo . mapM baz . bar

Using the current mapM definition I can, in a fraction of a second, know the approximate shape of what foo consumes, and what bar produces. With the new mapM I don't, and have to keep more context in my head to reason about the code.
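
(A common way to recover that local readability under the generalized type is to pin mapM back down with a type signature; a self-contained sketch, with shout as an invented name:)

  import Data.Char (toUpper)

  -- The signature specializes the Traversable/Monad mapM to lists and IO,
  -- so a reader sees the concrete shapes at a glance.
  shout :: [String] -> IO [String]
  shout = mapM (\s -> do putStrLn s; return (map toUpper s))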

Who it hurts

Generality of this nature tends to hurt two types of people:

Beginners are hurt because they need to know more concepts just to get going. As a beginner I read through Data.List regularly to build up weapons in my arsenal to attack larger problems. The new Data.List will be generalised, and reading it won't give the insights I enjoyed. Maybe the beginner can instantiate all Foldable things to [], but that adds a mental burden to exactly those people who can bear it least.

Practitioners, those who are paid to code for a living, will have greater problems with maintenance. This isn't an unsubstantiated guess... I have taken over a project which made extensive use of the generalised traverse and sequence functions. Yes, the code was concise, but it was read-only, and even then, required me to "trust" that the compiler and libraries snapped together properly.

Who it benefits

The benefit probably comes from those who are already using the Applicative/Traversable classes regularly. For these people, they can probably avoid an import Prelude(). I am not against ever changing the Prelude, but I do think that for changes of this magnitude the ideas should probably be prototyped as a separate package, widely accepted, and only then should significant surgery be attempted on the Prelude. The classy-prelude work has gone in that direction, and I wish them luck, but the significant changes they've already iterated through suggest the design space is quite large.

Concluding remarks

I realise that I got to this discussion late, perhaps too late to expect my viewpoint to count. But I'd like to leave by reproducing Henning Thielemann's email on the subject:

        David Luposchainsky wrote:
        +1. I think the Prelude should be a general module of the most commonly
        needed functions, which (generalized) folds and traversals are certainly
        part of; right now it feels more like a beginner module at times.
    It is certainly a kind of beginner module, but that's good. Experts know
    how to import. Putting the most general functions into Prelude does not
    work because:
    1. There are often multiple sensible generalizations of a Prelude
    function.
    2. You have to add more type annotations since types cannot be inferred
    from the functions.
    There is simply no need to change Prelude and all packages that rely on
    specific types. Just don't be lazy and import the stuff you need!
    I should change my vote to:
    -10" -- http://neilmitchell.blogspot.co.uk/2014/10/why-traversablefoldable-should-not-be.html

---

http://dev.stephendiehl.com/hask/

--

[–]yitz 6 points 1 day ago

    proposal made by Simon Marlow a year and a half ago that if you import Prelude.Foo then NoImplicitPrelude would get set automatically. This would make alternate preludes easier for folks to push.

That is a really nice idea.

[–]WilliamDhalgren 3 points 1 day ago*

right.

If I'm reading that design correctly, the leaf class with InstanceTemplates still needs to be coded for the true hierarchy above it, with a "default instance <classname>" for each superclass template it inherits. The example given has the Monad declaration still conscious of the default Functor instance in Applicative.

and still gets warnings for any generated instances unless doing a "deriving <classnames>" for all actual classes on the final datatype.

IMHO not as scalable as McBride's proposals, where final instances apparently freely mix declarations from all intrinsically-declared superclasses.

There you only get warnings if pre-empting with explicitly created instances, allowable for a transitional period with a PRE-EMPT pragma, or an error otherwise, without excluding these explicitly from being generated.


[–]edwardkmett 6 points 1 day ago

When I last spoke with Richard we'd talked about including such a component in the proposal. I'm unsure if its absence is an act of omission or commission.

The ability to split a class is particularly dear to me, if we ever want to have the ability to refine our class hierarchies without doing so on the back of every user.


--

http://www.reddit.com/r/haskell/comments/2if0fu/on_concerns_about_haskells_prelude_favoring/

---

[–]RedLambda 13 points 1 day ago

    ...and they are not needed or beneficial in everyday use - the normal functions are.

I beg to disagree. Unless you limit yourself to using only lists, the Foldable/Traversable methods and combinators help me reduce the degree to which I need to sprinkle my code with Map., Vector., Set., Array., HashMap., Seq., DList. etc. prefixes for many of the operations, instead letting me simply focus on what I want to do with the data structure rather than always being reminded which flavor of data structure I'm currently dealing with (which often changes over time).


[–]enigmo81 14 points 1 day ago

Funny, I have the opposite experience with Foldable and Traversable. They work over a much wider range of the types that I use every day, including lists. Functions like foldMap get quite a lot of use. I also like having Functor instances for datatypes that are functors, and Monoid instances for monoids.

This particular flavor of Haskell might not be the one you prefer, which is fine, but does that really mean it's not needed or beneficial for everyday use? It meets my bar.


---

"I think a more promising solution to the problem of generic type complexity is making specialization of type signatures easier in code, documentation, and compiler error messages."

---

" other code that can be very difficult to find. What is the type of that function? Is it a method of a class? If so, where is that class defined, and where is the instance defined? You can't necessarily find that out very easily from just the imports in the current module, or possibly not even with the imports together with the build-depends: field in the cabal file.

Even parametric polymorphism has a cost to semantic clarity. What are the types of the arguments of this function? "

---

what are the reasons Sweeney gives in The Next Mainstream Programming Language for not liking Haskell?

" Why Haskell is Not My Favorite Programming Language

f(x,y) = x+y
a = f(3,"4")

ERROR - Cannot infer instance
  * Instance : Num [Char]
  * Expression : f (3,"4")

versus a hypothetical language with explicit parameter types:

f(int x, int y) = x+y
a = f(3,"4")

Parameter mismatch in parameter 2 of call to f:
  Expected: int
  Got: "4"
"


toread

	Why Not Haskell? (neugierig.org) 

https://news.ycombinator.com/item?id=3122725

---

teaspoon 1086 days ago

Special operators out the wazzoo...Google doesn't help here - I don't even know the verbal names for some of them.

Operators are just functions, so try Hoogle:

http://www.haskell.org/hoogle/?hoogle=%3E%3E%3D


pnathan 1085 days ago

FYI: Hoogle dies on :: and `, and gives a wrong result for =>.

Please note that this is just a simple problem, with a solution of printing out the right reference cheatsheet.

The real difficulty comes (IMO) when looking at piles of symbols in code and trying to determine what kind of meaning is coming from the symbol soup (C++ and Perl are notorious for this too).

Quite often (usually?), of course, public Haskell is written in a very clear and readable style. That's a major reason to use Haskell - to write in a readable language.


teaspoon 1085 days ago

You won't find :: and => in Hoogle because they're keywords, not functions. For keywords, see the wiki page that gtani posted:

http://haskell.org/haskellwiki/Keywords

I recommend reading Learn You a Haskell instead; the keywords were second nature to me by the time I finished.

I agree that there's too much "symbol soup" Haskell out there that uses infix functions excessively. Even if you recognize all of the functions, you still need to have their precedences memorized to decode the soup.

---

SkyMarshal 1085 days ago

>because I still have problems making really simple Haskell functions that don't crash.

Do those functions compile, and then crash anyway? I'd be interested to see examples. In my limited experience, if you can get your code to compile, it's pretty stable. Would be interesting to see counter examples.


teaspoon 1085 days ago

Non-exhaustive patterns are one thing the compiler can't catch:

  fn 0 = return ()
  main = fn 1

Giving an empty list to head/tail is another:

  main = head []
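
(Both crashes can be pushed into the type instead; a minimal sketch of the usual workaround, which the safe package on Hackage bundles in many variants:)

  safeHead :: [a] -> Maybe a
  safeHead []      = Nothing
  safeHead (x : _) = Just x

  main :: IO ()
  main = print (safeHead ([] :: [Int]))   -- prints Nothing instead of crashing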

jstepien 1085 days ago

Speaking of the first example, compiling it with -Wall provides some hints:

  $ ghc --make test.hs -Wall
  test.hs:1:1:
      Warning: Pattern match(es) are non-exhaustive
               In an equation for `fn':
                   Patterns not matched: #x with #x `notElem` [0#]

teaspoon 1085 days ago

Wow, that's cool. It even works for non-Bounded argument types, as in fn [0] = 0:

  Warning: Pattern match(es) are non-exhaustive
        In an equation for `fn':
            Patterns not matched:
                []
                #x : _ with #x `notElem` [0#]
                0# : (_ : _)

gnuvince 1085 days ago

OK, so we were learning about greedy algorithms at school and I wrote a very naive implementation of a change making algorithm for Canadian coins. Here's the Haskell code:

    makeChange :: Int -> [Int]
    makeChange amount = loop 0 [200, 100, 25, 10, 5, 1] []
        where loop total coins@(c:cs) solution
                  | total == amount = solution
                  | null coins = error "no solution"
                  | otherwise = if total + c > amount then
                                    loop total cs solution
                                else
                                    loop (total + c) coins (c : solution)

(I could make this a lot better by returning an [(Int, Int)] and by using integer division, but I wanted to just follow the algorithm described in the textbook.)

To make sure that my code was correct, I wrote a QuickCheck property:

    quickCheck (\(Positive n) -> sum (makeChange n) == n)

However, running this after compiling my file with GHC causes a stack overflow and I need to Ctrl+C out of the process.

On the other hand, the exact same algorithm in OCaml runs extremely quickly and without a hiccup.


joeyh 1084 days ago

quickcheck is running makeChange with an arbitrary Int. maxBound :: Int here is 2147483647. When given a number that large, makeChange recurses a lot, subtracting one two-dollar coin at a time, so you blow the stack. This is where you need to consult a haskell guru to find a way to make your code tail-recursive -- or find a smarter algorithm (using mod c for example so it only needs to recurse 6 times total).
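
(A sketch of the div/mod variant joeyh suggests: the recursion depth becomes the number of denominations, six, independent of the amount:)

    makeChange :: Int -> [Int]
    makeChange amount = go amount [200, 100, 25, 10, 5, 1]
      where
        go 0 _      = []
        go _ []     = error "no solution"
        go n (c:cs) = replicate (n `div` c) c ++ go (n `mod` c) cs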

Amusingly, if you simply change the type to Integer -> [Integer], it all works ok. I suspect that since Integers have unbounded size, quickcheck only tests with reasonably small ones.


pja 1083 days ago

It's even worse than that if he's on a 64-bit machine!

It runs just fine on my box with i = 2^32: 10s to completion or thereabouts.

However, the way this is written, the code has to construct the entire list in memory before it can print any of it out so for larger lists it is pretty much guaranteed to blow the stack and / or memory depending on the computational representation.

If it was using a snoclist or something then it could stream the output and perform the calculation in constant space; as it stands it has to hold on to the whole list of integers before outputting any of them.

I'm surprised that the OCaml version 'just worked', frankly: either a) the OP didn't use QuickCheck with their OCaml code or b) the OCaml QuickCheck doesn't bother testing across the whole Int space.


pja 1083 days ago

Also, a quick test reveals that quickCheck on an Int will by default test 100 Ints across the entire range up to maxBound :: Int. If your Ints are 64 bit this really isn't going to work very well on this code, regardless of what language you write it in, unless you can stream the output. Any code that holds on to the list is going to fall over, since the size of the list is going to exceed physical memory for larger test values.


gnuvince 1084 days ago

The code is already tail recursive, which is why it's doubly puzzling. Also, like you said, using an Integer instead fixes the problem. But I find that fixing these issues distracts me away from the main problem and that doesn't happen in OCaml.

---

http://stackoverflow.com/questions/211216/hidden-features-of-haskell

---

http://www.haskell.org/haskellwiki/Tutorials#Comparisons_to_other_languages

---

catnaroek 13 hours ago [-]

Haskell's actual problem isn't the lack of a comprehensive standard library, but rather the presence of core language features that actively hinder large-scale modular programming. Type classes, type families, orphan instances and flexible instances all conspire to make it as difficult as possible to determine whether two modules can be safely linked. Making things worse, whenever two alternatives are available for achieving roughly the same thing (say, type families and functional dependencies), the Haskell community consistently picks the worse one (in this case, type families, because, you know, why not punch a big hole in parametricity and free theorems?).

Thanks to GHC's extensions, Haskell has become a ridiculously powerful language in exactly the same way C++ has: by sacrificing elegance. The principled approach would've been to admit that, while type classes are good for a few use cases, (say, overloading numeric literals, string literals and sequences), they have unacceptable limitations as a large-scale program structuring construct. And instead use an ML-style module system for that purpose. But it's already too late to do that.

reply

---

"Because I was just a beginner in Haskell when I began the project, and key libraries like text didn't yet exist, there are a number of things about the project's design that are less than ideal. If I were starting over, I'd use Text everywhere instead of String. I'd also use a lot more newtypes, and I'd use free monads or type classes so that all of the readers and writers could be used outside of IO. I'd use a data structure that allowed attributes to be attached uniformly to all elements. "

https://groups.google.com/forum/#!topic/pandoc-discuss/0rutNJAVKoc

---

