Haskell is great, and a lot has been said about its advantages, so here I will focus on its flaws. To be clear, I still think most programmers should learn Haskell.
note: i don't know haskell very well and most of this is probably FLAT OUT WRONG. also, it's years out of date, and much of this may be fixed by now.
See Ralf Lammel and Klaus Ostermann, "Software Extension and Integration with Type Classes". Contrast Fig. 6 with Fig. 7.
If you write your code without typeclasses, like in Figure 6, you can't later use "Eval" and "Print" with anything other than the data type Exp that you defined. This is in contrast to dynamic OOP languages like Python, where you can replace a class with another class, and, presuming it provides the same interface that the old one did, it works fine.
If you write your code like in Figure 7, it's hard to read and hard to understand.
There are a variety of add-on frameworks to solve this problem floating around in the Haskell community. Some of them are preprocessors or language extensions. Here's a list: http://www.haskell.org/haskellwiki/Applications_and_libraries/Generic_programming . <sarcasm>If you like trying to decide which Web framework to use in Python, you'll love trying to decide which Haskell generic programming framework to use</sarcasm>. If you want to actually compare them before choosing, you must be able to read programming language research papers. Here is an example of text from the Haskell wiki describing one of these ("Scrap Your Boilerplate 3", not to be confused with "Scrap Your Boilerplate" 1 or 2, or "Scrap Your Boilerplate Reloaded" or "Scrap Your Boilerplate Revolutions" or "Smash Your Boilerplate"): "SyB?3 also requires the restriction on instance declarations to be relaxed in two ways: undecidable instances allow type classes constraints to be satisfied coinductively (the translation generates a recursive dictionary.) Secondly, SyB?3 relies on overlapping instances to override generic definitions of type-indexed functions for specific types. Overlapping instances are not an essential part of SyB?, but they do simplify the use of type-indexed operations.". Of course, the actual research papers are even harder to understand.
Maybe someday Haskell itself will officially settle on one of these solutions.
Because not everything is a typeclass, you end up using qualified prefixes for operators, for example Map.!, so that you can import several such modules without their names conflicting.
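For example, Data.Map is conventionally imported qualified, which turns even its operators into prefixed names:

```haskell
import qualified Data.Map as Map

-- a qualified import means every name from the module, including
-- operators like !, needs the Map. prefix
squares :: Map.Map Int Int
squares = Map.fromList [(1, 1), (2, 4), (3, 9)]

-- qualified operator syntax: "squares Map.! k" rather than "squares ! k"
lookupSquare :: Int -> Int
lookupSquare k = squares Map.! k
```

(If lists, maps, and arrays all shared a lookup typeclass, you wouldn't need the prefixes at all.)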
If an expression's type is constrained only by typeclasses, with no concrete datatype pinned down, you get a compile error that basically means the compiler didn't know which actual datatype to instantiate. (One or two built-in typeclasses, I think the ones having to do with numbers, have a default type; but I don't think you can define custom defaults like that, and I don't think most of the typeclasses in the standard library have them.)
Sorry, I forgot what the error message was. Maybe it is this? (todo):
for instance, if i say "array (0,0) [(0,0)]", I get "Ambiguous type variables `t', `a' in the constraint: `IArray a t' arising from a use of `array' at <interactive>:1:0-18 Probable fix: add a type signature that fixes these type variable(s)". Fix (?): default data type instances associated with typeclass?
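The fix the message suggests does work; a minimal sketch, using the overloaded array function from Data.Array.IArray (which is where this ambiguity arises):

```haskell
import Data.Array.IArray (Array, array, (!))

-- without this signature, GHC cannot pick a concrete IArray instance;
-- annotating the result type resolves the "Ambiguous type variables" error
a :: Array Int Int
a = array (0, 0) [(0, 0)]
```

In GHCi you can do the same inline: `array (0,0) [(0,0)] :: Array Int Int`.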
Many libraries are written against concrete datatypes rather than typeclasses, which leads to trouble when you want to use those libraries on custom data structures.
For example, Parsec, a popular parser library, used to be unable to parse ByteStrings (in Haskell, the normal kind of String is a linked list of Chars; ByteString is a packed alternative). Now it can, but you still have to call it a different way: http://stackoverflow.com/questions/2090399/using-haskells-parsec-to-parse-a-bytestring . This is because the Parsec implementation referred to String, which is defined as a datatype, not a typeclass.
In general, lists in Haskell standard libraries are often defined as datatypes, which forces them to be linked lists. Contrast with Python sequences or Python iterators, which are just defined by "protocols", that is, method interfaces that objects must implement to fill those roles (similar to typeclasses).
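To illustrate the difference, here is a sketch with a made-up typeclass (Sequence and firstOf are invented names, not standard library ones): a function written against the class works for any instance, the way a Python function written against a protocol does, while one written against the concrete list type works only for linked lists.

```haskell
-- a hypothetical typeclass playing the role of a Python "protocol"
class Sequence s where
  toList :: s a -> [a]

-- ordinary lists satisfy the protocol trivially
instance Sequence [] where
  toList = id

-- written against the class, so it works for any Sequence instance,
-- not just the built-in linked list
firstOf :: Sequence s => s a -> Maybe a
firstOf s = case toList s of
  []      -> Nothing
  (x : _) -> Just x
```

A library that instead wrote `firstOf :: [a] -> Maybe a` would lock its callers into linked lists, which is the complaint above.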
"Old programs read like quiet conversations between a well-spoken research worker and a well-studied mechanical colleague, not as a debate with a compiler. Who'd have guessed sophistication bought such noise?" -- Dick Gabriel
"Haskell programmers are in an eternal dialog with the (very intelligent) compiler, but when such intelligent being talks, sometimes the messages are obscure in his metalanguage of meta-types and meta-abstractions." -- Alberto at http://unenterprise.blogspot.com/2008/02/tell-us-why-your-language-sucks.html?showComment=1203431940000#c2736928901933687271
Somehow, every time i use Haskell i find myself spending a long time having to learn about some obscure corner of the type system. Just one person's experience, maybe it won't happen to you.
Haskell only supports positional, not keyword, argument passing.
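A common workaround is to pass a record of options and override fields by name at the call site; a sketch (Opts, defaultOpts, and render are made-up names):

```haskell
-- a made-up options record standing in for keyword arguments
data Opts = Opts { width :: Int, height :: Int }

defaultOpts :: Opts
defaultOpts = Opts { width = 80, height = 24 }

render :: Opts -> String
render o = show (width o) ++ "x" ++ show (height o)

-- the call site names the one "keyword" it overrides,
-- leaving the rest at their defaults
wide :: String
wide = render defaultOpts { width = 120 }
```

It works, but it's boilerplate you have to set up yourself, per function.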
It's hard to reason about memory leaks (space complexity; "space leaks") in Haskell. There seems to be no way to just turn on "strictness by default" in some large chunk of your program (you can turn on strictness manually by making a small change to the code of each operation, but that's a lot of typing).
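The classic single-call-site example of such a manual change is swapping a lazy fold for a strict one:

```haskell
import Data.List (foldl')

-- the lazy foldl builds a chain of unevaluated (+) thunks and only
-- forces them at the end, so space grows with the list length;
-- foldl' forces the accumulator at each step, keeping space constant
sumLazy, sumStrict :: [Int] -> Int
sumLazy   = foldl  (+) 0
sumStrict = foldl' (+) 0
```

Both return the same answer; only the space behavior differs. The complaint stands: you have to find and fix each such spot by hand rather than flipping one switch for a whole module.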
Note: remember these complaints are a few years out of date, maybe they've been fixed by now.
Time and space profiling in GHC may be better now than it was back when i learned Haskell, but it still sucks. What is needed is for the profiler to quickly tell you:
In a language with mutable variables, you'd often say:
x = (expression)
x = (expression involving x)
x = (...)
In Haskell, it has to be:
x = (expression)
x' = (expression involving x)
x'' = (...)
that's annoying, especially if you later need to insert a step in the middle (you have to rename all the later variables).
When you are rapid prototyping, you constantly want to change the type signatures of your objects and functions. This takes a long time in Haskell (but wait, you say, doesn't Haskell have type inference, meaning you rarely have to write the type signatures explicitly? Maybe in theory, but since i am imperfect, as i code i keep making mistakes which cause compiler errors, and i find the compiler errors impossible to debug without adding a bunch of type signatures to things).
"Defining data as dynamic higher-order functions worked badly when most computing was standalone. And it works even worse now that we have these things called networks. The problem is that a dynamic higher-order function is a sort of ethereal construct, a speck of four-dimensional pixie dust, and the idea of saving it to disk or sending it over the network does not make any particular sense." -- http://www.cppblog.com/cfmonkey/archive/2008/07/31/57671.html
This is more of a theoretical problem than one that I have personally faced, and I suspect that people have developed ways around it.
Since the syntax allows custom operator precedence, you can't tell just from looking at an expression how it parses; you have to know (or go look up) the fixity declaration of every operator in it.
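A small sketch with made-up operators (.+. and .*. are invented here): the parse of the final expression is determined entirely by the fixity declarations, which live with the definitions, possibly in another module.

```haskell
-- invented operators; without reading these fixity declarations you
-- cannot tell how "1 .+. 2 .*. 3" groups
infixl 6 .+.
infixr 7 .*.

(.+.) :: Int -> Int -> Int
(.+.) = (+)

(.*.) :: Int -> Int -> Int
(.*.) = (*)

-- parses as 1 .+. (2 .*. 3), because .*. was declared to bind tighter
example :: Int
example = 1 .+. 2 .*. 3
```

Had the fixities been swapped, the same expression would group as (1 .+. 2) .*. 3 and mean something else.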
I'm not sure about this part, but i think that if any module you transitively import defines or imports a typeclass instance, you're stuck with it; instances can't be hidden or excluded when importing.
That stinks, and it also makes it hard, when reading code, to find the code that actually performs a given operation.
A lot of behaviors which seem like reasonable defaults are only present as language extensions, so as a noob you find yourself trying to understand confusing shortcomings of the compiler that would go away if only you had known to enable a bunch of weird-sounding extensions. When you search for information on these situations, you find old-timers saying something like, "oh, just enable extension X, I always use that one". Well, if everyone's first response to a common problem is to tell you to use extension X, extension X should be part of the core language.
The community likes to define functions (or "combinators", as they like to call them here) with names full of punctuation rather than letters. That's valid, but I don't like it, because it seems to subconsciously make everything seem more difficult to understand.
You have to use monad transformers when you want to combine monads, but they are hard.
E.g. at http://kawagner.blogspot.com/2006/12/my-haskell-experience.html , a blogger demonstrates how adding the IO monad to some code made it harder to read. At http://kawagner.blogspot.com/2006/12/my-haskell-experience.html?showComment=1167306780000#c1839087203684138705 , a commenter suggested:
" 1) The idiom do x <- foo return (f x)
is equivalent to fmap f foo
which would make your code a lot more readable.
2) Most probably, your any-inside-of-a-Monad would have been neatly accomplished by liftM any
There are a lot of these tricks and neat little helper function to help you handle the monad enclosures. ... "
So in order to learn to read idiomatic Haskell written by others, in addition to learning how to do all the normal stuff that you'd have to learn for any language, you'll have to learn "a lot of these tricks and neat little helper function to help you handle the monad enclosures".
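As a concrete sketch of the first trick (using Maybe as an arbitrary example monad):

```haskell
-- the spelled-out do-notation version...
plusOneDo :: Maybe Int -> Maybe Int
plusOneDo m = do
  x <- m
  return (x + 1)

-- ...and the equivalent idiomatic one-liner a Haskeller would write
plusOneFmap :: Maybe Int -> Maybe Int
plusOneFmap = fmap (+ 1)
```

Both mean the same thing; the point above is that reading other people's code requires already knowing these equivalences.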
I never got far enough to need this, but in a blog post http://kawagner.blogspot.com/2006/12/my-haskell-experience.html (note: from 2006, may be way out of date) the author asserts, "Even Java code is more concise than Haskell code which has to change things," and gives an example:
" For example I have to add elements to a map. In Java I can write
map.add(key, value)
In Haskell the code is something like
modify $ \st -> st { st_map = Map.insert key value (st_map st) }
('modify' is from a state monad).
If I want to access those data:
value = map.get(key)
in Haskell:
st <- get
let value = Map.lookup name $ st_map st "
a commenter says,
"
Paul Johnson said...
The problem with updating state comes from Haskell's clumsy way of modifying a record. Say I define a record containing fields "foo" and "bar". In Haskell foo and bar are automatically defined as getter functions. But there is no equivalent setter function. Instead you have to use a clumsy bit of special case syntax. To increment the foo field in a record "state" you would have to write
state1 = state { foo = foo state + 1 }
The workaround is to write your own setter functions using this syntax."
but the original author points out that it's hard to write pretty reusable setter functions too:
"
Sure, you can write something like
addValue key value = modify $ \st -> st { st_map = Map.insert key value (st_map st) }
But chances are that you only need it once in exactly this form. Problem here is that you can't simply write a generic version of 'addValue' which works not only with 'st_map' but also with some other value in a state, for example 'st_props'. If you only need a getter, it's simple:
getValue prop key = do
st <- get
return $ Map.lookup key $ prop st
now you can simply write "value <- getValue st_props key" (but you still need the '<-' syntax of course and can't simply insert (getValue ...) into some expression).
But this solution isn't possible for setters because "st { st_map = ... }" is a special syntax and not a function. So you first have to create a 'setter-function' (like \st v -> st { st_map = v }) from it and then use this in a generic setter function like the one above:
setValue setter getter key value = do
  modify $ \st -> setter st (Map.insert key value (getter st))
This can now be used this way:
setValue (\st v -> st { st_tables = v }) st_tables key value
But it isn't much shorter or more readable than simply using the full code at the top, so I haven't used this."
another commenter suggests that a specific form of helper may help:
" When using Data.Map with MonadState?