notes-computer-programming-programmingLanguageDesign-prosAndCons-golang2

continuation of notes-computer-programming-programmingLanguageDesign-prosAndCons-golang


http://monoc.mo.funpic.de/go-rant/

http://go-lang.cat-v.org/quotes

---

http://www.stanford.edu/class/ee380/Abstracts/100428-pike-stanford.pdf

--

http://www.informit.com/articles/article.aspx?p=1623555

---

craigyk 4 days ago

link

I went and looked up some of my notes from back when I was trying out Go. I hope to learn from others whether these are real issues or simply my misunderstandings.

1. So we don't get generics, but the language built-ins seem to get to be type-parameterized (channels, slices, maps). Unfortunately the syntax for doing so for each of these is inconsistent (probably as a result of being special-cased rather than dog-fooded using language-level generics): []float, map[float]float, chan float.

2. Built-in types seem to receive other special treatment as well, including special initialization keywords (make vs. new) and built-in functions (len, cap, etc.), but I don't see why this needed to be the case, even for performance reasons. There's no reason why these built-in types couldn't pretend to implement built-in interfaces, making them more transparent alongside user types, while the compiler optimizes them with special-case functions for efficiency.

3. Unused variables are a hard error, which is a completely understandable stance. Unfortunately, I think people may use workarounds to get around this. Also, I can't believe unused variables are a hard error but uninitialized variables are not! Instead we are supposed to trust that everything is OK because they get initialized to some kind of "zero" value that isn't even under the developer's control. [points 1-3 are sketched in code after this list]

4. Other small quibbles: I think pattern matching on function arguments could have been implemented as sugar that uses interfaces and method calls under the covers. Also named return values are ugly, and the function declaration syntax could have been made more concise.
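
a minimal sketch of points 1-3 above (uses float64 in place of the old float type, which was later removed; the unused-variable error is only mentioned in a comment so the sketch still compiles):

    package main

    import "fmt"

    func main() {
        // (1) The built-in containers are each type-parameterized, each with
        // its own syntax:
        s := []float64{1, 2, 3}
        m := map[float64]float64{1: 2}
        c := make(chan float64, 1)

        // (2) make initializes slices, maps and channels; new merely returns
        // zeroed memory; len and cap are compiler built-ins, not methods:
        p := new(int)
        fmt.Println(len(s), cap(s), len(m), *p) // 3 3 1 0

        // (3) Uninitialized variables silently get their type's zero value,
        // while an unused variable would be a hard compile error
        // ("declared and not used"):
        var name string           // ""
        var lookup map[string]int // nil: reads are fine, writes panic
        fmt.Println(name == "", lookup == nil) // true true

        c <- s[0] + m[1]
        fmt.Println(<-c) // 3
    }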

reply

---

" ... A good collections library should have the choices required to get asymptotically optimal algorithms built with almost no fuss, with the choices between ordered/unordered and set/list/map and queue/stack/heap being easy to find. Java undeniably does this really well; even if Go still does this ok.

Sometimes when using Go channels, I think I might be better off just going back to Erlang due to its pattern matching. The most troublesome part of verifying the correctness of Erlang code is following the implicit protocol in the weakly typed structs being passed around. There is stronger typing in Go, but without normal subtype declarations, it becomes convoluted to declare a whole bunch of structs with some referential integrity (ie: a FooReply struct that contains a ref to a FooRequest, let alone an IPPacket struct that contains a TCPPacket | UDPPacket, ... either a safe union or just normal subtyped references, etc.)

Srinivas JONNALAGADDA 3 hours ago

Thanks, Rob, for your comment. Some of the subtype requirements can be handled well in Go using interfaces. In my program too, I had some useful interfaces. "

---

" 2014-01-08 Another go at Go ... failed!

After a considerable gap, I gave Go another go!

The Problem

As part of a consulting engagement, I accepted a project to develop some statistical inference models in the area of drug (medicine) repositioning. Input data comprises three sets of associations: (i) between drugs and adverse effects, (ii) between drugs and diseases, and (iii) between drugs and targets (proteins). Using drugs as hub elements, associations are inferred between the other three kinds of elements, pair-wise.

The actual statistics computed vary from simple measures such as sensitivity (e.g. how sensitive is a given drug to a set of query targets?) and Matthews Correlation Coefficient, to construction of rather complex confusion matrices and generalised profile vectors for drugs, diseases, etc. Accordingly, the computational intensity varies considerably across parts of the models.

For the size of the test subset of input data, the in-memory graph of direct and transitive associations currently has about 15,000 vertices and over 1,4000,000 edges. This is expected to grow by two orders of magnitude (or more) when the full data set is used for input.

Programming Language

I had some temptation initially to prototype the first model (or two) in a language like Ruby. Giving the volume of data its due weight though, I decided to use Ruby for ad hoc validation of parts of the computations, with coding proper happening in a faster, compiled language. I have been using Java for most of my work (both open source as well as for clients). However, considering the fact that statistics instances are write-only, I hoped that Go could help me make the computations parallel easily[1].

My choice of Go caused some discomfort on the part of my client's programmers, since they have to maintain the code down the road. Nevertheless, no serious objections were raised. So, I went ahead and developed the first three models in Go.

Practical Issues With Go

The Internet is abuzz with success stories involving Go; there isn't an additional perspective that I can add! The following are factors, in no particular order, that inhibited my productivity as I worked on the project.

No Set in the Language

Through (almost) every hour of this project, I found myself needing an efficient implementation of a set data structure. Go does not have a built-in set; it has arrays, slices and maps (hash tables). And Go lacks generics. Consequently, whichever generic data structure is not provided by the compiler cannot be implemented in a library. I ended up using maps as sets. Everyone who does that realises the pain involved, sooner rather than later. Maps provide uniqueness of keys, but I needed sets for their set-like properties: being able to do minus, union, intersection, etc. I had to code those in-line every time. I have seen several people argue vehemently (even arrogantly) in golang-nuts that it costs just a few lines each time, and that it makes the code clearer. Nothing could be further from the truth. In-lining those operations has only reduced readability and obscured my intent. I had to consciously train my eyes to recognise those blocks as meaning union, intersection, etc. They were also very inconvenient when trying different sequences of computations for better efficiency, since a quick glance never sufficed!
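
(a minimal sketch of the map-as-set pattern and the kind of in-line union/intersection being complained about; the keys are just made-up identifiers:)

    package main

    import "fmt"

    func main() {
        a := map[string]bool{"p53": true, "BRCA1": true}
        b := map[string]bool{"BRCA1": true, "EGFR": true}

        // Intersection, written in-line every time it is needed.
        inter := map[string]bool{}
        for k := range a {
            if b[k] {
                inter[k] = true
            }
        }

        // Union, likewise in-line.
        union := map[string]bool{}
        for k := range a {
            union[k] = true
        }
        for k := range b {
            union[k] = true
        }

        fmt.Println(len(inter), len(union)) // 1 3
    }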

Also, I found the performance of Go maps wanting. Profiling showed that get operations were consuming a good percentage of the total running time. Of course, several of those get operations are actually to check for the presence of a key.

No BitSet in the Standard Library

Since the performance of maps was dragging the computations back, I investigated the possibility of changing the algorithms to work with bit sets. However, there is no BitSet or BitArray in Go's standard library. I found two packages in the community: one on code.google.com and the other on github.com. I selected the former because it both performed better and provided a convenient iteration through only the bits set to true. Mind you, the data is mostly sparse, and hence both these were desirable characteristics.
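
(for flavour, a minimal hand-rolled bit set over []uint64; this is just an illustrative sketch, not either of the community packages the author tried:)

    package main

    import "fmt"

    // BitSet is a minimal bit set backed by a []uint64.
    type BitSet struct{ words []uint64 }

    // Set turns bit i on, growing the backing slice as needed.
    func (b *BitSet) Set(i uint) {
        w := i / 64
        for uint(len(b.words)) <= w {
            b.words = append(b.words, 0)
        }
        b.words[w] |= 1 << (i % 64)
    }

    // Test reports whether bit i is on.
    func (b *BitSet) Test(i uint) bool {
        w := i / 64
        return w < uint(len(b.words)) && b.words[w]&(1<<(i%64)) != 0
    }

    func main() {
        var b BitSet
        b.Set(3)
        b.Set(700)
        fmt.Println(b.Test(3), b.Test(4), b.Test(700)) // true false true
    }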

Incidentally, both the bit set packages have varying performance. I could not determine the sources of those variations, since I could not easily construct test data to reproduce them on a small scale. A well-tested, high performance bit set in the standard library would have helped greatly.

Generics, or Their Absence

The general attitude in Go community towards generics seems to have degenerated into one consisting of a mix of disgust and condescension, unfortunately. Well-made cases that illustrate problems best served by generics, are being dismissed with such impudence and temerity as to cause repulsion. That Russ Cox' original formulation of the now-famous tri-lemma is incomplete at best has not sunk in despite four years of discussions. Enough said!

In my particular case, I have six sets of computations that differ in:

    types of input data elements held in the containers, and upon which the computations are performed (a unique combination of three types for each pair, to be precise),
    user-specified values for various algorithmic parameters for a given combination of element types,
    minor computational steps and
    types (and instances) of containers into which the results aggregate.

These differences meant that I could not write common template code that could be used to generate six versions using extra-language tools (as inconvenient as that already is). The amount of boiler-plate needed externally to handle the differences very quickly became both too much and too confusing. Eventually, I resorted to six fully-specialised versions each of data holders, algorithms and results containers, just for manageability of the code.

This had an undesirable side effect, though: now, each change to any of the core containers or computations had to be manually propagated to all the corresponding remaining versions. It soon led to a disinclination on my part to quickly iterate through alternative model formulations, since the overhead of trying new formulations was non-trivial.

Poor Performance

This was simply unexpected! With fully-specialised versions of graph nodes, edges, computations and results containers, I was expecting very good performance. Initially, it was not very good. In single-threaded mode, a complete run of three models on the test set of data took about 9 minutes 25 seconds. I re-examined various computations. I eliminated redundant checks in some paths, combined two passes into one at the expense of more memory, pre-identified query sets so that the full sets need not be iterated over, etc. At the end of all that, in single-threaded mode, a complete run of three models on the test set of data took about 2 minutes 40 seconds. For a while, I thought that I had squeezed it to the maximum extent. And so thought my client, too! More on that later.

Enhancement Requests

At that point, my client requested three enhancements, two of which affected all the six + six versions of the models. I ploughed through the first change and propagated it through the other eleven specialised versions. I had a full taste of what was to come, though, when I was hit with the realisation that I was still working on Phase 1 of the project, which had seven proposed phases in all!

Back to Java!

I took a break of one full day, and did a hard review of the code (and my situation, of course). I quickly identified three major areas where generics and (inheritance-based) polymorphism would have presented a much more pleasant solution. I had already spent 11 weeks on the project, the bulk of that going into developing and evaluating the statistical models. With the models now ready, I estimated that a re-write in Java would cost me about 10 working days. I decided to take the plunge.

The full re-write in Java took 8 working days. The ease with which I could model the generic data containers and results containers was quite expected. Java's BitSet class was of tremendous help. I had some trepidation about the algorithmic parts. However, they turned out to be easier than I anticipated! I made the computations themselves parts of formally-typed abstract classes, with the concrete parts such as substitution of actual types, the user-specified parameters and minor variations implemented by the subclasses. Conceptually, it was clear and clean: the base computations were easy to follow in the abstract classes. The overrides were clearly marked so, and were quite pointed.

Naturally, I expected a reduction in the size of the code base; I was not sure by how much, though. The actual reduction was by about 40%. This was nice, since it came with the benefit of more manageable code.

The most unexpected outcome concerned performance: a complete run of the three models on the test set of data now took about 30 seconds! My first suspicion was that something went so wrong as to cause a premature (but legal) exit somewhere. However, the output matched what was produced by the Go version (thanks Ruby), so that could not have been true. I re-ran the program several times, since it sounded too good to be true. Each time, the run completed in about 30 seconds.

I was left scratching my head. My puzzlement continued for a while, before I noticed something: the CPU utilisation reported by /usr/bin/time was around 370-380%! I was now totally stumped. conky showed that all processor cores were indeed being used. How could that be? The program was very much single-threaded.

After some thought and Googling, I saw a few factors that potentially enabled a utilisation of multiple cores.

    All the input data classes were final.
    All the results classes were final, with all of their members being final too.
    All algorithm subclasses were final.
    All data containers (masters), the multi-mode graph itself, and all results containers had only insert and look-up operations performed on them. None had a delete operation.

Effectively, almost all of the code involved only final classes. And, all operations were append-only. The compiler may have noticed those; the run-time must have noticed those. I still do not know what is going on inside the JRE as the program runs, but I am truly amazed by its capabilities! Needless to say, I am quite happy with the outcome, too!

Conclusions

    If your problem domain involves patterns that benefit from type parameterisation or[2] polymorphism that is easily achievable through inheritance, Go is a poor choice.
    If you find your Go code evolving into having few interfaces but many higher-order functions (or methods) that resort to frequent type assertions, Go is a poor choice.
    Go runtime can learn a trick or two from JRE 7 as regards performance.

These may seem obvious to more informed people; but to me, it was some enlightenment!

[1] I tried Haskell and Elixir as candidates, but nested data holders with multiply circular references appear to be problematic to deal with in functional languages. Immutable data presents interesting challenges when it comes to cyclic graphs! The solutions suggested by the respective communities involved considerable boiler-plate. More importantly, the resulting code lost direct correspondence with the problem's structural elements. Eventually, I abandoned that approach.↩ "

--- http://oneofmanyworlds.blogspot.com/2014/01/another-go-at-go-failed.html

--

"

http://golang.org/pkg/container/

http://docs.oracle.com/javase/tutorial/collections/interfaces/index.html

http://docs.oracle.com/javase/tutorial/collections/implementations/index.html "

"

pron 6 hours ago

link

... and those don't even include Java's long list of concurrent collections.

reply

jwn 6 hours ago

link

Or the other Collection implementations offered by https://code.google.com/p/guava-libraries/.

reply

jbooth 6 hours ago

link

Or the ability to write collections in the first place, as enabled by generics. You can't even write a collection in Go unless you're dealing with interface{} or unsafe.Pointer and casting a lot.

I really like the language, enough to use it and work around the lack of generics, but that's a glaring weakness. They need some way to enable container classes.

reply "

--

" pcwalton 2 hours ago

link

Generics, inheritance, and the factory pattern are completely orthogonal features. Adding generics would not entail adding inheritance, mandating the factory pattern, or any other slippery slope feature. ... Generics have nothing to do with inheritance and the factory pattern. There are lots of languages that have generics but neither inheritance neither factories: SML, OCaml, Haskell, etc.

Inheritance makes generics harder, in fact, because type inference becomes undecidable in the general case.

Factories are basically a workaround for functions not being able to be freestanding (unattached to a class) in Java (the "kingdom of nouns"), a problem that Go doesn't have.

reply"

--

"

pjmlp 5 hours ago

link

... and the concurrent libraries of futures, fork-join, tasks

reply

sixthloginorso 5 hours ago

link

Java's concurrency primitives and libraries are really overlooked in these discussions. Is the syntax too off-putting, or is it merely that Java is unfashionable?

reply

RyanZAG 4 hours ago

link

Very much unfashionable for the HN crowd. Most of the new research in concurrency is happening in Java and funded by the high performance trading industry - an industry which is very far from the HN crowd. The new Java8 stampedlock is a good example. It's possible to implement it in C++ as well, but because of the guarantees required by the lock it is a very difficult lock to integrate into C++ code. On the other hand, the JRE guarantees the correct constraints for Java code making a stampedlock very easy to use [1]. The performance of a stampedlock also seems to be the best case for any multi-reader environment. [2]

[1] http://concurrencyfreaks.blogspot.com/2013/11/stampedlocktry...

[2] http://mechanical-sympathy.blogspot.ca/2013/08/lock-based-vs...

reply

scott_s 29 minutes ago

link

I find your claim that most new research in concurrency happens in Java strange. Perhaps you are unfamiliar with academic research in concurrency and parallelism? A way to get a small taste is to look at recent papers from the conference Principles and Practice of Parallel Programming (PPoPP).

reply

jamra 3 hours ago

link

I don't think there is anything wrong with Java as a language, but I have a question for you. Does Java's concurrency engine allow for many threads like Go, or is it similar to C# in that your threads are not lightweight and are therefore limited to i×N, where N is the number of CPU cores and i is a small integer < 10?

reply

pcwalton 2 hours ago

link

You can't spawn 80 1:1 pthreads on an 8-core machine? Huh?

Operating systems have been able to handle an order of magnitude more pthreads than that since the day pthreads were introduced.

reply

pron 3 hours ago

link

Java doesn't have a "concurrency engine", but a very large set of concurrency primitives: schedulers, locking and lock-free data structures, atomics etc.

To answer your question, yes: my own library, Quasar[1], provides lightweight threads for Java.

[1]: https://github.com/puniverse/quasar

reply

pjmlp 2 hours ago

link

The Java language specification does not define how threads are implemented.

The first set of JVMs did implement green threads, which are what goroutines are. Shortly thereafter most of them switched to red threads, aka real threads.

You can still find a few JVMs that use green threads, like Squawk.

https://java.net/projects/squawk/pages/SquawkDesign

Other than that, java.util.concurrent provides mechanisms to map multiple tasks to single OS threads.

reply "

--

"

cosn 51 minutes ago

link

I wrote most of those already, if anyone needs them, help yourself. Haven't had the need for a bit set, but I guess I can add it to the TODO list :)

https://github.com/cosn/collections/

That being said, I completely agree that the author chose the wrong language for the problem at hand.

reply

redbad 5 hours ago

link
    > Go's zoo of builtin data structures is really, really   
    > poor compared to Java.

This is a poor comparison because idiomatic Go typically doesn't use the `container` package.

reply "

---

" Either stick with what you know or use a tool that has very specific optimizations for the problem you're going to tackle (and no, concurrency is not a good option to jump off the JVM as it has incredible concurrency support already: see https://github.com/LMAX-Exchange/disruptor for the fastest concurrent bus I've seen in any language). "

--

"

_pmf_ 4 hours ago

link

> see https://github.com/LMAX-Exchange/disruptor for the fastest concurrent bus I've seen in any language

The genius behind LMAX is the way they bend Java's object layout features; they achieve nice performance in spite of using Java, not because of it. Some decade-old message passing libraries (OpenMP et al.) will probably outperform LMAX without even trying.

reply

twic 1 hour ago

link

Actually, I doubt it. The machine code that ends up running at the heart of a disruptor is pretty minimal, and it executes precisely zero lock or atomic operations when handing an object from one side to the other. I am not aware of any other message-passing system that lightweight. I would be genuinely interested to hear about one.

reply "

--

"

barrkel 7 hours ago

link

It's not very like Java at all. The only OO-like feature it has is interfaces, which are more like Haskell's type classes combined with existential types than anything else. (The resemblance here is actually very close, implementation-wise: roughly, vtables at the per-instance level rather than per-class level).

The other semi-interesting things it has are (a) channels + parallel functions and (b) the return of error codes for error handling.

Java things it doesn't have start with reflection, bytecode (CPU independence), dynamic runtime code generation, inheritance hierarchies, generics, "everything is an object", array covariance, exceptions as an idiomatic error propagation scheme, scoping based on nested namespaces, no code outside of classes, nested types with scoping to match, inner classes with implicit outer class self pointer, only value parameter semantics, and only reference user-defined types.

In fact most of what makes Java distinct from the subset it shares with C is missing, except for a garbage collector.

reply "

--

" NateDad? 7 hours ago

link

Wow, go is totally not Java-lite. Trying to program in it like it is will just cause headaches. The only conceivable way that I can think of that it could be called that is because it is compiled and has a garbage collector. Otherwise... they are very different languages.

Go is what you'd get if python and C had a bastard love child that turned out different from either, but still retained some of the beauty of each of its parents.

reply

paulnechifor 4 hours ago

link

And Nimrod would be their legitimate favorite son.

reply "

--

"

jerf 5 hours ago

link

Probably more accurately Python--; it cuts out a lot of features from Python, a great deal of which you weren't using,

...

(By "not using", I mean for instance that in Python, like Javascript, at any moment, you may set a new method on a particular instance, and the Python runtime must deal with that, such as by spending a lot of time looking up references that never actually change in a real program. This is a really expensive feature that you are paying for all the time, yet rarely using. A great deal of the JIT speedup you can get in such languages is working out how to skip those lookups optimistically until someone actually changes them.)

"

"

fauigerzigerk 4 hours ago

link

I miss Python's named parameters with default values. They are surprisingly hard to replicate in Go. Also list comprehensions.

reply

NateDad 3 hours ago

link

Named parameters can be sort of gotten by passing a struct as the parameter, then you can do

    type fooParams struct {
        Name    string
        Age     int
        Address string
    }

    foo(fooParams{Name: "Bob", Age: 24})

Defaults are a lot harder to do in a way that doesn't just suck. You can make a DefaultFooParam that has all the defaults... but it's not pretty.
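
(a minimal sketch of that defaults idea, with a hypothetical foo and a constructor supplying the default values:)

    package main

    import "fmt"

    type fooParams struct {
        Name    string
        Age     int
        Address string
    }

    // defaultFooParams plays the role of the "DefaultFooParam" mentioned above.
    func defaultFooParams() fooParams {
        return fooParams{Name: "unknown", Age: 0, Address: "n/a"}
    }

    // foo is a hypothetical function taking its "named parameters" as a struct.
    func foo(p fooParams) {
        fmt.Println(p.Name, p.Age, p.Address)
    }

    func main() {
        p := defaultFooParams()
        p.Name = "Bob" // override only the fields you care about
        p.Age = 24
        foo(p)
    }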

List comprehensions never seemed like a big deal to me. It's 3-4 lines for a loop, which is probably easier to read than the list comprehension anyway.

reply

baq 1 hour ago

link

> List comprehensions never seemed like a big deal to me. It's 3-4 lines for a loop, which is probably easier to read than the list comprehension anyway.

that's totally a nope.

    def qsort(L):
        if len(L) <= 1: return L
        return qsort([lt for lt in L[1:] if lt < L[0]]) + [L[0]] + qsort([ge for ge in L[1:] if ge >= L[0]])

"

--

http://oneofmanyworlds.blogspot.com/2011/11/draft-2-of-proposal-for-generics-in-go.html

--

"

tldr: I will be using a lot of generics and doing tons of set operations. I will choose Go, a language that doesn't have generics and sets.

(I won't post the Go code for criticism, and I couldn't stand golang-nuts)

reply

shadowmint 7 hours ago

link

and: Go isn't the best language to write everything in.

I think that's a fair conclusion for him to have drawn. He certainly wasn't hating on go.

(golang-nuts is openly hostile to any kind of criticism, so much so that there's been talk of a community code of conduct; which the community didn't want. I don't blame him.)

reply

"

--

"

maxk42 1 hour ago

link

I thought it was a pain in the ass until I realized golang really does have a sort of dynamic parameter:

    func test(myVar interface{})

will accept any type at all. You can then use Go's type switch or a call to reflect.TypeOf() to invoke different behavior based on the parameter's type. To my way of thinking, it's pretty simple.

reply "

--

" Goranek 8 hours ago

link

Seems that 2014 won't be a good year for Go on HN. Same old story as with Node and Mongo. First they ignore you, then they praise you, then they hate you.

reply

--

Nogwater 1 day ago

link

Not knowing enough about Erlang/OTP or Go: Would it be possible for more of OTP to be replicated in a Go 2.0? What's missing? What would it take? What can never be replicated?

reply

jerf 1 day ago

link

You can recover a substantial portion, but there's a few things you can't quite get back. Since Go is a mutable-state language, and also a shared-state language (isolation is by convention, not by language design), you have to live with the consequences of that. In my supervisor tree implementation, the restart of a monitored goroutine is just to fire off a new goroutine on the exact same object again; if you still have bad state lying around and immediately crash again, well, too bad, the supervisor can't do anything about that. (Except not thrash the processor with endless restarts.) In contrast, when an Erlang supervisor restarts a process, it is guaranteed to be a fresh process with no shared state from the one that just crashed. It's much more likely to not immediately crash.

--

waffle_ss 1 day ago

link

Besides the issues that jerf mentioned, Erlang also has per-process garbage collection built right into the VM. Last time I checked Go's garbage collection was still stop-the-world, mark-and-sweep, making it less suitable for soft real-time systems than Erlang.

reply

--

pron 13 hours ago

link

Well, I like Go, and it certainly has its use cases. But I personally worked on a hard-realtime safety critical mission Java project (if the program gave the wrong answer people could die, and if it gave the right answer late, tens of millions of dollars would be lost). We couldn't have done that without Java's concurrent data-structures (which Go doesn't have), without Real-Time Java's hard realtime guarantees, and it would have been very cumbersome without Java's brand of dynamic linking.

So sometimes Go is good enough, but sometimes you need the full power of the JVM.

reply

--

" Thoughts on Go

As the adage says, programmer time is much more valuable than processor time. So you should use Go.

    Go is a high-level language. It is easy to write in just like Python.
    It has a rich standard library just like Python.
    Go’s unit testing framework is phenomenal.
    Unlike Python, Go is statically typed, which makes refactoring large codebases unbelievably easier.
    Like Python, Go supports a form of duck-typing, through interfaces. Interfaces are the bees knees.
    Like Python, Go is garbage collected.
    Unlike Python, Go frees you from worrying about distinctions between synchronous and asynchronous calling conventions or threadpool switching.
    Like Twisted, Go is evented for I/O, but unlike Twisted, Go stack traces are much more manageable and Go’s profiling tools provide much greater clarity about execution order.
    Unlike Python, Go is fast, compiled, and leaves the runtime execution to only the work that needs to be done.

It’s a shame Go didn’t exist when Space Monkey started. We’re very lucky to have it now. " article on dumping Python for Go

--

" The concept of channels is nice, it lends itself easily to data pipelines and to SIMD parallel programming. It is not a breakthrough, though. SIMD is easily done in many many languages. Other than that, it's pretty down to earth. There's a myriad nice things (first class functions, closures, garbage collection, strong(ish) typing), but nothing ground breaking. There are also some choices that smell, namely the lack of inheritance due to the flat type hierarchy (and the obvious need for a fix), or the inability to overload methods and operators. " -- https://news.ycombinator.com/item?id=7803827

" To me, "no imposed GC" is a non-negotiable element of being a systems programming language. So I am far more drawn to Rust than Go. " -- https://news.ycombinator.com/item?id=7805145

" sergiosgc 51 minutes ago

link

http://golang.org/doc/faq#What_is_the_purpose_of_the_project

« No major systems language has emerged in over a decade, but over that time the computing landscape has changed tremendously.

(...)

We believe it's worth trying again with a new language, a concurrent, garbage-collected language with fast compilation. Regarding the points above [cut on edit]:

reply

0xdeadbeefbabe 12 minutes ago

link

Go has pretty neat semantics for channels and goroutines, but it presumes you want threads and blocking IO. Or in other words, "hey kid here's some rope"

Your approach to concurrency matters even more than the tool you use. Twisted (for python) has a good approach for example.

reply

"

--

jerf 17 hours ago

link

"People are leaving Python for Go because people have always left Python for fast compiled languages."

I think the angst about Go comes from the fact that someone who leaves Python for Java may still come back, because you can develop far more quickly in Python than Java. But someone who leaves Python for Go probably isn't coming back... my experience is that it is slightly slower (10-20%, YMMV but we're certainly not talking integer multiples) to cut out a prototype in Go, but that said prototype runs an order of magnitude faster, can potentially be optimized without much work for another order of magnitude (though ultimately this is more a reflection of Python's slowness than Go's speed), and is then much easier to maintain and extend, even on the timescale of a week or two, to say nothing of larger time scales.

A couple of people elsewhere in the comments here assume that Python must still be much easier to develop for than Go. It really isn't anymore; it turns out the combination of garbage collection and structural-typing-esque interfaces pretty much does anything you might have ever wanted Python to do, and a great deal of the rest of the differences turn out to be far less important in practice than in theory.

I first saw the idea in Joel Spolsky's "How Microsoft Lost the API War" [1], under the heading "Automatic Transmissions Win The Day", that the primary innovation of the 1990s was simply garbage collection instead of memory management. (As he is aware the idea is older than 1990, one presumes he means that it became practical and widespread.) The languages that spread this innovation, the "scripting languages" like Perl and Python and PHP and Javascript, changed a lot of things at once, which as any good scientist knows means it was hard to tell what about those changes actually contributed to the enhanced productivity that they certainly did bring. My experience with Go certainly gives me further belief in Joel's thesis... you read a bullet point listing of the Python features and Go features and it seems obvious that Python is a wildly better language than Go, yet... I've learned after all these years and all the languages I've tried out to ask myself, if the features are so important, why don't I miss them? Because in practice, I don't. Rip closures out of Go, and I'd miss that. Rip out the goroutines and I'd miss not having something like generators or something, indeed, the language would nearly be useless to me. But I certainly don't miss metaclasses or decorators or properties. I will cop to missing pervasive protocols for the built-in types, though; I wish I could get at the [] operator for maps, or implement a truly-transparent slice object. But that's about it.

[1]: http://www.joelonsoftware.com/articles/APIWar.html

reply

nostrademons 12 hours ago

link

My experience was Go was about 2-3x slower (development speed) than Python for prototyping. Web programming, though, which is a particular strength of Python and a particular weakness of Go. YMMV, of course.

I actually really do miss list comprehensions and properties and pervasive protocols for built-in types and really concise keyword/literal syntax. I don't miss metaclasses, and I only vaguely miss decorators. (Go has struct annotations and compiler-available-as-a-library, which in some ways are better.) Properties in particular are really useful for the evolvability of code; otherwise you have to overdesign up front to avoid a big rewrite as you switch from straight struct fields to accessors.

I'm actually leaning towards Java 8 + Jython 2.7 as I consider what language to write my startup/personal projects in. Jython 2.7 gives me basically all the language features I actually cared about in Python, and it fixes the Unicode mess that necessitated Python 3. It has no GIL and solid multithreading support. The Java layer gives you all the speed of Go and more. And the interface between them lets you seamlessly use Java within your Jython scripts, so you can easily push code down into the Java layer without having to rewrite your whole product.

reply

jerf 4 hours ago

link

Yes, the Go web prototyping story is a bit weak. If you want to do serious development in it I think it's pretty good, because frankly for years our web frameworks in most other languages have been unbelievably contorted around not having something like goroutines in ways that we've stopped even being able to see because they're so darned pervasive, but if you just want to slam a decent site out (and there's nothing wrong with that) there's no great story there right now.

reply

---

http://ridiculousfish.com/blog/posts/go_bloviations.html

---

in Golang, if you end a file without either a newline or semicolon to terminate the last line of code, you get this error:

" test.go:75:1: expected ';', found 'EOF' "

this dude found that confusing (presumably they wanted the error message to mention the lack of a newline, not just the lack of semicolons)

--

many ppl don't like the Golang requirement that every variable be used:

http://ridiculousfish.com/blog/posts/go_bloviations.html#go_damnableuse

--

apparently, one good effect of the requirement that every variable be used is that it forces programs to check return values

--

go's C interop

"Go has a foreign function interface to C, but it receives only a cursory note on the home page. This is unfortunate, because the FFI works pretty darn well. You pass a C header to the "cgo" tool, and it generates Go code (types, functions, etc.) that reflects the C code (but only the code that's actually referenced). C constants get reflected into Go constants, and the generated Go functions are stubby and just call into the C functions.

The cgo tool failed to parse my system's ncurses headers, but it worked quite well for a different C library I tried, successfully exposing enums, variables, and functions. Impressive stuff.

Where it falls down is function pointers: it is difficult to use a C library that expects you to pass it a function pointer. I struggled with this for an entire afternoon before giving up. Ostsol got it to work through, by his own description, three levels of indirection. " -- http://ridiculousfish.com/blog/posts/go_bloviations.html#go_ccompatibility
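
(for reference, a minimal cgo sketch of the kind of binding described above; it needs a C toolchain installed, and the C declarations go in the comment immediately preceding import "C":)

    package main

    /*
    #include <math.h>
    */
    import "C"

    import "fmt"

    func main() {
        // C.sqrt takes and returns a C.double.
        x := C.sqrt(C.double(2))
        fmt.Println(float64(x)) // 1.4142135623730951
    }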

--

"

Unicode

Go looooves UTF-8. It's thrilling that Go takes Unicode seriously at all in a language landscape where Unicode support ranges from tacked-on to entirely absent. Strings are all UTF-8 (unsurprisingly, given the identity of the designers). Source code files themselves are UTF-8. Moreover, the API exposes operations like type conversion in terms of large-granularity strings, as opposed to something like C or Haskell where case conversion is built atop a function that converts individual characters. Also, there is explicit support for 32 bit Unicode code points ("runes"), and converting between runes, UTF-8, and UTF16. There's a lot to like about the promise of the language with respect to Unicode.

But it's not all good. There is no case-insensitive compare (presumably, developers are expected to convert case and then compare, which is different).

Since this was written, Go added an EqualFold function, which reports whether strings are equal under Unicode case-folding. This seems like a bizarre addition: Unicode-naïve developers looking for a case insensitive compare are unlikely to recognize EqualFold, while Unicode-savvy developers may wonder which of the many folding algorithms you actually get. It is also unsuitable for folding tasks like a case-insensitive sort or hash table.

Furthermore, EqualFold doesn't implement a full Unicode case insensitive compare. You can run the following code at golang.org; it ought to output true, but instead outputs false.

package main import "fmt" import "strings" func main() { fmt.Println(strings.EqualFold?("ss", "ß")) }

Bad Unicode support remains an issue in Go.

Operations like substring searching return indexes instead of ranges, which makes it difficult to handle canonically equivalent character sequences. Likewise, string comparison is based on literal byte comparisons: there is no obvious way to handle the precomposed "San José" as the same string as the decomposed "San José". These are distressing omissions.

To give a concrete example, do a case-insensitive search for "Berliner Weisse" on this page in a modern Unicode-savvy browser (sorry Firefox users), and it will correctly find the alternate spelling "Berliner Weiße", a string with a different number of characters. The Go strings package could not support this.

My enthusiasm for its Unicode support was further dampened when I exercised some of the operations it does support. For example, it doesn't properly handle the case conversions of Greek sigma (as in the name "Odysseus") or German eszett:

package main import ( "os" . "strings" ) func main() { os.Stdout.WriteString?(ToLower?("ὈΔΥΣΣΕΎΣ\n")) os.Stdout.WriteString?(ToUpper?("Weiße Elster\n")) }

This outputs "ὀδυσσεύσ" and "WEIßE? ELSTER", instead of the correct "ὀδυσσεύς" and "WEISSE ELSTER."

In fact, reading the source code it's clear that string case conversions are currently implemented in terms of individual character case conversion. For the same reason, title case is broken even for Roman characters: strings.ToTitle("ridiculous fish") results in "RIDICULOUS FISH" instead of the correct "Ridiculous Fish." D'oh.

Go has addressed this by documenting this weirdo existing behavior and then adding a Title function that does proper title case mapping. So Title does title case mapping on a string, while ToTitle does title case mapping on individual characters. Pretty confusing.

Unicode in Go might be summed up as good types underlying a bad API. This sounds like a reparable problem: start with a minimal incomplete string package, and fix it later. But we know from Python the confusion that results from that approach. It would be better to have a complete Unicode-savvy interface from the start, even if its implementation lags somewhat. " -- http://ridiculousfish.com/blog/posts/go_bloviations.html#go_unicode
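
(one workaround for the precomposed/decomposed problem is to normalize both strings before comparing, e.g. with the golang.org/x/text/unicode/norm package, which lives outside the standard library; a sketch:)

    package main

    import (
        "fmt"

        "golang.org/x/text/unicode/norm"
    )

    // equalNFC compares two strings after NFC normalization, so "José"
    // matches whether it was stored precomposed or decomposed.
    // Case folding is still a separate problem.
    func equalNFC(a, b string) bool {
        return norm.NFC.String(a) == norm.NFC.String(b)
    }

    func main() {
        precomposed := "San Jos\u00e9" // é as a single code point
        decomposed := "San Jose\u0301" // e + combining acute accent
        fmt.Println(precomposed == decomposed)         // false
        fmt.Println(equalNFC(precomposed, decomposed)) // true
    }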

--

complaints about the Go runtime/compiler's errors:

" If you index beyond the bounds of an array, the error is "index out of range." It does not report what the index is, or what the valid range is. If you dereference nil, the error is "invalid memory address or nil pointer dereference" (which is it, and why doesn't it know?) If your code has so much as a single unused variable or import, the compiler will not "continue operation," and instead refuse to compile it entirely. " -- http://ridiculousfish.com/blog/posts/go_bloviations.html#go_errors

--

" The Go compiler does not support incremental or parallel compilation (yet). Changing one file requires recompiling them all, one by one. You could theoretically componentize an app into separate packages. However it appears that packages cannot have circular dependencies, so packages are more like libraries than classes. " -- http://ridiculousfish.com/blog/posts/go_bloviations.html#go_compiletimes

--

complaint about golang polymorphism on return type, as applied to channel reads:

" Another syntax / semantics oddity is the behavior of reading from channels (like a pipe). Whether a read from a channel blocks depends on how the return value is used:

 res := <- queue /* waits if the queue is empty */
 res, ok := <- queue /* returns immediately if the queue is empty */
 

This bears repeating: the behavior of a channel read depends on how the return value is (will be) used. This seems like a violation of the laws of time and space! " -- http://ridiculousfish.com/blog/posts/go_bloviations.html#go_syntax
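
(note: as far as I can tell the two-value form blocks too; the second value only reports whether the channel has been closed. A genuinely non-blocking receive uses select with a default case, e.g.:)

    package main

    import "fmt"

    func main() {
        queue := make(chan int, 1)
        queue <- 42

        // v, ok := <-queue would still block on an empty, open channel;
        // ok only distinguishes a real value from a closed channel's zero value.
        select {
        case v := <-queue:
            fmt.Println("got", v)
        default:
            fmt.Println("queue empty, not waiting")
        }
    }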

--

" You can create a goroutine with any function, even a closure. But be careful: a questionable design decision was to make closures capture variables by reference instead of by value. To use an example from Go's FAQ, this innocent looking code actually contains a serious race:

    values := []string{"a", "b", "c"}
    for _, v := range values {
        go func() {
            fmt.Println(v)
            done <- true
        }()
    }
  

The for loop and goroutines share memory for the variable v, so the loop's modifications to the variable are seen within the closure. For a language that exhorts us to "do not communicate by sharing memory," it sure makes it easy to accidentally share memory! (This is one reason why the default behavior of Apple's blocks extension is to capture by value.) " -- http://ridiculousfish.com/blog/posts/go_bloviations.html#go_concurrency
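
(the conventional fixes are to shadow the loop variable inside the loop or to pass it as an argument to the goroutine; a sketch of the shadowing version. Go 1.22 later made loop variables per-iteration, which removes this particular trap:)

    package main

    import "fmt"

    func main() {
        values := []string{"a", "b", "c"}
        done := make(chan bool)

        for _, v := range values {
            v := v // shadow: each goroutine captures its own copy
            go func() {
                fmt.Println(v)
                done <- true
            }()
        }
        // Alternatively: go func(v string) { fmt.Println(v); done <- true }(v)

        for range values {
            <-done
        }
    }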

--

https://news.ycombinator.com/item?id=7913918

--

DAddYE 14 hours ago

link

I'm very happy for this:

"Cross compiling with cgo enabled is now supported. ... Finally, the go command now supports packages that import Objective-C files (suffixed .m) through cgo."

Great news!!!!!!!!!!!!!!!!!!!!

reply

chrissnell 13 hours ago

link

I'm a little confused at what it takes to get this going. I want to use cgo for a linux/arm target, built on darwin/amd64. Do I need to first build a gcc toolchain for linux/arm on my Mac?

reply

4ad 8 hours ago

link

Yes.

reply

yelnatz 15 hours ago

link

Very excited about sync.Pool and contiguous stacks.

reply

https://docs.google.com/document/d/1wAaf1rYoM4S4gtnPh0zOlGzWtrZFQ5suE8qr2sD8uWQ/pub :

"Contiguous stacks

Allocate each Go routine a contiguous piece of memory for its stack, grown by reallocation/copy when it fills up.

"

" Go 1.3 has changed the implementation of goroutine stacks away from the old, "segmented" model to a contiguous model. When a goroutine needs more stack than is available, its stack is transferred to a larger single block of memory. The overhead of this transfer operation amortizes well and eliminates the old "hot spot" problem when a calculation repeatedly steps across a segment boundary. "

"

Map iteration

Iterations over small maps no longer happen in a consistent order. Go 1 defines that “The iteration order over maps is not specified and is not guaranteed to be the same from one iteration to the next.” To keep code from depending on map iteration order, Go 1.0 started each map iteration at a random index in the map. A new map implementation introduced in Go 1.1 neglected to randomize iteration for maps with eight or fewer entries, although the iteration order can still vary from system to system. This has allowed people to write Go 1.1 and Go 1.2 programs that depend on small map iteration order and therefore only work reliably on certain systems. Go 1.3 reintroduces random iteration for small maps in order to flush out these bugs.

...

Changes to the garbage collector ... Updating: Code that uses unsafe.Pointer to convert an integer-typed value held in memory into a pointer is illegal and must be rewritten. Such code can be identified by go vet.

 Updating: Code confusing pointers to incomplete types or passing them across package boundaries will no longer compile and must be rewritten. If the conversion is correct and must be preserved, use an explicit conversion via unsafe.Pointer. 

"

--

eloff 20 hours ago

link

I just spent the weekend learning Go and writing a single writer multi-reader hashtable for threadsafe access. I picked it deliberately because it's against the philosophy of the language, which is to share by communicating instead of sharing data structures directly. It was painful to write:

  // Do NOT reorder this, otherwise parallel lookup could find key but see empty value
  atomic.StoreInt32((*int32)(unsafe.Pointer(uintptr(uint64(slot)+4))), val)
  atomic.StoreInt32((*int32)(unsafe.Pointer(slot)), key)

However, the non volatile, non unsafe parts of the code were an absolute joy. Testing was a joy, compiling was a joy, and benchmarking was a joy. I was impressed that it allowed me to bypass the type system completely and do nasty, nasty things in the pursuit of performance. I want a language that lets me do nasty things where I must, but that makes the other 95% of the program, and the job of compiling, testing, and maintaining that program easy. Go excels here.

--

steveklabnik 3 hours ago

link

Well, first of all, I'd point you to http://arewewebyet.com/ ;)

Different linking options are found here: http://static.rust-lang.org/doc/master/rust.html#linkage

Basically, as of right now, when you link statically, Rust will not build in glibc (and jemalloc, IIRC). So, you'll need to make sure that your glibc versions line up. My understanding is that glibc isn't able to be statically linked in without breaking things.

You can use `objdump -T` to see these dependencies. On my system, compiling 'Hello world,' I get symbols for glibc and gcc.

(Go gets away with this by reimplementing the world, rather than relying on glibc. The benefit is a wholly-contained binary, as you've seen. The downside is compatibility bugs, like https://code.google.com/p/go/issues/detail?id=1435)

reply

--

tshadwell 22 hours ago

link

For fear of disagree downvotes: I would say that many of the qualms brought up in this article are problems that are encountered fighting the language.

The problem of 'summing any kind of list' is not a problem that is solved in Go via the proposed kind of parametric polymorphism. Instead, one might define a type, `type Adder interface{ Add(Adder) Adder }`, and then a function to add anything you want is fairly trivial, `func Sum(a ...Adder) Adder`; put anything you want in it, then assert the type of what comes out. [a sketch of this appears just after this comment]

When it comes to iteration, there is the generator pattern, in which a channel is returned and then the thread 'drinks' the channel until it is dry; for example `func (m myType) Walk() <-chan myType` can be iterated over via `for v := range mt.Walk() { [...] }`. Non-channel based patterns also exist; tokenisers usually have a Next() which can be used to step into the next token, etc.

The Nil pointer is not unsafe as far as I know, from the FAQ: http://golang.org/doc/faq#no_pointer_arithmetic

The writer seems to believe that functions on nil pointers crash the program, this is not the case. It's a common pattern in lazy construction to check if the receiving pointer is nil before continuing.

Go is not flawless by any means, but it warrants a specific style of simplistic but powerful programming that I personally enjoy.

reply
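
(a minimal sketch of the Adder-interface Sum described above; myInt is a hypothetical concrete type, and the caller still has to type-assert the result back out:)

    package main

    import "fmt"

    type Adder interface {
        Add(Adder) Adder
    }

    // myInt is a hypothetical concrete type satisfying Adder.
    type myInt int

    func (m myInt) Add(other Adder) Adder {
        return m + other.(myInt) // type assertion: panics if mixed types are summed
    }

    func Sum(a ...Adder) Adder {
        if len(a) == 0 {
            return nil
        }
        total := a[0]
        for _, x := range a[1:] {
            total = total.Add(x)
        }
        return total
    }

    func main() {
        total := Sum(myInt(1), myInt(2), myInt(3))
        fmt.Println(total.(myInt)) // assert the concrete type back out: 6
    }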

ithkuil 10 hours ago

link

>When it comes to iteration, there is the generator pattern, in which a channel is returned, and then the thread 'drinks' the channel until it is dry, for example `func (m myType) Walk() chan->myType` can be iterated over via `range v := mt.Walk(){ [...] }`. Non-channel based patterns also exist, tokenisers usually have a Next() which can be used to step into the next token, etc.

Actually, using channels as a general iterator just for the sake of using the range operator is considered an anti-pattern. The reason is not performance (although it has a cost), but the risk of leaking producer goroutines. Your example:

    for v := range mt.Walk() {
      if blah {
        break
      }
    }

How will the goroutine writing into the channel returned by mt.Walk know when there are no more consumers which will possibly read from it?

One way out is:

    done := make(chan struct{})
    for v := range mt.Walk(done) {
      if blah {
        break
      }
    }
    close(done) // or defer close(done)

Picking the right cleanup is error-prone.

What about errors? How will mt.Walk tell you that it had to interrupt the iteration because an error happened? Either your channel has a struct field containing your error and your actual value (unfortunately Go lacks tuples or multivalue channels).

Furthermore uncaught panics in the producer goroutine will generate a deadlock, which will be caught by the runtime, but it will halt your process. One way to do it is:

    errChan := make(chan error)
    for v := range mt.Walk(errChan) {
      if blah {
        break
      }
    }
    err := <-errChan

The producer will use the select statement to write both to errChan and your result channel. The success of writing to errChan is a signal for the producer that the consumer exited. However same thing here about relying on the last statement being executed to avoid a leak in case of returns or panics. Here the defer is less nice since you're supposed to do something with the error:

    func Example() (err error) {
      errChan := make(chan error)
      for v := range mt.Walk(errChan) {
        if blah {
          break
        }
      }
      defer func() {
        err = <-errChan
      }()
    }

Next-style methods just pass through the panics, and allow you to handle errors either by having a func Next() (error, value) or with this pattern which moves the pesky error handling outside:

    i := NewIterator()
    for i.Next() {
      item := i.Item()
      ...
    }
    err := i.Error()

First, any panic that happens inside either your code or the generator will bubble through. Second, if you return from your loop body, you will have to provide your own error (the compiler will remind you about your function signature, if in doubt). You can return early if the iterator can be stopped and GCed out (i.e. it doesn't handle goroutines or external resources), otherwise you'd have to call a cleanup as with channels.

The rule of thumb with Go should be that you don't have to do things just because they use some syntactic sugar. After a while you start to think about beauty in terms of properties not about calligraphy.

However, I do see this as a weak point of the language, which hopefully can be solved by education; after all Go is so simple to learn that you might be tempted to make it look even simpler. But the fact that the language has (almost) no magic, it means that you can actually understand what some code does, which imho outweighs the occasional syntactical heaviness or having to learn a few patterns.

reply

frowaway001 18 hours ago

link

> put anything you want in it, then assert the type of what comes out

... which is exactly what the article mentions and criticizes?

reply

rakoo 22 hours ago

link

> It's a common pattern in lazy construction to check if the receiving pointer is nil before continuing.

I disagree: if the construction can fail, the constructor must return an error, which will be checked; only if the error is nil can the process continue. There shouldn't be logic on the actual data returned to assert whether a constructor worked or not.

reply

tshadwell 22 hours ago

link

I meant something like this:

http://play.golang.org/p/eqnDLVMHGA (pseudocode)

reply

wyager 22 hours ago

link

>The writer seems to believe that functions on nil pointers crash the program, this is not the case. It's a common pattern in lazy construction to check if the receiving pointer is nil before continuing.

And what happens when you don't check? It crashes. That's the unsafe part.

These crashes are simply not possible in Rust and Haskell, and the type system notifies you if failure is possible (because the function will return an Option/Maybe).

reply

--

dcposch 18 hours ago

link

> A Good Solution: Constraint Based Generics and Parametric Polymorphism

> A Good Solution: Operators are Functions

> A Good Solution: Algebraic Types and Type-safe Failure

> A Good Solution: Pattern Matching and Compound Expressions

People have tried this approach. See languages like C++ and Scala, with hundreds of features and language specification that run into the thousands of pages.

For an unintentional parody of this way of thinking, see Martin Odersky's "Scala levels": http://www.scala-lang.org/old/node/8610

For additional hilarity, note that it is an undecidable problem whether a given C++ program will compile or not. http://stackoverflow.com/questions/189172/c-templates-turing...

--

Go was created by the forefathers of C and Unix. They left out all of those features on purpose. Not unlike the original C or the original Unix, Go is "as simple as possible, but no simpler".

Go's feature set is not merely a subset of other languages. It also has canonical solutions to important practical problems that most other languages do not solve out of the box:

Go's feature set is small but carefully chosen. I've found it to be productive and a joy to work with.

reply

runT1ME 18 hours ago

link

You seem completely ignorant of the things you're attempting to talk about. Scala doesn't have "hundreds' of features, nor is the language specification thousands of pages. It's just an outright fabrication to say so.

>Go was created by the forefathers of C and Unix.

Yeah, and it's obvious (and sad) they ignored the last twenty years of PL research and progress.

>They left out all of those features on purpose

Did they? I don't believe this is the case, as I've heard from the creators many times that they want to add generics but haven't figured out the details yet.

Are you really going to sit here and argue that static typing is important EXCEPT for when working with collection? That parametric polymorphism doesn't make things simpler?

reply

masklinn 13 hours ago

link

> Yeah, and it's obvious (and sad) they ignored the last twenty years of PL research and progress.

More than thirty years (at the time it was released), the first language with "modern" generics was ML in 1973.

reply

lmm 11 hours ago

link

The Scala specification is two hundred and something pages, around a third the length of the Java specification (largely because Scala has, in some sense, fewer features than Java, in the sense that Java has lots of edge cases with their own special handling, whereas Scala has a smaller number of general-purpose features. The complexity comes because it's easy to use all of them at once)

reply

wyager 18 hours ago

link

>See languages like C++ and Scala

Of the 4 you mentioned (Constraint based generics and parametric polymorphism, operators as functions, algebraic types and type-safe failures, and pattern matching/compound expression) C++ really only has 1 (operators as functions).

>with hundreds of features and language specification that run into the thousands of pages.

This describes neither Rust nor Haskell.

>Go is "as simple as possible, but no simpler".

It has mandatory heap usage, garbage collection, green threads. It's more than generous to call that "as simple as possible".

Of the 5 features you mention that Go has "canonical solutions" to (in the form of external tools), I know off the top of my head that Haskell's Cabal takes care of at least 4 of them. I'm not sure about formatting. Rust probably has similar tools, or if it doesn't, they can certainly be added without changing the language.

reply

dbaupp 16 hours ago

link

> Rust probably has similar tools

Built-in: http://doc.rust-lang.org/master/guide-testing.html

Built-in: http://doc.rust-lang.org/master/rustdoc.html

The newly released 'cargo': http://crates.io/ https://github.com/rust-lang/cargo/ (alpha, but quickly improving). This will be Rust's cabal equivalent, almost certainly with support for generating documentation and cross-compiling (it already has basic support for running the tests described above).

Missing at the moment, but very wanted: https://github.com/rust-lang/rust/issues/3195 .

(Well, to be precise, the compiler has the '--pretty normal' option, but it's not so good. https://github.com/pcwalton/rustfmt is the work-in-progress replacement.)

Already supported, although it requires manually compiling Rust with the appropriate --target flag passed to ./configure, to cross-compile the standard library.

reply

bjz_ 13 hours ago

link

I would be very wary about promoting Cargo as a 'cabal equivalent'. :P

reply

---

mholt 22 hours ago

link

In practice, Go has caused me less frustration than any other language I've used. I feel like the author's complaints here aren't really grounded in much experience, or maybe he's trying to use the wrong tool for the job.

The author's conclusion:

  · Go doesn't really do anything new.
  · Go isn't well-designed from the ground up. 
  · Go is a regression from other modern programming languages.

is hardly sustainable. Go was production-ready in 2011 with a stable version 1.0. It has a surprisingly mature tool chain and vibrant community. Go cross-compiles from my 64-bit Mac to a 32-bit Raspberry Pi or ARM Android phone on a whim. I can deploy my app by copying a single, self-contained binary. Tell me again that Go does nothing new for us.

Go makes concurrent programming safe and easy (with a nice syntax) -- something that we frankly should have done 30-40 years ago when we first started thinking about multiprocessing. Go was invented by folks like Ken Thompson (who created Unix) and Rob Pike (who created the Plan 9 operating system and co-created UTF-8). Tell me again that there isn't good engineering behind Go.

Finally, Go attacks the needs of modern programming from a different paradigm than we have been using for the last 10-20 years. From the first paragraph of Effective Go:

> ... thinking about the problem from a Go perspective could produce a successful but quite different program. In other words, to write Go well, it's important to understand its properties and idioms.

So of course it's different than a lot of other aged languages. Go tackles newer problems in a newer way. Tell me again that Go is a regression from other programming languages.

reply

--

Peaker 15 hours ago

link

> Tell me again that Go does nothing new for us

None of the things you mentioned are new.

> Go makes concurrent programming safe and easy

Mutability & concurrency, nils, interface casts -- these things all go against safe.

> Tell me again that there isn't good engineering behind Go.

You seem to think that a language that has baked in syntax for concurrency, or that has famous people behind it necessarily has "good engineering" behind it. I don't understand how one leads to the other.

When so many mistakes and regressions go into a language, one shouldn't care that famous names are behind it.

> Go tackles newer problems in a newer way

Go is essentially Algol 69 with baked in concurrency syntax.

> Tell me again that Go is a regression from other programming languages

Losing null safety, sum types & pattern matching, parametric polymorphism, and type-classes is a huge regression in PL design from the state of the art.

reply

AnimalMuppet 1 hour ago

link

You're thinking wrong. You're also proving the grandparent's point.

You're thinking in terms of "here's this set of bullet point features that I think a language has to have to be a proper, modern language." But the grandparent was asking you to consider that a different set of features might have value for some real-world problems that Go's authors had really bumped into. You reply, "Nope, couldn't have - it doesn't have my bullet point features!"

There are more things in programming than are dreamt of in your philosophy of what a programming language should be.

reply

--

cgag 22 hours ago

link

Ok, I'll point out again that it emphasizes both concurrency and mutability, which is a match made in hell, and has a type system that's constantly subverted by null pointers and casts to interface{}, which drastically reduce safety. It has a static type system released in the 2010s that doesn't have generics, and deploying static binaries is not a new technology.

reply
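
A small sketch of the two escape hatches that comment points at: a nil pointer that type-checks everywhere it is used, and an interface{} round-trip that postpones the type error to runtime (toy code of mine, not from the thread):

    package main

    import "fmt"

    type user struct{ name string }

    func describe(v interface{}) {
        // The compiler accepts any argument here; the real check happens at runtime.
        s, ok := v.(string)
        fmt.Printf("%q %v\n", s, ok)
    }

    func main() {
        var u *user           // nil, yet it type-checks anywhere a *user is expected
        fmt.Println(u == nil) // true
        // fmt.Println(u.name) // compiles fine, panics at runtime: nil pointer dereference

        describe("hello") // "hello" true
        describe(42)      // "" false: the mistake only surfaces at runtime
    }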

rakoo 22 hours ago

link

> Tell me again that Go does nothing new for us.

I do agree with the author here: Go the language does nothing new. Go the platform, on the other hand, is a really pleasant new experience when compared with other languages.

The language is a regression in features compared to what other languages can do, but that is totally understandable when you look at what Go is aimed at.

reply

--

Artemis2 22 hours ago

link

Go packs together a lot of nice things that previously existed in other languages. It still has room for improvement though, as IMHO the language is pretty basic ATM - but exhaustive enough to cover most needs (whether they require stuff like generics or not) in a very painless way.

I love Go, because it fits in my head.

reply

randallsquared 18 hours ago

link

> it fits in my head.

This. I love Go's simplicity. Coming back to Go code I wrote months ago, I can immediately understand what it does virtually every time, which required lots of discipline I didn't always have in other languages. Lots of languages have obscure corners that allow you to do really cool things that aren't obvious, but for the most part, Go doesn't have these; what you see is what's happening.

Are there things that would make Go a better language? Sure! Should the type system be improved? Yup! One thing that makes me cringe is when I open up library code and see interface{} and calls to the reflection package all over the place, but general solutions often require that in Go, and that's a problem. In practice, though, this is almost a feature: if you see that stuff in code you're reading, it's a giant red flag that this code is tricky and possibly slow, and care is needed.

Edit: speeling

reply
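
To make the interface{}-plus-reflect red flag concrete, here is a hedged sketch of the kind of "generic" helper that shows up in libraries without generics (hypothetical code, not from any particular package):

    package main

    import (
        "fmt"
        "reflect"
    )

    // Contains reports whether slice (any slice type) has an element equal to v.
    // Without generics this gets written against interface{} and reflect, so all
    // type errors are deferred to runtime and every call pays a reflection cost.
    func Contains(slice interface{}, v interface{}) bool {
        s := reflect.ValueOf(slice)
        if s.Kind() != reflect.Slice {
            panic("Contains: not a slice")
        }
        for i := 0; i < s.Len(); i++ {
            if reflect.DeepEqual(s.Index(i).Interface(), v) {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(Contains([]int{1, 2, 3}, 2))      // true
        fmt.Println(Contains([]string{"a"}, "b"))     // false
        fmt.Println(Contains([]int{1, 2, 3}, "oops")) // false; the type mismatch is never reported
    }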

--

zaphar 22 hours ago

link

I'm an experienced Go user. I'm also a lover of Haskell and Hindley-Milner type systems, and in practice these complaints are not that big of a deal. Generics may or may not get added in the future, but in practice you can go a long way with just slices and maps.

And while the Hindley-Milner type system is a wonder to behold, and I love working in languages that have it, sometimes those same languages introduce a non-trivial amount of friction to development.

Go's single best feature, and the one around which almost every decision in the language is centered, is an almost total lack of developer friction. If Go has a slogan, that slogan is "Frictionless Development". It's easily the simplest, least annoying, and most "get out of your way" language I've ever used.

[EDIT: some wording was incorrect]

reply

--

http://yager.io/programming/go.html

--

ericff 4 days ago

link

TJ replied my email, he said:

Hey! Mostly Go because it suits what I picture as my ideal language pretty well. It's simple, C-like, great concurrency primitives, great standard library. I've had bad experiences with anything JVM so I'd stay away from Clojure, and Erlang has some legacy baggage but I'd still like to give it a better look some day! Elixir is cool but too Ruby for me

---

" Why Go?

Node is still a nice tool and if it’s working for you then you have nothing to worry about, but if things are bothering you, don’t forget to step out of your box and see what else is out there — within the first few hours of using Go for production work I was already hooked.

Again — I’m? not saying Go is the absolute best language out there and that you must use it, but it’s very mature and robust for its age (roughly the same age as Node), refactoring with types is pleasant and simple, the tooling Go provides for profiling and debugging is great, and the community has very strong conventions regarding documentation, formatting, benchmarking, and API design.

The Go stdlib is something I thought was awful when I first heard of Go, being so used to ultra-modularity in Node, and having experienced most of Ruby's stdlib rot. After getting into the language I realized that most of the stdlib is pretty essential to programs these days: compression, json, IO, buffered IO, string manipulation and so on. The bulk of these APIs are well-defined and powerful. It's pretty easy to get by writing entire programs consuming nearly nothing but the stdlib.

...

Go versus Node

If you’re doing distributed work then you’ll find Go’s expressive concurrency primitives very helpful. We could achieve similar things in Node with generators, but in my opinion generators will only ever get us half way there. Without separate stacks error handling & reporting will be mediocre at best. I also don’t want to wait 3 years for the community to defragment, when we have solutions that work now, and work well.

Error-handling in Go is superior in my opinion. Node is great in the sense that you have to think about every error, and decide what to do. Node fails however because:

    you may get duplicate callbacks
    you may not get a callback at all (lost in limbo)
    you may get out-of-band errors
    emitters may get multiple “error” events
    missing “error” events sends everything to hell
    often unsure what requires “error” handlers
    “error” handlers are very verbose
    callbacks suck

In Go when my code is done, it's done; you can't re-execute the statement. This is not true in Node: you could think a routine is completely finished, until a library accidentally invokes a callback multiple times, or doesn't properly clear handlers, and causes code to re-execute. This is incredibly difficult to reason about in live production code, so why bother? Other languages don't make you go through this pain.

Personally I think it makes more sense for young startups to focus on reliability over raw performance, that’s what makes or breaks your relationship with customers. This is especially true with small teams, if you’re too busy patching brittle code then you can’t work on the real product.

Node's sweet spot, to me at least, is that it's written with JavaScript; I think capitalizing on that with usability makes the most sense.

"

---

video on golang concurrency:

http://m.youtube.com/watch?v=f6kdp27TYZs

---

http://nothingbutsnark.svbtle.com/how-to-argue-for-pythons-use

."

the Go/Python 3 comparison shows that Python can accomplish solutions in less code than Go every time http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?test=all&lang=python3&lang2=go&data=u64q "

---

http://www.jerf.org/iri/post/2930

wcummings 6 days ago

link

Why not just use Erlang?

reply

jerf 6 days ago

link

I'm comfortable with it, but none of my team wants to use it.

Even after 7 years of experience, it is very klunky to program in, and in reimplementing the Erlang program I had in hand, I immediately added some features that were very hard to implement in Erlang. And note the architecture of the code didn't actually change, this isn't one of those cases where the second version was also radically different than the first so no comparison is possible, it was actually hard to implement in Erlang. Erlang is a very klunky language to work in. Raw Java might be klunkier, but it's easily the klunkiest of the languages I use.

Erlang's clustering, which my code heavily depends on, just sort of breaks sometimes, on machines that are on the same switch, with every bit of hardware switched out over time. It is very opaque to figure out why it isn't working, or how to make it work again. Mnesia breaks even more often on loads that shouldn't be a problem. After 7 years of working with it, at this point even if the solution was just to turn on this one flag somewhere and all my problems would go away, I still would not consider that a positive.

I don't "hate" it, but I've been waiting for years for a more conventional language that had pervasive microthreading in it that I could use for work. Go is not quite it... I'd still like the goroutines to be completely isolated, and I've got a handful of other quibbles... but it's close enough that I might be able to get out of the hell of trying to multiprocess in most other languages, while being something I might actually be able to get adopted. In general, the company I work for has not been overwhelmed by the effectiveness of Erlang.

In practice, the biggest loss of fundamental capability is loss of true asynchronous exceptions, and, well... in practice that's only an annoyance, not an OH MY GOSH I WILL NEVER USE A LANGUAGE WITHOUT ASYNCHRONOUS EXCEPTIONS sort of thing. In most cases if you're throwing an async exception you've really already lost and now we're just discussing exactly how you're losing.

Every language student should study Erlang, and learn from it, and possibly use it. It's a solid language. And there's nothing wrong with trying to port some of that solidity elsewhere. (I'd pick a real DB over Mnesia, though.)

reply

stock_toaster 6 days ago

link

> In most cases if you're throwing an async exception you've really already lost and now we're just discussing exactly how you're losing.

That is exceptionally quotable. :)

reply

tormeh 6 days ago

link

Have you taken a look at Akka (with Scala)? You sacrifice code hot-swapping and I think little else, but you get JVM performance and one of the most extensible non-Lisp and safe non-Haskell languages in existence.

reply

wcummings 6 days ago

link

Or Akka with Java

reply

tormeh 6 days ago

link

Yeah, but the reasons for choosing Java over Scala are mostly organizational. If your organization can tolerate Erlang it can tolerate Scala.

reply

mratzloff 6 days ago

link

It seems he wanted a static type system, for starters.

reply

rubiquity 6 days ago

link

There's a running joke in the Erlang community that every Erlang programmer will at some point try and re-implement Supervisors and do it horribly wrong. This isn't because the person is a bad programmer, it's just easy to underestimate the amount of time and incredible programming that has gone into Erlang's supervisors.

By sticking with Go, the author gets to keep the static types but likely gets faux-Supervisors at best.

reply

jerf 6 days ago

link

I describe exactly what I get. Whether they're "faux" depends on your definition.

No, seriously, that's really profoundly true. Erlang's definition isn't the only one. But you're free to take it. If you can bend a bit, though, these may not be "faux".

One thing that's easier when it comes to implementing supervisors in not-Erlang though is that the vast majority of languages have something better than "behaviors" to work with, which are bizarrely klunky things. I actually implemented something more like a "gen_server" initially, but it wasn't a win. Some of that "incredible programming" is essential complexity (in Fred Brooks' terminology), but some of it is accidental complexity. Plus Erlang had to build it from true, total scratch; Go is a pre-existing runtime, and "defer" is simple, yet powerful.

reply

rvirding 6 days ago

link

Did you try to implement your own behaviours? Or processes that aren't behaviours as such but still function correctly in supervision trees and in applications?

reply

jerf 5 days ago

link

What I found is that if you implement Serve() and Stop(), honestly, you're done, in the world of Go. If you want a state machine, just write it. Many of the things I'm supervising are goroutines that wrap an API around a channel that implements a server.

reply
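
A rough sketch of what that can look like in Go (my own approximation, not jerf's code): a tiny Service interface plus a supervise loop that restarts a component when Serve panics, leaning on defer/recover.

    package main

    import (
        "fmt"
        "time"
    )

    // Service is all a supervised component needs to implement.
    type Service interface {
        Serve() // runs until Stop is called or it panics
        Stop()
    }

    // supervise runs s.Serve and restarts it if it panics; a clean return ends
    // supervision. This is a toy approximation, not an Erlang supervisor.
    func supervise(s Service) {
        for {
            crashed := false
            func() {
                defer func() {
                    if r := recover(); r != nil {
                        crashed = true
                        fmt.Println("service crashed:", r)
                    }
                }()
                s.Serve()
            }()
            if !crashed {
                return // Serve returned normally (e.g. after Stop)
            }
            time.Sleep(time.Second) // crude restart backoff
        }
    }

    // pinger is a trivial Service used to exercise supervise.
    type pinger struct{ quit chan struct{} }

    func (p *pinger) Serve() {
        for {
            select {
            case <-p.quit:
                return
            case <-time.After(200 * time.Millisecond):
                fmt.Println("ping")
            }
        }
    }

    func (p *pinger) Stop() { close(p.quit) }

    func main() {
        p := &pinger{quit: make(chan struct{})}
        done := make(chan struct{})
        go func() {
            supervise(p)
            close(done)
        }()
        time.Sleep(time.Second)
        p.Stop()
        <-done
    }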


threeseed 23 hours ago

link

Couldn't agree more.

Anyone who thinks Go will ever be a replacement for Java is frankly clueless about enterprise software development. Go has almost non-existent integration with enterprise systems e.g. SAP, Hadoop. It lacks operational management capabilities e.g. JMX. And seriously the range of libraries on the JVM covers pretty much everything e.g. banking/finance use cases.

And concerningly there is not a single reasonably sized Go project to get an understanding of how it works with 20, 50, 100 or 500+ developers working on the same codebase.

reply

kasey_junk 22 hours ago

link

I think when people claim that Go is a replacement for Java, what they really mean is "Go has similar performance characteristics as the JVM and also doesn't require manual memory management".

Anyone who has ever worked in enterprise software can tell you, Go is (currently) exceptionally poorly suited for that style of development. I don't think that is an accident though. I think there is at least the implication that enterprise software development models and architectures are flawed at the core. Go seems to be designed from the start to prevent you from developing that way.

I may be reading more into the Go culture than I should, and I certainly don't think that there is proof that enterprise software can't be successful, but Go clearly steers you into building small, self contained servers that do one thing well and can go without change for a long time.

reply

---

Animats 1 day ago

link

Well, it's good to have a hard-compiled language that's (almost) memory safe. Three problems with Go: it relies on garbage collection, it lacks exceptions, and it lacks generics.

reply

kyllo 1 day ago

link

Rust addresses all of these problems. It's memory safe through compile-time ownership and borrow checking (not garbage collected), and it has exceptions and generics. It also has a more powerful type system with type inference.

Having tried out both Go and Rust I don't see any reason to prefer Go, at least once Rust has a 1.0 release, which is supposed to happen in the next 2-3 months.

Both languages were designed to replace C++, but only Rust has the features to actually succeed at that, IMHO.

reply

Animats 1 day ago

link

Sigh. First example of channels in "Effective Go" shares the list to be sorted between two goroutines.

https://golang.org/doc/effective_go.html#sharing

All that's sent over the channels is a flag message to start the process and report completion.

Yes, there's the fanboy answer that they're not "really sharing" because both threads don't access the list at the same time. I've heard that excuse. The threads are sharing data. Deal with it. Locking is (hopefully) provided by abusing the channel mechanism to simulate a semaphore. The channel mechanism isn't doing anything here that a semaphore couldn't do better.

reply
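
For readers without the page open, the pattern being criticized is roughly this (a paraphrase, not the actual Effective Go code): the slice itself is shared between the goroutines, and the channel carries nothing but a completion signal.

    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        data := []int{3, 1, 2}
        c := make(chan int) // used purely as a completion signal

        go func() {
            sort.Ints(data) // the goroutine touches the shared slice directly
            c <- 1          // the value is irrelevant; only the send matters
        }()

        <-c // the receive establishes happens-before, so main may read data again
        fmt.Println(data)
    }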

xkarga00 23 hours ago

link

Actually the Effective Go page needs to get updated. They also comment next to c <- 1 that the value does not matter, where a struct{} channel would be more idiomatic.

But still the Go mantra is valid: the means (channels) provided by the language allow for sharing by communicating and IT IS actually considered idiomatic where appropriate. Also your points about exceptions and generics are invalid, or at least very specific cases. I am working on backend stuff (mostly command-line tools) and I never felt any "lack" of them.

reply

skybrian 18 hours ago

link

It's rather unfair to complain that they're simulating a semaphore when they explicitly said that this is what they're doing.

reply


---

using race conditions in Go to effectively access pointers unsafely:

http://research.swtch.com/gorace
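
The core idea of the article, condensed into a sketch (not the article's code, and whether it actually misbehaves depends on GOMAXPROCS, the scheduler, and the platform): interface values are two words, a type word and a data word, and unsynchronized writes of different concrete types to the same interface variable can be observed half-updated, so a plain data race can turn into memory unsafety. The race detector (go run -race) flags this immediately.

    package main

    import "fmt"

    type small struct{ x int }
    type big struct{ a, b, c int }

    type caller interface{ call() int }

    func (s *small) call() int { return s.x }
    func (b *big) call() int   { return b.a + b.b + b.c }

    var shared caller = &small{}

    func main() {
        go func() {
            for i := 0; i < 1e7; i++ {
                if i%2 == 0 {
                    shared = &small{x: 1}
                } else {
                    shared = &big{a: 1, b: 2, c: 3}
                }
            }
        }()
        sum := 0
        for i := 0; i < 1e7; i++ {
            // The reader can see the type word of *big paired with the data word
            // of *small, at which point call() reads memory that was never a big.
            sum += shared.call()
        }
        fmt.Println(sum)
    }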

cmelbye 18 hours ago

link

Can you expand on why App Engine doesn't allow multithreaded Go programs? I never quite understood what they were trying to prevent you from doing.

reply

Animats 17 hours ago

link

Here's a description of the race condition exploit:

http://research.swtch.com/gorace

reply

andrewmwatson 20 hours ago

link

They most certainly do let you run Go on AppEngine?!

reply

cmelbye 18 hours ago

link

He didn't say Go. He said multi-threaded Go. GOMAXPROCS=1 on App Engine.

reply

---

golang basically agrees to stick to semantic versioning (semver) without calling it that:

https://golang.org/doc/go1compat


lmm 1 day ago

link

The big advantage of Dart over JS is the type system. Switching to Go would mean throwing that away.

reply

nickik 1 day ago

link

PureScript and ClojureScript both have type systems better than the one in Dart. The simplest way to get a type system is TypeScript. Not sure how it compares.

reply

lmm 1 day ago

link

"Better" is subjective (and I certainly wouldn't consider any dynamic system to be better than dart's) - Dart hits a certain sweet spot IMO, being simpler than a typeclass-based approach but far more usable than Go or Java. TypeScript? is pretty nice but gradual, which is its own set of tradeoffs.

Dart has other selling points than the type system, sure, but degrading it to Go-style types would be a serious loss.

reply


robmccoll 19 hours ago

link

Anybody up for backporting Go's stdlib to C? Doing so in an automated way would be all the better.

There are just so many things in Go that feel like 80% solutions - they make great demos but in every day use you have to fight them (looking at you, import system and GOPATH; the magical make() function; magical overloaded accessors; lack of if as an expression, or at least a ternary if; pre- and post-increment are hacks, not expressions; lack of coercion to more precise types; no templating / generics / preprocessing; needs an equivalent to realloc; having to go through reflect / unsafe to get things done; lack of proper type resolution for complex types).

There are many things I do like about Go, but much of the time it feels like a very pretty prison compared to the (admittedly less pretty) freedom of C.

reply
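
For what it's worth, the missing ternary in that list comes down to writing an if/else around an assignment (trivial sketch):

    package main

    import "fmt"

    func main() {
        n := 7
        // No `cond ? a : b` in Go; the usual substitute is an if statement
        // around an assignment, or a tiny helper function.
        label := "even"
        if n%2 != 0 {
            label = "odd"
        }
        fmt.Println(n, "is", label)
    }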

---

hendry 1 day ago

link

Will there be a proper debugger I wonder?

reply

ganarajpr 1 day ago

link

This is the highest and most important thing I want from Go. A proper debugger that is not insanely hard to set up (on any machine!). Gives a complete stack trace and such info. I am not sure how people can live without a debugger in 2014. GDB is not the answer to this - for sure. There are so many things that can be done (tooling-wise!) and this one is, I personally think, one of the first things the golang guys need to do.

reply

LukeShu 1 day ago

link

Why do you say that GDB is not the answer? In my experience, GDB (and DDD) works great with Go!

reply

ganarajpr 1 day ago

link

You are probably pretty awesome at setting things up. I am not. I tried GDB, golang and Windows as a combination and it's an exercise in torture.

reply

LukeShu 23 hours ago

link

That's true, I am great at setting things up... but Go+GDB was zero setup. `go build` then `gdb ${executable_file}`, and nothing else. I suspect that it's Windows being in the mix that gave you trouble?

reply

lprez 23 hours ago

link

On a Mac at least you can't just do that; you need to create yourself a certificate and sign GDB. Took me a while too.

reply

module0000 7 hours ago

link

Just an FYI, you can use MacPorts to `sudo port install gdb`, and it installs gdb as 'ggdb' without the signing mess. To use with ddd invoke as `ddd --debugger ggdb` and it works like a charm.

reply

andrewstuart2 1 day ago

link

I'll admit I'm no expert at gdb, so I'm not aware whether it falls short vs. debugging C or C++, but what's wrong with using gdb?

reply

_ak 1 day ago

link

Depends on how you define "proper", and on how you imagine a debugger interface with which you can handle a running application with hundreds, if not thousands, of goroutines.

reply

https://code.google.com/p/ogle/

---

https://medium.com/@adamhjk/rust-and-go-e18d511fbd95

---

tptacek 20 hours ago

link

Like you†, I've had the pleasure of working with some fairly large concurrent codebases and the character-building experience of tracking down deadlocks, random memory corruption bugs that turn out to be race conditions, and (my most favorite of all) unexpected serializations that randomly bring programs to a halt. Most of that experience has been in C++, with a little C and a little Java mixed in there.

Over & over I see language aficionados ding Golang for not taking advantage of immutability and for allowing shared data --- or, in your case, going a step further and reducing all communication among processes in Golang to instances of synchronized sharing.

What I'd like to know is: why don't all those hundreds of thousands of lines of concurrent Golang code out there, including all the library code I can just "go get" and whose authors have been encouraged by Rob Pike to use, basically, threads with near total abandon (watch his video about designing a lexer!) --- why don't all those libraries and programs randomly deadlock and corrupt themselves all the time?

Because my experience is that Golang code is quite a bit more reliable than, for instance, Python code.

What am I missing? The "share by communicating" model in Golang seems to work pretty darn well, especially given the extent to which Golang begs programmers to make programs concurrent.

† OK probably you more than me but still.

reply

Jweb_Guru 20 hours ago

link

> why don't all those libraries and programs randomly deadlock and corrupt themselves all the time?

The simplest answer would be "they do." In aphyr's recent presentation on Jepsen, where he tested etcd (a Go database implemented on top of Raft), he noted that when he started using it he encountered a ton of easily reproducible races and deadlocks (which he sarcastically noted was surprising because he thought goroutines were supposed to make concurrency issues a thing of the past).

I am not saying that Go channels don't help the situation at all--and the inclusion of a race detector doesn't hurt either--but you still have plenty of ways to shoot yourself in the foot. The thing that probably helps most is that GOMAXPROCS is 1 by default, since data races are a multicore phenomenon in Go.

reply
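
A minimal example of the kind of race that slips through review but that the race detector (go run -race, or go test -race) does catch (toy code):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        counts := map[string]int{}
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                counts["hits"]++ // unsynchronized access from 100 goroutines: a data race
            }()
        }
        wg.Wait()
        // May print anything up to 100, or the runtime may abort on the
        // concurrent map writes. `go run -race` reports the race reliably.
        fmt.Println(counts["hits"])
    }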

tptacek 19 hours ago

link

Distributed systems programming is its own special concurrency problem, and distributed systems also exhibit deadlock, races, and serialization, no matter what language they're implemented in. I'm not sure what finding a race condition in a distributed commit implementation says about a language; at the very least, it's nothing you couldn't say about Rust as well, which is also not a language that solves distributed systems concurrency problems.

Maybe I'm wrong and etcd was riddled with concurrency problems between the goroutines of a single etcd process?

In any case: as anyone who has worked on a large-scale threaded C++ codebase can tell you: Golang programs simply do not exhibit the concurrency failures that conventional threaded programming environments do. It would be one thing if Golang code only used concurrency for, say, network calls. But goroutine calls are littered throughout the standard library, and throughout everyone's library code.

reply

rdtsc 17 hours ago

link

It is not a black and white situation probably. Golang is better because it has built-in channels and encourages users to take advantage of them. It also has garbage collection. So those 2 things right off the bat help.

But there are better things out there -- isolated heaps (Erlang), borrow checkers (Rust), stronger type systems and immutability (Haskell) etc. There are no magic unicorns so those things often come at a price -- sequential code slowdown.

Getting back to go. One can of course say, "Oh, send only messages. We are all adults here. Let's just agree to be nice. Stop sharing mutable memory between goroutines!" But all it takes is "that guy" or "that library", doing it "that one time" and then there are crashes during a customer demo or during some critical mission. It crashes and then good luck trying to reproduce it. Setting watchpoints in gdb (or the equivalent Go tool), asking customers "Can you tell me exactly what you did that day. Think harder!" and so on.

Also, as others have pointed out, Golang is often run with just one OS thread backing all the concurrency. So many potential races could just be hidden.

There is also some confirmation bias involved. When something is broken, often authors don't write blogs about it, don't advertise. They fix it, and move on. So maybe a lot of programs are full of concurrency bugs but just nobody is blogging about it. They've invested time and energy into learning a new ecosystem and now they have to blog about its flaws and so on. That is hard to do.

Another observation is that when spending a lot of time debugging and handling segfaults, pointer errors, use-after-free errors, and concurrency issues, that becomes the default and expected view of how programming works. It becomes hard to imagine how it could work another way. It becomes obvious that weeks would be spent tracking one concurrency bug, or having to add cron jobs to watch for crashed programs and restart them, because the system is so complex and non-deterministic that replicating the bug is too hard.

reply

gnuvince 4 hours ago

link

How does Rust's borrow checker cause a slowdown of sequential code? It's a purely compile-time construct and allows for the elimination of a GC, so it's actually a net win in code execution speed.

reply

jaytaylor 12 hours ago

link

Just out of curiosity - is there a problem with sharing data across goroutines when access to/mutation of said data is controlled by a mutex?

    func (*Mutex) Lock
    Lock locks m. If the lock is already in use,
    the calling goroutine blocks until the mutex is
    available.

http://golang.org/pkg/sync/

It seems to me this is a valid alternative to [rigidly] sticking to pure message passing.

reply
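
In practice the usual Go answer to that question is yes, it is valid, and the convention is to bundle the mutex with the data it guards so the lock/data relationship at least lives in one place (a sketch, not from the thread):

    package main

    import (
        "fmt"
        "sync"
    )

    // hitCounter keeps the mutex right next to the data it protects, so the
    // "which lock guards what" relationship is visible in the type itself.
    type hitCounter struct {
        mu     sync.Mutex
        counts map[string]int
    }

    func (h *hitCounter) Inc(key string) {
        h.mu.Lock()
        defer h.mu.Unlock()
        h.counts[key]++
    }

    func (h *hitCounter) Get(key string) int {
        h.mu.Lock()
        defer h.mu.Unlock()
        return h.counts[key]
    }

    func main() {
        h := &hitCounter{counts: map[string]int{}}
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                h.Inc("hits")
            }()
        }
        wg.Wait()
        fmt.Println(h.Get("hits")) // always 100
    }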

lomnakkus 1 hour ago

link

In short: we (as a field) have tried them for many, many (many!) years now and they have been found lacking -- in practice they're just too hard to get right for large-scale systems.

EDIT: The mutexes themselves are easy enough to get right; it's the systems using mutexes that are too hard to get right.

reply

Animats 11 hours ago

link

In most languages, the language says nothing about what data is protected by the mutex. Modula and Ada did, and Java has "synchronized" objects, but C/C++/Go lack any syntax for talking about that. This typically becomes a problem as a program is modified over time, and the relationship between mutex and data is forgotten.

reply

masklinn 11 hours ago

link

> In most languages, the language says nothing about what data is protected by the mutex.

Or the other way around, what mutex protects a piece of data (or even that a piece of data should be protected at all), so it's easy to forget it and just manipulate a bit of data without correctly locking it.

I was pleasantly surprised to discover that Rust's sync::Mutex owns the data it protects, so you can only access the data through the mutex (and the relation thus becomes obvious).

reply

rdtsc 11 hours ago

link

> is there a problem with sharing data across goroutines when access to/mutation of said data is controlled by a mutex?

Well, aside from deadlocks or priority inversion of sorts, that should work. Just like it would work in C/C++/Java etc.

The real problem is when the shared data is not controlled by a mutex, but should be.

reply

Jweb_Guru 19 hours ago

link

The issues weren't with Go--which definitely allows for both data races and deadlocks and doesn't claim to eliminate either--but with etcd. And according to aphyr, the team was very responsive and quickly fixed the ones he found.

My point wasn't that Go is _worse_ than contemporary languages like C++ and Java when it comes to data races, only that it doesn't eliminate them. Which, again, it doesn't claim to. Rust does, and it is an important difference between the two languages. Because data race freedom with cheap mutable state requires a garbage-collection free subset of your language [1], I think it's unlikely that Go will ever guarantee this.

[1] as noted by Niko Matsakis at http://smallcultfollowing.com/babysteps/blog/2013/06/11/on-t...

reply

---

http://blog.golang.org/5years

https://news.ycombinator.com/item?id=8585483

---

https://news.ycombinator.com/item?id=8733352

---

http://bowery.io/posts/Nodejs-to-Golang-Bowery/

https://news.ycombinator.com/item?id=8968522

---

golang at cloudflare; some discussion of static linking; some discussion of concurrency; mb other stuff

https://news.ycombinator.com/item?id=4195176

---

https://togototo.wordpress.com/2015/03/07/fulfilling-a-pikedream-the-ups-of-downs-of-porting-50k-lines-of-c-to-go/

https://news.ycombinator.com/item?id=9161366

"The only other languages I know (edit: besides Go) with built-in lightweight thread schedulers are Erlang/Elixir and Haskell, with the former lacking static typing and the latter lacking management-willing-to-use-ability." -- https://togototo.wordpress.com/2015/03/07/fulfilling-a-pikedream-the-ups-of-downs-of-porting-50k-lines-of-c-to-go/

---

vezzy-fnord 15 hours ago

Go is doomed to succeed because it extends the mental model of C with a concurrency model that finds a decent compromise between power and ease of use, makes the typing less prone to subversion, adds memory safety via GC, uses a structural subtyping system through interfaces that brings many OO-like benefits while still keeping to the C struct way of thinking, first-class functions, various syntactic rough edges cleaned up and so forth.

Because the Unix system programming world (and POSIX particularly) is very much built with the conventions and semantics of C in mind, most serious POSIX programming outside of C means you have to deal with painful FFIs, lousy wrappers, overly abstracted APIs that hide details like certain lower level flags and so forth. Some are better at this than others, of course (OCaml is one of the better ones)... but, nonetheless.

So it's unsurprising that many infrastructure developers are jumping to Go. There's just enough new things to incentivize a switch, but not too much that it dissuades from it.

reply

tptacek 14 hours ago

I'm a 90's C programmer and a Golang programmer now, and while there's some truth to this, it's reductive. A 2000s-era C programmer would not write socket code that worked the way net.Conn does. While C code gave us the "pipes and filters" abstraction of Unix, they are not an idiom in C code --- in fact, Golang's reader/writer interfaces feel more like a refinement of Java than a modernization of C.

Golang feels very much like an offspring of Java and Python to me.

reply

vezzy-fnord 13 hours ago

Yes, Go has a heavy focus on interfaces and structural subtyping, as I said. It reflects the trend behind Pike's languages: C-like + some form of CSP + key abstraction.

But, even the net package has a noticeable Plan 9 legacy, like the use of dial/listen, as opposed to the clumsy Berkeley socket way. You might recall this was a central complaint of Pike's in his famous presentation "Systems Software Research is Irrelevant".

Obviously it's nowhere near as pure as ndb, but that's the reality of being in Unix.

reply

tptacek 13 hours ago

I certainly can't deny the Plan9-ism.

reply

---

danieldk 21 hours ago

I think you could argue that both languages (Go, Erlang) are equally a pain in the ass to interop with C in.

Could you elaborate? I have used cgo a lot and it is very nice compared to other FFIs I have used (Java, Haskell, Python, Ruby).

Things will get a bit more ugly >= 1.5 with a concurrent copying garbage collector. Since e.g. arrays backing Go slices may be moved, you cannot pass 'blocks' of Go memory to C-land anymore by taking the address of the first element.

Edit: or do you mean using a Go package from C?

reply

---

[–]breakingcups 20 points 4 hours ago

What is something you'd love to see in Go, but know is impossible until Go 2?


[–]earthboundkid 1 point 37 minutes ago

My wish list:

    No bare returns
    No new
    error with Source() error method added for chaining
    Various library clean-ups: more consistency between the archive and compress packages and their methods, various io.Reader/Writer helpers in one easy-to-find place instead of ioutil plus bufio plus bytes, various deprecations removed

---

example comparison of an algorithm in Go and in Elixir:

https://gist.github.com/nathany/723e6057e5c7c70e5772

---

[–]snuggles166 7 points 5 hours ago

Congrats on the release!

Repost from here

I've really enjoyed the time I've spent with Go but feel like the state of dependency management has kept me away.

Am I being stubborn in my longing for an npm, Ruby Gems, or pip? Is there a reason why one of these hasn't emerged/been adopted by the community? (I'm aware of the 1.5 experiment with vendoring.)

Semver and pinning versions has always just made sense to me. I can easily adopt new features and fixes automatically without worrying about things breaking.

How does the community feel this far along?


[–]broady 4 points 4 hours ago*

Thankfully, there are many good solutions (Godep, gb, govendor, etc). The community has not settled on just one.

Support for vendored paths/packages ("new" in Go 1.6) in the go tool is a big step forward already.

In a future release, perhaps the go tool will include a way to download packages into a vendor directory and pin them to a version.

...

[–]ThisAsYou 1 point 5 hours ago

gopkg.in is commonly used for libraries. There are plenty of tools to manage vendoring (which is now enabled by default in 1.6) such as glide.

Edit: as for npm/gems/pip, there are plenty of problems with those systems, which I'm sure plenty of people will go into detail about.

---

[–]mwholt 9 points 5 hours ago

I don't use cgo very often (if ever), although I can see why it's useful/necessary. Still, is there a reason CGO_ENABLED=0 isn't the default value? Or, in other words, what are the disadvantages of a truly static Go binary?


[–]ianlancetaylor 15 points 5 hours ago

As of Go 1.5 and 1.6, the main disadvantage of CGO_ENABLED=0 is that the os/user package will not work. Because there is so much variety in how user information is represented, that package calls out to libc to get its information. With CGO_ENABLED=0, it will always return an error "Lookup not implemented".

Also, specifically on Darwin, the crypto/x509 package won't be able to access the root certificates on the system.

Before Go 1.5, the net package called into libc to do DNS lookups. However, in Go 1.5 the default DNS resolver is pure Go code, and it only calls into libc in unusual situations. Except on Darwin--on Darwin it again has to call into libc in all cases.

So if you don't care about os/user, and you aren't running on Darwin, and you don't actually use cgo yourself, then by all means set CGO_ENABLED=0 for all your builds. But for most people it's probably not the best default, at least not yet.
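
So code that touches os/user in a CGO_ENABLED=0 build has to treat failure as the normal case; something like this hedged sketch (the exact error text varies by Go version):

    package main

    import (
        "fmt"
        "os"
        "os/user"
    )

    func main() {
        u, err := user.Current()
        if err != nil {
            // In a CGO_ENABLED=0 build this path is expected, not exceptional.
            fmt.Println("user lookup unavailable, falling back to $USER:", err)
            fmt.Println(os.Getenv("USER"))
            return
        }
        fmt.Println(u.Username, u.HomeDir)
    }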


---

[–]Keblic 4 points 5 hours ago

Congrats on the release!

In your 6 years post you mentioned:

    Early next year we will release more improvements in Go 1.6, including ... an official package vendoring mechanism ...

I was hoping that this went beyond the GO15VENDOREXPERIMENT and actually included an official package management tool.

Are there any future plans to work towards this?


[–]enneff[S] 4 points 3 hours ago*

I think the next step is to nail down the vendor spec, a file format for describing dependencies.

I'm personally thinking about tools that can help with the greater vendoring story, but I don't have anything specific to discuss about it at this time.


---

article on how Golang channels are like Python generators:

http://www.informit.com/articles/article.aspx?p=2359758
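
The gist of the comparison, sketched in Go (my paraphrase, not the article's code): a function that returns a receive-only channel and feeds it from a goroutine reads a lot like a Python generator.

    package main

    import "fmt"

    // fib sends the first n Fibonacci numbers on the returned channel and then
    // closes it. Caveat: if the consumer stops ranging early, the producing
    // goroutine blocks forever unless you also wire up a done channel.
    func fib(n int) <-chan int {
        out := make(chan int)
        go func() {
            defer close(out)
            a, b := 0, 1
            for i := 0; i < n; i++ {
                out <- a
                a, b = b, a+b
            }
        }()
        return out
    }

    func main() {
        for v := range fib(10) {
            fmt.Println(v)
        }
    }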

this guy thinks it's misleading though: https://news.ycombinator.com/item?id=11211660 http://www.jtolds.com/writing/2016/03/go-channels-are-bad-and-you-should-feel-bad/