proj-oot-old-150618-ootTypeNotes2

interestingly, Curtis Guy Yarvin of Nock and Hoon and Urbit fame came to the same conclusion that i did w/r/t the difficulty of backwards inference in Haskell et al's type inference: http://www.urbit.org/2013/11/19/urbit-is-easy-ch10.html

also, the Golang folks came to a similar conclusion regarding everything-is-an-interface, although i wish to pursue that even further than they do.

--

python's built-in types: http://docs.python.org/3/library/stdtypes.html

see also http://docs.python.org/3/library/types.html#standard-interpreter-types

rpython's types: http://doc.pypy.org/en/latest/rtyper.html

--

idea from Hoon: "The general principle is that name resolution across a fork works if, and only if, the names resolve to the same axis on both branches." i.e. that's somewhat similar to saying that if you have a value whose type is a (structural) union type, you can access a part of the structure iff the structure in the structure tree 'above' that part is identical in all types in the union
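this rule can be sketched executably. In the following Python sketch (all names made up, with positional field order standing in for Hoon's axis numbering), a name resolves across a union of two structural types iff both branches put it at the same axis:

```python
def axes(ty, prefix=()):
    """Map each field name in a structural type (a nested dict) to its
    positional path in the tree -- a stand-in for Hoon's 'axis'."""
    out = {}
    for i, (name, sub) in enumerate(ty.items()):
        axis = prefix + (i,)
        out[name] = axis
        if isinstance(sub, dict):   # recurse into subtrees
            out.update(axes(sub, axis))
    return out

def resolvable_in_union(a, b):
    """Names accessible on a value of the union type (a | b):
    exactly those that sit at the same axis on both branches."""
    pa, pb = axes(a), axes(b)
    return {n for n in pa.keys() & pb.keys() if pa[n] == pb[n]}

# 'head' sits at the same axis in both branches, so it resolves across
# the union; 'tail' and 'size' each exist in only one branch.
a = {"head": "@", "tail": "@"}
b = {"head": "@", "size": "@"}
print(resolvable_in_union(a, b))   # {'head'}

# move 'head' to a different position and it no longer resolves
c = {"size": "@", "head": "@"}
print(resolvable_in_union(a, c))   # set()
```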

--

GTD/todo.txt @contexts vs. +tags; there is only one @context at a time (modal), used for filtering todo items to a sublist apropos to this context

relevant to type systems?

--

As Joel, I and a number of others discussed on #haskell, part of the problem is that haskell's type system is poor at typing the kinds of things he was doing. For example, given n message types and m entities accepting o combinations of message types, there is no good solution for typing this accurately while retaining pattern-matching even with GHC's extensions - it can be done, but at the cost of much unboxing and reboxing and a type syntax that requires an indirection (or a good coding convention) to figure out which message types're actually accepted. By Philippa Cowderoy at Tue, 2006-01-24 01:44

I'd like to hear more about this - I don't really understand the issue with static typing, particularly since Haskell infers types. What was wrong with the general "data Event = ..." approach, with one constructor for each event type sent? It seems like an Erlang tagged tuple directly corresponds to a Haskell algebraic type (the tag atom is the constructor, and then the other elements are the arguments). Does Joel go into this any more in-depth anywhere? By Jonathan Tang at Tue, 2006-01-24 03:28

What's wrong is that it gains you little more than dynamic typing does. You know you've got an event, but you've no clue if it's one of the ones you handle. By Philippa Cowderoy at Tue, 2006-01-24 15:03

--

...

I second your point about the robustness of the resulting Haskell code. I said in some other post that if it builds then it's probably right. My pet peeve, though, is that Haskell seduces you into coding for clarity and beauty. Haskell seduces you into coding the natural way. The natural way is concise since you can let type inference do the work, use lists, etc.

Then reality kicks in and you realize that the code is not running fast enough or consumes gobs of memory or spends 70% of its time collecting garbage. That's when you start putting strictness annotations all over the place unrolling your lets into cases and so on. It gets ugly.

Now, there are masters out there like Oleg Kiselev (Zipper OS) or Simon Marlow who probably don't have any trouble at all writing code that's both concise, beautiful and fast. I'm not one of them, not with Haskell. I find that doing things the natural way in Erlang hides no surprises and runs reasonably fast. Same thing about Scheme as I just set out to prove. By Joel Reymont at Tue, 2006-01-24 01:47

--

conjunctions and atomic negations only? or negations only in dataflow?

in more detail, we may want to restrict the logical operations that can be used to create new types out of combining existing types, to make type inference tractable. i.e. we could permit the creation of a type that says "X is of type A iff X is not of type B", but also permit "X is of type D iff X is of type B and X is also of type C", and also "X is of type E if X is of type B, or if X is of type C". But without further constraints elsewhere, that would make inference intractable in the general case.

so, some obvious constraints in this area are:

or, as noted above, we could leave this unconstrained and put constraints elsewhere
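one way to picture the combinations above: types as predicates, with the three connectives as predicate combinators (a hedged sketch; all names are made up):

```python
# atomic types as predicates
is_B = lambda x: isinstance(x, int)
is_C = lambda x: isinstance(x, int) and x > 0

# "X is of type A iff X is not of type B"
is_A = lambda x: not is_B(x)
# "X is of type D iff X is of type B and X is also of type C"
is_D = lambda x: is_B(x) and is_C(x)
# "X is of type E if X is of type B, or if X is of type C"
is_E = lambda x: is_B(x) or is_C(x)

print(is_A("hello"), is_D(3), is_D(-3), is_E(-3))   # True True False True
```

checking membership of a given value is easy; the intractability is in inference, where deciding questions like "is D uninhabited?" or "is D a subtype of E?" for arbitrary boolean combinations amounts to propositional satisfiability.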

---

todo read:

http://www.q-lang.io/

---

don't make these mistakes with generics and their implementation:

http://nomothetis.svbtle.com/smashing-swift

three issues:

---

related to the previous:

"

noobermin 1 day ago

Just to point out, the last two sound like it's due to the current implementation (compiler) being new, so it isn't so upsetting at all.

Now, not supporting functors is odd, though.

reply

...

And yes, we should expect it as "normal" for new languages, because it is what people have come to expect to have available - although some implementations leave a lot to be desired.

If we consider how C# implements Functors for example, we see that it requires special language support, where the compiler essentially pattern matches over your code, looking for a method named "Select", with a specific signature taking a generic type and a func. This implementation completely misses the point that Functors themselves, while useful, are not special - they are a specialization of a more general concept we have - typeclasses and higher-kinded polymorphism. C# also adds the ability to create Monads, Foldable, Comonads etc, using similar patterns - but you can't create your own typeclasses, and are left to the language designers to add a desired feature.

The decision to add them onto C# like this was made not without knowledge of this, but out of consideration of working with the existing language, which was designed without their consideration, hence, why they're a pretty ugly design compared to the simplicity of Haskell's implementation.

[1]:http://www.cs.tufts.edu/comp/150GIT/archive/mark-jones/fpca9... [2]:http://research.microsoft.com/en-us/um/people/simonpj/papers...

reply "
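for contrast with C#'s compiler-blessed Select pattern, here is a sketch of fmap-style dispatch that is user-extensible rather than baked into the compiler, using Python's functools.singledispatch. This is not a real typeclass system (no static checking; dispatch is on the runtime type of the first argument), but it shows the open-extension point the comment says C# lacks:

```python
from functools import singledispatch

@singledispatch
def fmap(container, f):
    # no instance registered for this type
    raise TypeError(f"no Functor instance for {type(container).__name__}")

@fmap.register
def _(container: list, f):
    return [f(x) for x in container]

@fmap.register
def _(container: dict, f):   # map over the values
    return {k: f(v) for k, v in container.items()}

print(fmap([1, 2, 3], lambda x: x * 10))   # [10, 20, 30]
print(fmap({"a": 1}, lambda x: x + 1))     # {'a': 2}
```

users can register new "instances" for their own types without touching the language; the cost is that nothing checks the functor laws or the signatures statically.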

---

hoon has a good idea:

orthogonal odor and size (generalize the addition of orthogonal size to type attributes)

(but in addition to the ortho 'size', having the 'odor' for atoms plus also structural typing sounds good)

--

the hoon tutorial explains typing as motivated by the desire to access nodes in a tree by name instead of by number

---

making the type syntax mirror/the same as the normal syntax:

http://www.haskell.org/haskellwiki/GADTs_for_dummies

---

http://blog.getprismatic.com/schema-for-clojurescript-data-shape-declaration-and-validation/

---

http://blog.cognitect.com/blog/2014/7/22/transit

---

support for 'laws' e.g. Haskell's 'monad laws'. then need to also support 'theorems'. combined with the support for homomorphisms, this should allow us to represent mathematics in the type system (although not necessarily to automatically deduce theorems about such things)
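as a reminder of what such 'laws' look like when made executable, here is a sketch checking the three monad laws for a minimal Maybe, with Python tuples standing in for data constructors (all names hypothetical):

```python
# a minimal Maybe: ('Just', x) or ('Nothing',)
NOTHING = ("Nothing",)
unit = lambda x: ("Just", x)          # aka 'return'

def bind(m, f):
    """>>= : apply f to the payload of a Just, propagate Nothing."""
    return f(m[1]) if m[0] == "Just" else NOTHING

f = lambda x: unit(x + 1)
g = lambda x: unit(x * 2) if x % 2 == 0 else NOTHING
m, a = unit(4), 4

assert bind(unit(a), f) == f(a)        # left identity
assert bind(m, unit) == m              # right identity
assert bind(bind(m, f), g) == bind(m, lambda x: bind(f(x), g))  # associativity
print("monad laws hold for this Maybe sketch")
```

a type system with 'laws' could attach exactly these propositions as proof obligations (or at least as property tests) to anything claiming to implement the Monad interface.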

--

the idea of haskell data constructors as a type-within-a-type (note that in Haskell implementations, 'dynamic pointer tagging' refers to tagging objects with (a) whether they are already evaluated, and (b) which constructor they are within their type, not which type they are, because being static the Haskell compiler has already type-checked that) is interesting and possibly a fundamental way to generalize/simplify. So now we seem to have at least 3 types of types: atomic types (Hoon's odors), composite types (C structs, Python objects, Python and Haskell and Lisp lists, Python dicts, Hoon's structural typing), and the choice of data constructor within a Haskell type (eg is this List a Cons or a NilList? is this Maybe a Nothing or a Just?). In addition, there are interface types vs representation types, which may be an orthogonal axis

---

java apparently has some sort of pluggable type system via annotations:

http://docs.oracle.com/javase/tutorial/java/annotations/type_annotations.html

---

note that now that we see the need for multiple orthogonal 'views' to be attached to a data value at one time, does this unify views in some way with type attributes?

---

faceted classification: for each (view?) facet, we have a separate subclass relation

---

" ahejlsberg commented on Oct 4

This is the starting implementation of Union Types as proposed in #805. It includes all parts of the core proposal, but not the "Possible Next Steps" (yet). The implementation reflects the decisions we made in the 10/3 design meeting.

@RyanCavanaugh The pull request doesn't yet include new test baselines (and thus the Travis build will appear to fail). The deltas all look good, but a number of tests are now incorrect and need to be modified. I'm wondering if I can get you to help out with that.

Anders Hejlsberg: Initial implementation of Union Types (e836fe1)

CyrusNajmabadi commented on the diff (src/compiler/parser.ts, + function parseType(): TypeNode {) on Oct 4:

right now it looks like it's not possible to have syntax for an array of union types. That seems like it could be problematic in the future. For example, say you have this in your .js:

function foo(x: string | number) {
    return [x];
}

This function will have a type that is nonexpressable with the syntax of the language (and thus would be a problem for .d.ts files, as well as anyone who wants to explicitly give typings to things). Ryan Cavanaugh Collaborator RyanCavanaugh added a note on Oct 4

We can write this as Array<string | number>

CyrusNajmabadi Collaborator CyrusNajmabadi added a note 27 days ago

Great point. We'll likely have to do that if we don't make additional syntax. As this is a perfectly reasonable workaround, i'd prefer this approach before adding more syntax. thanks! Doug drubino added a note 16 hours ago

It would be great to have union type aliases, to keep the length of type names manageable:

define number | Array<number> | Matrix<number> = DynamicMatrix;

Other values for the define keyword could be alias, type or class. Doug drubino added a note 16 hours ago

I imagine this will fall out as a default feature, based on the type inference system of TypeScript, but I think it bears repeating. Union types should have anonymous interfaces:

class A { commonToBoth: string; uniqueToA: string; }

class B { commonToBoth: string; uniqueToB: string; }

var either: A | B = new A();

// Valid because there is an anonymous interface containing the common members
either.commonToBoth

The anonymous interface would have the form:

interface `unnamable { commonToBoth: string; }

Or go wild and give this interface a name like:

Intersection[A|B]
Common[A|B]
Interface[A|B] or
Infc[A|B] for brevity

This does not have any use case I can think of, but perhaps I'm not thinking hard enough.

@DanielRosenwasser Yes I do. So consider this an upvote. =) Daniel Rosenwasser Collaborator DanielRosenwasser added a note 16 hours ago

@drubino you mean like in #957? :smiley:

This is a relatively old conversation at this point - you'll find quite a bit has happened recently with respect to union types, such as #914. Doug drubino added a note 16 hours ago

Haskell has union types denoted by Either A B. From conversations with members of the Haskell community, I've heard that this feature causes a lot of switch statements, leading to verbose, branching code. I'm not sure how to handle this, but perhaps one could create implicit type checks and blocks that use the order of the types as they're listed on the union type. This keeps the syntax light-weight.

Something like:

var x = A | B | C;

for x do { /Handle A } or { /Handle B } or { /Handle C }

When combined with the anonymous interface idea, along with syntax to combine or skip blocks if logic can be implemented by common members in certain cases, you might be able to enable the user to be quite succinct for common scenarios. Doug drubino added a note 15 hours ago

It might also be interesting to have a way of defining union types. Perhaps there is one master type that all others can be cast to, and the cast is specified in the definition. The compiler would then handle optimizations where casting can be proven to be unnecessary:

class DynamicMatrix contains number | Array<number> | Matrix<number> {
     super: Matrix<number>
     (x: number): Array<number> =  x => [x];
     (x: Array<number>): Matrix<number> = x => new Matrix([x]);
     (x: Array<number>): number = x => x[0] 
         when x.length === 1
         otherwise throw 'Array to number conversion is not possible';
     (x: Matrix<number>): Array<number> = x => x[0] 
          when x.numberOfRows  = 1
          otherwise throw 'Matrix to Array conversion is not possible';}

This is probably a bit more complex than the design team had in mind for union types, but I thought I'd throw it out there all the same.

I believe this, or features like it, would provide us with language constructs that would help to bridge the gap between dynamically typed languages and statically typed ones.

" -- https://github.com/Microsoft/TypeScript/pull/824

---

a comment on HackerNews inspires an argument against a language providing only the tools to build a type system (one of my ideas), rather than a good base type system plus extensibility:

" "One can of course say, "Oh, send only messages. We are all adults here. Let's just agree to be nice. Stop sharing mutable memory between goroutines!" But all it takes is "that guy" or "that library", doing it "that one time" and then there are crashes during a customer demo or during some critical mission. It crashes and then good luck trying to reproduce it. Setting watchpoints in gdb (or the equivalent Go tool), asking customers "Can you tell me exactly what you did that day. Think harder!" and so on." " -- https://news.ycombinator.com/item?id=8575755

in other words, if the language provides a static type system, then you can use libraries and securely assume that whatever sorts of bugs the language-provided type system eliminates, those kinds of bugs won't be present in the libraries. If the type system eliminates many sorts of bugs, then the library ecosystem will be less buggy.

by contrast, if the language only provides the tools for constructing a type system without providing an actual type system, then some libraries will use strong type systems and others will use weak type systems, and the robustness of libraries will be more variable

---

in the context of oot, the argument in the previous entry is already closed, b/c i have already decided that we want gradual typing.

however, maybe we could have a 'taint' system to make ppl very aware of when they are using a library that isn't strictly typechecked.

---

good example/food for thought about type system complexity:

http://stackoverflow.com/questions/1722726/is-the-scala-2-8-collections-library-a-case-of-the-longest-suicide-note-in-hist

---

usually the program only provides hints about which specific representation data type it wants, but i guess sometimes a specific representation data type may be mandatory (i think i had an example last night, but today i can't think of it); in this case we might want to use the ! sigil

---

! to demand a specific representation, rather than request

---

"make illegal states unrepresentable" -- Yaron Minsky

---

today i woke up with the phrase 'manifestly bound theorems' in my head. This doesnt seem to be on the internet. Neither does "manifest binding", so i guess i made it up. Can we think of a cool meaning?

How about, "theorems that only apply for a given segment of source code, where the relevant source code is explicitly labeled as such"? That is, having explicit labels in source code whose sole purpose is to be referenced by the theorem (because the labels mark the region of source code where the theorem applies)

i guess an obvious example would be a theorem proving a loop invariant, which is bound to the region of the loop.

type inference and typechecking is also a theorem.

and i guess these (typechecking and theorem-proving) go together if we want our extensible typechecking to do stuff like in Rust.

but if that's what is meant, why not just call them 'scoped theorems'?

i guess a better name is 'region labels': naming a (static) region of code rather than just a specific point in code

note: the regions dont have to be a (potential) scope to be labeled, eg begin-label-a; if x then {y1; end-label-a; y2; y3;} else {z1; z2; z3; end-label-a;}

point invariant: a proposition that holds whenever a specific point in code is reached
region invariant: a proposition that holds whenever any point in the region is reached

(note: what i am calling a '(potential) scope' in the previous paragraph is like the biological concept of a 'clade'; it is a region that includes all and only the descendants of some single node)

theorems can then themselves be 'scoped' to regions, meaning that they can assume things that are only true in that region, and their conclusions may only apply in that region.

also, preconditions/postconditions (like eiffel's require/ensure: https://www.eiffel.com/values/design-by-contract/introduction/ ) are a contract (or perhaps also a theorem, in cases where it is proven) where assumptions are made at one border of a region, and promises contingent upon those assumptions are given at another point

(notice the modal demand/promise operator; note that modal logic here is just like the mode bits in Oot Assembly Notes 4 syntax!)
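a hedged sketch of Eiffel-style require/ensure as a Python decorator (names invented); here the 'region' is the function body, with the precondition assumed at entry and the postcondition promised at exit:

```python
import functools

def contract(require=None, ensure=None):
    """Attach a precondition on the argument and a postcondition
    relating argument and result (single-argument functions only)."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapped(x):
            if require is not None:
                assert require(x), f"precondition failed for {x!r}"
            result = fn(x)
            if ensure is not None:
                assert ensure(x, result), f"postcondition failed for {x!r}"
            return result
        return wrapped
    return deco

@contract(require=lambda x: x >= 0,
          ensure=lambda x, r: abs(r * r - x) < 1e-9)
def sqrt_float(x):
    return x ** 0.5

print(sqrt_float(9.0))   # 3.0
# sqrt_float(-1.0) would fail the precondition at the region's entry border
```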

---

i think this is a key (if 'obvious') insight, but i'm not sure enough yet to promote it to the 'ootTypethoughts' page:

generic types are functions (a type variable means a fn is involved), and typeclasses are types of types

note dynamic logic also generalized modal operators in this way (in passing) (to review: in dynamic logic, we create modal operators to make statements of the form "if action A is taken, the proposition P will be true", written "<A>P". <x> is therefore a modal operator with a variable in it. We might have written it "<>_x")

so a generalization is to replace a constant by a variable (creating a fn), and then introducing types for this variable

anything 'with a variable in it' is a function; you 'call' a function by substitution, yielding an expression for the result, which then must be reduced
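Python makes the 'generic types are functions' reading almost literal: a parameterized type is formed by applying a type constructor to type arguments, and the application can be wrapped in an ordinary function (hypothetical names):

```python
# applying a 'type-level function' to type arguments yields a concrete type
def Pair(a, b):
    return tuple[a, b]

assert Pair(int, str) == tuple[int, str]

# a 'typeclass' is then a type of types: here, just a predicate over types
def is_numeric_type(t):
    return issubclass(t, (int, float, complex))

print(Pair(int, str))          # tuple[int, str]
print(is_numeric_type(float))  # True
```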

---

imo covariant and contravariant are most easily understood in terms of the Liskov substitution principle combined with pre- and post-conditions; a subclass can weaken the precondition and strengthen the postcondition; input types are preconditions and output types are postconditions; so input types are contravariant and output types covariant. in some cases it should be possible to isolate a co- or contra-variant subset from an invariant interface, eg freeze an array and make it readonly for covariance by only exposing those functions that don't take any inputs of the parameterized type
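the substitution reading can be demonstrated at runtime (a sketch; classes are invented): a handler with a weaker input requirement and a stronger output guarantee can stand in wherever the original handler type was expected:

```python
class Animal: pass
class Cat(Animal): pass
class Kitten(Cat): pass

def use_cat_handler(handler):
    """Expects a function of type Cat -> Cat."""
    out = handler(Cat())
    assert isinstance(out, Cat)   # the caller relies on this postcondition
    return out

# contravariant input (accepts any Animal: weaker precondition),
# covariant output (returns a Kitten: stronger postcondition)
def animal_to_kitten(a: Animal) -> Kitten:
    return Kitten()

print(type(use_cat_handler(animal_to_kitten)).__name__)   # Kitten
```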

---

'implementations' can be native (implemented in native code), or not. Implementations can have dependencies on other implementations. Note that, although implementation code inheritance is one type of dependency, this is different from subtyping (these two things are conflated in traditional OOP, in which subtyping is conflated with 'class inheritance').

Type subtyping in Oot applies to types (interfaces), not to implementations; they are like (some kinds of) typeclass Contexts in Haskell. Eg any thing that supports the Float type also supports the Number type; anything that supports the Ord type also supports the Eq type.

Another concept related to subtyping is embedding. Eg the Integers are embedded in the Reals.

Another concept related to subtyping is when a class exposes one or more views as Patterns for structural typing (pattern matching) ('adt's, algebraic data types), and when some of the subnodes in this view are types. For example, the Rational class may expose the View of itself as "a / b", where a and b are Ints. This is also like a 'typeclass context' in Haskell, in that in general we have a Pattern like "a / b" with type variables in it (in this case 'a' and 'b'), and constraints on those type variables (in this case, that a is in the (typeclass) Int, and that b is in the (typeclass) Int). Recall that 'typeclass' in Haskell is just called a 'type' here, as all Oot types can have subtypes (but in a heterarchical structure, eg a type may be a subtype of multiple parent types) and all Oot types can be implemented in multiple ways.

Implementation dependencies are like dependencies in a package management system. Implementations can depend on types (all that is required is any implementation of the desired type), or 'concretely' on other implementations (we demand a specific implementation of a type, because we are going to break into its internals). For example, the Rational implementation can depend on the type Int (meaning that it makes use of Ints internally), or it could be eg a FFI binary containing optimized x86 code which calls subroutines from another binary package containing an implementation of Ints. Outside of FFI, a concrete implementation dependency is one which breaks encapsulation by accessing private variables in another implementation.

does Oot support concrete implementation dependencies aside from FFI at all? If so, does it force you to specify a specific version of the concrete dependency, to prevent the upstream author of the dependency from worrying about breaking code by changing Private variables? Perhaps it only supports concrete dependencies within the same module?

---

todo review haskell typeclass/java interface distinction again? it was something about existential types or something, somewhere on the haskell wiki perhaps.

---

AADTs: Algebraic Algebraic Data Types

eg saying that Rational is a type such that an expression involving the rational is equivalent to an expression of the form 'a / b' where a and b are Ints (and perhaps with the implicit mapping a <--> r.a , b <--> r.b, where .a and .b are fields of a rational instance r).

---

closed data types are in fact very useful, because if all you know is that you have a value of an open data type, you really don't know anything: even for any property that all known instances of the type have in common, later you could discover another instance that has the negation of that property.

closed data types are eg useful for letting the compiler exhaustively check cases

(but if some cases have 'more data than expected' in them, that's fine -- eg the open data problem where you replace Cons(x,y) with Pair(Cons(x,y), some_metadata), but such that you can still match it against ordinary Cons(x,y) and bind x,y, ignoring the some_metadata; you still 'learned something' from the closed list of cases. the key is that the part that is 'open' is in some sense orthogonal to what you are learning by knowing the closed list of cases; in this example, the Pair(Cons(x,y), some_metadata) really is like a Cons of a list, so this datastructure really can be regarded as being the union of exactly its EmptyList and Cons cases)

note that pattern matching on Haskell (ADT) datatypes achieves a kind of 'ad-hoc' polymorphism in a slightly different way from typeclasses; in a typeclass, the ad-hoc cases are defined in the instance (all together), but with pattern matching, the ad-hocness is defined at each use site (it has 'use site ad-hoc openness', i guess).

we want to have pattern matching, like Haskell, but we also want to 'open' a way to define new ways to pattern match on existing data types.

so, what we'll do is allow each View of a type to define its own ways to pattern match (its own set of cases (data constructors), etc). The (closed) set of cases available in a View is part of the interface signature of (one of the) interfaces supported by the View; eg its part of the typeclass signature, in Haskell terms.

in a way, this lets us support closed data types and open at the same time as follows; within a given View, we can have a closed list of cases (data constructors), but then we can switch to a different view and have a different list. That doesn't sound so different from Haskell's data types vs typeclasses; but what we're saying is that the SAME data value can masquerade as different data types in different places; so you can't eg create a parsing library whose signature demands linked-list-strings and rejects ByteStrings, because the ByteStrings can always be given a facade of a linked-list-string View.
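a minimal sketch of the facade idea (names invented): a single uncons 'View' gives bytes (a packed representation) the head/tail case analysis of a linked-list string, so code written against the cons/nil cases accepts either representation:

```python
def uncons(s):
    """Linked-list View of a string-like value:
    return (head, tail), or None for the 'nil' case."""
    return None if len(s) == 0 else (s[0], s[1:])

def length(s):
    # written purely against the closed cons/nil case list of the View,
    # with no knowledge of the underlying representation
    n = 0
    while (cell := uncons(s)) is not None:
        _, s = cell
        n += 1
    return n

# the same code consumes a str, a bytes value, or an empty value
print(length("abc"), length(b"abc"), length(""))   # 3 3 0
```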

---

so, note that each View can support more than one interface (typeclass).

---

the type system should allow the representation of units (feet, centimeters), types of units (length of space, duration), processing stages (unsanitized string from user input, sanitized string; uninitialized/initialized), program-specific message subtypes
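the 'processing stages' case can be approximated today with typing.NewType, which mints a distinct static type with zero runtime cost; a checker like mypy would flag passing an UnsanitizedString where a SanitizedString is required, though nothing is enforced at runtime (a sketch, names invented):

```python
from typing import NewType
import html

UnsanitizedString = NewType("UnsanitizedString", str)
SanitizedString = NewType("SanitizedString", str)

def sanitize(s: UnsanitizedString) -> SanitizedString:
    return SanitizedString(html.escape(s))

def render(s: SanitizedString) -> str:
    return f"<p>{s}</p>"

raw = UnsanitizedString("<script>alert(1)</script>")
print(render(sanitize(raw)))   # <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
# a static checker rejects render(raw); at runtime both stages are plain str
```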

---

in golang,

'arrays' are of constant length (the length is in the type) (like Python tuples)

'slices' are pointers to arrays. However, practically, slices operate as variable-sized arrays.

how to generalize this? is this just the typical way that parametric polymorphism (generics) works? i think it's a little different, eg we are not literally having a different method lookup table for every possible array length
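one way to see what 'length in the type' could mean operationally: a cached type-constructor that mints one class per length, so length mismatches become type errors at construction time rather than index errors later. This is only a sketch, and it does literally build a class (and hence a method table) per length, which is exactly the cost the note above wonders about:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def Array(n):
    """Type constructor: Array(3) and Array(4) are distinct types."""
    class FixedArray(tuple):
        length = n
        def __new__(cls, items):
            items = tuple(items)
            if len(items) != n:
                raise TypeError(f"expected {n} items, got {len(items)}")
            return super().__new__(cls, items)
    FixedArray.__name__ = f"Array{n}"
    return FixedArray

a = Array(3)([1, 2, 3])
print(type(a).__name__, a.length)   # Array3 3
assert Array(3) is Array(3)         # cached: one class per length used
assert Array(3) is not Array(4)
# Array(3)([1, 2]) raises TypeError
```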

--

vilhelm_s 2 days ago

But perhaps a better type system can still be helpful. E.g. F# supports units of measure, so you can declare 'dollar' as a unit, and write

    let calc_total (meal_cost : float<dollar>) (tax : float) (tip : float) =
       meal_cost * (1.0+tax) * (1.0+tip)

This way it is at least clear from the types that the numbers are meant to be multiplied, not added.

reply

S4M 2 days ago

I never used F#. In your example, would the code:

    meal_cost + tax

trigger a compiler exception?

reply

vilhelm_s 2 days ago

Yes, you get

    The type 'float' does not match the type 'float<dollar>'

There is an online typechecker, so one can try out F# without installing it: http://www.tryfsharp.org/Learn/scientific-computing#units-of...

reply

S4M 2 days ago

And how about:

    meal_cost * tax

?

This one makes sense in the real world, so I suppose an advanced type system allows the programmer to specify what operations are legit or not across different types.

reply

vilhelm_s 2 days ago

One could imagine systems for that, but units of measure doesn't do any customization of different operations. It's just unit checking for arithmetic, like in high-school math.

reply
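the F# behavior above can be imitated with a small runtime-checked Quantity class (a sketch; F# checks units statically and erases them, whereas this carries them around at runtime):

```python
class Quantity:
    def __init__(self, value, unit=""):
        self.value, self.unit = value, unit

    def __add__(self, other):
        # addition requires matching units, like float<dollar> + float failing
        if not isinstance(other, Quantity) or other.unit != self.unit:
            raise TypeError(f"cannot add {self.unit!r} and "
                            f"{getattr(other, 'unit', 'dimensionless')!r}")
        return Quantity(self.value + other.value, self.unit)

    def __mul__(self, other):
        # multiplying by a dimensionless number is fine and keeps the unit
        if isinstance(other, (int, float)):
            return Quantity(self.value * other, self.unit)
        raise TypeError("unit-times-unit omitted in this sketch")

meal_cost = Quantity(20.0, "dollar")
tax, tip = 0.10, 0.20

total = meal_cost * (1.0 + tax) * (1.0 + tip)
print(total.value, total.unit)   # ~26.4 dollar
# meal_cost + tax  -> TypeError, mirroring the compiler error quoted above
```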

---

"Elm language limitations have become a real problem for me. The type system is F#-like: too limited to define Applicative, Monad, or any abstraction or data type that has a type parameter whose kind is not *. Also no higher-rank types or existential types. You can sometimes survive without these things, but eventually you reach a point where you need them. I’ve hit that point." -- https://pchiusano.github.io/2015-04-23/unison-update7.html

--

"A question to think about for type classes is, what is the best way to reuse variable names? Type classes, implicit arguments, and module functors all answer this question, but each comes with some unfortunate tradeoffs. My view here is that there is no need to rush. When our community collectively needs this kind of feature, we will be in a much better position to evaluate the trade-offs between these approaches for the problems we are facing in practice.

...

Until that time, I'd like to pursue the "scrap your type classes" approach which allows you to do explicit dictionary passing through some combination of higher-kinded polymorphism and rankN types. To me this is a minimal set of non-controversial features that get you a lot of what you want, so I think it makes sense to explore this route thoroughly before picking our poison with type classes, implicit args, or module functors. " -- https://groups.google.com/forum/#!msg/elm-discuss/oyrODCgYmQI/T2I_8L-AL6EJ

---

" Java’s type system simply isn’t very powerful or useful. Java actually contains two type systems, meshed together into an unsettling chimera. One of its type systems is bottom-up and vaguely algebraic, consisting of a few primitive types (integers of varying sizes, and single- and double-precision floating point numbers) and arrays of existing types. That’s it. It has int and double[] and char[][] but cannot represent anything more interesting (tuples, strings, variants, parameterized types). To represent those more structured types one has to use Java’s other type system, which is top-down– everything is a subtype of “Object”, and each variable stands not for the object itself, but to a reference to what might be an object of that type, or none (null).

The nastiness of this cannot be overstated. Java’s notorious NullPointerException, a runtime failure associated with dereferencing a null pointer, is both a common and (from a debugging perspective) a usually-quite-useless error. Java’s type system lacks the power to verify that a String variable, at compile time, will actually hold a string, and not null, a value that must be handled specially by any method that takes a String (ditto, for any other object type) as input. The string type is not available; you must make do with what is, in fact, a (string option) ref— a mutable reference that may hold a String. " -- https://michaelochurch.wordpress.com/2010/10/29/the-next-secret-weapon-in-programming/

---

why Go doesnt use generics:

rsc 17 days ago


I'm sorry, but no: Generics are a technical issue and are not a political one. The Go team is not against generics per se, only against doing things that are not well understood and/or don't work well with Go.

There are deep technical issues that must be solved to fit the idea of generics into Go in a way that works well with the rest of the system, and we don't have solutions to those. I wrote on my blog about one issue years ago (http://research.swtch.com/generic), but there are others too. Even supposing you get past the problem on that page, the next thing you would run into is how to allow programmers to omit type annotations in a useful, easy-to-explain way. As an example, C++ lets you write make_pair(1, "foo") instead of make_pair<int, string>(1, "foo"), but the logic behind inferring the annotations takes pages and pages of specification, which doesn't make for a particularly understandable programming model, nor something the compiler can easily explain when things go wrong. And then there's a princess in another castle after that one I am sure.

We have spoken to a few true experts in Java generics and each of them has said roughly the same thing: be very careful, it's not as easy as it looks, and you're stuck with all the mistakes you make. As a demonstration, skim through most of http://www.angelikalanger.com/GenericsFAQ/JavaGenericsFAQ.ht... and see how long before you start to think "was this really the best way to do this?" (For example, http://www.angelikalanger.com/GenericsFAQ/FAQSections/TypePa..., but note that the latter page is only one part of the FAQ, not the entire FAQ.)

To be very clear, we acknowledge this fact: there are definite disadvantages to not having generics. You either use interface{} and give up compile-time checking or you write code generators and complicate your build process. But there are also definite disadvantages to generics as implemented in existing languages, and there is a very large advantage to not compromising today: it makes adopting a better solution tomorrow that much easier.

As I said in the interview at http://www.pl-enthusiast.net/2015/03/25/interview-with-gos-r...:

> Go is more an engineering project than a pure research project. Like most engineering, it is fundamentally conservative, using ideas that are proven and well understood and will work well together. The research literature’s influence comes mainly through experience with its application in earlier languages. For example, the experience with CSP applied in a handful of earlier languages—Promela, Squeak, Newsqueak, Alef, Limbo, even Concurrent ML—was just as crucial as Hoare’s original paper to bringing that idea to practice in Go. Programming language researchers are sometimes disappointed that Go hasn’t picked up more of the recent ideas from the literature, but those ideas simply haven’t had time to pass through the filter of practical experience.

I believe generics is one of those ideas. It certainly needs at least one more iteration, possibly a few more than that.

---

types as just a bundle of assertions (this would unify eg graph structural types/graph patterns, and oop classes)

---