proj-oot-ootTypeNotes6

if 'array' is a type constructor (eg if Array(Int), or Array Int in Haskell syntax, is an array of ints, then 'array' is like a function that takes an argument and returns the type of an array of that thing), then the generic type 'Array' or 'Array(*)' can also be seen as either a pattern, or as a typeclass (the set of types matching that pattern). (aside: however, heterogeneous arrays may be something distinct, because unless there is an 'Object' or 'all' or 'root' or * or DYN type, there is no type that can be substituted for the * in the pattern to make the type of heterogeneous arrays).

so, right there we have three contrasting views on what a type is; add the view of a type as a set of values and we have four. is a type: (1) a function from types to types (a type constructor), (2) a pattern, (3) a set of types (a typeclass), or (4) a set of values?
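to make the first view concrete, here's a minimal Haskell sketch (Array and Container are made-up illustrative names, not stdlib types):

    {-# LANGUAGE KindSignatures #-}

    -- view 1: a type constructor is a function on types; Array has kind * -> *
    data Array (a :: *) = Array [a]

    intArray :: Array Int      -- applying Array to Int yields a concrete type
    intArray = Array [1, 2, 3]

    -- views 2/3: 'all Array-like types' as a class over things of kind * -> *,
    -- ie the set of type constructors matching the pattern
    class Container (f :: * -> *) where
      size :: f a -> Int

    instance Container Array where
      size (Array xs) = length xs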

---

Often in low-level or minimalist languages, one sees constraints on fundamental composite data structures.

the common ones:

---

C++ concepts (proposed future language feature) are pretty much like typeclasses:

https://en.wikipedia.org/wiki/Concepts_(C%2B%2B)

---

http://fantom.org/forum/topic/1433

" Lastly we come to quality - do type system features really improve the quality of programs by catching bugs at compile time? Lets look at each feature:

    Nullable Types: compiler and runtime ensures an object's non-nullable fields are assigned - this definitely improves quality and its nice to have "fail fast null" errors, but it is all really just nice enforcement of the real value in capturing programmer intent
    Const Types: allowing compiler to fully ensure an immutable object is locked down when the constructor completes has huge value in catching bugs. And more importantly the ability for const types to interact with the concurrency model to ensure mutable state is not shared across threads is by far the #1 advantage Fantom has in quality over Java/C# like programming languages. Furthermore if you think actors is only a "library problem", then you don't appreciate how deeply Fantom's type system and concurrency model are intertwined
    Generics: anyone from the Java community has likely had years of experience pre-1.5 and pre-generics. Do type bugs occur? Yes now and then, but can anyone really make the case that they happened often or were difficult to debug?

Or put another way: how many hours of your life have you spent debugging concurrency bugs? How many hours of your life have you spent debugging type bugs in your collections? Me personally: I've wasted enough hours in my life with concurrency bugs that I'd much rather have the type system address concurrency than collections. Fantom's type system directly addresses concurrency in a much deeper way than even Scala.

...

 I want to draw attention to the fact that nullable types and const types are very important features that address what I perceive as the real quality issues in programs. These features already impose complexity in their interaction with functions, it-blocks, constructors, static fields, etc. So while I would like to get generics into Fantom too, I would never trade const types for generic types.
 "

---

http://fantom.org/forum/topic/1433

" There are different ways to look at parameterized types, but they way I like to look at them is as a "type factory". In Fantom we can define a type as a set of slots (fields and methods) with specific signatures. A parameterized type is where types used in the signature of its slots is left as a variable to be filled in later. The basic abstraction is not much different than a function where the arguments are left as variables to be filled in when you call the function. Where parameterized types tend to explode in complexity is how they interact with sub-typing in an object-oriented language. Taming this complexity is the prime mission of figuring out generics for Fantom. "

" Lets consider a simple example using Java syntax for further discussion:

class Foo<T> { Void bar(T t) }

Any parameterized type solution would at a minimum need these basic features:

    a syntax for declaring a generic type and use of the parameterized types in slot signatures
    a syntax for instantiating a generic type at a usage site
    well defined rules for subtyping

Advanced features to also consider:

    constraints on the type parameters; in our example how can I specify that T must be a subclass of Baz?
    variance rules for the type parameters; in our example above suppose we have created a Foo<Num>, should we allow bar() to be called with an Int or an Obj?
    variance of the generic itself: should Foo<Int> be considered a subtype of Foo<Num> ? How about Foo<Obj> ?
    both Java and C# allow a single method to declare generic parameters which is useful for utility methods

"

" Random Thoughts

As I said at the start, I have no formal proposal. However, after noodling on this problem I have lots of random ideas that I want to jot down (some may seem from left field), but at this point you never know where a random source of inspiration might spring...

Sub-typing is where generics get sticky, especially with the model used with OO subtyping by inheritance. This is where covariance and contravariance issues rear their ugly head (as discussed at length about Fantom's List type). So I've been thinking a lot about how we could simplify or side-step this issue.

Are generics useful if they can only be used on final classes?

How would generic sub-typing rules mess with Fantom implicit casting? Even with awesome generic type handling, implicit casting is still extremely useful for reflection code (but maybe we could change implicit casting back to only work with Obj, or have reflection use a new "wildcard" type)

I'm quite enamored with the idea of separating types from implementations - a great example I really like is Clojure protocols, and Go interfaces are sort of in the same vein. Suppose a generic type like List automatically generated both an interface and a default implementation. Maybe parameterized types could then be "attached" to objects which fit that interface. Maybe it can even happen at runtime as a sort of structural typing feature. Can this path lead to simplification of the sub-typing problem? I can't fully get my head around it, but I keep coming back again and again to this idea as a potential way to both simplify the type system but also make it more flexible.

Are subtype checks something that gets delegated to runtime, not compile time? Like some of the stuff Stephen has been discussing with "explicit conversions"

Maybe Fantom types are true code generation templates that just make an easy way to create a new types by actual syntax substitution - touched on in topic #1420. Obviously very different direction and helps nothing for the subtype issues, but interesting alternative

C# originally didn't have generics variance. Does anyone have experience with that regarding various tradeoffs?

Eiffel has an interesting feature called "anchor" types where a method parameter type can refer to another field/method's type that might be refined in a base class. I haven't thought about that much, but it's a different way to look at the problem. Interestingly, from what I can glean, Eiffel always treats Foo<B> as a subtype of Foo<A> if B is a subtype of A (it seems to ignore potential contravariance issues).

So anyways, just some random thoughts. Short version: I think generics could be cool, but we have to come up with a design that doesn't impose too much complexity and can integrate with existing type system features, constructors, etc. "

"

    In Fantom we can define a type as a set of slots (fields and methods) with specific signatures.

That's dicey. To be a little less vague, what you have just done is defined a type in Fantom as being equivalent to a record type. There's nothing wrong with this, except it flies in the face of nominal typing (which is what Fantom uses). Using record types for everything is actually the same as using structural types, and that's not a change you want to leap into. Scala has an interesting implementation of structural typing which basically maps them as anonymous interfaces which are implicitly inherited by all conforming types. This allows a limited form of structural typing to work in a predominantly nominal system, but it is still limited. An example of a very useful bit of Scala code which does not (and cannot) compile:

type Foo[A] = { def bar(a: A): String } "

" Eliminating nominal subtyping does indeed solve the majority of the problems with parametric polymorphism in an object-oriented context (see: OCaml). However, some complexities still remain (mostly stemming from Fantom's class-based nature rather than prototype-based). More importantly though, I don't think it's possible for Fantom to go to a purely structural type system at this point. Even ignoring backwards compatibility, there's the huge looming platform issue: structural typing is dog slow on both the CLR and the JVM. "

"

    C# originally didn't have generics variance. Does anyone have experience with that regarding various tradeoffs?

Ah, variance! This is one of the most-maligned features of Scala's type system, so be prepared for a world of criticism if you go this route. Given Fantom's pre-existing philosophy of "just make it work" (e.g. implicit casting), I would advise not implementing any form of variance (neither call-site or declaration-site). Instead, assume everything is co-variant (this is the common case anyway) and let the casts fail. Erik Meijer has a talk where he claims this is a much more rational and pragmatic way to handle the issue. I'm not sure that I agree with him, but I think it does fit better with the overall Fantom design.

If you do decide to implement variance, then you really need to implement both call- and declaration-site variance. To be clear, call-site variance is what Java has. Declaration-site variance is what C# has. Scala has both. If you only implement call-site, then you end up married to existential types and constantly writing code like List<? extends JComponent>. If you only implement declaration-site, then the situation is even more comical. Consider:

Scala code, assuming no type quantification (decl-site variance):

    class List[+A] {                 // covariant in type A
      def insert(elem: A): List[A]   // will not compile!
    }

Provably, the only way to implement the insert method here is to have a type parameter B >: A defined on insert. C# doesn't allow this, and as a result, its variance is nigh pointless.

So, variance is a rat's nest that I think you should probably avoid altogether. If you choose not to, then be sure to implement both sides of the equation (call- and decl-site), because one without the other is painful and restrictive. "

---

worth summarizing some of the conclusions in the previous posts:

    a parameterized type is like a "type factory"; sub-typing is where generics get sticky (co/contravariance)
    structural typing would side-step much of the sub-typing problem, but is dog slow on the JVM and CLR
    if you implement variance, implement both call-site and declaration-site variance; either one alone (Java, C#) is painful and restrictive
    alternatively, skip variance entirely, assume covariance, and let casts fail at runtime (the Erik Meijer position)

---

i don't agree with the following but it's an interesting point of view:

"

qualidafial Fri 4 Mar 2011

My vote is for a simple generics system with similar semantics to what we can do already with List:

fansh> Num[] nums := [1, 2.5, 3.0f]
[1, 2.5, 3.0]
fansh> nums.each { echo(it) }
1
2.5
3.0
fansh> Obj[] objs := nums
[1, 2.5, 3.0]
fansh> objs.each { echo(it) }
1
2.5
3.0
fansh> Int[] ints := nums
[1, 2.5, 3.0]
fansh> ints.each { echo(it) }
1
sys::CastErr: java.lang.ClassCastException: java.math.BigDecimal cannot be cast to java.lang.Long
  fan.sys.List.each (List.java:529)

Just let me cast to what I want, and let it blow up in my face if I make a mistake. Or, if the cast is invalid e.g. casting a Foo<Str> to Foo<Int>, then give me a compiler error. Anything more than this just turns the type system into an obstacle, and makes me want to tear out generics entirely. To wit:

    interface Builder<This extends Builder<This>> {
        This withFoo(String foo);
        Thing build();
    }

    interface FancyBuilder<This extends FancyBuilder<This>> extends Builder<This> {
        This withBar(int bar);
    }

Now try to go implement the concrete class for that. Warning: this exercise may make you try to hurt yourself; lock away any sharp objects beforehand.

I do think it would be valuable to enforce some minimum type for each type parameter:

WidgetBuilder<T : Widget> { T build() }

But that is as far as I'd go with bounded parameters.

"

---

" One thing I often run into when switching from C# to Java, is how one becomes painfully aware of the duality of the Java type-system. Tooling and developers can grok with MultiComparable? : Comparable[X], Comparable[Y] but the compiler itself can not. "

---

" Would parametric polymorphism in Fantom be reified or by erasure? ...

djspiewak Fri 4 Mar 2011

Beware! Reified generics don't play well with variance in any form. It's even worse once you throw in higher-kinded types, though I doubt that's a feature you would want in any case. Additionally, if you want Fantom generics to be at all interoperable with JVM generics, you're going to need to implement them using erasure.

Erasure isn't so bad in general. Java's implementation is particularly troublesome, made all the more so by existential types (which cannot be reified at all) and by the JVM's abysmal array semantics. Scala does a very nice job of smoothing over the inadequacies with a bit of compiler magic (the ClassManifest typeclass). This magic allows you to do almost all of the things you can do with fully reified generics (even creating arrays!). You still can't do things like inheriting from Comparable<X> and Comparable<Y>, but that's a very dangerous thing to be doing regardless, so I think it's actually a good thing that Scala doesn't allow it.

timclark Fri 4 Mar 2011

    Scala does a very nice job of smoothing over the inadequacies with a bit of compiler magic (the ClassManifest typeclass).

In my experience with Scala generics they almost let me write type safe code like I can in Haskell or ML and then erasure ruins it for me! For example the last time I programmed in Scala I couldn't pattern match on a List[Toffee] without suppressing a warning. I'd gladly sacrifice being able to call my code from Java if when I need a Pizza[] (or List[Pizza] or List<Pizza>) my code won't accept Obj[] (or List[anyRef] or List).

djspiewak Fri 4 Mar 2011

If you have a ClassManifest in scope, then the pattern matching is actually able to handle matching on things like List[Toffee]. It's not behavior you should rely on though.

brian Fri 4 Mar 2011

    Would parametric polymorphism in Fantom be reified or by erasure?

We would probably stick with the existing design, which is a hybrid. In Java bytecode they are erased and the bytecode generator inserts appropriate casts (just like Java). However we do keep track of the proper type for reflection:

fansh> [2, 3f].typeof
sys::Num[]

"

---

" Namespace versus Deployment

Java and .NET to a lesser degree separate the concepts of namespace and deployment. For example in Java packages are used to organize code into a namespace, but JAR files are used to organize code for deployment. The problem is there isn't any correspondence between these concepts. This only exacerbates classpath hell - you have a missing class, but the class name doesn't give you a clue as to what JAR file the class might live in.

This whole notion of type namespaces versus deployment namespaces does offer flexibility, but also seems like unnecessary complexity. Fantom takes a simple approach to managing the namespace using a fixed three level hierarchy "pod::type.slot". The first level of the namespace is always the pod name which also happens to be the unit of deployment and versioning. This consistency becomes important when building large systems through the assembly of pods and their types. For example, given a serialized type "acme::Foo", it is easy to figure out what pod you need. "

---

https://internals.rust-lang.org/t/haxe-influences/2860 concludes that Rust's type system wasn't influenced by Haxe, but that they shared a common ancestor and may have had some convergent evolution to boot

---

" Haxe contains a type system which is both strong and dynamic. The compiler will check types implicitly and give compile-time errors, but it also enables the programmer to bypass type-checking and rely on the target platform's dynamic type-handling. "

---

that's an interesting idea, to rely on the target platform's dynamic type handling rather than re-implementing it ourselves.

---

the complication of (contra,co) variance (at declaration site and call-site), as noted above, is sticking in my mind. It reminds me that those type theory people seem to think that 'subclassing' is hard to mix with 'higher order types'; i guess the complicated variance stuff only comes into play with generics, so that's probably what they meant. I guess we want generics and we want subclassing so we have to think about this.

---

remember to read up on those problems with gradual typing (where it seemed like there were 'local minima' after you start giving type annotations but before you finish)

---

a research group did some work on somehow using graph shape annotations to check program correctness or safety:

this is the same group that did the graph language GP (see plbook)

---

the Morte syntax is actually pretty cool:

http://www.haskellforall.com/2014/09/morte-intermediate-language-for-super.html

-- mapid.mt

(    \(List : * -> *)
->   \(  map
     :   forall (a : *)
     ->  forall (b : *)
     -> (a -> b) -> List a -> List b
     )
->   \(id : forall (a : *) -> a -> a)

    ->   \(a : *) -> map a a (id a)
)

-- List
(\(a : *) -> forall (x : *) -> (a -> x -> x) -> x -> x)

-- map
(   \(a : *)
->  \(b : *)
->  \(f : a -> b)
->  \(l : forall (x : *) -> (a -> x -> x) -> x -> x)
->  \(x : *)
->  \(Cons : b -> x -> x)
->  \(Nil: x)
->  l x (\(va : a) -> \(vx : x) -> Cons (f va) vx) Nil
)

-- id
(\(a : *) -> \(va : a) -> va)

this is defining List as in the following Haskell code:

type List a = forall x . (a -> x -> x) -> x -> x

and map as:

map :: (a -> b) -> List a -> List b
map f l = \cons nil -> l (\a x -> cons (f a) x) nil

(this is a trick used by Morte called 'Boehm-Berarducci encoding' that replaces lists with folds over the list; apparently (with some modification, like the \cons nil -> in front of 'map') programs using these surrogate lists can compute the same thing as programs using real lists, but this allows you to avoid recursion)

Note that each function is explicitly accepting types in arguments, and note that parametric polymorphism, and type constructors such as List, are explicitly represented using kinds ('*') in the type signatures. (i don't quite understand where the 'forall' is needed and where it is not, yet).

Morte will compute the result as:

$ morte < id.mt
∀(a : *) → (∀(x : *) → (a → x → x) → x → x) → ∀(x : *) → (a → x → x) → x → x

λ(a : *) → λ(l : ∀(x : *) → (a → x → x) → x → x) → l

"The first line is the type, which is a desugared form of:

forall a . List a -> List a

The second line is the program, which is the identity function on lists. Morte optimized away the map completely, the same way we did by hand. "

The Morte guy says "The extra 'a' business is because in any polymorphic lambda calculus you explicitly accept polymorphic types as arguments and specialize functions by applying them to types. Higher-level functional languages like Haskell or ML use type inference to automatically infer and supply type arguments when possible.".

But i like the extra type arguments. It's more explicit, and therefore clearer for me to read.
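here's a self-contained GHC version of that Boehm-Berarducci encoding to play with (RankNTypes needed; nil/cons/toList are my names for illustration):

    {-# LANGUAGE RankNTypes #-}

    -- a Boehm-Berarducci list is its own fold
    type List a = forall x. (a -> x -> x) -> x -> x

    nil :: List a
    nil = \_c n -> n

    cons :: a -> List a -> List a
    cons a l = \c n -> c a (l c n)

    mapL :: (a -> b) -> List a -> List b
    mapL f l = \c n -> l (\a x -> c (f a) x) n

    toList :: List a -> [a]     -- convert back to an ordinary list to inspect
    toList l = l (:) []

    -- toList (mapL (+1) (cons 1 (cons 2 nil)))  ==>  [2,3]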

---

example of static typing making programs shorter:

" in the Haskell standard library’s Data.Map module. This module implements a balanced binary search tree, and it contains a function whose type signature looks like this:

lookup :: (Monad m, Ord k) => k -> Map k a -> m a

This is a magical function. It says that I can look something up in a Map and get back the result. Simple enough, but here’s the trick: what do I do if the result isn’t there? Common answers might include returning a special “nothing” value, or aborting the current computation and going to an error handler, or even terminating the whole program. The function above does any of the above! Here’s how I compare the result against a special nothing value:

    case (lookup bobBarker employees) of
      Nothing     -> hire bobBarker
      Just salary -> pay bobBarker salary

How does Haskell know that I want to choose the option of getting back Nothing when the value doesn’t exist, rather than raising some other kind of error? It’s because I wrote code afterward to compare the result against Nothing! If I had written code that didn’t immediately handle the problem but was called from somewhere that handled errors three levels up the stack, then lookup would have failed that way instead, and I’d be able to write seven or eight consecutive lookup statements and compute something with the results without having to check for Nothing all the time. This completely dodges the very serious “exception versus return value” debate in handling failures in many other languages. This debate has no answer. Return values are great if you want to check them now; exceptions are great if you want to handle them several levels up. This code simply goes along with whatever you write the code to do. " -- [1]

i don't think we want to allow this! This sounds like the sort of cross-module backwards inference that i thought allowed overly-magic stuff like variadic printf. But it's interesting to think about
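(for reference, a sketch of how that monad-polymorphic lookup trick works; note that newer versions of Data.Map narrowed lookup to return plain Maybe, so this uses today's MonadFail class and an assoc-list stand-in:)

    -- the caller's context picks the failure mode
    lookupM :: (MonadFail m, Eq k) => k -> [(k, v)] -> m v
    lookupM k kvs = case lookup k kvs of
      Just v  -> return v
      Nothing -> fail "key not found"

    -- lookupM "bob" db :: Maybe Int  -- failure is Nothing
    -- lookupM "bob" db :: IO Int     -- failure throws, to be caught levels up
    -- lookupM "bob" db :: [Int]      -- failure is the empty list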

---

"

naasking 3 days ago [-]

Decent overview of many concepts, but the opening line isn't strictly true:

> A type is a collection of possible values.

A type is a proposition, and "This binding has one of these possible values" is merely one type of proposition. So a type is much more powerful than a simple set-based interpretation, even though this is how most people think about it.

For instance, in Haskell you can encode a region calculus that ensures prompt reclamation of file handles and other scarce resources [1]. The relations between nested regions to ensure correctness seems awkward to interpret as a set, but it's straightforward if you just think of types as a relation.

[1] http://okmij.org/ftp/Haskell/regions.html#light-weight

reply

rntz 3 days ago [-]

(Static) types may be seen as propositions, but this is just one interpretation of them. We can also interpret types as sets (although this doesn't work out very well except in very simple languages), as domains (most commonly), as cpos, as relations over closed terms (PERs), and so forth.

The types-as-propositions interpretation is a particularly useful one because it lets us take ideas from the field of formal logic and apply them to programming. But it is only one interpretation, and not a definition of what types are.

I don't think it's possible to give "type" a good, short definition. For me John Reynolds' definition comes closest: "Type structure is a syntactic discipline for enforcing levels of abstraction." But that's rather a mouthful, and not a good introduction to what types are.

> A type is a proposition, and "This binding has one of these possible values" is merely one type of proposition.

The type that an expression or variable has does not correspond to the proposition that that expression has a certain set of possible values! You're mixing the object-level (the language & its types) with the meta-level (assertions about the language). This is a total misunderstanding of Curry-Howard. Rather, the values an expression might take on can be interpreted as proofs of the corresponding proposition.

reply

eli_gottlieb 3 days ago [-]

>The types-as-propositions interpretation is a particularly useful one because it lets us take ideas from the field of formal logic and apply them to programming. But it is only one interpretation, and not a definition of what types are.

Ehhh, it kind of is a definition. Think of it this way: a type system is a topology we lay over programs, with open sets representing observable (using finite computation) properties and closed sets representing deniable (using finite computation) properties. The finest such topology possible is the Scott topology, which can distinguish every single observable value and deniable property from every other: it encodes all the information you can have about a program without solving the Halting Problem.

Type systems, then, are coarser topologies over programs than that one, which group observably different expressions and traces into the same open sets, which are then called "types". When those types are guaranteed not to contain ⊥, the divergent program, they can also be considered topological spaces, with the programs in them acting as points in the topological space.

reply

naasking 3 days ago [-]

> (Static) types may be seen as propositions, but this is just one interpretation of them. We can also interpret types as sets (although this doesn't work out very well except in very simple languages), as domains (most commonly), as cpos, as relations over closed terms (PERs), and so forth.

Right, I wasn't advancing the propositional definition as the only one, merely as one that sufficiently captures the generality, because the article's current definition of types-as-sets is insufficiently general.

reply

catnaroek 3 days ago [-]

Under HoTT's take on propositions as “types where any two inhabitants are equal”, “types as propositions” is even less general than “types as sets”.

As I stated in a comment on that gist, what types denote varies from one type system to another: https://gist.github.com/garybernhardt/122909856b570c5c457a6c...

reply "

---

" > The unsatisfying solution used in practice is to give eval() the type Any, which is like Object in some OO languages or interface {} in Go: it's the type that can have any value. Values of type Any aren't constrained in any way, so this effectively removes the type system's ability to help us with code involving eval. Languages with both eval and a type system have to abandon type safety whenever eval is used.

No, you can give EVAL a returning type of any (T) but still make type inference work.

    (+ 1 (eval (read)))

The meaning of a type in SBCL is: if an expression evaluates without error, then its type is .... In other words, the type of the above expression is NUMBER.

There is also a NIL type (bottom) which represent the empty set of values. If a function has a return type of NIL, it means that it does not return a value normally. This is the case for the ERROR function, or the (LOOP) expression.

reply "

---

agentgt 3 days ago [-] on: Why People Should Learn Python

I'm not sure I like implicits as for HKT... "with great power comes...".

HKTs are great for a generic library writer (aka standard library) but I think the over abstract nature ends up confusing your average developer. You basically have your type aficionados obfuscating the codebase for developers who can barely handle generics. It is sort of analogous to using massive reflection / meta-programming in scripting language (albeit far more safe).

Being extremely implicit and abstract comes with a cognitive load.

OCaml's Functors (ie super strict and explicit) on the other hand can get massively repetitive.

I vaguely remember HKTs being expensive compile time. Is that still the case?

lmm 2 days ago [-]

> I'm not sure I like implicits

I think they're a necessary evil. I need extension methods, ad-hoc polymorphism, and a way to use ad-hoc polymorphism while still writing nice DSLs (i.e. what spray calls the magnet pattern). I'm nervous about implicits, but I haven't seen a better solution.

> HKTs are great for a generic library writer (aka standard library) but I think the over abstract nature ends up confusing your average developer.

There are few things more frustrating than having to copy-paste because your language can't handle the abstraction. There are less cases where you need HKTs than where you need basic generics, sure, but sooner or later you do need them. And since everyone hates copy-paste, they end up using annotations or reflection or string-based templating to generate code, all of which end up more confusing to the average developer. At least with HKTs you still have ordinary values that follow the rules of the language, rather than wondering whether you can still factor out a common method because it's called from an annotated method and an unannotated method, and what annotations you need to put on it to make that work.

> I vaguely remember HKTs being expensive compile time. Is that still the case?

HKTs per se are not expensive. Type inference in the presence of HKTs can be expensive (though note that even in OCaml or pure ML there are pathological cases for type inference - http://cs.stackexchange.com/questions/6617/concise-example-of-exponential-cost-of-ml-type-inference ). Scala implicit resolution can be expensive especially when used with recursion; global-uniqueness constraints a la Haskell typeclasses would be one solution, but arguably introduce more trouble than they're worth. One way or another, Scala compilation ends up being pretty slow, which is unfortunate.

reply

premium-concern 3 days ago [-]

The great thing about types in Scala is that complex solutions to complex problems can be contained and don't infect the rest of the codebase.

The average programmer can enjoy completely consistent names and signatures across different libraries, without having to know that the consistency comes from one method being applicable to various types, not manual labor repetitiously adding the same method to different types which can drift apart at any time without anyone noticing.

HKTs are not that expensive. Slow compile times are usually caused by people doing Prolog at the typelevel in Scala.

reply

---

ew:

" Whether the 1 and 2 in those two types are the same value depends on the type system. That example corresponds most closely to sum types, and a lot of languages (the ML family, Haskell) require constructors on them. They have to be valid identifiers, so you might say data LessThan?5 = One

@bennoleslie bennoleslie commented 7 days ago
Two Three Four Five. If you then try to say data LessThan?3 = One Two, you'll get compile errors: "Multiple declarations of ‘One’", etc. I should probably add a sentence clarifying that completely arbitrary types like HighFive? aren't usually possible due to the constraints of actual type systems. (There used to be one but it disappeared due to other changes.)

@garybernhardt Thanks for the comment. I just tried that exact example in ghci, and in that case there is no error but the type of 'One' ends up being LessThan?3.

Interestingly the type of 1 (in Haskell at least) is a very interesting Num a => a, which as far as I understand, means that the value inhabits any type with a typeclass of 'Num'. I don't understand what kind of magic is going on there, and if it would be possible to do the same magic with a user-defined type/typeclass.

The above leads to some interesting (to me) behaviour. Specifically I can do something like:

    > let x = 10000000000000000000000
    > x :: Integer
    10000000000000000000000
    > x :: Int
    1864712049423024128

I'm not sure if this is good, bad or otherwise, but I at least found it interesting, and would love to know if I can implement that in a user-defined type/typeclass. "
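(answering the question in that quote, as far as i understand it: the 'magic' is just literal desugaring; an integer literal n means fromInteger n, which is why its type is Num a => a. and yes, a user-defined type gets the same behavior by giving it a Num instance. a sketch, with a made-up Mod7 type:)

    -- integer literals desugar to (fromInteger n)
    newtype Mod7 = Mod7 Integer deriving Show

    instance Num Mod7 where
      fromInteger n   = Mod7 (n `mod` 7)   -- literals land here
      Mod7 a + Mod7 b = Mod7 ((a + b) `mod` 7)
      Mod7 a * Mod7 b = Mod7 ((a * b) `mod` 7)
      negate (Mod7 a) = Mod7 (negate a `mod` 7)
      abs             = id
      signum (Mod7 0) = Mod7 0
      signum _        = Mod7 1

    -- ghci> 10000000000000000000000 :: Mod7   -- the literal gets reduced mod 7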

---

"Typically, to have good inference it's important to have what's known as a "principal type property" which is to say that any program expression can be given a unique "most general type"."

---

" Milyardo commented 7 days ago

Scala supports dependent typing, and is used quite extensively in many projects. "

---

davidberneda commented 7 days ago

Union types (as in TypeScript) are a good solution (type foo = String | Int) to constrain different-root types at compile time.

Type class types (as in Delphi / Object Pascal) are wonderful (foo = class of TBar) to constrain to a precise point of a class hierarchy

---

" in the case of JavaScript? it's pretty good to not have types. Typical use cases are not really complex apps, but small functionality on a single page. How convenient is to just write:

notifications.innerHTML = "Messages (" + messages.length + ")";

instead of casting everything into the right type, adding god knows how many lines? "

---

" Some types are purely analytic or "pluggable" in that they can constrain a program or improve our understanding of it but can never affect the semantics of the program itself. Others are intimately tied into the semantics of a language such that prior to computing all of the types of a program it's not possible to know its behavior. "

---

"languages must usually trade off between simple type languages and the power of reflection. Unbounded reflection usually requires very complex type system to have interesting type safety. Bounded (a.k.a. "type safe") reflection is possible in moderately powerful formally typed languages, though, and most uses of reflection can be performed in bounded systems."

---

we should be able to do this sort of stuff with Oot's type system (not based on naming conventions though, just types):

"a linter that typechecked our code, based on our naming convention... Pipeline stage

When you pipeline hardware, you end up with many versions of the same signal, one for each stage of the pipeline the signal traverses. Even without static checks, you’ll want some simple way to differentiate between these, so you might name them foo_s1, foo_s2, and foo_s3, indicating that they originate in the first, second, and third stages, respectively. In any particular stage, a signal is most likely to interact with other signals in the same stage; it’s often a mistake when logic from other stages is accessed. There are reasons to access signals from other stages, like bypass paths and control logic that looks at multiple stages, but logic that stays contained within a stage is common enough that it’s not too tedious to either “cast” or add a comment that disables the check, when looking at signals from other stages.

Clock domain

Accessing a signal in a different clock domain without synchronization is like accessing a data structure from multiple threads without synchronization. Sort of. But worse. Much worse. ...

Module / Region

In high speed designs, it’s an error to use a signal that’s sourced from another module without registering it first. "

---

random example of some type system-specifying terminology for Fortress. Already read it, no need to reread:

https://web.archive.org/web/20110114085749/http://projectfortress.sun.com/Projects/Community/blog/TypesAndRunTImeTypes

---

cmrdporcupine 8 days ago [-]

The problems with the JS type system have nothing to do with its dynamic typing or its prototype model.

JS has unfortunate implicit type coercions, bad scoping rules (not type system related but still crappy), and annoying tri-value silliness (NaNning all over).

I'm a big fan of prototype OO. JS not so much.

reply

---

discreteevent 8 days ago [-]

This is exactly the intention of optional/gradual type systems. You can prototype away just as freely as with a purely dynamic language. Then when you have learned enough to be more definitive you can start putting in types. Dart is a good exponent of this (I think). It's a fully dynamic language but the compiler will catch the majority of static type errors (for types you define) and you can run it in checked mode during testing to catch other type errors. Typescript also has optional types (without runtime checking) but when I tried it it seemed to be harder to mix and match dynamic/static.

reply

---

"I think between modular implicits[0], multicore support (both coming soon), along with BuckleScript? and js_of_ocaml, Ocaml will be the functional language to beat in the coming years.

0. For those unfamiliar with modular implicits (Scala has them), they are kind of like type classes, so we won't have to write things like "print_string" or "print_int" anymore; we can just write "print". "

---

i guess a key thing about Oot's type system is 'everything is an interface', which maybe is the same as saying it's 'structural typing'?

---

wyager 4 days ago [-] on: Zig: a system language which prioritizes optimalit...

> Maybe type instead of null pointers.

It's often a bad sign when a language advertises this high up on its features list. It means that they didn't really get the true takeaway of the Maybe type (which is that you should support algebraic data types to properly constrain data to valid states), but instead saw a single specific use case of ADTs (Maybe) and thought that's all there was to it.

I've run into this with Java 8. Optional is pretty common now and has eliminated the need for the null pointer in much of everyday code, but they still don't have something like Either to eliminate the need for exceptions. Maybe is extremely useful, but it's a small fraction of the usefulness you get with true ADT support.

reply

---

greydius 171 days ago [-]

> A major difference from Java in Scala is, that container types are not-variant by default!

Only arrays are variant "by default" in java (which was a huge mistake). Variance for generic collections is specified at use, not at the type definition level.

> This means that if you have a container defined as Box[A], and then use it with a Fruit in place of the type parameter A, you will not be able to insert an Apple (which IS-A Fruit) into it.

No. What this really means is that you can't assign an instance of Box[Apple] to a val/var of type Box[Fruit].

---

(polymorphic) typed lambda calculus (system F style?) looks like it may be just what we need (or at least, inspiration for the answer) for a simple (but verbose) notation for types that uses a similar syntax to types as for functions

eg the polymorphic identity fn:

λ(a : *) → λ(x : a) → x

(so, the first argument is a type, 'a'; the second is a value in that type; then, that value is returned)

the type of the previous function is:

∀(a : *) → ∀(x : a) → a

(from the first few sections of http://www.haskellforall.com/2016/04/data-is-code.html , which is a good tutorial for this)
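GHC Haskell can get surprisingly close to this explicit style with the TypeApplications extension (identity/five are my names):

    {-# LANGUAGE ExplicitForAll, TypeApplications #-}

    identity :: forall a. a -> a   -- the ∀(a : *) above
    identity x = x

    five :: Int
    five = identity @Int 5         -- explicitly passing the type argument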

---

good stuff:

https://www.typescriptlang.org/docs/handbook/advanced-types.html

my summary:

---

"...type "soundness" means that the type checker will always complain when your program is invalid. You can't write an invalid program with a sound type system." [2]

so i guess 'soundness' corresponds to our 'strict mode'?

---

my question: is there a duality between (union or discriminated union?) types and subclasses? That is, is saying "Y is a subclass of X, and Z is a subclass of X" like saying one of "X is a union type of Y and Z" (this one would be type erasing, i guess) or "X is a discriminated union type of Y and Z"?

(already copied to plPartTypeSystems)
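a sketch of both sides of that question in Haskell terms (all names illustrative):

    -- "X is a discriminated union of Y and Z": closed; the compiler knows these
    -- are the only cases, so pattern matches can be checked for exhaustiveness
    data Shape = Circle Double | Rect Double Double

    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Rect w h) = w * h

    -- "Y and Z are subclasses of X": open; anyone can add a new case later,
    -- but nothing can ever enumerate "all Shapes"
    class HasArea a where
      area' :: a -> Double

    newtype Circle2 = Circle2 Double
    instance HasArea Circle2 where
      area' (Circle2 r) = pi * r * r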

---

new stuff in TypeScript 2.0 that ppl like (mainly non-nullable types, eg being able to say "string | null")

from https://news.ycombinator.com/item?id=12557808

---

" Tomas Petricek's idea of Co-Effects[0] into type systems. While an effect system tracks what your program does to the world in its type system, a co-effect system tracks what your code requires from the world, aka the executable's environment and platform. There are a lot of benefits to this, but the primary benefit of a co-effect system is that you could safely write cross-platform cross-compiled code within a single compilation unit project by having different `main`s that specify and encodify different platform capabilities. If your `getSDCardFileSystem` function only works on Android because iOS doesn't have SD card capabilities, then it can't be used in a main function bound by the iOS context "

havent read yet: http://tomasp.net/blog/2014/why-coeffects-matter/

---

http://blog.cognitect.com/blog/2016/9/29/agility-robustness-clojure-spec says regarding Clojure Spec "You can declare predicates about data, about function arguments and returns, and even function semantics. This overlaps not only with type systems, but also with validations that are often done with costly bespoke tests and runtime checks."

---

Russ Cox of Go is thinking about adding generics someday: "Today, there are newer attempts to learn from as well, including Dart, Midori, Rust, and Swift."

A Dart guy notes: " For what it's worth, we are making significant changes to Dart's generics story and type system in general [1]. We added generic methods, which probably should have been there the entire time.

Generics are still always covariant, which has some plusses but also some real minuses. It's not clear if the current behavior is sufficient. " [3] https://github.com/dart-lang/dev_compiler/blob/master/STRONG_MODE.md

---

" ((algebraic data types are)) redundant because Go already has type-switch and type-assertions. Your comments about ints vs arrays vs behavior misses an important fact: An object of any type can be promoted to interface{} (aka "dynamic") and then may be "pattern matched" on via `x.(type)`. Sure, it's pretty crummy pattern matching to only be able to dispatch on a single tag, but there are some fundamental problems with traditional algebraic data types and pattern matching:

1) Abstract data types encourage closed systems. You may view this as a positive: It enables exhaustiveness checks. But I view it as a way to make your program more fragile. Go's type asserts let you convert to interface types, so you can add new interface implementers later and not have to go fix-up old type-assertions.

2) First-to-match pattern matching complects order with dispatch. Each match clause has an implicit dependency on _all_ of the clauses before it, since you can match Foo{x, 1} and if that fails match Foo{x, y} where you know now that y != 1. This is sometimes useful, but as your patterns grow larger, it's simpler to just match Foo{x, y} and then branch on y == 1. A series of type-asserts with if statements has a little bit of order dependency on it: interface clauses and named wrapper types are first-to-match, but type-switch on struct clauses are completely order independent because there can only be one concrete representation underlying an interface{}.

3) Relying on positional fields causes two classes of problems: A) it's harder to grow your system later, since every pattern match needs to mention the new field you added and B) you can't _not_ mention a field by name (or at least give it a placeholder name) at every use as well. This is the same issue as the Foo{x, y} vs Foo{X: x, Y: y} notation. It's considered good practice in Go to use the later, since it's more future proof, ie Foo may grow a Z field and it will be initialized to zero. "

" > First-to-match pattern matching complects order with dispatch

This seems orthogonal to ADTs: one can retain retain behaviour like Go's current switch, and only allow switching on the variant, not doing more detailed pattern matching. This retains order independence.

> Relying on positional fields causes two classes of problems

Again, totally irrelevant: one can pattern-match using the names of fields. "

" > only allow switching on the variant

Go already has this. It's called type-switch:

https://golang.org/doc/effective_go.html#type_switch

> one can pattern-match using the names of fields

Of course they can, but traditional algebraic data types impose order on to fields, which is an undesirable feature IMHO. However, field dispatch loses the nice order-independence attribute. "
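(the positional-fields point above, in Haskell terms: record patterns let a match name only the fields it uses, so it survives added fields, while positional patterns must mention everything. a sketch:)

    data Foo = Foo { x :: Int, y :: Int }

    positional :: Foo -> Int
    positional (Foo x _) = x    -- breaks (wrong arity) if Foo later grows a z field

    named :: Foo -> Int
    named Foo{x = x0} = x0      -- unaffected if Foo grows a z field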

" Is the idea here that all types should be open? Should I be able to e.g. add 1.5 as a possible instance for the Int and String types?

Interfaces should be open, but one also needs the ability to represent closed datatypes. "

---

Oot should support both open and closed ADT-like datatypes

---

i_don_t_know 3 days ago [-]

I think they thought of the standard OOP solution to algebraic data types. Rather than pattern match on the value and then do the work, you dispatch on the value through an interface and do the work. I don't know go so here's some pseudo code:

    type Foo struct{ x int }
    type Bar struct{ y string }

    type DoStuff interface{ doStuff() }

    func (f Foo) doStuff() {
        // ...do stuff with f.x
    }
    func (b Bar) doStuff() {
        // ...do stuff with b.y
    }

    func main() {
        value := getSomethingThatImplementsDoStuff()
        value.doStuff()
    }

Compare this to ML:

    type DoStuff = Foo of int | Bar of string
    let main =
        let value = getSomethingThatReturnsDoStuff () in
        match value with
        | Foo x -> (* ...do stuff with x *)
        | Bar y -> (* ...do stuff with y *)

reply

pcwalton 3 days ago [-]

This is basically the expression problem: https://en.wikipedia.org/wiki/Expression_problem

The expression problem is motivated by the fact that the OO style (which Go favors via interfaces) makes adding new cases easy (just add another subclass) but adding new methods hard (you have to modify every subclass), while the pattern matching style makes adding new methods easy (just add another match statement) but adding new cases hard (you potentially have to modify every match statement in the program). Depending on the task at hand, either one may be more advantageous. The solutions to the expression problem are attempts to get the best of both worlds.

A lot of programming language designs deny that this is a tradeoff, which I think is misguided. Sometimes it's more convenient to be able to add methods easily, and sometimes it's more convenient to be able to add cases easily.

reply

treehau5 3 days ago [-]

My understanding is go's interfaces are counter to OO style. It's about behavior, not objects; plus, by recommending to users that you write the smallest possible interfaces you can, and then compose other interfaces out of those smaller ones, you reduce the surface area where the expression problem can come into practice!

reply

pcwalton 3 days ago [-]

Go's interfaces are pure subclassing for the purposes of the expression problem.

reply
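(a tiny Haskell illustration of the two halves of the tradeoff pcwalton describes; Expr/eval/pretty are illustrative names:)

    -- sum type: adding a new operation is one new function...
    data Expr = Lit Int | Add Expr Expr

    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b

    pretty :: Expr -> String            -- new operation: no existing code touched
    pretty (Lit n)   = show n
    pretty (Add a b) = pretty a ++ "+" ++ pretty b

    -- ...but adding a new case (Mul, say) means editing eval and pretty.
    -- the class-based (Go interface / OO) encoding flips the tradeoff:
    -- new cases are easy, new operations touch every instance.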

---

still thinking about embedding number types (eg 'int' embedded in 'real' embedded in 'complex'), approximate number types (eg 'float' is an approximation of 'real'; 'int64' is an approximation of 'int'), and implementation types (eg representing signed integers with twos-complement vs. a sign bit). So perhaps our fully-specified types/type implementations have 3 components?:

semantic_class.semantic_exact.implementation

eg a twos-complement int64 would be 'int.int64.twosComplementInt64'. The semantic_class defines the semantics of the type, but not exactly. The semantic_class.semantic_exact defines the exact semantics of the type. and the implementation chooses the exact representation and method implementation of the type. There might be better names for these things.
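one very rough Haskell encoding of the three levels, just to test the idea (all names invented):

    import Data.Int (Int64)

    -- semantic_class: loose semantics, as a class/interface
    class IntLike a where
      toInt :: a -> Integer

    -- implementation: representation choices, as phantom tags
    data TwosComplement
    data SignMagnitude

    -- semantic_exact: exact 64-bit signed semantics, parameterized by
    -- representation; int.int64.twosComplementInt64 ~ I64 TwosComplement
    newtype I64 rep = I64 Int64

    instance IntLike (I64 rep) where
      toInt (I64 n) = fromIntegral n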

---

js has the following types:

primitive:

    string
    number
    boolean
    null
    undefined

complex:

    function
    object

note that actually typeof null is 'object', not 'null'; even w3schools says "You can consider it a bug in JavaScript that typeof null is an object. It should be null." [4]

---

"Among having Numbers and Strings, ClojureScript? provides a set of collection types. Some of them are: Vector, Map and Set."

---

" Consider “time”. In an OO language “time” has to be an object. But in a non OO language a “time” is an instance of a data type. For example, in Erlang there are lots of different varieties of time, these can be clearly and unambiguously specified using type declarations, as follows:

    -deftype day() = 1..31.
    -deftype month() = 1..12.
    -deftype year() = int().
    -deftype hour() = 1..24.
    -deftype minute() = 1..60.
    -deftype second() = 1..60.
    -deftype abstime() = {abstime, year(), month(), day(), hour(), min(), sec()}.
    -deftype hms() = {hms, hour(), min(), sec()}.
Note that these definitions do not belong to any particular object. They are ubiquitous and data structures representing times can be manipulated by any function in the system. There are no associated methods." -- [5]

---

Clojure's protocols are like typeclasses:

" ; From Sean Devlin’s talk on protocols at Clojure Conj

    (defprotocol Dateable
      (to-ms [t]))

    (extend java.lang.Number
      Dateable
      {:to-ms identity})

    (extend java.util.Date
      Dateable
      {:to-ms #(.getTime %)})

    (extend java.util.Calendar
      Dateable
      {:to-ms #(to-ms (.getTime %))})
"

-- [6]

---

https://hexdocs.pm/elixir/typespecs.html

---

is this true?: "Strong typing essentially means dynamic typing where the compiler can catch virtually every type, with the exception of ambiguous arguments in a pattern match (using an _ operator to indicate a variable or argument that isn’t used to avoid allocated memory for it, just as with Go). " -- [7]

---

" The Rust code is remarkably "trustworthy", and I'm not just talking about memory safety. If your program even compiles, it almost always works correctly (assuming you understood the spec). This has at least as much to do with Rust's 'enum' and 'match', which encourage people to enumerate and handle all cases, as it does with the borrow checker. "

---

" Go's basic types are

bool

string

int int8 int16 int32 int64 uint uint8 uint16 uint32 uint64 uintptr

byte alias for uint8

rune alias for int32 represents a Unicode code point

float32 float64

complex64 complex128 "

---

" > Cloud Spanner uses a SQL dialect which matches the ANSI SQL:2011 standard with some extensions for Spanner-specific features. This is a SQL standard simpler than that used in non-distributed databases such as vanilla MySQL?, but still supports the relational model (e.g. JOINs). It includes data-definition language statements like CREATE TABLE. Spanner supports 7 data types: bool, int64, float64, string, bytes, date, timestamp[20].

> Cloud Spanner doesn't, however, support data manipulation language (DML) statements. DML includes SQL queries like INSERT and UPDATE. Instead, Spanner's interface definition includes RPCs for mutating rows given their primary key[21]. This is a bit annoying. You would expect a fully-featured SQL database to include DML statements. Even if you don't use DML in your application you'll almost certainly want them for one-off queries you run in a query console. "

---

[8]

summary:

"The 'object' type is a new type in 2.2 that matches any types except for primitive types...Because the empty object type ({}) also matches primitives, it couldn’t model APIs like Object.create which truly only expect objects – not primitives"

index signatures "tell the type system what the result of an element access should be" eg:

" interface Foo { [prop: string]: boolean; } "

"Better class support for mixins...We made a few changes that involved loosening some restrictions on classes, as well as adjusting the behavior of how intersection types operate. " eg

" type Constructable = new (...args: any[]) => object;

function Timestamped<BC extends Constructable>(Base: BC) { return class extends Base { private _timestamp = new Date(); get timestamp() { return this._timestamp; } }; } "

"Support for new.target...new.target is an ES2015 feature that lets constructors figure out if a subclass is being constructed. This feature can be handy since ES2015 doesn’t allow constructors to access this before calling super()."

---

"Unlike C, Go has no pointer arithmetic. "

mb we should have no pointer arithmetic -- if it's good enough for Go...?

---

Go arrays cannot be resized [9]. They have 'slices' but each slice has a backing array which can't be resized.

i guess that's nice for performance but it's probably not for us.

---

in Go,

" Slices are like references to arrays

A slice does not store any data, it just describes a section of an underlying array.

Changing the elements of a slice modifies the corresponding elements of its underlying array.

Other slices that share the same underlying array will see those changes. "

---

in Go, "we permit classes and interfaces to derive from object types and intersections of object types. Furthermore, in intersections of object types we now instantiate the this type using the intersection itself."

" An interface or class cannot extend a naked type parameter because it is not possible to consistently verify there are no member name conflicts in instantiations of the type. However, an interface or class can now extend an instantiation of a generic type alias, and such a type alias can intersect naked type parameters "

" All of the above can be combined in lightweight mixin patterns like the following:

    interface Component { extend<T>(props: T): this & T; }

    interface Label extends Component { title: string; }

    function test(label: Label) {
      const extended = label.extend({ id: 67 }).extend({ tag: "hello" });
      extended.id;   // Ok
      extended.tag;  // Ok
    }
"

-- [10]

Mb see also [11]

---

 simon_o 12 hours ago [-]

The biggest issue with claims like "there are only diminishing returns when using a type system better than the one provided in my blub language" is that it assumes people keep writing the same style of code, regardless of the assurances a better type system gives you.

"I don't see the benefit of typed languages if I keep writing code as if it was PHP/JavaScript?/Go" ... OF COURSE YOU DON'T!

This is missing most of the benefits, because the main benefits of a better type system isn't realized by writing the same code, the benefits are realized by writing code that leverages the new possibilities.

Another benefit of static typing is that it applies to other peoples' code and libraries, not only your own.

Being able to look at the signatures and be certain about what some function _can't_ do is a benefit that untyped languages lack.

I think the failure of "optional" typing in Clojure is a very educational example in this regard.

The failure of newer languages to retrofit nullabillity information onto Java is another one.

reply

---

" any variable can be declared as a parameter, that is, something which doesn’t change.

double precision, parameter :: hbar = 6.63e-34

If a parameter is ever changed in the code, the compiler returns an error. In C, there is something similar called a_ const:_

double const hbar = 6.63e-34

The problem is that a ‘const real’ is a different type than a normal ‘real’. If a function that takes a ‘real’ is fed a ‘const real’, it will return an error. It is easy to imagine how this can lead to problems with interoperability between codes.

...

Fortran also has an ‘intent’ specification tells the compiler whether an argument being passed to a function or subroutine is an input, and output, or both an input and output. The use of ‘intent’ specifiers helps the compiler optimize code and increases code readability and robustness.

There are other similar features in Fortran, used to varying degrees. For instance, Fortran 95 introduced the idea of declaring a function as 'pure'. A pure function does not have any side effects – it only changes variables that are arguments to the function and not any other global variables. A special case of a pure function is an 'elemental' function, which is a function that takes a scalar argument and returns a scalar, and is meant to operate on the elements of an array. The knowledge that functions are 'pure' or 'elemental' can lead to additional optimizations by the compiler, especially when the code is being parallelized.

" -- [12]

---

crawshaw 18 hours ago [-]

I made this point poorly, because I agree with you that Go is quite easy to read and very predictable.

With more words: I have written a parser and type checker for an ML-style language, with parametric types and several other neat tricks in it, and I've now written a parser and type checker for a good subset of Go. The latter has been far more work. I am not entirely sure how to explain the work. Go has lots of syntactic and type conveniences that are easy to use and read, but quite difficult to implement.

As there are few implementers and many users, I think the original designers of Go picked well when they put the work on us implementers.

reply

ainar-g 12 hours ago [-]

Can you elaborate on what syntactic conveniences are difficult to implement and why? Language design is on of my hobbies, so I really would like to know that.

reply

crawshaw 7 hours ago [-]

One good example is untyped constants. They form a large extension to the type system only present at compile time. It is a good example because there is an excellent blog post describing them in detail: https://blog.golang.org/constants

In particular note the difference between

    const hello = "Hello, 世界"

and

    const typedHello string = "Hello, 世界"

one of these can be assigned to named string types, the other cannot.

As a user of Go I find untyped constants to be extremely useful. They almost always do what I want without surprise. Implementing them however, is not trivial.

A trickier example is embedding. It adds a lot of complexity to the type system. As a user, it can be a bit surprising, but I admit it is very useful.

reply

---

https://en.wikipedia.org/wiki/Primitive_data_type

" Overview

The actual range of primitive data types that is available is dependent upon the specific programming language that is being used. For example, in C, strings are a composite but built-in data type, whereas in modern dialects of BASIC and in JavaScript, they are assimilated to a primitive data type that is both basic and built-in.

Classic basic primitive types may include:

    Character (character, char);
    Integer (integer, int, short, long, byte) with a variety of precisions;
    Floating-point number (float, double, real, double precision);
    Fixed-point number (fixed) with a variety of precisions and a programmer-selected scale.
    Boolean, logical values true and false.
    Reference (also called a pointer or handle), a small value referring to another object's address in memory, possibly a much larger one.

More sophisticated types which can be built-in include:

    Tuple in Standard ML, Python, Scala, Swift, Elixir
    List in Common Lisp, Scheme, Haskell
    Complex number in C99, Fortran, Common Lisp, Python, D, Go
    Rational number in Common Lisp, Haskell
    Associative array in Perl, Python, Ruby, Javascript, Lua, D, Go
    First-class function, in all functional languages, Javascript, Lua, D, Go, and in newer standards of C++, Java, C#, Perl"

---

https://medium.com/@ageitgey/learn-how-to-use-static-type-checking-in-python-3-6-in-10-minutes-12c86d72677b

---