proj-oot-ootTypeNotes7

[1]

" In one case, Haskell did not quite deliver on its promise. Haskell is one of the few languages that can encode effects. Things like “Can this function access Redis?” and “Does this function log to the console?” can be encoded in the type system. We ended up with a mix of free monads and monad transformers, where we should have picked one. In practice the monad transformers are imposed on us by various libraries, and we should probably rewrite our code in the same style. The real issue, however, is that all of this breaks down when used with threads, because functions such as forkIO and async operate in the IO monad. Suppose we have these functions (slightly adapted from our source):

    runWorkerLoop :: (MonadBackingStore m, MonadLogger m, MonadIO m) => WorkerState -> m WorkerState

    runFetchLoop :: (MonadBackingStore m, MonadIO m) => m ()

    runWorker :: (MonadBackingStore m, MonadLogger m, MonadIO m) => m ()

Now in runWorker we want to start two threads: one that runs the worker loop, and one that runs the fetch loop. (The fetch loop retrieves jobs from the backing store and puts them in a TBQueue, and the worker loop drains this queue. The queue is used to sequence events; for example a job being completed also enqueues an event.) The issue here is that we cannot run runWorkerLoop and runFetchLoop with forkIO, because we don’t have the right transformer stack. We tried various solutions, but in the end we removed most of the MonadX m constraints, and changed everything back to IO. "
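a minimal sketch (mine, not from the quoted post) of the usual workaround, assuming the unliftio-core package: a monad that is reader-like over IO can hand forkIO a runner for itself. This only works for stacks with a sound MonadUnliftIO instance (ReaderT-style); StateT-style stacks have none, which is part of why code like the above tends to collapse back to IO:

    import Control.Concurrent (ThreadId, forkIO)
    import Control.Monad.IO.Unlift (MonadUnliftIO, withRunInIO)

    -- forkIO itself only accepts IO actions (forkIO :: IO () -> IO ThreadId),
    -- so a constrained action can't be passed to it directly. withRunInIO
    -- provides a function that runs the current monad in IO:
    forkM :: MonadUnliftIO m => m () -> m ThreadId
    forkM action = withRunInIO $ \runInIO -> forkIO (runInIO action)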

---

rbehrends 1 day ago [-]

> Any language features which increase cognitive overhead had better offer some extremely compelling benefits to outweigh an increased learning curve.

This is a common misunderstanding that occurs (I think) because most people only know C++-, Java-, and C#-style generics, which can be conceptually complicated because of the often tricky interactions with the type system (or, in the case of C++, their use for metaprogramming).

In contrast, module-based genericity (SML, OCaml, Ada, Modula-3) is conceptually pretty simple, as it avoids complicating the type system. Its main downside is (relative) verbosity, but Go has never really eschewed verbosity.

reply

kornish 1 day ago [-]

It struck me the other day that when using Go codegen tools which generate packages based on a template, e.g. [0], the Go developer is doing manually what the OCaml compiler will do with functors at compile-time. The concepts are fundamentally the same...the effort just happens in different places.

[0]: https://github.com/cheekybits/genny

reply

pcwalton 1 day ago [-]

They're very different. Generics in languages that have them are typed—you have to declare up front the methods that your generics support. That's the entire thing that makes ML functors functors.

By contrast, generics in languages like C++ and D, as well as the code generation tools for Go, are untyped—you can call whatever functions you want on your types, and if it doesn't work it fails at template instantiation time. This causes the confusing template errors people see in C++.

reply

---

Rust Serde's core 27 types:

"

    12 primitive types
        bool
        i8, i16, i32, i64
        u8, u16, u32, u64
        f32, f64
        char
    string
        UTF-8 bytes with a length and no null terminator. May contain 0-bytes.
        When serializing, all strings are handled equally. When deserializing, there are three flavors of strings: transient, owned, and borrowed. This distinction is explained in Understanding deserializer lifetimes and is a key way that Serde enabled efficient zero-copy deserialization.
    byte array - [u8]
        Similar to strings, during deserialization byte arrays can be transient, owned, or borrowed.
    option
        Either none or some value.
    unit
        The type of () in Rust. It represents an anonymous value containing no data.
    unit_struct
        For example struct Unit or PhantomData<T>. It represents a named value containing no data.
    unit_variant
        For example the E::A and E::B in enum E { A, B }.
    newtype_struct
        For example struct Millimeters(u8).
    newtype_variant
        For example the E::N in enum E { N(u8) }.
    seq
        A variably sized heterogeneous sequence of values, for example Vec<T> or HashSet<T>. When serializing, the length may or may not be known before iterating through all the data. When deserializing, the length is determined by looking at the serialized data.
    tuple
        A statically sized heterogeneous sequence of values for which the length will be known at deserialization time without looking at the serialized data, for example (u8,) or (String, u64, Vec<T>) or [u64; 10].
    tuple_struct
        A named tuple, for example struct Rgb(u8, u8, u8).
    tuple_variant
        For example the E::T in enum E { T(u8, u8) }.
    map
        A variably sized heterogeneous key-value pairing, for example BTreeMap<K, V>. When serializing, the length may or may not be known before iterating through all the entries. When deserializing, the length is determined by looking at the serialized data.
    struct
        A statically sized heterogeneous key-value pairing in which the keys are compile-time constant strings and will be known at deserialization time without looking at the serialized data, for example struct S { r: u8, g: u8, b: u8 }.
    struct_variant
        For example the E::S in enum E { S { r: u8, g: u8, b: u8 } }.

" -- https://serde.rs/data-model.html

the_duke 1 day ago [-]

Serde really is one of the gems of the Rust ecosystem.

It is a (de)serialization framework that can be quite easily implemented for various serialization formats like JSON, MessagePack, Yaml, toml, ...

It enables automatic and very performant (de)serialization of your data, into different formats.

Often with a simple:

    #[derive(Serialize, Deserialize)]
    struct Data { ... }

reply

vvanders 1 day ago [-]

For existing formats Serde is indeed pretty awesome. However when I tried to pick it up I found the lack of support for fixed-size arrays longer than 32 elements a show-stopper for me.

... reply

steveklabnik 1 day ago [-]

Active work is going into integer generics, which will solve this issue. It has taken a while but isn't exactly simple either!

reply

makmanalp 1 day ago [-]

Serde is the kind of magic that we all hoped a type system like Rust's would cause. The transcode and zero-copy stuff is so neat. Doing stuff like this under the hood and "for free" is a nightmare in other languages.

reply

---

Sharlin 1 day ago [-]

As an aside:

> Rust's "orphan rule" allows writing a trait impl only if either your crate defines the trait or defines one of the types the impl is for. That means if your code uses a type defined outside of your crate without a Serde impl, you resort to newtype wrappers or other obnoxious workarounds for serializing it.

I hadn't realized this. The justification is reasonable - preventing ambiguity when resolving traits - but it precludes one of the major use cases for traits/type classes: adapting a type from one library to an interface in another library without wrapper types.

reply
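the newtype workaround, sketched in Haskell (my example, not from the thread; Haskell merely discourages orphan instances rather than rejecting them, but the pattern is the same): Int and Monoid both live in base, so a direct instance Monoid Int would be an orphan, and ambiguous anyway (sum or product?); the wrapper gives the instance a local home:

    newtype Sum = Sum Int deriving Show

    instance Semigroup Sum where
        Sum a <> Sum b = Sum (a + b)

    instance Monoid Sum where
        mempty = Sum 0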

---

i haven't watched https://www.reddit.com/r/haskell/comments/66p7s7/the_influence_of_dependent_types_stephanie_weirich/ yet, but i glanced at it and saw one interesting point: they give an example of dependent typing for typing regexes. Then they talk about type-level programming in Haskell. I suppose the point is that the practical use of type-level programming is to allow you to do things like type regexes.

---

Rust traits vs. Haskell typeclasses, and generic containers in Haskell:

" Trait definition trait Draw { fn draw(&self); }

Data type implementing the trait struct Circle { radius: i32 } impl Draw for Circle { fn draw(&self) { /* omitted */ } }

Usage fn draw_all(objects: &Vec<Box<Draw>>) { for &obj in objects { obj.draw(); } }

The Haskell analogue is, in turn, based on typeclasses, though the specifics can be a little bit trickier:

    {-# LANGUAGE ExistentialQuantification #-}

    -- Typeclass definition
    class Draw a where
        draw :: a -> IO ()

    -- Polymorphic wrapper type
    data Draw' = forall a. Draw a => Draw' a
    instance Draw Draw' where
        draw (Draw' d) = draw d

    -- Data types instantiating ("implementing") the typeclass
    data Circle = Circle ()
    instance Draw Circle where
        draw = undefined -- omitted
    data Square = Square ()
    instance Draw Square where
        draw = undefined -- omitted

    -- Usage
    drawAll :: (Draw a) => [a] -> IO ()
    drawAll ds = mapM_ draw ds

    main = do
        let shapes = [Draw' (Circle ()), Draw' (Square ())]
        drawAll shapes

Here, the generic function can use typeclass constraints directly ((Draw a) => ...), but creating a container of different object types requires a polymorphic wrapper. "

---

 mafribe 23 days ago [-]

For those who understand networking: the Mirage TCP/IP networking stack [1] in pure OCaml is a must. It's an object of extreme beauty, and possibly the most eloquent argument for types, type inference and algebraic data types I can think of. The TCP state machine is mostly specified at type level [2], preventing numerous potential bugs in one fell swoop.

Reading this code is probably most enlightening if you have already written networking protocols.

NB: this has nothing to do with OCaml, other comparable languages with ADTs (Scala, Rust, Haskell, F#) would be similarly suitable.

[1] https://github.com/mirage/mirage-tcpip

[2] https://github.com/mirage/mirage-tcpip/blob/master/lib/tcp/s... https://github.com/mirage/mirage-tcpip/blob/master/lib/tcp/state.ml

---

"I'm not a big fan of null, but in a dynamically typed language it's sort of inevitable. It's the result type of a function with no explicit return." -- [2]

---

 "
 Soundness/Correctness

Flow generally does a better job here. Flow, by default, does not allow nullable types. Typescript added support for non-nullable types in 2.x, but it is something that you need to remember to enable. Flow also does a better job inferring types, whereas Typescript will often fall back to using the “any” type.

Beyond nullability and type inference, Flow also has a better story for covariance and contravariance (you can read more about Flow’s variance here). One such example where this ends up being a problem is in arrays. By default, arrays in Flow are invariant. That means in Flow, this will throw an error:

    // Flow
    class Animal {}
    class Bird extends Animal {}

    const foo: Array<Bird> = [];

    foo.push(new Animal());
    // ^ Error: Animal. This type is incompatible with Bird.

But in Typescript, this is okay:

    // Typescript
    class Animal {}
    class Bird extends Animal {}

    const foo: Array<Bird> = [];

    foo.push(new Animal()); // ok in Typescript

There are more examples you can find online, but the general consensus is that Flow does a better job at type-checking compared to Typescript. "

---

munificent 1 day ago [-]

    > I can't answer a design question like whether to support 
    > generic methods, which is to say methods that are
    > parameterized separately from the receiver.

I work on the Dart language. Dart was initially designed with generic classes but not generic methods. Even at the time, some people on the team felt Dart should have had both.

We proceeded that way for several years. It was annoying, but tolerable because of Dart's optional type system -- you can sneak around the type checker really easily anyway, so in most cases you can just use "dynamic" instead of a generic method and get your code to run. Of course, it won't be type safe, but it will at least mostly do what you want.

When we later moved to a sound static type system, generic methods were a key part of that. Even though end users don't define their own generic methods very often, they use them all the time. Critical common core library methods like Iterable.map() are generic methods and need to be in order to be safely, precisely typed.

This is partially because functional-styled code is fairly idiomatic in Dart. You see lots of higher-order methods for things like manipulating sequences. Go has lambdas, but stylistically tends to be more imperative, so I'm not sure if they'll feel the same pressure.

I do think if you add generic types without generic methods, you will run into their lack. Methods are how you abstract over and reuse behavior. If you have generic methods without generic classes, you lose the ability to abstract over operations that happen to use generic classes.

A simple example is a constructor function. If you define a generic class that needs some kind of initialization (discouraged in Go, but it still happens), you really need that constructor to be generic too.

reply
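a sketch of munificent's constructor point in Haskell terms (my example, hypothetical names): if the type is parameterized, any validating constructor function has to be generic too, i.e. a standalone generic function rather than a method of a concrete type:

    -- a parameterized type whose values need validated initialization;
    -- the smart constructor is necessarily generic over the element type:
    data NonEmpty a = NonEmpty a [a] deriving Show

    fromList :: [a] -> Maybe (NonEmpty a)
    fromList []       = Nothing
    fromList (x : xs) = Just (NonEmpty x xs)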

masklinn 1 day ago [-]

> You see lots of higher-order methods for things like manipulating sequences. Go has lambdas, but stylistically tends to be more imperative, so I'm not sure if they'll feel the same pressure.

`range` is generic, and by virtue of that is a builtin which only works with a subset of the also builtin magically generic types.

The reason why Go "does not feel the same pressure" has nothing to do with its imperative style[0]; it is because they special-cased a few generic structures as builtins very early on, unlike, say, Java (which only had arrays as a typed data structure).

[0] Java and C# could hardly be more imperative; hell, Java is only just adding anonymous functions

reply

---

https://www.stephanboyer.com/post/132/what-are-covariance-and-contravariance

---

grogenaut 1 day ago [-]

I'm not sure static typing is tied to formal verification. In fact if you think about it most proofs are basically typeless in a lot of cases.

reply

Cybiote 1 day ago [-]

Dependent or even refinement type systems are where a great deal of research into proof-carrying code is done (for refinement types, since the systems are restricted to decidability, the proofs are automatic but more limited).

reply

UncleMeat 23 hours ago [-]

Type checking can be represented as abstract interpretation. It is the most widely used kind of formal verification that exists.

reply

seanwilson 1 day ago [-]

What proof systems are you thinking about? Strong static types seem the most natural way to introduce formal verification into mainstream programming to me.

reply

tom_mellior 20 hours ago [-]

(Not the parent here.) ACL2 is a prominent example of a proof assistant without a strong static type system; its object language is a pure subset of Common Lisp. The lack of static types is one of the reasons I find proving in ACL2 to be painful, but others have done amazing stuff with it.

reply

grogenaut 1 day ago [-]

Well, general mathematical proofs, or TLA+... or would it be better to define "integer" as a strong type? Which I'm willing to concede.

reply

seanwilson 1 day ago [-]

I don't feel there's much difference between e.g. the Coq way where types are used to bake specifications + proofs into the program and the Isabelle way where you write a program with less complex types and prove properties about the program after the fact.

...

---

ufo 805 days ago [-]

The blog author mentions at one point that Algebraic Data Types cannot determine pattern exhaustiveness for things like

    match 100 {
        y if y > 0 => "positive",
        y if y == 0 => "zero",
        y if y < 0 => "negative",
    };

and wonders if there is some kind of "algebraic data values" to notice that the previous case is exhaustive.

In Haskell they solve this particular problem with an "Ordering" ADT:

    data Ordering = LT | EQ | GT 
    case (compare 0 100) of
        LT -> "Positive"
        EQ -> "Equal"
        GT -> "Negative"

Using a richer datatype instead of boolean predicates solves most problems. For some more advanced things you need extensions to the type systems. For example, to be able to say "this function, when applied to non-empty lists returns non-empty lists", you need Generalized Algebraic Data Types (GADTs).
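a sketch of that GADT claim (my example, a standard technique): index the list type by its emptiness, and the types of map and head then carry the guarantee:

    {-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

    data Emptiness = Empty | NonEmpty

    data List (e :: Emptiness) a where
      Nil  :: List 'Empty a
      Cons :: a -> List e a -> List 'NonEmpty a

    -- the type records that mapping preserves (non-)emptiness:
    mapList :: (a -> b) -> List e a -> List e b
    mapList _ Nil         = Nil
    mapList f (Cons x xs) = Cons (f x) (mapList f xs)

    -- head is total once restricted to non-empty lists:
    safeHead :: List 'NonEmpty a -> a
    safeHead (Cons x _) = x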

pcwalton 805 days ago [-]

Same in Rust: http://doc.rust-lang.org/std/cmp/enum.Ordering.html

Veedrac 805 days ago [-]

    match 100.cmp(&0) {
        Ordering::Greater => "positive",
        Ordering::Equal   => "zero",
        Ordering::Less    => "negative",
    }

---

endgame 12 hours ago [-]

Agreed, but Elm is hurting for some way to deal with this stuff. Currently, the API reference lies by claiming things like (==) : a -> a -> Bool, because it lacks a generic mechanism to let the programmer think the necessary thoughts. (The story for comparison functions like (>) is similar.)

Now maybe Elm should stay simple and not jump all the way to multi-parameter type classes with functional dependencies, but it is already forced to pay some of the complexity burden to decide which types admit equality testing, comparison and so on.

reply

ryanplant-au 12 hours ago [-]

Can you elaborate on what you mean when you say that (==) : a -> a -> Bool is a lie? I don't have enough experience with more powerful type systems to know what the more accurate claim would be. Do you mean that it should really be Eq a => a -> a -> Bool, because not all types can be checked for equality?

reply

endgame 12 hours ago [-]

That's exactly what I mean: A type like a -> a -> Bool says that it will work for _any_ a, but we can easily find counterexamples. We could pass in functions (say) and write something like (\x -> x * 2) == (\x -> x + 1). The type signature for (==) as written says it _should_ work, but it doesn't because there's no general way to compute function equality.
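in Haskell the honest signature carries the constraint, so the counterexample above is rejected at compile time (minimal illustration, mine):

    -- (==) :: Eq a => a -> a -> Bool
    sameInt :: Bool
    sameInt = (1 :: Int) == 1    -- fine: Int has an Eq instance

    -- rejected at compile time: no instance for Eq (Int -> Int)
    -- sameFn = (\x -> x * 2) == (\x -> (x :: Int) + 1)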

 quchen 12 hours ago [-]

> writing a fresh map function for each data type

You have to do that in Haskell as well (where »you« can be you or the compiler via DeriveFunctor); the key difference is at the use site: there is one map function (called fmap) that does the mapping, whereas in Elm (and pretty much all other languages that don’t have higher kinds while we’re at it) you have to use the specific map function for that type.

reply
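quchen's point, spelled out (my example): there is a single fmap for every Functor, and the compiler can derive the instance:

    {-# LANGUAGE DeriveFunctor #-}

    -- fmap :: Functor f => (a -> b) -> f a -> f b
    data Tree a = Leaf | Node (Tree a) a (Tree a) deriving (Functor, Show)

    main :: IO ()
    main = do
        print (fmap (* 2) (Node Leaf 1 Leaf))   -- works for Tree...
        print (fmap (* 2) (Just 1))             -- ...and Maybe, via one name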

---

Nullabillity 7 hours ago [-]

> I believe any language that supports "any" allows this. Doesn't Kotlin? https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-any/

Kotlin's Any is pretty much the opposite of TypeScript's any. In TS it disables type checking completely (according to the docs). In Kotlin it's the top type (similar to Java's Object type), which anything conforms with, but doesn't expose any operations at all (except for the stuff that the JVM provides for free).

The closest match to TS' any would probably be C#'s dynamic https://docs.microsoft.com/en-us/dotnet/csharp/programming-g....

reply

winter_blue 3 hours ago [-]

Actually, the Kotlin equivalent to TS' any is 'dynamic': https://kotlinlang.org/docs/tutorials/javascript/working-wit...

You can declare a `dynamic` Kotlin variable like this:

   val myObject: dynamic = null

The myObject above is not type-safe. It seems that the reason for its existence is interop with existing JavaScript code.

The Kotlin `Any` class is actually more related to Java's `Object` class. (It's the superclass of every Kotlin class.)

reply

---

"

Effect systems, more generally

There's still not a strong consensus on where and how to integrate effects into mainstream compiled languages. The type-systems research world seems to have blown past this point, and now speaks breezily of "type and effect systems" as though they're a solved problem; but most working programmers in most languages have no access to any meaningful effect system, much less a state of the art, extensible, inference-heavy one. Languages like Eff or Koka are leading in promising directions, but it's still far from mainstream or solved: modularity, polymorphism and encoding issues abound, as does the general cognitive load of the feature.

Extended static checking (ESC), refinement types, general dependent-typed languages

This has been revisited over the decades of language design more often than Godzilla movies have been remade, and it's because it's a fundamentally good idea, it's just very hard, in terms of the design space.

The idea is to embed a "logic" in your type system (the boundary is formally not really there, but notationally and cognitively it sure is!) such that users regularly, freely mix their use of types that assert the in-memory shapes of data and functions, with other "logical types" that assert some classes of more-general, (semi-)computable predicates about those data and functions. In other words, let the user write "general" correctness conditions about their programs in a full-blown (but say, primitive-recursive) expression language, and have the "type system" statically check (prove) those conditions always hold (or emit an error message showing a counterexample, just like a type error).

These systems are usually a few steps back from "full blown" higher-order dependent type theory proof assistants, a la Isabelle, Coq, Lean or such; though in some cases the type systems inch into the same territory. The design problem hinges on the annotation burden being low enough that the user doesn't give up in frustration: like most type systems, the user always has to provide some information for the system to infer the rest, but the balance can tilt pretty dramatically towards "way too much annotation" when more-expressive logics enter the game. In many cases, the scale of the annotations overwhelm the program being annotated, and the maintenance burden of those annotations (as the user edits the program) are greater than the coding burden. Which, as a language designer, is Really Not A Good User Experience.

So far, most exercises in this space have ended in frustration, or have had limited application to areas with much higher costs for failure (safety-critical embedded systems, etc.) Most mainstream languages have decidedly more .. uh, decidable type systems than ESC, refinement typing or dependent-typed language projects have proposed. But this does not mean it's a dead end. It means that (if you're among the faithful, which I am) it's a design space that's not yet had its breakthrough product! Cell phones weren't a thing either, until they were. Maybe in another decade or two, we'll be looking back on these as the dark ages of boring types.

Projects in this space run a bit of a spectrum of levels of expressivity in their type systems (and degree to which they unify "logic" and "types"), as well as the spectrum from research testbed to attempt at mainstream viability; I don't feel qualified (and this isn't the place) to do a detailed compare-and-contrast job, so I'll just dump some links. If you want to play in this space, you ought to study at least Sage, Stardust, Whiley, Frama-C, SPARK-2014, Dafny, F*, ATS, Xanadu, Idris, Zombie-Trellys, Dependent Haskell, and Liquid Haskell. " [3]

---

excerpt from previous:

" In other words, let the user write "general" correctness conditions about their programs in a full-blown (but say, primitive-recursive) expression language, and have the "type system" statically check (prove) those conditions always hold (or emit an error message showing a counterexample, just like a type error).

These systems are usually a few steps back from "full blown" higher-order dependent type theory proof assistants, a la Isabelle, Coq, Lean or such; though in some cases the type systems inch into the same territory. The design problem hinges on the annotation burden being low enough that the user doesn't give up in frustration "

---

" Parametric mutability, permission / state polymorphism, ???

Here's a small but tasty question that's unfortunately awkward to answer in most languages with mutability control: how often do you find yourself having to write two copies of a function, that differ only in the mutability qualifiers associated with its parameters or return values? Or, further: does the system of function types and effects scale to expressing polymorphism over the mutable-ness of a type, such that you can fold (say) a higher-order mutating function over a mutable value with the same higher-order function that you'd use to fold a non-mutating function over an immutable value?

(Or I guess if you're a functional nerd: how often do you find yourself writing lifts between monads?) " [4]

---

" Unit-of-measure types

Physical "units" attached to scalars, with arithmetic operators distributing through to the units: a little language support supposedly goes a long way. Frink pioneered some techniques here, F# picked it up (as with many language experiments!) and I do not know if it's "done" yet or even if users consider it a universally welcome idea, but it seems to me that it has potential. It's "small" but rather important not to mix units!

Cost-model type systems

This is a family of features for a type system wherein a complexity bound (a la Big-O notation) is calculated on the resources consumed by performing a computation, letting you "typecheck" the cost-model of a program. The language RAML is one of the more promising general-purpose works in this space, though there's also a long line of work in WCET analysis of embedded systems, timed automata and so forth. " [5]
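a tiny phantom-type sketch of the units idea (mine, not from the quote; real systems like F#'s also push units through multiplication and division, which needs type-level arithmetic):

    newtype Quantity unit = Quantity Double deriving Show

    -- empty declarations acting as unit tags
    data Meters
    data Seconds

    -- addition only makes sense within a single unit:
    add :: Quantity u -> Quantity u -> Quantity u
    add (Quantity a) (Quantity b) = Quantity (a + b)

    distance :: Quantity Meters
    distance = Quantity 10.0

    pause :: Quantity Seconds
    pause = Quantity 2.0

    ok = add distance distance   -- fine
    -- bad = add distance pause  -- type error: Meters vs Seconds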

---

i skimmed this and don't think it's worth reading, but here's Golang's design document for type aliases:

https://github.com/golang/proposal/blob/master/design/18130-type-alias.md

---

"Higher-kinded, Rank-N, Dependent, the fancier the types,..."

---

in Rust, associated constants on interfaces (traits) are necessary to get constants in 'subclasses' to typecheck:

" In previous Rust versions, you can already define traits, structs, and enums that have “associated functions”:

    struct Struct;

    impl Struct {
        fn foo() {
            println!("foo is an associated function of Struct");
        }
    }

    fn main() {
        Struct::foo();
    }

These are called “associated functions” because they are functions that are associated with the type, that is, they’re attached to the type itself, and not any particular instance.

Rust 1.20 adds the ability to define “associated constants” as well:

    struct Struct;

    impl Struct {
        const ID: u32 = 0;
    }

    fn main() {
        println!("the ID of Struct is: {}", Struct::ID);
    }

That is, the constant ID is associated with Struct. Like functions, associated constants work with traits and enums as well.

Traits have an extra ability with associated constants that gives them some extra power. With a trait, you can use an associated constant in the same way you’d use an associated type: by declaring it, but not giving it a value. The implementor of the trait then declares its value upon implementation:

    trait Trait {
        const ID: u32;
    }

    struct Struct;

    impl Trait for Struct {
        const ID: u32 = 5;
    }

    fn main() {
        println!("{}", Struct::ID);
    }

Before this release, if you wanted to make a trait that represented floating point numbers, you’d have to write this:

    trait Float {
        fn nan() -> Self;
        fn infinity() -> Self;
        // ...
    }

This is slightly unwieldy, but more importantly, because they’re functions, they cannot be used in constant expressions, even though they only return a constant. Because of this, a design for Float would also have to include constants as well:

    mod f32 {
        const NAN: f32 = 0.0f32 / 0.0f32;
        const INFINITY: f32 = 1.0f32 / 0.0f32;

        impl Float for f32 {
            fn nan() -> Self {
                f32::NAN
            }
            fn infinity() -> Self {
                f32::INFINITY
            }
        }
    }

Associated constants let you do this in a much cleaner way. This trait definition:

    trait Float {
        const NAN: Self;
        const INFINITY: Self;
        // ...
    }

Leads to this implementation:

    mod f32 {
        impl Float for f32 {
            const NAN: f32 = 0.0f32 / 0.0f32;
            const INFINITY: f32 = 1.0f32 / 0.0f32;
        }
    }

much cleaner, and more versatile. "

---

from [6]:

Function annotations

def f(a: stuff, b: stuff = 2) -> result: ...

    Annotations can be arbitrary Python objects.
    Python doesn't do anything with the annotations other than put them in an __annotations__ dictionary.
    >>> def f(x: int) -> float:
    ...     pass
    ...
    >>> f.__annotations__
    {'return': <class 'float'>, 'x': <class 'int'>}

---

" Traits: they are typeclasses, essentially

Take Haskell’s typeclasses, for example — the cornerstone of its rich and expressive type system.

A typeclass is, simply speaking, a list of capabilities: it defines what a type can do. There exist analogs of typeclasses in most programming languages, but they are normally called interfaces or protocols, and remain closely tied to the object-oriented paradigm.

Not so in Haskell.

Or in Rust for that matter, where the equivalent concept exists under the name of traits. What typeclasses and traits have in common is that they’re used for all kinds of polymorphism in their respective languages.

Generics

For example, let’s consider parametrized types, sometimes also referred to as templates (C++) or generics (C#).

In many cases, a generic function or type requires its type arguments to exhibit certain characteristics. In some languages (like the legacy C++), this is checked only implicitly: as long as the template type-checks after its expansion, everything is okay:

    template <typename T>
    T min(T a, T b) { return a > b ? b : a; }

    struct Foo {};

    int main() {
        min(1, 2);          // OK
        min(Foo(), Foo());  // ERROR, no operator `>`
    }

More advanced type systems, however, allow to specify the generic constraints explicitly. This is the case in Rust:

    fn min<T: Ord>(a: T, b: T) -> T {
        if a > b { b } else { a }
    }

as well as in Haskell:

    min :: (Ord a) => a -> a -> a
    min a b = if a > b then b else a

In both languages, the notion of a type supporting certain operations (like comparison/ordering) is represented as its own, first-class concept: a trait (Rust) or a typeclass (Haskell). Since the compiler is aware of those constraints, it can verify that the min function is used correctly even before it tries to generate code for a specific substitution of T.

Dynamic dispatch

On the other hand, let’s look at runtime polymorphism: the one that OO languages implement through abstract base classes and virtual methods. It’s the tool of choice if you need a container of objects of different types, which nevertheless all expose the same interface.

To offer it, Rust has trait objects, and they work pretty much exactly like base class pointers/references from Java, C++, or C#.

    // Trait definition
    trait Draw { fn draw(&self); }

    // Data type implementing the trait
    struct Circle { radius: i32 }
    impl Draw for Circle {
        fn draw(&self) { /* omitted */ }
    }

    // Usage
    fn draw_all(objects: &Vec<Box<Draw>>) {
        for obj in objects {
            obj.draw();
        }
    }

The Haskell analogue is, in turn, based on typeclasses, though the specifics can be a little bit trickier:

    {-# LANGUAGE ExistentialQuantification #-}

    -- Typeclass definition
    class Draw a where
        draw :: a -> IO ()

    -- Polymorphic wrapper type
    data Draw' = forall a. Draw a => Draw' a
    instance Draw Draw' where
        draw (Draw' d) = draw d

    -- Data types instantiating ("implementing") the typeclass
    data Circle = Circle ()
    instance Draw Circle where
        draw = undefined -- omitted
    data Square = Square ()
    instance Draw Square where
        draw = undefined -- omitted

    -- Usage
    drawAll :: (Draw a) => [a] -> IO ()
    drawAll ds = mapM_ draw ds

    main = do
        let shapes = [Draw' (Circle ()), Draw' (Square ())]
        drawAll shapes

Here, the generic function can use typeclass constraints directly ((Draw a) => ...), but creating a container of different object types requires a polymorphic wrapper.

Differences

All those similarities do not mean that Rust traits and Haskell typeclasses are one and the same. There are, in fact, quite a few differences, owing mostly to the fact that Haskell’s type system is more expressive:

    Rust lacks higher kinded types, making certain abstractions impossible to encode as traits. It is possible, however, to implement a trait for infinitely many types at once if the implementation itself is generic (like here).
    When defining a trait in Rust, you can ask implementors to provide some auxiliary, associated types in addition to just methods. A similar mechanism in Haskell is expanded into type families, and requires enabling a GHC extension.
    While typeclasses in Haskell can be implemented for multiple types simultaneously via a GHC extension, Rust’s take on this feature is to make traits themselves generic (e.g. trait Foo<T>). The end result is roughly similar; however, the “main implementing type” (one after for in impl ... for ...) is still a method receiver (self), just like in OO languages.
    Rust enforces coherence rules on trait implementations. The topic is actually rather complicated, but the gist is about local (current package) vs. remote (other packages / standard library) traits and types.
    Without too much detail, coherence demands that there be a local type or trait somewhere in the impl ... for ... construct. Haskell doesn’t have this limitation, although it is recommended not to take advantage of this.

The M-word

Another area of overlap between Haskell and Rust exists in the data model utilized by those languages. Both are taking heavy advantage of algebraic data types (ADT), including the ability to define both product types (“regular” structs and records) as well as sum types (tagged unions).

Maybe you’d like Some(T)?

Even more interestingly, code in both languages makes extensive use of the two most basic ADTs:

    Option (Rust) or Maybe (Haskell) — for denoting a presence or absence of a value
    Result (Rust) or Either (Haskell) — for representing the alternative of “correct” and “erroneous” value

These aren’t just simple datatypes. They are deeply interwoven into the basic semantics of both languages, not to mention their standard libraries and community-provided packages.

The Option/Maybe type, for example, is the alternative to nullable references: something that’s been heavily criticized for making programs prone to unexpected NullReferenceExceptions. The idea behind both of those types is to make actual values impossible to confuse with nulls by encoding the potential nullability into the type system:

    enum Option<T> { Some(T), None }

    data Maybe a = Just a
                 | Nothing

Result and Either, on the other hand, can be thought of as an extension of this idea. They also represent two possibilities, but the “wrong” one isn’t just None or Nothing — it has some more information associated with it:

    enum Result<T, E> { Ok(T), Err(E) }

    data Either e a = Left e
                    | Right a

This dichotomy between the Ok (or Right) value and the Error value (or the Left one) makes it a great vehicle for carrying results of functions that can fail.

In Rust, this replaces the traditional error handling mechanisms based on exceptions. In Haskell, the exceptions are present and sometimes necessary, but Either is nevertheless the preferred approach to dealing with errors.

What to do?

One thing that Haskell does better is composing those fallible functions into bigger chunks of logic.

Relatively recently, Rust has added the ? operator as a replacement for the try! macro. This is now the preferred way of error propagation, allowing for a more concise composition of functions that return Results:

    // Read an integer from given file.
    fn int_from_file(path: &Path) -> io::Result<i32> {
        let mut file = fs::File::open(path)?;
        let mut s = String::new();
        file.read_to_string(&mut s)?;
        let result = s.parse()
            .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))?;
        Ok(result)
    }

But Haskell had it for much longer, and it’s something of a hallmark of the language and functional programming in general — even though it looks thoroughly imperative:

    intFromFile :: FilePath -> IO Int
    intFromFile path = do
        s <- readFile path
        i <- readIO s
        return i

If you haven’t seen it before, this is of course a monad — the IO monad, to be precise. While discussing monads in detail is way outside of the scope of this article, we can definitely notice some analogies with Rust. The do notation with <- arrows is evidently similar to how in Rust you’d assign the result of a fallible operation after “unpacking” it with ?.

But of course, there’s plenty of different monads in Haskell: not just IO, but also Either, Maybe, Reader, Writer, Cont, STM, and many others. In Rust (at least as of 1.19), the ? operator only works for Result types, although there is some talk about extending it to Option as well.

Eventually, we may see the language adopt some variant of the do notation, though the motivation for this will most likely come from asynchronous programming with Futures rather than plain Results. General monads, however, require support for higher kinded types which isn’t coming anytime soon. "

---

" Function annotations

def f(a: stuff, b: stuff = 2) -> result: ...

    Annotations can be arbitrary Python objects.
    Python doesn't do anything with the annotations other than put them in an __annotations__ dictionary."

[7]

---

[8]

" Traits are not just interfaces

Rust is not a Java-like object-oriented language. Instead it has traits, which at first seem like Java interfaces — an easy way to do dynamic dispatch, so that if an object implements Drawable then you can assume it has a draw() method.

However, traits are more powerful than that.

Associated types

Traits can have associated types. As an example, Rust provides the Iterator trait which you can implement:

    pub trait Iterator {
        type Item;
        fn next(&mut self) -> Option<Self::Item>;
    }

This means that whenever you implement Iterator for some iterable object, you also have to specify an Item type for the things that will be produced. If you call next() and there are more elements, you'll get back a Some(YourElementType). When your iterator runs out of items, it will return None.

Associated types can refer to other traits.

For example, in Rust, you can use for loops on anything that implements the IntoIterator trait:

    pub trait IntoIterator {
        /// The type of the elements being iterated over.
        type Item;

        /// Which kind of iterator are we turning this into?
        type IntoIter: Iterator<Item=Self::Item>;

        fn into_iter(self) -> Self::IntoIter;
    }

When implementing this trait, you must provide both the type of the Item which your iterator will produce, and IntoIter, the actual type that implements Iterator and that holds your iterator's state.

This way you can build webs of types that refer to each other. You can have a trait that says, "I can do foo and bar, but only if you give me a type that can do this and that".

"

---

see ootTypeNotes7Old for my detailed notes on Lippert's blog posts that led to the following:

we want the language to have multiple dispatch, so that you can deal with this:

The proper way to deal with this is to have a function Attack(Player, Monster) that is then overridden by Attack(Warrior, Werewolf). There are runtime ways to deal with this but we want compile-time checking.
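a rough sketch of what compile-time dispatch on both arguments could look like, using Haskell multi-parameter type classes as a stand-in (all names hypothetical; resolution here is static, driven by the types at the call site):

    {-# LANGUAGE MultiParamTypeClasses #-}

    data Warrior  = Warrior
    data Wizard   = Wizard
    data Werewolf = Werewolf

    class Attack player monster where
        attack :: player -> monster -> String

    -- the (Warrior, Werewolf) pair gets its own implementation:
    instance Attack Warrior Werewolf where
        attack _ _ = "silver sword strike"

    instance Attack Wizard Werewolf where
        attack _ _ = "fireball"

    main :: IO ()
    main = putStrLn (attack Warrior Werewolf)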

---

see ootTypeNotes7Old for my detailed notes on Lippert's blog posts that led to the following:

we want the language to have the ability to annotate variable declarations with 'negative exception' constraints. eg:

The 'Wizard cannot use a Sword' is a 'negative exception' constraint; because by default, a Wizard is a subclass of Player, and Players have Weapons, and Swords are Weapons, so you could assign a Sword to a Wizard's weapon slot.

Note that in order for this to be possible, you have to also declare the possibility of negative constraints when you declare the 'Player.weapon' field in Player. Otherwise the Liskov substitution principle forces Wizards, as a subtype of Player, to accept any Weapon, because that's part of the definition of Player. (E.g. imagine that some function has been passed an instance of Player; it doesn't know whether it is of the Wizard or the Warrior subclass, or even some other subclass. This function wants to assign a Sword to the Player; ordinarily that should be fine, because Sword is a Weapon and the Player's field can accept anything of type Weapon. But here we need to change the semantics of the field declaration from 'guaranteed to accept anything of type Weapon' to 'will only accept things of type Weapon, but may not even accept that'.) The typechecker then has to stop working in terms of a 'blacklist', where the program is correct as long as nothing non-Weapon is assigned to Player.weapon, and instead work in terms of a 'whitelist', where the program is incorrect unless it can prove that whatever is assigned to Player.weapon meets the constraints. (Note that we should also provide a 'does it meet the constraints?' predicate that the program can call, in case it wants to dynamically attempt to give a Sword to a generic Player and handle the case where that doesn't work, instead of using compile-time checking.)

---

" Some languages, like Java and C++, offer explicit interface support. Python is not among them. It offers implied interfaces in places where other languages would use explicit interfaces. This has a variety of effects, good and bad.

In Python, what classes your object is derived from is not a part of your object's interface.

Every use of isinstance is a violation of this promise, large or small. Whenever isinstance is used, control flow forks; one type of object goes down one code path, and other types of object go down the other --- even if they implement the same interface!

Bjarne Stroustrup often cited concerns like these when defending C++'s decision not to provide isinstance. (Now, of course, with RTTI in the C++ standard, C++ does provide isinstance.)

Sometimes, of course, violating this promise is worth the payoffs --- isinstance, like goto, is not pure evil. But it is a trap for new programmers. Beware! Don't use isinstance unless you know what you're doing. It can make your code non-extensible and break it in strange ways down the line. " [9]

---

Rusky 6 hours ago [-]

> (granted, there is some ground to be covered before such an abstraction (((a monad))) would fit in Rust)

I would claim that such an abstraction is fundamentally incompatible with Rust.

Not only does Rust lack the means to write a Monad trait on which to build do-notation, but even given HKT there's no single type signature that the various monad instances would fit. `Result` and `Option` are type constructors while `Iterator` and `Future` are generic traits; some instances require `Fn` or `FnMut` while others require `FnOnce`.

And even assuming those problems could be solved, the way do-notation interacts with control flow is not composable with idiomatic Rust. Nested closures prevent the use of return/break/continue and imperative loops; Haskell doesn't have those features so people just lift their functional counterparts into monads. Monad transformer stacks make the situation even worse- they are a pain to compose even with each other, let alone imperative code.

If you want a unifying mechanism for this stuff in Rust it's gonna need to be fundamentally more powerful than monads. Scoped continuations, maybe? Certainly nothing that looks like >>=.

reply

---

gilmi 8 hours ago [-]

> a common problem I have in Haskell is figuring out _which_ monad a certain `do` block is using.

Do type signatures not help in that case?

reply

harpocrates 7 hours ago [-]

Yes, when people use them. There is no really elegant way of adding a type annotation to a `do` block.

    (do x <- pure 1
        y <- pure 5
        pure (x + y)) :: Maybe Int

It isn't uncommon to have to scan the `do` block for some statement that constrains the monad somehow.

This is a known and discussed problem, so I'll refer you to the article usually referenced around this issue: https://wiki.haskell.org/Do_notation_considered_harmful

reply

---

there is a design tension between type safety and keeping the number of user-visible moving parts small:

for example, in a matrix library, you often have scatter/gather operations which take indices as arguments; these indices can themselves be represented as matrices, but they must be integers, not real numbers. So is the type of scatter/gather functions/operators 'Matrix' or 'IntegerMatrix'? The latter is more typesafe (no runtime 'non-integer matrix provided as index!' errors) but also has more user-visible moving parts.
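the two options, sketched (my example, hypothetical names): the typed-index signature rules the error out statically; a single numeric matrix type has to check or coerce at runtime:

    newtype Matrix a = Matrix [[a]] deriving Show

    -- typesafe variant: indices are statically a Matrix Int
    gather :: Matrix Int -> Matrix a -> Matrix a
    gather (Matrix idxs) (Matrix xs) =
        Matrix [ [ flat !! i | i <- row ] | row <- idxs ]
      where
        flat = concat xs

    -- with one numeric Matrix type, gather would take e.g. Matrix Double
    -- and have to round or reject non-integral indices at runtime.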

---

BruceIV 1 day ago [-]

It's maybe not quite what you're looking for, but Cforall's polymorphic functions can eliminate nearly all the unsafety of void-pointer-based polymorphism at little-to-no extra runtime cost (in fact, microbenchmarks in our as-yet-unpublished paper show speedup over void-pointer-based C in most cases due to more efficient generic type layout). As an example:

    forall(dtype T | sized(T))
    T* malloc() {  // in our stdlib
        return (T*)malloc(sizeof(T)); // calls libc malloc
    }
    int* i = malloc(); // infers T from return type

reply

baybal2 23 hours ago [-]

Excuse me for my lamerism, but can you tell me what is a polymorphic function?

My idea was that it is better to do as many compile-time checks as possible before you introduce run-time checks. Does that void pointer protection run faster than code that was checked at compile time? How?

reply

BruceIV 23 hours ago [-]

A polymorphic function is one that can operate on different types[1]. You would maybe be familiar with them as template functions in C++, though where C++ compiles different versions of the template functions based on the parameters, we pass extra implicit parameters. The example above translates to something like the following in pure C:

    void* malloc_T(size_t sizeof_T, size_t alignof_T) {
        return malloc(sizeof_T);
    }
    int* i = (int*)malloc_T(sizeof(int), alignof(int));

In this case, since the compiler verifies that int is actually a type with known size (fulfilling `sized(T)`), it can generate all the casts and size parameters above, knowing they're correct.

[1] To anyone inclined to bash my definition of polymorphism, I'm mostly talking about parametric polymorphism here, though Cforall also supports ad-hoc polymorphism (name-overloading). The phrasing I used accounts for both, and I simplified it for pedagogical reasons.

reply

---

Maybe anonymous objects (for example tuples) could be structurally typed, whereas typedef'd objects such as struct types could be nominally typed.

e.g. tuples could be structurally typed even though classes could be nominally typed
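Haskell already mixes the two this way (minimal illustration, mine):

    -- structural: any value of this shape has this type, no declaration needed
    pair :: (Int, String)
    pair = (1, "one")

    -- nominal: identical shapes, distinct types
    newtype Celsius    = Celsius Double
    newtype Fahrenheit = Fahrenheit Double

    boils :: Celsius -> Bool
    boils (Celsius c) = c >= 100

    -- boils (Fahrenheit 212.0)  -- type error despite identical structure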

---

some ideas for a mode in between dynamic and static typing:

---

changes in Scala 3:

[10] refers to [11]:

"

Consistency

The primary goal of the language constructs in this section is to make the language more consistent, both internally, and in relationship to its foundations.

    Intersection types A & B http://dotty.epfl.ch/docs/reference/intersection-types.html
    They replace compound types A with B (the old syntax is kept for the moment but will be deprecated in the future). Intersection types are one of the core features of DOT. They are commutative: A & B and B & A represent the same type.
    Implicit function types implicit A => B.  http://dotty.epfl.ch/docs/reference/implicit-function-types.html
    Methods and lambdas can have implicit parameters, so it's natural to extend the same property to function types. Implicit function types help ergonomics and performance as well. They can replace many uses of monads, offering better composability and an order of magnitude improvement in runtime speed.
    Dependent function types (x: T) => x.S.  http://dotty.epfl.ch/docs/reference/dependent-function-types.html
    The result type of a method can refer to its parameters. We now extend the same capability to the result type of a function.
    Trait parameters trait T(x: S)  http://dotty.epfl.ch/docs/reference/trait-parameters.html
    Traits can now have value parameters, just like classes do. This replaces the more complex early initializer syntax.
    Generic tuples  https://github.com/lampepfl/dotty/pull/2199
    (Pending) Tuples with arbitrary numbers of elements are treated as sequences of nested pairs. E.g. (a, b, c) is shorthand for (a, (b, (c, ()))). This lets us drop the current limit of 22 for maximal tuple length and it allows generic programs over tuples analogous to what is currently done for HList.

Safety

Listed in this section are new language constructs that help precise, typechecked domain modeling and that improve the reliability of refactorings.

    Union types A | B
    Union types gives fine-grained control over the possible values of a type. A union type A | B states that a value can be an A or a B without having to widen to a common supertype of A and B. Union types thus enable more precise domain modeling. They are also very useful for interoperating with Javascript libraries and JSON protocols.
    Multiversal Equality  (((i think this is basically just saying that an equality comparison between incompatible types is a type error)))...  http://dotty.epfl.ch/docs/reference/multiversal-equality.html
    Restrict Implicit Conversions  https://github.com/lampepfl/dotty/pull/4229

...

    Null safety
    ... With the introduction of union types, we can now do better. A type like String will not carry the null value. To express that a value can be null, one will use the union type String | Null instead. For backwards compatibility and Java interoperability, selecting on a value that's possibly null will still be permitted but will have a declared effect that a NullPointerException can be thrown (see next section).
    Effect Capabilities
    (Planned) Scala so far is an impure functional programming language in that side effects are not tracked. We want to put in the hooks to allow to change this over time. The idea is to treat effects as capabilities represented as implicit parameters. Some effect types will be defined by the language, others can be added by libraries. Initially, the language will likely only cover exceptions as effect capabilities, but this can be extended later to mutations and other effects. To ensure backwards compatibility, all effect capabilities are initially available in Predef. Un-importing effect capabilities from Predef will enable stricter effect checking, and provide stronger guarantees of purity.

Ergonomics

The primary goal of the language constructs in this section is to make common programming patterns more concise and readable.

Enums enum Color { case Red, Green, Blue } ... an enum can only be compared to values of the same enum type. ...

Type lambdas [X] => C[X]

Type lambdas were encoded previously in a roundabout way, exploiting loopholes in Scala's type system which made it Turing complete. With the removal of unrestricted type projection, the loopholes are eliminated, so the previous encodings are no longer expressible. Type lambdas in the language provide a safe and more ergonomic alternative.

Extension clauses extension StringOps for String { ... }

(Pending) Extension clauses allow to define extension methods and late implementations of traits via instance declarations. They are more readable and convey intent better than the previous encodings of these features through implicit classes and value classes. Extensions will replace implicit classes. Extensions and opaque types together can replace almost all usages of value classes. Value classes are kept around for the time being since there might be a new good use case for them in the future if the host platform supports "structs" or some other way to express multi-field value classes.

Performance

The primary goal of the language constructs in this section is to enable high-level, safe code without having to pay a performance penalty.

    Opaque Type Aliases opaque type A = T
    (Pending) An opaque alias defines a new type A in terms of an existing type T. Unlike the previous modeling using value classes, opaque types never box. Opaque types are described in detail in SIP 35.
    Erased parameters
    Parameters of methods and functions can be declared erased. This means that the corresponding arguments are only used for type checking purposes and no code will be generated for them. Typical candidates for erased parameters are type constraints such as =:= and <:< that are expressed through implicits. Erased parameters improve both run times (since no argument has to be constructed) and compile times (since potentially large arguments can be eliminated early).

"

---

http://www.scala-lang.org/blog/2016/02/03/essence-of-scala.html

" What do you get if you boil Scala on a slow flame and wait until all incidental features evaporate and only the most concentrated essence remains? After doing this for 8 years we believe we have the answer: it’s DOT, the calculus of dependent object types, that underlies Scala.

A paper on DOT ((( The Essence of Dependent Object Types ))) will be presented in April at Wadlerfest, an event celebrating Phil Wadler’s 60th birthday. There’s also a prior technical report (From F to DOT) by Tiark Rompf and Nada Amin describing a slightly different version of the calculus. Each paper describes a proof of type soundness that has been machine-checked for correctness. The DOT calculus

A calculus is a kind of mini-language that is small enough to be studied formally. Translated to Scala notation, the language covered by DOT is described by the following abstract grammar:

    Value       v = (x: T) => t            Function
                    new { x: T => ds }     Object

    Definition  d = def a = t              Method definition
                    type A = T             Type

    Term        t = v                      Value
                    x                      Variable
                    t1(t2)                 Application
                    t.a                    Selection
                    { val x = t1; t2 }     Local definition

    Type        T = Any                    Top type
                    Nothing                Bottom type
                    x.A                    Selection
                    (x: T1) => T2          Function
                    { def a: T }           Method declaration
                    { type T >: T1 <: T2 } Type declaration
                    T1 & T2                Intersection
                    { x => T }             Recursion

The grammar uses several kinds of names:

    x for (immutable) variables
    a for (parameterless) methods
    A for types

The full calculus adds to this syntax formal typing rules that assign types T to terms t and formal evaluation rules that describe how a program is evaluated. The following type soundness property was shown with a mechanized, (i.e. machine-checked) proof:

    If a term t has type T, and the evaluation of t terminates, then the result of the evaluation will be a value v of type T.

... "

---

from https://github.com/zeromq/zproject/

" API Types

This is an incomplete list of API types:

    "nothing" -- for return only, means "void" in C.
    "anything" -- means "void *" in C.
    "size" -- long size (64 bits), "size_t" in C.
    "time" -- long time (64 bits), "time_t" in C.
    "msecs" -- long number of msecs, "int64_t" in C.
    "file_size" -- long file size (64 bits).
    "boolean" -- Boolean.
    "byte" -- single octet.
    "char" -- single character (possibly multibyte, do we care?)
    "integer" -- 32-bit signed integer.
    "number" -- unsigned number, with 'size = "1|2|4|8"'.
    "real" -- single-precision floating point with 'size = "4"' (default) or double-precision with 'size = "8"'.
    "buffer" -- byte array. When passing a buffer argument, if the next argument has type 'size', the binding may fill the size automatically. To return a buffer, you should specify 'size' attribute that defines how to set the buffer size. This can be a constant, 'size = "ZUUID_LEN"', or a dot followed by method name in the same class, e.g. 'size = ".size"'.
    "string" -- character array.
    "sockish" -- a variant socket type, may be a zsock_t, libzmq void *, or an actor handle.
    "format" -- printf format, followed by zero or more arguments.
    "FILE", "va_list", "zmq_pollitem", "socket" -- literally that, in C. (Not sure if it is wise to use raw C types.)
    callbacks - tbd.
    Names of classes, e.g. zmsg."

---

https://github.com/fantasyland/fantasy-land https://github.com/rpominov/static-land https://github.com/sanctuary-js/sanctuary-type-classes https://github.com/sanctuary-js https://github.com/sanctuary-js/sanctuary https://github.com/fluture-js/Fluture

https://monet.github.io/monet.js/

---

...

"Also is his explanation of monads as 'functors that can flatten' a simplification for the purposes of teaching, or is that more or less what they are?"

A little of both. Technically it is correct, but the "flattening" in question applies to many things that most programmers wouldn't consider "flattening". For instance, consider monadic IO as Haskell uses it. There is a way in which you can consider the execution of an IO value as "flattening" it, and it corresponds to the mathematical term, but it's not what most people have in mind. There's more to "flattening" than "simplifying data structures in some manner"; it doesn't even always involve what we'd traditionally think of as data structures at all, such as, again, IO.

Personally I think it is an actively unhelpful metaphor for these reasons, as it is very prone to leading to false understanding, but YMMV.

reply

tuukkah 5 hours ago [-]

Functors have fmap. Pointeds have pure. Monads have fmap, pure and join/flatten. So what's missing from "functors that can flatten" is pure, and monads are more precisely "pointed functors that can flatten".

This is the canonical source: https://wiki.haskell.org/Typeclassopedia

reply
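the flatten in code (my example): join is the flattening, and >>= is recoverable from fmap plus join:

    import Control.Monad (join)

    bind :: Monad m => m a -> (a -> m b) -> m b
    bind m f = join (fmap f m)

    main :: IO ()
    main = do
        print (join (Just (Just 5)))   -- Just 5
        print (join [[1, 2], [3]])     -- [1,2,3]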

---

leshow 18 minutes ago [-]

There's so much unanswered here, does it handle parametric polymorphism? Does it allow sum/product types? Is there inference? Can you bound generic types?

IMO it would be a complete waste of time to add a type system that only prevented the most trivial of type errors: mismatching Integer and String.

reply

samratjp 19 minutes ago [-]

Putting this out here in case if anyone finds it useful - I like a lot of good ideas from Contracts.rb such as the type signature looking more expressive (personal taste - looks Haskell/Elm ish) and support of Maybe's. Granted it's not static but def some good ideas to steal from.

https://egonschiele.github.io/contracts.ruby/

reply

---

[12]

---

monocasa 1 day ago [-]

Unlike Java, Go doesn't have to worry about bytecode backwards compatibility. That was the main issue that had them reaching for type erasure.

Take a look at C# which added it without many issues (because they were willing to break back compat of their bytecode). Go fits this model a whole lot better.

reply

pjmlp 1 day ago [-]

The missing part every time this story gets told is that Microsoft already had an ongoing generics implementation, led by Don Syme of F# fame.

They just decided to release 1.0 without generics, instead of waiting for it to be 100% ready.

As Don clearly describes on his blog.

reply

---

https://boats.gitlab.io/blog/post/2018-04-06-async-await-final/ https://news.ycombinator.com/item?id=17536441

---

" There are often times where we want to describe the least-capable type in TypeScript?. This is useful for APIs that want to signal “this can be any value, so you must perform some type of checking before you use it”. This forces users to safely introspect returned values.

TypeScript? 3.0 introduces a new type called unknown that does exactly that. Much like any, any value is assignable to unknown; however, unlike any, unknown is assignable to almost nothing else without a type assertion. You also can’t access any properties off of an unknown, nor can you call/construct them. "

---

jacoblambda 2 days ago [-]

The biggest barrier to adoption of formal verification that I have seen as someone just starting in the field (working through Software Foundations and have a number of projects planned with SPARK, Frama-C, and LiquidHaskell) is the lack of groundwork. Verifying just your own code is complex enough as it is but working with libraries without any clear specification of their interfaces and behaviours makes this so much harder.

I think there is real value in having verified libraries or at least libraries with well defined specs so that interfacing with other code wasn't so tedious. I think this issue is starting to be overcome with regard to the usage of strong type systems. Truly strongly typed languages are finally getting the libraries and communities built up so that they don't seem quite as daunting.

reply

bor0 2 days ago [-]

To me, personally, the biggest barrier was lack of a proper introduction with a lot of examples.

I try to break this barrier a bit with my upcoming book: Gentle Introduction to Dependent Types with Idris.

I am very interested in this area but it is impossible for newcomers to get a grasp of it without too much digging. Logical Foundations was OK but I was still missing the theoretical explanation ("why does this tactic work? it is magic!").

So with accumulated knowledge from IRC, forums I hope to address this.

reply

---

risto1 5 hours ago [-]

Haskell is a much more complicated language than Ocaml:

That's why Ocaml is easier to learn and use

reply

---

apparently Go is thinking about adding generics using concepts/typeclasses:

"The decision to go with concepts is interesting. It's more moving parts than I would have expected from a language like Go, which places a high value on minimalism. I would have expected concepts/typeclasses would be entirely left out (at least at first), with common functions like hashing, equality, etc. just defined as universals (i.e. something like "func Equal<T>(a T, b T) bool"), like OCaml does. This design would give programmers the ability to write common generic collections (trees, hash tables) and array functions like map and reduce, with minimal added complexity. I'm not saying that the decision to introduce typeclasses is bad—concepts/typeclasses may be inevitable anyway—it's just a bit surprising." [13] (commenting on the 'Generics' section of https://web.archive.org/web/20180829211336/https://go.googlesource.com/proposal/+/master/design/go2draft.md )

---

https://blog.acolyer.org/2016/02/05/is-sound-gradual-typing-dead/

"gradual typing introduces very high overheads, and that it’s hard for maintenance programmers to predict what the performance impact of gradual typing will be (e.g. in some cases, performance degraded by over 100x)....To make all this work, you have to insert runtime checks between the typed and untyped portions of the program. Macro-level gradual typing forces programmers to add typing on a module-by-module basis, micro-level gradual typing allows type annotations to be added to any declaration....When choosing the first module to type, you might experience anything from no performance overhead (1x) to nearly 89x performance degradation"

"