ideas-computer-jasper-jasperNotes4


huh, this looks very general, clean, and like the keywordish relations that i've been looking for:

http://www.lshift.net/blog/2010/12/22/conditional-statements-the-lambda-calculus-and-earlylate-binding

" Conditionals in Smalltalk work like this: the class Boolean has two subclasses, True and False. Each has a well-known instance, true and false, respectively [2].

The equivalent of C’s

  if (condition) {
    onefunc();
  } else {
    otherfunc();
  }

is

  condition
    ifTrue: [ self oneMethod ]
    ifFalse: [ self anotherMethod ]

How does it work? Boolean defines a method #ifTrue:ifFalse: taking two blocks as parameters. (Remember, what Smalltalkers call a block other folks call a closure. A Lisper might call the parameterless blocks that #ifTrue:ifFalse: takes thunks.) True and False in turn implement #ifTrue:ifFalse: thusly:

  True>>ifTrue: trueBlock ifFalse: falseBlock
    ^ trueBlock value.

  False>>ifTrue: trueBlock ifFalse: falseBlock
    ^ falseBlock value.

So we evaluate condition to either true or false, send it the #ifTrue:ifFalse: message, and we’re done. "
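For comparison, here's a minimal sketch of the same dispatch trick in Python (class and method names here are my own, not from the post): the conditional is just an ordinary method, and each boolean subclass decides which thunk to force.

```python
# Smalltalk-style conditionals via dispatch: booleans are objects, and
# "if" is a method taking two thunks (parameterless closures).

class BooleanObj:
    pass

class TrueObj(BooleanObj):
    def if_true_if_false(self, true_block, false_block):
        return true_block()   # True forces only the "then" thunk

class FalseObj(BooleanObj):
    def if_true_if_false(self, true_block, false_block):
        return false_block()  # False forces only the "else" thunk

# Equivalent of: condition ifTrue: [ self oneMethod ] ifFalse: [ self anotherMethod ]
result = TrueObj().if_true_if_false(lambda: "one", lambda: "other")
```

Note that no branching construct appears anywhere: the "branch" is method dispatch on the receiver's class.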

---

Disciple exceptions: " The exception mechanism in the source language is based around try/catch, and doesn't have a proper semantics. We should probably just use the GHC approach where the 'error' function creates an exceptional value instead of causing a side effect.

    If we were going to treat exceptions as side effects, then perhaps we could do something like in Benton and Buchlovsky's "Semantics of an effect analysis for exceptions" paper. "

---

" The network is reliable.

    Latency is zero.
    Bandwidth is infinite.
    The network is secure.
    Topology doesn't change.
    There is one administrator.
    Transport cost is zero.
    The network is homogeneous."

--

"

    Structured programming (in a first-order language) limits a program’s control flow graph to being constructed using a particular grammar, which makes it easier to understand what are the potential control traces in a program.
    Data hiding (encapsulation) limits which modules may access the representation of an abstract type, which means that in other modules we can reason in terms of a simpler interface, and when inside the abstraction boundary we can assume the unprivileged modules rely only on the external interface.
    Immutability prevents objects from being modified by other modules or processes, which eases reasoning about concurrency and time, and allows us not to think about how another module might modify objects that we share with it.
    Purely functional programming goes further than immutability, eliminating non-local information flows, which means that we can understand expressions in terms of their text and lexical environment, and ignore time in extensional reasoning.
    Static types constrain which types of values can flow where, which means that there are fewer possibilities to think about in case analyses, and particular kinds of errors need not be considered."
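A tiny illustration of the immutability point (a sketch using Python's frozen dataclasses, not anything from the quoted text): a shared immutable value cannot be modified out from under you by another module.

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Point:
    x: int
    y: int

p = Point(1, 2)          # shared with some other "module"
try:
    p.x = 99             # that module attempts a modification
    mutated = True
except FrozenInstanceError:  # frozen dataclasses reject attribute assignment
    mutated = False
```

Any reasoning done about `p` before sharing it still holds afterwards, which is exactly the "ignore time" benefit the quote describes.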

---

[est-ce que tu speak Jasper](query, desired type of response=bool)

---

the words "I" and "you"

hypotheticals

a way of making a request, that is, of indicating the desired response

a way to say what someone [i or you, or someone else] knows or what is unknown to them [and, branching off 'know', we then have more complex concepts such as 'understand']

a way to indicate an object, not by a noun phrase, but by specifying some properties about it, 'the thing which was here yesterday', for example

past and future tense?

plurality?

of course, there are different levels of interpretation; if text is quoted, 'i' and 'you' are unresolved, otherwise they are resolved ('i' relative to 'speaker', 'you' relative to 'listener')

kant's correspondence between the table of judgements and the categories may be useful for uniform notation; e.g. coq's and haskell's definition of data types/constructors by cases == OR, in the same way that disjunctive judgement == community (i think), and kant's hypothetical == causation link tells us that we can reuse the symbol for 'if' for 'right arrow', e.g. the Horn connective (possibly this is also the same as the symbol for subtyping, too, e.g. ontological logic? see what description logic does there)

the difference between 'i' and 'me' is like the different roles in a relation; the relation is like the verb (look at RDF), and subject/object are roles (positions in the relational operator)
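A toy sketch of that idea (all names here are my own, for illustration): model the relation as an RDF-style triple, and derive 'I' vs. 'me' from the position the speaker occupies.

```python
# An RDF-style triple is (subject, verb, object): the relation is the verb,
# and subject/object are roles, i.e. positions in the relational operator.

def render_speaker(speaker, triple):
    subject, verb, obj = triple
    if speaker == subject:
        return "I"    # speaker filling the subject role
    if speaker == obj:
        return "me"   # speaker filling the object role
    return None

t1 = ("speaker", "greets", "listener")
t2 = ("listener", "greets", "speaker")
```

The same entity gets a different surface form purely from which slot of the relation it fills.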

--

as one task (but not the only task) of philosophy is to form new language, programming language creation is rather philosophical

--

the Curry-Howard Isomorphism is somewhat related to Kant's relationship between the table of judgements and the categories

for example, sum (union) types mean a type where you don't know exactly which type you have, but you know you have one of the ones on the list. this is like Kant's category of Community. sum types are related to logical disjunction; this is like Kant relating them to Disjunctive judgements.

in other words, Kant's relationship between the table of judgements and the categories is like the relationship between the logic of types (e.g. 'disjunction' for a union type), and their meaning as types (e.g. 'you have one of these types but you don't know which'), which is like the relationship described at http://web.archive.org/web/20130702221947/http://www.haskell.org/haskellwiki/GADTs_for_dummies between normal Haskell code, and what typeclasses would look like if expressed in a uniform notation with normal Haskell code
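A concrete sketch of that sum-type reading (the example types are mine): holding a value of the union is the 'disjunction', and case analysis is how you consume it.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Celsius:
    degrees: float

@dataclass
class Fahrenheit:
    degrees: float

# A sum type: you have one of these, but statically you don't know which.
Temperature = Union[Celsius, Fahrenheit]

def to_celsius(t: Temperature) -> float:
    # Case analysis = disjunction elimination: handle each alternative.
    if isinstance(t, Celsius):
        return t.degrees
    return (t.degrees - 32) * 5 / 9
```

Each constructor is one disjunct; the function is total exactly because every disjunct is handled.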

--

by this logic, an 'implication' type would indeed seem to be subtyping, e.g. 'Q -> P' means 'not(Q) OR P', which is equal to 'not(Q) OR (Q AND P)'. Consider the relationship between Q=Integer, P=Number. Either something is not an integer, or you may assume that it is both an integer and a number.
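The propositional identity above can be checked mechanically; a quick truth-table sweep:

```python
from itertools import product

# Check that (not Q) or P  ==  (not Q) or (Q and P) for every assignment,
# i.e. the two readings of 'Q -> P' given above coincide.
equivalent = all(
    ((not q) or p) == ((not q) or (q and p))
    for q, p in product([False, True], repeat=2)
)
```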

todo why is subtyping hard when it interacts with polymorphism?

--

substance, e.g. essential vs. accidental properties:

(a) in class-based OOP systems we have this distinction, in prototype-based ones we don't

the idea that everything is of some single type of 'substance' is like single inheritance

(b) one might also relate it to what is statically known (although this is a slightly different idea than (a)); essential properties can be statically known, accidental ones cannot.

(c) how to apply this to EverythingIsAnInterface?? First, think of multiple inheritance. We can generalize the idea of essential properties as being those that are part of 'the' substance, to saying that an object (a piece of data) has multiple substances, each of which carry/imply various essential properties. Second, realize that really what we are doing here is making an epistemological distinction; if and only if the reason that we know that an object has a given attribute is due to it being a substance, then we call that attribute 'essential'. This lets us see that in this case interfaces are like classes with respect to methods specified in the interface.

In fact, this means that which attributes are essential and which are accidental is relative to what we consider to be substances. In one context (say, serializing an object), we might not know that the object under examination supports the 'dict' interface, all we know is that it has a function called 'keys' attached to it. In this context, 'keys' is an accidental property. In another context, we might know that a given variable must contain an object of type 'dict'; in this context, the 'keys' function is an essential property of that variable.

(b) is adapted to the context of dynamic languages by saying that it's not whether something is STATICALLY known, it's whether it is a property of an object's TYPE.

in the structural typing context of Jasper, typing is (mostly? or totally? what about type labels and the .types meta attribute?) equivalent to pattern matching. so essential/accidental is not so much about compile-time/run-time but about whether you may assume something because you've required it to be matchable against a pattern.
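To make the context-relativity concrete, a sketch in Python (the function names are mine): the same 'keys' attribute is accidental under a structural check and essential under a nominal one.

```python
def serialize_structural(obj):
    # Structural view: all we require is a callable "keys" attribute.
    # Here "keys" is accidental -- we discovered it on this particular
    # object; we assumed no substance.
    if hasattr(obj, "keys") and callable(obj.keys):
        return list(obj.keys())
    raise TypeError("object has no keys() method")

def serialize_nominal(obj: dict):
    # Nominal view: obj is *known* to be a dict, so "keys" is essential --
    # it follows from the substance (the type) we required.
    return list(obj.keys())

class KeyBag:
    """Not a dict, but structurally it has keys()."""
    def keys(self):
        return ["x", "y"]
```

`KeyBag` passes the structural check but would fail a nominal `isinstance(obj, dict)` test, which is exactly the essential/accidental shift described above.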

this kind of thinking probably should inform what we decide about class vs. prototype inheritance

btw the wording that "a substance IMPLIES various essential properties" is further support for the implication-as-subtyping

--

todo why is subtyping hard when it interacts with polymorphism?

" Subtyping and type inference

The type inference problem in the presence of parametric polymorphism and subtyping like in Java and C# is significantly harder than in Haskell (98) or SML. You're very unlikely to ever get anything like those FP languages' type inference. Scala is a good language to look at in this regard. It too uses (only) local type inference which is more or less as much as one can reasonably expect. That said, you can do way, way more with local type inference than C# is doing (as Scala demonstrates.) " -- http://lambda-the-ultimate.org/node/2862

--

" .. You ought to formalize the 'laws' of this particular graph theory, and explain it with node/edge diagrams and examples.

Definitely! I have one heck of a conference paper to write about this. It doesn't help that much of the existing type theory out there is based on term structure, not graph theory. However, I've done this before for Jiazzi, so I think its doable. It also turns out that any type theory that involves nominative subtyping is also based on some amount of graph theory. .. " -- http://lambda-the-ultimate.org/node/4764

"

inferred row polymorphism?

This seems to be an extension of the concept of row polymorphism to type/object/state inheritance and construction (read as a complement; I like extensions of excellent systems). That is, where row polymorphism accepts use site variance, YinYang? treats it as a point of assertion, constructing the structural definition from the implicit name-use constraints. YinYang? does for specification-definitional simplicity what RP does for use-site specification.

Then again, I've fallen for row polymorphism recently so I'm seeing its face in every crowd. Does somebody have a distinction between the systems I'm missing? By Chad Wellington at Wed, 2013-06-19 01:31


That is a quite useful comparison. I don't think YinYang? is very close to row polymorphism because it is completely nominative, but I'll try to learn more about the topic to be sure. By Sean McDirmid? at Wed, 2013-06-19 09:49


An explanation, sadly long-winded

Here's the analogy that I saw, with apologies in advance: (1) I'm very good at mixing up (mixing in?) concepts under names that are inappropriate, (2) I think in Lisp syntax so I'm not quite following your YinYang? syntactic conventions, and (3) this ended up a lot longer than I'd hoped.

From your YinYang? post: "what class an object extends depends on what it does" The analogous Row Polymorphism style: "the elements required of input types depend on names used"

I have seen the view of row polymorphism that I would characterize as "implementation-oriented", where RP is an indexing trick to expose subsets of a product type. I.e., the type (a * b * c) would carry around the map (a->1, b->2, c->3) so that a use-site requiring (a * c) could look up the appropriate indices in the type it is handed.

I'm here using RP in terms of its type-requirements inference rather than its implementation. That is, seeing a function use (a * c) tells us that it can use any type exposing "a" and "c". To repeat the example from the post:

  (def foo (a b) (+ a b))
  ; "+" must be a function of a, b; for example:
  (def + (a b)
    (_add_ a.as_number b.as_number))
  ; so now we know that "a" and "b" need an "as_number" named property
  ; thus, we can infer that "foo" is polymorphic over any types with "as_number" property

We can apply this kind of reasoning to the ThicknessListener? function:

  (def ThicknessListener (w)
    (if w.Pressed (assign w.Thickness 10) nil))
  ; "w" must have "Pressed" and "Thickness" properties,
  ; deriving type (w : (product Pressed Thickness))

What I see YinYang? as extending this RP-like name-polymorphism inference with is the capability to infer construction, and not merely to infer possible uses.

I see YinYang? extending this in two ways

    namespaced product types through the trait extension hierarchy
    inferred type construction

We can see the first (namespace names) in the trait definitions:

  (defTrait Widget () ())
  (defTrait Bordered (Widget) (Thickness : double))
  (defTrait Button (Widget) (Pressed : bool))

These imply that the named-trait "Pressed" can only be inferred in some type already declared or inferred to be a "Widget". In RP, I might make a type (say, "Cylinder") with "Thickness:double" and "Pressed:bool" properties, referring to an actual physical object made of pressed metal rather than a graphical widget.

We can see the second (inferred construction) in this hacky use of the above:

  (let (w (new Widget))
    (assign w.Pressed false)
    (ThicknessListener w))

The ThicknessListener? invocation exports the need for a type that has Pressed and Thickness properties; this can be validated against the declaration of w as a Widget type. Then, this export informs the instantiating environment that it will have to construct a type with the given traits and their initialization/methods mixed in. A (merely) row-polymorphic world would need a full declaration:

 (let
    (w (Thickness Pressed))
    (assign w.Pressed false)
    (ThicknessListener w) )

And doesn't have any native way of delineating identically-named properties. By Chad Wellington at Mon, 2013-07-01 21:56


This is quite useful, thanks!

The fact that YinYang? traits are nominal allows us to infer construction, while the structural approach has no hope of doing that (it doesn't even make sense).

But structural typing also has a problem with large libraries and code completion in general. I mean, when all you care about is if some string of bytes is equivalent, what does that mean for a code completion menu? By Sean McDirmid? at Tue, 2013-07-02 00:08

Hmm, okay.

I guess I think of row polymorphism primarily as a shorthand for composition of application. No type inference required (or allowed!) in the usual case, because typing is something of a Gordian knot under row polymorphism. Every function in the row takes a function as its argument and returns a function as its return value. Further, every function they take (and return) also takes (and returns) functions as its operands (and return values). That has already put us beyond the power of many languages' type systems to express any difference in function types.

But in a language with an absolutely complete type system, we may still have some generics with superposed distinct type signatures, either as functions in the application or as arguments/returns along the application row, or both. In that case an inference process may deduce which type signatures among them can be used.

However, you need the information from the ending type of the row, backpropagated through an arbitrary number of intermediate functions, in order to statically determine what the type-signature of the function something ought to return (and therefore the choices of the type signatures available of the function returning it and the choice of possible function types all of those can take as arguments) needs to be; the type of the argument alone is not sufficient.

The unfortunate things about backpropagating type inference on generic functions under row polymorphism are first that it is hard to guarantee that the inference process gives a unique result, second hard to guarantee that it gives a result at all without actually performing it, and third that it may require exponential compute power relative to the length of the longest application row between any two monomorphic functions.

It simply had never occurred to me to think of type inference on generic functions as part of an implementation of row polymorphism, nor even really to consider generics with multiply-typed definitions to be compatible with row-polymorphism. Now that you have drawn it out I see how the type-inference process required would be related to the type-and-trait inference here.

I'm not sure that I've ever seen that type-inference process actually implemented on a row-polymorphic system anywhere, nor worked with any row-polymorphic system that allowed generic functions at all for that matter. I'm somewhat amazed that you were even able to conceive of it.

I guess I was having a blind spot due to my preconceptions about the limited expressiveness of most type systems and the incompatibility of these paradigms. Thanks for explaining.

Bear By Ray Dillinger at Tue, 2013-07-02 00:14


"
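A loose Python analogue of the row-polymorphic reading in the thread above (this is structural typing via `typing.Protocol`, not YinYang's nominal inference; all names are illustrative):

```python
from typing import Protocol

class HasAsNumber(Protocol):
    def as_number(self) -> float: ...

def foo(a: HasAsNumber, b: HasAsNumber) -> float:
    # The use of .as_number() is the only constraint on a and b, so foo is
    # polymorphic over any type exposing that method -- constraints flow
    # from use sites, as in the row-polymorphism discussion above.
    return a.as_number() + b.as_number()

class Meters:
    def __init__(self, m: float) -> None:
        self.m = m
    def as_number(self) -> float:
        return self.m
```

`Meters` never declares `HasAsNumber`; it satisfies the protocol purely structurally, which is exactly the property YinYang gives up (in exchange for being able to infer construction) by staying nominal.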

"

The Yin and Yang of Modular Object Type Inference

Better as a working title. Here is the working abstract to go with it:

    Type inference does not deal well with languages that support nominal subtyping: only local type inference often works reliably without costly global analyses. We present the YinYang language where all program term and object types are inferred with full support for encapsulated mutable fields, nominal subtyping, genericity, easy overriding, downcasting, along with local reasoning about type errors and other forms of semantic feedback. Inference supports these good OO features by forgoing typical unification that computes types, principal or otherwise, and instead by tracking type constraints in term assignment graphs that modularly deals with fields. Required mixin-like trait extensions are then propagated along these graphs towards assignees, providing a basis to form objects and provide semantic feedback; e.g. detect incompatibilities. This paper describes our design, trade-offs made for feasibility, and implementation. 

By Sean McDirmid?