ideas-computer-jasper-jasperTypes

see also jasperTypesLinks.txt

Not sure if the Jasper type system has principal types. Apparently the ability to represent GADTs destroys principal types, but i don't understand what GADTs are: http://www.haskell.org/haskellwiki/GADT

Since we are more focused on types as a bag of unrelated properties, perhaps we don't have principal types. I'm not sure.

things that make type systems confusing

the Haskell type class defn syntax

"class Eq a where " looks like a is a type parameter of Eq. (btw does haskell allow a destructuring bind on a in that statement?)

http://stackoverflow.com/questions/3467279/how-to-create-a-polyvariadic-haskell-function

Hmm, the more i think about it, the more i think that we should ban type inference from determining ad-hoc polymorphism which is polymorphic only in the return type (see [1] for more reasoning on that). This won't make the language any less powerful than, say, Python: in Python there is no type inference, so in order to make an ad-hoc polymorphic choice, the class of the object that you are dispatching on must differ (and that object would correspond to an input value in Haskell, and probably in Jasper). So i'm proposing that in Jasper, you can have all the type inference you want; it's just that type inference won't be allowed to drive ad-hoc polymorphism from the "future" (from the types of the outputs).

Or, try THIS one on for size: the statically known (at the time of compiling the typeclass) kinds of the return types of different ad-hoc poly fns of the same inputs may not vary. Hmm.. but that sounds like you can't have two instances of the same kind in two different typeclasses. For example, it would prevent the empty homogeneous list from being both an empty list of ints and an empty list of fns, at different points in the program. No... it just prevents the existence of a single fn, 'empty', that returns that. You can still have Int.empty and Fn.empty; or even, empty(Int) and empty(Fn). So, you can't have a single fn, 'empty', that sometimes returns statically known lists of ints and sometimes statically known lists of fns, on the same inputs. This should prevent any type-level inductive hooey like Haskell's variadic fns, which use the static knowledge of how many times the result of a fn will be applied in the future (which seems like a 'syntactic' thing) to alter the behavior of the function over the various applications. It still allows 'messages from the future', that is, it allows the compiler to automatically select among multiple instantiations of return-type-only-ad-hoc-polymorphic functions based on the expected return value, but it guarantees that all of the possibilities will be of the same kind (where kind incorporates the notion of 'syntactic' difference here)
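to pin down the 'empty' example, here is a minimal Haskell sketch (class and instances invented for illustration) of return-type-only ad-hoc polymorphism, where inference "from the future" selects the instance:

  {-# LANGUAGE FlexibleInstances #-}

  -- the instance is chosen by the type the *result* is used at,
  -- not by the type of any argument
  class Empty a where
    empty :: a

  instance Empty [Int] where
    empty = []

  instance Empty [Int -> Int] where
    empty = []

  -- the same expression 'empty' at two different result types:
  demo :: ([Int], [Int -> Int])
  demo = (empty, empty)

under the proposal above, Jasper would restrict this kind of dispatch (e.g. forcing Int.empty and Fn.empty instead).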

.

example from http://kawagner.blogspot.com/2006/12/my-haskell-experience.html

" Lets have two simple, small functions:

giveElements :: (a -> Bool) -> [[a]] -> [[a]]
giveElements condition = filter (detectElement condition)

detectElement :: (a -> Bool) -> [a] -> Bool
detectElement = any

The above seems a bit useless but I've extracted it from its surrounding code to show the problem. The 'giveElements' function is able to extract certain elements from a list and returns the filtered list. The 'detectElement' function does the job of deciding which elements are in and which are not. It does this by checking if any element of a list fulfills a certain condition.

So short, so nice, but after some work on your program you discover that your 'condition' function requires access to outside state to do its job. Maybe the function has to do some small modification of some data or should simply write a short message into a log. To allow this, you have to rewrite the above functions to use a monad. For this example let's simply use the IO monad here. Now the functions become:

giveElements :: (a -> IO Bool) -> [[a]] -> IO [[a]]
giveElements condition = filterM (detectElement condition)

detectElement :: (a -> IO Bool) -> [a] -> IO Bool
detectElement c els = do
    r <- mapM c els
    return $ or r

The first function still looks quite similar, only the 'filter' function is replaced by 'filterM' now. 'filterM' is the monadic version of filter and gives the monad back from the condition. So without filterM the state won't get up from the detectElement function (and of course there would be a type error).

detectElement looks a bit more different. The reason is that there is no monadic version of 'any'. So I use a monadic map (the mapM function) to create a list of Bools by applying the condition to each element and then check if at least one of the results is true. This doesn't require more evaluations of c than the first version because of lazy evaluation, but it still required a complete rewrite of my code. It's still readable, but not as much as before.

...

And if you have more complex functions the above translation process from a non-monadic into a monadic one is much more difficult and IMO it also destroys the clarity of the code. Also state monads have bad runtime characteristics because of their 'inside-out' evaluation, which can fill up the stack quickly. You can use IORefs instead - but then you can't simply transform your program to use continuations anymore, for example. And if you put an IO monad into a state-monad, you have to lift all IO-accessing functions via liftIO.

And what's the point of having a pure functional language, going thru lots of trouble to maintain the purity, only to get things like IORef which you tried to get rid of in the first place? If you design a program to use IO monads from the beginning everywhere, then you have fewer problems maintaining and expanding it - but then you're using Haskell like a normal imperative language. What's the point of Haskell then? ...

It's not a problem of functional programming in itself, it's a Haskell thing: While monads are quite powerful in principle, IMO they are a pain to use in practice. Because monads show up in the return type of functions, the easy composition of functions which is the prime reason for the power of functional programming only works as long as all those functions use (compatible) monads, and sadly many functions simply don't exist in this form. "

hopefully Jasper's attribute types will allow the monad-ness to be treated as orthogonal to everything else.

---

ok, whether or not we have principal types, i think languages to date have focused too much attention on them. Martin Odersky says http://www.codecommit.com/blog/scala/is-scala-not-functional-enough#comment-4162 "As to combining Hindley/Milner and subtyping: In principle it’s possible, but in practice it does not work so well. The problem is not so much that you end up too often with Any, but that inferred types become impractically large.". i'll have to look at scala to see what he came up with instead.

anyhow, my point is that some extant powerful type systems, like Haskell's, force the programmer to raise the level of abstraction but still to be very precise. You end up with long type expressions because of this. Also, you have to take all the facts about the type of an object and put them together into one long type expression, making sure that they are composed together in the proper fashion. For example, monads and monad transformers; the programmer thinks "this is a list of ints, and it is in the IO monad". To the programmer, these are two orthogonal pieces of information about the type of the object, but in the type system, one wraps around the other, and in a certain order (and it gets worse with monad transformers).
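e.g., a minimal sketch (using Haskell's mtl) of the wrapping: three facts the programmer thinks of as orthogonal -- "list of Ints", "does IO", "carries Int state" -- must be nested in one fixed order:

  import Control.Monad.State  -- from the mtl package

  action :: StateT Int IO [Int]   -- state wraps IO wraps the list type
  action = do
    n <- get                                 -- the state part
    lift (putStrLn ("state: " ++ show n))    -- the IO part, manually lifted
    return [1 .. n]                          -- the list-of-ints part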

btw i'm sure someone has used the name "attribute types" before but it's probably something different from what i mean.

with attribute types, we never try to make the programmer interact with a single expression containing all the type information about a value (although they can query the compiler for that during debugging if they want). For example: this value is a list of something; this value is in the IO monad; this value is a list of ints. The programmer can look at any of these three as the type of the value in different contexts (note that the third implies the first).

---

in jasper, what haskell calls 'existential types', meaning type expressions which don't completely determine a type, e.g. 'some type with a Show method', will be the rule, not the exception. this allows things like heterogeneous lists. i think the usual problem with this is (esp. if you feed such a list through a function which reshapes it) that you lose the typing information on what the individual items were, and all the type system knows is that they are elements of the heterogeneous list, with only whatever properties are common to all items in that list. then how do you polymorph on individual elements later?
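for instance, the standard Haskell encoding of a heterogeneous list; once wrapped, each element retains only "has a Show method", and the concrete types are forgotten:

  {-# LANGUAGE ExistentialQuantification #-}

  data Showable = forall a. Show a => MkShowable a

  hetero :: [Showable]
  hetero = [MkShowable (3 :: Int), MkShowable "hi", MkShowable True]

  -- all we can do with an element is what Show licenses:
  render :: Showable -> String
  render (MkShowable x) = show x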

e.g. if you construct a structure where you know that the first item in each list is a function reference, and all the rest of the items are integers, how do you get the type system to let you apply the function reference later? all it knows is that these are each members of a heterogeneous list. i think the Right Thing for this case according to languages like Haskell is to create a data type (ADT) that specifically has a fn in one field and then a list of ints in another. But this imposes the type system on the programmer; in addition to the task of writing the program, they have the new task of explaining it to the compiler. And what if your knowledge about the structure of the list is hard to express? what if you know that the list is a valid AST for some language; do you want to translate the entire semantic grammar of that language into Haskell's type system?
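i.e. something like this (names invented for illustration):

  -- encode the known shape as an ADT instead of a heterogeneous list
  data FnThenInts = FnThenInts (Int -> Int) [Int]

  applyAll :: FnThenInts -> [Int]
  applyAll (FnThenInts f xs) = map f xs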

in jasper we want to make the task of explaining the program to the compiler optional. the 'promise' assertion will help here -- 'promise' tells the compiler to take something as a given (and to check it at runtime, unless runtime promise checks are disabled). This helps because you can tie promises to positions in code, rather than have them be universal truths about data types -- 'if we are here in the code, then you may assume this'. This saves you from having to construct a new data type for every state of knowledge that you have about your data (e.g. 'if this node of the AST is under a FOR-LOOP node, then the second child is an iteration variable symbol'), and from having to encode all your knowledge about the valid state transitions into the type system (e.g. an AST node of type AST-FOR-LOOP's second child is a node of type AST-UNDER-FOR-LOOP-ITERATION-VAR). Instead, at the relevant point in your code you say 'we must be in an AST node corresponding to a for loop iteration symbol; i promise that this node is of type Symbol'. This makes the program harder to reason about but easier to write, and even easier to read if the reader isn't an expert at the type system. It is equivalent to having weak typing/allowing unsafe type coercion, like in C.

i think the other usual problem is: if you pass elements through a function that only demands an existential type, and which creates new elements, you must distinguish between functions which guarantee to create new elements of the same type they were given, and functions which guarantee only that the new elements will satisfy the existential type.

---

i guess our type system would be described as

a type and effect system, optionally static, weak, inferred, structural (with optional nominal types)

---

in Haskell, existential types let you define a type which doesn't say 'this is a list of Ints', but rather something like 'this is a list of objects whose types are in the Show typeclass'. In Jasper all types are existential because 'everything is an interface'; you describe the properties that a type in a certain slot has, e.g. the interfaces that types belong to, not the types themselves.

--

although it should not be mandatory to state which exceptions your code might throw (that's the compiler's job), it should be possible for a caller to insist that you throw only a subset of a given set of exceptions, and that anything else is a type error

--

could use Coq for type assertions and proving stuff about Jasper programs, in the same way that Shen uses Prolog

see also http://adam.chlipala.net/cpdt/html/Intro.html , http://www.cis.upenn.edu/~bcpierce/sf/index.html

would eventually have to port Coq into Jasper. note that Coq's non-Turing-completeness (via requiring 'obviously decreasing' structural recursion) is key.

--

hmm.. in addition to the restrictions contemplated above to prevent http://stackoverflow.com/questions/3467279/how-to-create-a-polyvariadic-haskell-function , might want to have more restrictions on global type inference than on local (see http://lambda-the-ultimate.org/node/2862 ).

--

taints and dataflow analysis (?)

the type system can be extended to detect 'taints' or 'untaints'. an untaint is the negation of a taint. a taint can be defined either as a whitelist (anything using only operations in the whitelist is untainted; anything built only from untainted operations is untainted; everything else is tainted) or a blacklist (anything using any operation in the blacklist is tainted; anything built from any tainted operation is tainted; everything else is untainted).
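a minimal sketch of both inductive definitions over a toy expression type (all names invented):

  data Expr = Lit Int | Op String [Expr]

  -- whitelist style: untainted iff built only from whitelisted operations
  whitelist :: [String]
  whitelist = ["+", "*"]    -- hypothetical untainted primitives

  untainted :: Expr -> Bool
  untainted (Lit _)      = True
  untainted (Op op args) = op `elem` whitelist && all untainted args

  -- blacklist style: tainted iff any blacklisted operation appears anywhere
  blacklist :: [String]
  blacklist = ["writeRef"]  -- hypothetical tainted primitive

  tainted :: Expr -> Bool
  tainted (Lit _)      = False
  tainted (Op op args) = op `elem` blacklist || any tainted args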

Note that the taint system has negation at the top (taint = not untaint) and at the bottom (the base case can be a whitelist or a blacklist), but the bottom must be all one polarity or all the other, not both. This should simplify things by making the system unable to express all boolean functions. I'm hoping that most of the type inference system can use positive logic, perhaps with a few extensions to deal with stuff like this.

note that the necessity of making a blacklist complete would require Jasper to fix a set of primitive operations that everything is compiled to. An alternative is to first design the system to be fully expressive, but then to only support a published list of primitive properties that are guaranteed to be defined correctly (e.g. blacklisted if necessary) for every language primitive (stuff like side-effect-less/side-effect-ful, deterministic (the other part of referential transparency), reads, writes, moves, null). This achieves the goal of giving the language the right sort of machinery, which can be repurposed to other tasks.

a taint can be calculated based on the operations used to produce a value (e.g. 'side-effect free' is an untaint of this sort; a function built out of side-effect free parts is side-effect free), or based on all operations applied to a VARIABLE (e.g. 'unique' is an untaint of this sort; the unique value could be 3 or it could be a function, but what matters is that if it is ever read into another variable, then the original value is never used anymore; another one is 'const'). these two are related; e.g. if we want a variable containing a reference to be immutable, then we can only apply immutability-preserving ('const' in that variable) functions to it; but a const function is a taint of the first kind (defined inductively, because a function that only applies const (in that variable) functions to that reference, and doesn't use any primitive mutation ops on it, is itself const).

we need ways to express: (a) that a value or variable has the property (e.g. immutability), and (b) that an operation preserves or breaks the property (is whitelisted or blacklisted) (e.g. a function that does not write to a reference is 'const' in the D language, meaning it doesn't break immutability)

perhaps this is not yet enough. We should be able to express D's transitive const (http://dlang.org/const-faq.html). We should be able to express Rust's pointer type system ( http://static.rust-lang.org/doc/tutorial.html#boxes-and-pointers ). We should be able to express [2].

---

swkane had a good idea that i summarized as:

" that's an interesting perspective that i hadn't considered; let the types stay complicated but try to find a way to make readable labels for them so you only have to dig into the complexity when you need to. "

---

http://www.haskell.org/haskellwiki/GADTs_for_dummies ( http://web.archive.org/web/20130702221947/http://www.haskell.org/haskellwiki/GADTs_for_dummies ) brings up an excellent point: type classes are like arbitrary functions on types, with normal Haskell stuff like pattern matching, algebraic data types (multiple constructors), guards, etc, except with a confusing relational (rather than functional) syntax.
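a rough illustration of the relational-vs-functional point, writing type-level addition both ways:

  {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, TypeFamilies #-}

  -- relational style: Add is a three-place relation between types, and a
  -- functional dependency is needed to recover function-like behavior
  class Add a b c | a b -> c where
    add :: a -> b -> c

  -- functional style: Sum is literally a function from types to types
  type family Sum a b
  type instance Sum Int Int = Int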

this brings up the obvious point: Jasper could be like this but use normal syntax

seems like there is really a lot of mileage to be had just by delivering a uniform notation for data, code, patterns, types, typeclasses

--

remember that since type signatures are optional and principal types are not the core concept, you need a way to add type signatures to someone else's library code (without modifying their source code)

---

i guess maybe the reason why e.g. Haskell has an idiosyncratic syntax for typing is that they want to syntactically prohibit things they don't know how to automatically typecheck in a decidable, relatively complete way (relative to some nicely characterizable subset of dynamically valid programs)

GHC's typechecking algorithm has expanded over the years as progress was made

i guess i think it would be better to have a very simple, uniform syntax, simple assumptions, and compiler type errors that are simple enough to use in debugging, and then allow for the case that sometimes the typechecker simply won't be up to the task, and the programmer must use assumptions to bridge the gap

--

see also PrincipalTypingConsideredHarmful?

--

the idea of unifying the type language and the normal language is discussed here:

http://lambda-the-ultimate.org/node/1085

some interesting pointers from that page to check up on later:

--

http://lambda-the-ultimate.org/classic/message6587.html#6615

--

book on the theory of coq:

http://www.amazon.com/kindle-store/dp/B000QCUCZU#reader_3642058809

interestingly this sort of asserts that Coq is the first real useful language that makes practical use of Curry-Howard:

http://lambda-the-ultimate.org/node/1532#comment-17865

"

    Would it be possible to take an arbitrary mathematical theorem (let's say: "string concatenation is associative", or Fermat's little theorem) and express it as a type?

Yes, and in fact this is exactly what happens in automated theorem provers based around constructive type theory, or the calculus of (co-)inductive constructions. A good example of this is Coq. I would suggest reading through some of the tutorials/introductory documentation if you wish to learn more. Also, there is a good book on the topic (on Coq, Curry-Howard, and related issues): here. "

--

the roots of type theory lie in the effort to avoid things like Russell's paradox in the foundations of mathematics [see Stanford encyclopedia of philosophy]. The idea is to restrict the set of valid terms of a logic by putting type requirements on operators. You structure the requirements to prevent operators from applying to themselves or to things like themselves that can indirectly cause similar problems.

In some systems the way you know a term is invalid is that such a term has no type. The type system presents axioms and inference rules to make "type judgements" which assign types to terms, and if no type can be assigned to a term, it must be invalid. In this way the type system specifies a subset of otherwise acceptable terms.

For example, in simply typed lambda calculus, the predicate "lambda x . not[x x]", namely, the predicate of predicates that don't apply to themselves, has no type.

This semantics is problematic for Jasper's notion of type attributes because the idea here is that we can have terms that we don't know much about; terms for which we have only partial information, or no information, about what type they are. For example, the Jasperian approach to the above "bad" function would be either to say, "you can't apply a predicate to itself" [which still leaves open the door for more cleverly designed paradoxes], or to say, "i can't figure out the type of this and that's okay", again leaving the programmer to fend for themselves against this sort of paradox.

e.g. if someone tried to define the function "lambda x: x x", jasper would let them. Applying this function to itself would cause an infinite loop, but so what?

Surely there is some way to finesse that sort of usage paradigm onto conventional type systems, but this indicates that they may not be an appropriate basis for Jasper. Especially because the whole original reason for their existence [e.g. to prevent Russell-ish paradoxes] is functionality that we are willing to do without. While we do want a logic to make type judgements on terms, we don't want to equate 'cannot be typed' with 'badly typed'. [Pierce's definition still seems on-target, see Wikipedian defn quote]

otoh i guess we do want our type system to be consistent, even if we don't care about ruling out paradoxes in the object language, so we might need a little type theory just to ensure that. However, another approach would be to allow paradoxes and make our type logic paraconsistent, or something similar [e.g. is it paraconsistent to say that if we can derive both that x is of type t, and also that x is not of type t, then x is not of type t? i feel like there's some better way to phrase that :)]

remember also that perhaps the real challenge is that we want to allow the user to extend the type system. so even if what we provide is consistent, the user could add something inconsistent.

Also, recall that we want to allow the user to easily add things like "unique" [cannot copy] and "immutable" [cannot write to], as well as ownership types a la milewski [and wasn't there something else with a different form too, in terms of where the quantifier and where the 'not' was? todo]. Note that these are certainly over terms, not over base-level values [unless in our language all terms are values, like i guess in Haskell], as "unique" isn't really a property about what kind of value something is, rather it's about what operations you are allowed to apply to it.

it does seem as if there's one world in which you have functions applied to value objects like numbers, giving you values, and another in which you have terms being inductively generated and reduced by rewrite rules [implications], giving you something like grammars/proofs [methods which produce ever-larger sets of 'valid' terms by iteration]. Perhaps this second world is the pattern-matching/logic/type system part of Jasper. But where does probabilistic logic fit in here? Here terms are not either derived or not derived, they are valued.

--

consider using sequents as the fundamental logical operation

---

took a very brief look at http://existentialtype.wordpress.com/2011/04/16/modules-matter-most/ and i agree:

" More disappointingly, for me at least, is that even relatively enlightened languages, such as Haskell or F#, fail to support this basic machinery. In Haskell you have type classes, which are unaccountably popular (perhaps because it’s the first thing many people learn). There are two fundamental problems with type classes. The first is that they insist that a type can implement a type class in exactly one way. For example, according to the philosophy of type classes, the integers can be ordered in precisely one way (the usual ordering), but obviously there are many orderings (say, by divisibility) of interest. The second is that they confound two separate issues: specifying how a type implements a type class and specifying when such a specification should be used during type inference. As a consequence, using type classes is, in Greg Morrisett’s term, like steering the Queen Mary: you have to get this hulking mass pointed in the right direction so that the inference mechanism resolves things the way you want it to. "


copied from elsewhere:


"

Lessons learned from previous work

There are many commonalities in various approaches to race freedom.

    Ownership should be built into the type system. In OO languages each object must have an owner. In C-like languages each piece of data must have an associated lock.
    There should be efficient mechanisms for passing data between threads
        By value
        By reference. (The object being passed must be a monitor.)
        As immutable data, by const reference
        By unique reference using move semantics

Most of those ideas are expressible through type qualifiers. "


btw the 'promise' vs. 'demand' thing w/r/t const vs. immutable and lent vs. unique (lent should be called noalias or somesuch) seems pretty common (wasn't this also the deal with Haskell's existentially quantified types, or maybe with Haskell typeclasses vs. Java interfaces? or were those other things?). we should have syntax for it, so you can just say 'i promise it will be immutable' and 'i demand it be immutable' without learning two keywords for each of these cases and remembering which one is which (or at least, have a keyword naming convention like promiseImmutable / demandImmutable)

---

...


hmm if patterns define types. thinking about logics for patterns. the usual tradeoff b/t expressiveness and feasible inference.

since types are in a hierarchy, this suggests description logics.

i feel like we can cut out negation somehow. but not quite sure..

some types, like 'immutable' or 'functionally pure', are 'purity' types, meaning that they are true whenever some condition of impurity never obtains in some transitive/data-flow-y manner.

others, such as Integer, are 'classy' types, defined as subsets of other classy types. in this case they aren't true unless you have a reason to believe that they are.

so it seems like you may want to write some notation and then put a 'not' in front of it. but you'll still never need a 'not' inside of it.

mb logics without negation are called "positive"?

i think the 'purity' types correspond to positive 'impurity' types, e.g. 'mutable', which we can use negation-as-failure semantics on, e.g. if you never see any code that would mutate this value along the dataflow, you may consider it immutable.

description logics have negation, and are PSPACE-complete (at least as hard as NP-complete), so that's pretty hard (but hey, at least it's decidable)

positive first-order logic is NP-complete (but hey, at least it's decidable) ( http://ecommons.library.cornell.edu/handle/1813/7017 , "First Order Predicate Logic Without Negation is NP-Complete" by Dexter Kozen)

do we really need both universal and existential quantifiers in our patterns, though?

hmmm... if there exists any mutation along the dataflow, then the variable must be mutable, so there's a there-exists. and a list of ints must contain all ints, so there's a for-all. but perhaps these are two different kinds of reasoning? e.g. it doesn't seem like you would ever need a forall if you are looking at a list of known types and trying to deduce new types; e.g. if the thing has type Int and type single-word-in-memory, you don't need to look at both of those to deduce type Number, that's just a there-exists. similarly, you might want the type of a list of ints to say each element is an int, but you don't care about a list of things-with-at-least-one-int, so there you only need forall, not there-exists.

but just thinking about graph patterns, sure, sometimes you want to match a node which has at least one arc going to (Int or Real or Complex), and sometimes you want to match a node whose arcs all go to Int.

mb just that you can have either forall or thereexists but not both on one expression?

but what if you want to look at a list and see if it's a list of ints; you look at every item in the list (follow each arc), look at each of their types (follow the type arcs in the meta view), and then look for at least one of these type arcs to terminate at Int. But this is saying, forall arcs in the list, there exists a type arc from the target of the initial arc that targets Int. So we are mixing them.
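a toy Haskell rendering of that mixed statement, just to pin down the quantifier structure (the graph representation here is invented):

  data Node = Node { name :: String, arcs :: [(String, Node)] }

  -- forall arcs in the list, there exists a type arc from the arc's
  -- target that terminates at Int
  isListOfInts :: Node -> Bool
  isListOfInts list =
    all (\(_, el) ->
          any (\(lbl, tgt) -> lbl == "type" && name tgt == "Int") (arcs el))
        (arcs list)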

we want OR; if something is an Int or a Float then it is a Number.

and i guess the last statement shows that we want Implies too?

could we possibly get rid of AND?

hmmm.. mb the two different activities above are: reasoning from the types to allowed values, vs. reasoning from program code to types.

e.g. reasoning from program code to types, we see that if there is a possible mutation, we must have type Mutable.

reasoning from the types to allowed values, we see that a list of ints must contain all ints

e.g. if you see a single non-int go into that list, then you deduce the impure type "non-list-of-ints", which is like Mutable.

similarly, from Immutable, you deduce that for every step in the dataflow, there is no mutation.

i think i may be conceptualizing this wrong. first off, for straight-up graph MATCHING, the problem is simple (this is like checking if an assignment to variables satisfies some set of FOL statements). the inference complexity comes when you are not presented with a single graph, but rather with a statement in this language (a statement about the type of the graph, i.e. a graph pattern), and you must deduce merely from that whether it will match another graph pattern, without reference to a single particular graph.

also, when i say that we want to deduce that because something is one type, it is another type, the types form a lattice. we can require the users to explicitly construct the rungs of this lattice, we don't have to infer them from the definitions of the types.

also, when i say that the compiler should infer whether matching this graph pattern suffices to show that that graph pattern will be matched, we can require a user to use the assertion facility to write a proof of this for us; the compiler is only checking the proof, not writing it. so if the problem of finding the proof is in NP, then of course the problem of checking the proof is in P.

so mb it doesn't matter if theorem-proving in this logic is NP-complete.

mb we just tell the programmer:

wait a minute... i think i am misunderstanding quantifiers here.. we won't have statements like 'for all arcs a from node n, and for all arcs b from node n, a = b'. we'll have statements like 'for all arcs a from node n, let the target of the arc be m; for all arcs b from node m, ...'. we'll only quantify once at each level. so no, the quantifiers won't be exchanged. but used this way, these aren't really FOL quantifiers. we can't express things like 'for all arcs a from node n, there exists an arc b from node n such that b's label = a's label + 1'.

also, the reachability is like * in regexes, leading to the analogy that the reachability component of these patterns is like a regex over linear paths instead of over linear strings. a character in a string has a single successor, but a node in a graph can have many (i.e. the outgoing arcs; or even the ingoing ones, i see no need to require the programmer to always respect directedness here, although that may be a good default, and some types of graphs may not permit the 'what links to here' query). so we need forall and thereexists to choose how to handle the multiple (Kant would say manifold) successors.

remember that we need to ask things about the reified edges, too, like what their label is

so i guess the type system will accept two kinds of definitive statements; "type A is a subset of type B" and "if something matches this pattern, then it is of type B". If you say "variable V is type B", then the system will enumerate all known types which are subsets of type B, and go through all known "if something matches this pattern, then it is of type T" for each such type T, looking for a match.

note: semantically, giving a pattern with "C or D" for type B is the same as saying "type C is a subset of type B" and "type D is a subset of type B". However, the system may not bother to infer this. (or will it? perhaps it should try replacing each type in the pattern with all of its subtypes..? or can we leave it up to the user to assert those things? no, if you have a type of 'list of numbers' and you assert that a list of Ints is of this type, that assertion should typecheck without you having to explicitly say it is a 'list of numbers' first. so it does try to match each type in the pattern against any subset of that type; but what it does not do is try to match the patterns of each of those subset types (because then we'd be recursing..). so if complex numbers are defined as things of the form (float, float), and you have a list like [(float, float), (float, float)], but you haven't asserted that the (float, float)s are complex numbers, then this list won't typecheck as a list of numbers ([Num]), because although it is known that complex numbers are a type of number, the system is not going to bother to match these patterns.)

if you have a 'list of list of numbers', and you give it a list of list of ints, this should typecheck, but why? because it substitutes the 'list of numbers' match structure directly into the list match structure at the top. so it'll substitute in match structures of things directly mentioned, but not their subset types.

note: this way we never have to actually go into the 'type' metanode in our patterns for typechecking (except mb if we are doing something weird, not sure if we NEVER have to..)

how can we make the syntax for these patterns like [Num] for list of numbers? i guess that's just universal quantification. but mutability is existential..

with mutability: i guess it's important to be able to say that fns by default mutate unless we say they don't.. so we allow negations at both ends of the pattern (at the atomic elements, and at the top), just not in the middle.

hmm.. also should have an 'exactly one' operator (in addition to forall and exists), an 'is-injective' operator (no two arcs have the same target), and an 'is-function' operator (no two arcs have the same label).

hmm, now that we aren't doing inference, why not allow FOL anyways?

also remember that we need a cons operator.. and i guess we need the analog of cons for graph nodes (i guess it's 'add this edge to this node'); probably other more graph-y ones will arise.. the cons operator and its like must be able to appear on the left hand side of function definitions. Haskell used : and Shen used

.

also need parametric types; e.g. "swap(a,b)" of type A,B -> B,A where A, B are type variables
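for comparison, the same thing in Haskell notation:

  swap :: (a, b) -> (b, a)
  swap (x, y) = (y, x)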

hmm.. principal types are too restrictive (since we want to uniformly treat type qualifiers such as 'unique'), but mb each node should only have one type when seen from each perspective?


@@ could refer to the last mentioned perspective (lexical scope), sort of an 'ibid' for Jasper


mb -f means 'negation of f' (negation of True is False, negation of 3 is -3), and f- is 'inverse of f'

---

messages to handlers are a graph with a set of nodes distinguished as 'headers'. handlers match on any of the headers, and then get the graph, with that header as an entry point

---

another fundamental attribute of nodes is reverse arcs, that is, 'what links here'

---

http://www.shenlanguage.org/learn-shen/prolog.html

---

http://clojure.org/Protocols

http://clojure.org/datatypes


easy for loops like python enumerate
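for reference, the Haskell counterpart is just zipping with an index:

  import Control.Monad (forM_)

  main :: IO ()
  main = forM_ (zip [(0 :: Int) ..] ["a", "b", "c"]) $ \(i, x) ->
           putStrLn (show i ++ ": " ++ x)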

---

types:

there are three kinds of properties associated with types. one is propositions that hold true for every value of the type; these are things like "list of Ints". the second is propositions that hold true for every access of a value of the type; these are things like "unique" (seen as 'copy never applied to this') and "immutable". the third is things about how multiple accesses to the value relate to each other, or how access to this value relates to access to other values (this third definition is unsatisfactory because it is so general). included in this are things like 'unique' (seen as 'this variable's value cannot be changed without accessing it directly'), 'deterministic' (vs stochastic), 'no side effects', 'purely functional'.

a suitable notation for the first one need only use positive connectives, and possibly quantifiers, over values, with subtyping.

a suitable notation for the second one need only mark each primitive operator with respect to the type, and then confirm that no negative primitive operator of the type is ever applied to a variable containing the value (e.g. if you want it to be immutable, then no mutation is ever applied)

i don't know how to capture the third one simply, since defining 'no side effects' seems to be impossible without a language for the semantics of the programming language. of course, 'no side effects' can be treated as the second case if we allow it as a primitive.

---

should name those. mb 'value types' and 'expression purity types'.

---

types can have properties, e.g. conditions, and these can be either necessary or sufficient or both. necessary: every value of this type satisfies this condition; or, in the second case above, every access to every value of this type satisfies this condition.

sufficient: if there is a value satisfying this condition, or if there is a variable whose accesses satisfy this condition, then it is of this type.

---

in fact, we should have a general notation that lets us write 'necessary', 'sufficient', and 'iff'

---

should we allow types whose semantics aren't fully defined, that is, ones for which a list of sufficient conditions is not provided? yes, because this is how we use the type system to attach arbitrary tags and qualifiers for use in metaprogramming.

---

note that we need to attach tags both to locations in the expression tree, and also to types.

mb ~ can be expression tree tags, not needing ^ also? or are footnotes different from arbitrary tags?

--

i guess if we use -> for logical implication then move should be :=

--

compiler/interpreter option to say if the necessary conditions of supertypes of the type of something are checked

--

'promise' has a different meaning for 'expression purity types' than its usual meaning; we are only promising that the expression purity holds within the function we are defining, not in general; but when we demand it, we demand it for all past and future, over all functions.

for value types, 'promise' has the usual meaning.

--

want a generic way to add modals and quantifiers to statements. 'promise' and 'demand' are modals.


mb should look at

http://en.wikipedia.org/wiki/Category:Logic_symbols

http://en.wikipedia.org/wiki/Modal_operator

interesting application of modal operators in http://en.wikipedia.org/w/index.php?title=Modal_operator&oldid=503040598

--

if Jasper is general enough it should enable us to give commands that people would describe as rather philosophical, rather than practical computer programs that we know how to execute. E.g. the 'immortality program':

   Search for a program whose semantics is extremely similar to the input-output relationships of the agent known as Eugene Baylis Shanks III, who has the attributes in the language English "human being born in Tennessee in 1979 C.E. on the planet Earth in this universe". If such a program is found, execute it.

This contains all sorts of complex ideas, all of which however are generic enough that Jasper (or its std libs) could include them: search (without specifying how to carry out a search; e.g. looking for a program on the Internet or deterministically searching the space of all programs, or stochastically searching the space of all programs), a program, the semantics of a program, similarity, an agent, a name, the idea of attributes specified in a foreign language, a conditional on the result of search, the execution of a program.

note that placing the concepts of 'search' and 'similarity' in a std lib means giving Jasper the ability to express ontologies (not necessarily baking a large ontology into Jasper, however, although i think a bit of that would be a good idea also)

btw i used 'is' instead of 'are' on purpose because 'the semantics of a program' SHOULD be a similar concept.. perhaps as noted earlier, in this context 'semantics' is interchangeable with the word 'meaning'..

---

end copied from elsewhere

---


"

Defining New Types

The real innovation in Qi is the use of sequent notation to define types. This is an enormously powerful means of defining types which gives Qi capacities beyond ML or even more recent languages such as Haskell. The use of sequent notation derives from Gerhard Gentzen who developed the sequent calculus treatment of first-order logic. In Gerhard's system a sequent is a pair composed of two lists of logical formulae called the antecedent and the succeedent respectively. The sequent is valid if the conjunction of the elements of the antecedent logically implies the disjunction of the elements of the consequent. The concept of logical implication is defined by laying down a series of sequent rules that say how sequents may be proved.

If the succeedent is restricted by only allowing one element in the list, then the system is a single-conclusion sequent calculus system; which is the kind that Qi is designed to represent. Qi requires that user-defined types be defined in terms of sequent rules. In other words, to define a type in Qi is to say how objects can be proved to belong to that type.

For instance, suppose we wish to define a type binary composed of all non-empty lists of 0s and 1s. Using sequent notation we would write:

if X e {0,1}
X : zero-or-one;

This says that for any X, any sequent whose conclusion is X : zero-or-one (meaning X is of the type zero-or-one) is provable if the side-condition is satisfied (namely X is an element of the set {0, 1}). In Qi we write:

if (element? X [0 1])
X : zero-or-one;

We state the simplest condition for being a binary number. Next we have two rules that say how bigger binary numbers are built out of smaller ones.

X : zero-or-one;
[X] : binary;

\ binary_right \
X : zero-or-one; Y : binary;
[X | Y] : binary;

\ binary_left \
X : zero-or-one, [Y | Z] : binary >> P;
[X Y | Z] : binary >> P;

This says that if we have a list of at least two elements X and Y (followed by 0 or more elements Z), which is assumed to be binary, then we can replace this assumption by the assumption that [Y | Z] is binary and X : zero-or-one. We need both rules because the first rule says how to prove conclusions of the form .... : binary and the second tells us what we can infer from assumptions of the form .... : binary.

Here is a script showing the definition of the datatype binary as a non-empty list of 1s and 0s. The function complement calculates the complement of a binary number. Notice there are no special constructor functions for binary numbers - we just use lists of zeros and ones in a natural way. But this means that there is no obvious way of telling if [1 1 1 0] is intended to be a list of binary numbers or a list of numbers. To avoid headaches and combinatorial problems in typechecking, Qi insists on the explicit typing of functions.

" -- http://www.lambdassociates.org/qilisp.htm

---

---

" @9:

    If a function mutates local variables, but does not mutate global state, for all practical purposes it is just as good as a stateless function. From the callers perspective it is stateless.

Indeed, and Haskell's type system is rich enough to express this constraint. See http://www.haskell.org/haskellwiki/Monad/ST#A_few_simple_examples

Posted by: Dan

November 10, 2009 7:41 PM

" -- http://scienceblogs.com/goodmath/2009/11/philosophizing_about_programmi.php#comment-2064185

me: yes, but... the other guy's point is that having to use the ST monad for that isn't quite as convenient.
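for reference, the sort of thing that wiki page shows: mutation of a local STRef, wrapped up by runST so the function as a whole is observably pure:

  import Control.Monad.ST
  import Data.STRef

  -- internally mutates a reference; runST guarantees no state escapes,
  -- so sumST is a pure function
  sumST :: Num a => [a] -> a
  sumST xs = runST $ do
    ref <- newSTRef 0
    mapM_ (\x -> modifySTRef' ref (+ x)) xs
    readSTRef ref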

toread: http://cs.anu.edu.au/~Ben.Lippmeier/project/thesis/thesis-lippmeier-sub.pdf

---


decidable dependent types stuff:

Dependent types are based on the idea of using scalars or values to more precisely describe the type of some other value. For example, "matrix(3,3)" might be the type of a 3×3 matrix. We can then define typing rules such as the following rule for matrix multiplication:

    matrix_multiply : matrix(k,m) × matrix(m,n) → matrix(k,n)

where k, m, n are arbitrary positive integer values. A variant of ML called Dependent ML has been created based on this type system, but because type-checking conventional dependent types is undecidable, not all programs using them can be type-checked without some kind of limitations. Dependent ML limits the sort of equality it can decide to Presburger arithmetic; other languages such as Epigram make the value of all expressions in the language decidable so that type checking can be decidable, ...
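a small Haskell approximation of the idea (length-indexed vectors via GADTs and type-level naturals, rather than full dependent types):

  {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

  data Nat = Z | S Nat

  -- the length is tracked in the type, the way matrix(m,n) tracks
  -- dimensions above
  data Vec (n :: Nat) a where
    Nil  :: Vec 'Z a
    Cons :: a -> Vec n a -> Vec ('S n) a

  -- the type rules out calling this on an empty vector
  safeHead :: Vec ('S n) a -> a
  safeHead (Cons x _) = x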

http://www.cs.nott.ac.uk/~txa/publ/ott-conf.pdf Observational Type Theory (epigram 2)

http://en.wikipedia.org/wiki/Intuitionistic_type_theory something to do with Epigram

http://en.wikipedia.org/wiki/ATS_%28programming_language%29 supersedes Dependent ML

  ATS also has the good idea of token prefixes which indicate boxed or unboxed types

---

SAGE has an interesting idea: "hybrid type checking":

"Sage performs hybrid type checking of these specifications, proving or refuting as much as possible statically, and inserting runtime checks otherwise."

---


perl's notion of grammatical modes, i.e. context-dependence, is interesting, but i don't think that, e.g. plurality (scalar or list context), is a good choice of mode. in natural languages, we have plurality and gender and case and ?what else for nouns, and (conjugation), tense, aspect, mood, evidentiality, ?what else for verbs. more generally, in natural languages we have parts of speech: nouns, verbs, prepositions, adjectives, pronouns, adverbs, conjunctions, and interjections (see http://en.wikipedia.org/wiki/Part_of_speech for more). Kant provides a system of "categories", or (hyper)logical modes ("hyper" if logic is taken to be merely boolean or first-order logic, since Kant's system is more expressive than these). anyhow, the "best" grammatical modes for programming languages are probably different from these (we don't seem to need gender, for example, although who knows, mb a clever use of gender could make it easier to read complex nested expressions... hmm, i think that's actually a good idea, todo...).

why have conjugation/declension? not for matching, as in natural language (but see the idea above about using "gender" to write expressions with less parens..). but for power/reuse

perl also makes the mistake of having similar-seeming operators that have different coerced types (i.e. == is numeric context, eq is string context). unless the morphology of the operator makes it very clear what type it is, this is confusing. also, it does the reverse of what you want in programming languages (to use modality for power/reuse); here we have one "conceptual" operation with many different lexical realizations, whereas what we want is one lexical realization per conceptual operation, with modifiers.

here are some ideas for "grammatical modes" that might be good for programming languages:

default type (perl has numeric, boolean, string; but i don't think this works too well, at least compared to alternatives like explicit coercion, implicit coercion, python protocols, or haskell typeclasses)
vectorize
quoted
referenced (meta-level?)
ignored
concurrent
assertion
boxed/wrapped
declarative/logical statement/constraint variable
(comment; in parens b/c i consider this to be at a lower level than the actual programming language)
(literal; in parens for the same reason as comment)
purity
access permission
scope
header/declaration/vardecl vs body
transaction (todo: generalize)
maximization
polarity (positive vs. negative, max vs min, true vs false; or tripolarity (t/f/maybe, +/-/neutral, etc)) (mb generalize to n-ary arity? mb generalize to algebra, include things like the wu xing?)

a clean mapping from noun->data and verb->function may not be the best. because in the semantics of natural language, nouns are state, yes, but verbs have lots to do with time. now, a pure computation has no relation with time (it is beyond, or before, time). but an impure computation with I/O side effects does have to do with time. this suggests that pure computations should all be nouns, and impure ones should be verbs. but whether a computation is pure or impure depends on how you define which bits of state are "inside" the system; a computer can mutate purely internal variables without becoming impure, but if it mutates (or reads) external variables, it becomes impure (monads have to do with defining which state is in the system). so the question of whether a variable read (or write) is pure or impure is context-dependent. but whether a word is a noun or verb is not (very) context-dependent. so this question requires further thought (todo).

keyword arguments are like adverbs/adjectives

note: i like perl's "it" pronoun ($_), but if you let "print" be a shortcut for "print $_", you need to have another way to pass "print", the function. which is fine. jasper's version of $_ should have lexical scope by default, unlike Perl.

todo: read http://www.google.com/search?q=haskell+open+data+paper&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:unofficial&client=iceweasel-a , including SYB3, and Open Data Types and Open Functions



search for attribute typing, attributive typing, attribute type system, attributive type system, partial typing, partial type system

property typing, property type system

predicate typing, predicate type system

propositional typing, propositional type system

fractional typing, fractional type system

--

success typings

toread http://www.it.uu.se/research/group/hipe/papers/succ_types.pdf

--

so mb what we need is just for each term to keep track of (a) a set such that if the term has an attribute within this set of type attrs, it definitely is ok, and (b) a set such that if the term does not have an attr within this set of types, it definitely fails. do we also need the negations, e.g. attributes such that if it has them it fails and if it doesn't, it succeeds? do we also need to explicitly keep track of promise vs. demand?

--

my hunch, not fully developed, is that we'll end up needing just

 (a) a set such that if the term has an attribute within this set of type attrs, it definitely is ok (PERMITTED, or P)
 (b) a set such that if the term does not have an attr within this set of type attrs, it definitely fails (OBLIGATORY, or B)

by adding their own custom type attributes and logic to go with them (see http://www.it.uu.se/research/group/hipe/papers/succ_types.pdf for an example of the form of such a logic; i haven't read it yet but it looks like sequent calculus will do the trick), the user can extend these to be more expressive.

for the default, we'll choose readable and tractable over expressive.

my hunch is that it will suffice for one of (a) or (b) to be CNF and the other to be DNF.

for example, let's say that you see

f(x) = 3

we start by looking at 3 and saying, P = B = {3}, and looking at x and saying:

P = {}, B = {*}

now we look at f(x) and we say,

f: t(x) -> t(3)

therefore P(f) = B(x) -> P(3), so P(f) = * -> {3} and B(f) = P(x) -> B(3), so B(f) = {} -> {3}

umm... not quite.. the reasoning has to go both ways at once.. also, why is the literal 3 appearing as the type?

hmm for the both ways. if a value is being consumed as the input to a function, then that value's B is like offers, P is like hmm.

so,

P(3), B(3) = {3}, {3}
P(x), B(x) = ?, ?

--