notes-computer-jasper-jasperNotes2


auto conversion would be nice. for example, you should be able to tag a value as a "log return" and then print it in the form of an "arithmetic return". note that this transformation is nonlinear, so "log return" is a "unit" in a generalized sense: adding in one type of unit = multiplying in the other.

and the units should themselves be sets or tuples of attribute tags, so that you can have "log inches" and write the code for the "log<-->1" conversion and the "inch<-->foot" conversion and then autoconvert "log inches<-->feet"
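a minimal sketch of what that could look like in Python (the registry and the convert function are hypothetical names, just for illustration; the log-return arithmetic itself is standard):

    import math

    # hypothetical registry mapping (from_tag, to_tag) -> conversion function
    CONVERSIONS = {
        ("log_return", "arith_return"): lambda x: math.exp(x) - 1,
        ("arith_return", "log_return"): lambda x: math.log(1 + x),
        ("inch", "foot"): lambda x: x / 12.0,
        ("foot", "inch"): lambda x: x * 12.0,
    }

    def convert(value, from_tag, to_tag):
        # a real system would compose conversions across tuples of tags,
        # e.g. ("log", "inch") -> ("linear", "foot"), rather than requiring
        # every pair to be registered explicitly
        if from_tag == to_tag:
            return value
        return CONVERSIONS[(from_tag, to_tag)](value)

    r = convert(0.0953, "log_return", "arith_return")   # roughly 0.10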


want a standard convention to pass keyword dicts of 'extra' information UP the call chain, info that will not be understood and will be ignored by most passers
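a sketch of one such convention in Python (all names here are made up): each callee returns its result plus a dict of extras, and intermediate callers thread the dict upward without looking inside it:

    def leaf():
        # the leaf knows some 'extra' info only the top-level caller cares about
        return 42, {"cache_hits": 3, "debug_trace": ["leaf ran"]}

    def middle():
        # middle doesn't understand the extras; it just forwards them upward
        value, extra = leaf()
        return value * 2, extra

    def top():
        value, extra = middle()
        print(value, extra.get("cache_hits"))   # 84 3

    top()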


multiappend


the Cycle library i wrote should be trivial in Jasper, even though it's more list-oriented than graph-oriented


some neat python flow control: break, continue, next, else (else can be applied to loops)

" The break statement, like in C, breaks out of the smallest enclosing for or while loop.

The continue statement, also borrowed from C, continues with the next iteration of the loop.

Loop statements may have an else clause; it is executed when the loop terminates through exhaustion of the list (with for) or when the condition becomes false (with while), but not when the loop is terminated by a break statement. This is exemplified by the following loop, which searches for prime numbers: "
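the loop the tutorial refers to is (roughly) the classic prime-search example, where the else runs only if no factor was found:

    for n in range(2, 10):
        for x in range(2, n):
            if n % x == 0:
                print(n, 'equals', x, '*', n // x)
                break
        else:
            # inner loop fell through without hitting break
            print(n, 'is a prime number')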


neat syntax for sending messages:

            limitTicks.append = findEdgeDispersionWithMinimumReturn(cycle, self._b, edgeIdx)

that is, just reuse the single graph edge traversal/__setitem__ syntax

---

i just fixed an annoying bug of the form:

def handleMessage(self, key, newValues):

  blah =  sum([self.storage[key].get('quantity',0) for key in self.openOrders[name]['keys']])
  ...
  self.storage[key] = newValues

do you see it? 'key' is being rebound by the for loop in the 'blah' line, so the final assignment stores newValues under the wrong key (the last key from the comprehension)
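(this leaking of the comprehension variable is Python 2 behavior; Python 3 comprehensions get their own scope, though a plain for loop would still leak.) the fix is just to rename the loop variable so it no longer shadows the argument -- a sketch keeping the names from the snippet above ('name' was already undefined there):

    def handleMessage(self, key, newValues):
        blah = sum([self.storage[k].get('quantity', 0)
                    for k in self.openOrders[name]['keys']])
        ...
        self.storage[key] = newValues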

--- this common pattern should be supported cleanly:

        somehash.get(somehashkey, []).append(some_item_to_go_into_the_list_associated_with_that_hash_key)

(perhaps just use defaultdict?)

or even:

        thelist = somehash.get(somehashkey, [])
        thelist.append(some_item_to_go_into_the_list_associated_with_that_hash_key)
        somehash[somehashkey] = thelist

to notify the dict-like object that something has changed (in case it's a persistent store, for example)
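for reference, the defaultdict and setdefault versions of this pattern in today's Python (note both mutate the stored list in place, so a persistent store still wouldn't see a 'changed' notification):

    from collections import defaultdict

    groups = defaultdict(list)
    groups["a"].append(1)                 # the empty list is created on first access

    # with a plain dict, setdefault collapses the pattern to one line
    plain = {}
    plain.setdefault("a", []).append(1)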

also this pattern:

if somekey in somedict: do_something_using somedict[somekey]

mb this can be subsumed into a python-like "with"?
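in current Python the double lookup can at least be collapsed with get plus the 3.8 walrus operator -- a sketch, not the unified 'with'-like construct being asked for:

    config = {"timeout": 30}

    # one lookup instead of an 'in' test followed by a second indexing
    if (timeout := config.get("timeout")) is not None:
        print("timeout is", timeout)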


list comprehension conditions should evaluate with short-circuiting and should short-circuit the evaluation of the main body (i think python does this)


docstrings are awesome

ipython's '?' is awesome

generator comprehensions

too bad you can't mix numpy arrays and comprehensions


just as the columns in a dataframe can be referred to by name or by an integer index, jasper edges can have multiple labels, each of which lives in a different universe

python's for..else

---

why not let Python's default keyword argument syntax be used in expressions, too?

---

topic maps

http://www.ontopia.net/topicmaps/materials/tao.html

" Each topic that participates in an association plays a role in that association called the association role. In the case of the relationship “Puccini was born in Lucca”, expressed by the association between Puccini and Lucca, those roles might be “person” and “place”; for “Tosca was composed by Puccini” they might be “opera” and “composer”. It will come as no surprise now to learn that association roles can also be typed and that the type of an association role is also a topic!

Unlike relations in mathematics, associations are inherently multidirectional. In topic maps it doesn't make sense to say that A is related to B but that B isn't related to A: If A is related to B, then B must, by definition, be related to A. Given this fact, the notion of association roles assumes even greater importance. It is not enough to know that Puccini and Verdi participate in an “influenced-by” association; we need to know who was influenced by whom, i.e. who played the role of “influencer” and who played the role of “influencee”.

This is another way of warning against believing that the names assigned to association types (such as “was influenced by”) imply any kind of directionality. They do not! This particular association type could equally well (under the appropriate circumstances) be characterized by the name “influenced” (as in “Verdi influenced Puccini”). (See section 3.4.3. for an example (involving Tosca and Rome) of how the scope feature might be used to make this happen in practice.) "

" Secondly, associations are completely different in structure from RDF statements. They have roles representing the involvement of each topic in the association, and they go both ways. That is, in topic maps saying that I am employed by Ontopia is the same statement as saying Ontopia employs me. This means that the issue of whether to make my employment a property of Ontopia or of me is a non-issue in topic maps; it will always be both. "

" To summarize, a statement in RDF can be a name, an occurrence, or an association in topic maps. Names compare easily to RDF: they are RDF statements where the object is a literal and where the property has name semantics. The remaining RDF statements where the object is a literal are occurrences. However, RDF statements where the object is another resource are either occurrences or associations, depending on the semantics of the statement. If they are associations role types must be supplied. In addition to this comes the cases where in a topic map the association has more than two roles, in which case an intermediate RDF resource must be created. "

" In RDF there are three kinds of nodes:

    literals (which we have already discussed).
    URI nodes. A URI is just a node that has a URI label on it, where the URI identifies the resource represented by the node. Since the URI directly identifies the resource represented by a node, RDF assumes that nodes with the same URI represent the same resource.
    blank nodes. These nodes are called blank, as they have no URI label. (Two examples can be seen in the RDF diagram example in section 2.2..) For blank nodes the only way to discover which resource they represent is to look at the statements made about them. RDF itself provides no standardized way to identify which blank nodes represent the same resources, although higher-level vocabularies like OWL do. (More about this in section 4.1.2..)"

" Let's say we use the URI http://www.ontopia.net/ to identify a thing. Now, what does this actually identify? The information resource we get by resolving the URI? Or the thing described by that information resource? In practice, one finds that URIs are being used in both ways.

Topic maps distinguish these two cases, so that when assigning a URI to a topic as an identifier, the URI can be considered to be a subject address or a subject identifier. In the first case, the subject identified is the resource. In the second case it is whatever is described by the resource. In RDF, however, this distinction does not exist, and given a URI node there is no way to tell a priori which of the two ways the URI should be interpreted.

This is actually quite a thorny problem for interoperability between topic maps and RDF, and is also indicative of differences in the thinking behind the two. RDF practitioners would say that RDF models consist of statements and resources, ignoring the fact that the resources are not really part of the RDF model, but are represented by RDF nodes. In RDF, the distinction between the RDF model and the world it represents is not given much emphasis, whereas in topic maps this distinction permeates the whole model. "

also reification and levels; topic maps are for making a back-of-the-book index of hypertext, and talk about "occurrences" of the topic in the referenced hypertext; this should be generalized into a single data model


in fact, should have a separate document for properties of the graph data model:

nLSD, RDF, OWL, RDF Schema

chu spaces

some predefined relations (see also OWL): name, inverse, type, isa, subclass (implies?)

  these can all have a qualifier 'typically' or 'logically', with 'typically' being the default

XDI


topic maps unify provenance and qualification as 'scope':

" Qualification

Sometimes one wishes to qualify assertions made about things in order to record which authority claims they are true, what the source of the assertion is, or in what context the assertion is true.

In topic maps there is a built-in feature for this: scope. When an assertion is made in a topic map (in the form of a name, an occurrence, or an association) a scope is always attached to it. The default scope is the unconstrained scope, which means that there is no known limit to the validity of the assertion. Topics can be added to the scope to restrict under what circumstances it is considered to be true. Some examples are shown below.

    Topic maps are called "topic maps" in English, but in Norwegian they are called "emnekart". This is best represented by creating a single topic with two names, one in the scope English, and one in the scope Norwegian.
    There is a topic map with topics for each officially identified issue with the topic map standard, and these have occurrences of type "opinion", which gives an opinion about the issue. Each opinion is scoped with a topic representing the person who held the opinion.
    There is a group of people who believe that Francis Bacon wrote Shakespeare's plays, and this might be represented by adding extra authorship associations between the plays and Francis Bacon, and scoping it with a topic representing this group of people.

In RDF there is no built-in feature for this, except that literals may have a language identifier attached to them, which is a kind of qualification. (A language identifier is a string conforming to RFC 3066[RFC 3066], such as en-uk or pt-br.) However, it is possible to achieve this through reification, since this turns the statement into a node about which statements (including qualifying ones) may be made. On the other hand, reification in RDF is, as we noted above, rather awkward in practical use.

Again it should be added that not having direct support for this makes RDF more light-weight, but this is a feature that is quite often needed, especially for internationalization, but also for tracking the sources of assertions. It should be added that although RDF itself does not support this, support for it can be built into individual applications by creating extra resources for assertions.

The key problem here is that statements in RDF have no identity, which means that it is impossible to make resources that represent them (without changing the statements) and since the model does not directly support qualification support for qualification cannot be added through reification. This is one of the most fundamental differences between topic maps and RDF, and one that has so far frustrated all attempts to model topic maps in RDF in a natural way. "

---

" 8. Addressable and non-addressable subjects

The subject of every assertion (or statement) in an RDF model is a resource, identified by a URI. The subject of every assertion in a topic map is a topic, representing a subject, which may be addressable or non-addressable. Addressable subjects are identified by their URIs (as in RDF); non-addressable subjects are identified by the URIs of (one or more) subject indicators. This important distinction is not present in RDF. "


" /* topic types */

  [person = "PERSON" @"http://taoriver.net/nLSD/person/"]
  [parent ; person = "Parent"]
  [child ; person = "Child"]
  [childOf : Hidden = "is child of"]
   
  /* template provides visualization direction */
  childOf([child] : From, [parent] : To ) / Template
   
  [sakura : child = "Sakura Kimbro-Juliao" ]
  childOf(sakura : child, [kitty : parent = "Amber Straub" ] : parent )
  childOf(sakura : child, [lion : parent = "Lion Kimbro"] : parent )"

the open annotation spec ( http://www.openannotation.org/ ) has three parts to every annotation (three RDF nodes): the annotation itself, the annotation body, and the annotation target. Optionally, the target node itself points to two nodes, the actual target, and another (potentially freeform) node which defines the part of the target which is being addressed.

--- should study this language to learn to be faster:

Topic: Julia: A Fast Dynamic Language For Technical Computing

Speaker: Jeff Bezanson MIT

About the talk:

Julia is a general-purpose, high-level, dynamic language, designed from the start to take advantage of techniques for executing dynamic languages at statically-compiled language speeds. As a result the language has a more powerful type system, and generally provides better type information to the compiler.

Julia is especially good at running MATLAB and R-style programs. Given its level of performance, we envision a new era of technical computing where libraries can be developed in a high-level language instead of C or FORTRAN. We have also experimented with cloud API integration, and begun to develop a web-based, language-neutral platform for visualization and collaboration. The ultimate goal is to make cloud-based supercomputing as easy and accessible as Google Docs.


let's say you had personalized (perspectivized) reputation graphs -- each perspective is essentially a function that maps each node (or some subsets of nodes) to a floating point number (or, 'labels each node with a floating point number')? would jasper express this as a "graph overlay"? or another graph that could be 'merged in'? or a function? or all three, with conversion operators?

also, category theory diagrams (maps on the nodes AND edges)


inverses, uniqueness (injective & surjective) as a core feature:

No programming language to date (except possibly haskell) understands whether one function is intended to be the inverse of another, whether items in a list are unique, etc. This would allow clean syntax for things like inverse (^-1 even, or just ^-) and some amount of automated reasoning and type inference.


Colored exceptions, nonlocal returns


Specify by postconditions, and Jasper finds a matching library function. Or, equivalently in the other direction, given code, Jasper tries to deduce formal postconditions


python multiprocessing module

optional output args


datomic data model:

"datom": entity/attribute/value/transaction

(like RDF: subject/attribute/value + scope, except scope is just time)

transactions (time) are totally ordered and are first-class

can get the db as-of, or since or between, any transaction (transactions)

immutable

not good for write-heavy workloads, as everything must go thru the transactor
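a toy sketch of the datom idea in Python (not the real Datomic API): every fact is an (entity, attribute, value, transaction) tuple, and an as-of query just filters on the transaction:

    # each fact is (entity, attribute, value, tx); the store is append-only
    datoms = [
        ("alice", "email", "a@old.example", 1),
        ("alice", "email", "a@new.example", 2),
    ]

    def as_of(datoms, tx):
        # latest value per (entity, attribute), considering only transactions <= tx
        view = {}
        for e, a, v, t in sorted(datoms, key=lambda d: d[3]):
            if t <= tx:
                view[(e, a)] = v
        return view

    print(as_of(datoms, 1))   # {('alice', 'email'): 'a@old.example'}
    print(as_of(datoms, 2))   # {('alice', 'email'): 'a@new.example'}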


use simple graph mutation constructs in place of special case for 'before' and 'after' hooks, other AOP stuff <--- CRUCIAL


since this is a homoiconic language of graphs, type system should mb be a graph constraint language? of course this is (iso?)morphic to logic..

category theory reifies paths


some issues topic maps introduced me to:

reification
hypergraphs (needed to cleanly do more than binary relations; they also negate the need to use inverse relations so much, e.g. a ternary relation between child, mother, father)
distinction between pointing to the hypertext document that can be retrieved at a URI vs. pointing to the concept signified by a URI
scope

came up in OAF convo:

justification
stance (agree/disagree, mb also the edge labels in argument maps)
provenance
scope
constrained target

further notes of mine:

scope
temporal scope
vector-clock-like temporal scope, e.g. time as a partial order

notion of reference further empowered by constrained target (like empowering it to represent the distinction b/t a URI and the concept signified by that URI)

constrained target as "add on": should be able to swap out any part of the standard (e.g. the target) for "see this node for something that fulfills the semantic role of this, but uses a format not defined in this standard" (in this case, we use an alternative trick which might be even better; the target IS the constrainedTarget node, which has a way of specifying a new way of finding a target)

constrained target is a way of INTRODUCING a new URI that allows us to talk about the constrained target

from Rob's PDF, a Semantic Tag subclass of Annotation:

oac:Reference - Target is a reference or citation to the tag resource
oac:Description - Target is a description or depiction of the tag resource
oac:Classification - Target is an instance of the tag resource
oac:Quality - Target has the tag resource as a quality
oac:Membership - Target is a member of the set identified by the tag resource
oac:Relationship - Target has a relationship [expressed or not] with the tag resource

note that hypothes.is style annotations are none of these


"distinction between pointing to the hypertext document that can be retrieved at a URI vs. pointing to the concept signified by an URI"

mb use C-like syntax throughout, letting * and & travel from signifier to signified and vice versa (e.g. travel down a 'meta' edge as contrasted with just traveling down a normal edge -- or should these be the same, with the edge type differentiating?)?


e.g.

bob = 3
print bob

is like

global.'bob'.set(3)
print *bob

also seems related to boxing and unboxing... perhaps we have our long-sought general "step down a level/step up a level" concept taking shape? but levels are just special types of edges, are they not? * and & allow a generic language of graph motions that can then be specialized by choosing the special edge type

jasper graph motions, jasper graph mutations (note how immutable should we be?)


a real-time garbage collector for java:

http://www.ibm.com/developerworks/java/library/j-rtj4/

notes on restrictions:

" Most roots are malleable to some degree during execution in terms of their object references. For this reason, changes to their reference set must be tracked, as we discussed in Write barriers. However, certain structures, such as the stack, can't afford the tracking of pushes and pops without significant penalties incurred on performance. Because of this, certain limitations and changes to scanning stacks are made for Metronome in keeping with the Yuasa-style barrier:

    Atomic scanning of stacks. Individual thread stacks must be scanned atomically, or within a single quantum. The reason for this is that during execution, a thread can pop any number of references from its stack -- references that could have been stored elsewhere during execution. Pausing at mid-scan of a stack could cause stores to be lost track of or missed during two partial scans, creating a dangling pointer within the heap. Application developers should be aware that stacks are scanned atomically and should avoid using very deep stacks in their RT applications.
    Fuzzy barrier. Although a stack must be scanned atomically, it would be difficult to keep determinism if all stacks were scanned during a single quantum. The GC and JVM are allowed to interleave execution while scanning Java stacks. This could result in objects being moved from one thread to another through a series of loads and stores. To avoid losing references to objects, threads that have not been scanned yet during a GC have the barrier track both the overwritten value and the value being stored. Tracking the stored object, should it be stored into an already processed object and popped off the stack, preserves reachability through the write barrier.

...

Issues to consider when using Metronome

Metronome strives to deliver short deterministic pauses for GC, but some situations arise both in application code and the underlying platform that can perturb these results, sometimes leading to pause-time outliers. Changes in GC behavior from what would be expected with a standard JDK collector can also occur.

The RTSJ states that GC doesn't process immortal memory. Because classes live in immortal memory, they are not subject to GC and therefore can't be unloaded. Applications expecting to use a large number of classes need to adjust immortal space appropriately, and applications that require class unloading need to make adjustments to their programming model within WebSphere Real Time.

GC work in Metronome is time based, and any change to the hardware clock could cause hard-to-diagnose problems. An example is synchronizing the system time to a Network Time Protocol (NTP) server and then synchronizing the hardware clock to the system time. This would appear as a sudden jump in time to the GC and could cause a failure in maintaining the utilization target or possibly cause out-of-memory errors.

Running multiple JVMs on a single machine can introduce interference across the JVMs, skewing the utilization figures. The alarm thread, being a high-priority RT thread, preempts any other lower-priority thread, and the GC thread also runs at an RT priority. If sufficient GC and alarm threads are active at any time, a JVM without an active GC cycle might have its application threads preempted by another JVM's GC and alarm threads while time is actually taxed to the application because the GC for that VM is inactive.


i guess Haskell's association/binding/precedence rules make sense, except for custom precedence: left-associative, function application binds tightest

should we allow infix or not?


stuff on how to write a programming language (for the jvm, or in general):

http://programmers.stackexchange.com/questions/84278/how-do-i-create-my-own-programming-language-and-a-compiler-for-it

http://createyourproglang.com/

http://stackoverflow.com/questions/716613/how-to-make-a-net-or-jvm-language

http://stackoverflow.com/questions/3380498/create-a-jvm-programming-language

http://londongeeknights.wetpaint.com/page/Creating+a+language+on+the+JVM

http://www.java-forums.org/advanced-java/29920-creating-new-jvm-language.html

todo: http://skillsmatter.com/podcast/java-jee/language-on-jvm

some jvm bytecode tools: asm, jamaica, jasmin


on the beauty of having non-class-related functions in Python:

"

petercooper 21 hours ago

link

I notice no mentions of Ruby yet, and I wonder if this discussion is pretty much impossible to have in the Ruby world, given Ruby is solely an object oriented language, even if you try to use it procedurally. Not writing classes would be against so many Ruby conventions and accepted style that it's a non-topic?

Or is it that classes in Python specifically aren't that much of a win over the language's implementation of other paradigms? I sure haven't hit into any problems using classes or objects in Ruby, even when sometimes the use feels a little contrived to begin with, but.. I also have no choice :-) (maybe Rubyists have a Stockholm Syndrome with OO ;-))

reply

ryanf 21 hours ago

link

I think part of the reason this doesn't come up in Ruby is that Ruby doesn't have Python's powerful namespacing system.

In Python, it's reasonable to have a package with just functions in it, whereas in Ruby, writing a top-level method means polluting every object in the system. You can write modules that have singleton methods on them, but you still don't have anything as flexible as Python's "from pkg import foo, bar"—the caller needs to either write "ModuleName.my_function" at every call site, or use "include" and end up with the module's methods as part of the consuming class's interface.

reply

judofyr 21 hours ago

link

Implicit self and late binding in Ruby makes it hard to implement a proper namespacing system without breaking other semantics :(

After working with Perl (which has a similar namespacing), I must say it's something I really miss when I come back to Ruby.

reply "


gg_ 7 hours ago

link

I believe he is referring to this talk: http://tele-task.de/archive/video/flash/14029/

reply

gruseom 2 hours ago

link

Yes thanks - and if anyone watches it, be sure also to check out the stuff on Bob Barton that we posted to HN. I learned about a magnificent part of computing history thanks to that talk.

Edit: here you go:

http://news.ycombinator.com/item?id=2855500

http://news.ycombinator.com/item?id=2928672

http://news.ycombinator.com/item?id=2856567

http://news.ycombinator.com/item?id=2855508

The first two are teaser comments. The second two are the important pieces. Also the paper Kay praises at the start of the talk is short and very worth reading.


"

daviddaviddavid 18 hours ago

link

I dislike classes but I like bundling data along with methods which operate on that data. I often find that JavaScript's classless objects are a great solution. It's just so natural to be able to create an object with object literal syntax without first creating an ad hoc class and then instantiating it:

  var o = {
      name: 'David',
      greet: function () {
          console.log('Hi, I am ' + this.name);
      }
  };

reply

jmaygarden 15 hours ago

link

Wouldn't this be better?

    function greet(name) {
        console.log('Hi, I am ' + name);
    }
    var o = function() { greet('David'); };

reply

njharman 16 hours ago

link

> bundling data along with methods which operate on that data

What can you possibly believe a class is other than exactly that?

reply

daviddaviddavid 4 hours ago

link

I realize that is one thing that classes offer. My point is that you can achieve exactly the same thing with objects rather than classes.

Very often when writing Python (or whatever) you'll create a class which is only ever instantiated once and the instantiation is done simply for the benefit of calling one method.

In such cases I find classes to be overkill. What I really want is the object. I don't need to factor the object's behavior out into a class.

Many of my sympathies are expressed much more thoroughly and eloquently in the many papers you can find online by Walter Smith outlining the motivations behind NewtonScript.

reply

haldean 15 hours ago

link

Polymorphism and type hierarchies.

reply "


Python protocols reference: http://www.rafekettler.com/magicmethods.html

https://github.com/RafeKettler/magicmethods

---

in Python if D3_norm_to_D is a list and you mean to do

for (i, gene) in enumerate(D3_norm_to_D):

but you actually do

for (i, gene) in D3_norm_to_D:

you'll get

TypeError: 'int' object is not iterable

it would be nice if the tuple destructuring routine would give a different error like 'tuple cannot be filled because 'int' object is not iterable'

--- from doug: a prob with c# is that the syntax for a primitive type (array) doesn't match the syntax for custom types:

> int[] intArray = new int[]{ 1, 2, 3, 4, 5 };
> List<int> intList = new List<int>(){ 1, 2, 3, 4, 5 };


from steve: no if statements

(this fits in well with my search to make the syntax more homoiconic; and to use the graph structure to represent control flows explicitly, too)

levels example?: map vs. switch vs. ???

cheap syntactic vectorization operator? or is that just part of the above map vs. switch levels generalization (we already have a map operator)


toread https://www.google.com/search?client=ubuntu&channel=fs&q=extensible+programming+language&ie=utf-8&oe=utf-8

www.danielzingaro.com/extensible.pdf

---

leave some syntactic constructs to be customized, e.g. perhaps the programmer can load a library to say what text inside `` means, or what happens when a # is seen

---

dollar signs within double quotes for interpolation; syntactic sugar for sprintf a la Python's %


make sure DSLs at the level of Ruby on Rails are doable

---

in Python, i think

data[y-filter_shape[0]/2:y+filter_shape[0]/2+1,x-filter_shape[1]/2:x+filter_shape[1]/2+1]

!=

z = y-filter_shape[0]/2:y+filter_shape[0]/2+1,x-filter_shape[1]/2:x+filter_shape[1]/2+1
data = data[z]

but it should be..
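(a bare slice expression isn't legal Python syntax outside of brackets, but slice objects get close to this today; a sketch using numpy's np.s_, which builds the same tuple of slices the subscript would:)

    import numpy as np

    data = np.arange(100).reshape(10, 10)
    y, x = 5, 5
    filter_shape = (3, 3)

    # np.s_ captures the index expression as a reusable object
    z = np.s_[y - filter_shape[0]//2 : y + filter_shape[0]//2 + 1,
              x - filter_shape[1]//2 : x + filter_shape[1]//2 + 1]
    window = data[z]        # same as writing the slice expression inline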


array1[array2] should produce [array1[array2[0]], array1[array2[1]], ...] by default, unless syntax is used to 'quote' array2 to treat it like a single index into array1. or, actually, i think it would be better if the quoted situation were the default, and syntax must be used to achieve the above result (syntactic map syntax? or something else?)

in Python you must do

[array1[idx] for idx in array2]

which is too verbose. with numpy it's just array1[array2].


syntactic or at least metasyntactic ability to support things like "assert this array has no nans", "assert every member of this array isfinite", "assert every member of this array is positive" without typing out the list comprehension each time
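with numpy these are already one-liners; the wish is really for syntax this short without the numpy dependency (a sketch):

    import numpy as np

    a = np.array([1.0, 2.0, 3.0])

    assert not np.isnan(a).any(), "array contains NaNs"
    assert np.isfinite(a).all(), "array has non-finite entries"
    assert (a > 0).all(), "array has non-positive entries"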


cross-subroutine "break" and "continue" and the like via exceptions. generalize somehow (not sure how but it seems generalizable) (is this what 'colored' exceptions could do?)..
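the exception-based version of a cross-subroutine break looks roughly like this in Python (a sketch; the point of the note is to get this without the ceremony):

    class Found(Exception):
        # carries a result out of arbitrarily nested calls
        def __init__(self, value):
            self.value = value

    def scan_row(row, target):
        for item in row:
            if item == target:
                raise Found(item)    # a 'break' that crosses the call boundary

    def search(matrix, target):
        try:
            for row in matrix:
                scan_row(row, target)
        except Found as f:
            return f.value
        return None

    print(search([[1, 2], [3, 4]], 3))   # 3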


exceptions that traverse a control tree or dag or graph, like balloons navigating a maze, according to rules (constrained or Turing-complete? if like balloons, 'always go up' could be a constraint -- but mb this itself could be generalized), rather than simply going up (raising) and down (continuing after ignoring the exception if so directed by the catching exception handler) a linear stack of control frames


ok i can easily see how going down the call stack can be generalized to a tree (connection machine's starlisp e.g. parallelism, also nondeterminism a la nondeterministic finite automata). but how can going up be generalized?

the subroutine that called you has two meanings: it caused you to be invoked, and it is where control will return when you are done. both of these can be pluralized.

first, you can be invoked by multiple subroutines via that parallelism style in which a method is only invoked once all of its headers have been called (as each one is called it can block (or it could be asynch), forming a barrier).

second, you can return to multiple subroutines via continuation-passing (continuation passing style, cps), and you could have a list of them instead of a single continuation, just like in something like starlisp you could have a list of subroutines to call (to constrain, could limit this to data parallelism instead of control parallelism, e.g. you must call the same subroutine but with different return values -- at the least, there should be syntactic support for the data-parallel style of multiple calls and multiple returns, even if the more general control style is available)

in this way we can unify arguments and return values -- each subroutine has a variety of entry points (doors). the front door is entering via the headers. the other doors are entering via returning from subroutines. using continuations, as above, you can enter via any door at any time. and so calling and returning are unified.
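a tiny Python sketch of the multiple-return idea (explicit continuation passing; the data-parallel style mentioned above, where the same value is handed to every continuation):

    # instead of one implicit return point, the callee is handed a list of
    # continuations and 'returns' to all of them
    def split_evens(xs, continuations):
        evens = [x for x in xs if x % 2 == 0]
        for k in continuations:
            k(evens)

    results = []
    split_evens([1, 2, 3, 4],
                continuations=[lambda v: results.append(("logger", v)),
                               lambda v: results.append(("consumer", sum(v)))])
    print(results)   # [('logger', [2, 4]), ('consumer', 6)]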

i feel like primitives like 'any' and 'all' will be important in any constrained language for balloon rising (exceptions navigating through a maze).

is the intelligence of the balloon navigating the maze in the maze walls or in the balloon (in the exception or in the exception handlers?) it seems like "both" is an easy answer, but is it the right one? something tells me that it should be in the exception handlers -- if you make the 'balloon' 'fully aware' then it's just like normal program execution, so what's the point? if you put the intelligence in the walls, you can still have the balloon seem almost intelligent via coloring it and attaching 'antigens' to its 'outer coat' -- that is, it can accumulate properties (only boolean or general?) which are added and subtracted to/from it by the exception handlers as it rises.

yes, i rather like that -- there are two modes of execution, the straight-forward mode and the looking glass mode. in the former, the code at the point(s) of prior execution has control (i.e. determines the routing of the point of execution; the past controls the future), and in the latter, the 'walls' -- the environment, the code around the potential points of future execution -- have control (the future controls the past).

if you can enter a subroutine via multiple doors, isn't this just GOTO? what about GOTO Considered Harmful? well, you can't just barge in via a back door, you have to be invited. first, you can only go in a back door in exception handling mode.

in the inverse/looking glass/exception handling mode, it's like pattern matching in biology, the exceptions (mb not so exceptional anymore) are 'colored', or more generally display 'antigens' (boolean predicates, or more likely just generic data against which boolean predicates can match), and the handlers, the 'back doors', are like receptors, gates defined by boolean predicates which match only some exceptions. There are multiple 'spaces' of exceptions floating around to be scanned (like the 'blackboard' parallelization scheme).

you could also consider the "lock and key" metaphor in which the key must be specifically granted to the bearer by a party connected to someone who owns the house (i.e. at some time in the past, you have had to get a continuation for a point of entry by passing through that subroutine, and without this continuation you cannot match the gate). maybe this is the more general case of the special case of exception handling, which might have the strict "lock/key/must have the continuation in hand" rule, as opposed to just a generic event-handling architecture.

there is an 'up/down' direction in an exception handling space (the space in which the exceptions float around, advertising their antigens); if you call someone, or if they can return to you, you are above them (there can be cycles of course, e.g. recursion, but there is still a direction, and you can unfold/unroll these cycles if you consider each instance of a subroutine call to be a different node, rather than considering all calls to a given subroutine the same node --- btw this sort of 'unfold' operation should be a std lib function on graphs).

so... is every function call implicitly a call/cc and a function can have multiple front doors (the parallelism barrier style) and each door is passed its return arg as a continuation, and an exception can be generated and given any subset of these continuations (or other continuations which were passed in)? then the exception handlers can match on the exception. the exception itself implicitly contains a continuation, which allows for 'resume' behavior. normal function return and exceptional return are subsumed under this same general framework.

now, what is to prevent exceptions from just being general event handling? the up/down direction. an idea to implement this: you cannot pass a continuation upwards in the call stack, except for a continuation at the point that the exception was raised. that is, neither the exception object (or the return object, since now we're dealing with the general case of function returns) nor any general non-locally-scoped variable can contain as data any continuation (except, as just noted, that a return object implicitly contains a continuation to the place where it was raised (what about return objects doing normal function return? their continuation is null and using it would raise an error; they are colored as such)).


Python std errors, at least.

NotImplemented


also remember to check out the modula-2 module system, and Oberon's system, which is apparently like Modula-2 but perfected by throwing out a rarely used but difficult-to-implement feature (nested modules):

http://prog21.dadgum.com/136.html

also a good point: a good way of judging how good most compiler optimizations are (because they add complexity to the compiler in exchange for making the compiled programs better) is to see if they reduce the time for the compiler to compile itself:

" A Forgotten Principle of Compiler Design That a clean system for separately compiled modules appeared in Modula-2, a programming language designed by Niklaus Wirth in 1978, but not in the 2011 C++ standard...hmmm, no further comment needed. But the successor to Modula-2, Oberon, is even more interesting.

With Oberon, Wirth removed features from Modula-2 while making a few careful additions. It was a smaller language overall. Excepting the extreme minimalism of Forth, this is the first language I'm aware of where simplicity of the implementation was a concern. For example, nested modules were rarely used in Modula-2, but they were disproportionately complex to compile, so they were taken out of Oberon.

That simplicity carried over to optimizations performed by the compiler. Here's Michael Franz:

    Optimizing compilers tend to be much larger and much slower than their straightforward counterparts. Their designers usually do not follow Oberon's maxim of making things "as simple as possible", but are inclined to completely disregard cost (in terms of compiler size, compilation speed, and maintainability) in favor of code-quality benefits that often turn out to be relatively marginal. Trying to make an optimizing compiler as simple as possible and yet as powerful as necessary requires, before all else, a measurement standard, by which both simplicity and power can be judged.
    For a compiler that is written in the language it compiles, two such standards are easily found by considering first the time required for self-compilation, and then the size of the resulting object program. With the help of these benchmarks, one may pit simplicity against power, requiring that every new capability added to the compiler "pays its own way" by creating more benefit than cost on account of at least one of the measures. 

The principle is "compiler optimizations should pay for themselves."

Clearly it's not perfect (the Oberon compiler doesn't make heavy use of floating point math, for example, so floating point optimizations may not speed it up or make it smaller), but I like the spirit of it. "

i'm guessing this is a good idea only until the language goes mainstream.

--

did a google search on modula-2 and haskell; i recall one of the designers of haskell saying that its module system is not as expressive as modula-2's, but i can't see what it lacks. todo, ask the mailing list someday. one thing that it may lack is some sort of first-class-ness of data type definitions.

afaict it seems like the big module innovations in modula-2 are just what you find in Python (i know Python came later, but i am more familiar with it, so it is my point of reference): you can import things, imported things by default have qualified names, you can choose to "import as" things as unqualified names, choosing whatever name you wish. also, you can make export lists.

also, i used to think that haskell had no mechanism to specify which names you want to import, but now i think it does:

http://www.haskell.org/onlinereport/modules.html

wait, no, "Modular Type Classes" by Derek Dreyer, Robert Harper, and Manuel M.T. Chakravarty says that indeed typeclass instances are in a global namespace

ok, i looked it up again and i think it's ML's module system that they were saying was awesome, not Haskell's

this blog post talks about why it thinks ML's module system didn't catch on: http://flyingfrogblog.blogspot.com/2010/04/mls-powerful-higher-order-module-system.html

(however, A History of Haskell: Being Lazy with Class says that the Haskell committee members simply weren't very familiar with ML's module system at the time)

toread:

https://www.google.com/search?q=ml+haskell+module&ie=utf-8&oe=utf-8&client=ubuntu&channel=fs

ML Modules and Haskell Type Classes: A Constructive Comparison http://lambda-the-ultimate.org/node/1558

--

'first-class-ify' assert statements found at the beginning and end of functions? e.g. treat them as pre- and post-conditions, and have an option to the compiler to verify if all of the preconditions for a function have been asserted in the calling function or via the post-conditions of its arguments? to get even fancier, have some limited inference.
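what that could look like from the programmer's side is just ordinary asserts at the top and bottom of a function, which the compiler is then free to treat as a contract (a sketch):

    def mean(xs):
        # precondition: leading asserts could be read as the function's contract
        assert len(xs) > 0, "mean of empty sequence"
        result = sum(xs) / len(xs)
        # postcondition: trailing asserts could be checked against callers
        assert min(xs) <= result <= max(xs)
        return result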

---

mb:

strict by default but allows a "lazy" declaration, so the expressiveness of lazy, e.g. infinite data structures, separation of traversal strategy from function definition, are still possible without the user of these structures having to do anything different. typewise, everything is lazy, but it's like haskell but by default implicitly recursively strictified until it hits a barrier of a lazy declaration.

--- http://www.cs.nott.ac.uk/%7Egmh/appsem-slides/peytonjones.ppt

---

SAGE has an interesting idea: "hybrid type checking":

"Sage performs hybrid type checking of these specifications, proving or refuting as much as possible statically, and inserting runtime checks otherwise."

--- assume, check: "check" is jasper's "assert" (we don't use the word 'assert' because in English, assert might have other uses and may be confused with 'assume'). "assume" means "for the purpose of type checking, assume that the following is true at this point"

---

could use ! for imperative stuff and ? for logic programming? otoh mb 'imperative stuff' is sufficiently semantic that it doesn't need punctuation (e.g. just use haskell's 'do'?)... but it is ordered...

---

also, 'commutative monads' (unordered haskell do)

---

distinction b/t mandatory and optional type checking? if we have a very expressive type system, e.g. with polymorphism and dependent types, then the type system is being used as a program verifier, rather than its initial use in C, which was just to figure out how to compile the darn thing. contrast knowing whether something is a string or an int to bounds-checking for arrays.

there is a distinction here; the 'mandatory' component of the type system is needed to compile the code at all; the 'optional' component is needed to provably prevent certain classes of run-time crashes and errors. if the language uses hybrid type checking, a program concerned about speed might decide to omit all the run-time checks. however, a type error during compilation of the mandatory types would still lead to an error (e.g. if polymorphism is resolved at compile time, then 'you passed a string to this polymorphic function, but there is no polymorphic variant that can handle strings', although this decision could of course be pushed to runtime).

the optional component is really just a program verification/proof assistant, which can be expected to be very complex/expressive and which could perhaps not even be decidable, or not polynomially decidable. the mandatory component better darn well be decidable, and perhaps should be darn simple.

is it possible to have no mandatory component yet still have a program be compilable? e.g. to decide at compile-time on a per-instance basis if polymorphism can be resolved?

is it possible to have the optional component be extensible, e.g. can you make the language like Sage or like Qi or like Haskell at will by swapping this part out? Perhaps there could be different type-system modules just like there are different behavioral modules? this makes sense as ppl are always developing better intermediate logics

toread:

Typmix: A Framework For Implementing Modular, Extensible Type Systems A thesis submitted in partial satisfaction of the requirements for the degree Master of Science in Computer Science by Thomas Anthony Bergan

http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0CG4QFjAD&url=http%3A%2F%2Fwww.cs.washington.edu%2Fhomes%2Ftbergan%2Fpapers%2Fuclathesis-typmix.pdf&ei=Un0cUNe8Cabs0gHO2IBI&usg=AFQjCNGKR4Vlyvm-UbxVhoenfk28GLsJKQ&sig2=hRk-JjyAM7AFjedRZ0wXlg

---

what is the "meaning" of programming language code? we could say it is the compiled code. This allows one to put statements in one's program that refer to its compiled version.

--- perhaps some of the complexity of Haskell's type system can be resolved not by simplifying the type system, but just by improving the UI to Haskell's type checker. one part of this is the error messages. one problem is that if a number of type signatures are, together, inconsistent, then the type checker will complain about a seemingly arbitrary choice of one of these, whereas to a programmer, that one is not always where the problem is. Imposing ordering may help here. The most obvious ordering would be run-time; act as if you are interpreting the program and then the first time you hit an inconsistency, emit an error on that statement. this would be undecidable in general but in most cases there is a clear hierarchy to the program that can be deduced at compile time; e.g. "Main" has calls to Abc and Def, and Abc calls Xyz. Now if a type signature in Main which is passed to Abc causes a type signature in Abc which is passed to Xyz which is used to compute a local variable X within Xyz which conflicts with a type signature on another local variable Y within Xyz on a subsequent line, then the type system should complain about Y's type signature.

the type checker should also report the other induced types that led to the error, e.g. the types of the parameters in the Abc and Xyz calls which had a bearing on the induced type of X.

the key there is to remove the need for the programmer to 'debug' types by inserting a bunch of intermediate type signatures to find out where the problem lies.

Another thing the type checker errors could provide is examples. Rather than just saying "Something of TypeA can't also be something of TypeB" (e.g. "Something of type &[Int] can't also be of type [&Int]"), it should give an example for each type (e.g. "&[Int] (e.g. 0x0000 where *0x0000 = [0]) can't also be of type [&Int] (e.g. [0x0000] where *0x0000 = 0)". This might not help experienced users much (although i bet it will from time to time), but i bet the first thing that a newbie does when seeing some complicated type error is try to figure out an example of the type expression, so this saves them some energy (some will say 'but they have to learn to think!'; i say, well, turn this feature off if you want practise reading type expressions; not everyone does).

---

for shellish programming, readline_rstrip (convert a file with one string on each line to a list of strings, with no newlines in the strings) is quite useful... so are pipes... pipes are like function composition but you can independently redirect the different stdios.. interesting.. also shellish stuff brings up the concept of data-neutral field-delimited formats...

---

shellish stuff like ls *.sva

xargs -n1 flip -m
    i guess haskell is pretty good with this sort of logic?

also many shell commands take multiple arguments anyways:

  flip -m *.sva

---

for performance: predictability over erratic performance: e.g. Python's reference-counting over Java's garbage-collection with possible long pauses (altho a hybrid approach may get the best of both)

definitely no global interpreter lock; stackless python would be preferred to regular python, who cares if it's a little slower in serial

--- in Python you have to do init stuff like:

    def __init__(self, shouldCancelAllUponShutdown=True):
        self._shouldCancelAllUponShutdown = shouldCancelAllUponShutdown

this should be automatic
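a sketch of one way it could be automatic today: copy every keyword argument onto the instance (later Python's dataclasses automate essentially the same thing; the class name here is made up):

    class Order:
        def __init__(self, **kwds):
            # copy each keyword argument onto the instance, with the same
            # underscore prefix as the hand-written version above
            for attr, value in kwds.items():
                setattr(self, "_" + attr, value)

    o = Order(shouldCancelAllUponShutdown=True)
    print(o._shouldCancelAllUponShutdown)   # True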

---

in Python it's supposedly annoying for a subclass to override __init__, so to be safe you end up doing stuff like

    def __init__(self, **kwds):
        self.customInit(**kwds)

instead, make the idiom to just extend __init__ and call super like usual

in fact, make it syntactically easier to call super
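the preferred idiom, for reference (Python 3 spelling; the class names are made up):

    class Base:
        def __init__(self, **kwds):
            self.retries = kwds.pop("retries", 3)

    class Sub(Base):
        def __init__(self, **kwds):
            # extend __init__ rather than replace it, then hand the rest up
            self.verbose = kwds.pop("verbose", False)
            super().__init__(**kwds)

    s = Sub(verbose=True, retries=5)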

---

Yegge goes on about how prototype-based dicts are really really useful:

(if you read this article, skip everything up to here) " Properties Pattern high-level overview

At a high level, every implementation of the Properties Pattern has the same core API. It's the core API for any collection that maps names to values:

    get(name)
    put(name, value)
    has(name)
    remove(name)

There are typically also ways to iterate over the properties, optionally with a filter of some sort.

So the simplest implementation of the Properties Pattern is a Map of some sort. The objects in your system are Maps, and their elements are Properties.

The next step in expressive power is to reserve a special property name to represent the (optional) parent link. You can call it "parent", or "class", or "prototype", or "mommy", or anything you like. If present, it points to another Map.

Now that you have a parent link, you can enhance the semantics of get, put, has and remove to follow the parent pointer if the specified property isn't in the object's list. This is largely straightforward, with a few catches that we'll discuss below. But you should be able to envision how you'd do it without too much thought.

At this point you have a full-fledged Prototype Pattern implementation. All it took was a parent link!

From here the pattern can expand in many directions, and we'll cover a few of the interesting ones in the remainder of this article. "
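a minimal Python sketch of that core API plus the parent link (the class and method names here are mine, not Yegge's); it also shows the read/write asymmetry discussed further down, since writes always land in the local list:

    class PropertyList:
        # maps names to values, with an optional parent (prototype) link
        def __init__(self, parent=None):
            self.parent = parent
            self.props = {}

        def get(self, name):
            if name in self.props:
                return self.props[name]
            if self.parent is not None:
                return self.parent.get(name)    # walk the prototype chain
            return None

        def put(self, name, value):
            self.props[name] = value            # writes are always local

        def has(self, name):
            return name in self.props or (self.parent is not None and self.parent.has(name))

        def remove(self, name):
            self.props.pop(name, None)          # see the 'deletion problem' below

    cat = PropertyList()
    cat.put("favorite-food", "9Lives")
    morris = PropertyList(parent=cat)
    print(morris.get("favorite-food"))   # "9Lives", inherited from the prototype
    morris.put("favorite-food", "tuna")  # shadows the parent's value locally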

after that point there are some interesting implementation notes:

"JavaScript? permits you to use arbitrary objects as keys, but what's really happening under the covers is that they're being cast to strings, and they lose their unique identity. This means JavaScript? Object property lists cannot be used as be a general-purpose hashtable with arbitrary unique objects for keys.".

"

Quoting

JavaScript syntax is especially nice (compared to Ruby and Python) because it allows you to use unquoted keys. For instance, you can say

var person = { name: "Bob", age: 20, favorite_days: ['thursday', 'sunday'] }

and the symbols name, age and favorite_days are NOT treated as identifiers and resolved via the symbol table. They're treated exactly as if you'd written:

var person = { "name": "Bob", "age": 20, "favorite_days": ['thursday', 'sunday'] }

You also have to decide whether to require quoting values. It can go either way. For instance, XML requires attribute values to be quoted, but HTML does not (assuming the value has no whitespace in it). "

" Logically a property list is an unordered set, not a sequential list, but when the set size is small enough a linked list can yield the best performance. The performance of a linked list is O(N), so for long property lists the performance can deteriorate rapidly.

The next most common implementation choice is a hashtable, which yields amortized constant-time find/insert/remove for a given list, albeit at the cost of more memory overhead and a higher fixed per-access cost (the cost of the hash function.)

In most systems, a hashtable imposes too much overhead when objects are expected to have only a handful of properties, up to perhaps two or three dozen. A common solution is to use a hybrid model, in which the property list begins life as a simple array or linked list, and when it crosses some predefined threshold (perhaps 40 to 50 items), the properties are moved into a hashtable. "

" Note that you can get a poor-man's splay tree (at least, the LRU trick of bubbling recent entries to the front of the list) using a linked list by simply moving any queried element to the front of the list, a constant-time operation. It's surprising that more implementations don't take this simple step: an essentially free speedup over the lifetime of most property lists. "

" The algorithm for inherited property lookup is simple: look in my list, and if the property isn't there, look in my parent's list. If I have no parent, return null. This can be accomplished recursively with less code, and but it's usually wiser to do it iteratively, unless your language supports tail-recursion elimination. Property lookups can be the most expensive bottleneck in a Properties Pattern system, so thinking about their performance is (for once) almost never premature. "

" As I described in the Overview, the simplest approach for implementing inheritance is to set aside a name for the property pointing to the parent property list: "prototype", "parent", "class" and "archetype" are all common choices. "

"

The deletion problem

If you delete a property from an object, you usually want subsequent checks for the property to return "not found". In non-inheritance versions of the pattern, to delete a property you simply remove its key and value from the data structure.

In the presence of inheritance the problem gets trickier, because a missing key does not mean "not found" – it means "look in my parent to see if I've inherited this property."

...

The solution ... is to have a special "NOT_PRESENT" property value that deleteProperty sets when you delete a property that would otherwise be inherited. This object should be a flyweight value so that you can check it with a pointer comparison. " (he goes on to note that you may or may not want NOT_PRESENT == null)

(so i guess we want a way to create objects and inherit once, at creation time -- see below, this is similar to a pass-by-reference vs. pass-by-value problem; the prototype parents are by default references but if we let the programmer optionally freeze them into values there is no trouble)

" Read/write asymmetry

One logical consequence of prototype inheritance as we've defined it is that reads and writes work differently. In particular, if you read an inherited property, it gets the value from an ancestor in the prototype chain. But if you write an inherited property, it sets the value in the object's local list, not in the ancestor. "

optimization: "

Make sure your string keys are interned. Most languages provide some facility for interning strings, since it's such a huge performance win. Interning means replacing strings with a canonical copy of the string: a single, immutable shared instance. Then the lookup algorithm can use pointer equality rather than string contents comparison to check keys, so the fixed overhead is much lower.

...

Corollary: don't use case-insensitive keys. It's performance suicide. Case-insensitive string comparison is really slow, especially in a Unicode environment. "

(i might want to not use case-insensitive keys in the runtime, but at compile time to lowercase the key names which are in the source code; or perhaps even use a Haskell-esque syntax with significant case)
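for reference, Python exposes interning directly as sys.intern, so the identity-comparison trick from the quote above is available to a runtime written in Python:

    import sys

    k1 = sys.intern("favorite-food")
    k2 = sys.intern("".join(["favorite", "-", "food"]))
    print(k1 is k2)   # True: interned keys can be compared by identity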

another optimization:

" Perfect hashing

If you know all the properties in a given plist at compile-time (or at runtime early on in the life of the process), then you might consider using a "perfect hash function generator" to create an ideal hash function just for that list. It's almost certainly more work than it's worth unless your profiler shows that the list is eating a significant percentage of your cycles. But such generators (e.g. gperf) do exist, and are tailor-made for this situation.

Perfect hashing doesn't conflict with the extensible-system nature of the Properties pattern. You may have a particular set of prototype objects (such as your built-in monsters, weapons, armor and so on) that are well-defined and that do not typically change during the course of a system session. Using a perfect hash function generator on them can speed up lookups, and then if any of them is modified at runtime, you just fall back to your normal hashing scheme for that property list. "

another optimization (note this is different from the optional parent freezing that i discuss below):

"

Copy-on-read caching

If you have lots of memory, and your leaf objects are inheriting from prototype objects that are unlikely to change at runtime, you might try copy-on-read caching. In its simplest form, whenever you read a property from the parent prototype chain, you copy its value down to the object's local list.

The main downside to this approach is that if the prototype object from which you copied the property ever changes, your leaf objects will have the now-incorrect old value for the property.

Let's call copy-on-read caching "plundering" for this discussion, for brevity. If Morris caches his prototype Cat's copy of the "favorite-food" property (value: "9Lives"), then Morris is the "plunderer" and Cat is the plundered object.

The most common workaround to the stale-cache problem is to keep a separate data structure mapping plundered objects to their plunderers. It should use weak references so as not to impede garbage collection. (If you're writing this in C++, then may God have mercy on your soul.) Whenever a plundered object changes, you need to go through the plunderers and remove their cached copy of the property, assuming it hasn't since then changed from the original inherited value.

That's a lot of stuff to keep track of, so plundering is a strategy best used only in the direst of desperation. But if performance is your key issue, and nothing else works, then plundering may help. "

another optimization:

" The idea is that you don't need to pay for the overhead of property lists when very few of the objects in your system will ever have one. Instead of using a field in each class with a property list, you maintain a global hashtable whose keys are object instances, and whose values are property lists. "

another optimization (omitted for now):

"

REDACTED

Brendan Eich came up with astoundingly clever performance optimization for the Properties Pattern, which he told me about back in January. I was ready to publish this article, but I told him I'd hold off until he blogged about his optimization. Every once in a while he'd ping me and tell me "any day now."

Brendan, it's October, dammit! "

---

what's the diff b/t classes and prototypes anyways? mb that classes can be uninitialized? could just have the notion of 'lifecycle states' or 'modes' of instances (uninitialized and initialized could be 2 of the modes). also i guess the inheritance of the class happens at creation time only, whereas the prototype inheritance happens at lookup time (hence the deletion problem that Yegge talks about). You could just have an operator that 'freezes' the inheritance (e.g. that treats the parents as of that time as static values, as if they were copies of themselves (to save time and space you could just mark them and not actually do the copy unless someone wants to modify the originals)). Note that 'freezing' in this sense is different from marking an attribute as read-only (Java's 'final').

of course in general certain actions would happen during a lifecycle state transition .. allocation upon init, mb some deallocation upon destruct (maybe can unify C++ destructors and Python 'with').. some objects might 'freeze' the prototype parent (list?) property upon init, and might do a check for duplicate names upon init

---

" Transient properties

While implementing Wyvern, I discovered that making changes to a persistent property list is a wonderful recipe for creating catastrophes.

Let's say some player casts a Resist Magic spell, which boosts her "resist-magic" integer property value by, oh, 30 (thirty percent). Then, while the spell is active, the auto-saver kicks in (writing her enhanced "resist-magic" property value out to the data store along with the rest of her properties), and then the game crashes.

Voilà – the player now has permanent 30% magic resistance!

It doesn't have to be a game crash, either. Any random bug or exception condition (a database hiccup, a network glitch, cosmic rays) can induce permanence in what was intended to be a transient change to the plist. And when you're writing a game designed to be modified at runtime by dozens of programmers simultaneously, you learn quickly to expect random bugs and exception conditions.

The solution I came up with was transient properties. Each object has (logically speaking) two property lists: one for persistent properties and one for transients. The only difference is that transient properties aren't written out when serializing/saving the player (or monster, or what-have-you.)

... My early experimentation yielded the interesting rule that non-numeric transient properties override the persistent value, but numeric properties combine with (add to) the persistent value.

" -- http://steve-yegge.blogspot.com/2008/10/universal-design-pattern.html

actually i think this just calls for the broader notion of a meta-property. not sure how to construct this, however. note that in his construction, it's not that some keys are always transient properties and some aren't; some values assigned to properties are transient, and some aren't.
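a small Python sketch of the two-plist idea, with Yegge's combine-numerics rule on reads and a save that skips transients (all names are hypothetical):

    import json

    class Plist:
        def __init__(self):
            self.persistent = {}
            self.transient = {}

        def set(self, key, value, transient=False):
            (self.transient if transient else self.persistent)[key] = value

        def get(self, key, default=None):
            p = self.persistent.get(key, default)
            if key not in self.transient:
                return p
            t = self.transient[key]
            # numeric transients combine with the persistent value,
            # non-numeric transients simply override it
            if isinstance(t, (int, float)) and isinstance(p, (int, float)):
                return p + t
            return t

        def save(self):
            # only persistent properties are serialized
            return json.dumps(self.persistent)

    player = Plist()
    player.set("resist-magic", 10)
    player.set("resist-magic", 30, transient=True)   # Resist Magic spell
    assert player.get("resist-magic") == 40
    assert "30" not in player.save()   # crash-safe: the buff is never written out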

---

hmmm.. one thing i was bothered by in Clojure was the lack of an elegant assignment-to-dict syntax like e.g. Python and Ruby have. we saw earlier that introducing this introduces a complexity, namely, if you allow objects to catch and override an assignment, and you assign to a child of an object (e.g. "a.b.c = v"), how does that work if 'a' asks to intercept assignments to itself but so does c? anyhow, that'll have to be resolved; combine this idea with functions that can return multiple values and that can do call/cc-ish things during a generalization of exception handling. mb we can also generalize functions by letting them be no different from these data structures that can override lookups and assigns, and let you assign to any function. recall that we are already unifying applying a function to a value and doing a dict lookup and looking up an object attribute. so doing a lookup for "2" on the successor function returns "3", the same as if it were a dict and we accessed index "2". what happens if we assign "4" to index "2" of the successor function? does it throw an error, or does it create an overlay (just as if the successor function were a finite array and we did a lazy copy of that array and noted that the value at index '2' had now changed but the rest were the same)?
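one possible resolution, sketched in Python: assigning into a function creates an overlay, so lookup stays unified with application (FnDict is just an illustration, not a decision):

    class FnDict:
        """Acts like a dict backed by a function; __setitem__ creates an overlay."""
        def __init__(self, fn):
            self.fn = fn
            self.overlay = {}

        def __getitem__(self, key):
            if key in self.overlay:
                return self.overlay[key]
            return self.fn(key)

        def __setitem__(self, key, value):
            self.overlay[key] = value

    successor = FnDict(lambda n: n + 1)
    assert successor[2] == 3      # lookup == function application
    successor[2] = 4              # overlay instead of an error
    assert successor[2] == 4
    assert successor[7] == 8      # everything else is still computed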

---

programming language design is like philosophy: i think of philosophy as the effort of taking things which are hard to say, and often hence hard to think, and defining better ways to say them. clearly, if you look at it this way, then programming language design is the philosophy of CS

i think one of the key problems of programming language design is, how do you make the language expressive for meta-programming, but at the same time prevent the community from fracturing into a zillion dialects? i think this is actually mostly a language community governance problem, not a language design problem (esperanto and friends have the same issue), and i think the solution is an amendable-code-like pika-like system that allows the community to easily grant various degrees of official recognition to various libraries and frameworks. However, on the language design side, i think the thing to do is to have various levels of meta-programming with various degrees of expressive capacity. So, e.g. someone shouldn't need to always write a recursive descent parser in order to modify the grammar. This allows them to metaprogram as little as possible, with as low a degree of meta-ness as possible. In a way, Lisp is like the assembly language of meta-programming; you can do anything, but there's only a few basic constructs to do it with. We want a whole high-level language of meta-programming..

in deciding how to resolve the case where 'a.b.c = v' but both 'a' and 'c' override their '__set' functions (note that i am adopting the usual left-to-right syntax here, not the 'backwards' syntax i explored earlier, because i feel that having text aligned to the left decides the issue; you want to have a bunch of lines aligned up, and you often find a part of code by looking down the left side of the screen and saying 'now where did i set variable 'x'? oh, there's 'x = ...', there it is), if the grandparent (that is, 'a') wanted to, they could have modified their child (that is, 'b') upon spawning (or when it is set) to force it to modify its child ('c') so as to override its __set however the grandparent ('a') wanted. So that argues for letting 'c's wishes take precedence. If you let the grandparent's wishes take precedence, then its grandchildren cannot grow beyond the grandparent in functionality. however, this idea that 'a' can modify 'b's __set to force it to modify 'c's __set is quite cumbersome. so i think what you want is to do a sort of upside-down inheritance, where 'a' gets first crack at it (and is passed the path .b.c, of course), but by default can pass it along to 'b'. in other words, 'a' inherits the __set behavior from 'b' which inherits it from 'c'; either 'b' or 'a' can override if they choose. note that this is a little different from std inheritance because we also pass whoever handles the __set a path.

it strikes me that this, actual object inheritance, and antibody-like exception handling all share a common structure; there is a stack of things and messages are passed through the stack with everyone getting a crack at them in turn (or mb partially simultaneously, hmm, is that how multiple inheritance should be handled, by calling ALL of the matching methods?)

i keep wondering how to choose between or how to handle references in variables. is everything a value (Haskell), or do we allow references (C), or is almost everything a reference (Python)? C seems very confusing with its *s and &s to distinguish between values and references, but mb C's explicit * and & operators were good, and C's problem is really only insufficient abstraction over data structure implementation, e.g. that you had to see those *s when you were just trying to access an element of a 2-D matrix.

hygienic or non-hygienic macros? i guess hygienic macros are usually better, but perhaps non-hygienic are needed if you want to implement some kind of weird context-sensitive keywords?

there appears to be some common notion of 'simultaneity' that could perhaps be introduced into the syntax: select an individual element, or do simult AND or simult OR (Kant relevant here?). e.g. allowing simultaneity (ANY and/or ALL) in antibody-like exception handling; e.g. a syntactic function to make something into a nondeterministic automaton; e.g. forking; e.g. deciding who overrides who in a dict merge (namespace merge); note that 'simultaneity' is over space (e.g. vector operations, dict merge) as well as over time (e.g. forking, nondeterministic automata). i think there may have been another example that i had in mind?

in designing an 'elegant metaprogramming hierarchy', we want to look at various things that other languages do, and ask, how can we make that easier to metaprogram in our language? examples: looking at Javascript's dict literals without quoted key names; looking at Yegge's transient properties.

how would we implement Javascript's dict literals?

one idea is to have pre-substitution macro (later note: aren't all macros like that? isn't that the point? or do macros not get access to the tokens they are macro-ing over as strings?), but could also have macro with meta-quoting convention syntax. This sort of syntax, and a way to meta-program it itself, is the sort of thing i am talking about when i say we need language constructs for limited meta-programming.

another thing is antibody-like exception handling. i feel like we should limit the 'antibodies' (or receptors, if you will) to be ANDs and ORs (boolean functions, actually) of conditionals (e.g. boolean expressions whose atoms are primitive boolean tests on attributes of the exception). But this sort of choice is the sort of thing that should be easily expressed in meta-programming syntax -- how would you express that it should be only boolean AND expressions? only boolean ORs? that an exception is shown in series to the callers until it is handled, or in parallel, possibly being handled in multiple places (the simultaneity operator discussed above)?

one wants to have a syntactic macro notation, not to just let punctuation syntax characters be redefined without warning to the reader, but otoh one wants the language itself to have punctuation for 'dict literal where keys are implicitly quoted' vs. 'dict literal where keys are not implicitly quoted'. One idea is to allow macros etc to be unmarked, but to not quite consider that 'real Jasper'. The compiler will translate this for you into 'real Jasper' where they are marked. This allows us to write many of the actual punctuation character syntax for real Jasper using the core language's metaprogramming facilities. The structure of the intermediate compilations (as we take out more and more of the metaprogramming and reduce a program to Core Jasper) is a graph structure, and 'real Jasper' could be a "boundary" in the graph. Above this boundary are user frameworks and libraries. Below it is Jasper language implementation.


a superclass is really just presenting an API to programmers who would subclass it. for example, i have an interface which expects a class to implement an 'idle' method. now i want to subclass that class. but the superclass has a side-effecting call in the idle method which probably should go at the end of what the subclass does, not at the beginning. so the convention that you call super at the beginning of the subclass method here should be reversed; super should go at the end. shouldn't this be a default which is expressed by the superclass? shouldn't the subclass not do anything if it wants the default, and only have to put in some keyword like 'super' if it wants to override?

actually, in my case, i would like the subclass's body to be executed in the middle, not at the beginning or at the end. again, the superclass should be able to choose this default (and the default for this default should be the superclass at the beginning, the subclass at the end).

should the superclass even be able to choose to ignore its own body entirely iff it is subclassed? should it be able to detect if it has been subclassed and branch based on that? or just to say, in effect, the default is for no 'super' call at all? (i like the latter, because the subclass should always have the final word)
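a plain-Python way to sketch 'the superclass chooses the placement' is template-method style: the superclass calls a named hook wherever it wants, so the subclass never writes super at all (only a sketch of the idea, not a proposal for Jasper's syntax):

    class Widget:
        def idle(self):
            self._housekeeping_before()
            self.on_idle()               # subclass body runs in the middle,
            self._side_effecting_call()  # because the superclass said so
        def on_idle(self):
            pass                         # default: subclass contributes nothing
        def _housekeeping_before(self):
            print("superclass: before")
        def _side_effecting_call(self):
            print("superclass: after")

    class MyWidget(Widget):
        def on_idle(self):
            print("subclass body")

    MyWidget().idle()
    # superclass: before
    # subclass body
    # superclass: after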

i talked above about how the structure of __set over a nested object reference is similar to object inheritance -- should the same flexibility apply there? i can't think of a use for it but i don't see why not. but mb we should have some general meta-programmy way of saying things like this (like which direction the inheritance goes).

i guess what's going on is that, unless the subclass demurs, the function produced by the subclass is mutated by the superclass to add some stuff before, or after, or before and after, or neither. generalized, when the function is called, the subclass function is checked to see if it withholds its consent, and if not, it is passed to the superclass metafunction. that seems like a rather complicated dance, i don't particularly like it. and syntactic support for the consent seems a bit heavyweight. further generalization to simplify?

interesting optimization: if the superclass function has no side effects, then the above implicit wrapping is useless and can be skipped..

i suppose this could be generalized to a 'message handler' for each object. the behavior provided above is merely what the default message handler does. so now the object system (that looks to the superclass if the subclass can't resolve a method) is built on top of core Jasper. i guess that's good. still need a way to mark subclasses that withhold consent.

if you think in terms of the Actor model, do you really want to queue incoming messages, or do you want to implicitly fork to process them in parallel by many copies of the object? for pure functions you definitely want to fork (if the granularity is large enough to make that worthwhile). hmm, another use for the simultaneity operator?

still, this 'message handler' should be expressible more abstractly, so that we can capture this repeating pattern of a stack of things which intercept and process messages with elegant metaprogramming syntax.

actually, i think that kind of nails it: the pattern is of a stack of handlers matching and processing messages, in serial or in parallel or in a more complicated way like this wrapping-by-default, possibly being passed not just the message but the path by which the message reached them. oh yeah, and of course, the call stack is another instance of this pattern.

i guess what we are looking for in a lot of this is one form of declarative control structures.

---

one common operation is switching between a data representation in which each node is a dict (a node with a variable set of edges) and a table representation in which all nodes have the same set of edges, but some of the values are null (e.g. where a node wouldn't have an edge with a given key in the set-of-dicts representation, in the table representation it has an edge with that key, but the value is null).

this is somewhat like going down a level in the Haskell-like maybe representation.
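the round trip is easy to sketch in Python with plain dicts and None for the nulls:

    def dicts_to_table(nodes):
        """Ragged dicts -> rectangular table: every key becomes a column, missing values become None."""
        columns = sorted({key for node in nodes for key in node})
        return columns, [[node.get(col) for col in columns] for node in nodes]

    def table_to_dicts(columns, rows):
        """Rectangular table -> ragged dicts: drop the None cells again."""
        return [{c: v for c, v in zip(columns, row) if v is not None} for row in rows]

    nodes = [{"name": "a", "weight": 3}, {"name": "b"}]
    cols, rows = dicts_to_table(nodes)
    assert cols == ["name", "weight"]
    assert rows == [["a", 3], ["b", None]]
    assert table_to_dicts(cols, rows) == nodes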

--

ok another take on the 'levels' idea. i think i may have got it, at least as it applies to the graph data structure (if not yet to language semantics, although perhaps that will turn out to be related).

i was thinking about the message handling stack. each handler is a node. there are edges between the nodes to represent the parent-child relationship (or equivalently, the 'upwards in the stack' relation). each node contains a set of trigger conditions, and a body of code to be executed. one can therefore represent the handlers as nodes with three types of edges: a trigger edge (that leads to a node of type 'trigger', representing all the trigger conditions), a code edge, and one or more parent edges. A slightly alternate and in my mind preferable representation (remember, in Jasper it's supposed to be easy to switch between representations like this) would be to attach the trigger conditions directly to the handler node and have them all be of edge type 'trigger', rather than just a single edge whose label/type is 'trigger'. But going with the former representation, each node has two edges with types 'trigger' and 'body', such that for each node, there is exactly one edge of the given type. One can visualize this in 3d with the nodes as little spheres with cylinders coming out of them, like Capsela. instead of having a 'handler' meta-node, with 'trigger' and 'body' edges connected to it, since each handler node is composed of exactly one 'trigger' node and one 'body' node, one could simply stack the trigger nodes on top of the body nodes in 3d, so that there are two levels, the body nodes on the bottom level and the trigger nodes on the top. The geometry of the nodes in each level is identical (this is a tabular kind of situation). Within each level, the nodes are connected via parent-child type edges. The edges between levels are trigger-body type edges.

Now, one can traverse the trigger nodes via their parent-child edges or one can traverse the body nodes via their parent-child edges, the result will be the same because the geometry is identical. this is like restricting one's attention to one vertical layer/level at a time. one can also think of traversing the invisible meta-nodes representing handlers (a handler node is equivalent to the vertical column containing one trigger node and one body node).

when one is restricting one's attention to one vertical layer/level at a time and traversing parent/child nodes, one might imagine that the other layer and the trigger/body type edges don't exist. if one asks 'What are the edges from/to this node?' one might want the answer to omit the trigger/body type edges.

so, the essence of the levels/layers concept is that edges have types, nodes which are connected via edges of only one type (or more generally, by edges whose types fall within some subset of edge types) form connected subsets/equivalence classes which can be represented as single nodes or equivalently as vertical columns, and one can restrict one's attention to a subset of edge types and ignore the rest (the edge types that are allowed to be traversed when forming the transitive closure to get the equivalence class are just those edge types which are being ignored during normal traversal, e.g. the transitive closure is taken over the inter-layer edges, but normal ops are taken over intra-layer edges).

to generalize this further, a meta-level might consist of a switch in perspective between which subsets of edge types are inter-layer or intra-layer, and/or between which edges are of which type, and/or between a view in which edges are nodes (the default in Jasper, in fact nodes and edges are quite similar), vs. edge types/node labels are nodes. Note that a single node or edge might have multiple types.

in other words, we are redefining what a node is (is it a vertical column, e.g. a handler node in the example, or a single trigger node and a single body node), in such a way that edges of certain types (parent/child) form an identical node topology in either view (e.g. we can traverse the parent/child edges of the trigger nodes only, or we can traverse the body nodes only, or we can traverse the handler metanodes -- in other words we can traverse in coset land or we can traverse members of the cosets); and/or we are redefining which edge types we are looking at (parent/child only or parent/child and trigger and body, or parent/child and trigger/body); and/or we are redefining what the types are (if there are multiple types of types)
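a toy Python sketch of the layer idea: edges carry types, normal traversal is restricted to a chosen subset of types, and the columns/cosets come from taking the transitive closure over the ignored types (all names are mine):

    class Graph:
        def __init__(self):
            self.edges = []                      # (src, edge_type, dst)

        def add(self, src, edge_type, dst):
            self.edges.append((src, edge_type, dst))

        def neighbors(self, node, visible_types):
            # "restrict attention" to the intra-layer edge types
            return [d for s, t, d in self.edges if s == node and t in visible_types]

        def coset(self, node, glue_types):
            # transitive closure over the inter-layer (ignored) edge types
            seen, stack = set(), [node]
            while stack:
                n = stack.pop()
                if n in seen:
                    continue
                seen.add(n)
                for s, t, d in self.edges:
                    if t in glue_types and n in (s, d):
                        stack.extend([s, d])
            return seen

    g = Graph()
    g.add("trigger1", "parent", "trigger2")      # intra-layer, trigger level
    g.add("body1", "parent", "body2")            # intra-layer, body level
    g.add("trigger1", "trigger-body", "body1")   # inter-layer: one handler column
    g.add("trigger2", "trigger-body", "body2")

    assert g.neighbors("trigger1", {"parent"}) == ["trigger2"]
    assert g.coset("trigger1", {"trigger-body"}) == {"trigger1", "body1"}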

yes, yes.. this seems to be getting quite close to the meta project..

hmm.. so what are the fundamental/primitive operators here? we can nodify an edge... we can follow an edge (which in Jasper is equivalent to applying a function, remember); we can restrict our attention to certain edge types.. we can query the available edges from a node.. we can add or replace an edge (by using __set, which might be thought of as a magic edge); what else? how do we handle sets of edges with the same label? how do we deal with multiple labels per edge?

the labels for a node or edge are themselves like a dict (or rather, multidict since in general one key can map to multiple values), hence like a node.

so, labels are just another type of type of edge. the choice of which types are label types and which are edge types can be parameterized, and a shift in this choice is a (one kind of?) shift in level. one can imagine that the level is a form of perspective or context, scoped in code dynamically or statically (which is better for this?), through which code is viewed.

so again.. if we are at a node, we can:

oo i like this.. it's probably another form of universal computation.. you can construct the natural numbers by starting with a node with no edges and then adding edges.. i guess that in order to compute you may need a substitution operator; namely, if given a namespace (a dict (node) representing the assignment to formal parameters passed into a function), and a graph (representing the function definition), then iterate over the graph, and whenever you see a node whose label is one of the edge labels in the namespace (placeholder nodes representing variable mention), replace it with the node found at the end of the edge with the same label name extending from the namespace node.

i guess we also need a special type of node or edge to allow us to nest thunks, e.g. unevaluated substitutions.

but i wonder how far we could go by relying on the edge queries more and eliminating one of the previous two primitives?

a slight generalization to substitution in this direction: apply a map to nodes, apply a map to edges

still need to deal with the transient/view-ish changes in perspective, and with the homomorphisms (traversing cosets)


local mutable variables aren't the enemy; implicit references (side-effect mutation across variables) are the enemy


perspectives: something passed along in variables along with values; a perspective changes how a value acts, often hiding some of it, but you can recover the hidden parts of the value by changing the perspective

sort of like a db cursor (you can move the cursor forward in the query set)

a graph is a value, but which node is the one you are looking at is a matter of perspective.

if every variable has both a value and a perspective, and a perspective is a set of meta-attributes, then since variables are just nodes anyhow, what's the difference between the base attributes and the meta-attributes? this itself is merely a matter of perspective. ultimately, the variable is just a node with various edges of various types.

this shows something important for my meta-project; perspective is a crucial concept. what is and is not 'meta' varies with perspective.


in python loops you are always typing range(len(listVar)). this is too long.

---

if meaning is what the interpreter turns the program into, then meaning changes with perspective. e.g. "if 2 > 3 then print 'hi'" may mean a conditional or a null, depending on if your perspective allows you to deduce that the 'print "hi"' branch will never execute. The meaning of a program with no inputs may be variously considered to be its output (if your perspective is running the program), or its code as a string (if your perspective is just to look at source code as a string) or a list of tokens (if your perspective is the output of the lexer) or assembly language (if your perspective is the output of the compiler). The meaning of a program may be its denotational semantics, its operational semantics, or its axiomatic semantics, depending on your perspective.

does meaning in this context mean anything more than just "some function whose input is the program?" Can the wordcount of the program be considered its meaning? Can the feeling that a non-programmer poet who reads a piece of Perl poetry gets be considered its meaning? To disallow these interpretations, we add the further constraint that the output of the program given any input must be determinable from its "meaning". This anchors the notion of the meaning of a program into the computation and side-effects that it produces depending on input. A meaning may include more than this (it may, indeed, include the feeling evoked by Perl poetry) but it cannot include less than this; this ensures that there is a homomorphism from any domain of program meanings into any domain of 'base program meanings' (what i mean is a domain whose objects have a 1-1 correspondence with program I/O characteristics -- for the subset of pure programs, this means a 1-1 correspondence with partial functions on program inputs). This allows the source code to be a meaning, but since many source codes can have the same results on all inputs, the source code domain is not a base program meaning domain.

Note that i do not here include the time and space footprints of the program (nor the minimal heat dissipation caused by the deletion of information during non-unitary computation, etc) in its 'meaning'. Perhaps that is an error, perhaps not, i'm not sure. Since i don't have a handle on this i'll leave it out for now because it would definitely make things more complicated, and i don't know if it's essential. You can always have those characteristics in the non-base-meaning domain.

You can constructively demonstrate that some domain is a meaning domain by writing an interpreter that runs programs in that domain.

A complete meaning domain is one for which there is an interpreter onto (surjective) the base program meaning domain; e.g. a meaning domain expressive enough to represent all programs.

Of course any old infinite discrete domain can encode all programs so what we really ask is, can you write an interpreter that interprets the objects in the domain in the natural, straightforward manner. i guess what we really want is (a) a function from the source code to the domain, and (b) a function from the domain to a base program meaning domain, such that the concatenation of these is equal to the result of running the standard interpreter on the original source code. So e.g. "wordcount" doesn't suffice because even though integers can encode programs, the wordcount doesn't give you such an encoding; two programs which behave differently can have the same wordcount. So i guess when we ask if something can be a meaning of a program, we are really judging the function from source code to some target domain, not the target domain itself.

---

"

My third and final Java example: sometimes you really do need multiple inheritance. If you make a game, and you have a LightSource interface and a Weapon interface, and behind each interface is a large implementation class, then in Java you're screwed if you want to make a Glowing Sword. You have no recourse but to manually instantiate weapon and light-source implementation objects, store them in your instance data, implement both interfaces, manually stub out every single call to delegate to the appropriate instance, and hope the interface doesn't change very often. And even then, you haven't fully solved the problem, because the language inheritance rules may not work properly if someone subclasses your GlowingSword.

"

Jasper should assist in delegation without boilerplate. (how would CLOS handle that?)

---

in-placeify operator (e.g. sort_inplaceify does sort in-place)

---

as noted earlier: you should be able to specify the signature of a function you want, and some post-conditions (or examples/unit tests), and optionally some pre-conditions and time and space constraints, and Jasper will look in the available libraries for a match. Add in some theorem-proving assistance, and it'll write parts of the program for you.


mathematical number classes, not int, long, etc (that's a job for space annotations)

---

lazy evaluation

---

methodMissing to allow programmers to essentially create their own inheritance system etc


" JavaScript? properties have a small, fixed amount of metadata. Each property has a set of flags. The flags include ReadOnly? (can't modify the value), Permanent (can modify the value but can't delete the key), DontEnum? (key doesn't show up in iterators but can be read directly), and others depending on the implementation. "

interesting, DontEnum is like my conception of metadata


a simple extension of Python keyword arg syntax can allow for partial application with keywords:

def f(a=1, b=)

indicates that the second arg has keyword b, but is required

and

f a=3

is a partial function application that leaves b open

and

f b=4 a=

is a partial function application that leaves a open

could replace the = with something else if you want.. make sure it's a non-shifted key
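for comparison, a Python approximation using functools.partial, where the 'required' marker is just a sentinel default (Python can't literally express 'b='; the sentinel is my stand-in):

    from functools import partial

    REQUIRED = object()   # stands in for the proposed 'b=' (keyword, but required)

    def f(a=1, b=REQUIRED):
        if b is REQUIRED:
            raise TypeError("b is required")
        return a + b

    g = partial(f, a=3)       # like 'f a=3': leaves b open
    assert g(b=4) == 7

    h = partial(f, b=4)       # like 'f b=4 a=': leaves a open (though here a keeps its default)
    assert h() == 5
    assert h(a=2) == 6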


subtypes of null:

null (the parent class)
not-found (in this mapping, there is no value corresponding to the given key)
access-denied (the query may have an answer, but the security policy does not allow this question to be answered in this context)
unknown (== undefined) (the answer to this query cannot be determined in the present context)
nonsense (the query or its answer is inconsistent or nonsensical)
fail (the operation requested was attempted but failed)
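a minimal Python sketch of that hierarchy as sentinel classes, so "x == null" can be answered in the isa sense discussed further down (names follow the list above):

    class Null:
        """Parent of all null-like results."""
        def __repr__(self):
            return type(self).__name__

    class NotFound(Null): pass       # no value for this key in this mapping
    class AccessDenied(Null): pass   # security policy forbids answering here
    class Unknown(Null): pass        # == undefined: not determinable in this context
    class Nonsense(Null): pass       # the query or its answer is inconsistent
    class Fail(Null): pass           # the operation was attempted but failed

    def is_null(x):
        return isinstance(x, Null)   # "x == null" in the isa sense

    result = NotFound()
    assert is_null(result)
    assert not isinstance(result, Fail)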

---

i think i said this before but some syntax ideas:

a.b.c = (a b c)

a b c , d e f = (a b c) (d e f)

or mb a b c / d e f = (a b c) (d e f)

(if these are the same, let's use , b/c it is more symmetrical, and save / for directed arc, see below)

---

lazy expanders: sometimes you may want a node to support a membership test but not enumeration, e.g. if x_types is the set of types of node x, then you might want to be able to say "is x of type t" but not "give me a list of all of the types that x could be" because x is type object, type int, type number, type positive int, etc.

generalizing, we could allow other commonly-asked questions about a node to be supported primitives. this allows nodes to act as symbols for the knowledge that we have about things, e.g. mb some nodes can report their length (count) but nothing else.

so, conceptually, a node can support any subset of the following:

count      integer: how many edges?
list       node: a list of edges (this doesn't quite make sense b/c this is what a node is anyway, right?)
get x      the node at the other end of the edge labeled x (arbitrarily choosing one if there are multiple x edges)
edge x     the node reifying the edge labeled x (arbitrarily choosing one if there are multiple x edges)
gets x     a list of nodes at the other end of edges labeled x
edges x    a list of nodes reifying edges labeled x
set x y    create an edge labeled x pointing to node y
any p      predicate that evaluates to TRUE iff p(x) is true for the target of any edge from this node
all p      predicate that evaluates to TRUE iff p(x) is true for the target of every edge from this node
in x       predicate that evaluates to TRUE iff the target of any edge from this node == x (memcmp equals or equivalent?)

except that once a node supports list, it supports count, and once it supports list and get, it supports everything but set.

and all nodes support:

meta       return the meta node for this node (but i think instead we should switch perspective..?)
persp x    switch perspective on this node (how should this work? should it be persp x y?)

note: we can let the predicates return lazy expressions rather than bools to do logic.. how to make this support arbitrary logics, and arbitrary inference methods?
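a tiny Python sketch of the lazy-expander case: a node that answers membership questions and nothing else (purely illustrative):

    class TypeSetNode:
        """Supports 'in'-style questions, but refuses to enumerate its members."""
        def __init__(self, predicate):
            self._predicate = predicate      # decides membership lazily

        def __contains__(self, item):        # the 'in x' primitive
            return self._predicate(item)

        def __iter__(self):                  # the 'list' primitive is deliberately unsupported
            raise TypeError("this node does not support enumeration")

    x_types = TypeSetNode(lambda t: t in {"object", "number", "int", "positive int"})
    assert "int" in x_types
    try:
        list(x_types)
    except TypeError:
        pass   # enumerating all the types x could be is refused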

---

types of equals

equivalent: the values of these variables are identical
equivalent under homomorphism: the values of these variables are in the same coset of a homomorphism
memeq: these variables are stored in the same memory location
refeq: these variables will always have the same value, eventually (e.g. they might refer to two different memory locations but they are synced with each other)

i think that rather than representing all of these, we should just use value equivalence -- if you want memeq, just use &x == &y. refeq is a special case of equivalent under homomorphism. equivalent under homomorphism can be === or mb ==

choice-of-homomorphism-or-domain.

however, are there cases where the homomorphism may be implicit? e.g. above we said that null is a class, not a particular value. but you should be able to test for membership in this class with x == null... e.g. if x == nonsense, then x == null is true, right? but the value "nonsense" is a type, and the value "null" is a type, and they are not the same type.. hmm...

in human language, if you say, "is mars the morning star" or "is bob human", the answer is yes, even though in math, the first is (mars == the morning star), and the second is (bob isa human). suggesting that == should act like "is", and if the first one is a value and the second is a type containing that value, then == returns true. but this does violence to the mathematical meaning of ==s, and leads to unexpected answers when dealing with types containing types: "is (the type of all types) (the type of all types and all values)?" if we interpret that as an ==, the answer is no, if we interpret this as isa, the answer is yes.

perhaps use === for this? or perhaps use another nonshifted punctuation for assignments, instead of =, like .:

a . 3
a + 4 == 7
a + 4 == int

too bad : is shifted, that would work well

or perhaps just using an assignment (an imperative) within an expression that returns a value is an error, and you use = for both, and == for the = or isa thing? or use == for the equals or isa thing?

hmm, i still like just using === for equals, = for assignment, and == for equals-or-isa:

x y = or / === x y , isa x y

---

how to support arbitrary logics? perhaps by having an "implies" operator that applies to boolean values. also, generalize boolean values to 'logical' values.

not all logics have truth values? not sure. see http://en.wikipedia.org/wiki/Truth_value

---

bwa ha ha! i think i've got it! changing perspective changes the bindings of the operations on a node; i.e. it changes the meanings of 'get' and 'set'.

the other common operations, e.g. 'count', are really just edges on the meta-node. the meta-node is what you get when you look at the node from the distinguished perspective 'meta'.

might call a perspective a 'view' because it's shorter to type. also, call edges 'arcs' for the same reason. dunno if there is a shortcut for node.

you might call the meta view the "node" view just so as to not have to use the overloaded word 'meta'. then again, i think 'meta' is very appropriate here.

---

'bind' identifies nodes, at least within a given perspective

---

so, Jasper is now capable of type-1 meta! can we make it capable of type-2 or even type-3 meta?

type-2 would seem to involve the reification of the 'meta' arc, that is, the arc from the base node to its meta-node, possibly as well as the reification of a 'meta' type for that arc. so, in the meta node, there is an arc that takes you to a representation that represents each view of the node as a separate node, with the arcs labeled by the view name. one of those arcs will be labeled 'meta'. you can, of course, reify that arc. so i think that's type-2; i think now we've gotten Jasper up to meta class type-2 (although it's debatable; does this construct mean that Jasper contains the 'understanding' of the specialness of that meta-node? is such an 'understanding' required for type-2? what would such an understanding consist of? perhaps just the algebra?).

now, type-3 reifies the distinction between types 1, 2, and 3. todo.

(after a tiny bit of thought) i think this greatly clarifies the meaning of type-1 and type-2. but i don't think it represents their meaning in total; it's just a representation of that (perhaps this concept of 'representation' should be further explored on the path to type-3; but i thought i decided that algebra itself is only type-2. hmm. maybe the next step is to fully add algebra to Jasper to gain a fuller understanding of type-2). i don't think this is type-3.

---

is there any sense to a function definition with an expression on the left, e.g.

f (+ x 1) y = + x y

?

i guess that would match an expression of the form "f (+ x 1) y", but would not match other expressions such as "f x y"?

i suppose it makes sense to allow this sort of thing, in order to get closer to grammar definition and logic programming

i guess the programmer is implicitly asserting that f (+ x 1) y === f (+ x 1) y, that is, if the programmer says

f (+ x 1) y = + x y
f x y = + x y

then the programmer is asserting that (for example, if x' = (+ x 1) and y' = y:)

+ (+ x 1) y === + x y

which is untrue, so then the programmer has made an error? in any particular case this could be caught by running both of them and comparing..

similarly, should we just generalize this syntax to allow context-free grammars and Horn clauses (and whatever else the user expands the language to make use of) to be stated in a similar fashion?

e.g.

a and (not a) -> false    (-> is 'implies')
sentence -> subject verb object    (-> is 'produces')

user can define operators like ->; reuses the interpreter's machinery for interpreting assignment statements..

mb / is a better choice for most things..

---

note: the substitution operator is not the same as a map operator; substitution is a recursive map operator; stopping at quoting boundaries, tho

---

graph constructor syntax idea:

a -> b -> c -> a

node a has label "a"

$a -> b -> c -> $a

the first node is given whatever is in variable a as a label

also, when making a schema of a graph, can use keyword 'root' and 'self' to refer to the root node and to the node from which an edge is emanating (are these different? mb just use 'self')

e.g.

self -> self

hmm would be nice to use / instead of -> ...

---

ah, that's right, in Haskell, f x y == (f x) y, not f (x,y). does this require you to know the arity of things in order to parse? no, but semantically? is this a good idea?

lessee..

vectorize * 3 [1,2,3]

hmmm, makes sense, but you do have to know that vectorize is 'done' after it takes *... due to currying, this is a semantic, not a syntactic, distinction, as any function can be said to be 'done' after taking 1 argument (it's just that it returns a partially-applied function).

is this good, or do we want

(vectorize *) 3 [1,2,3]

to be explict?

what if it were

vectorize-with-option * option 3 [1,2,3]

vs.

(vectorize-with-option * option) 3 [1,2,3]

those are syntactically equivalent in haskell...

semantically we like to think of the result of (vectorize-with-option * option) as a 2-input function, not a 1-input function returning a 1-input function, which is what causes the discrepancy..

what would Jasper look like if we made that distinction?

(vectorize-with-option *) option 3 [1,2,3]

would no longer work b/c it would say that (vectorize-with-option *) takes one argument, not three. you'd have to do

((vectorize-with-option *) option) 3 [1,2,3]

or

(vectorize-with-option * option) 3 [1,2,3]

so the associativity rule would be

f x y z = (f) (x,y,z)

hmmm that's not really even associativity..

seems easier to read somehow tho.. but does it kill the beauty of Haskell?

the rule would basically be that you can give a function fewer args than its arity.. but not more, without enclosing the fn and its args inside parens

in other words, functions would automatically be curried in case of partial application.. but curried functions are not automatically 'decurried', e.g. if you defined a function as

f x : takes an int and returns a function from int to int

then that wouldn't be the same as defining

f x y: takes two ints and returns an int

and the latter could be used like the former, but not vice versa

hmm, seems like this may or may not impede the use of hof (higher order functions), since they must now be careful to return a fn of the correct arity after doing their manipulations... otoh this seems pretty easy to deal with.. not sure tho

mb should ask on a Haskell mailing list for examples where this would hurt

mb would be better to just run with it and try reimplementing Haskell prelude with this syntax first..
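the proposed rule is easy to prototype in Python: under-application curries, exact application calls, over-application stays an error (the autocurry helper is my own stand-in, not a claim about how Haskell or Jasper would do it):

    import functools
    import inspect
    import operator

    def autocurry(fn):
        """Fewer args than the arity -> partially apply; exactly the arity -> call."""
        arity = len(inspect.signature(fn).parameters)
        def call(*args):
            if len(args) < arity:
                return autocurry(functools.partial(fn, *args))
            return fn(*args)
        return call

    @autocurry
    def vectorize_with_option(op, option, scalar, xs):
        return [op(scalar, x) for x in xs]

    f = vectorize_with_option(operator.mul, None)     # under-applied: still needs 2 more args
    assert f(3, [1, 2, 3]) == [3, 6, 9]
    assert vectorize_with_option(operator.mul, None, 3, [1, 2, 3]) == [3, 6, 9]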

---

yes, i think in Haskell, newlines are implicit separators. that's how they get rid of the parens. i like it.

---

namespaces can be used to represent equational theories, e.g. if a + b == a * b isn't true in general, but it is true within weird-domain-x, then you can make a weird-domain-c namespace and put it in there. you can put Horn clauses, production rules, etc in namespaces too.

---

namespaces can do lexical scoping by overriding __get so as to first search within that namespace. e.g.

namespace a {
    b = 3
    c = b + 2

}

the b inside 'c = b + 2' does a __get b, which can be overridden by a.

even if we don't need to override to do that, we can use it to pass a hidden 'self' parameter within stateful objects, like Python's self:

stateful a {

def __init__ (barg): b = barg

def c: b + 2

}

(pardon the Python grammar)

in the method 'c', 'b' is resolved via a __get, but a overrides its get to pass a 'self' parameter to any method defined within (now how do we exclude static methods.. or mb we use the keyword 'state' as the first argument to the fns; that's a pain tho; mb this is where we use '!'? no, i actually like using 'state', altho it's too long.. mb 'me')

so something like

a = {

__init__ me barg = {
    -- b = barg -- how is this detected as a mutation? mb it has to be:
    me.b = barg
    -- so the first time a variable is accessed, you must use 'me.' to make sure it is created in object scope, not local scope
}

c me = b + 2

d = 4

}

the statefulness is autodetected by the presence of __init__ and 'me'. 'me' is a keyword. c is an instance method, d is a static method.
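the same example sketched in Python, just to make the implicit 'me' lookup concrete (Python classes already do this, of course; the Stateful helper is only to show the mechanism being described):

    class Stateful:
        """Toy object whose 'methods' receive the instance ('me') implicitly."""
        def __init__(self, members, *args):
            self._slots = dict(members)
            init = self._slots.get("__init__")
            if init:
                init(self, *args)

        def __getattr__(self, name):
            value = self._slots[name]
            if callable(value):
                return lambda *a, **kw: value(self, *a, **kw)   # inject 'me'
            return value

    a = Stateful({
        "__init__": lambda me, barg: setattr(me, "b", barg),
        "c":        lambda me: me.b + 2,
        "d":        4,                      # static-ish member: no 'me' parameter
    }, 3)

    assert a.b == 3
    assert a.c() == 5
    assert a.d == 4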

---

can use Python ':' in place of 'do'; or mb just curly braces

can we just use parens instead?

a = (

__init__ me barg = (
    -- b = barg -- how is this detected as a mutation? mb it has to be:
    me.b = barg
    -- so the first time a variable is accessed, you must use 'me.' to make sure it is created in object scope, not local scope
)

c me = b + 2

d = 4

)

translates into

a = (;

__init__ me barg = (;
    -- b = barg; -- how is this detected as a mutation? mb it has to be:
    me.b = barg;
    -- so the first time a variable is accessed, you must use 'me.' to make sure it is created in object scope, not local scope
);

c me = b + 2;

d = 4;

)

eliminating comments and whitespace:

a = (;__init__ me barg = (;me.b = barg;); c me = b + 2; d = 4;)

eliminating superflous ;s:

a = (__init__ me barg = (me.b = barg); c me = b + 2; d = 4;)

hmm, yeah, looks like parens are fine. might as well use [] instead of parens b/c they are unshifted? for now i'll keep using parens b/c my custom keyboard layout has them unshifted.


globals? sure, set via accessing the global node:

global.x = 3

---

i'm kind of enjoying having a.b.c = (a b c) and a , b = (a) (b).

---

with multidim arrays, how to get

x[a,b] == x[a][b] ?

or should these not be the same?

how to do array slices, e.g.

x

x[a,:]

x[3:5,b]

x[:5,3:-1]

(or mb use '..' in place of ':' ?)

i guess things like ':' and '3:' (or .. and 3..) are literals for 'range' objects, and the __get fn intelligently checks for these?

so would it look like:

x.3..5.-1 ?

mb in this case (x 3..5 -1) is easier to read?
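note that Python already distinguishes x[a, b] (one __getitem__ call receiving a tuple) from x[a][b] (two calls); a sketch of a __get that checks for slice/range literals as suggested:

    class Grid:
        def __init__(self, rows):
            self.rows = rows

        def __getitem__(self, key):
            if isinstance(key, tuple):            # x[a, b] and slice forms like x[3:5, b]
                r, c = key
                picked = self.rows[r]
                if isinstance(r, slice):
                    return [row[c] for row in picked]
                return picked[c]
            return self.rows[key]                 # x[a] -> a whole row, so x[a][b] also works

    x = Grid([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
    assert x[1, 2] == 5
    assert x[1][2] == 5          # same answer here, but via two lookups
    assert x[0:2, 1] == [1, 4]   # slice literal handled inside __getitem__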


need a variant of a.x = (y) to say:

'override method x of a with y, but as a hook that a.x calls when it wants (so i don't have to manually write super)

and a variant to say:

'override method x of a with y unconditionally, don't call a.x at all, but put a.x in some keyword (like 'super') for me to access so that i can call it if i want)

mb the former is a.x := (y), and the latter is just a.x = (y), but if 'super' is in the body, then the old a.x is saved and bound to super.

mb just use '=' in both cases, but if super is in the body, then ignore the parent (but then how not to super? mb 'never super')

mb use keyword 'old' or 'parent' instead of 'super'? but i guess 'super' is less overloaded and already used to mean this.

since we are talking about rebinding a.x, at what time is this stuff executed, and in what order?

mb use keyword 'inherit'?

mb default is for the subclass to call super at the beginning (equivalently, for the superclass to call the subclass at the end). use 'hook' keyword in the superclass to call the subclass somewhere else. or should the default be to call at the beginning in parallel?

e.g.

dog = (hi = bark)

beagle = ( inherit = dog -- or just 'inherit dog' -- or inherit := dog to freeze dog and inherit from that

hi = (wag tail) )

-- beagle.hi will bark then wag tail

dog = (hi = bark)

beagle = ( inherit = dog -- or just 'inherit dog' -- or inherit := dog to freeze dog and inherit from that

hi = (never super; wag tail) )

-- beagle.hi will wag tail

dog = (hook; hi = bark;)

beagle = ( inherit = dog -- or just 'inherit dog' -- or inherit := dog to freeze dog and inherit from that

hi = (never super; wag tail) )

-- beagle.hi will wag tail then bark

if super returns something, and subclass needs it, then subclass must call super explicitly, e.g. super + 3.
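a rough Python analogue of the bark-then-wag defaults, with a decorator standing in for the proposed hook / 'never super' machinery (the names are mine):

    def default_super_first(cls):
        """Wrap overridden methods so the parent's version runs first by default,
        unless the override is marked never_super."""
        for name, fn in list(vars(cls).items()):
            parent_fn = getattr(cls.__bases__[0], name, None)
            if callable(fn) and callable(parent_fn) and not getattr(fn, "never_super", False):
                def wrapped(self, *a, _parent=parent_fn, _child=fn, **kw):
                    _parent(self, *a, **kw)
                    return _child(self, *a, **kw)
                setattr(cls, name, wrapped)
        return cls

    def never_super(fn):
        fn.never_super = True
        return fn

    class Dog:
        def hi(self):
            print("bark")

    @default_super_first
    class Beagle(Dog):
        def hi(self):
            print("wag tail")

    @default_super_first
    class Basenji(Dog):
        @never_super
        def hi(self):
            print("wag tail")     # no bark at all

    Beagle().hi()    # bark, then wag tail -- without the subclass ever writing super()
    Basenji().hi()   # wag tail only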

---

never x = if FALSE then x

note: special behavior: always lazy, so x will never even be evaluated.

---

:= for 'freeze and copy'

---

for simultaneous EOL,
for sequential EOL (would like it to be the reverse b/c : is symmetrical and ; is not, but ; is the unshifted one)

---

note: Mutable state is only bad if the state is not disconnected from other states in other variables

E.g. most oop encapsulation is ok, e.g. if you have a Point class with an x and a y instance variable, and there is only one reference to Point in one variable at any one time -- that's fine. What's dangerous is if you have an object which holds references inside itself, and there are other references to the same referents inside other objects -- it's the connectedness of state that's dangerous.

---