proj-oot-ootNotes1

mb give up on node labels and just have node labels be references but have a mechanism to make namespaces into 1st class objects and to use them in the way i was using the supernode before

library fns to do the graph and category constructs in vaughan pratt's category theory handout

as nodes are fns, edge sets of vertices can be defined 'lazily'

homset: edges between given 2 vertices

slices:

1: -> [1 2 3 ...]

1:4 -> [1 2 3 4]

1:10:2 -> [1 3 5 7 9]

:3 -> [0 1 2 3]

: -> [0 1 2 3 ... ]

1:4.0 == 1

1:10.-1 == 10

1:10.[-1 2 -3] == [[1:10].-1 [1:10].2 [1:10].-3] == [10 3 8]

1:5.: == (1:5).(:) == [1 2 3 4 5]

/. note that no error is raised when indexing a graph node with a list of indices, some of which are edge labels not present in the node ./

1:5.1:3 == (1:5).(1:3) == [2 3 4]

1:5.:3 == (1:5).(:3) == [1 2 3 4]

1:5.2: == (1:5).(2:) == [3 4 5]
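a minimal python sketch of the slicing and indexing semantics above (assuming inclusive endpoints and 0-based node indexing, as in the examples; oot_range/oot_index are hypothetical helper names, and open-ended ranges are truncated since python lists aren't lazy):

def oot_range(start=0, stop=None, step=1, limit=100):
    # 1:4 -> [1, 2, 3, 4]; 1:10:2 -> [1, 3, 5, 7, 9]; :3 -> [0, 1, 2, 3]
    if stop is None:                # open-ended "1:" -- truncated here
        stop = start + limit
    return list(range(start, stop + 1, step))

def oot_index(xs, idx):
    # xs.idx -- an int, a negative int, a list of indices, or another range
    if isinstance(idx, list):
        return [oot_index(xs, i) for i in idx]
    return xs[idx]

assert oot_range(1, 4) == [1, 2, 3, 4]
assert oot_range(1, 10, 2) == [1, 3, 5, 7, 9]
assert oot_index(oot_range(1, 10), [-1, 2, -3]) == [10, 3, 8]
assert oot_index(oot_range(1, 5), oot_range(1, 3)) == [2, 3, 4]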

[a==1

to achieve this, the default node

slicable: (int list) slicable = slicable

capitalize for types or ?x for pattern vars?
  eg ?: list: int (?x list) = ?x
  eg cap: List: Int (x List) = x
  eg both: Int (?x List) = ?x
note: types that take args must always be fully applied in order to disambiguate b/t application of the parameterized type and application of an instance of the type to a value. fully parenthesized too. you can always just use _:

(_ List) len = Int

behaviors are just null interfaces?? no, they are more like subinterfaces. well, mb that's ok.

in addition to mb, ' somehow denotes which edges are metadata

oot notes

see also [1]

mb should rename b/c java starts with j

go through rosetta stones

do just seq, not monadic het or nty, or unt or uty; heterogeneous/untyped variable
footnotes: (1), (2) at EOL? naw, mb #1, anywhere
implied sequencing: "#1 stuff" means "do {#1; stuff}"; "stuff #1" means "do {stuff; #1}"
poor man's mixins? "debug" flavored footnotes (generalized: flavors)
scoped (function or class scope) footnotes are an explicit substitute for AOP (aspect-oriented programming)'s "join points"
the content of footnotes may simply be some tags which can be queried by an AOP pointcut, e.g. "' access_bank_account transfer" (i.e. an I-EXPR, in this example, a flat list with two elements), or it could be code (the default)
(what's done with the return value? i guess the rest of the line, if there is any, is aborted if a certain flavor (i.e. product type coordinate value) of exception is thrown (or maybe special protocol for footnote return value?))
two footnotes of the same # within scope mean that the footnote runs "around" the stuff in between, i.e. do bank_acct(a) = bank_acct(a)
if a footnote is the only thing on a line, that's fine, but what if a footnote is specified in the middle of a line? is it just evaluated? like "foo bar #3 dude"? i guess this could mean "pass the default for this arg to #3 for advice, and let #3 do something again after this is called"
pointcuts may query tags, but also can query the lexical scope of the footnote, including fn, class, module, and mb even source code file and version
as wikipedia says about aspectJ, "Pointcuts can be composed and named for reuse". oot itself should be used for this
mb instead of letting a footnote at BOL be before, and at EOL be after, they should all be either middle or EOL, and this means the footnote can wrap (before or after)

	   footnotes do not have to be in order
	footnote tags can also be used for other purposes, i.e. http://en.wikipedia.org/wiki/Attribute-Oriented_Programming. should probably read http://en.wikipedia.org/wiki/A_Metadata_Facility_for_the_Java_Programming_Language
 stuff" means "do {#1; stuff}"
	         "stuff #1" means "do {stuff; #1}"

compiler can compile to various targets, like ruby, python, java

compiler can be told to inline and attempt to simplify a list of functions or modules or source filters-- helpful for when someone else wants to use some higher order stuff that looks like line noise to you. compiler is, basically, extremely modular, extensible, and IDE-friendly

(flavors? parameterized code? mb flavors determine which monads get added to everything?)

# at BOL for preprocessor (or do we need scoped preprocessing?? but can scope with whitespace, i.e. #scoped preprocessor) of course, preprocessor language is oot
  ## is pre-pre processor, etc

"meta" keyword for "macros"? or just preprocessor?

sig whitespace and optional {}
logical endpoint of sig whitespace: no : (python), ;, {} (C), "end" (ruby) needed!

x=y defines x as convenience var:
  x = y
  if q
    x = z
  x = x + 1
  z = x / 2
->
  z = x/2 where
    x = x'+1
    x' = (if q {z} {x})
    x = y

continuations cot http://en.wikipedia.org/wiki/Continuation#Examples, i like how they say "Continuations are the functional expression of the GOTO statement, and the same caveats apply."

sub (or mac) like "where" in haskell
everything prefix except a few predefined exceptions: arithmetic? naw.. these exceptions are converted to prefix before preproc/meta/macros see them
$, $! (but $-, not $!) as in haskell (or mb --, since no shift is needed 4 that -- comments can be )
precedence of language keywords over user? of imports over later??? in import keyword? naw... no prec except language-defined!
[] [[
instead of a language spec, a ref implementation in itself

tupled return types: , is sufficient, no need to surround with parens

dicts
lisp lists (trees as linked lists)
trees
arrays (integer-indexed list with O(ln N) access), hashes (assoc arrays)
iterators, generators
graphs
strings; """ for "here" (parameterized)
streams
(for more, check other langs, don't forget *lisp)
numbers: int rational ??
bool
nil

data type flavors (such as "exact int", "fixed real", "float real", "exact float real"): a flavored type is a type which inherits from (i.e. is in the typeclass of) both "exact" and "int"; clearly, "exact" is an empty typeclass
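in python terms (illustrative only; the class names below are made up), a flavored type is just a type that belongs to both the empty "exact" marker typeclass and the "int" typeclass:

class Exact:                          # empty marker typeclass: membership only
    pass

class Integral:
    def succ(self):
        raise NotImplementedError

class ExactInt(Exact, Integral):      # the "exact int" flavor
    def __init__(self, n): self.n = n
    def succ(self): return ExactInt(self.n + 1)

x = ExactInt(3)
assert isinstance(x, Exact) and isinstance(x, Integral)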

are class flavors like traits? http://en.wikipedia.org/wiki/Trait_%28abstract_type%29

containers have [] (naw, (), see below)
[[x is short for [x]
heck, why not just () instead of []? "call" the container?
  d = dict
  d(apple) = 3
  pr d(apple)
pr
import
"import std" assumed (unless "noimport std", which is discouraged)
easy compiler cmdline to chain preprocessors (i.e. to let one source file preprocess another)
can use the significant indentation to define data literals eg
  d = dict(
    apple 3
    banana 2)
(ie, look, no commas, no =s)
(if you need to compute, use parens, e.g.:
  d = dict(
    apple 3
    (whatever_kind_of_fruit_fred_gave_me) ()
( the ()s, with opening parens followed by linefeed, are what distinguishes this from d = dict ({apple 3; banana 2;}), which would otherwise be assumed; surround block with {}, as in dict ({apple 3; banana 2;}), to pass a code block)

  umm, this is kinda like lisp, but more complex b/c the special case with (); why not just use '?
   d = dict
   '   apple 3
      banana 2
    nested_list =
    ' outermost_elem_1
      outermost_elem_2
        sublist_elem_1
          subsublist_elem_2
          subsublist_elem_2
        sublist_elem_3
    here_document =
      """hi
         dude
         multiline
         str
   or
    here_document = """hi dude multiline str """ or
    here_document ="""hi dude multiline str """ or
    here_document = """hidude multiline str"""

(initial compilation step removes HERE documents)
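a tiny python sketch of reading an indented block into nested lists, roughly what the nested_list example above relies on (parse_indented is a hypothetical name; it ignores the ' marker and here documents, and uses indentation depth alone to decide nesting):

def parse_indented(text):
    stack, depths = [[]], [-1]          # stack[i] is the list being built at depth i
    for line in text.splitlines():
        if not line.strip():
            continue
        depth = len(line) - len(line.lstrip())
        while depth <= depths[-1]:      # dedent closes finished sublists
            stack.pop(); depths.pop()
        sub = [line.strip()]
        stack[-1].append(sub)
        stack.append(sub); depths.append(depth)
    return stack[0]

example = "outermost_elem_1\noutermost_elem_2\n  sublist_elem_1\n    subsublist_elem_1\n  sublist_elem_2"
print(parse_indented(example))
# [['outermost_elem_1'], ['outermost_elem_2', ['sublist_elem_1', ['subsublist_elem_1']], ['sublist_elem_2']]]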

ptrs (store ptr that you get from elsewhere, even if you can't deref, or at least not easily)

syntaxes using =: "x = 3" assign value to convenience variable x

  "d = dict; d(bob) = 3"
    same as "_put d bob 3"
    (what if you did "(d(bob)) = 3"? should that be an error, or
     should it mean "lookup element bob in d, it should be a ptr, now
     	       	    deref that ptr and put 3 into that location",
     i.e. "*(d(bob)) = 3"? in other words, should l-values be
     ordinary expressions of type ptr, and "d bob" has an extra,
     special interpretation as type ptr, or do l-values have a special
     syntax involving "d(bob)"? seems like in the former case we are
     making this expression have a side-effect, namely a mutation; so
     maybe the latter is better; otoh, mb we should do the latter
     when we can, and the former otherwise, with the former requiring
     the type of the block to be side-effecty
     "(find_dict(1))(bob)" = 3 means, "_put find_dict(1) bob 3"
     in haskell, mutable arrays are in the IO monad or the ST monad
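for comparison, python resolves the same question syntactically: d[k] = v dispatches to d's __setitem__ (the analogue of "_put d bob 3"), and the chained form just does an ordinary lookup first and then mutates whatever it got back. illustration only:

dicts = {1: {}}

def find_dict(i):
    return dicts[i]

d = find_dict(1)
d["bob"] = 3                 # like "_put d bob 3"
find_dict(1)["bob"] = 3      # like "(find_dict(1))(bob) = 3": lookup, then mutate the result
assert dicts[1]["bob"] == 3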

container protocol: get with default is just call w/ a pair arg, i.e. "d(bob,5)"?? no, what if multidim array? mb xtra comma or parens? comma: "d(bob,,5)" hmm, that could be useful in general for any call.. could supply default or raise exception with "f(bob,,5)" or "f(bob,,raise \"bad arg to function f\")", or, w/ footnote, "f(bob,,#1)"
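python's existing analogue of the "d(bob,,5)" idea keeps get-with-default as a separate protocol from plain indexing, which sidesteps the multidimensional-array ambiguity (illustration only):

d = {"bob": 7}
assert d["bob"] == 7              # plain get: "d(bob)"
assert d.get("alice", 5) == 5     # get with default: "d(alice,,5)"
try:
    d["alice"]                    # no default supplied: raises
except KeyError:
    pass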

also, support "properties": http://docs.python.org/library/functions.html#property
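for reference, python's property turns attribute syntax into getter/setter function calls:

class Account:
    def __init__(self, cents):
        self._cents = cents

    @property
    def dollars(self):
        return self._cents / 100

    @dollars.setter
    def dollars(self, value):
        self._cents = int(value * 100)

a = Account(250)
assert a.dollars == 2.5
a.dollars = 3.0
assert a._cents == 300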

documentation and unittests within comments protocol; or at least, help enable this via docstring-like reflection fns that parse I-exprs in docstring comments

something put right next to parens, without a space, binds to them as if the pair were surrounded by parens; so "_put find_dict(1) bob 3" means "_put (find_dict(1)) bob 3", but "_put find_dict (1) bob 3" is an invocation of _put with 4 arguments

  same with other symbols UNLESS the combined string is defined, i.e. if "a+" is defined, then
  "a+" will mean a+, not "(a +)".

import
import __ --- promiscuous by default
import __ as name

automated attempted type coercion, using "to_" and "from_" fns (mb "to" and "from" should be keywords; "to int", "from int") (get error from haskell, look for "to", look for "from"?)

but see also http://www.artima.com/forums/flat.jsp?forum=106&thread=179766&message=234456 ; "I also would recommend extreme caution in using that feature. People complain about dynamic typing and then go ahead and use implicit conversions. That is like filtering the mosquito and eating the camel. In 90% of the dynamic typing errors at least I get a type foo does not understand message bar message that makes it easy to find the error. With implicit conversions I can get all kind of logical errors without ever getting any hint at what is wrong."

so mb should have an operator meaning "coercion here, but i'm not going to tell you which one". of course, an IDE can query the compiler/interpreter to find out what it guessed -- ? ~?

apparently scala already did something similar called implicit "conversions": http://www.artima.com/forums/flat.jsp?forum=106&thread=179766&start=0&msRange=15
  * mb should use word "convert" or "implicit" to remind scala users
  * in the last example of the initial post on that thread, we wouldn't need to convert in haskell b/c we could just add an append fn w/ an array signature

disallow multistep conversion paths (unless "~ ~", "~ ~ ~", etc or "~*" (!))
ambiguity error if multiple conversion choices
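a hypothetical python sketch of that "coercion here, but i'm not going to tell you which one" operator: a registry of single-step converters, where chains are refused and a second converter for the same pair is an ambiguity error (register/coerce are made-up names):

CONVERTERS = {}   # (from_type, to_type) -> converter fn

def register(frm, to, fn):
    if (frm, to) in CONVERTERS:
        raise TypeError("ambiguous conversion %s -> %s" % (frm.__name__, to.__name__))
    CONVERTERS[(frm, to)] = fn

def coerce(value, to):
    # the "~" operator: coerce, but don't say which conversion
    fn = CONVERTERS.get((type(value), to))
    if fn is None:    # no one-step path; chains would need "~ ~", "~*", etc
        raise TypeError("no one-step conversion %s -> %s" % (type(value).__name__, to.__name__))
    return fn(value)

register(int, bool, lambda x: x != 0)
register(str, int, int)

assert coerce(3, bool) is True
assert coerce("42", int) == 42
# coerce("42", bool) raises: str -> bool would take two steps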

todo: learn about env, reader, writer, st monad in haskell, backtracing, continuations, syb in haskell, typeclassopedia, oohaskell (http://homepages.cwi.nl/~ralf/OOHaskell/), template haskell, liskell, hasp

can we get rid of the complexity of existential types? lazy patterns? make things open data by using typeclasses with default types? get rid of the complexity of "boxed" vs. "unboxed" (i think not, but mb syntactic support, with unboxed as default ("laz" keyword? btw laziness does change semantics (its not just optimization), b/c of "bottom") (also, we need fast string hashtables/assoc arrays, how does python do that, is it "boxed"?)? get rid of the complexity of monads (or at least, of monad transformers)? simplify arrows, other quasistd libs, typeclassopedia

if you want to hide the monads, do you really want to start with haskell? mb ocaml is what i seek, mb oot unnecessary

can we simplify patterns? see all the weird stuff in http://www.haskell.org/tutorial/patterns.html

an "object class" is: typeclass with default type "object" is a value (whose type is its typeclass) "methods" are: syn sugas for functions on the default type that take an object of that type as first argument (which is hidden) (i.e. "self") "instance variables" are: entries in the association list inside the object (i.e. the value of the class type) "constructors" are: syntactic sugar, using the class's name, for a fn returning an object; this is better than using a data type literal because this way the actual data type (in haskell terms) can change "destructors" are: see http://eli.thegreenplace.net/2009/06/12/safely-using-destructors-in-python/. i think we shouldn't have destructors except for efficiency. use Python contexts http://www.python.org/dev/peps/pep-0343/

exception handling like python; see also Python contexts http://www.python.org/dev/peps/pep-0343/, java try/finally
see http://www.randomhacks.net/articles/2007/03/10/haskell-8-ways-to-report-errors , haskell exceptions
optional "throws" that does nothing (helpful for IDEs)
of course, everything has to be wrapped in some monad or another. i kinda like the "general" version of #6 in http://www.randomhacks.net/articles/2007/03/10/haskell-8-ways-to-report-errors
should examine how erlang and ruby do it, too
and how does a compiled language do this? C++, .NET (is this a language?), C#, F#, Mono (are the last 4 compiled?)
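for reference, the python-style combination mentioned above -- exceptions plus a PEP 343 context whose cleanup runs whether or not the body throws:

from contextlib import contextmanager

@contextmanager
def resource(name):
    print("open", name)
    try:
        yield name
    finally:
        print("close", name)    # runs even if the body raises

try:
    with resource("db") as r:
        raise ValueError("boom")
except ValueError as e:
    print("handled:", e)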

patterns a la haskell

vs haskell "haskell 4 dummies"/condecending? no, haskell for me! too verbose all those dummy vars having to annotate signatures which throw exceptions w/ monads too unreadable too much weird syntax (e.g. lazy patterns, non-prefix operators which may have precedence) yes, powerful, but means that you gotta learn all this before you can read someone else's code! this is why python wants "only one way to do it" if you want open data w/ typeclasses, must use a framework (syb? oohaskell?), or have lots of boilerplate too complex; even "gentle intro" isn't global namespace cluttered

namespaces: hate having types and typeclasses cluttering up the global namespace. mangle. allow "overriding" of "methods". if using objects, then the hidden self argument of methods, and default types, will disambiguate

logic programming to write type inference; even though will initially compile to haskell, ultimately want a concise "eval" written in itself

compiler API in language for eval

eval

scripting

should look into CLOS MOP, scheme macros (define-case -- what is it and why better than define-syntax?), ocaml (the "practical haskell"), self (small), io, javascript (prototypes), smalltalk (said to be beautiful), curry (haskell w/ logic programming), helium (simple haskell), Cayenne, SML modules (vs dependent types, or some restriction thereof?) (see also http://mlton.org/MLBasis). should read http://research.microsoft.com/en-us/um/people/simonpj/papers/history-of-haskell/index.htm. check clean, timber, other haskell derivatives. scala (and .NET) are said to support component programming.

modules? http://www.google.com/search?hl=en&client=iceweasel-a&rls=org.mozilla%3Aen-US%3Aunofficial&q=haskell+module+system+sml+power&aq=f&oq=&aqi=

  Cayenne wikipedia: "There is no special module system, because with dependent types records (products) are powerful enough to define modules."
  http://www.rubrication.net/2007/04/21/how-a-real-module-system-should-work/

ez FFI interface to libraries in interpreted languages, not just C interface

i think i like SML's modules, but would prefer for the compiler to construct the module defn for you. "internal" fns which don't go in the module are noted by the "hid" keyword (or just prefixed by "_", like in Python?)

ideally, the same syntax would be used for dealing with configure-time and run-time "components" and network "services" as for compile-time "classes" and "modules". REST-ful libraries, anyone? also must support WSGI layering

Cayenne dependent types look very useful and not hard to understand: http://en.wikipedia.org/wiki/Cayenne_%28programming_language%29

here's a good point about ambiguities in eq: "let/let rec problem is not a compiler optimization. It comes from people wanting to do two things with a similar syntax:
  let rec x = 0::x -- defines circular list
  let x = 0::x     -- defines list of 0::(old contents of x)
By comparison, in SML function definitions (fun f x = ...) have let rec and value definitions (let x = ...) have let semantics, and it's impossible to define circular values. So that's what you'd lose for getting rid of rec (unless you fixed the rest of the grammar at the same time).

" -- http://www.oreillynet.com/mac/blog/2006/03/haskell_vs_ocamlwhich_do_you_p.html#comment-27923

unicode

instead of "self", "me" is shorter (thx to http://en.wikipedia.org/wiki/Delegation_%28programming%29)

== is comparison operator (not =)

"o" is composition

+= or maybe even a general mechanism for X=

imports throughout file as if in beginning

all names relative to module (NOT global); cyclic imports illegal?

syntactic sugar for - in front of a number to make it negative; i.e. -x is a single token that compiles to (negate(x)) (in oot, you'd say (negate x))

nested comments

compiler: where is symbol X defined?

repr

for optional args? Lisp example from sds:

(with-open-file (f "foo" :direction :output) (write x :stream f :readable t :pretty t))

see prog const; allow any combo of (f x y) and (f(x,y)) and f x y syntaxes, and /f = infix f. div (or dv) = division

candidate two-letter reserved words when possible (then three letter); compiler expands(contracts):

sq (seq)
cd (cond)
if
df (?)
it (iterate?)
lp (loop?)
fl (foldl)
fr (foldr)
o (func composition)
me (self object)
ev (eval)
dl (del)
in (in, python set membership)
pr (print)
rpr (repr then print; also a "rpr flag" that makes pr act like rpr)
it (pronoun)???
x (pronoun)
y (pronoun)
z (pronoun)
t (true)
f (false)
n (none/nil/null)
i (position within a list)

slice
[] graph
[[]] graph, with subelements implicitly surrounded by [[]]
() parens
(()) castrate
" strquote
\ rightAssoc
' quote
` antiquote
@ listExpand
&& and ; eol
or
  1. single-line comment /. ./ multi-line comment 1 list comprehension (2) $< >$ regexp ^< >^ eval (?) ^ unshadow ^x unshadow x (instead of xx) ^ ^ twice unshadow x (instead of xxx) & (in graph constructors): lexical node parent .: get, from the container typeclass. in graphs, this follows the edge specified by the key ..: graph labeled node accessor {}: type annotation. can follow a value, or can come just after a ( or [ or $[ or [[ that is surrounding/constructing the value. _ throw away ___ any token starting with ___

convention: tokens starting with _ are private to their module and its friends; with __ are protocols; with ___ are language, incld. language protocols

in []s: id (not identity!) self

mu (locally mutable variable)
(but, better to just define as: any l-value becomes locally mutable; i.e.
  x=3
  x++
  print x
is like
  let x = 3 in
    x' = x + 1
    print x'
)
in that case... mu (globally mutable variable)
(if multiple gl contexts, see "beg gl" below, then mu VAR CONTEXT sets non-default context (am i evil if i have a default gl context wrapping every program??) )
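illustration of that desugaring in python (x1 is just a made-up fresh name): each apparent mutation of a convenience variable is equivalent to binding a new single-assignment name.

def imperative_style():
    x = 3
    x += 1          # looks like mutation of x...
    return x

def desugared_style():
    x = 3
    x1 = x + 1      # ...but is the same as binding a fresh name x1
    return x1

assert imperative_style() == desugared_style() == 4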

mb that mu should be "gl" instead, and reserve mu for object instance variables? or is that var (vr)?

mb (maybe (?))

mu: automatically infer mutable version of datastructures like hashes

python array slicing syntax; python list comprehensions

case insensitive (switch to turn off for compilation to other languages)

for metaprogrammy thingees:
  #number (even float!): footnote
  #sub: replacement macro
  #mac: macro
  #inc: include
  #beg ___
  #end ___: begin and end sections to have something metaprogrammy done to them
  #beg imp...#end imp: wrap contained fns with sq
  #beg gl NAME...#end gl NAME: wrap contained fns with state monad (semi-global variables)
  #beg strict...#end strict: all contained operations are strictified

metaprogrammy things cannot apply outside your file (module?), unless you give files containing them to the compiler/interpreter as special arguments

gotos that can target footnotes (within one fn?)?

cannot define new infix operators (use /), or new operator precedences; that power is something that makes haskell hard to read

i guess when there are multiple =s with the same lvalue that could be a sign that we want convenience mutation or mutable object setter, rather than function redefinition?

i guess that if we are in an imperative loop, then the mutable variable should persist through the different loop iterations, rather than being merely lexical sugar.


Perl has "there's more than one way to do it", Python "only one obvious way to do it". The former is motivated by natural language, and power, the latter by readability (and simplicity?). Perl has so many special syntaxes that if you don't learn all this stuff, you can't read other people's code. Haskell has a similar problem, but here it seems to stem from powerful hof libraries/typeclasses, and from things about its syntax (and basic libraries).

Oot likes power and readability and simplicity; so "only one obvious way to do it" is good, but not to the extent that Python does it (Python's decision to not do tail-call optimization b/c we don't want people doing recursion, b/c there's already imperative iteration, seems too restrictive to me).


typeclass and class names start with capital letters

multiple inheritance

not a different namespace by type; altho object methods r in obj namespace (so mb it should be "%w[hi there].map(pr)", instead of "map %w[hi there] pr". hmm, but this makes it hard to partially apply the pr to map). hmm, mb i take it back. fns live in module namespaces, not in objects.

each file is associated with exactly one module (by default, a new module is implicitly created for each file, but this doesn't have to be; you can put many files in one module). files in a module share a namespace, and you can do imports and metaprogrammy things in the module decl that will apply to all files in the module (but ea file can do its own imports and metaprogramming too, to allow for that quick test hack -- u are encouraged to move the stuff to the module decl later tho)



arc: power, power, conciseness -- by macros and a drop of syntax
python: readability, power, readability -- by syntax and standard idioms
haskell: power, conciseness, safety/purity -- by static lazy purely functional semantics and MLish syntax
perl: conciseness, power, naturalness -- by syntax

oot: power, readability, conciseness -- by haskell semantics, syntax, and transformation of syntax for macros

cookbook comparison

. is "get" operator for containers and fields, not []: "list.3" is abbrev for "get list 3"

wrapper typeclasses: "for every type that is in the Num typeclass, it is now also in the boolean typeclass; we implement the boolean interface thus: "bool x = (x != 0)""

   note: haskell already does this: if JSON is a typeclass, u can write:

" instance (JSON a) => JSON [a] where toJValue = undefined fromJValue = undefined " ( http://book.realworldhaskell.org/read/using-typeclasses.html )

so now we don't have to do implicit typecasting, since all exposed functionality is in the typeclasses, anyway; we just keep track of the sets of types in each typeclass, and for each pair (type, typeclass), an "implication tree" that shows how that type was deduced to be in that class (i.e. what if types that were Num were wrapped into typeclass Q, and types that were both Num and Q(t) were wrapped into typeclass W, then the tree would show:

Num(t) (instance in file A)
Num(t) -> Bool(t) (instance in file B)
Num(t) -> Q(t) (instance in file C)
Num(t) and Q(t) -> W(t) (instance in file D)

note: this is inference on Horn clauses, as each typeclass wrap declaration is a Horn clause

this tree tells the compiler how to actually do the various operations

but what if you add a new module and now it provides an alternate wrapper from Num to Q? Now there are two paths from Num to W. "Refuse the temptation to guess" and issue a warning that neither path has been chosen. If the W functionality is used, then issue a compilation error. The compiler should make it easy for the user (or their IDE) to find out how this happened; mb by default have a "compile log" that lets the compiler say, "btw, you used to use path Num->Bool (file B), Num->Q (file C), Num and Q -> W (file D), but then you added file E, which conflicts with file C for Num->Q" (or, at least, just point out that C and E conflict for Num->Q, which is what breaks the path to W).
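a hypothetical python sketch of that bookkeeping: wrapper declarations as Horn clauses, a search for every derivation ("implication tree") of a target typeclass, and a refusal to guess when adding file E creates a second path (clause data and function names are invented for illustration; the file names follow the example above):

from itertools import product

# clause: (frozenset of premise classes, conclusion class, source file)
CLAUSES = [
    (frozenset(),             "Num",  "file A"),
    (frozenset({"Num"}),      "Bool", "file B"),
    (frozenset({"Num"}),      "Q",    "file C"),
    (frozenset({"Num", "Q"}), "W",    "file D"),
]

def derivations(goal, clauses):
    # every way to derive `goal`, as a list of the source files used
    paths = []
    for premises, conclusion, src in clauses:
        if conclusion != goal:
            continue
        sub = [derivations(p, clauses) for p in premises]
        for combo in product(*sub):          # one derivation per premise
            paths.append(sum(combo, []) + [src])
    return paths

print(derivations("W", CLAUSES))             # exactly one path to W

# adding file E (a second Num -> Q wrapper) makes W ambiguous:
paths = derivations("W", CLAUSES + [(frozenset({"Num"}), "Q", "file E")])
if len(paths) > 1:
    print("refuse the temptation to guess:", paths)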

haskell uses :: for types, we may as well use :. but how to indicate roles, properties, and keyword args? mb . with no prefix, i.e.:

f x .keyword1=y

but that's a waste b/c we have two extra chars . and =. mb just = with no space is sufficient?

f x keyword1=y

if u have optional todo

but if : is type, then what is list slicing? mb {} for type?

suggestion (following python http://docs.python.org/tutorial/controlflow.html#intermezzo-coding-style): 4-space indentation, and no tabs.

; for virtual EOL, {} for virtual EOL and indentation

what's the diff in the parse tree b/w {} and ()? nothing? then just use ()

some constructs (basically, those which in ruby take a block, or some which in Perl set $_, although that's also "it") introduce an implicit anonymous function definition around one of their arguments (the "block" argument) which binds x, or x and y, or x and y and z. so x,y,z are reserved words. example:

instead of [2*x for x in range(101) if x**2 > 3], x is assumed, so

[2*x for range(101) if x**2 > 3]

and instead of

search_engines = %w[Google Yahoo MSN].map do |engine|
  "http://www." + engine.downcase + ".com"
end

in ruby, or the equiv with my syntax and {}s,

map (fn engine {"http://www." + engine.downcase + ".com"}) [Google Yahoo MSN]

or the equiv with my syntax and ()s,

map (fn engine ("http://www." + engine.downcase + ".com")) [Google Yahoo MSN]

the "fn engine" becomes an implicit "fn x", so

map ("http://www." + x.downcase + ".com") [Google Yahoo MSN]

how to write this in the fn defn?

so far we have 3 semantics for =s, depending on context.

if not surrounded by spaces, left-hand-side is a property and right hand side is a value: property=value

if surrounded by spaces and the lhs is of the form ().() (or x.y, or x.(), or ().y), then x.y = z is short for "set x y z" (and if u r setting a non-mutable object, the set will return the new obj, in which case it is lexically bound as a convenience variable update to x)

if surrounded by spaces and the lhs is otherwise, then it is a fn defn, i.e. "f x y = x*y" is short for "defn f = fn (x y) (x*y)" (if "defn" even exists.. = lisp's "setf"? anyway, = basically will be used for defn or setf, so this use of =s is actually atomic, mb...)

  #char unicode charset encoding (must be at top of file)
  #jver oot version
  #ver module version

mb should use # for comments, like in ruby, and find another character for metaprogrammy, like mb %

nested, (possibly) multiline comment syntax w/o chording: how about /. to open, ./ to close

use properties on typeclasses to help the compiler pick the default type; to say that you want a container implementation with fast lookup (like an array or dict):

x{contain lookup=fast}

looks like haskell doesn't even have (user-definable) defaults:

http://hackage.haskell.org/trac/haskell-prime/wiki/Defaulting

docstrings and properties on fns (is this like javabeans?):

"""f does something cool""" functionColor=red, temp=cool f x y

of course, properties and docstrings can be reflectively accessed at runtime (although, perhaps not changed??). but mb their main use is at compile time, for IDEs and macros.

%% means "execute this at compile time". if this is line oriented, we should have an open/closer (%/, /%)?

str fns, literals take unicode. to deal with bytes, use bytestr, encode, decode

attributes: .. operator. takes an object on the left and a string on the right -- the value of the string is an "attribute name". can set using = (as with . and "get", maps to "getAttr" instead), or can get just by "..". each attribute may be a different type. by default, getAttr and setAttr call a generic routine that just stores the value, but instead u can use "handleAttr obj attrName get &set &remove" to tell getAttr and setAttr to call your functions for a specific attribute. behind the scenes, each object is actually a subclass of what u think it is, with a typed struct which is used by the generic attribute accessors to store attribute values. the "add" fn is used to add an attribute value "dynamically" (although actually this subclasses and adds statically behind the scenes). "add" only takes constant strings, to ensure that we can set the attribute field types at compile time. ????

	   mb should just use graphs, see below
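a hypothetical python sketch of the generic-store-plus-per-attribute-handlers idea (Obj and handle_attr are invented names): by default attribute values go into a plain dict, but a registered getter/setter pair takes over a specific attribute.

class Obj:
    def __init__(self):
        object.__setattr__(self, "_attrs", {})     # generic backing store
        object.__setattr__(self, "_handlers", {})  # attr name -> (getter, setter)

    def handle_attr(self, name, getter, setter):
        self._handlers[name] = (getter, setter)

    def __getattr__(self, name):                   # called only on misses
        if name in self._handlers:
            return self._handlers[name][0](self)
        return self._attrs[name]

    def __setattr__(self, name, value):
        if name in self._handlers:
            self._handlers[name][1](self, value)
        else:
            self._attrs[name] = value

o = Obj()
o.color = "red"                                    # generic route
o.handle_attr("size",
              getter=lambda s: s._attrs.get("size", 0) * 2,
              setter=lambda s, v: s._attrs.__setitem__("size", v))
o.size = 5
assert o.color == "red" and o.size == 10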

mb attr lookup should be symbol by default, not string, since we're making everything all static anyways. have a string conversion option like python does.

objects like "2" and "5" can be objects, but "final" objects, meaning they're singletons and u can't change their attributes (perphaps "final" is a pseudo-function that u run on them when ur done setting them up)

typeclasses ("classes") should be encouraged to have default instances. if they don't, mb should be declared (or at least labeled by the compiler) as "abstract".

default, keyword, and unlimited positional argument handling, as well as list and dict unpacking, as in python: http://docs.python.org/tutorial/controlflow.html#more-on-defining-functions
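for reference, the python behaviour being pointed at -- defaults, keyword-only args, unlimited positionals, and list/dict unpacking at the call site:

def connect(host, port=80, *extra, timeout=30, **options):
    return (host, port, extra, timeout, options)

args = ["example.org", 8080, "a", "b"]
opts = {"timeout": 5, "retries": 3}
assert connect(*args, **opts) == ("example.org", 8080, ("a", "b"), 5, {"retries": 3})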

don't need rfn like in arc ( http://www.paulgraham.com/arcll1.html ) b/c haskell's fn defn can be used directly within fn scope

xx, yy, xxx, yyy, zz, zzz can be used to refer to the enclosing scope's x, y, xx, etc when they were implicitly bound. actually, use ^x, etc

mb allow macros to un-hygenically refer to x,y,z,xx,etc,it, via ^x, ^j, etc

mb / instead of --. / is tall and asymmetric. find something else for infixing

@x means "insert the contents of list x right here"

note: i'm trying to make the most common punctuation unchorded

notation for "rooted" graphs (such that an edge can point to another edge; perhaps like the "homogeneous viewpoint" in category theory, or topic graphs):

[] surrounds root node (surrounding []s can be omitted when ,s are present inside) (can be implicitly closed by indentation)
[[]] like [], but subelements implicitly surrounded by [[]], and spaces separate (except for things right next to =s)
,s separate elements
elements are nodes that parent node points to
"label = node" is notation for local edge labels (don't have to be unique)
"node ^^ id" is notation to reify an edge and give the corresponding node id "id"
[id
  value1, value2, etc] is notation for a root node with a node id (global label) "id"

reserved labels: id, n ("n" is implicit label for unlabeled edges) (mb "v" would be better, for "value"???)
comparison operator ^= means "do the lhs and the rhs point at the same node in the same graph"
$[] means that tokens will be interpreted as strings, not variable names, except for tokens prefixed by ` ; also, like [[]], u can separate by spaces
within a graph construction, "self" as a value refers to the current node, as if it were a node id
& is the "parent" operator. & self refers to the (lexical) parent node (since this is a graph, nodes may have multiple actual parents). & & self refers to the grandparent
within a nested rooted graph construction, "root" as a node id refers to the root
within a graph construction, the graph's nodes' ids are bound to symbols, i.e. if some node is called "bobsNode" within the constructor "thisGraph = [ ..." then "bobsNode" is short for thisGraph.bobsNode. if this shadows some other symbol named bobsNode, use the unshadow operator ("^") to get that one: ^bobsNode
edges can have multiple labels, denoted by a list: [[label1, label2] = value] (or, if u prefer, [(label1, label2) = value]) unless the type is tagged "singleLabel"
to add a label to an edge later, use fn addLabel or somesuch
each edge is implicitly also labeled by its position within the edge list (which may be changed later) unless the type is tagged "unordered"
multiple edges cannot share the same label unless the type is tagged "nonuniqLabel"; this is b/c ow the "get" operator's return type must depend on whether the requested edge is unique in each particular case, b/c it usually returns the value itself, but if there are multiple values, it should return all of them in a list.

note: local node labels not needed; global edge labels provided by reified edge nodes

example: a single node with value "10":
ex = [10]
ex.n == 10

example: a single node that points to itself:
ex = [s
  s]
ex = [self]
ex.n ^= ex

example: a list

ex = ["apple","banana","cherry"] ex = [["apple" "banana" "cherry"]] ex = $[apple banana cherry] fruit2 = "banana"; ex = $[apple `fruit2 cherry] fruit23 = $[banana cherry]; ex = $[apple `@fruit23] ex = "apple", "banana", "cherry" ex.0 == "apple" ex.1:2 = ["banana" "cherry"] ex2 = ["grapefruit", @ex] ex2 = ["grapefruit","apple","banana","cherry"]

example: an association table

ex = [[
  apple = red
  banana = yellow
  cherry = red
ex = assoc $[[apple red] [banana yellow] [cherry red]]
ex = assoc $[apple red], $[banana yellow], $[cherry red]
ex = assoc $[
  apple red
  banana yellow
  cherry red
ex = assoc [
  "apple", "red"
  "banana", "yellow"
  "cherry", "red"
ex = assoc [[
  "apple" "red"
  "banana" "yellow"
  "cherry" "red"

/. note: assoc takes a list of nodes of form [key value] creates one node whose labels are the keys and where the associated values are the values ./
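a minimal python sketch of that assoc behaviour (Node and assoc are invented for illustration): each [key value] pair becomes a labeled edge on a single node.

class Node:
    def __init__(self, edges=None):
        self.edges = dict(edges or {})   # edge label -> target value/node
    def get(self, label):                # the "." operator on graph nodes
        return self.edges[label]

def assoc(pairs):
    return Node({key: value for key, value in pairs})

ex = assoc([["apple", "red"], ["banana", "yellow"], ["cherry", "red"]])
assert ex.get("apple") == "red"          # i.e. ex."apple" == "red"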

/. note: if you wanted some 1-ary function "f" instead of 0-ary "apple" inside [[]], you'd put it in double parens to "castrate" it: [ ((f)) "red" ...

./

/. mb no commas OR []s are even needed when things are 0-ary?

"apple" "banana"

naw, too hard to read if u don't know the arities. use [[]] if u want spaces. ./

ex."apple" == "red" 3