notes-computer-jasper-jasperNotes8

orthecreedence 3 days ago


Common Lisp guy gonna chime in here...having a REPL that integrates with your editor (vim/slimv in my case) is an intense improvement on any other language I've ever used. With C you edit, leave, `make`, run. With PHP you edit, open browser, F5, run...same with JS.

With lisp, I edit, eval (simple command in my editor), see result. If I'm building a game or a long-running app, I can redefine functions while it's running without doing a full recompile and having to try and get back to the state the app was in when I wanted to test my change.

So you get the ability to live-code while the app is running. Once you get used to being able to do this, leaving your editor and opening a shell to test your app seems like a barbaric exercise.

So a good REPL (lisp or not, doesn't matter) can make the experience a lot more interactive and experimental than your average write -> save -> compile -> run cycle.


gsg 3 days ago


Unfortunately the OCaml toplevel pales in comparison to CL environments like slime in terms of visibility and debuggability. Redefinition is also off the table, as OCaml is a very static language. (You can rebind the same name, which can be helpful in exploratory programming but is not at all the same thing.)

It remains an extremely useful tool though.


--

"I'm not arguing that you can't get work done in these languages. You can get very solid work done in all of them, with enough persistence. My basic impetus for blogging about this stuff stems from the desire to have my cake and eat it at work. I want a language with Python's succinctness, Lisp's extensibility, C's performance, Erlang's parallelism, Java's tools, Io's orthogonality, and apparently Qi's type system. :-) And more besides." -- http://steve-yegge.blogspot.com/2006/04/lisp-is-not-acceptable-lisp.html

--

" My concept of the system expanded from that of a hierarchical notebook to a complete programming system. Yet I knew that it could not satisfy me if it contained any parts not modifiable from within itself. Emacs would stand a chance, were it not for its C-language core. Then I discovered that system-level reflection is not an unreasonable demand. I found out that instant documentation and source access for any object known to the user/programmer is a reasonable thing to expect. That it is possible to halt, repair, and resume a running program. And, depressingly, that one could do these things on 1980s vintage hardware but not on modern machines.My concept of the system expanded from that of a hierarchical notebook to a complete programming system. Yet I knew that it could not satisfy me if it contained any parts not modifiable from within itself. Emacs would stand a chance, were it not for its C-language core. Then I discovered that system-level reflection is not an unreasonable demand. I found out that instant documentation and source access for any object known to the user/programmer is a reasonable thing to expect. That it is possible to halt, repair, and resume a running program. And, depressingly, that one could do these things on 1980s vintage hardware but not on modern machines. " -- http://www.loper-os.org/?p=8

---

https://groups.google.com/forum/?hl=en#!topic/comp.lang.lisp/XpvUwF2xKbk[101-125-false]

---

Design of a LISP-based microprocessor

http://dl.acm.org/citation.cfm?id=359031&coll=portal&dl=ACM

---

" Kent M Pitman 2/20/99 More LispOS? talk (was Re: Lisp subverts the world (was Re: ints vs fixnums (was Re: Java ... (was Re: ... (was Re: ...))))) cba...@2xtreme.net (Christopher R. Barry) writes:

> Within Garnet you can move your mouse over any object in your GUI and
> hit F1 to bring up the Inspector which gives you information about
> every single slot value in the object, displays its "is-a" hierarchy,
> and shows you what things its formulas depend on.

Within DW, you can move your mouse over any object or subobject that has been displayed, whether or not it was part of any GUI. You get this virtually for free and have to work hard to suppress it if you don't want it. But it doesn't require any special programming.

One of the most common ways to get a foothold in Genera for debugging something is to (1) find a place where the thing you want to use appears visually, (2) click Super-Left to get the object into your hands for read-eval-print, inspection, etc, (3) use (ed (type-of *)) to find its source reliably and with no foreknowledge of what the type is, who wrote the program, what their file conventions were, who loaded it, or any of myriad other things that other systems make me do.

> Under the X windowing system there is this standard tool "editres"
> that lets you click on any client and bring up its class hierarchy
> tree and edit resources interactively.

Not resources. Objects. All objects. Not just the heavyweight ones. Lists. Integers. I somehow don't believe you when you say this is the same as what Genera does.

The usual thing DW enables is the debugging of things that the person did NOT make part of the standard interface. Pointing to a UI tool and saying it allows you to plan for structured debugging of something is missing the point.

Or maybe you're saying that Garnet allows me to cross the boundary into the internals of systems. Maybe I'm missing out.

When you can make the claim that if you teach me a few paragraphs worth of stuff, I will have all of the information I need in order to bootstrap myself into a complete understanding of the entire operating system and all of its conventions, I'll believe you Garnet and X are as powerful as Genera and Dynamic Windows. Until then, I'll continue to believe they are different. Incidentally, I'm dying to know that there is a tool that will do all that Genera could do only for stock hardware because the stack of books I need to buy in order to be equivalently competent with other systems is daunting and I could really do with the savings in time, energy, and money of learning things about standard systems in the way I've been doing...

I don't even find Emacs itself to be equivalently self-documenting and easy to get around in, much less the many systems that Emacs touches.

> I know these things aren't as cool as the LispM's equivalent, but is
> that it? It sounds like you have a few nice applications that have
> functionality equivalent to modern environments but because of their
> tight integration you can do a few things that you otherwise couldn't.

Dynamic Windows offered a tight integration with the system. You alleged that all that it offered is captured by Emacs. The burden is on you to show that it is. Defining away the problem does not do that.

You haven't said how Emacs lets me "discover" systems. You've only said it provides tools that enable those who already know about a system, and Garnet/X tools that let me inspect the formally presented UI of a system. That's not the same.

> > But for example, there's no assurance when I read in a source file
> > that it's the same source file that I loaded. The lisp machine
> > keeps meticulous records not only of which source files are loaded
> > and from where but which functions have been patched by which files.
> > That means I can't make a mistake and use the wrong tags file or the
> > wrong version of files backing up a tags file.
>
> There are modern IDEs available for popular languages with powerful
> project file mechanisms with very similar capability. Not as cool, but
> still powerful and usable.

This started out by a discussion of what's cool. You said Emacs was cool and nothing else was needed. Now you're qualifying your remarks by saying everything is not as cool, and dodging the specifics I laid out.

> > There are myriad subtle ways in which Emacs is only a pale shadow of
> > Zmacs.
>
> Other than some of the cool functionality that arises within Zmacs
> from its super-tight integration with the rest of the environment,
> what can Zmacs itself specifically do that Emacs can't?

Paraphrase: "Other than the fact that the system you're talking about involved a lot of people investing time and thought on various issues that have been lost, can you explain to me why a system in which that thought has been lost is deficient?" Uh, ... no. I can't. This was about lost features. This was about your claim that nothing had been lost. If you leave out the stuff that's lost, you're right, there's no difference.

Look, I used to literally sit around in my office at MIT years ago and fuss at people who said Lisp Machines were everything. (I used and liked Lisp Machines but I myself didn't think they were what they were because of the hardware--they were what they were because of the software and the ideas behind them.) I used to ask people "What's an impossible thing to do? I'm looking for something to do this afternoon in Teco that people say can only do on Lisp Machines." People said Zmail. I wrote the approximate equivalent in Teco--filters, and all that. They wanted a filter menu. I wrote that in Teco. They wanted mouse handling. (That was tough because PDP10's didn't have mice, but I used my imagination a little and arranged a mouse protocol from Lisp Machines so you could telnet to the teco-based Emacs and click on my filter menus.) They wanted Zmail init files. I wrote a Lisp compiler in Teco so that I could use people's Zmail init files unmodified. It was about 20K words of Teco code, btw. Code was smaller back then... sigh.

Anyway, I *know* what it is to look at functionality and duplicate it elsewhere. It CAN be done. I am not saying it can't. What I'm saying is that it has not been done, and it's a crying shame. Few people even know there ever WAS a lisp machine, and those who do are mostly not rich enough personally to invest the time to duplicate what was there. Many people spent a big chunk of their lives investing in this dream and it didn't pan out quite as we wish. Ok. Sometimes other events win out--not always even for the right reasons. Or at least for the reasons you wish. But don't add insult to injury to say that the losers in battles such as these had nothing to offer.

Common Lisp beat out Interlisp, and maybe for good reasons but it doesn't mean Interlisp had nothing to offer--some very good ideas got lost in the shuffle and I don't pretend that Common Lisp just obviously had a better way. Java is going to beat out Smalltalk perhaps, but that doesn't mean Java is better than Smalltalk. We owe it to the losers in these little skirmishes to make sure that, if nothing else, the good ideas are not lost along with the framework. And we do not accomplish that by defining that there was nothing lost. That's both callous to those who worked hard on these other things and short-sighted to the future, which might one day care about the things that got lost.

There are still Lisp Machines around. If you want to opine on them with authority, get one and use it. There is no substitute for first-hand data in situations like this.

> > > And XEmacs can
> > > embed cool color graphics and glyphs/widgets into the frames too.
> > > Is there anything a programmer cannot do elegantly and efficiently
> > > within Emacs?
> >
> > In principle? Well, it's all software. You can make it be whatever
> > you want, I guess. But in practice, if you're asserting that Emacs
> > duplicates the features of Genera's editor, I think you're kidding
> > yourself.
>
> This functionality to click on an instance of any object seems really
> cool, and indeed Emacs as it is currently could never do this. But is
> that all? And again, this feature doesn't seem specific to Zmacs but
> rather Genera itself within which Zmacs is tightly integrated.

No. It's not all. I could enumerate any of a zillion things the Lisp Machine editor system has that are not in Emacs. But what would be the point? You'd see any finitely enumerable list, tell me it was all stuff that could be done, and then say that my point was moot.

Here are a few, not intended to be complete, but to give you a spirit of the degree of "precision" in Zmacs commands that distinguishes them from Emacs commands:

   This is an extension to Tags Multiple Query Replace (which does a
   multiple-strings-at-once Query Replace over a Tags Table; and,
   incidentally, the Tags Tables don't have to be files--the Lisp
   Machine just makes them up on the fly from spaces of buffers, from
   system definitions, etc. on user request).  It allows you to put
   pairs of elements and their replacements in a buffer and have
   all replaced in parallel.

   As with all operations of this kind (including Tags operations and
   other mapping operations like "Edit Compiler Warnings"), this creates
   a "possibilities buffer" which is a physical manifestation of keeping
   your finger on where you were in the middle of a complex operation
   so that if you see something else you want to do while you are
   doing the replacement, you can suspend the operation and resume it
   later perhaps after doing other replacements or edits.  When editing
   the ANSI CL spec, something I refused to do on anything but a Lisp
   Machine, I often had dozens of these buffers stacked up in simultaneous
   use and was successfully able to resume them to make sure all completed
   while allowing the system to accommodate my "focus".

   This command allows you to run a source comparison program (which
   itself is just presentationally way better than Unix diff or
   emacs compare-windows).  There was a public version of a source
   comparison program written by someone a while back that is as good
   but is in CL and isn't integrated into Emacs.  Alas.  But in addition
   to the presentational issues, which are comparatively minor, the real
   feature was how this could be called.  It prompts for two things
   describing what to compare, including "buffer", "file", "region",
   "definition", "top of kill ring" and "second in kill ring".  You type
   a single character (B/F/R/D/c-Y/m-Y) for each such prompt.  It's
   completely essential for comparing files.  What I can't comprehend
   is why no one thinks this is essential in a non-versioned file system.
   It's important enough in a versioned file system but in a non-versioned
   system one is always finding "probable copies" of files all over the place
   and trying to understand the differences.  Ready access to program
   controlled source compare is central to everything I do and basically
   wholly absent on stock hardware.  Another example is when you are saving
   a file on the lisp machine and you find it's been written by someone else;
   in emacs the query just tells you of a problem--in zmacs it offers to
   compare the files before making you decide whether to continue saving.
   It's not the feature itself, though that's important, but
   it's the attention to detail at this fine-grained level throughout the
   system which is why the lisp machine is so fondly remembered.

And that's my overall point. It's not just about what's missing. It's about the lack of interest in those who have created Emacs in supporting those other things. I still just use my Lisp Machine. It's right here next to my PC, and on a regular basis I just move to the other chair and edit in Zmacs. 6 years after heavy-duty development ceased on Zmacs, it still whomps the competition and makes me not care that processor speeds have gone up a zillionfold in the meantime. Others will tell you the same.

I WISH I could use features like that on a fast processor. That would be great. But it isn't likely to happen soon.

You can say the burden is on us old-timers to tell you what's missing or we shouldn't be whining. But I don't see it that way. I see the burden is on the victors, who have the resources and who claim their way is better, to show us that they won for good reason. We did our part for the cause. We may or may not continue to try to do things to assure the ideas aren't lost.

I spend a lot of my time trying to make sure old ideas make it onto the books and don't get lost. But I'm just one person. It takes more than one person. And the task does not begin by dismissing the need to do the job.

> > Also, I'm not suggesting this was a property of the Lisp Machine or of
> > special hardware. Nothing the LispM did couldn't be implemented in
> > standard systems if people were of a mind to do it. But they haven't
> > been, and it's historical revisionism to deny that stuff has been lost.
>
> Problem is, only people that have access to them know specifically
> what makes them cool. It's nice to hear from you what some of this
> functionality really is, but it seems that most of it is duplicated at
> least in part by some modern tools/environments.

You can order doc sets ... people give them away on this and other lists periodically.

I wish you luck in this regard. These matters are syntactically small and I don't really care a lot about them at the micro level. But agreement is important and anything that unifies the Lisp communities is good. "?" is a bad choice of character because it frustrates Scheme programmers who want to use "?" instead of "P" for the end of predicate names. Even if one isn't going for Scheme compatibility, going for things that don't create gaping divides between the communities is good. "

---

http://en.wikipedia.org/wiki/C-element

--

http://webintents.org/

--

" Overview Io is a dynamic prototype-based programming language. The ideas in Io are mostly inspired by Smalltalk[1] (all values are objects), Self[2] (prototype-based), NewtonScript?[3] (differential inheritance), Act1[4] (actors and futures for concurrency), Lisp[5] (code is a runtime inspectable / modifiable tree) and Lua[6] (small, embeddable). Perspective The focus of programming language research for the last thirty years has been to combine the expressive power of high level languages like Smalltalk and the performance of low level language like C with little attention paid to advancing expressive power itself. The result has been a series of languages which are neither as fast as C or as expressive as Smalltalk. Io's purpose is to refocus attention on expressiveness by exploring higher level dynamic programming features with greater levels of runtime flexibility and simplified programming syntax and semantics.

In Io, all values are objects (of which, anything can change at runtime, including slots, methods and inheritance), all code is made up of expressions (which are runtime inspectable and modifiable) and all expressions are made up of dynamic message sends (including assignment and control structures). Execution contexts themselves are objects and activatable objects such as methods/blocks and functions are unified into blocks with assignable scope. Concurrency is made more easily manageable through actors and implemented using coroutines for scalability. Goals To be a language that is:

simple

    conceptually simple and consistent
    easily embedded and extended 

powerful

    highly dynamic and introspective
    highly concurrent (via coroutines and async i/o) 

practical

    fast enough
    multi-platform
    unrestrictive BSD/MIT license
    comprehensive standard packages in distro "

" overview

Io is a prototype-based programming language inspired by Smalltalk (all values are objects, all messages are dynamic), Self (prototype-based), NewtonScript (differential inheritance), Act1 (actors and futures for concurrency), LISP (code is a runtime inspectable/modifiable tree) and Lua (small, embeddable).

features

    BSD license
    small vm (~10K semicolons)
    multi-state (multiple VMs in same process)
    incremental collector, weak links
    actor-based concurrency, coroutines
    64bit clean C99 implementation
    embeddable, exceptions, unicode "

---

perhaps the reason that programming small scripts for numerical exploration demands less safety than normal programs is: (a) the person running the script is also a programmer, and in fact, the same programmer who wrote the script; (b) the control flow only 'leaves your hands' briefly (e.g. only a small number of lines of control are executed each time before returning control to you at the interpreter prompt); (c) input and output data is not transformed via presentation layers in between touching your hands and eyes, and the core processing.

--

these notes for gasket on why i still use a wiki for my website instead of a static site generated with e.g. jekyll might inspire something, particularly the way in which tags as embedded links within content of a certain matchable form are more convenient than metadata separate from content:

(why is my website a locked wiki rather than a static site?

a) history, before the onset of wiki-spam i wanted it to be an open wiki. With Gasket, i hope to achieve that.

b) lightweight markup

c) RecentChanges

d) backlinks. simple tagging system via CategoryTagName w/o the need for 'metadata'.

e) simple workflow, no 'metadata'

http://jalada.co.uk/2011/01/30/back-to-wordpress.html complains about Jekyll workflow: " Jekyll was nice, but the barrier to entry for posting was far too high. I’d fire up TextMate, copy some meta data from another post, write the post (in Markdown), upload images manually if I wanted them, save the post with the right filename (I got this wrong a lot), push to Github, SSH to my server, pull from Github, tweet about the new post. That’s a lot of work which meant ideas for blog posts never went anywhere. "

)

--

in fact, that may be one of the reasons that HTML is so convenient, too; that links are embedded within the content rather than being in a separate metadata section.

this may be a lesson for Jasper, somehow. perhaps one that has already been learned, by way of Views.

--

"

Functions mapping sets to truth values
    (singleton? X)   Returns true iff the set X has exactly one element.
    (doubleton? X)   Returns true iff the set X has exactly two elements.
    (tripleton? X)   Returns true iff the set X has exactly three elements.

Functions on sets
    (set-difference X Y)   Returns the set that results from removing Y from X.
    (union X Y)   Returns the union of sets X and Y.
    (intersection X Y)   Returns the intersect of sets X and Y.
    (select X)   Returns a set containing a single element from X.

Logical functions
    (and P Q)   Returns true if P and Q are both true.
    (or P Q)   Returns true if either P or Q is true.
    (not P)   Returns true iff P is false.
    (if P X Y)   Returns X iff P is true, Y otherwise.

Functions on the counting routine
    (next W)   Returns the word after W in the counting routine.
    (prev W)   Returns the word before W in the counting routine.
    (equal-word? W V)   Returns true if W and V are the same word.

Recursion
    (L S)   Returns the result of evaluating the entire current lambda expression S.

Table 2.1: Primitive operations allowed in the LOT. All possible compositions of these primitives are valid hypotheses for the model. "

http://colala.bcs.rochester.edu/papers/piantadosi_thesis.pdf pdf page 32

--

http://en.wikipedia.org/wiki/Live_coding

--

https://github.com/petkaantonov/bluebird/wiki/Optimization-killers

--

" C++ POD Member Handling Email ThisBlogThis?!Share to TwitterShare? to FacebookShare? to Pinterest

I always mess up the initialization of plain old data fields in C++. Always. Maybe by writing it up, I'll finally get it right.

Plain Old Data is essentially anything a regular C compiler could compile. That is:

    integer and floating point numbers (including bool, though it isn't in C)
    enums
    pointers, including pointers to objects and pointers to functions
    some aggregate data structures (structs, unions, and classes)

A struct, union, or class is treated as plain old data if it has only the default constructor and destructor, has no protected or private member variables, does not inherit from a base class, and has no virtual functions. I suspect most C++ programmers have an intuitive feel for when a class behaves like an object and when it's just a collection of data. That intuition is pretty good in this case.

The default constructor for plain old data leaves it uninitialized. An explicit constructor sets it to zero:

class Foo { public: Foo() {} int a_; };

    Result: a_ is uninitialized.

class Foo { public: Foo() : a_() {} int a_; };

    Result: the member corresponding to a_ is zeroed. Were it a structure, the entire thing would be zero.

People are often confused by the first point, that member POD fields are left uninitialized unless specifically listed in the initializer list. This is not the same as for member objects, which call the default constructor. Making this even more confusing, when a process starts up any pages it gets from the OS will be zeroed out to prevent information leakage. So if you look at the first few objects allocated, there is a better than average chance that all of the member variables will be zero. Unfortunately once the process has run for a while and dirtied some of its own pages, it will start getting objects where the POD variables contain junk.

Struct initializers

Putting a POD struct in the initializer list results in zeroing the struct. If the struct needs to contain non-zero data, C++0x adds a useful capability:

struct bar { int y; int z; };

class Foo { public: Foo() : b_({1, 2}) {} struct bar b_; };

Recent versions of gcc implement this handling, though a warning will be issued unless the -std=c++0x or -std=gnu++0x command line flag is given. " -- http://codingrelic.geekhold.com/2011/01/c-pod-member-handling.html
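
to make the quoted rules concrete, here's a minimal runnable C++ sketch (mine, not from the post; the class and member names are made up):

#include <cstdio>

struct Pair { int y; int z; };    // POD: no ctors, no virtuals, all public

class NoInit {
public:
    NoInit() {}                   // a_ missing from the initializer list:
    int a_;                       // left uninitialized (indeterminate value)
};

class ZeroInit {
public:
    ZeroInit() : a_(), b_() {}    // empty () value-initializes:
    int a_;                       // a_ becomes 0,
    Pair b_;                      // and the whole POD struct is zeroed
};

int main() {
    NoInit x;                     // reading x.a_ here would be undefined behavior
    (void)x;
    ZeroInit y;
    std::printf("%d %d %d\n", y.a_, y.b_.y, y.b_.z);  // prints: 0 0 0
    return 0;
}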

--

" x86 vs ARM: Active Power

It requires power to switch a CMOS transistor 0->1 or 1->0, so one way to reduce power consumption is to have fewer transistors and to switch them at a lower frequency. x86 is at a disadvantage here compared to ARM, which Intel and AMD's design teams have to cover with extra work and cleverness. The vagaries of the x86 instruction set burden it with hardware logic which ARM does not require.

    Since the Pentium Pro, Intel has decoded complex x86 instructions down to simpler micro-ops for execution. AMD uses a similar technique. This instruction decode logic is active whenever new opcodes are fetched from RAM. ARM has no need for this logic, as even its alternate Thumb encoding is a relatively straightforward mapping to regular ARM instructions.
    x86_32 exposes only a few registers to the compiler. To achieve good performance, x86 CPUs implement a much larger number of hardware registers which are dynamically renamed as needed. ARM does not require such extensive register renaming logic.
    Every ARM instruction is conditional, and simple if-then-else constructs can be handled without branches. x86 relies much more heavily on branches, but frequent branches can stall the pipeline on a processor. Good performance in x86 requires extensive branch prediction hardware, where ARM is served with a far simpler implementation." -- http://codingrelic.geekhold.com/2010/08/x86-vs-arm-mobile-cpus.html

--

http://codingrelic.geekhold.com/2008/10/aliasing-by-any-other-name.html

For the sake of efficiency, during optimization, C compilers may make some assumptions about which pointers may be aliased. If they made no assumptions at all, then whenever a value is written to the location pointed to by a pointer, it would have to be assumed that the value at every other location pointed to by every other pointer may have changed. This would prevent any optimizations where the compiler replaces some pointer-accessed locations with registers.

So, the standard defined a rule for when the compiler may assume that two pointers are not aliases. The rule is when they are of incompatible types.

So, if you have two aliased pointers of incompatible types (e.g. one is a pointer to a 16-bit number and the other is a pointer to a 32-bit number), the C compiler may 'optimize' in ways that assume that the pointers are not aliased, causing bugs.
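
here's a minimal C++ sketch of that bug class (mine, not from the linked post; demo is a made-up function). uint32_t* and uint16_t* are incompatible types, so the compiler may assume they never alias:

#include <cstdint>
#include <cstdio>

uint32_t demo(uint32_t *value, uint16_t *alias) {
    *value = 0x11111111;
    *alias = 0x2222;   // assumed not to touch *value under strict aliasing...
    return *value;     // ...so this load may be folded to the constant 0x11111111
}

int main() {
    uint32_t v = 0;
    // this call breaks the rule: both pointers refer to the same storage
    std::printf("%08x\n", (unsigned)demo(&v, reinterpret_cast<uint16_t *>(&v)));
    return 0;
}

compile at -O0 and then -O2 (or -O2 with gcc's -fno-strict-aliasing) and the printed value can differ; that's the 'optimization' biting.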

--

jasper should eschew optimizations like those prevented by the C 'volatile' keyword, by default; instead, a 'nonvolatile' sort of annotation should explicitly say which things can be optimized in this way

--

http://codingrelic.geekhold.com/2008/03/secret-life-of-volatile.html describes some sorts of optimizations which C's 'volatile' prevents. Namely, replacing references through a pointer with references to a register; reordering memory accesses; writing only the last value to memory when a sequence of values overwrite each other. Notes that volatile does NOT help avoid the CPU data cache (to avoid this, create a memory-mapping in a special way) or insert memory barriers.

http://www.airs.com/blog/archives/154 notes that volatile does not guarantee atomic accesses, nor does it guarantee cache flushes. http://web.archive.org/web/20120210084400/http://software.intel.com/en-us/blogs/2007/11/30/volatile-almost-useless-for-multi-threaded-programming/ makes similar points.
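
for reference, the classic legitimate use of volatile is a flag written outside normal control flow; a minimal C++ sketch (mine, not from the linked posts):

#include <csignal>
#include <cstdio>

// written by the signal handler, outside normal control flow, so it is
// declared volatile: the compiler must re-load it on every iteration
// instead of caching it in a register and spinning forever
volatile std::sig_atomic_t got_signal = 0;

void on_int(int) { got_signal = 1; }

int main() {
    std::signal(SIGINT, on_int);
    std::puts("press Ctrl-C to stop");
    while (!got_signal) {
        // busy-wait; without volatile, an optimizer could hoist the load
        // out of the loop and this would never terminate
    }
    std::puts("done");
    return 0;
}

and per the links above, that's about all volatile buys you: for actual threads you still need atomics and memory barriers.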

--

Hoon's %=, (C(W X)), "query with changes", or "resolve with changes" looks useful.

--

http://www.randomhacks.net/2005/12/03/why-ruby-is-an-acceptable-lisp/

Ruby vs. Lisp:

[1,2,3].map { |n| n*n }.reject { |n| n%3==1 }

(remove-if (lambda (n) (= (mod n 3) 1)) (mapcar (lambda (n) (* n n)) '(1 2 3)))

--

in Lua, reading nonexistent variables is not an error, it just gives you nil.

not suggesting this for jasper, it's just interesting.

--

it's desirable to be able to write chains of processing steps from left to right, like Ruby, instead of nested, like Lisp:

[1,2,3].map { |n| n*n }.reject { |n| n%3==1 }

is better than:

(remove-if (lambda (n) (= (mod n 3) 1)) (mapcar (lambda (n) (* n n)) '(1 2 3)))

but i'd prefer not to have everything OOP like in Ruby, because it seems silly to me that a symmetric two-argument function like addition should be defined in an asymmetric way.

so how could we do that if map and reject were just functions?

you'd just have to have a syntax operator that lets you say "take the result on my left, and give it as the first argument to the function on my right". to generalize, let the user choose which argument on the right gets the thing on the left.

perhaps this is how arrows work in Haskell, i'm not sure.

so e.g., using '|' as the operator and '-' to mark where to put the argument, you'd have something like:

[1,2,3] | map - { |n| n*n } | reject - { |n| n%3==1 }

note the similarity to Unix shell syntax. Why is this longer than the above Ruby code? because we're explicitly specifying at which arguments to put the incoming results. We could say that if no place is specified (by the next pipe), then put it as the first argument:

[1,2,3] | map { |n| n*n } | reject { |n| n%3==1 }

now, what about Ruby's 'yield'? we don't need 'yield' if we are just passing anonymous functions, we only need it if the block coming in can 'return' in the larger scope. And, to make things as concise as possible, we may as well omit the argument lists in the anonymous lambdas and use special default variables to match positional args:

[1,2,3] | map { $1*$1 } | reject { $1%3==1 }

imo that's even easier to read than Ruby!

interesting that this scheme uses two kinds of default variables: the target of the pipe (set by '-', or, by default, the first argument of the function), and the variables for the anonymous lambdas ($1, $2 etc)

note: instead of $1,$2, etc, should we use x,y,z or a,b,c?

--

incidentally, for Jasper's answer to Ruby 'yield', should we just allow passed first-class functions to act that way anyways, if they are called in a special way? kinda like giving them a continuation that they could call, i guess?

--

to generalize piping further, just have a way to have multiple 'pipes' and to route the inputs and outputs of a command to the pipes

actually routing the outputs is too complicated; if you want to do that, just do an assignment and spread the pipeline over multiple lines.

may as well just use $1, $2, etc instead of -

so $1, $2, etc take the values of the multiple return arguments by the guy at the previous step of the pipeline

and you can also capture the values of e.g. STDOUT and STDERR by asking for e.g. $STDOUT and $STDERR in the next pipeline stage

but what if you want to capture STDOUT but instead of using it in this stage, use it in the next one? oh dear, i guess we need output routing after all. we need something like $STDOUT > $STDOUT (our STDOUT is disconnected and replaced with the previous stage's STDOUT) or $STDOUT >> $STDOUT (our stdout is merged with the previous stage's STDOUT). And then you could do $STDOUT > $3 (our return argument 3 is the previous stage's STDOUT), or $3 > $2 (our return argument 2 is the previous stage's return argument 3).

i guess pipelines should be able to do streams, not just fixed objects. but they should do them lazily, so that they can be infinite. i guess that's not different from the source sending the target a lazy list. so we don't need to do anything special here, if they want to stream, they'll send a lazy list over the pipeline.

note that this pipeline stuff is all just syntactic sugar for ordinary sequences of commands involving assignments of return arguments to temporary variables, passing of temporary variables to functions, and semiglobals (STDOUT is a semiglobal). So it can be expanded before applying macros.
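
to make the desugaring concrete, here's a rough C++ rendering of what [1,2,3] | map { $1*$1 } | reject { $1%3==1 } might expand to (map_fn and reject_fn are invented stand-ins, since Jasper's library doesn't exist):

#include <cstdio>
#include <vector>
#include <functional>

// invented stand-ins for the pipeline's stages
std::vector<int> map_fn(const std::vector<int> &xs, std::function<int(int)> f) {
    std::vector<int> out;
    for (int n : xs) out.push_back(f(n));
    return out;
}

std::vector<int> reject_fn(const std::vector<int> &xs, std::function<bool(int)> p) {
    std::vector<int> out;
    for (int n : xs) if (!p(n)) out.push_back(n);
    return out;
}

int main() {
    // each '|' stage just binds its result to a temporary and passes it
    // to the next call as the first argument
    std::vector<int> t1 = {1, 2, 3};
    std::vector<int> t2 = map_fn(t1, [](int n) { return n * n; });         // {1, 4, 9}
    std::vector<int> t3 = reject_fn(t2, [](int n) { return n % 3 == 1; }); // {9}
    for (int n : t3) std::printf("%d\n", n);
    return 0;
}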

note: we need a generalized 'identity' combinator for this, that just passes out the same stuff it got in, no matter how many arguments

note: can we do SKI combinator calculus? i think so; we have K from above, the need for a generalized I is clear, and i think we can do S as a normal function

--

what is STDOUT anyways? a lazy list? a lazy list wrapped with a cursor? something else?

--