proj-oot-old-150618-ootNotes8

orthecreedence 3 days ago

link

Common Lisp guy gonna chime in here...having a REPL that integrates with your editor (vim/slimv in my case) is an intense improvement on any other language I've ever used. With C you edit, leave, `make`, run. With PHP you edit, open browser, F5, run...same with JS.

With lisp, I edit, eval (simple command in my editor), see result. If I'm building a game or a long-running app, I can redefine functions while it's running without doing a full recompile and having to try and get back to the state the app was in when I wanted to test my change.

So you get the ability to live-code while the app is running. Once you get used to being able to do this, leaving your editor and opening a shell to test your app seems like a barbaric exercise.

So a good REPL (lisp or not, doesn't matter) can make the experience a lot more interactive and experimental than your average write -> save -> compile -> run cycle.

reply

gsg 3 days ago

link

Unfortunately the OCaml toplevel pales in comparison to CL environments like slime in terms of visibility and debuggability. Redefinition is also off the table, as OCaml is a very static language. (You can rebind the same name, which can be helpful in exploratory programming but is not at all the same thing.)

It remains an extremely useful tool though.

reply

--

"I'm not arguing that you can't get work done in these languages. You can get very solid work done in all of them, with enough persistence. My basic impetus for blogging about this stuff stems from the desire to have my cake and eat it at work. I want a language with Python's succinctness, Lisp's extensibility, C's performance, Erlang's parallelism, Java's tools, Io's orthogonality, and apparently Qi's type system. :-) And more besides." -- http://steve-yegge.blogspot.com/2006/04/lisp-is-not-acceptable-lisp.html

--

" My concept of the system expanded from that of a hierarchical notebook to a complete programming system. Yet I knew that it could not satisfy me if it contained any parts not modifiable from within itself. Emacs would stand a chance, were it not for its C-language core. Then I discovered that system-level reflection is not an unreasonable demand. I found out that instant documentation and source access for any object known to the user/programmer is a reasonable thing to expect. That it is possible to halt, repair, and resume a running program. And, depressingly, that one could do these things on 1980s vintage hardware but not on modern machines.My concept of the system expanded from that of a hierarchical notebook to a complete programming system. Yet I knew that it could not satisfy me if it contained any parts not modifiable from within itself. Emacs would stand a chance, were it not for its C-language core. Then I discovered that system-level reflection is not an unreasonable demand. I found out that instant documentation and source access for any object known to the user/programmer is a reasonable thing to expect. That it is possible to halt, repair, and resume a running program. And, depressingly, that one could do these things on 1980s vintage hardware but not on modern machines. " -- http://www.loper-os.org/?p=8

---

https://groups.google.com/forum/?hl=en#!topic/comp.lang.lisp/XpvUwF2xKbk[101-125-false]

---

Design of a LISP-based microprocessor

http://dl.acm.org/citation.cfm?id=359031&coll=portal&dl=ACM

---

" Kent M Pitman 2/20/99 More LispOS? talk (was Re: Lisp subverts the world (was Re: ints vs fixnums (was Re: Java ... (was Re: ... (was Re: ...))))) cba...@2xtreme.net (Christopher R. Barry) writes:

> Within Garnet you can move your mouse over any object in your GUI and > hit F1 to bring up the Inspector which gives you information about > every single slot value in the object, displays its "is-a" hierarchy, > and shows you what things its formulas depend on.

Within DW, you can move your mouse over any object or subobject that has been displayed, whether or not it was part of any GUI. You get this virtually for free and have to work hard to suppress it if you don't want it. But it doesn't require any special programming.

One of the most common ways to get a foothold in Genera for debugging something is to (1) find a place where the thing you want to use appears visually, (2) click Super-Left to get the object into your hands for read-eval-print, inspection, etc, (3) use (ed (type-of *)) to find its source reliably and with no foreknowledge of what the type is, who wrote the program, what their file conventions were, who loaded it, or any of myriad other things that other systems make me do.

> Under the X windowing system there is this standard tool "editres" > that lets you click on any client and bring up its class hierarchy > tree and edit resources interactively.

Not resources. Objects. All objects. Not just the heavyweight ones. Lists. Integers. I somehow don't believe you when you say this is the same as what Genera does.

The usual thing DW enables is the debugging of things that the person did NOT make part of the standard interface. Pointing to a UI tool and saying it allows you to plan for structured debugging of something is missing the point.

Or maybe you're saying that Garnet allows me to cross the boundary into the internals of systems. Maybe I'm missing out.

When you can make the claim that if you teach me a few paragraphs worth of stuff, I will have all of the information I need in order to bootstrap myself into a complete understanding of the entire operating system and all of its conventions, I'll believe you Garnet and X are as powerful as Genera and Dynamic Windows. Until then, I'll continue to believe they are different. Incidentally, I'm dying to know that there is a tool that will do all that Genera could do only for stock hardware because the stack of books I need to buy in order to be equivalently competent with other systems is daunting and I could really do with the savings in time, energy, and money of learning things about standard systems in the way I've been doing...

I don't even find Emacs itself to be equivalently self-documenting and easy to get around in, much less the many systems that Emacs touches.

> I know these things aren't as cool as the LispM's equivalent, but is > that it? It sounds like you have few nice applications that have > functionality equivalent to modern environments but because of their > tight integration you can do a few things that you otherwise couldn't.

Dynamic Windows offered a tight integration with the system. You alleged that all that it offered is captured by Emacs. The burden is on you to show that it is. Defining away the problem does not do that.

You haven't said how Emacs lets me "discover" systems. You've only said it provides tools that enable those who already know about a system, and Garnet/X tools that let me inspect the formally presented UI of a system. That's not the same.

> > But for example, there's no assurance when I read in a source file > > that it's the same source file that I loaded. The lisp machine > > keeps meticulous records not only of which source files are loaded > > and from where but which functions have been patched by which files. > > That means I can't make a mistake and use the wrong tags file or the > > wrong version of files backing up a tags file. > > There are modern IDEs available for popular languages with powerful > project file mechanisms with very similar capability. Not as cool, but > still powerful and usable.

This started out by a discussion of what's cool. You said Emacs was cool and nothing else was needed. Now you're qualifying your remarks by saying everything is not as cool, and dodging the specifics I laid out.

> > There are myriad subtle ways in which Emacs is only a pale shadow of > > Zmacs. > > Other than some of the cool functionality that arises within Zmacs > from its super-tight integration with the rest of the environment, > what can Zmacs itself specifically do that Emacs can't?

Paraphrase: "Other than the fact that the system you're talking about involved a lot of people investing time and thought on various issues that have been lost, can you explain to me why a system in which that thought has been lost is deficient?" Uh, ... no. I can't. This was about lost features. This was about your claim that nothing had been lost. If you leave out the stuff that's lost, you're right, there's no difference.

Look, I used to literally sit around in my office at MIT years ago and fuss at people who said Lisp Machines were everything. (I used and liked Lisp Machines but I myself didn't think they were what they were because of the hardware--they were what they were because of the software and the ideas behind them.) I used to ask people "What's an impossible thing to do? I'm looking for something to do this afternoon in Teco that people say can only do on Lisp Machines." People said Zmail. I wrote the approximate equivalent in Teco--filters, and all that. They wanted a filter menu. I wrote that in Teco. They wanted mouse handling. (That was tough because PDP10's didn't have mice, but I used my imagination a little and arranged a mouse protocol from Lisp Machines so you could telnet to the teco-based Emacs and click on my filter menus.) They wanted Zmail init files. I wrote a Lisp compiler in Teco so that I could use people's Zmail init files unmodified. It was about 20K words of Teco code, btw. Code was smaller back then... sigh.

Anyway, I *know* what it is to look at functionality and duplicate it elsewhere. It CAN be done. I am not saying it can't. What I'm saying is that it has not been done, and it's a crying shame. Few people even know there ever WAS a lisp machine, and those who do are mostly not rich enough personally to invest the time to duplicate what was there. Many people spent a big chunk of their lives investing in this dream and it didn't pan out quite as we wish. Ok. Sometimes other events win out--not always even for the right reasons. Or at least for the reasons you wish. But don't add insult to injury to say that the losers in battles such as these had nothing to offer.

Common Lisp beat out Interlisp, and maybe for good reasons but it doesn't mean Interlisp had nothing to offer--some very good ideas got lost in the shuffle and I don't pretend that Common Lisp just obviously had a better way. Java is going to beat out Smalltalk perhaps, but that doesn't mean Java is better than Smalltalk. We owe it to the losers in these little skirmishes to make sure that, if nothing else, the good ideas are not lost along with the framework. And we do not accomplish that by defining that there was nothing lost. That's both callous to those who worked hard on these other things and short-sighted to the future, which might one day care about the things that got lost.

There are still Lisp Machines around. If you want to opine on them with authority, get one and use it. There is no substitute for first-hand data in situations like this.

> > > And XEmacs can > > > embed cool color graphics and glyphs/widgets into the frames too. > > > Is there anything a programmer cannot do elegantly and efficiently > > > within Emacs? > > > > In principle? Well, it's all software. You can make it be whatever > > you want, I guess. But in practice, if you're asserting that Emacs > > duplicates the features of Genera's editor, I think you're kidding > > yourself. > > This functionality to click on an instance of any object seems really > cool, and indeed Emacs as it is currently could never do this. But is > that all? And again, this feature doesn't seem specific to Zmacs but > rather Genera itself within which Zmacs is tightly integrated.

No. It's not all. I could enumerate any of a zillion things the Lisp Machine editor system has that are not in Emacs. But what would be the point? You'd see any finitely enumerable list, tell me it was all stuff that could be done, and then say that my point was moot.

Here are a few, not intended to be complete, but to give you a spirit of the degree of "precision" in Zmacs commands that distinguishes them from Emacs commands:

   This is an extension to Tags Multiple Query Replace (which does a
   multiple-strings-at-once Query Replace over a Tags Table; and,
   incidentally, the Tags Tables don't have to be files--the Lisp
   Machine just makes them up on the fly from spaces of buffers, from
   system definitions, etc. on user request).  It allows you to put
   pairs of elements and their replacements in a buffer and have
   all replaced in parallel.
   As with all operations of this kind (including Tags operations and
   other mapping operations like "Edit Compiler Warnings"), this creates
   a "possibilities buffer" which is a physical manifestation of keeping
   your finger on where you were in the middle of a complex operation
   so that if you see something else you want to do while you are
   doing the replacement, you can suspend the operation and resume it
   later perhaps after doing other replacements or edits.  When editing
   the ANSI CL spec, something I refused to do on anything but a Lisp
   Machine, I often had dozens of these buffers stacked up in simultaneous
   use and was successfully able to resume them to make sure all completed
   while allowing the system to accommodate my "focus".
   This command allows you to run a source comparison program (which
   itself is just presentationally way better than Unix diff or
   emacs compare-windows).  There was a public version of a source
   comparison program written by someone a while back that is as good
   but is in CL and isn't integrated into Emacs.  Alas.  But in addition
   to the presentational issues, which are comparatively minor, the real
   feature was how this could be called.  It prompts for two things
   describing what to compare, including "buffer", "file", "region",
   "definition", "top of kill ring" and "second in kill ring".  you type
   a single character (B/F/R/D/c-Y/m-Y) for each such prompt.  It's
   completely essential for comparing files.  What I can't comprehend
   is why no one thinks this is essential in a non-versioned file system.
   It's important enough in a versioned file system but in a non-versioned
   system one is always finding "probable copies" of files all over the place
   and trying to understand the differences.  Ready access to program
   controlled source compare is central to everything I do and basically
   wholly absent on stock hardware.  Another example is when you are saving
   a file on the lisp machine and you find it's been written by someone else;
   in emacs the query just tells you of a problem--in zmacs it offers to
   compare the files before making you decide whether to continue saving.
   It's not the feature itself, though that's important, but
   it's the attention to detail at this fine-grained level throughout the
   system which is why the lisp machine is so fondly remembered.

And that's my overall point. It's not just about what's missing. It's about the lack of interest in those who have created Emacs in supporting those other things. I still just use my Lisp Machine. It's right here next to my PC, and on a regular basis I just move to the other chair and edit in Zmacs. 6 years after heavy-duty development ceased on Zmacs, it still whomps the competition and makes me not care that processor speeds have gone up a zillionfold in the meantime. Others will tell you the same.

I WISH I could use features like that on a fast processor. that would be great. But it isn't likely to happen soon.

You can say the burden is on us old-timers to tell you what's missing or we shouldn't be whining. But I don't see it that way. I see the burden is on the victors, who have the resources and who claim their way is better, to show us that they won for good reason. We did our part for the cause. We may or may not continue to try to do things to assure the ideas aren't lost.

I spend a lot of my time trying to make sure old ideas make it onto the books and don't get lost. But I'm just one person. It takes more than one person. And the task does not begin by dismissing the need to do the job.

> > Also, I'm not suggesting this was a property of the Lisp Machine or of > > special hardware. Nothing the LispM did couldn't be implemented in > > standard systems if people were of a mind to do it. But they haven't > > been, and it's historical revisionism to deny that stuff has been lost. > > Problem is, only people that have access to them know specifically > what makes them cool. It's nice to hear from you what some of this > functionality really is, but it seems that most of it is duplicated at > least in part by some modern tools/environments.

You can order doc sets ... people give them away on this and other lists periodically.

I wish you luck in this regard. These matters are syntactically small and I don't really care a lot about them at the micro level. But agreement is important and anything that unifies the Lisp communities is good. "?" is a bad choice of character because it frustrates Scheme programmers who want to use "?" instead of "P" for the end of predicate names. Even if one isn't going for Scheme compatibility, going for things that don't create gaping divides between the communities is good. "

---

http://en.wikipedia.org/wiki/C-element

--

http://webintents.org/

--

" Overview Io is a dynamic prototype-based programming language. The ideas in Io are mostly inspired by Smalltalk[1] (all values are objects), Self[2] (prototype-based), NewtonScript?[3] (differential inheritance), Act1[4] (actors and futures for concurrency), Lisp[5] (code is a runtime inspectable / modifiable tree) and Lua[6] (small, embeddable). Perspective The focus of programming language research for the last thirty years has been to combine the expressive power of high level languages like Smalltalk and the performance of low level language like C with little attention paid to advancing expressive power itself. The result has been a series of languages which are neither as fast as C or as expressive as Smalltalk. Io's purpose is to refocus attention on expressiveness by exploring higher level dynamic programming features with greater levels of runtime flexibility and simplified programming syntax and semantics.

In Io, all values are objects (of which, anything can change at runtime, including slots, methods and inheritance), all code is made up of expressions (which are runtime inspectable and modifiable) and all expressions are made up of dynamic message sends (including assignment and control structures). Execution contexts themselves are objects and activatable objects such as methods/blocks and functions are unified into blocks with assignable scope. Concurrency is made more easily manageable through actors and implemented using coroutines for scalability. Goals To be a language that is:

simple

    conceptually simple and consistent
    easily embedded and extended 

powerful

    highly dynamic and introspective
    highly concurrent (via coroutines and async i/o) 

practical

    fast enough
    multi-platform
    unrestrictive BSD/MIT license
    comprehensive standard packages in distro "

" overview

Io is a prototype-based programming language inspired by Smalltalk (all values are objects, all messages are dynamic), Self (prototype-based), NewtonScript (differential inheritance), Act1 (actors and futures for concurrency), LISP (code is a runtime inspectable/modifiable tree) and Lua (small, embeddable).

features

    BSD license
    small vm (~10K semicolons)
    multi-state (multiple VMs in same process)
    incremental collector, weak links
    actor-based concurrency, coroutines
    64bit clean C99 implementation
    embeddable, exceptions, unicode "

---

perhaps the reason that programming small scripts for numerical exploration demands less safety than normal programs is: (a) the person running the script is also a programmer, and in fact, the same programmer who wrote the script; (b) the control flow only 'leaves your hands' briefly (e.g. only a small number of lines are executed each time before control returns to you at the interpreter prompt); (c) input and output data are not transformed via presentation layers between your hands and eyes and the core processing.

--

these notes for gasket on why i still use a wiki for my website instead of a static site generated with e.g. jekyll might inspire something, particularly the way in which tags as embedded links within content of a certain matchable form are more convenient than metadata separate from content:

(why is my website a locked wiki rather than a static site?

a) history, before the onset of wiki-spam i wanted it to be an open wiki. With Gasket, i hope to achieve that.

b) lightweight markup

c) RecentChanges

d) backlinks. simple tagging system via CategoryTagName w/o the need for 'metadata'.

e) simple workflow, no 'metadata'

http://jalada.co.uk/2011/01/30/back-to-wordpress.html complains about Jekyll workflow: " Jekyll was nice, but the barrier to entry for posting was far too high. I’d fire up TextMate, copy some meta data from another post, write the post (in Markdown), upload images manually if I wanted them, save the post with the right filename (I got this wrong a lot), push to Github, SSH to my server, pull from Github, tweet about the new post. That’s a lot of work which meant ideas for blog posts never went anywhere. "

)

--

in fact, that may be one of the reasons that HTML is so convenient, too; that links are embedded within the content rather than being in a separate metadata section.

this may be a lesson for Oot, somehow. perhaps one that has already been learned, by way of Views.

--

"

Functions mapping sets to truth values:

    (singleton? X)   Returns true iff the set X has exactly one element.
    (doubleton? X)   Returns true iff the set X has exactly two elements.
    (tripleton? X)   Returns true iff the set X has exactly three elements.

Functions on sets:

    (set-difference X Y)   Returns the set that results from removing Y from X.
    (union X Y)            Returns the union of sets X and Y.
    (intersection X Y)     Returns the intersect of sets X and Y.
    (select X)             Returns a set containing a single element from X.

Logical functions:

    (and P Q)    Returns true if P and Q are both true.
    (or P Q)     Returns true if either P or Q is true.
    (not P)      Returns true iff P is false.
    (if P X Y)   Returns X iff P is true, Y otherwise.

Functions on the counting routine:

    (next W)            Returns the word after W in the counting routine.
    (prev W)            Returns the word before W in the counting routine.
    (equal-word? W V)   Returns true if W and V are the same word.

Recursion:

    (L S)   Returns the result of evaluating the entire current lambda expression S.

Table 2.1: Primitive operations allowed in the LOT. All possible compositions of these primitives are valid hypotheses for the model. "

http://colala.bcs.rochester.edu/papers/piantadosi_thesis.pdf pdf page 32

--

http://en.wikipedia.org/wiki/Live_coding

--

https://github.com/petkaantonov/bluebird/wiki/Optimization-killers

--

" C++ POD Member Handling Email ThisBlogThis?!Share to TwitterShare? to FacebookShare? to Pinterest

I always mess up the initialization of plain old data fields in C++. Always. Maybe by writing it up, I'll finally get it right.

Plain Old Data is essentially anything a regular C compiler could compile. That is:

    integer and floating point numbers (including bool, though it isn't in C)
    enums
    pointers, including pointers to objects and pointers to functions
    some aggregate data structures (structs, unions, and classes)

A struct, union, or class is treated as plain old data if it has only the default constructor and destructor, has no protected or private member variables, does not inherit from a base class, and has no virtual functions. I suspect most C++ programmers have an intuitive feel for when a class behaves like an object and when it's just a collection of data. That intuition is pretty good in this case.

The default constructor for plain old data leaves it uninitialized. An explicit constructor sets it to zero.

class Foo { public: Foo() {} int a_; };

	Result: a_ is uninitialized.

class Foo { public: Foo() : a_() {} int a_; };

	Result: the member corresponding to a_ is zeroed. Were it a structure, the entire thing would be zero.

People are often confused by the first point, that member POD fields are left uninitialized unless specifically listed in the initializer list. This is not the same as for member objects, which call the default constructor. Making this even more confusing, when a process starts up any pages it gets from the OS will be zeroed out to prevent information leakage. So if you look at the first few objects allocated, there is a better than average chance that all of the member variables will be zero. Unfortunately once the process has run for a while and dirtied some of its own pages, it will start getting objects where the POD variables contain junk.

Struct initializers

Putting a POD struct in the initializer list results in zeroing the struct. If the struct needs to contain non-zero data, C++0x adds a useful capability:

struct bar { int y; int z; };

class Foo { public: Foo() : b_({1, 2}) {} struct bar b_; };

Recent versions of gcc implement this handling, though a warning will be issued unless the -std=c++0x or -std=gnu++0x command line flag is given. " -- http://codingrelic.geekhold.com/2011/01/c-pod-member-handling.html

--

" x86 vs ARM: Active Power

It requires power to switch a CMOS transistor 0->1 or 1->0, so one way to reduce power consumption is to have fewer transistors and to switch them at a lower frequency. x86 is at a disadvantage here compared to ARM, which Intel and AMD's design teams have to cover with extra work and cleverness. The vagaries of the x86 instruction set burdens it with hardware logic which ARM does not require.

    Since the Pentium Pro, Intel has decoded complex x86 instructions down to simpler micro-ops for execution. AMD uses a similar technique. This instruction decode logic is active whenever new opcodes are fetched from RAM. ARM has no need for this logic, as even its alternate Thumb encoding is a relatively straightforward mapping to regular ARM instructions.
    x86_32 exposes only a few registers to the compiler. To achieve good performance, x86 CPUs implement a much larger number of hardware registers which are dynamically renamed as needed. ARM does not require such extensive register renaming logic.
    Every ARM instruction is conditional, and simple if-then-else constructs can be handled without branches. x86 relies much more heavily on branches, but frequent branches can stall the pipeline on a processor. Good performance in x86 requires extensive branch prediction hardware, where ARM is served with a far simpler implementation." -- http://codingrelic.geekhold.com/2010/08/x86-vs-arm-mobile-cpus.html

--

http://codingrelic.geekhold.com/2008/10/aliasing-by-any-other-name.html

For the sake of efficiency, during optimization, C compilers may make some assumptions about which pointers may be aliased. If they made no assumptions at all, then whenever a value is written to the location pointed to by a pointer, it would have to be assumed that the value at every other location pointed to by every other pointer may have changed. This would prevent any optimizations where the compiler replaces some pointer-accessed locations with registers.

So, the standard defined a rule for when the compiler may assume that two pointers are not aliases. The rule is when they are of incompatible types.

So, if you have two aliased pointers of incompatible types (e.g. one is a pointer to a 16-bit number and the other is a pointer to a 32-bit number), the C compiler may 'optimize' in ways that assume that the pointers are not aliased, causing bugs.

--

oot should eschew optimizations like those prevented by the C 'volatile' keyword, by default; instead, a 'nonvolatile' sort of annotation should explicitly say which things can be optimized in this way

--

http://codingrelic.geekhold.com/2008/03/secret-life-of-volatile.html describes some sorts of optimizations which C's 'volatile' prevents. Namely, replacing references through a pointer with references to a register; reordering memory accesses; writing only the last value to memory when a sequence of values overwrite each other. Notes that volatile does NOT help avoid the CPU data cache (to avoid this, create a memory-mapping in a special way) or insert memory barriers.

http://www.airs.com/blog/archives/154 notes that volatile does not guarantee atomic accesses, nor does it guarantee cache flushes. http://web.archive.org/web/20120210084400/http://software.intel.com/en-us/blogs/2007/11/30/volatile-almost-useless-for-multi-threaded-programming/ makes similar points.

--

Hoon's %=, (C(W X)), "query with changes", or "resolve with changes" looks useful.

--

http://www.randomhacks.net/2005/12/03/why-ruby-is-an-acceptable-lisp/

Ruby vs. Lisp:

[1,2,3].map {|n| n*n }.reject {|n| n%3==1 }

(remove-if (lambda (n) (= (mod n 3) 1)) (mapcar (lambda (n) (* n n)) '(1 2 3)))

--

in Lua, reading nonexistent variables is not an error, it just gives you nil.

not suggesting this for oot, it's just interesting.

--

it's desirable to be able to write chains of processing steps from left to right, like Ruby, instead of nested, like Lisp:

[1,2,3].map {|n| n*n }.reject {|n| n%3==1 }

is better than:

(remove-if (lambda (n) (= (mod n 3) 1)) (mapcar (lambda (n) (* n n)) '(1 2 3)))

but i'd prefer not to have everything OOP like in Ruby, because it seems silly to me that a symmetric two-argument function like addition should be defined in an asymmetric way.

so how could we do that if map and reject were just functions?

you'd just have to have syntax operators that let you say "take the result on my left, and give it as the first argument to the function on my right". to generalize, let the user choose which argument on the right gets the thing on the left.

perhaps this is how arrows work in Haskell, i'm not sure.

so e.g., using '|' as the operator and '-' to mark where to put the argument, you'd have something like:

[1,2,3] | map - {|n| n*n } | reject - {|n| n%3==1 }

note the similarity to Unix shell syntax. Why is this longer than the above Ruby code? because we're explicitly specifying at which arguments to put the incoming results. We could say that if no place is specified (by the next pipe), then put it as the first argument:

[1,2,3] | map {|n| n*n } | reject {|n| n%3==1 }

now, what about Ruby's 'yield'? we don't need 'yield' if we are just passing anonymous functions, we only need it if the block coming in can 'return' in the larger scope. And, to make things as concise as possible, we may as well omit the argument lists in the anonymous lambdas and use special default variables to match positional args:

[1,2,3] | map {$1*$1 } | reject {$1%3==1 }

imo that's even easier to read than Ruby!

interesting that this scheme uses two kinds of default variables: the target of the pipe (set by '-', or, by default, the first argument of the function), and the variables for the anonymous lambdas ($1, $2 etc)

note: instead of $1,$2, etc, should we use x,y,z or a,b,c?

--

incidentally, for Oot's answer to Ruby 'yield', should we just allow passed first-class functions to act that way anyways, if they are called in a special way? kinda like giving them a continuation that they could call, i guess?

--

to generalize piping further, just have a way to have multiple 'pipes' and to route the inputs and outputs of a command to the pipes

actually routing the outputs is too complicated; if you want to do that, just do an assignment and spread the pipeline over multiple lines.

may as well just use $1, $2, etc instead of -

so $1, $2, etc take the values of the multiple return arguments by the guy at the previous step of the pipeline

and you can also capture the values of e.g. STDOUT and STDERR by asking for e.g. $STDOUT and $STDERR in the next pipeline stage

but what if you want to capture STDOUT but instead of using it in this stage, use it in the next one? oh dear, i guess we need output routing after all. we need something like $STDOUT > $STDOUT (our STDOUT is disconnected and replaced with the previous stage's STDOUT) or $STDOUT >> $STDOUT (our stdout is merged with the previous stage's STDOUT). And then you could do $STDOUT > $3 (our return argument 3 is the previous stage's STDOUT), or $3 > $2 (our return argument 2 is the previous stage's return argument 3).

i guess pipelines should be able to do streams, not just fixed objects. but they should do them lazily, so that they can be infinite. i guess that's not different from the source sending the target a lazy list. so we don't need to do anything special here, if they want to stream, they'll send a lazy list over the pipeline.

note that this pipeline stuff is all just syntactic sugar for ordinary sequences of commands involving assignments of return arguments to temporary variables, passing of temporary variables to functions, and semiglobals (STDOUT is a semiglobal). So it can be expanded before applying macros.
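as a sanity check, here is a minimal Python sketch of the kind of desugaring meant above, applied to the [1,2,3] pipeline (map_/reject and the temporary names are my own, not Oot syntax):

    # desugaring of:  [1,2,3] | map {$1*$1 } | reject {$1%3==1 }
    # each pipeline stage becomes an assignment to a temporary variable,
    # and the previous stage's result is passed as the first argument
    # of the next function.

    def map_(f, xs):
        return [f(x) for x in xs]

    def reject(pred, xs):
        return [x for x in xs if not pred(x)]

    _t1 = [1, 2, 3]
    _t2 = map_(lambda x: x * x, _t1)         # {$1*$1}
    _t3 = reject(lambda x: x % 3 == 1, _t2)  # {$1%3==1}
    print(_t3)  # [9]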

note: we need a generalized 'identity' combinator for this, that just passes out the same stuff it got in, no matter how many arguments

note: can we do the SKI combinator calculus? i think so; we have K from above, the need for a generalized I is clear, and i think we can do S as a normal function
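a quick check in Python (curried definitions, my own spelling) that S, K, and I can indeed be written as ordinary functions:

    # S, K, I combinators as plain curried functions (a sketch, not Oot syntax)
    S = lambda f: lambda g: lambda x: f(x)(g(x))
    K = lambda x: lambda y: x
    I = lambda x: x

    # the classic identity check: S K K x == x
    print(S(K)(K)(42))  # 42
    print(I('hello'))   # hello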

--

what is STDOUT anyways? a lazy list? a lazy list wrapped with a cursor? something else?

--

http://doc.rust-lang.org/master/rust.html#behavior-considered-unsafe

---

dbaupp 15 hours ago

link

> You can easily generate a segfault in Rust in 'unsafe' (or 'trusted') code; that might only restrict errors of that nature to code that uses unsafe blocks, but practically speaking that's pretty common; once you enter an FFI unsafe block, you lose all type safety; but you can totally do it without FFI too. Eg. using transmute().

Not directly addressing what you're saying, but, IME people are far too quick to use `unsafe` code. One needs to be quite careful about it as there's a pile of invariants that need to be upheld: http://doc.rust-lang.org/master/rust.html#behavior-considere...

> once you enter an FFI unsafe block, you lose all type safety

You don't lose all type safety, especially not if the FFI bindings you're using are written idiomatically (using the mut/const raw pointers correctly; wrapper structs for each C type, rather than just using *c_void, etc).

reply

http://doc.rust-lang.org/master/rust.html#behavior-considered-unsafe

---

--

tshadwell 22 hours ago

link

For fear of disagree downvotes: I would say that many of the qualms brought up in this article are problems that are encountered fighting the language.

The problem of 'summing any kind of list' is not a problem that is solved in Go via the proposed kind of parametric polymorphism. Instead, one might define a type, `type Adder Interface{Add(Adder)Adder}`, and then a function to add anything you want is fairly trivial, `func Sum(a ...Adder) Adder`, put anything you want in it, then assert the type of what comes out.

When it comes to iteration, there is the generator pattern, in which a channel is returned, and then the thread 'drinks' the channel until it is dry, for example `func (m myType) Walk() chan->myType` can be iterated over via `range v := mt.Walk(){ [...] }`. Non-channel based patterns also exist, tokenisers usually have a Next() which can be used to step into the next token, etc.

The Nil pointer is not unsafe as far as I know, from the FAQ: http://golang.org/doc/faq#no_pointer_arithmetic

The writer seems to believe that functions on nil pointers crash the program, this is not the case. It's a common pattern in lazy construction to check if the receiving pointer is nil before continuing.

Go is not flawless by any means, but it warrants a specific style of simplistic but powerful programming that I personally enjoy.

reply

ithkuil 10 hours ago

link

>When it comes to iteration, there is the generator pattern, in which a channel is returned, and then the thread 'drinks' the channel until it is dry, for example `func (m myType) Walk() chan->myType` can be iterated over via `range v := mt.Walk(){ [...] }`. Non-channel based patterns also exist, tokenisers usually have a Next() which can be used to step into the next token, etc.

Actually using channels as a general iterator just for the sake of using the range operator is considered as an anti-pattern. The reason is not performance (although it has a cost), but the risk of leaking producer goroutines. Your example:

    for v := range mt.Walk() {
      if blah {
        break
      }
    }

How will the goroutine writing into the channel returned by mt.Walk know when there are no more consumers which will possibly read from it?

One way out is:

    done := make(chan struct{})
    for v := range mt.Walk(done) {
      if blah {
        break
      }
    }
    close(done) // or defer close(done)

Picking the right cleanup is error-prone.

What about errors? How will mt.Walk tell you that it had to interrupt the iteration because an error happened? Either your channel has a struct field containing your error and your actual value (unfortunately Go lacks tuples or multivalue channels).

Furthermore uncaught panics in the producer goroutine will generate a deadlock, which will be caught by the runtime, but it will halt your process. One way to do it is:

    errChan := make(chan error)
    for v := range mt.Walk(errChan) {
      if blah {
        break
      }
    }
    err := <-errChan

The producer will use the select statement to write both to errChan and your result channel. The success of writing to errChan is a signal for the producer that the consumer exited. However same thing here about relying on the last statement being executed to avoid a leak in case of returns or panics. Here the defer is less nice since you're supposed to do something with the error:

    func Example() (err error) {
      errChan := make(chan error)
      for v := range mt.Walk(errChan) {
        if blah {
          break
        }
      }
      defer func() {
        err = <-errChan
      }()
    }

Next-style methods just pass through the panics, and allow you to handle errors either by having a func Next() (error, value) or with this pattern which moves the pesky error handling outside:

    i := NewIterator()
    for i.Next() {
      item := i.Item()
      ...
    }
    err := i.Error()

First, any panic that happens inside either your code or the generator will bubble through. Second, if you return from your loop body, you will have to provide your own error (the compiler will remind you about your function signature, if in doubt). You can return early if the iterator can be stopped and GCed out (i.e. it doesn't handle goroutines or external resources), otherwise you'd have to call a cleanup as with channels.

The rule of thumb with Go should be that you don't have to do things just because they use some syntactic sugar. After a while you start to think about beauty in terms of properties not about calligraphy.

However, I do see this as a weak point of the language, which hopefully can be solved by education; after all Go is so simple to learn that you might be tempted to make it look even simpler. But the fact that the language has (almost) no magic, it means that you can actually understand what some code does, which imho outweighs the occasional syntactical heaviness or having to learn a few patterns.

reply

frowaway001 18 hours ago

link

> put anything you want in it, then assert the type of what comes out

... which is exactly what the article mentions and criticizes?

reply

rakoo 22 hours ago

link

> It's a common pattern in lazy construction to check if the receiving pointer is nil before continuing.

I disagree: if the construction can fail, the constructor must return an error, which will be checked; only if the error is nil can the process continue. There shouldn't be logic on the actual data returned to assert whether a constructor worked or not.

reply

tshadwell 22 hours ago

link

I meant something like this:

http://play.golang.org/p/eqnDLVMHGA (pseudocode)

reply

wyager 22 hours ago

link

>The writer seems to believe that functions on nil pointers crash the program, this is not the case. It's a common pattern in lazy construction to check if the receiving pointer is nil before continuing.

And what happens when you don't check? It crashes. That's the unsafe part.

These crashes are simply not possible in Rust and Haskell, and the type system notifies you if failure is possible (because the function will return an Option/Maybe).

reply

--

---

dcposch 18 hours ago

link

> A Good Solution: Constraint Based Generics and Parametric Polymorphism

> A Good Solution: Operators are Functions

> A Good Solution: Algebraic Types and Type-safe Failure

> A Good Solution: Pattern Matching and Compound Expressions

People have tried this approach. See languages like C++ and Scala, with hundreds of features and language specifications that run into the thousands of pages.

For an unintentional parody of this way of thinking, see Martin Odersky's "Scala levels": http://www.scala-lang.org/old/node/8610

For additional hilarity, note that it is an undecidable problem whether a given C++ program will compile or not. http://stackoverflow.com/questions/189172/c-templates-turing...

--

Go was created by the forefathers of C and Unix. They left out all of those features on purpose. Not unlike the original C or the original Unix, Go is "as simple as possible, but no simpler".

Go's feature set is not merely a subset of other languages'. It also has canonical solutions to important practical problems that most other languages do not solve out of the box:

Go's feature set is small but carefully chosen. I've found it to be productive and a joy to work with.

reply

runT1ME 18 hours ago

link

You seem completely ignorant of the things you're attempting to talk about. Scala doesn't have "hundreds' of features, nor is the language specification thousands of pages. It's just an outright fabrication to say so.

>Go was created by the forefathers of C and Unix.

Yeah, and it's obvious (and sad) they ignored the last twenty years of PL research and progress.

>They left out all of those features on purpose

Did they? I don't believe this is the case, as I've heard from the creators many times that they want to add generics but haven't figured out the details yet.

Are you really going to sit here and argue that static typing is important EXCEPT for when working with collection? That parametric polymorphism doesn't make things simpler?

reply

masklinn 13 hours ago

link

> Yeah, and it's obvious (and sad) they ignored the last twenty years of PL research and progress.

More than thirty years (at the time it was released), the first language with "modern" generics was ML in 1973.

reply

lmm 11 hours ago

link

The Scala specification is two hundred and something pages, around a third the length of the Java specification (largely because Scala has, in some sense, fewer features than Java, in the sense that Java has lots of edge cases with their own special handling, whereas Scala has a smaller number of general-purpose features. The complexity comes because it's easy to use all of them at once)

reply

wyager 18 hours ago

link

>See languages like C++ and Scala

Of the 4 you mentioned (Constraint based generics and parametric polymorphism, operators as functions, algebraic types and type-safe failures, and pattern matching/compound expression) C++ really only has 1 (operators as functions).

>with hundreds of features and language specification that run into the thousands of pages.

This describes neither Rust nor Haskell.

>Go is "as simple as possible, but no simpler".

It has mandatory heap usage, garbage collection, green threads. It's more than generous to call that "as simple as possible".

Of the 5 features you mention that Go has "canonical solutions" to (in the form of external tools), I know off the top of my head that Haskell's Cabal takes care of at least 4 of them. I'm not sure about formatting. Rust probably has similar tools, or if it doesn't, they can certainly be added without changing the language.

reply

dbaupp 16 hours ago

link

> Rust probably has similar tools

Built-in: http://doc.rust-lang.org/master/guide-testing.html

Built-in: http://doc.rust-lang.org/master/rustdoc.html

The newly released 'cargo': http://crates.io/ https://github.com/rust-lang/cargo/ (alpha, but quickly improving). This will be Rust's cabal equivalent, almost certainly with support for generating documentation and cross-compiling (it already has basic support for running the tests described above).

Missing at the moment, but very wanted: https://github.com/rust-lang/rust/issues/3195 .

(Well, to be precise, the compiler has the '--pretty normal' option, but it's not so good. https://github.com/pcwalton/rustfmt is the work-in-progress replacement.)

Already supported, although it requires manually compiling Rust with the appropriate --target flag passed to ./configure, to cross-compile the standard library.

reply

bjz_ 13 hours ago

link

I would be very wary about promoting Cargo as a 'cabal equivalent'. :P

reply

--

functional reactive programming sounds like a great thing for oot:

https://gist.github.com/staltz/868e7e9bc2a7b8c1f754

consider the example

" To show the real power of FRP, let's just say that you want to have a stream of "double click" events. To make it even more interesting, let's say we want the new stream to consider triple clicks as double clicks, or in general, multiple clicks (two or more). Take a deep breath and imagine how you would do that in a traditional imperative and stateful fashion. I bet it sounds fairly nasty and involves some variables to keep state and some fiddling with time intervals.

https://camo.githubusercontent.com/74d215aac2e23ae940cf5d1f4e08cc8878c9fecf/68747470733a2f2f676973742e67697468756275736572636f6e74656e742e636f6d2f7374616c747a2f38363865376539626332613762386331663735342f7261772f623538306164346133336236336163623263656439623865356539306661616238636137656632362f7a6d756c7469636c69636b73747265616d2e706e67

...

First we accumulate clicks in lists, whenever 250 milliseconds of "event silence" has happened (that's what buffer(stream.throttle(250ms)) does, in a nutshell. Don't worry about understanding the details at this point, we are just demoing FRP for now). The result is a stream of lists, from which we apply map() to map each list to an integer matching the length of that list. Finally, we ignore 1 integers using the filter(x >= 2) function. That's it: 3 operations to produce our intended stream. We can then subscribe ("listen") to it to react accordingly how we wish. "
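a minimal Python sketch of the same multi-click logic, written without any FRP library (plain lists and millisecond timestamps, my own function name), just to make the buffer-by-silence / map-to-length / filter steps concrete:

    # group clicks into bursts separated by >250ms of silence,
    # map each burst to its length, keep only bursts of 2+ clicks
    def multi_clicks(click_times_ms, silence_ms=250):
        bursts, current = [], []
        for t in sorted(click_times_ms):
            if current and t - current[-1] > silence_ms:
                bursts.append(current)
                current = []
            current.append(t)
        if current:
            bursts.append(current)
        lengths = [len(b) for b in bursts]     # map(list -> length)
        return [n for n in lengths if n >= 2]  # filter(x >= 2)

    # a double click, a lone click, then a triple click:
    print(multi_clicks([0, 100, 1000, 2000, 2100, 2220]))  # [2, 3]

an FRP library does the same thing incrementally over a live event stream rather than over a finished list, but the operators are the same.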

that sounds a lot like neurons responding to spikes, with short-term plasticity!

so FRP is brain-like

and it's also good because we are re-using the same language we use to analyze data (map, filter, etc) to analyze control (these are events that we are going to have handlers for)

and two goals of Oot are:

--

http://smallcultfollowing.com/babysteps/blog/2014/05/13/focusing-on-ownership/

---

https://news.ycombinator.com/item?id=8007477

---

" The threading macro inserts each expression into the next expression’s first argument place.

Let’s take the classic:

(loop (print (eval (read))))

Rather than write it like that, we can write it as follows:

(-> (read) (eval) (print) (loop)) " -- http://docs.hylang.org/en/latest/tutorial.html

---

---

why have nested list comprehensions? to avoid having to use nested loops instead of list comprehensions:

" Some common excuses for not using a list comprehension:

    You need to nest your loop. You can nest entire list comprehensions, or just put multiple loops inside a list comprehension. So, instead of writing:
        words = ['her', 'name', 'is', 'rio']
        letters = []
        for word in words:
            for letter in word:
                letters.append(letter)
        
    Write:
        words = ['her', 'name', 'is', 'rio']
        letters = [letter for word in words
                          for letter in word]
        
    Note that in a list comprehension with multiple loops, the loops have the same order as if you weren't making a list comprehension at all." -- http://lignos.org/py_antipatterns/

---

jules 1 day ago

link

Even clearer:

    x = [for word in words: for letter in word: letter]

This also has the advantage of being readable left to right without encountering any unbound identifiers like all other constructs in Python.

---

bkeroack 1 day ago

link

Depends on how you want to program: imperative vs functional.

Personally I think list comprehensions are the most beautiful part of Python, though sometimes I use map() when I'm trying to be explicitly functional (I realize it's allegedly slower, etc).

Generally I think list comprehensions are cleaner and allow you to write purer functions with fewer mutable variables. I disagree that deeply nested for loops are necessarily more readable.

reply

SEJeff 1 day ago

link

Map is not allegedly slower, it is demonstrably slower.

$ python -mtimeit -s'nums=range(10)' 'map(lambda i: i + 3, nums)'
1000000 loops, best of 3: 1.61 usec per loop

$ python -mtimeit -s'nums=range(10)' '[i + 3 for i in nums]'
1000000 loops, best of 3: 0.722 usec per loop

Function calls have overhead in python, list comprehensions are implemented knowing this fact and avoiding it so the heavy lifting ultimately happens in C code.

reply

---

ggchappell 1 day ago

link

This is a nice little article, but I wonder about some of the design decisions. In particular:

> The simplifications employed (for example, ignoring generators and the power of itertools when talking about iteration) reflect its intended audience.

Are generators really that hard? (Not a rhetorical question!)

The article mentions problems resulting from the creation of a temporary list based on a large initial list. So, why not just replace a list comprehension "[ ... ]" with a generator expression "( ... )"? Result: minimal storage requirements, and no computation of values later than those that are actually used.

And then there is itertools. This package might seem a bit unintuitive to those who have only programmed in "C". But I think the solution to that is to give examples of how itertools can be used to create simple, readable, efficient code.

reply
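a small example of both suggestions above (a generator expression instead of a list comprehension, plus a simple itertools use); works in Python 2 and 3:

    from itertools import count, islice

    # generator expression over an infinite source: values are only
    # computed as they are consumed, so no temporary list is built
    squares = (n * n for n in count())

    # take just the first five squares
    print(list(islice(squares, 5)))  # [0, 1, 4, 9, 16]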

---

omegote 1 day ago

link

Point 3 of the iteration part is not good advice. With [1:] you're making a copy of the list just to iterate over it...

reply

omaranto 1 day ago

link

You're right. I still wouldn't recommend looping over indices, but rather using itertools.islice(xs, 1, None) instead of xs[1:].

reply

zo1 1 day ago

link

You're right... But at least it's a shallow copy.

reply
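for concreteness, the difference being discussed (my own toy example):

    from itertools import islice

    xs = ['a', 'b', 'c', 'd']

    # xs[1:] builds a (shallow) copy of the tail just to iterate over it
    for x in xs[1:]:
        print(x)

    # islice iterates from the second element without copying the list
    for x in islice(xs, 1, None):
        print(x)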

---

wodenokoto 1 day ago

link

I use python for datamining, and most of my work is done exploring data in iPython.

> First, don't set any values in the outer scope that > aren't IN_ALL_CAPS. Things like parsing arguments are > best delegated to a function named main, so that any > internal variables in that function do not live in the > outer scope.

How do I inspect variables in my main function after I get unexpected results? I always have my main logic live in the outer scope because I often inspect variables "after the fact" in iPython.

How should I be doing this?

reply

msellout 16 hours ago

link

I'd try to test smaller chunks of code for validity. If any block of code is longer than 12 lines, I get nervous that I don't understand what it's doing. Refactor your code into functions as you confirm the code behaves as expected in the interpreter.

It's very difficult to write automated tests when all logic is in outer scope rather than chunked into functions.

reply

hyperion2010 1 day ago

link

If you are using the interpreter directly then that particular bit of advice is hard to follow since you basically live in global all the time. For that reason I would say that this advice applies mainly to .py files.

reply

LyndsySimon 1 day ago

link

Agreed.

There's a big difference between "scripting" and "writing software" in terms of best practices.

If you're writing some ETL scripts in an IPython notebook, it would be overkill to encapsulate everything to keep your global scope clean.

reply

---

noobermin 1 day ago

link

Question: how do you iterate over multiple long lists (in Python 2), especially if zip itself takes a long time to zip them, for example?

reply

takeda 1 day ago

link

You use Python 3.

J/K, while this is technically a limitation of Python 2, there actually is izip in itertools package which is a generator and works in similar way to zip in python 3.

reply
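a small Python 2 example of the izip suggestion (xrange/izip chosen for illustration); in Python 3, zip itself is already lazy:

    # Python 2: zip() would build the full list of pairs up front;
    # itertools.izip() yields pairs lazily, like zip() in Python 3.
    from itertools import izip

    a = xrange(10 ** 7)
    b = xrange(10 ** 7)

    for x, y in izip(a, b):  # no huge temporary list of tuples
        if x + y > 10:
            break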

---

http://redex.racket-lang.org/

---

since we only have one kind of compound structuring data type (graphs), strings are just graphs of characters. This means that:

---

pjmlp 8 days ago

link

Another I left <dynamic language> for <strong typed language> when faced with performance/large codebase/tooling issues post.

Additionally we are now past Rails wave, Node.js wave and into Go wave.

reply

adamors 8 days ago

link

> Additionally we are now past Rails wave, Node.js wave and into Go wave.

And the narrative is similar as well: starting a new project with the shiny new thing is cool but maintaining it is boring and awful and to quote the OP "I need maintainers!".

Part of the problem is the language of course. Maintaining even a medium sized codebase written in a dynamic language is a challenge. But the other part is the mantra that "coding is easy" and "everyone should do it". And now there's an entire generation of developers whose idea of software development is "writing an app in Node in a weekend". Which of course doesn't require years of maintenance, complicated tooling or even adherence to common best practice.

reply

---

" Ask any C++ guru and they will tell you: avoid mutation, avoid side effects, don’t use loops, avoid class hierarchies and inheritance.< " --- milewski ?

---

" 280

Costya Perepelitsa Suggest Bio Votes by Bartosz Milewski, Cody Tagart, Toby Thain, Calvin Huang, (more) Over-reliance on the von Neumann model in our design of computers and programming languages.

(This may be better recognizable today as the "imperative vs functional programming" debate.)

This was argued by John Backus (inventor of FORTRAN and Backus–Naur Form) when he received the ACM Turing Award in 1977 for "profound, influential, and lasting contributions to the design of practical high-level programming systems". This answer will simply be highlights of his 29-page Turing Award lecture (2.9 MB PDF, hosted at stanford.edu) titled: "Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs".

First, the beginning of the abstract:

    Conventional programming languages are growing ever more enormous, but not stronger. Inherent defects at the most basic level cause them to be both fat and weak: their primitive word-at-a-time style of program- ming inherited from their common ancestor--the von Neumann computer, their close coupling of semantics to state transitions, their division of programming into a world of expressions and a world of statements, their inability to effectively use powerful combining forms for building new programs from existing ones, and their lack of useful mathematical properties for reasoning about programs.
    An alternative functional style of programming is founded on the use of combining forms for creating programs. Functional programs deal with structured data, are often nonrepetitive and nonrecursive, are hierarchically constructed, do not name their arguments, and do not require the complex machinery of procedure declarations to become generally applicable. Combining forms can use high level programs to build still higher level ones in a style not possible in conventional languages.

Some choice quotes from the paper follow. Backus refers to (what we would now call) imperative languages as "conventional" languages and "von Neumann" languages/models interchangably, so I'll make that substitution where appropriate. All emphasis (bold text) has been added by me:

    "there is a desperate need for a powerful methodology to help us think about programs, and no [imperative] language even begins to meet that need. In fact, [imperative] languages create unnecessary confusion in the way we think about programs."
    "Surely there must be a less primitive way of making changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck... it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking... programming [in imperative languages] is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck... Combining single words is not what we really should be thinking about, but it is a large part of programming any task in von Neumann languages."
    "The assignment statement is the von Neumann bottleneck of programming languages and keeps us thinking in word-at-a-time terms in much the same way the computer's bottleneck does"
    "the assignment statement splits programming into two worlds. The first world comprises the right sides of assignment statements. This is an orderly world of expressions, a world that has useful algebraic properties (except that those properties are often destroyed by side-effects). It is the world in which most useful computation takes place. The second... is the world of statements... This world of statements is a disorderly one, with few useful mathematical properties. Structured programming can be seen as a modest effort to introduce some order into this chaotic world, but it accomplishes little in attacking the fundamental problems created by the word-at-a-time von Neumann style of programming, with its primitive use of loops, subscripts, and branching flow of control... the split between the two worlds prevents the combining forms in either world from attaining the full power they can achieve in an undivided world."
    "[an imperative] language must have a semantics closely coupled to the state... Thus every feature of [an imperative] language must be spelled out in stupefying detail in its framework. Furthermore, many complex features are needed to prop up the basically weak word-at-a-time style. The result is the inevitable rigid and enormous framework of [an imperative] language."
    "Perhaps the most important element in providing powerful changeable parts in a language is the availability of combining forms that can be generally used to build new procedures from old ones. [Imperative] languages provide only primitive combining forms, and the von Neumann framework presents obstacles to their full use."
    "Denotational semantics and its foundations provide an extremely helpful mathematical understanding of the domain and function spaces implicit in programs. When applied to [a functional] language, its foundations provide powerful tools for describing the language and for proving properties of programs. When applied to [an imperative] language, on the other hand, it provides a precise semantic description and is helpful in identifying trouble spots in the language... a bewildering collection of productions, domains, functions, and equations that is only slightly more helpful in proving facts about programs than the reference manual of the language"
    "denotational and axiomatic semantics are descriptive formalisms whose foundations embody elegant and powerful concepts; but using them to describe [an imperative] language can not produce an elegant and powerful language... proofs about programs use the language of logic, not the language of programming. Proofs talk about programs but cannot involve them directly since the axioms of von Neumann languages are so unusable. In contrast, many ordinary proofs are derived by algebraic methods. These methods require a language that has certain algebraic properties. Algebraic laws can then be used in a rather mechanical way to transform a problem into its solution."
    "[Functional programming] systems offer an escape from conventional word-at-a-time programming... because they provide a more powerful set of functional forms within a unified world of expressions. They offer the opportunity to develop higher level techniques for thinking about, manipulating, and writing programs... the programmer can use his programming language as the language for deriving proofs, rather than having to state proofs in a separate logical system that merely talks about programs."
    "There are numerous indications that the applicative style of programming can become more powerful than the von Neumann style... when these models and their applicative languages have proved their superiority over conventional languages will we have the economic basis to develop the new kind of computer that can best implement them. Only then, perhaps, will we be able to fully utilize large-scale integrated circuits in a computer design not limited by the von Neumann bottleneck."

Early in the paper, Backus illustrates the inferiority of the imperative style by comparing (pseudo-code) implementations of an inner product.

The Imperative Implementation:

    c := 0
    for i := 1 step 1 until n do
        c := c + a[i] x b[i]

Backus points out that this implementation has the following properties:

    "Its statements operate on an invisible 'state' according to complex rules" (those "complex rules" are described in the enormity of the highly complex language reference)
    "It is not hierarchical. Except for the right side of the assignment statement, it does not construct complex entities from simpler ones."
    "It is dynamic and repetitive. One must mentally execute it to understand it."
    "It computes word-at-a-time by repetition (of the assignment) and by modification (of variable i)."
    "Part of the data, n, is in the program; thus it lacks generality and works only for vectors of length n."
    "It names its arguments; it can only be used for vectors a and b. To become general, it requires a procedure declaration. These involve complex issues (e.g. call-by-name versus call-by-value)."
    "Its 'housekeeping' operations are represented by symbols in scattered places (in the 'for' statement and the subscripts in the assignment). This makes it impossible to consolidate housekeeping operations, the most common of all, into single, powerful, widely used operators. Thus in programming those operations one must always start again at square one, writing ['for i in ...'] and ['for j in ...'] followed by assignment statements sprinkled with i's and j's."

The Functional Implementation:

(Insert +) . (ApplyToAll x) . Transpose

I'm assuming most people who read this will not be so familiar with these functional forms, so here's a brief explanation:

    Transpose takes two vectors, <a1, a2, ...> and <b1, b2, ...>, and produces <(a1,b1), (a2,b2), ...>
    ApplyToAll sticks a multiplication in each pair, resulting in <(a1 x b1), (a2 x b2), ...>
    Insert interleaves ("folds") addition between elements of the vector, resulting in (a1 x b1) + (a2 x b2) + ...

(These three are defined in the language the way := and for ... step ... until ... do are defined in the imperative pseudo-language above.) This implementation has the following properties:

    "It operates only on its arguments. There are no hidden states or complex transition rules."
    "It is hierarchical, being built from three simpler functions and three functional forms."
    "It is static and nonrepetitive, in the sense that its structure is helpful in understanding it without mentally executing it."
    "It operates on whole conceptual units, not words; it has three steps; no step is repeated."
    "It incorporates no data; it is completely general; it works for any pair of conformable vectors."
    "It does not name its arguments; it can be applied to any vectors without any procedure declaration or complex substitution rules."
    "It employs housekeeping forms and functions that are generally useful in many other programs; in fact, only + and x are not concerned with housekeeping. These forms and functions can combine with others to create higher level housekeeping operators."

"
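
A rough rendering of the same FP pipeline in Python, just to make the three combining forms concrete for a modern reader (this is my sketch, not anything from Backus's paper):

    from functools import reduce
    from operator import add, mul

    def inner_product(a, b):
        pairs = zip(a, b)                         # Transpose: <(a1,b1), (a2,b2), ...>
        products = (mul(x, y) for x, y in pairs)  # ApplyToAll x: multiply within each pair
        return reduce(add, products, 0)           # Insert +: fold addition over the result

    # works for any pair of conformable vectors, incorporates no data, names no globals
    print(inner_product([1, 2, 3], [4, 5, 6]))    # 32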

---

Bartosz Milewski, Physicist turned Programmer:

My first impression is that Swift borrowed parts of Haskell without creating a complete programming ecosystem. Case in point: using optional in place of exceptions. Good idea: Haskell uses Maybe in a similar context. Except that Haskell supports monads and the "do" notation, so it can easily chain partial functions and short-circuit in case of failure. Well, Swift got it too -- it's called optional chaining. So what's the problem?

The problem is that Swift created a custom solution that only works for optional, whereas Haskell uses a much more general tool: monadic composition. In Haskell, if you're not happy with a binary success/failure option provided by Maybe, you may use the Either sum type to pass around error messages. Since Either is also a monad, the same chaining and short-circuiting will work as well. Swift has sum types, too, but no monadic composition, so you're stuck. Optional chaining will give you exception-like functionality but without propagating any information about the type of the error. If you need error information, you'll have to fall back on manual error testing and propagation, which is painful, verbose, and error prone.

It looks like the designers of Swift got scared of the monad and stopped half way towards a usable functional language.
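
To make the complaint concrete: the point of monadic composition with Either is that the chain short-circuits and carries the error along. A rough sketch of that behaviour in Python (Ok/Err/and_then are invented names for illustration, not any standard library or Swift feature):

    class Ok:
        def __init__(self, value): self.value = value
        def and_then(self, f): return f(self.value)

    class Err:
        def __init__(self, msg): self.msg = msg
        def and_then(self, f): return self       # short-circuit, but keep the message

    def parse_int(s):
        return Ok(int(s)) if s.isdigit() else Err("not a number: " + s)

    def reciprocal(n):
        return Ok(1.0 / n) if n != 0 else Err("division by zero")

    result = parse_int("25").and_then(reciprocal)   # Ok(0.04)
    failed = parse_int("abc").and_then(reciprocal)  # Err("not a number: abc")

Optional chaining gives you the short-circuiting but collapses every failure into a bare nil; the error information is what gets lost.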


---

" Rust let i = box 1234i;

C++ int *i = new int;

Rust infers the correct type, allocates the correct amount of memory and sets it to the value you asked for. This means that it's impossible to allocate uninitialized memory: Rust does not have the concept of null. " -- http://doc.rust-lang.org/master/intro.html

from my plBook:

"What value should a new variable or region of memory have just after it is allocated?

Some choices:

rust appears to have the 4th solution. I like that; let's do that.

---

http://archive.wired.com/wired/archive/3.09/geek.html

---

don't know if we can do anything about this but:

http://www.drdobbs.com/tools/just-let-me-code/240168735

---

some notes from MQL5, Metatrader's language, for Metatrader version 5.

how does this fit in with handlertrees?

i guess a subscription request is adding a new horizontal branch in the handlertree. But what about the vertical axis? You should be able to place your subscription node in front of or behind another given node.

i guess this is like 'advice', e.g. external insertion of a pre- or post- wrapper that executes before or after the wrapped function.

so function calling could be seen as a special case of this (!) -- that is, a function call can be seen as an event emission. Should we view it like that? It's probably not very efficient, but in the common case a Sufficiently Smart Compiler/Interpreter could probably optimize it.
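
A minimal sketch of that advice/wrapping idea in Python (the names are made up; this is not MQL5 and not an actual handlertree implementation), just to show pre- and post-handlers subscribing around an ordinary call:

    def with_advice(fn, before=(), after=()):
        """Wrap fn so subscribed handlers run before and after it, in order --
        their position in the lists is the 'vertical axis' placement."""
        def wrapped(*args, **kwargs):
            for handler in before:
                handler(args, kwargs)        # pre-advice
            result = fn(*args, **kwargs)     # the call, viewed as an event emission
            for handler in after:
                handler(result)              # post-advice
            return result
        return wrapped

    def log_call(args, kwargs): print("calling with", args, kwargs)
    def log_result(result): print("returned", result)

    order_size = with_advice(lambda price, cash: cash // price,
                             before=[log_call], after=[log_result])
    order_size(10, 105)   # logs the call, computes 10, logs the result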

note that:

---

i want the oot implementation to support optimizations as separately installable 'modules', keeping the core implementation small and understandable.

which means that these 'modules' will need to have an execution ordering.

one solution is to assign each module a 'priority number' and execute them in that order. but as i saw with easylatex, the dependency information then becomes implicit, which makes it harder to change the code. so, better to explicitly list dependencies and have oot calculate a suitable topological sort/linearization/total ordering.
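
sketch of what that might look like (the module names below are invented placeholders), using Python's standard-library graphlib for the linearization:

    from graphlib import TopologicalSorter  # Python 3.9+

    # each optimization module explicitly lists the modules it must run after
    deps = {
        "constant-folding": set(),
        "inlining": {"constant-folding"},
        "dead-code-elimination": {"inlining", "constant-folding"},
    }

    # a valid execution order is computed rather than hand-numbered, so adding a
    # module only means declaring its dependencies, not renumbering priorities
    order = list(TopologicalSorter(deps).static_order())
    print(order)  # ['constant-folding', 'inlining', 'dead-code-elimination']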

---

note: if/where the oot implementation uses a nondeterministic solver to efficiently find a solution to an NP-hard problem, it should also launch in parallel a thread running a deterministic solver that tries to prove the problem insoluble, so that at least in theory the algorithm always terminates (also have a timeout).
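
a sketch of that race in Python (the two solver functions are trivial stand-ins, not real solvers):

    import time
    from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

    def heuristic_solver(problem):   # stand-in for the nondeterministic solver
        time.sleep(0.1)
        return "solution found"

    def exhaustive_prover(problem):  # stand-in for the deterministic (in)solubility prover
        time.sleep(5)
        return "provably insoluble"

    def solve(problem, timeout=60):
        pool = ThreadPoolExecutor(max_workers=2)
        futures = [pool.submit(heuristic_solver, problem),
                   pool.submit(exhaustive_prover, problem)]
        # take whichever answers first; stop waiting entirely after the timeout
        done, _ = wait(futures, timeout=timeout, return_when=FIRST_COMPLETED)
        pool.shutdown(wait=False)    # don't block on the slower thread
        return next(iter(done)).result() if done else None

    print(solve("some NP-hard instance"))  # here the heuristic wins the race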

---

gremlin:

--

http://en.wikipedia.org/wiki/Small-C

http://stackoverflow.com/questions/1913621/is-there-a-simple-compiler-for-a-small-language

http://synthcode.com/blog/2009/06/Small_is_Beautiful

--

https://github.com/swannodette/om

--

on the shortcomings of LSL and Unity3d:

http://popl-obt-2014.cs.brown.edu/papers/worst.pdf

---

http://approx2014.cs.umass.edu/10.Approx%2714.pdf

---

http://www.scoop.it/t/trending-programming-languages

---

http://www.future-programming.org/program.html

---

http://science.raphael.poss.name/posts/2013/10/18/finding-the-right-language/

"

I need to pick a programming language to prototype some initial ideas for my next research project, an interactive shell to program and control lightweight process networks. I initially started with Python, but I stumbled into the need for efficient and concise pattern matching on tree-like data structures, which Python does not provide. So I started to search for a replacement, scanning my usual go-to languages: Go, C++, ML, Haskell, shell scripts.

To my surprise, for my specific pet project none of them fit my bill of requirements:

    [match] supports structural pattern matching (fails: C++, Python, shell)
    [gen] can express generic functions (fails: Go)
    [dbg] supports “printf”-style debugging (fails: Haskell)
    [syn] the syntax is concise and does not sacrifice clarity for pedantism (fails: ML, Go)

Then I scanned a few more candidates which I intended to learn/use soon anyways: Scheme (Racket, Chicken), Rust, Shen, Erlang, Clojure. Unfortunately those fail other more important requirements, which the previous 4 languages did match already:

    [bus] high bus factor (fails: Shen, Chicken)
    [proc] supports process creation and inter-process communication out-of-the-box (fails: Shen)
    [conc] supports lightweight and composable concurrency out-of-the-box (fails: Shen, Lua)
    [ffi] provides a comprehensive and simple foreign function interface to C libraries (fails: Erlang, Shen)
    [light] is close to running embedded: small implementations exist and/or run-time OS dependencies are clearly specified (fail: Racket, Erlang)
    [stable] stable language definition (fails: Shen, Rust)

This was intriguing. Then I started to scan further and larger, to other languages I looked at previously, and still no luck.

Here’s a summary table. The requirements (left to right) are listed in decreasing order of importance for my project. They are all somewhat “deal breakers” for now.

Language [bus] [proc] [conc] [ffi] [light] [stable] [match] [gen] [dbg] [syn]
ML (OCaml) N
Haskell N
Go N N
Python N
Unix shell N
Rust N
Scheme/Racket N
Common LISP N
Scheme/Guile N
D N N
Clojure N N ?
Erlang N N
Perl N N
C N N N
C++ N N N
Javascript N N N
Java N N N N
Scheme/Chicken N
Lua N N N
Shen N N N N N

What I learned from this:

    I would be happier if the Rust team would get its sh*t together and stabilize the language.
    It’s a shame that the main author of Chicken is slowly walking away from the implementation and nobody is seriously picking it up yet.
    Maybe it is time for me to learn CamlP4 and add some simplified syntax to Ocaml.
    It’s also a shame that Haskell was not designed by system programmers for system programmers.
    The Python community should learn more from functional language designers.
    I dread the prospect of implementing my own programming language but this project might leave me no choice.

"

---

The essence of component-based design and coordination

http://science.raphael.poss.name/pub/poss.13.coord.pdf

---

Extrinsically adaptable systems or “what are hacker-friendly systems and how we should ask for more” Raphael ‘kena’ Poss University of Amsterdam, The Netherlands

http://science.raphael.poss.name/pub/poss.13.exadapt.pdf

---

http://p5js.org/

---

google search for keyword args, partial function application, currying, and 0-ary fns:

https://www.google.com/search?client=ubuntu&hs=thb&channel=fs&biw=900&bih=1287&q=partial+function+application+%220-ary%22+%22keyword+arguments%22+syntax&oq=partial+function+application+%220-ary%22+%22keyword+arguments%22+syntax&gs_l=serp.3...30702.31415.0.31499.7.4.0.0.0.0.0.0..0.0....0...1c.1.51.serp..7.0.0.-K0mS58QwnI

http://arxiv.org/pdf/1006.3678.pdf

http://pvs.csl.sri.com/doc/pvs-language-reference.pdf

http://stackoverflow.com/questions/11173660/can-one-partially-apply-the-second-argument-of-a-function-that-takes-no-keyword

http://lambda-the-ultimate.org/node/4381

https://groups.google.com/forum/#!topic/pilud/nSdSIomTD9w

https://groups.google.com/forum/#!topic/pilud/

http://ptg.ucsd.edu/~staal/programming.html#currying-composition-and-a-monad-in-python

" Convergence in Language Design:A Case of Lightning StrikingFour? Times in the Same Place Peter Van Roy Universit´e catholique de Louvain,B-1348 Louvain-la-Neuve, Belgium pvr@info.ucl.ac.behttp:www.info.ucl.ac.be/people/cvvanroy.html Abstract. What will a definitive programming language look like? By definitive language I mean a programming language that gives good so-lutions at its level of abstraction, allowing computer science researchersto move on and work at higher levels. Given the evolution of computerscience as a field with a rising level of abstraction, it is my belief thata small set of definitive languages will eventually exist. But how can welearn something about this set, considering that many basic questionsabout languages have not yet been settled? In this paper, I give sometentative conclusions about one definitive language. I present four casestudies of substantial research projects that tackle important problems infour quite different areas: fault-tolerant programming, secure distributedprogramming, network-transparent distributed programming, and teach-ing programming as a unified discipline. All four projects had to thinkabout language design. In this paper, I summarize the reasons why eachproject designed the language it did. It turns out that all four languageshave a common structure. They can be seen as layered, with the follow-ing four layers in this order: a strict functional core, then deterministicconcurrency, then message-passing concurrency, and finally shared-stateconcurrency (usually with transactions). This confirms the importanceof functional programming and message passing as important defaults;however, global mutable state is also seen as an essential ingredient "

http://www.academia.edu/2547014/A_Syntactic_Approach_to_Combining_Functional_Notation_Lazy_Evaluation_and_Higher-Order_in_LP_Systems

" 18:14:11 <koninkje> edwardk: You wouldn't happen to know of any work on type theory of keyword arguments, would you? 18:14:34 <edwardk> datalog with unstratified omega-continuous semiring solving, etc. ;) 18:14:41 --- quit: xcv (Remote host closed the connection) 18:14:42 <koninkje> :) 18:14:45 <koninkje> sounds fun 18:14:55 <edwardk> hrmm. named arguments? 18:15:01 --- join: hajimehoshi (~hajimehos@209.118.182.66) joined #haskell 18:15:05 <koninkje> er, keyword arguments I meant 18:15:14 --- quit: iago (Quit: Leaving) 18:15:26 <koninkje> e.g., in Python, OCaml, Scheme/Lisp,... 18:15:35 <edwardk> yeah. i want to call a function with these named args 18:15:37 <koninkje> or a la Haskell's record syntax 18:15:47 <edwardk> some optional, some required, etc. 18:16:11 <edwardk> i was going to point you towards the ocaml stuff 18:16:28 <koninkje> like what? 18:16:52 <edwardk> i kinda stopped thinking about them when i realized i couldn't have them nicely in haskell since i can't 'curry' the partial application to a few named args without telling it somehow that i'm all done 18:16:58 --- quit: nighty^ (Quit: Disappears in a puff of smoke) 18:17:03 --- quit: jasonjckn (Read error: Connection reset by peer) 18:17:12 <koninkje> I'm finishing up one of my quals and am looking for citations for the related works section. But while many languages have keyword args, I can't find much on the type theory thereof... 18:17:14 --- join: nighty^ (~nighty@tin51-1-82-226-147-104.fbx.proxad.net) joined #haskell 18:17:22 <edwardk> ah 18:17:23 --- join: jasonjckn (~jasonjckn@8.25.194.28) joined #haskell 18:17:32 <edwardk> from a theoretical perspective i can't think of much 18:17:35 <edwardk> if anything 18:17:36 <koninkje> my chiastic lambda calculus would let you do that ;) 18:18:00 * koninkje hasn't looked into the optionality part of things yet though 18:18:16 <niteria> hm, interesting how Applicative instance for (e ->) is K and S, and S could be treated like application in given enviroment 18:18:38 --- quit: hiyakashi (Quit: お前は知りすぎた) 18:18:59 <elliott> niteria: that's basically exactly what gives rise to Applicative 18:19:05 <elliott> it's "generalised K/S" 18:19:06 <koninkje> edwardk: as far as the analytics stuff, I'm swamped for this upcoming week; but we should chat sometime after that 18:19:14 <edwardk> k 18:19:20 <elliott> S is "generalised application" 18:19:40 <edwardk> i'm on here (and #haskell-lens) almost all the time 18:20:21 <edwardk> and i'll braindump then "

https://www.google.com/search?client=ubuntu&hs=7Ww&channel=fs&q=koninkje+%22keyword+arguments%22&oq=koninkje+%22keyword+arguments%22&gs_l=serp.3...2056.2056.0.2176.1.1.0.0.0.0.0.0..0.0....0...1c..51.serp..1.0.0.QElJwFrDcCA

http://agda.orangesquash.org.uk/2013/March/19.html

koninkje is this person; todo: ask her for her thoughts on keyword args, partial function application, currying, and 0-ary fns: http://code.haskell.org/~wren/ http://winterkoninkje.dreamwidth.org/

---

do we want immutability (non-"openness") of builtin types?

http://lucumr.pocoo.org/2014/8/16/the-python-i-would-like-to-see/ talks about this in Python a bit (search for "immutability of builtin types"). Some pros:

---

that essay, http://lucumr.pocoo.org/2014/8/16/the-python-i-would-like-to-see/ , also talks about how atomicity guarantees in the builtins lead to not using dynamic attribute lookup in their implementations (because any dynamic lookup could trigger eg a lazy import), which leads to reimplementations of things rather than delegation:

" A good example are collections. Lots of collections have convenience methods. As an example a dictionary in Python has two methods to retrieve an object from it: __getitem__() and get(). When you implement a class in Python you will usually implement one through the other by doing something like return self.__getitem__(key) in get(key).

For types implemented by the interpreter that is different. The reason is again the difference between slots and the dictionary. Say you want to implement a dictionary in the interpreter. Your goal is to reuse code still, so you want to call __getitem__ from get. How do you go about this?

A Python method in C is just a C function with a specific signature. That is the first problem. That function's first purpose is to handle the Python level parameters and convert them into something you can use on the C layer. At the very least you need to pull the individual arguments from a Python tuple or dict (args and kwargs) into local variables. So a common pattern is that dict__getitem__ internally does just the argument parsing and then calls into something like dict_do_getitem with the actual parameters. You can see where this is going. dict__getitem__ and dict_get both would call into dict_get which is an internal static function. You cannot override that.

There really is no good way around this. The reason for this is related to the slot system. There is no good way from the interpreter internally issue a call through the vtable without going crazy. The reason for this is related to the global interpreter lock. When you are a dictionary your API contract to the outside world is that your operations are atomic. That contract completely goes out of the window when your internal call goes through a vtable. Why? Because that call might now go through Python code which needs to manage the global interpreter lock itself or you will run into massive problems.

Imagine the pain of a dictionary subclass overriding an internal dict_get which would kick off a lazy import. You throw all your guarantees out of the window. Then again, maybe we should have done that a long time ago. "
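
the pure-Python delegation pattern the essay is contrasting with, as a sketch (not CPython's actual dict code):

    class Mapping:
        def __init__(self):
            self._data = {}

        def __getitem__(self, key):
            return self._data[key]

        def get(self, key, default=None):
            # the convenience method is implemented *through* the primitive one,
            # so a subclass that overrides __getitem__ changes get() for free
            try:
                return self.__getitem__(key)
            except KeyError:
                return default

at the C level the analogous internal call would have to go through the vtable, which is exactly what the atomicity contract described above forbids.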

---

ok, this is crazy:

http://rachelbythebay.com/w/2014/08/19/fork/

similar:

cperciva 3 hours ago

link

This reminds me of one of the most epic bugs I've ever run into:

    mkdir("/foo", 0700);
    chdir("/foo");
    recursively_delete_everything_in_current_directory();

Running as root, this usually worked fine: It would create a directory, move into it, and clean out any garbage left behind by a previous run before doing anything new.

Running as non-root, the mkdir failed, the chdir failed, and it started eating my home directory.

reply

swah 3 hours ago

link

In those times I wish I could use the emacs lisp way:

    (let (dir "/foo")
      (create-directory dir)
      (with-current-directory dir
        (delete-all-files-recursively)))

Factor recognized the value of dynamically scoped variables: http://concatenative.org/wiki/view/Factor/FAQ/What's%20Facto...

A lot of code became much simpler because of that decision.

reply
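
a sketch of the same defensive shape in Python (pushd is a made-up helper, not a standard one): the directory change fails loudly and is scoped, so the destructive step can never run from the wrong place:

    import os
    from contextlib import contextmanager

    @contextmanager
    def pushd(path):
        os.makedirs(path, exist_ok=True)   # raises instead of failing silently
        old = os.getcwd()
        os.chdir(path)                     # likewise raises on failure
        try:
            yield
        finally:
            os.chdir(old)

    # the recursive delete only ever runs if both steps above succeeded
    with pushd("/tmp/foo"):
        for name in os.listdir("."):
            print("would delete", name)    # stand-in for the destructive step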

staunch 2 hours ago

link
  $ mkdir /tmp/foo && cd /tmp/foo && touch bar.txt

reply

---

jwise0 3 hours ago

link

In a similar family, note also that setuid() can fail! If you try to setuid() to a user that has reached their ulimit for number of processes, then setuid() will fail, just like fork() would for that user.

This is a classic way to get your application exploited. Google did it (at least) twice in Android: once in ADB [1], and once in Zygote [2]. Both resulted in escalation.

Check your return values! All of them!

[1] http://thesnkchrmr.wordpress.com/2011/03/24/rageagainsttheca... [2] https://github.com/unrevoked/zysploit

reply
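
in Python the defensive version of the privilege drop is short, since os.setuid raises on failure rather than returning an ignorable code; verifying afterwards is cheap insurance (the uid/gid values are placeholders):

    import os

    def drop_privileges(uid=1000, gid=1000):
        # order matters: drop the group first, while we still have the privilege to do so
        os.setgid(gid)
        os.setuid(uid)   # raises OSError if the kernel refuses, instead of failing silently
        # belt and braces: confirm we really are no longer root
        if os.getuid() == 0 or os.geteuid() == 0:
            raise RuntimeError("failed to drop privileges")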

agwa 3 hours ago

link

Thankfully, setuid() no longer fails on Linux because of RLIMIT_NPROC, as of 2011[1].

Still, I agree with you 100%: check your syscall return values, especially security-critical syscalls like setuid!

[1] http://lwn.net/Articles/451985/ and http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.g...

reply

spudlyo 3 hours ago

link

If a function be advertised to return an error code in the event of difficulties, thou shalt check for that code, yea, even though the checks triple the size of thy code and produce aches in thy typing fingers, for if thou thinkest "it cannot happen to me", the gods shall surely punish thee for thy arrogance. [0]

[0]: http://www.lysator.liu.se/c/ten-commandments.html

reply

---