proj-oot-ootNotes13

https://wizardsofsmart.wordpress.com/2015/03/19/web-apis-vs-rpc/

---

django release discussion:

"

bebop 3 days ago

Some great things in this release:

and it is an LTS. Time to get upgrading.

crdoconnor 2 days ago

>uuid field

This one is good.

I kind of wish it were default for primary keys, since the number of times I got burned by having databases I couldn't easily merge (which UUIDs help a lot with) way exceeds the number of times I had performance/memory issues caused by actually using UUIDs. "

so, we should support things like UUID keys
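
for example, a minimal Django-style sketch of a UUID primary key (the model and field names are just illustrative; UUIDField and these keyword arguments are the actual Django API):

import uuid
from django.db import models

class Account(models.Model):
    # random uuid4 keys make rows from independently-created databases mergeable without collisions
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    name = models.CharField(max_length=200)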

--

being able to do stuff like:

[(k,v) for (k,v) in geneName2Idx.items() if v == 0]

is just great. Note the destructuring bind.
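
for instance, with a made-up mapping:

geneName2Idx = {'BRCA1': 0, 'TP53': 1, 'EGFR': 0}
[(k, v) for (k, v) in geneName2Idx.items() if v == 0]
# => [('BRCA1', 0), ('EGFR', 0)]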

--

some stuff coming in C++:

"

hendzen 14 days ago

I think C++ has a very bright future - due to two driving forces:

1) The C++ Standards committee has been doing a very, very good job. Aside from the big one (compiler enforced memory safety), some of the best Rust features are on their way into C++. For example:

Other great stuff coming to C++ over the next few years:

... much much more "


hsivonen 14 days ago

Fun times ahead if document.write isn't already supported. When I rewrote Gecko's HTML parsing, accommodating document.write was a (or maybe the) dominant design issue.


realusername 14 days ago

I'm quite curious: why is document.write so hard to implement compared to other methods? Doesn't it work a bit like innerHTML?


hsivonen 13 days ago

It's quite different from innerHTML, since document.write inserts source characters to the character stream going into the parser, and there's no guarantee that all elements that get opened get closed. There's even no guarantee that the characters inserted don't constitute a partial tag. So document.write potentially affects the parsing of everything that comes after it.

For this to work, scripts have to appear to block the parser. However, it's desirable to start fetching external resources (images, scripts, etc.) that occur after the script that's blocking the parser. In Firefox, the scripts see the state of the world as if the parser was blocked, but in reality the parser continues in the background and keeps starting fetches for the external resources it finds and keeps building a queue of operations that need to be performed in order to build the DOM according to what was parsed. If the script doesn't call document.write or calls it in a way that closes all the elements that it opens, the operation queue that got built in the background is used. If the document.write is of the bad kind, the work that was done in the background is thrown away and the input stream is rewound. See https://developer.mozilla.org/en-US/docs/Mozilla/Gecko/HTML_... for the details.

For added fun, document.write can write a script that calls document.write.

Animats 13 days ago

What a mess. To support a stupid HTML feature, the browser's parser has to be set up like a superscalar CPU, retirement unit and all. Hopefully the discard operation doesn't happen very often.

---

"

In addition to bi-directional data binding, we can also bind parameterized functions to events:

var vm = todo.vm

m("button", {onclick: vm.add.bind(vm, vm.description)}, "Add")

In the code above, we are simply using the native Javascript Function::bind method. This creates a new function with the parameter already set. In functional programming, this is called partial application.

The vm.add.bind(vm, vm.description) expression above returns a function that is equivalent to this code:

onclick: function(e) { todo.vm.add(todo.vm.description) }

Note that when we construct the parameterized binding, we are passing the description getter-setter by reference, and not its value. We only evaluate the getter-setter to get its value in the controller method. This is a form of lazy evaluation: it allows us to say "use this value later, when the event handler gets called".

Hopefully by now, you're starting to see why Mithril encourages the usage of m.prop: Because Mithril getter-setters are functions, they naturally compose well with functional programming tools, and allow for some very powerful idioms. In this case, we're using them in a way that resembles C pointers. "
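
the same partial-application idiom, sketched in Python for comparison (functools.partial plays the role of Function.bind here; the names below are made up and are not part of Mithril):

from functools import partial

def add(description):
    # the getter is only evaluated here, when the "event handler" actually runs
    print("adding todo:", description())

description = lambda: "buy milk"      # stands in for the m.prop getter-setter
onclick = partial(add, description)   # bind the argument now, call later

onclick()   # adding todo: buy milk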

---

http://picat-lang.org/ is cool

---

" We use Go for a lot of our server development here at Mailgun, and it’s great. Coming from Python, though, there is one thing I really missed:

import pdb; pdb.set_trace()

I could insert this little line whenever I was confused about what was happening in the code and see exactly what was going on. But in Go? Not so much. When I started this project in January, gdb failed on every program I tried it on. delve didn’t work on OS X, and print-statement-debugging was too slow and limited. What's a developer to do? "

" godebug

    All that stands in the way [of a good Go debugger] is the writing of a lot of non-portable low-level code talking to buggy undocumented interfaces.

godebug is a different kind of debugger. Traditional debuggers for compiled languages use low-level system calls and read binary files for debugging symbols. They’re hard to get right and they’re hard to port.

godebug takes a different approach: take the source code of a target program, insert debugging code between every line, then compile and run that instead. " -- http://blog.mailgun.com/introducing-a-new-cross-platform-debugger-for-go/
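a naive toy version of that idea in Python, just to make the approach concrete (this is not how godebug itself works in detail, and it breaks on multi-line statements):

import textwrap

def instrument(src):
    # insert a trace statement before every line of the function body
    out = []
    for lineno, line in enumerate(src.splitlines(), 1):
        stripped = line.strip()
        if stripped and not stripped.startswith("def "):
            indent = line[:len(line) - len(line.lstrip())]
            out.append("%sprint('trace line %d:', %r)" % (indent, lineno, stripped))
        out.append(line)
    return "\n".join(out)

source = textwrap.dedent("""\
def add(a, b):
    total = a + b
    return total
""")

exec(instrument(source))
print(add(2, 3))   # prints the two trace lines, then 5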

discussion: https://news.ycombinator.com/item?id=9409423

 DannyBee 5 hours ago

"Since it's modifying the source before compiling it, I expect that the compiler will conclude that most optimizations can't be applied when they cross breakpoint boundaries."

While true, this depends on the compiler knowing this is a magical breakpoint barrier it can't move things across. The compiler has no idea this is a magical barrier unless something has told it so. Looking at the godebug library, I don't see this being the case; it looks like it translates into an atomic store and an atomic load to some variables, and then a function call, which the compiler is definitely not going to see as a "nothing can move across" barrier.

(Also, having the debugging library alter the semantics of the program is 100% guaranteed to lead to bugs that are not visible when using the library, etc)

skybrian 3 hours ago

I think you're right that it's going to introduce bugs in concurrent code. For example, it's legal to send a pointer through a channel as a way of transferring ownership and never access the object again. If the debugger rewrites the code so that "never accesses it again" is no longer true, it's created a data race.

On the other hand, godebug generates straightforward single-threaded code that creates pointers to locals in a shadow data structure and accesses them later. There's no reason it shouldn't work if you're not using goroutines.

In particular, a previous call to godebug.Declare("x", &x) will add a pointer to what was previously a local variable to a data structure. This effectively moves all locals to a heap representation of the goroutine's stack, to be accessed later. It's going to kill performance, but it's legal to do.

---

 tormeh 1 day ago

The one I'm writing now: "Creating and implementing deterministic multithreading programming language significantly harder than hoped"

thechao 1 day ago

Finished this five years ago: "Stepanov-style generic programming is the bees knees; we don't understand why."

seanmcdirmid 1 day ago

It's not that hard. I'm thinking the song "let it go" can help (define determinism as an eventual goal; at least, that is the approach I find that works for me).

tormeh 1 day ago

It's not that it's theoretically hard, but writing a compiler with typechecking and all is just a lot of work. It's a master's thesis, btw, not a doctoral one.

seanmcdirmid 1 day ago

My own project, Glitch, uses replay to work out glitches (bubbles of non-determinism in a deterministic execution); incidentally, it also makes writing incremental compilers easy (see http://research.microsoft.com/en-us/people/smcdirm/managedti...), and I recently used it for a new type checker (see https://www.youtube.com/watch?v=__28QzBdyBU&feature=youtu.be). Ok, it's still a lot of work, but if you look at the problem you are solving, solving it can make the compiler-writing aspect easier also.

---

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/arguments

---

" Sony's Open Source command line tool for performing python one liners using unix-like pipes

They call it "The Pyed Piper" or pyp. It's pretty similar to the -c way of executing python, but it imports common modules and has its own preset variables that help with splitting/joining, line counting, etc. You use pipes to pass information forward instead of nested parentheses, and then use your normal python string and list methods. Here is an example from the homepage:

Here, we take a linux long listing, capture every other line from the 5th through the 10th, keep the username and file name fields, replace "hello" with "goodbye", capitalize the first letter of every word, and then add the text "is splendid" to the end:

ls -l | pyp "pp[5:11:2] | whitespace[2], w[-1] | p.replace('hello','goodbye') | p.title(), 'is splendid'"

and the explanation:

This uses pyp's built-in string and list variables (p and pp), as well as the variable whitespace and its shortcut w, which both represent a list based on splitting each line on whitespace (whitespace = w = p.split()). The other functions and selection techniques are all standard python. Notice the pipes ("|") are inside the pyp command.

http://code.google.com/p/pyp/ http://opensource.imageworks.com/?p=pyp "
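
for comparison, roughly the same transformation in plain Python (a sketch; it reads the `ls -l` output from stdin and assumes the usual column layout, with the owner in field 2 and the file name last):

import sys

lines = sys.stdin.read().splitlines()
for line in lines[5:11:2]:
    fields = line.split()
    p = " ".join([fields[2], fields[-1]])
    p = p.replace("hello", "goodbye").title()
    print(p, "is splendid")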

---

https://news.ycombinator.com/item?id=9446980

" At the same time it demonstrates everything that is wrong with traditional shells and what PowerShell? gets so right.

jq is not a "Unixy" tool in the sense that it should do one thing and do it right. jq implements its own expression language, command processor complete with internal "pipelines". Why would a tool need to do that? find is another utility that does many, many other things than to "find" items: It executes commands, deletes objects etc.

Consider this challenge that included parsing json, filtering, projecting and csv-formatting output: https://news.ycombinator.com/item?id=9438109

Several solutions use jq - to good effect. But the PowerShell solution uses PowerShell expressions to filter, sort and project items.

The problem is - at the core - that the traditional command tools are severely restricted by only being able to rely on a text pipeline convention. You cannot parse json and send the "objects" along to another tool. Well, you can, if the json tree is extremely basic - like 2 levels. "

probably other good comments in there too, i havent read it yet
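
for what it's worth, the parse/filter/project/format pipeline that challenge asks for is short once the data stays as objects, e.g. in Python (the field names here are made up, not the ones from the linked challenge):

import csv, json, sys

records = json.load(sys.stdin)                       # parse
active = [r for r in records if r.get("active")]     # filter
rows = [(r["name"], r["score"]) for r in active]     # project
writer = csv.writer(sys.stdout)
writer.writerow(["name", "score"])
writer.writerows(sorted(rows, key=lambda r: -r[1]))  # sort + csv-format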

--

learn to code / hypercard application domain:

http://livecode.com/

--

aliasing: "no man is an island"

--

	Postgres gets support for upsert

anilshanbhag 14 hours ago

This is actually huge. A common problem that arises when you write applications is you want to INSERT if key does not exist else UPDATE. The right way of doing this without an upsert is using a transaction. However this will make life easier as you can do it directly in one SQL statement.

colanderman 14 hours ago

Just "using a transaction" is insufficient. You must be prepared to handle the case that neither the INSERT nor the UPDATE succeeds (with READ COMMITTED isolation), or that the transaction fails (with REPEATABLE READ isolation or better), by repeating the transaction. And if latency is at all a concern to you, you must wrap this all in a stored procedure to avoid the necessary round-trip-time between the commands.

Hence this is more than just saving typing a couple lines -- this saves writing entire stupid loops to do what is conceptually a simple (and very common) operation.

Postgres gets better and better.

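the new statement is INSERT ... ON CONFLICT (Postgres 9.5+). A minimal sketch of using it from Python with psycopg2 (the table and columns are made up):

import psycopg2

conn = psycopg2.connect("dbname=test")
with conn, conn.cursor() as cur:
    # insert the row, or bump the existing one if the key is already present
    cur.execute("""
        INSERT INTO counters (key, value) VALUES (%s, %s)
        ON CONFLICT (key) DO UPDATE SET value = counters.value + EXCLUDED.value
    """, ("hits", 1))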

--

" These applications (“appliances” is a better word) come equipped with a fixed vocabulary of actions, speak no common language, and cannot be extended, composed, or combined with other applications except with enormous friction. By analogy, what we have is a railway system where the tracks in each region are of differing widths, forcing trains and their cargo to be totally disassembled and then reassembled to transport anything across the country. As ridiculous as this sounds, this is roughly what we do at application boundaries: write explicit serialization and parsing code and lots of tedious (not to mention inefficient) code to deconstruct and reconstruct application data and functions. " -- http://pchiusano.github.io/2013-05-22/future-of-software.html

---

" Alice: You’re missing my point! Compare the overhead of calling a function in the ‘native’ language of your program vs calling a function exposed via JSON+REST. And no I don’t mean the computational overhead, though that is a problem too. Within the language of your program, if I want to call a function returning a list of (employee, date) pairs, I simply invoke the function and get back the result. With JSON+REST, I get back a blob of text, which I then have to parse into a syntax tree and then convert back to some meaningful business objects. If I had that overhead and tedium for every function call I made I’d have quit programming long ago.

Bob: Are you just saying you want more built in types for JSON, then? That’s easy, I hear there’s even a proposal to add a date type to JSON.

Alice: And maybe in another fifteen years JSON will grow a standard for sending algebraic data types (they’ve been around for like 40 years, you know) and other sorts of values, like you know, functions. "

" Any creator wishing to build atop or extend the functionality of an application faces a mountain of idiosyncratic protocols and data representations and some of the most tedious sort of programming imaginable: parsing, serializing, converting between different data representations, and error handling due to the inherent problem of having to pass through a dynamically typed and insufficiently expressive communication channel! " -- http://pchiusano.github.io/2013-05-22/future-of-software.html

--

" People write essays, create illustrations, organize and edit photographs, send messages to friends, play card games, watch movies, comment on news articles, and they do serious work too–analyze portfolios, create budgets and track expenses, find plane flights and hotels, automate tasks, and so on. But what is important, what truly matters to people is simply being able to perform these actions. That each of these actions presently take place in the context of some ‘application’ is not in any way essential. In fact, I hope you can start to see how unnatural it is that such stark boundaries exist between applications, and how lovely it would be if the functionality of our current applications could be seamlessly accessed and combined with other functions in whatever ways we imagine. This sort of activity could be a part of the normal interaction that people have with computers, not something reserved only for ‘programmers’, and not something that requires navigating a tedious mess of ad hoc protocols, dealing with parsing and serialization, and all the other mumbo-jumbo that has nothing to do with the idea the user (programmer) is trying to express. The computing environment could be a programmable playground, a canvas in which to automate whatever tasks or activities the user wished. "

" Alice: ‘Complex programs’? You mean like Instagram? A website where you can post photos of kittens and subscribe to a feed of photos produced by other people? Or Twitter? Or any one of the 95% of applications which are just a CRUD interface to some data store? The truth is, if you strip applications of all their incidental complexity (largely caused by the artificial barriers at application boundaries), they are often extremely simple. But in all seriousness, why can’t more people write programs? Millions of people use spreadsheets, an even more impoverished and arcane programming environment than what we could build. " -- http://pchiusano.github.io/2013-05-22/future-of-software.html

--

"

The result of all this is that most of the time people spend building software is wasted on repeatedly solving uninteresting problems, artificially created due to bad foundational assumptions:

    Perhaps 70% of developer time is spent dealing with parsing, serialization, and persistence. Values are encoded to and from JSON, to and from various binary formats, and to and from various persistent data stores… over and over again.
    Another 25% is spent on explicit networking. We don’t merely specify that a value must be sent from one node to another, we also specify how in exhaustive detail.
    Somewhere in between all this plumbing code is a tiny amount of interesting, pure computation, which takes up the remaining 5% of developer time. And there’s very little reuse of that 5% across applications, because every app is wrapped in a different 95% of cruft and the useful logic is often difficult to separate!

These numbers are made up, of course, but if anything they are optimistic. "

-- http://unisonweb.org/2015-05-07/about.html

--

nostrademons 23 hours ago

I suspect that "how to structure my data for serialization" is what the author means by the 70% of time spent on parsing, serialization, and persistence. I hadn't heard of Unison before, but I recognize the author's name from Lambda: The Ultimate, and I suspect that what he has in mind is that any value within the Unison language can appear on any Unison node, transparently. Instead of picking out exactly which fields you need and then creating new JSONObjects or protobufs for them, just send the whole variable over.

I also suspect (being a language design geek, and also having worked with some very large distributed systems) that the reason why this is seductive is also why it's unworkable. I think I probably do spend close to 70% of my time dealing with networking and data formats (and yes, I use off-the-shelf serialization formats and networking protocols), but that's because a watch is very different from a phone which is very different from a persistent messaging server which is different from a webpage, and Bluetooth is very different from cell networks which are very different from 10G-Ethernet in a DC. Try to dump your server data structures directly to your customer's cell phone and you're about to have a lot of performance and security problems.

--

"

Why UX designers should care about type theory

Applications are bad enough in that they trap potentially useful building blocks for larger program ideas behind artificial barriers, but they fail at even their stated purpose of providing an ‘intuitive’ interface to whatever fixed set of actions and functionality its creators have imagined. Here is why: the problem is that for all but the simplest applications, there are multiple contexts within the application and there needs to be a cohesive story for how to present only ‘appropriate’ actions to the user and prevent nonsensical combinations based on context. This becomes serious business as the total number of actions offered by an application grows and the set of possible actions and contexts grows. As an example, if I just have selected a message in my inbox (this is a ‘context’), the ‘send’ action should not be available, but if I am editing a draft of a message it should be. Likewise, if I have just selected some text, the ‘apply Kodachrome style retro filter’ action should not be available, since that only makes sense applied to a picture of some sort.

These are just silly examples, but real applications will have many more actions to organize and present to users in a context-sensitive way. Unfortunately, the way ‘applications’ tend to do this is with various ad hoc approaches that don’t scale very well as more functionality is added–generally, they allow only a fixed set of contexts, and they hardcode what actions are allowed in each context. (‘Oh, the send function isn’t available from the inbox screen? Okay, I won’t add that option to this static menu’; ‘Oh, only an integer is allowed here? Okay, I’ll add some error checking to this text input’) Hence the paradox: applications never seem to do everything we want (because by design they can only support a fixed set of contexts and because how to handle each context must be explicitly hardcoded), and yet we also can’t seem to easily find the functionality they do support (because the set of contexts and allowed actions is arbitrary and unguessable in a complex application).

There is already a discipline with a coherent story for how to handle concerns of what actions are appropriate in what contexts: type theory. Which is why I now (half) jokingly introduce Chiusano’s 10th corollary:

    Any sufficiently advanced user-facing program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of a real programming language and type system.

Programming languages and type theory have largely solved the problem of how to constrain user actions to only ‘appropriate’ alternatives and present these alternatives to users in an exquisitely context-sensitive way. The fundamental contribution of a type system is to provide a compositional language for describing possible forms values can take, and to provide a fully generic program (the typechecker) for determining whether an action (a function) is applicable to a particular value (an argument to the function). Around this core idea we can build UI for autocompletion, perfectly appropriate context menus, program search, and so on. Type systems provide a striking, elegant solution to a problem that UX designers now solve in more ad hoc ways. "

-- http://pchiusano.github.io/2013-05-22/future-of-software.html

" Today I was forced to edit a Microsoft Word document, containing comments (made by me, and by others) and tracked changes. I found myself wanting to delete all comments, and accept all tracked changes. It took a few minutes to figure out, and I very quickly gave up trying to actually discover the functionality within Word’s actual UI and resorted to using Google. God help me if I wanted to, say, delete only comments made by me within the last ten days. ... Type systems solve exactly this problem. Here’s a sketch of how a type directed version of Word could work:

    I click on a comment. A status bar indicates that I have selected something of type Comment. Now that I have a handle to this type, I then ask for functions accepting a List Comment. The delete comment function pops up, and I select it.
    The UI asks that I fill in the argument to the delete comment function. It knows the function expects a List Comment and populates an autocomplete box with several entries, including an all comments choice. I select that, and hit Apply. The comments are all deleted.
    If I want, I can insert a filter in between the call to all comments and the function to delete those comments. Of course, the UI is type directed–it knows that the input type to the filtering function must accept a Comment, and prepopulates an autocomplete with common choices–by person, by date, etc.

"

-- http://pchiusano.github.io/2013-09-10/type-systems-and-ux-example.html
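
the core move in both of these passages is filtering the available actions by the type of the current selection. A toy Python sketch of that idea (all the names are made up; this has nothing to do with Word's or Unison's internals):

from dataclasses import dataclass
from typing import get_type_hints

@dataclass
class Comment:
    author: str
    text: str

@dataclass
class Image:
    pixels: bytes

def delete_comment(c: Comment): ...
def reply_to(c: Comment): ...
def apply_retro_filter(img: Image): ...

ACTIONS = [delete_comment, reply_to, apply_retro_filter]

def applicable_actions(selection):
    # offer only the actions whose first parameter type matches the selection
    out = []
    for f in ACTIONS:
        hints = [t for name, t in get_type_hints(f).items() if name != "return"]
        if hints and isinstance(selection, hints[0]):
            out.append(f.__name__)
    return out

print(applicable_actions(Comment("me", "typo here")))   # ['delete_comment', 'reply_to']
print(applicable_actions(Image(b"...")))                # ['apply_retro_filter']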

---

" Programs are edited in a (browser-based) semantic editor which guarantees programs are well-formed and typecheck by construction. There are no parse or type errors to report to the user, since the UI constrains edits to those that are well-typed.

"

" The codebase is a purely functional data structure: In Unison, terms and types are uniquely identified by a nameless hash of their structure. References stored in the syntax tree are by hash, and human-readable names are separately stored metadata used only for display purposes by the editor. As in Nix, the value associated with a hash never changes. “Modifying” a term creates a new term, with a new hash. This has far-ranging consequences, from much better support for large-scale refactoring, to trivial sharing of data and functions across node boundaries in distributed systems. "

-- http://unisonweb.org/2015-05-07/about.html
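
a toy sketch of the content-addressing idea in Python (this is not Unison's actual hashing or term format): terms are stored under a hash of their structure, human-readable names are just metadata pointing at a hash, so renaming never touches code and "modifying" a term only ever adds a new hash:

import hashlib, json

store = {}   # hash -> term (immutable once stored)
names = {}   # human-readable name -> hash (editable metadata)

def add_term(term):
    h = hashlib.sha256(json.dumps(term, sort_keys=True).encode()).hexdigest()
    store[h] = term
    return h

inc_v1 = add_term({"lambda": "x", "body": ["+", "x", 1]})
names["increment"] = inc_v1

# "editing" increment creates a new term with a new hash; the old term is untouched
inc_v2 = add_term({"lambda": "x", "body": ["+", "x", 2]})
names["increment"] = inc_v2

assert inc_v1 in store and inc_v1 != inc_v2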

"

In the nontrivial case: For many interesting cases of codebase transformations, simply making the change and fixing the errors doesn’t scale. You have to deal with an overwhelming list of errors, many of which are misleading, and the codebase ends up living in a non-compiling state for long periods of time. You begin to feel adrift in a sea of errors. Sometimes you’ll make a change, and the error count goes down. Other times, you’ll make a change, and it goes up. Hmm, I was relatively sure that was the right change, but maybe not… I’m going to just hope that was correct, and the compiler is getting a bit further now.

What’s happened? You’re in a state where you are not necessarily getting meaningful, accurate feedback from the compiler. That’s bad for two reasons. Without this feedback, you may be writing code that is making things worse, not better, building further on faulty assumptions. But more than the technical difficulties, working in this state is demoralizing, and it kills focus and productivity.

All right, so what do we do instead? Should we just avoid even considering any codebase transformations that are intractable with the “edit and fix errors” approach? No, that’s too conservative. Instead, we just have to avoid modifying our program in place. This lets us make absolutely any codebase transformation while keeping the codebase compiling at all times. Here’s a procedure, it’s quite simple:

    Suppose the file you wish to modify is Foo.hs. Create Foo__2.hs and call the module inside it Foo__2 as well. Copy over any bits of code you want from Foo.hs, then make the changes you want and get Foo__2 compiling. At this point, your codebase still compiles, but nothing is referencing the new definition of Foo.
    Pick one of the modules which depends on Foo.hs. Let’s say Bar.hs. Create Bar__2.hs and call the module inside it Bar__2 as well. You can probably see where this is going. You are going to have Bar__2 depend on the newly created Foo__2. You can start by copying over the existing Bar.hs, but perhaps you want to copy over bits and pieces at a time and get them each to compile against Foo__2. Or maybe you just copy all of Bar.hs over at once and crank through the errors. Whatever makes it easiest for you, just get Bar__2 compiling against Foo__2.
        Note: For languages that allow circular module dependencies, the cycle acts effectively like a single module. The strategy of copying over bits at a time works well for this. And while you’re at it, how about breaking up those cycles!
    Now that you’re done with Bar__2.hs, pick another module which depends on either Foo or Bar and follow the same procedure. Continue doing this until you’ve updated all the transitive dependents of Foo. You might end up with a lot of __2-suffixed copies of files, some of which might be quite similar to their old state, and some of which might be quite different. Perhaps some modules have been made obsolete or unnecessary. In any case, if you’ve updated all the transitive dependents of your initial change, you’re ready for the final step.
    For any file which has a corresponding __2 file, delete the original, and rename the Foo__2.hs to Foo.hs, and so on. Also do a recursive find/replace in the text of all files, replacing __2 with nothing. (Obviously, you don’t need to use __2, any prefix or suffix that is unique and unused will do fine.)
    Voilà! Your codebase now compiles with all the changes.

Note: I’m not claiming this is a new idea. Programmers do something like this all the time for large changes.

Notice that at each step, you are only dealing with errors from at most a single module and you are never confronted with a massive list of errors, many of which might be misleading or covering up more errors. Progress on the refactoring is measured not by the number of errors (which might not be accurate anyway), but by the number of modules updated vs the total number of modules in the set of transitive dependents of the immediate change(s). For those who like burndown charts and that sort of thing, you may want to compute this set up front and track progress as a percentage accordingly.

What happens if we take this good idea to its logical conclusion is we end up with a model in which the codebase is represented as a purely functional data type. (In fact, the refactoring algorithm I gave above might remind you of how a functional data structure like a tree gets “modified”—we produce a new tree and the old tree sticks around, immutable, as long as we keep a reference to it.) So in this model, we never modify a definition in place, causing other code to break. When we modify some code, we are creating a new version, referenced by no one. It is up to us to then propagate that change to the transitive dependents of the old code.

This is the model adopted by Unison. All terms, types, and type declarations are uniquely identified by a nameless, content-based hash. In the editor, when you reference the symbol identity, you immediately resolve that to some hash, and it is the hash, not the name, which is stored in the syntax tree. The hash will always and forever reference the same term. We can create new terms, perhaps even based on the old term, but these will have different content and hence different hashes. We can change the name associated with a hash, but that just affects how the term is displayed, not how it behaves! And if we call something else identity (there’s no restriction of name uniqueness), all references continue to point to the previous definition. Refactoring is hence a purely functional transformation from one codebase to another.

Aside: One lovely consequence of this model is that incremental typechecking is trivial. " -- https://pchiusano.github.io/2015-04-23/unison-update7.html

" Representing the codebase as a purely functional structure, with references done by immutable hash, means that renaming is a trivial refactoring which involves updating the metadata for a hash in a single location!

    At the same time, since no function, type, or value, is ever modified in place, editing creates a new term, with a distinct hash, referenced nowhere. Propagating a change to the transitive set of dependents of an initial changeset is instead done via structured refactoring sessions rather than tedious and error-prone text munging.

Since the value stored for a hash never changes, we can cache metadata about it like its type, and trivially do incremental typechecking in response to program edits. There is also no need to solve complicated problems of incremental reparsing, since there is no parser—the semantic editor directly constructs the syntax tree. "

" Since there is no global namespace, there are no conflicts whereby library A and library B disagree about what meaning to assign to a symbol (like if A and B depend on different, conflicting versions of some common library). It works just fine to write a function that uses bits from both library A and library B, indeed the very concept of a library or package becomes more fluid. A library is just a collection of hashes, and the same hash may be included in multiple libraries.

Also as a result, Unison has a simple story for serialization and sharing of arbitrary terms, including functions. Two Unison nodes may freely exchange data and functions—when sending a value, each states the set of hashes that value depends on, and the receiving node requests transmission of any hashes it doesn’t already know about. Using nameless, content-based hashes for references sidesteps complexities that arise due to the possibility that sender and receiver may each have different notions of what a particular symbol means (because they have different versions of some libraries, say).

Running arbitrary C code received over the network would obviously be a huge security liability. But since Unison is purely functional and uses a safe runtime, executing functions sent over the network can be made safe—our main concern becomes how much CPU and memory resources to allot ‘untrusted’ code. This is much simpler to address compared to worrying about whether the code will erase your root directory, take over your machine, or monkey-patch some commonly used function in your codebase! " -- http://unisonweb.org/2015-05-07/about.html
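
the sharing protocol described there (state the hashes, transmit only what the receiver lacks) can be sketched in a few lines of Python (made-up in-memory "nodes", direct dependencies only, no real networking):

def sync(sender_store, receiver_store, root_hash, deps):
    # deps maps a hash to the set of hashes its term references;
    # a real version would walk dependencies transitively
    wanted = {root_hash} | deps.get(root_hash, set())
    missing = [h for h in wanted if h not in receiver_store]
    for h in missing:
        receiver_store[h] = sender_store[h]
    return missing

sender = {"h1": "id x = x", "h2": "twice f = f . f"}
receiver = {"h1": "id x = x"}
sync(sender, receiver, "h2", {"h2": {"h1"}})   # only "h2" is transmitted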

---

" Trivial sharing, and linking to Unison panels, values, and types. A Unison panel is itself a Unison term, with a unique hash. Thus, Unison preserves an important aspect of the web: linkability. Any publicly available Unison node can expose the set of values it knows about, and others can link to and thus build upon functionality written by others. Unlike applications, Unison panels are a starting point for further composition and experimentation! " -- http://unisonweb.org/2015-05-07/about.html

---

http://www.jetbrains.com/mps/

---

" We typically think of programs as being represented by their textual form. If we want to produce a UI, we write a program whose output is some sort of renderable graphical object—perhaps a pile of HTML/CSS/JS. Let’s consider this a form of compilation from our program to our graphical object, G. Like most forms of compilation, changes to the compiled output can’t be trivially reflected back into our source program. As a result, we tend to think of edits and interactivity on G as being distinct from the activity of editing our program. Put another way, we think of interaction with a UI as being a fundamentally different activity than programming.

That is how we typically think of things, but there’s another perspective—the UI is a program. We don’t write a program to produce a UI, we write a program that is a UI. That is, the UI is simply a specific kind of rendering of a program, and interacting with the UI is, quite literally, programming. I don’t mean that it’s analogous to programming, I mean that it is programming, in the sense that the user’s interaction with the UI is directly mapped to edits of an underlying program.

This probably doesn’t make much sense, so I’m hoping a few demonstrations will clarify.

Aside: This perspective, of the program as UI, is something Conal Elliott also talks about in his work on Tangible Functional Programming.

" -- http://pchiusano.github.io/2014-11-13/program-as-ui.html

--

https://www.flickr.com/photos/wholeplatform/5340227177/

--

here's an example of how Paul Chiusano's 'unison' works. If there is code like:

" answer : Int -> Bool answer 42 = True answer _ = False

visualAnswer : Int -> Bool visualAnswer = let msg = "The Answer to The Ultimate Question of Life, the Universe, and Everything..." in cell (function1 (arg -> vertical [ panel (text h1) msg, arg])) answer

visualAnswer 42 -- renders as /resources/unison/demo-42.html " -- http://pchiusano.github.io/2014-11-13/program-as-ui.html

then the definition for 'visualAnswer' specifies an interactive page which is rendered as a header1 (h1) message "The Answer to The Ultimate Question of Life, the Universe, and Everything..." at the top, and then underneath it, a field. The contents of the field will be given as an input to the function 'answer'. Then 'visualAnswer 42' instantiates this with the field bound to '42'. As you browse the page, your cursor tells you the path (lens) to the thing the cursor is over. Example at http://pchiusano.github.io/resources/unison/demo-42.html

--

responses to the Unison language:

bcg1 1 day ago

On the surface it sounds interesting, but I don't really buy the idea that programming is terribly limited by the quality of text editors. Even "basic" editors like vim and emacs can almost instantly catch most syntax problems, and IDEs like Eclipse and Visual Studio are great about complaining if you make a mistake.

In my experience, these days relatively few problems come from language syntax errors ... most problems come from mistakes in logic, poor systems integration, or incorrect validation of input data. Not sure how Unison will solve those challenges, but to be fair maybe that is out of scope.

However, it is hard to slog through the marketing, and in the end it's just not clear from the writeup what sort of programs one would write in Unison, so it doesn't really inspire me to spend any more time trying to learn about it.

Ericson2314 15 hours ago

One of the mantras of functional programming is get your data structure right, and everything else falls into place. As an experienced functional programmer, I see this in practice every time I program--even a slightly "wrong" type pollutes the code dramatically.

The idea then with structure/tree/semantic editors is two-fold.

First, text is obviously the wrong data structure for programs, so even if people manage to provide fairly good support with the "modern IDE", the effort it takes is staggering. And still details like files, character encodings, god damn tabs vs. spaces, and terminals leak through. One can't help but wonder how much farther they would get were their efforts directed more efficiently. Look at https://github.com/yinwang0/ydiff for example; it blows text diff out of the water.

Second, the idea is that maybe text is the "wrong data structure" for programmers too, not just tool-smiths. I admit there are a bunch more psychological/neurological aspects that make the answer less obvious. But I see no reason not to keep an open mind until a polished tree editor exists for empirical testing. Even if text is in some ways better, the improved functionality a tree editor offers could offset that dramatically.

seanmcdirmid 15 hours ago

Functional programming kind of screws itself when it comes to "implementing" modern good IDEs; the aversion to state makes it difficult to implement good tree-incremental error resistant parsing and type checking that is usable in a language aware editor.

I've developed plenty of advanced IDE infrastructure; e.g. see

http://research.microsoft.com/en-us/people/smcdirm/managedtime.aspx http://research.microsoft.com/en-us/projects/liveprogramming/typography.aspx

The trick was abandoning FRP-style declarative state abstractions (which was my previous research topic) and moving on to something that could manage state in more flexible ways (as we say, by managing time and side effects instead of avoiding them). And once you've nailed the state problem, incremental parsing, type checking, and rich editing are actually easy problems.

tome 14 hours ago

Can you give some more detail about what you replaced FRP with? Sounds interesting.

seanmcdirmid 13 hours ago

I replaced it with Glitch. It was actually something I started developing for the Scala IDE a long time ago that had to integrate easily with scalac, so it had to handle the limited set of effects performed by it (they no longer use this in the Scala plugin, but initial results were promising, and being very incremental dealt with scalac's performance problems).

I've refined the technique over the last 7 years, you can read about it in a conference paper:

http://research.microsoft.com/pubs/211297/onward14.pdf

You can think of Glitch as being like React with dependency tracing (no world diffing) and support for state (effects are logged, must be commutative to support replay, and are rolled back when no longer executed by a replay).

---

"

Persistent data sources must be accessible via a high-level, typed API. Unison’s architecture means that all Unison values can be trivially (and efficiently) serialized and deserialized to a persistent store. No more having to discard all type information and invent ad hoc encodings for persistence of each and every business object to whatever data store. In addition to internally managed datasets, support is also planned for connections to external data sources, while retaining a high-level API that abstracts over the particulars of each source. Individuals and businesses will be able to hook Unison up to datasets they own, and if they wish, share access to these datasets with other nodes in the Unison web. In doing so they get access to all the reporting, visualization, and computational capabilities of the Unison language and its general-purpose editor. "

---

a = 3
b = 4
a, b = b, a
(a, b) == (4, 3)  # True

---

"I skimmed documentation of Python after people told me it was fundamentally similar to Lisp. My conclusion is that that is not so. `read', `eval', and `print' are all missing in Python." -- https://stallman.org/stallman-computing.html

---

Charlie Stross's security rant:

http://www.antipope.org/charlie/blog-static/2010/08/where-we-went-wrong.html

1) should not be able to execute data as code (Harvard arch over von Neumann)
2) no null-terminated strings (pointer to end of array as first element of array)
3) crypto at the TCP/IP level (both encryption and authentication)
4) misc stuff regarding WWW (the only one he precisely states is that Javascript is too dangerous, as it's a form of 'data as code')

---

python vs R:

---

https://wiki.python.org/moin/Powerful%20Python%20One-Liners

https://news.ycombinator.com/item?id=8158976

tangentially related:

---

             Stanford EE Computer Systems Colloquium             
                                                                 
                 4:15PM, Wednesday, June 3, 2015                 
      HP Auditorium, Gates Computer Science Building Room B1     
                       Stanford University                       
                       http://ee380.stanford.edu

Topic: The Future of Trustworthy Computer Systems: A Holistic View from the Perspectives of Hardware, Software, and Programming Languages

Speaker: Peter Neumann SRI International

About the talk:

The state of the art of trustworthiness is inherently weak with respect to computer systems and networks. Essentially every component today is a potential weak link, including hardware, operating systems, and apps (for desktops, laptops, network switches and controllers, servers, clouds, and even mobile devices), and above all, people (insiders, penetrators, malware creators, and so on). The potentially untrustworthy nature of our supply chains adds further uncertainty. Indeed, the ubiquity of computer-based devices in the so-called Internet of Things is likely to make this situation even more volatile than it already is.

This talk will briefly consider system vulnerabilities and risks, and some of the limitations of software engineering and programming languages. It will also take a holistic view of total-system architectures and their implementations, which suggests that some radical systemic improvements are needed, as well as changes in how we develop hardware and software.

To this end, we will discuss some lessons from joint work between SRI and the University of Cambridge for DARPA, which is now nearing several possible transition opportunities relating to some relatively clean-slate approaches. In particular, we are pursuing formally based hardware design that enables efficient fine-grained compartmentalization and access controls, as well as new software and compiler extensions that can take significant advantage of the hardware features. SRI's formal methods tools (theorem prover PVS, model checker SAL, and SMT solver Yices) have been embedded into the hardware design process, and are also applicable selectively to the software. This work for DARPA is entirely open-sourced. The potential implications for hardware and software developers are quite considerable. SRI and U.Cambridge are also applying the knowledge gained from our trustworthy systems to software-defined networking, servers, and clouds, along with some network switch/controller approaches that can also benefit from the new hardware. For example, Phil Porras has described some of the SDN work of his team in last week's talk at this colloquium.

Slides:

No slides to download at this time.

Videos:

Join the live presentation. Wednesday June 3, 4:15-5:30. Requires Microsoft Windows Media player. View video by lecture sequence (Spring 2015 series only, HTML5; available after 8PM on the day of the lecture). View video on YouTube.