proj-oot-ootNotes13

https://wizardsofsmart.wordpress.com/2015/03/19/web-apis-vs-rpc/

---

django release discussion:

"

bebop 3 days ago

Some great things in this release:

and it is an LTS. Time to get upgrading.

reply

crdoconnor 2 days ago

>uuid field

This one is good.

I kind of wish it were default for primary keys, since the number of times I got burned by having databases I couldn't easily merge (which UUIDs help a lot with) way exceeds the number of times I had performance/memory issues caused by actually using UUIDs. "

so, we should support things like UUID keys
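
e.g. a sketch of what that looks like in Django (assumes Django 1.8+'s models.UUIDField and lives inside a Django app; the model and field names here are made up):

# Sketch: UUID primary key on a Django model (models.UUIDField, Django 1.8+).
# Random UUIDs make rows from independently-grown databases mergeable without
# key collisions, at some cost in index size/locality.
import uuid
from django.db import models

class Widget(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    name = models.CharField(max_length=100)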

--

being able to do stuff like:

[(k,v) for (k,v) in geneName2Idx.items() if v == 0]

is just great. Note the destructuring bind.

--

some stuff coming in C++:

"

hendzen 14 days ago

I think C++ has a very bright future - due to two driving forces:

1) The C++ Standards committee has been doing a very, very good job. Aside from the big one (compiler enforced memory safety), some of the best Rust features are on their way into C++. For example:

Other great stuff coming to C++ over the next few years:

... much much more "


hsivonen 14 days ago

Fun times ahead if document.write isn't already supported. When I rewrote Gecko's HTML parsing, accommodating document.write was a (or maybe the) dominant design issue.


realusername 14 days ago

I'm quite curious, why document.write is so hard to implement compared to other methods ? Is it not working a bit like innerHTML ?


hsivonen 13 days ago

It's quite different from innerHTML, since document.write inserts source characters to the character stream going into the parser, and there's no guarantee that all elements that get opened get closed. There's even no guarantee that the characters inserted don't constitute a partial tag. So document.write potentially affects the parsing of everything that comes after it.

For this to work, scripts have to appear to block the parser. However, it's desirable to start fetching external resources (images, scripts, etc.) that occur after the script that's blocking the parser. In Firefox, the scripts see the state of the world as if the parser was blocked, but in reality the parser continues in the background and keeps starting fetches for the external resources it finds and keeps building a queue of operations that need to be performed in order to build the DOM according to what was parsed. If the script doesn't call document.write or calls it in a way that closes all the elements that it opens, the operation queue that got built in the background is used. If the document.write is of the bad kind, the work that was done in the background is thrown away and the input stream is rewound. See https://developer.mozilla.org/en-US/docs/Mozilla/Gecko/HTML_... for the details.

For added fun, document.write can write a script that calls document.write.

reply

Animats 13 days ago

What a mess. To support a stupid HTML feature, the browser's parser has to be set up like a superscalar CPU, retirement unit and all. Hopefully the discard operation doesn't happen very often.
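
a rough sketch of the speculative-parse-and-maybe-rollback scheme hsivonen describes above, in Python (purely illustrative, not Gecko's actual design; the class and method names are mine):

# Toy model of speculative HTML parsing around a blocking <script>: keep
# tokenizing ahead and queue up DOM-building ops; if the script's
# document.write changed the input stream, throw the queue away and rewind.
class SpeculativeParser:
    def __init__(self, stream):
        self.stream = stream      # source characters after the <script>
        self.pos = 0              # committed parse position
        self.op_queue = []        # speculative tree-building ops

    def speculate(self):
        """Tokenize ahead of the committed position, queueing ops and
        (in a real browser) kicking off fetches for resources found."""
        self.spec_pos = self.pos
        while self.spec_pos < len(self.stream):
            token, self.spec_pos = self._next_token(self.spec_pos)
            self.op_queue.append(("insert", token))

    def script_finished(self, wrote_unbalanced_markup):
        if wrote_unbalanced_markup:
            # the "bad kind" of document.write: discard the speculative work,
            # leave self.pos where it was, and reparse from there
            self.op_queue.clear()
        else:
            for op in self.op_queue:   # safe: commit the queued DOM ops
                print("DOM op:", op)
            self.pos = self.spec_pos
            self.op_queue.clear()

    def _next_token(self, pos):        # crude stand-in for a real tokenizer
        end = self.stream.find(">", pos)
        end = len(self.stream) if end == -1 else end + 1
        return self.stream[pos:end], end

parser = SpeculativeParser("<p>hi</p><img src=a.png>")
parser.speculate()
parser.script_finished(wrote_unbalanced_markup=False)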

---

"

In addition to bi-directional data binding, we can also bind parameterized functions to events:

var vm = todo.vm

m("button", {onclick: vm.add.bind(vm, vm.description)}, "Add")

In the code above, we are simply using the native Javascript Function::bind method. This creates a new function with the parameter already set. In functional programming, this is called partial application.

The vm.add.bind(vm, vm.description) expression above returns a function that is equivalent to this code:

onclick: function(e) { todo.vm.add(todo.vm.description) }

Note that when we construct the parameterized binding, we are passing the description getter-setter by reference, and not its value. We only evaluate the getter-setter to get its value in the controller method. This is a form of lazy evaluation: it allows us to say "use this value later, when the event handler gets called".

Hopefully by now, you're starting to see why Mithril encourages the usage of m.prop: Because Mithril getter-setters are functions, they naturally compose well with functional programming tools, and allow for some very powerful idioms. In this case, we're using them in a way that resembles C pointers. "
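
the same idiom translates to Python pretty directly; here's a sketch using functools.partial plus a tiny m.prop-style getter-setter (the names prop/add are mine, not Mithril's API):

# Partial application plus a getter-setter passed by reference, so the value
# is only read when the "event" actually fires (like the Mithril example
# above). prop() mimics m.prop; add() stands in for vm.add.
from functools import partial

def prop(initial=None):
    state = {"value": initial}
    def accessor(*args):
        if args:                    # called with an argument: set
            state["value"] = args[0]
        return state["value"]       # called with no argument: get
    return accessor

def add(description):
    print("adding todo:", description())   # evaluate lazily, at call time

description = prop("buy milk")
onclick = partial(add, description)  # like vm.add.bind(vm, vm.description)

description("water the plants")      # value changes before the "click"
onclick()                            # prints: adding todo: water the plants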

---

http://picat-lang.org/ is cool

---

" We use Go for a lot of our server development here at Mailgun, and it’s great. Coming from Python, though, there is one thing I really missed:

import pdb; pdb.set_trace()

I could insert this little line whenever I was confused about what was happening in the code and see exactly what was going on. But in Go? Not so much. When I started this project in January, gdb failed on every program I tried it on. delve didn’t work on OS X, and print-statement-debugging was too slow and limited. What's a developer to do? "

" godebug

    All that stands in the way [of a good Go debugger] is the writing of a lot of non-portable low-level code talking to buggy undocumented interfaces.

godebug is a different kind of debugger. Traditional debuggers for compiled languages use low-level system calls and read binary files for debugging symbols. They’re hard to get right and they’re hard to port.

godebug takes a different approach: take the source code of a target program, insert debugging code between every line, then compile and run that instead. " -- http://blog.mailgun.com/introducing-a-new-cross-platform-debugger-for-go/
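
the core trick is easy to mimic in a few lines of Python (a toy sketch of the rewrite-then-run idea, not godebug itself; it only handles simple one-line statements):

# Insert a hook call before every (simple) source line, then run the rewritten
# source instead of the original -- the godebug idea in miniature.
import textwrap

def instrument(source):
    out = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        stripped = line.strip()
        indent = line[:len(line) - len(line.lstrip())]
        if stripped and not stripped.startswith("#"):
            out.append("%s__debug_hook(%d)" % (indent, lineno))
        out.append(line)
    return "\n".join(out)

def __debug_hook(lineno):
    # a real debugger would check breakpoints, show locals, wait for input...
    print("about to run line", lineno)

target = textwrap.dedent("""\
    x = 1
    y = x + 2
    print(y)
""")

exec(compile(instrument(target), "<instrumented>", "exec"),
     {"__debug_hook": __debug_hook})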

discussion: https://news.ycombinator.com/item?id=9409423

 DannyBee 5 hours ago

"Since it's modifying the source before compiling it, I expect that the compiler will conclude that most optimizations can't be applied when they cross breakpoint boundaries."

While true, this depends on the compiler knowing this is a magical breakpoint barrier it can't move things across. The compiler has no idea this is a magical barrier unless something has told it it's a magical barrier. Looking at godebug library, i don't see this being the case, it looks like it translates into an atomic store and an atomic load to some variables, and then a function call, which the compiler is definitely not going to see as a "nothing can move across" barrier.

(Also, having the debugging library alter the semantics of the program is 100% guaranteed to lead to bugs that are not visible when using the library, etc)

reply

skybrian 3 hours ago

I think you're right that it's going to introduce bugs in concurrent code. For example, it's legal to send a pointer through a channel as a way of transferring ownership and never access the object again. If the debugger rewrites the code so that "never accesses it again" is no longer true, it's created a data race.

On the other hand, godebug generates straightforward single-threaded code that creates pointers to locals in a shadow data structure and accesses them later. There's no reason it shouldn't work if you're not using goroutines.

In particular, a previous call to godebug.Declare("x", &x) will add a pointer to what was previously a local variable to a data structure. This effectively moves all locals to a heap representation of the goroutine's stack, to be accessed later. It's going to kill performance, but it's legal to do.

---

 tormeh 1 day ago

The one I'm writing now: "Creating and implementing deterministic multithreading programming language significantly harder than hoped"

reply

thechao 1 day ago

Finished this five years ago: "Stepanov-style generic programming is the bees knees; we don't understand why."

reply

seanmcdirmid 1 day ago

It's not that hard. I'm thinking the song "let it go" can help (define determinism as an eventual goal; at least, that is the approach I find that works for me).

reply

tormeh 1 day ago

It's not that it's theoretically hard, but writing a compiler with typechecking and all is just a lot of work. It's a master thesis, btw, not a doctoral one.

reply

seanmcdirmid 1 day ago

My own project, Glitch uses replay to work out glitches (bubbles of non determinism in a deterministic execution), it incidentally also makes writing incremental compilers easy (see http://research.microsoft.com/en-us/people/smcdirm/managedti...), and I recently used it for a new type checker (see https://www.youtube.com/watch?v=__28QzBdyBU&feature=youtu.be). Ok, it's still a lot of work, but if you look at the problem you are solving, solving it can make the compiler writing aspect easier also.

reply

---

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/arguments

---

" Sony's Open Source command line tool for performing python one liners using unix-like pipes

They call it "The Pyed Piper" or pyp. It's pretty similar to the -c way of executing python, but it imports common modules and has its own preset variables that help with splitting/joining, line counting, etc. You use pipes to pass information forward instead of nested parentheses, and then use your normal python string and list methods. Here is an example from the homepage:

Here, we take a linux long listing, capture every other of the 5th through the 10th lines, keep username and file name fields, replace "hello" with "goodbye", capitalize the first letter of every word, and then add the text "is splendid" to the end:

ls -l | pyp "pp[5:11:2] | whitespace[2], w[-1] | p.replace('hello','goodbye') | p.title(), 'is splendid'"

and the explanation:

This uses pyp's built-in string and list variables (p and pp), as well as the variable whitespace and its shortcut w, which both represent a list based on splitting each line on whitespace (whitespace = w = p.split()). The other functions and selection techniques are all standard python. Notice the pipes ("|") are inside the pyp command.

http://code.google.com/p/pyp/ http://opensource.imageworks.com/?p=pyp "
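
for comparison, the same job written as an ordinary Python filter over stdin, no pyp (this follows the prose description above, not pyp's exact semantics):

# Plain-Python version of the pyp example: pipe `ls -l` into this script.
import sys

lines = sys.stdin.read().splitlines()
for line in lines[5:11:2]:                      # every other of lines 5..10
    fields = line.split()
    picked = "%s %s" % (fields[2], fields[-1])  # owner and file name
    print(picked.replace("hello", "goodbye").title(), "is splendid")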

---

https://news.ycombinator.com/item?id=9446980

" At the same time it demonstrates everything that is wrong with traditional shells and what PowerShell? gets so right.

jq is not a "Unixy" tool in the sense that it should do one thing and do it right. jq implements its own expression language, command processor complete with internal "pipelines". Why would a tool need to do that? find is another utility that does many, many other things than to "find" items: It executes commands, deletes objects etc.

Consider this challenge that included parsing json, filtering, projecting and csv-formatting output: https://news.ycombinator.com/item?id=9438109

Several solutions use jq - to good effect. But the PowerShell solution uses PowerShell expressions to filter, sort and project items.

The problem is - at the core - that the traditional command tools are severely restricted by only being able to rely on a text pipeline convention. You cannot parse json and send the "objects" along to another tool. Well, you can, if the json tree is extremely basic - like 2 levels. "

probably other good comments in there too, i havent read it yet

--

learn to code / hypercard application domain:

http://livecode.com/

--

aliasing: "no man is an island"

--

	Postgres gets support for upsert

anilshanbhag 14 hours ago

This is actually huge. A common problem that arises when you write applications is you want to INSERT if key does not exist else UPDATE. The right way of doing this without an upsert is using a transaction. However this will make life easier as you can do it directly in one SQL statement.

reply

colanderman 14 hours ago

Just "using a transaction" is insufficient. You must be prepared to handle the case that neither the INSERT nor the UPDATE succeeds (with READ COMMITTED isolation), or that the transaction fails (with REPEATABLE READ isolation or better), by repeating the transaction. And if latency is at all a concern to you, you must wrap this all in a stored procedure to avoid the necessary round-trip-time between the commands.

Hence this is more than just saving typing a couple lines -- this saves writing entire stupid loops to do what is conceptually a simple (and very common) operation.

Postgres gets better and better.

reply
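
the new syntax is INSERT ... ON CONFLICT ... DO UPDATE (PostgreSQL 9.5); here's a rough sketch of using it from Python via psycopg2 (the table, columns, and DSN are made up):

# One-statement upsert with Postgres 9.5's ON CONFLICT, replacing the
# insert-else-update transaction/retry dance described above.
import psycopg2

conn = psycopg2.connect("dbname=example")       # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute(
        """
        INSERT INTO counters (key, value)
        VALUES (%s, %s)
        ON CONFLICT (key) DO UPDATE
            SET value = counters.value + EXCLUDED.value
        """,
        ("page_views", 1),
    )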

--

" These applications (“appliances” is a better word) come equipped with a fixed vocabulary of actions, speak no common language, and cannot be extended, composed, or combined with other applications except with enormous friction. By analogy, what we have is a railway system where the tracks in each region are of differing widths, forcing trains and their cargo to be totally disassembled and then reassembled to transport anything across the country. As ridiculous as this sounds, this is roughly what we do at application boundaries: write explicit serialization and parsing code and lots of tedious (not to mention inefficient) code to deconstruct and reconstruct application data and functions. " -- http://pchiusano.github.io/2013-05-22/future-of-software.html

---

" Alice: You’re missing my point! Compare the overhead of calling a function in the ‘native’ language of your program vs calling a function exposed via JSON+REST. And no I don’t mean the computational overhead, though that is a problem too. Within the language of your program, if I want to call a function returning a list of (employee, date) pairs, I simply invoke the function and get back the result. With JSON+REST, I get back a blob of text, which I then have to parse into a syntax tree and then convert back to some meaningful business objects. If I had that overhead and tedium for every function call I made I’d have quit programming long ago.

Bob: Are you just saying you want more built in types for JSON, then? That’s easy, I hear there’s even a proposal to add a date type to JSON.

Alice: And maybe in another fifteen years JSON will grow a standard for sending algebraic data types (they’ve been around for like 40 years, you know) and other sorts of values, like you know, functions. "

" Any creator wishing to build atop or extend the functionality of an application faces a mountain of idiosyncratic protocols and data representations and some of the most tedious sort of programming imaginable: parsing, serializing, converting between different data representations, and error handling due to the inherent problem of having to pass through a dynamically typed and insufficiently expressive communication channel! " -- http://pchiusano.github.io/2013-05-22/future-of-software.html

--

" People write essays, create illustrations, organize and edit photographs, send messages to friends, play card games, watch movies, comment on news articles, and they do serious work too–analyze portfolios, create budgets and track expenses, find plane flights and hotels, automate tasks, and so on. But what is important, what truly matters to people is simply being able to perform these actions. That each of these actions presently take place in the context of some ‘application’ is not in any way essential. In fact, I hope you can start to see how unnatural it is that such stark boundaries exist between applications, and how lovely it would be if the functionality of our current applications could be seamlessly accessed and combined with other functions in whatever ways we imagine. This sort of activity could be a part of the normal interaction that people have with computers, not something reserved only for ‘programmers’, and not something that requires navigating a tedious mess of ad hoc protocols, dealing with parsing and serialization, and all the other mumbo-jumbo that has nothing to do with the idea the user (programmer) is trying to express. The computing environment could be a programmable playground, a canvas in which to automate whatever tasks or activities the user wished. "

" Alice: ‘Complex programs’? You mean like Instagram? A website where you can post photos of kittens and subscribe to a feed of photos produced by other people? Or Twitter? Or any one of the 95% of applications which are just a CRUD interface to some data store? The truth is, if you strip applications of all their incidental complexity (largely caused by the artificial barriers at application boundaries), they are often extremely simple. But in all seriousness, why can’t more people write programs? Millions of people use spreadsheets, an even more impoverished and arcane programming environment than what we could build. " -- http://pchiusano.github.io/2013-05-22/future-of-software.html

--

"

The result of all this is that most of the time people spend building software is wasted on repeatedly solving uninteresting problems, artificially created due to bad foundational assumptions:

    Perhaps 70% of developer time is spent dealing with parsing, serialization, and persistence. Values are encoded to and from JSON, to and from various binary formats, and to and from various persistent data stores… over and over again.
    Another 25% is spent on explicit networking. We don’t merely specify that a value must be sent from one node to another, we also specify how in exhaustive detail.
    Somewhere in between all this plumbing code is a tiny amount of interesting, pure computation, which takes up the remaining 5% of developer time. And there’s very little reuse of that 5% across applications, because every app is wrapped in a different 95% of cruft and the useful logic is often difficult to separate!

These numbers are made up, of course, but if anything they are optimistic. "

-- http://unisonweb.org/2015-05-07/about.html

--

nostrademons 23 hours ago

I suspect that "how to structure my data for serialization" is what the author means by the 70% of time spent on parsing, serialization, and persistence. I hadn't heard of Unison before, but I recognize the author's name from Lambda: The Ultimate, and I suspect that what he has in mind is that any value within the Unison language can appear on any Unison node, transparently. Instead of picking out exactly which fields you need and then creating new JSONObjects or protobufs for them, just send the whole variable over.

I also suspect (being a language design geek, and also having worked with some very large distributed systems) that the reason why this is seductive is also why it's unworkable. I think I probably do spend close to 70% of my time dealing with networking and data formats (and yes, I use off-the-shelf serialization formats and networking protocols), but that's because a watch is very different from a phone which is very different from a persistent messaging server which is different from a webpage, and Bluetooth is very different from cell networks which are very different from 10G-Ethernet in a DC. Try to dump your server data structures directly to your customer's cell phone and you're about to have a lot of performance and security problems.

--

"

Why UX designers should care about type theory

Applications are bad enough in that they trap potentially useful building blocks for larger program ideas behind artificial barriers, but they fail at even their stated purpose of providing an ‘intuitive’ interface to whatever fixed set of actions and functionality its creators have imagined. Here is why: the problem is that for all but the simplest applications, there are multiple contexts within the application and there needs to be a cohesive story for how to present only ‘appropriate’ actions to the user and prevent nonsensical combinations based on context. This becomes serious business as the total number of actions offered by an application grows and the set of possible actions and contexts grows. As an example, if I just have selected a message in my inbox (this is a ‘context’), the ‘send’ action should not be available, but if I am editing a draft of a message it should be. Likewise, if I have just selected some text, the ‘apply Kodachrome style retro filter’ action should not be available, since that only makes sense applied to a picture of some sort.

These are just silly examples, but real applications will have many more actions to organize and present to users in a context-sensitive way. Unfortunately, the way ‘applications’ tend to do this is with various ad hoc approaches that don’t scale very well as more functionality is added–generally, they allow only a fixed set of contexts, and they hardcode what actions are allowed in each context. (‘Oh, the send function isn’t available from the inbox screen? Okay, I won’t add that option to this static menu’; ‘Oh, only an integer is allowed here? Okay, I’ll add some error checking to this text input’) Hence the paradox: applications never seem to do everything we want (because by design they can only support a fixed set of contexts and because how to handle each context must be explicitly hardcoded), and yet we also can’t seem to easily find the functionality they do support (because the set of contexts and allowed actions is arbitrary and unguessable in a complex application).

There is already a discipline with a coherent story for how to handle concerns of what actions are appropriate in what contexts: type theory. Which is why I now (half) jokingly introduce Chiusano’s 10th corollary:

    Any sufficiently advanced user-facing program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of a real programming language and type system.

Programming languages and type theory have largely solved the problem of how to constrain user actions to only ‘appropriate’ alternatives and present these alternatives to users in an exquisitely context-sensitive way. The fundamental contribution of a type system is to provide a compositional language for describing possible forms values can take, and to provide a fully generic program (the typechecker) for determining whether an action (a function) is applicable to a particular value (an argument to the function). Around this core idea we can build UI for autocompletion, perfectly appropriate context menus, program search, and so on. Type systems provide a striking, elegant solution to a problem that UX designers now solve in more ad hoc ways. "

-- http://pchiusano.github.io/2013-05-22/future-of-software.html

" Today I was forced to edit a Microsoft Word document, containing comments (made by me, and by others) and tracked changes. I found myself wanting to delete all comments, and accept all tracked changes. It took a few minutes to figure out, and I very quickly gave up trying to actually discover the functionality within Word’s actual UI and resorted to using Google. God help me if I wanted to, say, delete only comments made by me within the last ten days. ... Type systems solve exactly this problem. Here’s a sketch of how a type directed version of Word could work:

    I click on a comment. A status bar indicates that I have selected something of type Comment. Now that I have a handle to this type, I then ask for functions of accepting a List Comment. The delete comment function pops up, and I select it.
    The UI asks that I fill in the argument to the delete comment function. It knows the function expects a List Comment and populates an autocomplete box with several entries, including an all comments choice. I select that, and hit Apply. The comments are all deleted.
    If I want, I can insert a filter in between the call to all comments and the function to delete those comments. Of course, the UI is type directed–it knows that the input type to the filtering function must accept a Comment, and prepopulates an autocomplete with common choices–by person, by date, etc.

"

-- http://pchiusano.github.io/2013-09-10/type-systems-and-ux-example.html
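
the idea is easy to prototype crudely; here's a toy sketch (mine, with runtime isinstance checks standing in for a real static type system) of actions registered by the type they accept, where the "context menu" offers only the applicable ones:

# Toy type-directed action menu: each action declares the type it accepts,
# and the UI offers only the actions applicable to the current selection.
from dataclasses import dataclass
from datetime import date

@dataclass
class Comment:
    author: str
    created: date
    text: str

ACTIONS = []                       # (name, accepted type, function)

def action(name, accepts):
    def register(fn):
        ACTIONS.append((name, accepts, fn))
        return fn
    return register

@action("delete comments", list)
def delete_comments(comments):
    comments.clear()

@action("reply to comment", Comment)
def reply(comment):
    print("replying to", comment.author)

def applicable(selection):
    """What a context menu would offer for the current selection."""
    return [name for name, accepts, _ in ACTIONS
            if isinstance(selection, accepts)]

c = Comment("me", date.today(), "fix this paragraph")
print(applicable(c))               # ['reply to comment']
print(applicable([c]))             # ['delete comments']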

---

" Programs are edited in a (browser-based) semantic editor which guarantees programs are well-formed and typecheck by construction. There are no parse or type errors to report to the user, since the UI constrains edits to those that are well-typed.

"

" The codebase is a purely functional data structure: In Unison, terms and types are uniquely identified by a nameless hash of their structure. References stored in the syntax tree are by hash, and human-readable names are separately stored metadata used only for display purposes by the editor. As in Nix, the value associated with a hash never changes. “Modifying” a term creates a new term, with a new hash. This has far-ranging consequences, from much better support for large-scale refactoring, to trivial sharing of data and functions across node boundaries in distributed systems. "

-- http://unisonweb.org/2015-05-07/about.html

"

In the nontrivial case: For many interesting cases of codebase transformations, simply making the change and fixing the errors doesn’t scale. You have to deal with an overwhelming list of errors, many of which are misleading, and the codebase ends up living in a non-compiling state for long periods of time. You begin to feel adrift in a sea of errors. Sometimes you’ll make a change, and the error count goes down. Other times, you’ll make a change, and it goes up. Hmm, I was relatively sure that was the right change, but maybe not… I’m going to just hope that was correct, and the compiler is getting a bit further now.

What’s happened? You’re in a state where you are not necessarily getting meaningful, accurate feedback from the compiler. That’s bad for two reasons. Without this feedback, you may be writing code that is making things worse, not better, building further on faulty assumptions. But more than the technical difficulties, working in this state is demoralizing, and it kills focus and productivity.

All right, so what do we do instead? Should we just avoid even considering any codebase transformations that are intractable with the “edit and fix errors” approach? No, that’s too conservative. Instead, we just have to avoid modifying our program in place. This lets us make absolutely any codebase transformation while keeping the codebase compiling at all times. Here’s a procedure, it’s quite simple:

    Suppose the file you wish to modify is Foo.hs. Create Foo__2.hs and call the module inside it Foo__2 as well. Copy over any bits of code you want from Foo.hs, then make the changes you want and get Foo__2 compiling. At this point, your codebase still compiles, but nothing is referencing the new definition of Foo.
    Pick one of the modules which depends on Foo.hs. Let’s say Bar.hs. Create Bar__2.hs and call the module inside it Bar__2 as well. You can probably see where this is going. You are going to have Bar__2 depend on the newly created Foo__2. You can start by copying over the existing Bar.hs, but perhaps you want to copy over bits and pieces at a time and get them each to compile against Foo__2. Or maybe you just copy all of Bar.hs over at once and crank through the errors. Whatever makes it easiest for you, just get Bar__2 compiling against Foo__2.
        Note: For languages that allow circular module dependencies, the cycle acts effectively like a single module. The strategy of copying over bits at a time works well for this. And while you’re at it, how about breaking up those cycles!
    Now that you’re done with Bar__2.hs, pick another module which depends on either Foo or Bar and follow the same procedure. Continue doing this until you’ve updated all the transitive dependents of Foo. You might end up with a lot of __2-suffixed copies of files, some of which might be quite similar to their old state, and some of which might be quite different. Perhaps some modules have been made obsolete or unnecessary. In any case, if you’ve updated all the transitive dependents of your initial change, you’re ready for the final step.
    For any file which has a corresponding __2 file, delete the original, and rename the Foo__2.hs to Foo.hs, and so on. Also do a recursive find/replace in the text of all files, replacing __2 with nothing. (Obviously, you don’t need to use __2, any prefix or suffix that is unique and unused will do fine.)
    Voilà! Your codebase now compiles with all the changes.

Note: I’m not claiming this is a new idea. Programmers do something like this all the time for large changes.

Notice that at each step, you are only dealing with errors from at most a single module and you are never confronted with a massive list of errors, many of which might be misleading or covering up more errors. Progress on the refactoring is measured not by the number of errors (which might not be accurate anyway), but by the number of modules updated vs the total number of modules in the set of transitive dependents of the immediate change(s). For those who like burndown charts and that sort of thing, you may want to compute this set up front and track progress as a percentage accordingly.

What happens if we take this good idea to its logical conclusion is we end up with a model in which the codebase is represented as a purely functional data type. (In fact, the refactoring algorithm I gave above might remind you of how a functional data structure like a tree gets “modified”—we produce a new tree and the old tree sticks around, immutable, as long as we keep a reference to it.) So in this model, we never modify a definition in place, causing other code to break. When we modify some code, we are creating a new version, referenced by no one. It is up to us to then propagate that change to the transitive dependents of the old code.

This is the model adopted by Unison. All terms, types, and type declarations are uniquely identified by a nameless, content-based hash. In the editor, when you reference the symbol identity, you immediately resolve that to some hash, and it is the hash, not the name, which is stored in the syntax tree. The hash will always and forever reference the same term. We can create new terms, perhaps even based on the old term, but these will have different content and hence different hashes. We can change the name associated with a hash, but that just affects how the term is displayed, not how it behaves! And if we call something else identity (there’s no restriction of name uniqueness), all references continue to point to the previous definition. Refactoring is hence a purely functional transformation from one codebase to another.

Aside: One lovely consequence of this model is that incremental typechecking is trivial. " -- https://pchiusano.github.io/2015-04-23/unison-update7.html

" Representing the codebase as a purely functional structure, with references done by immutable hash, means that renaming is a trivial refactoring which involves updating the metadata for a hash in a single location!

    At the same time, since no function, type, or value, is ever modified in place, editing creates a new term, with a distinct hash, referenced nowhere. Propagating a change to the transitive set of dependents of an initial changeset is instead done via structured refactoring sessions rather than tedious and error-prone text munging.

Since the value stored for a hash never changes, we can cache metadata about it like its type, and trivially do incremental typechecking in response to program edits. There is also no need to solve complicated problems of incremental reparsing, since there is no parser—the semantic editor directly constructs the syntax tree. "

" Since there is no global namespace, there are no conflicts whereby library A and library B disagree about what meaning to assign to a symbol (like if A and B depend on different, conflicting versions of some common library). It works just fine to write a function that uses bits from both library A and library B, indeed the very concept of a library or package becomes more fluid. A library is just a collection of hashes, and the same hash may be included in multiple libraries.

Also as a result, Unison has a simple story for serialization and sharing of arbitrary terms, including functions. Two Unison nodes may freely exchange data and functions—when sending a value, each states the set of hashes that value depends on, and the receiving node requests transmission of any hashes it doesn’t already know about. Using nameless, content-based hashes for references sidesteps complexities that arise due to the possibility that sender and receiver may each have different notions of what a particular symbol means (because they have different versions of some libraries, say).

Running arbitrary C code received over the network would obviously be a huge security liability. But since Unison is purely functional and uses a safe runtime, executing functions sent over the network can be made safe—our main concern becomes how much CPU and memory resources to allot ‘untrusted’ code. This is much simpler to address compared to worrying about whether the code will erase your root directory, take over your machine, or monkey-patch some commonly used function in your codebase! " -- http://unisonweb.org/2015-05-07/about.html
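
a toy model of the hash-addressed codebase idea (my sketch, not Unison's actual representation): definitions live under a content hash, dependencies are recorded as hashes, and names are just display metadata:

# Content-addressed "codebase": terms keyed by a hash of their structure,
# references stored as hashes, names kept separately as metadata.
import hashlib, json

store = {}     # hash -> {"body": ..., "deps": [hashes]}
names = {}     # display name -> hash (metadata only)

def add_term(body, deps=()):
    blob = json.dumps({"body": body, "deps": sorted(deps)}, sort_keys=True)
    h = hashlib.sha256(blob.encode()).hexdigest()[:12]
    store[h] = {"body": body, "deps": list(deps)}
    return h

id_v1 = add_term("x -> x")
names["identity"] = id_v1
caller = add_term("identity 42", deps=[id_v1])   # stores the hash, not the name

# "Editing" identity just creates a new term with a new hash; renaming is a
# one-entry metadata change; old callers keep pointing at the old hash.
id_v2 = add_term("y -> y")
names["identity"] = id_v2
assert store[caller]["deps"] == [id_v1]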

---

" Trivial sharing, and linking to Unison panels, values, and types. A Unison panel is itself a Unison term, with a unique hash. Thus, Unison preserves an important aspect of the web: linkability. Any publicly available Unison node can expose the set of values it knows about, and others can link to and thus build upon functionality written by others. Unlike applications, Unison panels are a starting point for further composition and experimentation! " -- http://unisonweb.org/2015-05-07/about.html

---

http://www.jetbrains.com/mps/

---

" We typically think of programs as being represented by their textual form. If we want to produce a UI, we write a program whose output is some sort of renderable graphical object—perhaps a pile of HTML/CSS/JS. Let’s consider this a form of compilation from our program to our graphical object, G. Like most forms of compilation, changes to the compiled output can’t be trivially reflected back into our source program. As a result, we tend to think of edits and interactivity on G as being distinct from the activity of editing our program. Put another way, we think of interaction with a UI as being a fundamentally different activity than programming.

That is how we typically think of things, but there’s another perspective—the UI is a program. We don’t write a program to produce a UI, we write a program that is a UI. That is, the UI is simply a specific kind of rendering of a program, and interacting with the UI is, quite literally, programming. I don’t mean that it’s analogous to programming, I mean that it is programming, in the sense that the user’s interaction with the UI is directly mapped to edits of an underlying program.

This probably doesn’t make much sense, so I’m hoping a few demonstrations will clarify.

Aside: This perspective, of the program as UI, is something Conal Elliott also talks about in his work on Tangible Functional Programming.

" -- http://pchiusano.github.io/2014-11-13/program-as-ui.html

--

https://www.flickr.com/photos/wholeplatform/5340227177/

--

here's an example of how Paul Chiusano's 'unison' works. If there is code like:

" answer : Int -> Bool answer 42 = True answer _ = False

visualAnswer : Int -> Bool visualAnswer = let msg = "The Answer to The Ultimate Question of Life, the Universe, and Everything..." in cell (function1 (arg -> vertical [ panel (text h1) msg, arg])) answer

visualAnswer 42 -- renders as /resources/unison/demo-42.html " -- http://pchiusano.github.io/2014-11-13/program-as-ui.html

then the definition for 'visualAnswer' specifies an interactive page which is rendered as a header1 (h1) message "The Answer to The Ultimate Question of Life, the Universe, and Everything..." at the top, and then underneath it, a field. The contents of the field will be given as an input to the function 'answer'. Then 'visualAnswer 42' instantiates this with the field bound to '42'. As you browse the page, your cursor tells you the path (lens) to the thing the cursor is over. Example at http://pchiusano.github.io/resources/unison/demo-42.html

--

responses to the Unison language:

bcg1 1 day ago

On the surface it sounds interesting, but I don't really buy the idea that programming is terribly limited by the quality of text editors. Even "basic" editors like vim and emacs can almost instantly catch most syntax problems, and IDEs like Eclipse and Visual Studio are great about complaining if you make a mistake.

In my experience, these days relatively few problems come from language syntax errors ... most problems come from mistakes in logic, poor systems integration, or incorrect validation of input data. Not sure how Unison will solve those challenges, but to be fair maybe that is out of scope.

However it is hard to slog through the marketing, and in the end it's just not clear from the writeup what sort of programs one would write in Unison, so it doesn't really inspire me to spend any more time trying to learn about it.

reply

Ericson2314 15 hours ago

One of the mantras of functional programming is get your data structure right, and everything else falls into place. As an experienced functional programmer, I see this in practice every time I program--even a slightly "wrong" type pollutes the code dramatically.

The idea then with structure/tree/semantic editors is two-fold.

First, text is obviously the wrong data structure for programs, so even if people manage to provide fairly good support with the "modern IDE", the effort it takes is staggering. And still details like files, character encodings, god damn tabs vs. spaces, and terminals leak through. One can't help but wonder how much farther they would get were their efforts directed more efficiently. Look at this https://github.com/yinwang0/ydiff for example, blows text diff out of the water.

Second, the idea is maybe text is the "wrong data structure" for programmers too, not just tool-smiths. I admit there are a bunch more psychological/neurological aspects that make the answer less obvious. But I see no reason why not to keep an open mind until a polished tree editor exists for empirical testing. Even if text is in some ways better, the improved functionality a tree editor offers could offset that dramatically.

reply

seanmcdirmid 15 hours ago

Functional programming kind of screws itself when it comes to "implementing" modern good IDEs; the aversion to state makes it difficult to implement good tree-incremental error resistant parsing and type checking that is usable in a language aware editor.

I've developed plenty of advanced IDE infrastructure; e.g. see

http://research.microsoft.com/en-us/people/smcdirm/managedtime.aspx http://research.microsoft.com/en-us/projects/liveprogramming/typography.aspx

The trick was abandoning FRP-style declarative state abstractions (which was my previous research topic) and moving onto something that could manage state in more flexible ways (as we say, by managing times and side effects instead of avoiding them). And once you nailed the state problem, incremental parsing, type checking, and rich editing are actually easy problems.

reply

tome 14 hours ago

Can you give some more detail about what you replaced FRP with? Sounds interesting.

reply

seanmcdirmid 13 hours ago

I replaced it with glitch. It was actually something I started developing for the scala IDE a long time ago that had to integrate easily with scalac, so it had to handle the limited set of effects performed by it (they no longer use this in the scala plugin, but initial results were promising, and being very incremental dealt with scalac's performance problems).

I've refined the technique over the last 7 years, you can read about it in a conference paper:

http://research.microsoft.com/pubs/211297/onward14.pdf

You can think of Glitch as being like React with dependency tracing (no world diffing) and support for state (effects are logged, must be commutative to support replay, and are rolled back when no longer executed by a replay).

reply

---

"

Persistent data sources must be accessible via a high-level, typed API. Unison’s architecture means that all Unison values can be trivially (and efficiently) serialized and deserialized to a persistent store. No more having to discard all type information and invent ad hoc encodings for persistence of each and every business object to whatever data store. In addition to internally managed datasets, support is also planned for connections to external data sources, while retaining a high-level API that abstracts over the particulars of each source. Individuals and businesses will be able to hook Unison up to datasets they own, and if they wish, share access to these datasets with other nodes in the Unison web. In doing so they get access to all the reporting, visualization, and computational capabilities of the Unison language and its general-purpose editor. "

---

a = 3
b = 4
a, b = b, a
a, b == (4, 3)

---

"I skimmed documentation of Python after people told me it was fundamentally similar to Lisp. My conclusion is that that is not so. `read', `eval', and `print' are all missing in Python." -- https://stallman.org/stallman-computing.html

---

Charlie Stross's security rant:

http://www.antipope.org/charlie/blog-static/2010/08/where-we-went-wrong.html

1) should not be able to execute data as code (Harvard arch over von Neumann)
2) no null-terminated strings (pointer to end of array as first element of array)
3) crypto at the TCP/IP level (both encryption and authentication)
4) misc stuff regarding WWW (the only one he precisely states is that Javascript is too dangerous, as it's a form of 'data as code')

---

python vs R:

---

https://wiki.python.org/moin/Powerful%20Python%20One-Liners

https://news.ycombinator.com/item?id=8158976

tangentially related:

---

             Stanford EE Computer Systems Colloquium             
                                                                 
                 4:15PM, Wednesday, June 3, 2015                 
      HP Auditorium, Gates Computer Science Building Room B1     
                       Stanford University                       
                   http://ee380.stanford.edu[1]                  

Topic: The Future of Trustworthy Computer Systems: A Holistic View from the Perspectives of Hardware, Software, and Programming Languages

Speaker: Peter Neumann SRI International

About the talk:

The state of the art of trustworthiness is inherently weak with respect to computer systems and networks. Essentially every component today is a potential weak link, including hardware, operating systems, and apps (for desktops, laptops, network switches and controllers, servers, clouds, and even mobile devices), and above all, people (insiders, penetrators, malware creators, and so on). The potentially untrustworthy nature of our supply chains adds further uncertainty. Indeed, the ubiquity of computer-based devices in the so-called Internet of Things is likely to make this situation even more volatile than it already is.

This talk will briefly consider system vulnerabilities and risks, and some of the limitations of software engineering and programming languages. It will also take a holistic view of total-system architectures and their implementations, which suggests that some radical systemic improvements are needed, as well as changes in how we develop hardware and software.

To this end, we will discuss some lessons from joint work between SRI and the University of Cambridge for DARPA, which is now nearing several possible transition opportunities relating to some relatively clean-slate approaches. In particular, we are pursuing formally based hardware design that enables efficient fine-grained compartmentalization and access controls, new software and compiler extensions that can take significant advantage of the hardware features. SRI's formal methods tools (theorem prover PVS, model checker SAL, and SMT solver Yices) have been embedded into the hardware design process, and are also applicable selectively to the software. This work for DARPA is entirely open-sourced. The potential implications for hardware and software developers are quite considerable. SRI and U.Cambridge are also applying the knowledge gained from our trustworthy systems to software-defined networking, servers, and clouds, along with some network switch/controller approaches that can also benefit from the new hardware. For example, Phil Porras has described some of the SDN work of his team in last week's talk at this colloquium.

Slides:

No slides to download at this time.

Videos:

Join the live presentation.[2] Wednesday June 3, 4:15-5:30. Requires Microsoft Windows Media player. View video by lecture sequence. [3] Spring 2015 series only, HTML5. Available after 8PM on the day of the lecture. View video on YouTube about 48 hours after the presentation. A link to the video will be installed here.

About the speaker:

[speaker photo]

Peter G. Neumann (email address omitted) has doctorates from Harvard and Darmstadt. After 10 years at Bell Labs in Murray Hill, New Jersey, in the 1960s, during which he was heavily involved in the Multics development jointly with MIT and Honeywell, he has been in the Computer Science Lab at SRI International (formerly Stanford Research Institute) since September 1971 -- where he is now Senior Principal Scientist. He is concerned with computer systems and networks, trustworthiness/dependability, high assurance, security, reliability, survivability, safety, and many risks-related issues such as election-system integrity, crypto applications and policies, health care, social implications, and human needs -- including privacy. He is currently Principal Investigator on two projects: clean-slate trustworthy hosts for the DARPA CRASH program with new hardware and new software, and clean-slate networking for the DARPA Mission-oriented Resilient Clouds program. He moderates the ACM Risks Forum (http://www.risks.org[4]), and has been responsible for CACM's ongoing Inside Risks articles since 1990, when he began chairing the ACM Committee on Computers and Public Policy. He created the ACM SIGSOFT Software Engineering Notes in 1976, was its editor for 19 years, and still contributes a RISKS-highlights section six times yearly. He has participated in four studies for the National Academies of Science: Multilevel Data Management Security (1982), Computers at Risk (1991), Cryptography's Role in Securing the Information Society (1996), and Improving Cybersecurity for the 21st Century: Rationalizing the Agenda (2007). His 1995 book, Computer-Related Risks, is still timely; perhaps surprisingly, many of its conclusions and recommendations are still valid today, as incidents similar to those described continue to occur. He is a Fellow of the ACM, IEEE, AAAS, and SRI. He received the National Computer System Security Award in 2002, the ACM SIGSAC Outstanding Contributions Award in 2005, and the Computing Research Association Distinguished Service Award in 2013. In 2012, he was elected to the newly created National Cybersecurity Hall of Fame as one of the first set of inductees. He is a member of the U.S. Government Accountability Office Executive Council on Information Management and Technology. He co-founded People For Internet Responsibility (PFIR, http://www.PFIR.org[5]). He has taught courses at Darmstadt, Stanford, U.C. Berkeley, and the University of Maryland. See his website ( http://www.csl.sri.com/neumann [6]) for testimonies for the U.S. Senate and House and California state Senate and Legislature, papers, bibliography, further background, etc.

Contact information:

Peter Neumann SRI International (email address omitted)

Embedded Links:
[1] http://ee380.stanford.edu
[2] http://coursematerials.stanford.edu/live/ee380.asx
[3] https://mvideos.stanford.edu/graduate#/SeminarDetail/Spring/2015/EE/380
[4] http://www.risks.org
[5] http://www.PFIR.org
[6] http://www.csl.sri.com/neumann
[7] (email address omitted)

---

wumbernang 3 hours ago

TBH I couldn't find a decent book or resource and sort of hacked my way around for a year or so. It had a good built-in manual (get-help). This is a good poke at the fundamentals of my example:

Scrape a page:

   $flight = " LH3396"
   $url = "http://bing.com?q=flight status for $flight"
   $result = Invoke-WebRequest $url
   $elements = $result.AllElements | Where Class -eq "ans" | Select -First 1 -ExpandProperty innerText

Hit a REST endpoint:

   $body = @{
       Name = "So long and thanks for all the fish"
   }
   Invoke-RestMethod -Method Post -Uri "$resource\new" -Body (ConvertTo-Json $body) -Header @{"X-ApiKey"=$apiKey}

Sources:

[1] http://stackoverflow.com/questions/9053573/powershell-html-p...

[2] http://www.lavinski.me/calling-a-rest-json-api-with-powershe...

reply

tim333 4 hours ago

Not an expert but there's a "PowerShell in 10 Minutes" here:

http://social.technet.microsoft.com/wiki/contents/articles/1...

reply

david-given 4 hours ago

I did find an open source implementation here:

http://pash.sourceforge.net/

Don't know if it's any good --- never tried it; it says it's about half complete, but I don't know if it's a useful half.

After looking at the docs, you could do a lot of what Powershell does with Unix shells; you'd need a different set of conventions, where instead of using unformatted text as an intermediate format you used a streamable table format with support for metadata. Then you could have commands like 'where', which would be awesome.

$ xps | xwhere user -eq dg | xsort -desc rss | xtop 10 | xecho "rss=@.rss cmdline=@.command"

...or something. sh syntax is a bit lacking; PowerShell's got lots of useful builtins, including having native support for the format so it knows how to present it to the user. An sh version would need conversion routines back and forth from text.

The tricky part would be bootstrapping; getting enough functionality quickly enough that enough people would start using it to make it sustainable.

I'd still rather use this than faff around with awk, though. I've done way too much of that. And if I never have to parse the output of ls -l using cut again, I will be a happy person.

reply
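
a minimal sketch of what one of those "x" tools could look like (mine, not pash or PowerShell): records travel as one JSON object per line, so a generic where-filter can match on named fields instead of cutting columns out of text:

# xwhere.py -- filter JSON-lines records on stdin by field == value,
# e.g.  xps | python xwhere.py user dg | ...
import json, sys

def xwhere(field, value):
    for line in sys.stdin:
        record = json.loads(line)
        if str(record.get(field)) == value:
            sys.stdout.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    xwhere(sys.argv[1], sys.argv[2])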

XorNot 3 hours ago

Isn't this just complaining you want a different tool? The entire Powershell format depends on, amongst other things, the object interface being sane, existing, and usable.

The whole point of the GNU system was working around the usual "I can see it here, need this bit, and want to put it there". If you need to do something really specific a lot, you write a tool which does that.

reply

emmelaich 1 hour ago

There's also http://www.lbreyer.com/xml-coreutils.html which is interesting.

But I've never tried it; just came across it recently as a homebrew update.

reply

---

freehunter 6 hours ago

SYSLOG! For the love of god, please support syslog! I work as a consultant supporting a SIEM, and the amount of hoops we need to jump through to get logs from Windows servers is crazy compared to changing one line in a syslog.conf file. I actually dread when a client says "we're an all Windows environment" because wow initial setup just got that much harder.

And if we want to install a syslog forwarder on their domain controllers... no one ever trusts software installed on their domain controllers. Everyone trusts a single line in syslog.conf.

reply

sudioStudio64 4 hours ago

MS has supported event forwarding since 2003. You can set machines to forward events or have them pulled. The events are XML that conform to a published schema. There is a WMI call that can pull the aggregated events off the collection servers.

I heard this kind of thing from a vendor the other day. It's like people don't even try to learn how it works.

Why are they different? The event log has some transactional guarantees that were required for a specific kind of security evaluation...C2? I can't remember the rest.

The thing is...you don't have to install a client on the domain controllers. You can setup forwarding to some log hosts and collect them there.

reply

pjc50 3 hours ago

It's like people don't even try to learn how it works

It's much harder to know how it works, for some reason. Information like this doesn't make its way into the community and circulate. On a UNIX system you can poke around /etc and get an idea of the scope of what is configurable. The same is very much not true of the registry and only slightly true of WMI.

reply

sudioStudio64 30 minutes ago

You are right. The guy that wrote PowerShell says that UNIX is document oriented configuration while windows is API oriented configuration.

To get into it in any depth you have to approach windows programmatically. The most power is through C\C++...to be a good windows admin you need to read the docs about how you interact with different subsystems, even if you aren't going to code against them.

reply

darklajid 6 hours ago

You CAN export the event log data of course, even remotely. But it's rather ugly and I agree that some standard aggregation (syslog or anything) would be great.

reply

mhurron 6 hours ago

Even the choice of outputs (of which remote syslog should be one) would be a great addition to the Windows event log.

reply

---

you should be able to copy text from a function into an interactive terminal pretty easily to test a fn. This includes setting the parameters which have default values to their default values

eg in Python from

def image_mask_onto_atlas_superimpose_pipeline(filepath, atlas_id, superimposed_filepath_template='%(input_filepath)s_atlas_%(atlas_section_number).png', selectChannel=3, downsampling_atlas=16, colorRGB = (1,0,0), atlas_image_type='Atlas+-+Adult+Mouse', aba_api_base = 'http://api.brain-map.org/api/v2'):

you get

"downsampling_atlas=16, colorRGB = (1,0,0), atlas_image_type='Atlas+-+Adult+Mouse', aba_api_base = 'http://api.brain-map.org/api/v2'"

which is almost there, but you'd prefer

downsampling_atlas=16; colorRGB = (1,0,0); atlas_image_type='Atlas+-+Adult+Mouse'; aba_api_base = 'http://api.brain-map.org/api/v2';
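
a little helper can generate that mechanically (a sketch; the helper name is mine):

# Turn a function's keyword defaults into paste-able assignment statements.
import inspect

def defaults_as_assignments(fn):
    sig = inspect.signature(fn)
    parts = ["%s = %r" % (name, p.default)
             for name, p in sig.parameters.items()
             if p.default is not inspect.Parameter.empty]
    return "; ".join(parts) + ";"

def example(filepath, atlas_id, selectChannel=3, downsampling_atlas=16,
            colorRGB=(1, 0, 0)):
    pass

print(defaults_as_assignments(example))
# selectChannel = 3; downsampling_atlas = 16; colorRGB = (1, 0, 0);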

---

to quote a friend, sometimes formal languages/taxonomies/bureaucracy makes it "impossible to be imprecise about things that by nature are ambiguous". i guess this is a problem with type systems too. And then on the other hand, if the type system tries to let you specify exactly how much ambiguity you want (generics with upper and lower type bounds, for example), this makes it more complicated and verbose.

(perhaps all we need is more typedefs for the ambiguity? plus useful syntactic defaults to ambiguity, eg the way that 'is' can mean identity (he is Bob), property (the sky is blue), isa (he is a human), or subclass (a human is an animal))

---

powershell:

https://certsimple.com/rosetta-stone

---

codygman 4 hours ago

As someone who used Go in the past for work and now uses Haskell, I can say that the advantages of Haskell over Go are more than "just theoretical".

In my day job I use Haskell to "just get shit done". However when it comes time to refactor I can do it much faster and safer in Haskell.

The ease of refactoring creates an incentive to improve code.

reply

crimsonalucard 3 hours ago

Seems like Go and other imperative languages are easier to start a project off with, but functional languages really shine when the code becomes really large and complex.

reply

---

https://www.tbray.org/ongoing/When/201x/2015/06/05/End-of-HTML

---

"Apart from syntax wins, you can't get much lower-level semantically and keep both safety and linear-time verifiability. Java bytecode with unrestricted goto and type confusion at join points makes for O(n^4) verification complexity. asm.js type checking is linear.

New and more concise syntax may come, but it's not a priority (gzip helps a lot), and doing it early makes two problem-kids to feed (JS as source language; new syntax for asm.js), which not only costs more but can make for divergence and can overconstrain either child. (This bit Java, pretty badly.)" -- https://news.ycombinator.com/item?id=9673582

---

isaiahg 15 hours ago

What would be really nice is something similar to .Net's virtual machine CLR. It would offer the flexibility to design in whatever language you want and possibly a performance boost as well.

reply

sparkie 9 hours ago

The CLR isn't really very flexible. Most languages that run on it are pretty similar. If you try to implement a language like Haskell on top of it for example, it quickly becomes obvious that it isn't a good fit - because the CLR has no support for higher kinds, typeclasses, laziness, parametric polymorphism etc. Sure, you can implement a Haskell interpreter/compiler that converts to .NET, but there certainly won't be any performance boost by boxing everything into objects and having several layers of indirection to simply invoke a function (and not really any advantage over writing such interpreter in say, JS).

A more ideal VM would be one which doesn't force a particular paradigm on you, but just abstracts over the CPU, using capabilities to restrict which instructions can be invoked. The CLR lacks such capabilities. Perhaps something like SafeHaskell would be in the right direction, where side-effects are limited, and the user can optionally allow websites to invoke trusted modules.

reply

---

nerraga 13 hours ago

That only addresses the language component though, right? You'd still be stuck with HTML and CSS. I'm okay with javascript although I'd love to see another language supported in a similar first class fashion. It's HTML and particularly CSS that feel too overloaded, document-centric, and just plain hacky. I think something a little closer to XAML, or possibly like AML, would be a great addition. It'd be great to have support for a responsive layout without having to deal with responsive design as it exists today (amazing as it is).

reply

---

" Citing linguistic theory of the late 20th century, Graham (1989) maintains that the tendency of Chinese thought, as demonstrated in the Chinese language, is to think in terms of whole/part rather than class/member relationships. That is, the parts of a whole are considered in terms of their relationships with the whole, not their similarities to/differences from one another. In the above quote, all the masters are likened to be the sage ruler’s limbs. It is not to say that the sage ruler has dozens of limbs anatomically. Instead, a limb is only an analogy of a part and the masters’ writings are all simply parts of the supreme wisdom regardless of whether they share a common set of characteristics " -- Organizing Knowledge the Chinese Way, by Hur-Li Lee.

so, we need to have fields, and also roles, to have lots of meronymy as well as isa (instance/class) relationships
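
A toy sketch of what that might look like as data, in Python (all names here are made up; the point is just that "part of" and "plays role X in" sit alongside "is an instance of"):

  class Thing:
      def __init__(self, name, isa=None):
          self.name = name
          self.isa = isa        # instance/class (isa) relationship
          self.parts = []       # meronymy: what this whole is made of
          self.roles = {}       # role name -> the part playing that role

  person = Thing("person")
  ruler = Thing("sage ruler", isa=person)
  master = Thing("master", isa=person)
  ruler.parts.append(master)        # the masters as parts of the whole
  ruler.roles["limb"] = master      # described via the 'limb' role, not a class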

---

http://www.codersnotes.com/notes/a-constructive-look-at-templeos

" TempleOS? has system-wide autocomplete. You can hit Ctrl-F1 at any point and get a list of completable words. Not just filenames, but also symbol names. All source code is indexed and you can jump to any function from anywhere, even from the shell. The same system works in any program throughout the OS.

TempleOS’s? unified hypertext really shines when presented in the shell. From the command-line, you can call Uf(“Foo”) to disassemble a function, and each symbol printed will be hyperlinked in the shell window. Click on it to go to the source. objdump can’t do that.

The Type() function is used to display files, like DOS’s type or Unix’s cat. Of course, hypertext is respected. You can even use Type to show .BMP files directly in the shell. It raises an interesting challenge for other OSs – why do shells have to be pure text? Why can’t we have a multimedia shell? ... By using the hyperlink system that permeates the operating system, the shell itself can act as an explorer. Type Dir; for a listing, then you can simply click on any directory hyperlink to change to that directory and get a new listing, all within the same shell. Or click on “..” to go up. It takes a little getting used to, but after having used it for a while I have to admit growing quite attached to it.

...

The most notable feature of TempleOS? is its ubiquitous hypertext system, DolDoc?. This is the foundation for both the shell and the text editor. Unlike Unix, which represents everything via plain text, everything in Temple is stored in DolDoc? format. The format itself is somewhat akin to RTF, and you can hit Ctrl-T at any point to inspect the raw text directly.

But DolDoc? isn’t just for text. You can store images (and even 3D meshes) directly into documents. You can put macros in there: hyperlink commands that run when you click on them. So if you want to build a menu or launcher, you just make a new text document and put links in it.

All of this allows something similar to the Oberon system developed at ETH Zürich http://ignorethecode.net/blog/2009/04/22/oberon/ , where the distinction of text, programs, menus and forms all blurs together into one.

...

In a file from the TempleOS? source code, one line contains the passage “Several other routines include a …”, where the “other routines” part is a hyperlink. Unlike in HTML, where that perhaps may lead to a page listing those other routines, here a DolDoc? macro is used so that a grep is actually performed when you click on it. While the HTML version could become stale if no-one updated it, this is always up-to-date.

http://www.codersnotes.com/content/uploads/2015/06/flowchart.png

It’s not every IDE that lets you embed images and flowcharts directly into your source code, that kinda makes you sit up and take notice. And yes, those flowchart boxes are hotlinked, so you can click on them to go directly to the source code that implements them.

You can press Ctrl-R at any point to bring up the resource editor, which lets you draw things. The sprites you draw are embedded directly in the document, and you can refer to them using numbered tags. There’s no standalone paint program supplied with the OS because you already have one accessible at any time, from within any program. If you want to sketch a doodle, just open a new document, draw things into it and save it out.

HolyC?

The language provided, HolyC?, is at its heart a reasonably complete version of C, but with some notable extensions.

There is no main() function in TempleOS? programs. Anything you write at top-level scope is executed as it runs through the compiler. In C++ you can do something like “int a = myfunction();”, but you can’t just write “myfunction();” and just run it. Why not?

Every piece of code in TempleOS? (except the initial kernel/compiler) is JIT compiled on demand. Yes that’s right – you can run a program without compiling it, simply by using an #include statement from the command line. The program is then brought into the shell’s current namespace, and from there you can execute individual functions directly just by issuing commands.

You can tag a function with the #help_index compiler directive, and it’ll automatically appear in the documentation at the right place. And yes that’s fully dynamic. You don’t need to run a rebuild process, just compile the file and the documentation updates. Hit F1 and you can see your changes reflected in the help system.

HolyC? provides an #exe compiler directive, which can be used to shell out to external commands and include their output back into the source code. This provides a way for the user to implement a certain set of functionality that would otherwise require macros or specialized compiler support. You can attach any metadata to any class member.

http://www.codersnotes.com/content/uploads/2015/06/metadata.png

HolyC's? class system implements full metadata and reflection support. Given a class, you can enumerate every member to get its name, offset, etc. What's surprising is that you can also attach any custom metadata to any class member at compile time. Example uses for this might include storing its default value, min/max range, or printf format string. Does your language support this?

The special lastclass keyword can be used as a default argument for functions. It causes the compiler to supply the name of the previous argument’s type as a string, allowing you to then do metadata lookups from it.

There is no ahead-of-time linker, nor object files. The dynamic linker is exclusively responsible for binding symbols together at load time. The symbol table remains accessible at runtime, and can be used for other purposes. TempleOS? has no environment variables – you just use regular variables.

http://www.codersnotes.com/content/uploads/2015/06/sourcelinks1.png

Programming Environment

HolyC? has no formal build system. You just compile a file and you’re done. If your project spans more than one file, you just #include all the files into one and compile that. The compiler can compile 50000 lines of code in less than a second.

When you hit F5 in the editor, the program is JIT compiled and run. The top-level statements are executed in turn, and your task is now loaded and ready. If your top-level statements included any sort of loop, it’ll stay running there. Otherwise you’ll be dropped back into a new shell. However, the shell exists within your task, so you can interactively use it as a REPL and start calling functions inside your program. Does your current IDE support links to documentation?

Or, perhaps you might place a call to Dbg() somewhere in your program, or hit Ctrl-Alt-D, and be dropped directly into the debugger at that point. The debugger of course, still functions as an interactive REPL. Does your IDE support a drop-in REPL?

How much support code does it take to open a window and draw graphics into it on your operating system? In TempleOS?, there is a one-to-one correspondence between tasks and windows, so a single call to DCAlias is enough to return the device context for your window. You can just do this as soon as your main function is called and draw into it. Making a window is surely the most important thing for a windowed operating system – why does it have to be hard? GDI, X11, DirectX? and OpenGL? could all learn something here.

...

There are no such things as threads in TempleOS?, as it doesn’t need them. Processes and threads are the same thing, because there’s no memory protection. If you need something in parallel, just spawn another process and let it share data with your own.

...

In many ways TempleOS? seems similar to systems such as the Xerox Alto, Oberon, and Plan 9; an all-inclusive system that blurs the lines between programs and documents.

In this video Terry gives a brief tour of some of the more interesting features of TempleOS?. At 5:50, he shows how to build a small graphical application from scratch. Now let’s just think about how you’d do this in Windows for a second. Consider for a minute how much code would be needed to register a windowclass, create a window, do some GDI commands, run a message pump, etc. You’d need to set up a Visual Studio project perhaps, and either use the resource editor to embed a bitmap, or try and load it from disk somehow. Now compare it to the tiny snippet of code that Terry writes to accomplish the same task. It certainly makes you wonder where we went so wrong. "

http://www.templeos.org/Wb/Doc/DolDocOverview.html
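
The compile-time member metadata idea above has a loose analogue in Python's dataclasses, where arbitrary per-field metadata can be attached and read back by reflection; a small sketch (the min/max/fmt keys are invented for the example):

  from dataclasses import dataclass, field, fields

  @dataclass
  class Knob:
      volume: int = field(default=5, metadata={"min": 0, "max": 11, "fmt": "%d"})
      label: str = field(default="", metadata={"fmt": "%s"})

  # enumerate every member and its attached metadata
  for f in fields(Knob):
      print(f.name, f.metadata.get("min"), f.metadata.get("fmt"))

It is not the same thing as HolyC's compiler-level support, but it shows the shape of the feature.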

white-flame 21 minutes ago

While not very familiar to many, this is highly similar to the old Lisp Machine environments. Those were similarly a clean break from prior OSes, with:

S4M 2 hours ago

I like the idea of the shell being a HolyC? interpreter. Do we have any analogs among the mainstream languages? I know I could fire up, say, the Python REPL and use it as my shell, but that wouldn't be convenient for manipulating files.

reply

ultimape 19 minutes ago

The C shell comes close: https://en.wikipedia.org/wiki/C_shell

For a while I was running Perl as my shell; it bakes in most common Unix commands as top-level functions, and it's easy to call ones that aren't built in using system calls. Perl is pretty nice for file manipulation out of the box if you know what you are doing.

reply

frou_dh 2 hours ago

I think even in HolyC? you'll be fully quoting all paths. General-purpose languages are going to need either a recognised-shorthand-expanding preprocessor or unfettered "reader" macros to be able to approach Bourne-shell-style unadorned text that does process and file wrangling.

reply

kayamon 1 hour ago

Presumably something more in the spirit of REBOL would be a good fit?

reply

http://www.templeos.org/Wb/Doc/HolyC.html

---

http://www.catb.org/esr/writings/taoup/html/plan9.html

" Plan 9 was an attempt to do Unix over again, better. ... They kept the Unix choice to mediate access to as many system services as possible through a single big file-hierarchy name space. In fact, they improved on it; many facilities that under Unix are accessed through various ad-hoc interfaces like BSD sockets, fcntl(2), and ioctl(2) are in Plan 9 accessed through ordinary read and write operations on special files analogous to device files. For portability and ease of access, almost all device interfaces are textual rather than binary. Most system services (including, for example, the window system) are file servers containing special files or directory trees representing the served resources. By representing all resources as files, Plan 9 turns the problem of accessing resources on different servers into the problem of accessing files on different servers.

Plan 9 combined this more-Unix-than-Unix file model with a new concept: private name spaces. Every user (in fact, every process) can have its own view of the system's services by creating its own tree of file-server mounts. Some of the file server mounts will have been manually set up by the user, and others automatically set up at login time. ... The single most important feature of Plan 9 is that all mounted file servers export the same file-system-like interface, regardless of the implementation behind them. Some might correspond to local file systems, some to remote file systems accessed over a network, some to instances of system servers running in user space (like the window system or an alternate network stack), and some to kernel interfaces. ... There is no ftp(1) command under Plan 9. Instead there is an ftpfs fileserver, and each FTP connection looks like a file system mount. ftpfs automatically translates open, read, and write commands on files and directories under the mount point into FTP protocol transactions. Thus, all ordinary file-handling tools such as ls(1), mv(1) and cp(1) simply work, ... Plan 9 has much else to recommend it, including the reinvention of some of the more problematic areas of the Unix system-call interface, the elimination of superuser, and many other interesting rethinkings. Its pedigree is impeccable, its design elegant, and it exposes some significant errors in the design of Unix. ... Some Plan 9 ideas have been absorbed into modern Unixes, particularly the more innovative open-source versions. FreeBSD? has a /proc file system modeled exactly on that of Plan 9 that can be used to query or control running processes. FreeBSD?'s rfork(2) and Linux's clone(2) system calls are modeled on Plan 9's rfork(2). Linux's /proc file system, in addition to presenting process information, holds a variety of synthesized Plan 9-like device files used to query and control kernel internals using predominantly textual interfaces. Experimental 2003 versions of Linux are implementing per-process mount points, a long step toward Plan 9's private namespaces. The various open-source Unixes are all moving toward systemwide support for UTF-8, an encoding actually invented for Plan 9.[ "
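
The "textual interfaces to kernel internals" point is easy to poke at on Linux, whose /proc borrows this idea; a minimal sketch (Linux-only, assuming the usual procfs layout):

  import os

  # ordinary reads on a special file, Plan 9 style
  with open("/proc/%d/status" % os.getpid()) as f:
      for line in f:
          if line.startswith(("Name:", "State:", "VmRSS:")):
              print(line.rstrip())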

---

tomkinstinch 2 hours ago

 This is nice! On OSX, processes can be paused and continued too (by name via killall, and by PID via kill):
  kill -STOP 1234
  kill -CONT 1234
  killall -STOP -c "Pandora"
  killall -CONT -c "Pandora"
 vezzy-fnord 1 hour ago

SIGSTOP and SIGCONT are generic Unix signals.

Where it gets more interesting is using elaborate checkpoint/restore [1] mechanisms for processes to serialize their state (open fds, watches, IPC, etc.) into an image that can then be overlayed, remotely executed, debugged and so forth.

[1] https://en.wikipedia.org/wiki/Application_checkpointing

reply
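
The same pause/resume trick is one os.kill away in Python, using only the generic signals mentioned above (the pid here is a placeholder for a process you own; pausing arbitrary processes can have side effects):

  import os, signal, time

  pid = 1234                      # placeholder: some process you own
  os.kill(pid, signal.SIGSTOP)    # pause; SIGSTOP cannot be caught or ignored
  time.sleep(5)
  os.kill(pid, signal.SIGCONT)    # resume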

---

"

mwfunk 371 days ago

I understand why ObjC?'s syntax makes some people bristle, but I've never felt that way myself. It's sort of like the people who really hate Python for no other reason than the meaningful whitespace. It's unconventional, but once you understand the rationale for it, it makes sense in a way that is at least forgivable if not likable.

There have been a lot of C-based object-oriented APIs over the years. GObject has a C API. On the Mac, there's Core Foundation and a bunch of other OS X APIs that are built on top of it. For over a decade on X11, before gtk and Qt even existed, the closest thing there was to a standard graphical environment was Motif (the corresponding desktop environment was CDE), and Motif was built on top of Xt. Xt was yet another C-based object system, although it was specialized for designing UI components.

This is all well and good but you end up with a ton of boilerplate code that does nothing but manage the lifecycles of the object instances (retain/release for example), and lends itself to extremely verbose function calls in place of object methods.

One possible solution is to put together some really elaborate preprocessor macros to make it look like you have extended the C language to include special syntax for your object system, so you can at least replace this:

obj foo = obj_factory();
int c = obj_getNumberOfElements(foo);

...with something more compact like this:

obj foo = [Obj new];
int c = [foo numberOfElements];

(the second example is ObjC?-ish but the former is nothing in particular other than just what the typical C object APIs tend to look like)

The only catch is that the little mini-language you are extending C with using macros can't use existing C syntax, because you can only add to the language, not alter the behavior of existing operators. So, you can't just do method calls using a dot syntax on the instance (such as foo.numberOfElements()). So, you have to come up with something new. Maybe you always liked Smalltalk, and maybe you even based much of behavior of your object system on how Smalltalk objects behave and interact? If so, you might settle on the bracket notation. This has the added benefit of making it very clear when a chunk of code is run-of-the-mill C versus when the code is triggering the syntactic sugar you created with macros to add support for your object system to the C language.

C++ doesn't exist yet, or else you might've just gone with that instead of rolling your own thing. Eventually C++ does exist, and you start to feel a little primitive for sticking with the weird macro language. You eventually build your mini-language into a C compiler so you don't have to use the macros anymore. You experiment with some new alternatives to the syntax that are more conventional, but no one uses them. Many developers like that the non-C-ish syntax makes it easy to distinguish between straight C code vs. interactions with the object system, which has its own set of rules and conventions.

Anyway, that's mostly speculation, but something like that story is how I've always thought Objective-C evolved over the years. I don't mind it nearly as much as long as I don't think of it as a separate programming language from C (like C++ or Java or pretty much anything else these days), but rather think of it as C with some useful syntactic sugar that gets rid of a ton of boilerplate code for a particular C-based object-oriented API.


austinz 371 days ago

According to http://en.wikipedia.org/wiki/Objective-C#History, that's actually almost exactly how it came to be. (Apple even experimented with changing the syntax: http://en.wikipedia.org/wiki/Objective-C#.22Modern.22_Object...) "

---

haskell stack vs cabal:

https://news.ycombinator.com/item?id=9687274

---

http://bropages.org/

---

Facebook Infer is a cool idea for a static analysis tool. It does incremental analysis and stores intermediate results in between runs so that you can run it on every commit and it doesn't take too long. Contrast with Coverity, which catches more but takes hours to run on tens of thousands of lines of code [1]. https://code.facebook.com/posts/1648953042007882

"

_shb 6 hours ago

Infer does bottom-up analysis: it starts at the bottom of the call graph and analyzes each procedure once independently of its callers. Analyzing the procedure produces a concise summary of its behavior that can be used in each calling procedure. This means that the cost of the analysis is roughly linear in the number of nodes in the call graph, which is not true for a lot of other interprocedural analysis techniques.

It's true that if a procedure changes you may have to re-analyze all dependent procedures (and calling procedures!) in the worst case. However, in the bottom-up scheme you only need to re-analyze a procedure when the code change produces a change in the computed summary, and in practice summaries are frequently quite stable.

reply

cactusface 5 hours ago

... So... what do you do about cycles? ...

theblatte 5 hours ago

Infer computes fixpoints whenever there is a cycle in the call graph, iterating until it reaches stable procedure summaries or times out.

reply
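
A toy sketch of that bottom-up, summary-based scheme (not Infer's actual code; the call graph is a plain dict, the summary function is a parameter, and the fixpoint loop is only there to handle cycles the way theblatte describes):

  # call_graph: procedure name -> list of callee names
  # summarize(proc, callee_summaries) -> some comparable summary value
  def analyze(call_graph, summarize):
      summaries = {p: None for p in call_graph}
      changed = True
      while changed:              # re-run only while some summary still changes
          changed = False
          for proc, callees in call_graph.items():
              new = summarize(proc, [summaries.get(c) for c in callees])
              if new != summaries[proc]:
                  summaries[proc] = new
                  changed = True
      return summaries

  # A real implementation would walk the call graph bottom-up (reverse topological
  # order) and only iterate to a fixpoint within strongly connected components,
  # which is what keeps the cost roughly linear in the number of procedures.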


Alupis 12 hours ago

Findbugs is also very good for Java apps, and is free. (Developed by the University of Maryland)[1]

[1] http://findbugs.sourceforge.net/

reply

pkaye 11 hours ago

Some of these tools (like Coverity) can be very comprehensive yet slow and expensive. We do our checking in layers: lint and memory-check tools on every checkin, and slower, more expensive tools on an hourly/nightly basis.

reply

amirmc 12 hours ago

More OCaml code coming out of FB. Can add this to the list, which includes Hack, Flow and Pfff [1].

The kinds of bugs it finds are listed at: http://fbinfer.com/docs/infer-bug-types.html

It's interesting to see how building tools with languages like OCaml can reduce bugs for teams, without them having to change the language itself. I do wonder what things would be like if such languages were used directly more widely.

[1] http://ocaml.org/learn/companies.html

reply

ignoramous 11 hours ago

We use Pfff internally to generate a code graph of large polyglot code-bases (think AOSP). This graph powers the querying engine that finds all tests to be run given a diff. It is incredibly useful. My only gripe is that Pfff isn't being maintained. Pfff can barely support Java 7, let alone Java 8. C/CPP support isn't as extensive as it is for PHP. How I wish Pfff was being actively maintained...

I've found Sourcegraph's srclib.org (Go) and Google's kythe.io (Cpp, Go) make some interesting strides in the static analysis field as well.

IMO, treating code as query-able data can open up a lot of possibilities, and OCaml fits the field like a glove.

reply

some dependencies it uses, and similar things: https://news.ycombinator.com/item?id=9701446

---

 nickpsecurity 7 hours ago

I love seeing this because it matches my recommendation for redoing the stack post-Snowden. I gave two options: (a) Wirth-style [1] with assembler -> high-level assembler -> Modula-2-like language -> safe Oberon-like language -> 4GL-like batteries included language; (b) VLISP-like [2] setup with assembler -> high-level assembler -> LISP interpreter -> PreScheme? compiler -> integrated PreScheme?/LISP/assembler system -> AOT or JIT compiler for full LISP.

This is kind of like a mix between the two. I like how the author illustrates each step well. The best illustration is showing how easily the core language can transform into a mainstream-grade language with extensible syntax and macros. A strength worth copying in any new language, albeit with guidelines on proper use. I bet it was all pretty fun, too.

[1] http://www.cfbsoftware.com/modula2/Lilith.pdf

[2] http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=E0F133C9131F36924AD1BD7A9E806B5A?doi=10.1.1.36.9989&rep=rep1&type=pdf

reply


structural equality is really just context-dependent equivalence:

This instance of a triangle and that instance of a triangle might have exactly the same properties as a shape, but they might be different instances. They do not have pointer equality, although they are equivalent under the homomorphism that discards all non-shape information.

not sure tho: see "'forms' vs non-form types" in myOntology.txt
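
In Python terms the triangle example might look like this, where __eq__ is the context-dependent (shape-level) equivalence and `is` is pointer identity (class and field names made up):

  class Triangle:
      def __init__(self, a, b, c):
          self.a, self.b, self.c = a, b, c
      def __eq__(self, other):
          # equivalence under the homomorphism that keeps only shape information
          return (isinstance(other, Triangle)
                  and (self.a, self.b, self.c) == (other.a, other.b, other.c))
      def __hash__(self):
          return hash((self.a, self.b, self.c))

  t1 = Triangle(3, 4, 5)
  t2 = Triangle(3, 4, 5)
  print(t1 == t2)    # True: structurally equal
  print(t1 is t2)    # False: different instances, no pointer equality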


C# in/out parameter modifiers remove the problem C++ has where the call-EE can take an argument by reference without the call-ER's consent (in C#, 'ref'/'out' must be repeated at the call site, so the caller opts in explicitly)



BrendanEich? 2 hours ago

Right, because wasm is for AOT (ahead of time) languages first. To support dynamic languages that don't map to JS well, the future extensions including JITting wasm will be needed.

reply

wora 8 hours ago

The Oberon language had a similar system called Juice back in 1997. It does exactly the same thing, i.e. using a binary format to store a compressed abstract syntax tree as the intermediate format, which can be compiled efficiently and quickly. I think it even had a browser plugin, much like a Java applet. Life has interesting cycles. I don't have the best link for Juice.

[1] https://github.com/berkus/Juice/blob/master/intro.htm [2] ftp://ftp.cis.upenn.edu/pub/cis700/public_html/papers/Franz97b.pdf

reply

gecko 5 hours ago

Honestly, everything we're doing recently feels like rediscovering Oberon.

Oberon had name-based public/private methods, like Go. It had ahead-of-time compilation of bytecode, as you pointed out. It had CSP-style concurrency, again like Go. The web for the last two years feels like we're rediscovering Smalltalk and Oberon and acting like we've just invented everything anew.

reply

BrendanEich? 3 hours ago

We didn't acknowledge a debt to Oberon (did Java? It owes one too, Bill Joy evaluated Oberon closely).

My pal Michael Franz at UCI studied under Wirth. Michael and a PhD? student, Chris Storck, did an AST encoder years back that influenced me and that I referenced re: wasm.

Oberon's great. JS was in the right place and right time. What can I say? I've been taking all the best ideas from the best projects and trying to get them incorporated, ever since 1995.

reply


" If you've only been exposed to OOP, and you don't have the bandwidth for learning an unrelated language, then start using a functional library like Underscore. Just use it everywhere, even if you think it's ridiculous. Don't use for-loops, use _.each. Try to think of every problem in terms of chaining map, reduce, zip, pluck, groupBy, etc. Do whatever you can to avoid side-effects. "

-

pluck:

pluck: _.pluck(list, propertyName)
A convenient version of what is perhaps the most common use-case for map: extracting a list of property values.

var stooges = [{name: 'moe', age: 40}, {name: 'larry', age: 50}, {name: 'curly', age: 60}];
_.pluck(stooges, 'name');

> ["moe", "larry", "curly"]

groupBy: _.groupBy(list, iteratee, [context])
Splits a collection into sets, grouped by the result of running each value through iteratee. If iteratee is a string instead of a function, groups by the property named by iteratee on each of the values.

_.groupBy([1.3, 2.1, 2.4], function(num){ return Math.floor(num); });

> {1: [1.3], 2: [2.1, 2.4]}

_.groupBy(['one', 'two', 'three'], 'length');

> {3: ["one", "two"], 5: ["three"]}
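
For comparison, rough Python analogues of those two helpers (not Underscore's implementation, just the shape of the operations; this group_by takes a function where _.groupBy would also accept a property name):

  def pluck(items, prop):
      # map specialized to "extract one property from each element"
      return [item[prop] for item in items]

  def group_by(items, keyfn):
      groups = {}
      for item in items:
          groups.setdefault(keyfn(item), []).append(item)
      return groups

  stooges = [{'name': 'moe', 'age': 40}, {'name': 'larry', 'age': 50}]
  print(pluck(stooges, 'name'))                   # ['moe', 'larry']
  print(group_by(['one', 'two', 'three'], len))   # {3: ['one', 'two'], 5: ['three']}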


the8472 9 hours ago

also important in my eyes: shared memory threading: https://github.com/WebAssembly/design/blob/master/PostMVP.md...

Threads themselves may be "ugly", but to build all the pretty abstractions you need them as foundation.

reply

BrendanEich? 9 hours ago

Yet also coming to JS, see https://blog.mozilla.org/javascript/2015/02/26/the-path-to-p.... Not a reason per se to do wasm, at least in the short term. Longer term, great to do full pthread things in wasm and not pollute JS.

reply

--- toread:

simd.js javascript sharedarraybuffer

---

webassembly

--- Why Mozilla Matters

    16 points BrendanEich 2 years ago comments 

"Get your tums out, pal. We're taking PNaCl? down for good this year with http://asmjs.org/. Cross-browser."

The State of JavaScript? - Brendan Eich

    8 points BrendanEich 3 years ago comments 

The bottom line is that whatever PNaCl? performance wins may lie in the future -- and I will believe them when Google does as shown by Chrome Web Store games being PNaCl?'ed not NaCl?'ed -- Pepper is the blocker for any cross-browser adoption in reality.

The State of JavaScript? - Brendan Eich

    56 points BrendanEich 3 years ago comments 

PNACL is a fine research project, but unfortunately both NaCl? and PNaCl? are tied to Pepper, a gargantuan API specified nowhere and implemented only in chromium.org code.

To say this is "Open Technology" is to reduce "Open" to the level of "Big company Big Bucks Open-washing." There is nothing open about an unspecified research project without a proven multi-party governance structure that's dominated from start to finish by Google, and which only Google could afford to staff and push -- including via big-money distribution deals with game developers and distributors.

As I said at Strange Loop and in past talks, don't shoot the messenger: Microsoft and Apple will never adopt NaCl?/Pepper. It is a non-starter as a web standard.

Why pray tell should Mozilla fall on Google's sword here? Why should we beg to be involved more "in the process" years after it started? Who are you to say that NaCl?/Pepper is better for developers or anyone else than a cross-browser approach targeting JS VMs, which are already there and getting fast enough with typed array memory models to compete with PNaCl?? (We aim to demonstrate this.)

NaCl?/Pepper looks like an incumbent power's technological folly, similar to Microsoft Active X or Google's Dart-as-a-native-VM. Just because a big company can pay for it does not make it "Open" or "Good" or good for the web.

BrendanEich? 977 days ago

You're doing better! But no, I don't "hate" Google. Big companies and big groups of people in general have inherent morally failure-prone properties. Google fights these, and in many ways still manages "don't be evil".

Heavy-handed and one-sided strategies and tactics may be odious, I may "hate" them -- you should too if they're not well-justified and likely to prevail -- but that's not the point of this exchange, which haberman started. Nice try deflecting, though.

The point of my slide, and of my comments here, is to make the case for what's best for the Web. So let's get back to that.

What is best for the Web? Not NaCl?, we all agree. PNaCl?? Not with Pepper as a mandatory API for all browsers to adopt. And PNaCl? has a JS back end.

Steve Jobs killed Flash. Plugins are in decline. However well Google, Mozilla, and others use NaCl? for native code safety, on the Web JS looks highly likely to continue, and to get super-fast for the well-typed and GC-free code produced by Emscripten.

This all points to a future where evolved JS is the PNaCl? format that works cross-browser. We're already working on this at Mozilla, but via Emscripten not PNaCl?. If Google aims its formidable engineers at the same goal and works in the standards bodies early and fairly, great. I'd love that.

Mozilla can produce near-native performance on the Web

    3 points BrendanEich 2 years ago comments ... The bit about "can be easily integrated into any browser" is false due to Pepper, the large new target runtime for *NaCl?. Pepper is a non-standard and unspecified-except-by-C++-source plugin API abstracting over both the OS and WebKit? -- now Blink -- internals.

To make such an airy assertion in an under-researched comment makes me suspect that you don't know that much about either PNaCl? or "any browser". So why did you make that confident-sounding claim?

These days, large apps are written in JS, even by hand. GWT is not growing much from what I can tell, compared to its salad days. Closure is used the most within Google, and Dart has yet to replace Google's use of GWT + Closure.

Outside Google, hundreds of languages compile to JS (http://altjs.org/). CoffeeScript? is doing well still. TypeScript? is Microsoft's answer to Dart, and more intentionally aligned with the evolving JS standard.

"Does Mozilla has any plans to improve the situation here?"

Have you heard of ES4? Mozillans including yours truly poured years into it, based on the belief that programming-in-the-large required features going back to "JS2" in 1999 (designed by Waldemar Horwat), such as classes with fixed fields and bound methods, packages, etc.

Some of the particulars in ES4 didn't pan out (but could have been fixed with enough time and work). Others are very much like equivalent bits of Dart. One troublesome idea, namespaces (after Common Lisp symbol packages), could not be rescued.

But ES4 failed, in part due to objections from Microsofties (one now at Mozilla) and Googlers. In a Microsoft Channel 9 interview with Lars Bak and Anders Hejlsberg, Lars and Anders both professed to like the direction of ES4 and wondered why it failed. _Quel_ irony!

As always, Mozilla's plans to improve the situation involve building consensus on championed designs by one or two people, in the standards bodies, and prototyping as we specify. This is bearing fruit in ES6 and the rest of the "Harmony era" editions (ES7 is being strawman spec'ed too now; both versions have partial prototypes under way). ... For programming in the large, ES6 offers modules, classes, let, const, maps, sets, weak-maps, and many smaller affordances. ...

Mozilla and Epic Announce Unreal Engine for the Web

    2 points BrendanEich 2 years ago comments 

Sorry, it was you, not me, who shifted the topic from "merit" to "doesn't work cross-browser". I called that out because I'm pretty sure int64 and uint64, along with SIMD intrinsics, are coming to cross-browser JS -- while NaCl? and PNaCl? and in particular Pepper are not going cross-browser. ... On NaCl? (not PNaCl?) minus Pepper, I think we agree. We're looking at zerovm and similar uses of SFI in the context of the Servo project, for safer native code. That is where NaCl? really shines, IMHO: safer native and OS/CPU-targeted code in native apps. It would be a shame if NaCl? doesn't cater to this better use-case and instead tilts at the PNaCl? cross-browser windmill.

Mozilla and Epic Announce Unreal Engine for the Web

    2 points BrendanEich 2 years ago comments 

We at Mozilla don't have $2M and a year to spare, but you're wrong: we've assessed doing Pepper with high fidelity (not porting chromium and duplicating all the DOM and other browser API implementations). It's more like $10M and multiple elapsed years to get to Chrome parity, assuming Chrome doesn't keep evolving and put us on a treadmill.

But then, I'm just the guy managing engineering and worrying about budget at Mozilla. Maybe you have greater skills. Where do you work?

Anyway, Pepper is big, with over a thousand methods among all the interfaces that are "specified" only by C++ implementation code we cannot port. We have a DOM implementation already, for example. So you cannot escape the fact that Pepper is "and also", not "instead of" -- there's no savings, it is purely added-cost, and significant cost.

I'm almost the only guy who will say this on HN, but as far as I can tell, Microsoft and Apple are on the same page. Maciej Stachowiak of Apple has agreed on HN, for what it's worth:

https://news.ycombinator.com/item?id=4648045

Enough whining about Mozilla not doing Pepper. Let's get back to asm.js.

---

Meanwhile, we are making JS a lot better as a target language, with things like ES5 strict mode, the ES-Harmony module system and lexical scope all the way up (no global object in scope; built on ES5 strict), and WebGL? typed arrays (shipping in Firefox 4).


brendan says: June 17, 2015 at 10:00 pm

@Foo: Yes, think of LuaJIT?2 ported to wasm. This is more than a cross-compile, and must support wasm via an LJ2 back end for all the JIT optimizations. PNaCl? folks did some work on how to support JITs, I believe. Same goal: support downloadable language engines that use classic JIT tricks (PICs, other self-modifying code structures).

/be