It's a minimalistic 'assembly language' based on combinator calculus, rather than on Turing machines (like most assembly languages) or the lambda calculus (like Haskell).

i did read the extant Hoon/Urbit stuff but i'm not recommending it; i mostly just like Nock (when they get around to actually documenting Hoon, i'll check it out then).

Why is Nock interesting? Just as another member of the menagerie of minimalist languages, with Turing machines and lambda calculus and SKI combinatory calculus and Graham's Roots of Lisp lisp ( http://lib.store.yahoo.net/lib/paulgraham/jmc.ps ) and Shen's KL ( http://www.lambdassociates.org/specification/shen_1.7.htm#Kl ) and asm.js and Haskell GHC Core ( http://stackoverflow.com/questions/6121146/reading-ghc-core ) and various other small languages and VMs (i don't claim to know anything about most of those).

With Turing machines, you feel like the big thing is changing the state of memory locations; but with lambda calculus and SKI combinators and Nock, you are reducing expressions.

With lambda calculus, you feel like the big thing is substitution; but with Nock (and presumably with SKI combinators, though i don't know them that well), it doesn't feel like substitution is the main thing.

Although Nock is functional, the first argument in the main Nock expressions (e.g. the 'a' in expressions of the form (a SMALL_INTEGER b), which Nock calls the 'subject') can be interpreted as a state, providing an intuitive bridge between the lambda-calculus-ish world of expression reduction and the Turing machine world of state mutation.
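To make the subject-as-state intuition concrete, here is a minimal Python sketch of Nock's tree-addressing rule, by which a formula reads data out of the subject. The encoding of nouns as Python ints (atoms) and 2-tuples (cells), and the name `slot`, are my own illustrative assumptions, not from the spec:

```python
# Nouns modeled as Python ints (atoms) or 2-tuples (cells) --
# an illustrative encoding, not part of the Nock spec.
def slot(axis, noun):
    """Nock tree addressing: /[1 a] = a, /[2 [a b]] = a, /[3 [a b]] = b;
    an even axis 2n recurses into the head, an odd axis 2n+1 into the tail."""
    if axis == 1:
        return noun
    head, tail = noun          # crashes on an atom, as Nock's / does
    if axis == 2:
        return head
    if axis == 3:
        return tail
    return slot(2 + (axis & 1), slot(axis >> 1, noun))

# The subject acts like addressable state: a formula just names the
# axis of the part of the state it wants.
subject = (42, (43, 44))
print(slot(2, subject))        # head -> 42
print(slot(6, subject))        # head of the tail -> 43
```

Note that HEAD and TAIL are just the special cases axis 2 and axis 3 of this one operation.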

Unlike both lambda calculus and SKI combinators, there is a sort of ‘assembly language programming’ feel to Nock. It’s not quite as minimal as lambda calculus or SKI combinators but for the price of a little bit of extra initial complexity, you get a lot of bang for your buck in terms of more intuitive programming. And you might even think it’s more minimal than lambda calculus if you count the complexity of rules like alpha-conversion against lambda calculus.

Unlike both lambda calculus and SKI combinators, Nock gives you unsigned integers and binary trees as primitive data types, and HEAD and TAIL and GET-THE-NODE-AT-TREE-ADDRESS-N and SUCCESSOR and ISEQUAL as primitive operations. Compare to e.g. lambda calculus where you don't have the complexity cost of having to include integers in the language definition, but then when you want to program arithmetic you have to use Church numerals. Similarly, with the simpler Turing machine models, you just get a bunch of basic commands for moving a machine head on a tape and making and erasing binary marks, and you have to build arithmetic and lists out of that.

So, it's more 'full-featured' than lambda calculus, SKI combinators, or Turing machines. Which is not surprising; Turing machines, lambda calculus, and SKI combinatory calculus are all minimalistic for the purpose of writing proofs about them, so they don't introduce any more constructs than necessary, because each extra construct means more cases in the proofs. By contrast, the Nock tutorial notes that Nock commands #6-#10 could be implemented in terms of #0-#5. So Nock is not minimalistic in the same way, to the same degree, or for the same reasons. But it is still much simpler than most VMs and programming languages, and is based on expression reduction, rather than variable or memory mutation.
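To show how small the primitive core is, here is a hedged Python sketch of an evaluator for just opcodes #0-#5, following the reduction rules of the older spec this text discusses (the encoding of nouns as ints/tuples, and the function names, are my own assumptions; the macro opcodes #6-#10 are omitted):

```python
# A sketch of a Nock-like evaluator, primitive opcodes 0-5 only.
# Nouns are Python ints (atoms) or 2-tuples (cells) -- my encoding.
def slot(axis, noun):
    # Tree addressing: /[1 a]=a, /[2 [a b]]=a, /[3 [a b]]=b,
    # even/odd axes recurse into head/tail.
    if axis == 1:
        return noun
    head, tail = noun
    if axis == 2:
        return head
    if axis == 3:
        return tail
    return slot(2 + (axis & 1), slot(axis >> 1, noun))

def nock(subject, formula):
    op, arg = formula
    if isinstance(op, tuple):     # *[a [b c] d] = [*[a b c] *[a d]]
        return (nock(subject, op), nock(subject, arg))
    if op == 0:                   # *[a 0 b] = /[b a]   (fetch from subject)
        return slot(arg, subject)
    if op == 1:                   # *[a 1 b] = b        (constant)
        return arg
    if op == 2:                   # *[a 2 b c] = *[*[a b] *[a c]]
        b, c = arg
        return nock(nock(subject, b), nock(subject, c))
    if op == 3:                   # *[a 3 b] = ?*[a b]  (cell test: 0=yes)
        return 0 if isinstance(nock(subject, arg), tuple) else 1
    if op == 4:                   # *[a 4 b] = +*[a b]  (increment an atom)
        return nock(subject, arg) + 1
    if op == 5:                   # *[a 5 b] = =*[a b]  (equality of a cell)
        head, tail = nock(subject, arg)
        return 0 if head == tail else 1
    raise ValueError("unknown opcode")

subject = (42, 43)
print(nock(subject, (0, 2)))      # fetch the head -> 42
print(nock(subject, (4, (0, 3)))) # increment the tail -> 44
print(nock(subject, (5, (0, 1)))) # head equal to tail? -> 1 (no)
```

Every reduction is either a fetch from the subject, a constant, a re-application, or one of three tiny tests on a single noun; everything else in Nock is sugar over these.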

when i say Nock doesn't feel like it's 'about substitution': when you reduce Nock expressions you are certainly substituting in the definitions of the Nock operations, but writing programs IN Nock doesn't feel like programming with substitution

---

urbit 3 days ago


You can get rid of the whole name reduction system. Which is hardly trivial. If you assume it, though, it's true that everything else is trivial.

Getting symbol tables, functions, environments, free and bound variables, etc, etc, out of the fundamental automaton, frees you up to design them right at the higher layer where they (IMHO) belong.

This philosophical argument has serious practical ramifications, I think, because it leads directly to the Question of Why Lisp Failed. Why did Lisp fail? Many people say, because it couldn't be standardized properly.

Why couldn't it be standardized? Because the Lisp way is not to start with a simple core and build stuff on top of it, but to start with a simple core and grow hair on it. So you end up with a jungle of Lisps that are abstractly related, but not actually compatible in any meaningful sense. This is because the lambda calculus is an idea, not a layer.

Basically the point of Nock is to say: let's do axiomatic computing such that it's actually a layer in the OS sense. The way the JVM is a layer, but a lot simpler. Lambda isn't a layer in this sense, so it doesn't provide the useful abstraction control that a layer provides.

---

tomrod 4 days ago


I should rephrase my question: on what technology platforms would one use Nock and Hoon? Embedded systems? Anything (due to "portability")?


urbit 3 days ago


Here's one way to think about it.

You have a Facebook profile, right? That's essentially a special-purpose cloud computer - a cloud information appliance. But, wouldn't it be cool to have an actual personal computer in the sky? Where "Facebook" or whatever would just be... a program on that computer?

You can get a Linux box in the sky, easy. But... that's 15 million lines of code you're wrangling. On a network which makes Helm's Deep look like Mayberry.

So, if it's your home in the cloud, it's not a cute little bungalow in the cloud. It's a giant castle in the cloud guarded by experienced professional elf ninja warriors. And ya know, even experienced professional elf ninja warriors, when they're off duty, like to let their hair down and not be thinking about orcs every other minute.

So if you want the bungalow in the cloud, you need basically a new programming/execution environment, and oh also a new network. So basically a new everything. So, a person could despair, or he could go build a new everything...


---

"Adding a string type seems like a very simple improvement to nouns. In fact it would be a disaster. For instance, what character set are the strings in? Unicode? Great. You've just taken a data model defined in three sentences, and expanded it to a shelf-sized manual set. If you have a shelf that strong, why not just start with VMS? Or, of course, you could include some simple, broken, built-in string type, in which case you're making the same mistake that VM/370 made with its record formats: inserting a low-level facility which higher layers will (or should) work around, not with." -- http://moronlab.blogspot.com/2010/01/urbit-functional-programming-from.html

---

an interesting thing about Nock is that it has no primitive (0-5) binary operators which combine information from two inputs to produce an output in a single step (e.g. a+b) -- everything goes through the state
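A hedged Python sketch of that point (the noun encoding and names are mine, modeled on the older spec's rules): increment takes one atom, and equality takes one noun that happens to be a cell, so adding two numbers means routing everything through the state:

```python
# Increment (+) acts on a single atom; equality (=) tests a single
# cell for head == tail. Neither combines two separate inputs in one step.
def inc(atom):                 # +a
    return atom + 1

def eq(cell):                  # =[a b]: 0 means yes, 1 means no
    head, tail = cell
    return 0 if head == tail else 1

# Addition is not primitive: a Nock-style add loops increments,
# carrying the running total and a countdown together as one state noun.
# (The `- 1` below is a shortcut; in real Nock, decrement itself must
# be built out of increments.)
def add(a, b):
    state = (a, b)             # [total count]
    while eq((state[1], 0)) == 1:
        state = (inc(state[0]), state[1] - 1)
    return state[0]

print(add(3, 4))               # -> 7
```

The addition never sees both operands "at once"; each step only increments one atom or tests one cell, with all the coordination living in the state.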