proj-oot-ootNotes24

haspok 2 days ago [-]

One thing that is completely missing from the Erlang side of the article are the OOB monitoring and operating capabilities.

An Erlang VM is a living system that has a shell which you can connect to, and control both the VM and the applications running in it. You can also remotely connect to another VM, execute arbitrary code, debug, stop processes, start processes etc. It really is an operating system in itself, that was _designed_ to be that way.

And the best part is that you get all this for free. Whether that is a good thing depends entirely on your needs. You probably wouldn't want to replace your bash scripts with Erlang programs :)

What Erlang is not really suited for is where you need multiple levels of abstraction, such as when implementing complex business logic. You would think that the functional nature of the language lends itself to that, but then you quickly realize that because the primary concern of an Erlang engineer is to keep the system alive, and for that reason you must be able to reason and follow the code as it is running on the system, all kinds of abstractions are very much discouraged and considered bad practice (look up "parameterized modules" for an example of a feature that was _almost_ added to the language but was discarded in the end).

I think that from this perspective Erlang and Go are actually very similar - both prefer simplicity over abstractions.

reply

julienmarie 2 days ago [-]

Totally agree. Erlang is quite against "magic", which greatly improves readability. Debugging is really straightforward 99.9% of the time.

reply

---

"Error handling is really the least of it. Attempting to replicate some of Erlang's high-level design principles at a quickly lead to roadblocks. For example, goroutines cannot be killed; this means it's impossible to fully implement supervisor trees in Go. It's possible that the runtime could support this at some point, but it's a big can of worms."

---

a Scala guy's complaints on Golang, excerpted from https://news.ycombinator.com/item?id=13485953 :

extempore 4 days ago [-]

Paul Phillips here. I searched my tweet archive for my tweets about Go. Draw your own conclusions.

---

Codemonkeyism 5 days ago [-]

Maintaining a Scala code base for some years, I've learned a lot. I would not go back to a language that does not support Option/Maybe and map/flatMap. These really changed my coding style.

My largest problems [1] are all still there after years; developers only paid lip service to them, and that killed Scala, I think.

The largest bad design decision was to support inheritance, which leads to its own problems with type inference. Sad that, after Java devs had already recognized how bad inheritance is, Scala also got inheritance.

The most glaring problem is how very, very, very slowly Scala compiles. This makes web development (even with Play) and unit testing a huge pain (and the complicated syntax + implicits + type inference make IntelliJ as an IDE very, very slow at detecting problems in your code).

Concerning the article, I do think Futures are a more powerful (and higher-level) concept compared to coroutines. They are easier to combine IMHO [2].

Now trying Kotlin for the faster IDE and compilation speed; sadly the Kotlin developers think Option is only about nullable types (it's not; it's something different!) and don't embrace it.

[1] http://codemonkeyism.com/scala-unfit-development/

[2] http://codemonkeyism.com/a-little-guide-on-using-futures-for-web-developers/

reply

adriaanm 5 days ago [-]

The only thing on your list on your blog [1] that's still true is that we care about PL research. Since 2.10, we've worked really hard on improving the migration between major versions, and the feedback has been very positive. We'll keep working on finding the right balance between ease of migration and fixing issues in the libraries. Scala 2.13 will be a library release, with further modularisation of the library (towards a core that we can evolve much more slowly, and modules that can move more quickly, but where you can opt to stay with older versions as you prefer).

We've also invested heavily in incremental compilation in sbt. Sbt is meant for use as a shell, and it's super powerful when used like that. When I'm hacking the compiler in IntelliJ, recompiles of some of the biggest source files in the compiler (Typers.scala, say) take just a few seconds. I rarely have time for office chair sword fights anymore.

With Scala 2.13, half of my team at Lightbend is dedicated to compiler performance. We'll have some graphs to show you soon, but our internal benchmarking shows our performance has steadily improved since 2.10.

reply

---

some complaints about Golang:

 waps 676 days ago | parent | favorite | on: “Go’s design is a disservice to intelligent progra...

I don't get where people get the idea that Go is simple.

1) go has generic functions : http://golang.org/pkg/builtin/, they're just not accessible to you

2) go has generic types : slices, maps, and channels, all of which have slightly differing behaviour, including some highly counter intuitive and contradictory behaviour (example: slices are pass-by-reference, arrays are pass-by-value. In other words []int and [5]int behave entirely differently; see the sketch after this list)

3) go has overloaded functions : http://golang.org/pkg/builtin/, they're just not accessible to you

4) go has exceptions (panic/recover), negating all the advantages of Go error checking, and providing zero fixes for the problems it introduces (finding the source line where an error happened and if/how multiple errors are related. Easy in Java/C++/..., hard in Go) (in C++ you have to be aware of whether any external code you use throws exceptions ... in Go you have to be aware if any external code you use panics. And if you say I haven't dealt with it, that means that you quite literally haven't dealt with it. Same as in C++. External libraries throwing exceptions is perfectly fine ... as long as they never actually throw an exception. A panicking standard library is fine ... as long as it doesn't panic ... If you're looking for correct code, it is of course not fine)

5) Go's "simple" threading and panics. Try crashing a Go program with shared data with a null-pointer derefence in the shared data. Someone please explain to me how the resulting output is simple.

6) golang's own compiler and standard library are not in fact idiomatic Go. This goes from small problems (like total lack of unit tests in quite a few places), to larger ones, like not using interfaces for logging.

7) interface{}. I just grepped a reasonable, thoroughly reviewed codebase I've written, of several tens of thousands lines of Go. Result: 2.7 uses of interface{} per 1000 lines of code.
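
A small Go sketch of point 2 (my own illustration, with made-up values): assigning or passing a [5]int copies all of its elements, while assigning a []int copies only a small header that still points at the same backing array, so writes through one slice are visible through the other.

    package main

    import "fmt"

    func main() {
        // Arrays are values: assignment copies all the elements.
        a := [5]int{1, 2, 3, 4, 5}
        b := a
        b[0] = 99
        fmt.Println(a[0], b[0]) // 1 99 -- the original array is untouched

        // Slices are small headers over a shared backing array:
        // assignment copies the header, not the elements.
        s := []int{1, 2, 3, 4, 5}
        t := s
        t[0] = 99
        fmt.Println(s[0], t[0]) // 99 99 -- both see the write
    }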

Is this what people here call simple to think about code? I would argue that you don't understand a codebase until you've read it. That's a certainty. So can we at least agree there's a point where verbosity no longer increases readability? I hope you can see Go is far past that point.

Go's type system : riddled with exceptional behaviour. Literally it says "type system applies unless it's doing <-, cap, len, copy, new, append or delete calls", in which case we do something custom. The resulting behaviour, of course, is inconsistent with Go's type system (which is really the point of the code of course, unfortunately, there's surprises buried in there, and those calls are inconsistent with each other as well).

---

to summarize "What PHP Deployment Gets Right" by Ian Bicking, Jan 12 2008 [1]

...

---

The JVM is not that heavy (opensourcery.co.za)

https://news.ycombinator.com/item?id=13543233

---

nikcub 7 hours ago [-]

Microsoft seem to have learned a lot from Java in designing their new .NET Core CLR. It has gotten almost everything right:

Java may have missed the window for fixing some of these issues in their platform - I feel that if Android were being developed today, they'd almost certainly take .NET Core as the runtime.

I've yet to commit to using .NET Core anywhere, but from what I know about it so far it is impressive.

[0] https://github.com/dotnet/coreclr

[1] https://raw.githubusercontent.com/dotnet/coreclr/master/PATE...

reply

---

rspeer 13 hours ago [-]

Another wart (in a separate post from my other wart so that replies are coherent):

I understand why &str and String are different, but why do they act like they've never heard of each other? Why do they implement such different sets of methods? Why can't they be compared for equality, so I don't always have to type "literal".to_string()?

Haskell has problems with too many string types as well (worse than Rust, because the type of their default string literals is best avoided entirely), but they fix much of the problem with the OverloadedStrings extension, which uses the type checker as something that helps you coerce string literals, instead of arguing with you.

reply

dbaupp 12 hours ago [-]

String == &str works. Maybe you were encountering some other problem.

reply

rspeer 12 hours ago [-]

Oh, I guess the case of this I encountered most recently was actually Option<String> == Option<&str>.

I get why that's different, but it would be great if the type system could figure that out, so that the literal Some("foo") could be an Option<String> if necessary. Maybe I'm still being spoiled by Haskell's OverloadedStrings.

reply

---

pjungwir 7 hours ago [-]

So I'm only a Rust amateur (I mean I don't get paid to use it), but I'm on my third round trying to build something with it, and I feel like I'm finally starting to get it. A couple warts I've encountered:

If the next push is going to be for ease-of-use, those would be two nice things to fix.

reply

---

in Rust:

kibwen 15 hours ago [-]

> I always joke String should have been called StrBuf

Agreed, and Vec should just be Buf! Let's fork the language. :)

reply

---

zyxzkz 16 hours ago [-]

Man, if developing programming languages was this easy, we should have done it years ago!

reply

lmm 15 hours ago [-]

We did. The ML family has been around for about 4 decades now. Not sure why they never caught on until Rust.

reply

jeffdavis 8 hours ago [-]

It's about the runtime. I think that's probably the most important reason, even more so than memory management or performance.

If you need to write a library to implement the latest protocol, or render the latest image format, or parse the latest serialization format, then Haskell seems great. Unfortunately, the resulting library will be useless except to other Haskell programmers. Nobody wants to link in libXYZ.a and get the entire Haskell runtime, starting threads and doing GC and sending and catching signals.

I tried implementing a handler in postgresql so that you could write user-defined functions in Haskell. I made little progress, even with help on IRC and elsewhere. Any non-trivial function would need to define its own types and use some libraries, but it was far from clear how to do that and the best advice I got was to dig into ghci and try to use some ideas from that. I started down that path, ran into runtime issues, and that was the last straw and I ran out of steam. And that was only to get the most basic functionality: call into haskell to do some computation and return.

reply

---

" What's missing is help. A great command line needs to reach out and hug you in every way it can. It needs to assume you don't know anything at all about the system. It's there for you to solve your problems, but also to help you explore.

There's actually one exotic Unix shell that does it right: fish. fish jumps through some pretty amazing hoops, like parsing manpages, to give you context and completion. "

" How should it actually work? Let's say you're trying to put Urbit in control of your Twitter feed. That means you're creating a Twitter gateway on Urbit, which you can control with a

twitter command.

Before you type a character, Urbit offers you

(any command with side effects). (You could also add a ! prefix, which is like sudo -- it lets you do something dangerous.) Once you type , you can page through a list of commands. Once you type t, that list is much shorter. Once it can tab complete to twitter, it prompts you with that.

And once you get to

twitter, you're in a "meta-generator" that's helping you build the correct command line. At least in a browser, you'll be stepping through a form with rich entry. But the command line will show you the text of the command you're building, so next time you can just type it.

Building a command is just a case of data entry. In a command-line world, data entry is always serial. You answer questions serially, one at a time. The only navigation is forward and back.

A great Urbit console also has to be accessible both from a browser and a Unix terminal. That means a prompt needs to tell the console if it could be a radio button, even though a terminal can't have a radio button. We'll improve the interactivity of our terminal a little, but we're not rewriting Lynx. "

---

"Three of the four languages most often used at Google (Python, C++, Go) do not allow multiple versions of a library to be linked into a single process due to symbol conflicts. This is a limitation of those languages, "

---

https://fsharpforfunandprofit.com/posts/is-your-language-unreasonable/

---

" There are ~38 HTTP response codes. Below is a complete list. I’ve shortened a few titles in the interest of space: Method Method 200 OK 201 Created 202 Accepted 203 Not authorized 204 No content 205 Reset content 206 Partial content 300 Multiple choice 301 Moved permanently 302 Found 303 See other 304 Not modified 306 (unused) 307 Temporary redirect 400 Bad request 401 Unauthorized 402 Payment required 403 Forbidden 404 Not found 405 Method not allowed 406 Not acceptable 407 Proxy auth required 408 Timeout 409 Conflict 410 Gone 411 Length required 412 Preconditions failed 413 Request entity too large 414 Requested URI too long 415 Unsupported media 416 Bad request range 417 Expectation failed 500 Server error 501 Not implemented 502 Bad gateway 503 Service unavailable 504 Gateway timeout 505 Bad HTTP version "

" Consider, for example, when we might use the 200 OK response code. Should we use it to indicate the successful update of a record, or should we use 201 Created? It seems we really should use a code like 250 Updated but that code doesn’t exist. And can anyone out there explained to me what 417 Expectation failed really means? I mean besides Roy? "

" The REST vocabulary of methods and response codes is simply too limited to effectively communicate the wide variety of requests and responses required across all applications. Imagine we create an application where we want to send a “render complete” response back to an HTTP client. But we can’t do that using an HTTP response code, because (a) one doesn’t exist and (b) HTTP is not extensible " [2]

" int_19h 1 day ago [-]

What we really need is a set of verbs that allow us to reliably distinguish between these four types of operations:

1. Pure read.

2. Impure (stateful) read.

3. Idempotent write.

4. Any other write.

There's no particular reason to separate inserts, updates, deletes etc as part of the protocol - they're all just different kinds of writes, and middleware doesn't derive any benefit from being able to distinguish them. Thus, this can be a part of the payload.

On the other hand, the difference between reads and writes, and the two subdivisions within each, do matter for purposes such as caching and automated error recovery (e.g. a proxy can repeat an idempotent write a few times before returning the error to the originator of the request).

In REST, we have GET for #1 and #2, PUT and DELETE for #3, and POST for #4. In practice, this is often simplified to just POST used for both #3 and #4, but that loses a valuable distinction (but is unfortunately often necessary because of the lack of support for other methods). On the other hand, the PUT/DELETE distinction is largely pointless.

reply "


" Waterluvian 21 hours ago [-]

When I was given the task of defining how our multi-robot server would interface with our user interfaces, I eventually settled on REST. Most of what I knew about REST had been obtained that week. I implemented something pretty vanilla with Django and it all felt pretty elegant. I didn't have to worry about defining a protocol, there was pretty much already one for me:

It was all nice and worked great (and still does, years later).

But over time numerous people started telling me that no, it's actually all wrong for one reason or another. I've heard that I should never use anything except GET and POST. That I should ALWAYS return a 200 and provide error metadata as a response if there actually was an error. That POST is actually meant for updates and PUT is meant for new entities. That the opposite is true. That neither is true and I should always use POST for any mutation of state. etc.

I feel like I had success because I approached it from a position of ignorance, meaning I just implemented a simple, sane REST API and was none the wiser that I was doing it wrong.

reply "

"

stymaar 18 hours ago [-]

> If it's independent of HATEOAS what is in REST?

Using GET/POST/PUT/DELETE with a defined semantic, the confidence that GET is idempotent, and the proper use of HTTP status codes.

Back in 2005, it was really common to have only GET routes even for updates or deletions, or worse: to have a single url: http://example.org/action which concentrated all the API surface, different behavior being triggered by the type of the payload (JSON or even XML). Also, all the errors were `200 OK` but with a payload which contained the error. It was all done on top of HTTP but nothing was really using the HTTP tools (route + method + status code). "

---

legulere 1 hour ago [-]

> YAML is so much more complicated and harder to parse.

YAML has more complexity in its syntax, but a data model that maps to the data structures in programming languages (records/maps, arrays/lists, integers, floats, booleans, bytes, strings). As long as you do not implement a YAML parser yourself (why would you do that?) you should have an easy time with YAML.

XML on the other hand is relatively easy to parse, but has a complex data model that maps well just to a DOM. For anything but markup text, you do not have a direct mapping to/from XML.

> Developers write hundreds of lines of XML all the time... its called HTML

There it makes sense, because it is marked up text you are writing and it is the right tool for the job.

reply
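
A small illustration of that point (my own sketch, assuming the third-party gopkg.in/yaml.v3 package): a YAML document decodes straight into ordinary Go maps, slices, and scalars, with no DOM-like intermediate model.

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3" // assumed third-party YAML library
    )

    func main() {
        doc := []byte("name: example\nports: [80, 443]\ntls: true\n")

        // YAML's data model lines up with Go's: mappings become maps,
        // sequences become slices, scalars become strings/ints/bools.
        var cfg map[string]interface{}
        if err := yaml.Unmarshal(doc, &cfg); err != nil {
            panic(err)
        }
        fmt.Printf("%#v\n", cfg)
    }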

---

bad_user 6 hours ago [-]

You're missing the forest for the trees. Java libraries and frameworks are complicated because Java the language is not expressive enough.

As I like saying, the status quo in Java land is Spring and Hibernate and people bemoaning Scala's implicit parameters never wonder why such monstrosities like Spring or Hibernate happened. Sure, blame the culture, but that's only half the story.

The same thing happens with other inexpressive languages. One of them is Python. And here we disagree again, as I think Python is chock full of ugly parts.

It started with refusing powerful and multi-purpose language features, like multi-line anonymous functions. As a result Python has been patched repeatedly with multiple non-orthogonal features to supplant the need for multi-line anonymous functions and/or to make up for its statement oriented nature. Oh, now Python 3 got await/async as well, but somehow I don't think this will stop people from monkey patching the socket module. I'm going to make the claim that in fact Python is not a TOOWTDI platform, in spite of its much touted "spirit" and anybody making that claim never tried working with Gevent or Eventlet, trying to make it work with MySQL's client and being forced to use the native client because Django wasn't compatible with the pure one. Or with an N-th iteration of Ruby inspired libraries that hack into its meta programming features to make it smell like the DSLs in Ruby, but with limitations and leaks. I don't even want to talk about the build and dependency management tools in Python's land, because it should be self-evident how broken the Python ecosystem has been.

Another language in this category is Go. It's still young enough that people haven't used it much outside the niche it was created for and doesn't have much reuse going on in libraries. Give it time, you'll see the same pattern, same mistakes repeated again and again. Java was also a pioneer in dealing with concurrency, Java introduced generics really late and it was also built by famous engineers working for a sexy company, designed to appeal to beginners and managers.

reply

JustSomeNobody 5 hours ago [-]

No. It's not the Java language. It is the developers. They are the ones who have to have 35 levels of abstraction. Why? Because for some reason they think this will make modifying the software they write easier. In reality, no, it doesn't because everyone gets frustrated wading through the abstractions. If I can hold ten items in my brain's working memory and all ten are taken up by keeping track of how many levels deep I'm in, then it is really hard to make any real changes to the code.

reply

dlwdlw 20 minutes ago [-]

I agree that there is a Java culture, but I also think that it developed into the way it is because it was so popular during a time when there weren't very good engineering practices by non-technical people. The movement towards wanting managers with a technical background is, I think, fairly recent. The majority mass of Java developers now probably "grew up" in the jaded horror-story environments we seek to avoid.

Also, I think new grads tend to only know how to make things DRY via abstraction (not considering the trade-offs) and, because of Java's popularity and OOP as a buzzword, always did it via inheritance.

reply

---

cle 2 hours ago [-]

The problem with Java+XML isn't XML itself, it's that the Java community for a long time was pushing more and more critical code into XML, because doing non-trivial dynamic things in Java itself can be a huge pain. And so over time, enterprise Java apps became giant balls of XML+Java that would break at runtime if you missed an XML parameter or had a stray character somewhere. At that point, what benefit are you getting from Java's static typing? You lose much of the benefits of Java, and suffer twice as much from its drawbacks.

This is slightly better with the push away from XML and toward annotations, but it's not much better. Your program's behavior still depends on tagging the right language constructs with the right magic tags, which is a horrible way to program IMO.

reply

sreque 40 minutes ago [-]

I agree with cle, and I care enough to give examples of painful XML coding in Java:

Java is not a dynamic language, sorry. The JVM bytecode is fairly dynamic, but the language certainly isn't.

Also, I think Java would have been better off never introducing annotations. They have their uses, but they've been so over-abused that I would consider them a net negative on the language as a whole. I've been thinking about starting a blog, and one of my initial blog posts may be why annotations are bad for Java.

reply

---

TheCondor 4 hours ago [-]

Java libraries are complex because Java is inexpressive? Not because the libraries attempt to do everything for every situation? Are there some concrete examples you can point to?

It has been my experience and observation that something like a JMS library in Java are complex because of a different interpretation of "do one thing and do it well." In the Java world, possibly because of enterprise adoption, that means pluggable transport for the queue, pluggable aaa, pluggable security and encryption, pluggable durability storage, configurable delivery protocols and guarantees, aspect based hooks to support different ha patterns, pluggable dispatch on delivery and probably some other things I can't think of this early in the morning. Each JMS implementation is "the best damn message queue ever made" with tons and tons of features. That's not an expressiveness issue, unless it's too expressive. Java makes it easy to add a layer of abstraction at each of those decision points. The typical user here wants to get a message from one piece of software to another, maybe with ordering; JMS will do that more ways than you can imagine or care about and can use pieces of machinery you didn't even want to know about. Where does expressiveness play in to that?

reply

leoc 3 hours ago [-]

The last time I looked at Python (which was something like a decade ago) the impression I was left with was that there was a sort of cognitive dissonance or collective self-deception going on. There was a simple, clean language subset which was the one presented in the official tutorial, by and large; and lying underneath there was another language full of introspection and underscores, which seemed to be necessary to get (many) things done without a boilerplate explosion. The impression I got was that everyone writing Python was convinced that those other guys didn't need to think about these dangerous, advanced features, but they were in the 10× hacker elite for which the power was necessary and appropriate. Now probably that's an exaggeration and somewhat unfair, but I think it's probably not completely wide of the mark. ;)

reply

---

seagreen 1 hour ago

parent flag favorite on: The Unix-Haters Handbook (1994) [pdf]

Well, the browser already did replace Unix for most people =( Presumably we'd be trying to design something good and not awful though, so let's not stop there. But it does give us a starting point:

+ one click installation of applications

+ applications are always up to date

+ applications are sandboxed

Then, since we want users to be able to tinker:

+ the source for all software installed through this mechanism is available

+ optional binary caches, NixOS-style, so it's not slow

And while we're talking about NixOS:

+ immutable package directory so you can rollback at will

And then on to data storage, which is my favorite topic:

+ defaults to tags instead of a hierarchy for organization

+ everything is schema'd, so a file saying it's a contact entry or whatever actually has to be that thing

+ data is immutable by default, so undo and versioning can be built-in (you could still turn this off for certain tasks like video-editing where immutability isn't feasible)

I have more thoughts on data storage here if anyone's interested: https://juniorschematics.com/

---

 vertex-four 6 hours ago [-]

The issue with all of these, of course, is that in order to get a system running you have to configure multiple "independent" tools, processes and daemons. Think setting up a web application - you have to configure the web application to listen on a certain port/UNIX socket, then configure your web server to go find it. You then need to scale this up across logical servers separated by a network - your web servers need to communicate with your database, they need some sort of authentication key/password, etc etc. You're never just configuring one thing.

The modern solution would be that there needs to be a network configuration tool which generates specific configurations for each component, is capable of encoding arbitrary invariants, and works consistently. Configuration also needs to be "push" based on events - when a DNS server dies, it should be able to figure out "we need at least 2 DNS servers, we have 1, fire up a new one - then update all systems to know about the new one".

Configuration management systems for Linux, by and large, suck. They're very good at starting from an Ubuntu Server install and building on that, and then get more and more fragile as the system lives on. Some of them (Saltstack, for example) do have some degree of event management - you can run certain commands on certain things happening, but it's not declarative or reactive in the way you'd hope - e.g. you can't just say "this system knows about all DNS servers" and expect it to work. The Docker/Kubernetes ecosystems claim to solve the network configuration problem (in a really awkward roundabout way), but not really intra-system configuration, and it still takes a lot of manual work.

NixOS gets a lot closer - but it needs to be expanded with a constraints solver and an event processing system. It's Turing-complete, so you can encode pretty much whatever you want into it, while still being a reasonable configuration language (basic use of it looks a lot like JSON).

But the point is - the formats individual components use for configuration should be more-or-less irrelevant. They could be completely opaque, so long as it's possible to get from the network config to the individual component's config and it's possible to update that config on-the-fly. In fact, it'd be more useful to standardise on one library which can handle all that for you.

reply

zzzcpan 6 hours ago [-]

I agree with most of your post, but this is still more complex than just adding a constraints solver and an event processing system. Different things don't just depend on each other, they also require different strategies for dealing with failures. Trying to squeeze everything into a single model will not work well. Maybe something like supervision trees for services might solve that, where supervisors for each service are part of the package and handle everything from automatic configuration to failures in any way they need.

reply

---

inkyoto 18 hours ago [-]

The most important thing about UNIX - no matter how simplistic (or not) it might appear or how significant (or not) the perceived flaws might seem - is that a move to UNIX back in the 70s-80s was liberating, with its simplicity and human friendliness, for so many of those coming from the world of closed-off, proprietary operating systems, walled gardens, development tools and kernel APIs.

Writing a mere string out to a file on a non-UNIX was nowhere near as easy as ‘fd = open (“a file”, O_WRONLY); write (fd, p_aString, strlen (p_aString)); close (fd);’ on UNIX. Many systems required either a block-oriented file or a fixed-record file (with the record structure defined first) to be opened, the block or the record to be written out and then the file to be closed. Your record-oriented file has grown very large? Brace yourself for a coffee break after you invoke the “file close” system call on it. Did your process get killed off or just die mid-way through? Well, your file might have been left open and would have to be forcefully closed by your system administrator, if you could find one. Your string was shorter than the block size, and now you want to append another string? Read the entire block in, locate the end of the string, append a new one and write the entire block back. Wash, rinse and repeat. Oh, quite a few systems out there wouldn’t allow one to open a file for reading and writing simultaneously.

Flawed make? Try to compile a 100 file project using JCL or DEC’s IND using a few lines of compilation instructions. Good luck if you want to have expandable variables, chances are there wouldn’t be any supported. You want to re-compile a kernel? Forget about it, you have to “generate it” from the vendor supplied object files after answering 500+ configuration related questions and then waiting for a day or two for a new “system image” (no, there were no kernels back then outside UNIX) to get linked.

Awkward UNIX shell? Compared to the crimes a number of vendors out there committed, even JCL was the pinnacle of “CLI” design.

No matter how perfect or imperfect some things were in UNIX back then, hordes of people ended up running away screaming from their proprietary systems to flock to UNIX because suddenly they could exchange the source code with their friends and colleagues who could compile it and run within minutes, even if some changes were required. Oh, they could also exchange and run shell scripts someone else wrote etc. In the meantime, life on other planets was difficult.

reply

rbanffy 7 hours ago [-]

I remember people saying Unix was the gold standard of user-hostile operating system.

That was well before I met AS/400 and MVS.

And then I had contact with a Burroughs A-series and its appropriately named OS, MCP.

OTOH, I love the 3270s.

reply

leoc 6 hours ago [-]

TRON's MCP was named after Burroughs', I'm quite sure. Guess who TRON/Alan Bradley was based on? :)

reply

rbanffy 5 hours ago [-]

I think it's for Alan Kay. Bonnie MacBird is his wife.

reply

" I really hate how TAB is used in Makefiles"

Analemma_ 23 hours ago [+106]

Animats 23 hours ago [-]

There are many early UNIX design decisions that have outlived their shelf life by decades.

Probably the biggest one is that UNIX is, at bottom, a terminal-oriented multi-user time sharing system. This maps badly to desktop, mobile, and server systems. The protection model is a mismatch for all those purposes. (Programs have the authority of the user. Not so good today as in the 1970s.) The administration model also matches badly. Vast amounts of superstructure have been built to get around that mismatch. (Hello, containers, virtualization, etc.) Interprocess communication came late to UNIX/Linux, and it's still not a core component. (The one-way pipe mindset is too deeply ingrained in the UNIX world.)

reply

a list of legacy annoyances in Unix: https://kukuruku.co/post/the-collapse-of-the-unix-philosophy/

"Take a look at a prototype of a signal function in the form we see it in the C standard:

void (*signal(int sig, void (*func)(int)))(int);

Try to understand it."

---

issues with escaping in Unix shells, and how Lisp does it better:

"

    Let’s begin with a teaser. How can we recursively find all the files with \ name in a folder foo? The correct answer is: find foo -name '\\\\'. We can also do it like this: find foo -name \\\\\\\\. The latter way will cause lots of questions. Try to explain to a person who is not good at UNIX shell why exactly four backslashes are necessary here, not two or eight. We need to write four backslashes here as UNIX shell performs backslash expanding, and find does it too.
    How to touch all files in foo (and its subfolders)? At first glance, we could do it like this: find foo | while read A; do touch $A; done. Well, at first glance. Actually, we can come up with 5 things that can ruin it all (and lead to security problems):
        Filename can contain a backslash. Therefore, we should write read -r A instead of read A
        Filename can contain a space. That’s why we should write touch "$A" instead of touch $A
        Filename can not only contain a space but also start with a space. So we need to write IFS="" read -r A instead of read -r A
        Filename can contain a newline, so we should use find foo -print0 and instead of IFS="" read -r A use IFS="" read -rd "" (I’m not really sure here)
        Filename can start with a hyphen, so we need to write touch -- "$A" instead of touch "$A". The final version looks like this: find foo -print0 | while IFS="" read -rd "" A; do touch -- "$A"; done Cool, isn’t it? By the way, we didn’t take into account that POSIX does not guarantee that touch supports the -- option. Considering this fact, we’ll have to check each file for whether it starts with a hyphen (or rather, that it does not start with a slash) and add ./ to the beginning. Do you understand now why configure scripts generated by autoconf are so large and difficult to read? Because configure needs to take into account all of this crap, including compatibility with various shells. In this example, I used the solution with a pipe and a loop. I could also use the solution with exec or xargs, but it wouldn’t be so eye-catching. (Well, okay. We know that the filename starts with foo, so it cannot start with a space or hyphen.)
    Let’s say we need to delete a file on host a@a. The name of the file is in a variable A. How can we do it? Perhaps, like this: ssh a@a rm -- "$A"? (As you might have noticed, we have already taken into account that the filename can contain spaces and start with a hyphen.) Never ever do this! ssh is not chroot, or setsid, or nohup, or sudo or any other command that receives an exec-command (meaning a command passed directly to an execve-family system call). ssh (just like su) receives a shell-command, i.e. a command for processing by the shell (the terms exec-command and shell-command are my own). ssh combines all the arguments into a string, passes the string to the remote side and executes it with the shell there. Okay, maybe like this: ssh a@a 'rm -- "$A"'? No, this command tries to find variable A on the remote side. But it’s not there, as variables cannot be passed via ssh. Well, maybe like this: ssh a@a "rm -- '$A'"? Nope, this won’t work if the filename contains a single quote. Anyway, the correct answer is: ssh a@a "rm -- $(printf '%q\n' "$A")" Convenient, don’t you think?
    How to get to host a@a, and then to b@b from it, then to c@c, and then to d@d and delete the /foo file from it? Well, this one is simple: ssh a@a "ssh b@b \"ssh c@c \\\"ssh d@d \\\\\\\"rm /foo\\\\\\\"\\\"\"" Too many backslashes, huh? Well, if you don’t like it, let’s alternate single and double quotation marks: ssh a@a 'ssh b@b "ssh c@c '\''ssh d@d \"rm /foo\"'\''"' By the way, if we were to use Lisp instead of shell, and the ssh function would pass not a string but a parsed AST (abstract syntax tree) to the remote side, there wouldn’t be so many backslashes: (ssh "a@a" '(ssh "b@b" '(ssh "c@c" '(ssh "d@d" '(rm "foo"))))) “Huh? What? Lisp? What Lisp?” Curious, aren’t you? Go read here. You can also refer to other articles by Paul Graham.
    Let’s combine the previous two paragraphs. A name of the file is in a variable A. We need to go to a@a, and then to b@b, then to c@c, then to d@d, and delete the file named in variable A. I’m going to leave it for you as an exercise. (I don’t know how to do it. :) Well, I might if I thought about it)
    echo is sort of designed for displaying strings on the screen. But the thing is, we can’t use it for this purpose if the string is a bit more complex than “Hello, world!” The only true way to print an arbitrary string (e.g. from variable A) is like this: printf '%s\n' "$A".
    Suppose you want to direct the stdout and stderr of a cmd command to /dev/null. The riddle: which of these six commands perform the task?
        cmd > /dev/null 2>&1
        cmd 2>&1 > /dev/null
        { cmd > /dev/null; } 2>&1
        { cmd 2>&1; } > /dev/null
        ( cmd > /dev/null ) 2>&1
        ( cmd 2>&1 ) > /dev/null
    Turns out, the correct answer is: the 1st, the 4th and the 6th. The 2nd, the 3rd, and the 5th don’t. And again, I’m leaving it to you to figure out the reason as an exercise. :)" [3]

---

" Some people think that UNIX is great and perfect, and that all its basic ideas («everything is a file», «everything is text» and so on) are amazing and form the so-called ”UNIX Philosophy”. I guess you’re starting to understand that it’s not quite so. Let’s review this “Unix philosophy”. Have a look at some points below. I’m not trying to say that all of these things should be canceled, I’m simply pointing at some drawbacks. * “Everything is text”. As we’ve already seen in the example with /etc/passwd, the widespread use of plain text can lead to performance problems. UNIX authors have actually invented a format for each system config (passwd, fstab, etc.). With their rules of escaping special characters. Surprised? /etc/fstab uses spaces and line breaks as separators. But what if folder names include, say, spaces? For this case, the format of fstab provides special escape characters for folder names. Turns out, any script reading fstab should be able to interpret the escape character "

" . It would be much easier if we used JSON or XML for system configs. Or maybe some binary format. Especially for those configs that are constantly read by different programs. As a result, they need a good read rate (it’s higher in binary formats).

That’s not all I wanted to say about “everything is text”. Standard utilities provide the output in the form of a plain text. For each utility, we actually need a parser of its own. "

" Just imagine! Let’s say we need to delete all files in the current folder of size bigger than 1 kilobyte. Yes, I know that we can do this with find. But let’s suppose we definitely need to do this via ls (and without xargs). How to do it? Like this:

LC_ALL=C ls -l | while read -r MODE LINKS USER GROUP SIZE M D Y FILE; do if [ "$SIZE" -gt 1024 ]; then rm -- "$FILE"; fi; done

We need LC_ALL here to be sure that the date will take exactly three words in the output of ls. This solution not only looks ugly, but also has a number of drawbacks. Firstly, it will not work if the file name contains a line break, or begins with a space. Next, we need to explicitly list the names of all ls columns or at least remember where the ones we need (i.e. SIZE and FILE) are located. If we make a mistake in the order of columns, the error will become apparent only during the runtime. When we delete the wrong files. :)

How would the solution look in the perfect world I’m suggesting? Something like this: ls | grep 'size > 1kb' | rm. It’s short, and, most importantly, you can see the meaning in the code and it’s impossible to make a mistake. Let’s see. In my world, ls always gives all the information. We don’t need a special -l option for this. When it’s necessary to delete all columns and leave the filename only, we can do this with a special utility we should direct the ls output to. Thus, ls provides a list of files in some structured form, say, JSON. This representation “knows” the names of columns and their types, i.e. whether something is a string, a number or something else. Then, this output is piped to grep which, in my world, selects the necessary records from JSON. JSON “knows” field names, so grep “understands” what “size” means here. Moreover, JSON contains information about the type of the size field. It contains information that it’s a number, and even that it’s not just a number but a file size. Therefore, we can compare it to 1kb. Next, grep pipes the output to rm. rm “sees” that it’s going to receive files. Yes, JSON also stores information about the type of these strings, that they’re files. rm deletes them. JSON is also responsible for correct escaping of special characters. That’s why files with special characters “simply work”. Cool, right? I took the idea from here: https://blogs.gnome.org/alexl/2012/08/10/rethinking-the-shell-pipeline/ It should also be mentioned that something of the kind is implemented in Windows PowerShell. "
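
A toy sketch of that structured-pipeline idea (my own illustration in Go, not from the article; the field names and the 1 KB threshold are made up for the example): each stage passes typed records instead of flat text, so the "size > 1kb" filter and the delete stage never have to re-parse or re-escape anything.

    package main

    import (
        "fmt"
        "os"
    )

    // FileRecord is the structured "row" a hypothetical ls-like stage would emit.
    type FileRecord struct {
        Name string
        Size int64 // bytes
    }

    // list plays the role of `ls`: it emits typed records, not text columns.
    func list(dir string) ([]FileRecord, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var out []FileRecord
        for _, e := range entries {
            info, err := e.Info()
            if err != nil {
                return nil, err
            }
            if !info.IsDir() {
                out = append(out, FileRecord{Name: e.Name(), Size: info.Size()})
            }
        }
        return out, nil
    }

    // filter plays the role of `grep 'size > 1kb'`: it works on fields, not text.
    func filter(in []FileRecord, pred func(FileRecord) bool) []FileRecord {
        var out []FileRecord
        for _, r := range in {
            if pred(r) {
                out = append(out, r)
            }
        }
        return out
    }

    func main() {
        files, err := list(".")
        if err != nil {
            panic(err)
        }
        big := filter(files, func(r FileRecord) bool { return r.Size > 1024 })
        for _, r := range big {
            // A real `rm` stage would call os.Remove(r.Name); just print here.
            fmt.Println("would delete:", r.Name, r.Size)
        }
    }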

---

[4]

---

yarper 2 hours ago [-]

After writing Rust in production for a while, the biggest bugbear I have is the naming/file structure.

I end up a lot with this;

    src/main.rs
    src/combobulator/mod.rs
    src/combobulator/tests.rs
    src/tests.rs
    src/somethingelse/tests.rs
    src/somethingelse/mod.rs

Because I find tests in the same file a bit confusing. It's really easy with maven-style layouts to know that "only things in main/java or main/scala get compiled and go into the jar". "src/test/*" and "src/main/resources" are for me. The same thing applies for cargo.tomls and resources - there's not really a way to see what goes into the executable from the file structure.

But this isn't the biggest problem with having things called "mod.rs". That would be if I open 5 mod.rs's in a text editor with tabs, I have no idea what goes with what.

I know that tests should go under tests/, but that's specifically for integration tests. Integration tests are an order of magnitude less likely to get written imo, and if they are they'll probably get written as unit tests anyway.

If anyone has any top tips for how to structure larger Rust projects while separating unit tests into different files, please let me know!

reply

---

spraak 2 hours ago [-]

> Right now, such a signature would be accepted, but if you tried to use any of map’s methods, you’d get an error that K needs to be Hash and Eq, and have to go back and add those bounds. That’s an example of the compiler being pedantic in a way that can interrupt your flow, and doesn’t really add anything; the fact that we’re using K as a hashmap key essentially forces some additional assumptions about the type. But the compiler is making us spell out those assumptions explicitly in the signature.

I feel this exact same way with Go. E.g.

    x := map[string]map[string]int{
        "key": map[string]int{
            "another": 10,
        },
    }

Given that the outer type signature says that the `value` of the map should be a `map[string]int` it's sometimes quite annoying to specify that inner type over again

reply

tibbe 21 minutes ago [-]

Type inference should solve that if Go ever gets that.

reply
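
For what it's worth, Go's composite-literal rules already let you elide the inner type when it is identical to the outer map's element type, which removes most of the repetition spraak is describing (a small sketch, using the same keys as the snippet above):

    package main

    import "fmt"

    func main() {
        // The inner `map[string]int` can be elided inside the outer literal,
        // because it is identical to the outer map's element type.
        x := map[string]map[string]int{
            "key": {
                "another": 10,
            },
        }
        fmt.Println(x["key"]["another"]) // 10
    }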

---

macros and scripting and emulators

http://www.bluestacks.com/ https://www.google.com/search?q=switch+control+iphone&ie=utf-8&oe=utf-8 https://play.google.com/store/apps/details?id=com.x0.strai.frep&hl=en

---

chemag 7 hours ago [-]

I've been trying for a while to understand what org-mode gives you. I saw Dominik's tech talk at Google, and Bieber's discussion on dropping vim with emacs. I really like the approach taken in this article, showing some of the language features.

IIUC, what org-mode provides you is:

1. a (markdown-like) lightweight document markup language, with lots of syntax hooks ("#+") for different tools.

2. some (even lighter, i.e., no "#+" required) organization-based syntax hooks. These are the TODO/DONE/... labels (plus the "[ ]" tidbits), the table syntax, the metadata (e.g. AUTHOR). In fact, the idea of adding metadata to a lightweight markup language is very interesting.

3. some "programmy" syntax items, including things like tags, spreadsheet-like tables, properties, etc.

4. the agenda view. This is a horizontal search on multiple .org files to create a work agenda.

5. some emacs functionality related to automatic recognition and operation on some of the syntax items. For example, org-table-align will "Re-align the table and don't move to another field".

There are lots of other features, but nothing that other lightweight markup languages don't/can't have too.

My main concerns are:

1. it is inextricably tied to emacs. AFAICT, only (5) in the previous list is emacs-only. All the other functionalities are related to the markup syntax.

2. I wish the org-mode language was fully markdown compatible (I can barely remember the syntax of one, and now I need to use 2).

reply

---

" Avoid using language extensions and libraries that do not play well with your IDE. The impact they will have on your productivity will far outweigh the small benefit of easier configuration or saving a few keystrokes with more terse syntax.

Using ServiceLocator is an example of a design that leads to poor integration with most IDEs:

 // Using magic strings will make it impossible
 // for the IDE to follow your code
 ServiceLocator.get('serviceName')

Another way to keep the “integrated” part of your IDE relevant is to avoid magic code. Most languages will provide ways for you to write more dynamic code. Abusing these features such as by using magic strings, magic array indexes and custom template language features will lead to a more disconnected codebase. Generally any features which only a human will know the meaning of will lead you down this road, and it’s a hard road to come back from, because if your IDE doesn’t understand the code, any refactoring features it has are going to be useless when you want to move to a more static architecture. "

---

" Now, clever is different than complex, or confusing code. Clever code is a neat way to do something. Complex and confusing code can be spotted immediately because it does one or many of the following things:

And I could go on. Those are the things I would watch out for and that really take a cognitive load on me.

Also, clever code is different than tricky code. Trying to use a programming language quirk is crazy. You're asking for your code to be hard to read. Using a little-known useful feature is a good way to extend your team's knowledge of a PL.

reply "

"Use names to convey purpose. Don't take advantage of language features to look cool."

 pimlottc 255 days ago [-]

Using intermediate variables is one of the most underrated tools to make code more understandable. It's the definition of something completely unnecessary from a technical standpoint that is all about conveying meaning and clarity to other programmers. And it can be used to help group and "modularize" chunks of code within a routine without necessarily going to the extreme of pulling out a separate subroutine, which can be overkill in some circumstances.
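
A tiny before/after sketch of that point (my own example; the User type, field names and grantAccess helper are hypothetical): the intermediate variables cost nothing technically, but they name the intent of each sub-expression.

    package main

    import (
        "fmt"
        "time"
    )

    // User and grantAccess are hypothetical, just to make the example runnable.
    type User struct {
        Age       int
        Banned    bool
        LastLogin time.Time
    }

    func grantAccess(u User) { fmt.Println("access granted") }

    func main() {
        user := User{Age: 30, Banned: false, LastLogin: time.Now().Add(-24 * time.Hour)}

        // Before: one dense condition the reader has to decode.
        if user.Age >= 18 && !user.Banned && time.Since(user.LastLogin) < 30*24*time.Hour {
            grantAccess(user)
        }

        // After: intermediate variables name the intent of each clause.
        isAdult := user.Age >= 18
        inGoodStanding := !user.Banned
        recentlyActive := time.Since(user.LastLogin) < 30*24*time.Hour
        if isAdult && inGoodStanding && recentlyActive {
            grantAccess(user)
        }
    }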

---

This study examined camelCase vs. under_score and concluded that camelCase is better:

http://www.cs.kent.edu/~jmaletic/papers/EMSE12.pdf

---

C# 7.0 allows arbitrary classes to be deconstructed if they have a 'deconstructor' method:

" class Point { public int X { get; } public int Y { get; }

    public Point(int x, int y) { X = x; Y = y; }
    public void Deconstruct(out int x, out int y) { x = X; y = Y; }} " -- [5]

in Oot, class constructor bodies, and deconstructor bodies, should be inferred from the constructor arguments -- and there should be a special __init method that is called by the inferred constructor to let you do initialization. This would take away the boilerplate of assigning the constructor arguments to instance variables.

---

excerpts (well actually this is most of it) from C# 7.0 release notes:

" ...

the ability to declare a variable right at the point where it is passed as an out argument:

public void PrintCoordinates(Point p)
{
    p.GetCoordinates(out int x, out int y);
    WriteLine($"({x}, {y})");
}

...

Since the out variables are declared directly as arguments to out parameters, the compiler can usually tell what their type should be (unless there are conflicting overloads), so it is fine to use var instead of a type to declare them: p.GetCoordinates(out var x, out var y);

A common use of out parameters is the Try... pattern, where a boolean return value indicates success, and out parameters carry the results obtained:

public void PrintStars(string s)
{
    if (int.TryParse(s, out var i)) { WriteLine(new string('*', i)); }
    else { WriteLine("Cloudy - no stars tonight!"); }
}

...

Pattern matching

C# 7.0 introduces the notion of patterns, which, abstractly speaking, are syntactic elements that can test that a value has a certain "shape", and extract information from the value when it does.

Examples of patterns in C# 7.0 are:

    Constant patterns of the form c (where c is a constant expression in C#), which test that the input is equal to c
    Type patterns of the form T x (where T is a type and x is an identifier), which test that the input has type T, and if so, extracts the value of the input into a fresh variable x of type T
    Var patterns of the form var x (where x is an identifier), which always match, and simply put the value of the input into a fresh variable x with the same type as the input.

This is just the beginning – patterns are a new kind of language element in C#, and we expect to add more of them to C# in the future.

In C# 7.0 we are enhancing two existing language constructs with patterns:

    is expressions can now have a pattern on the right hand side, instead of just a type
    case clauses in switch statements can now match on patterns, not just constant values

In future versions of C# we are likely to add more places where patterns can be used.

...

the pattern variables – the variables introduced by a pattern – are similar to the out variables described earlier, in that they can be declared in the middle of an expression, and can be used within the nearest surrounding scope.

...

Patterns and Try-methods often go well together:

if (o is int i || (o is string s && int.TryParse(s, out i))) { /* use i */ }

...

Switch statements with patterns

We’re generalizing the switch statement so that:

    You can switch on any type (not just primitive types)
    Patterns can be used in case clauses
    Case clauses can have additional conditions on them

Here’s a simple example:

switch (shape)
{
    case Circle c:
        WriteLine($"circle with radius {c.Radius}");
        break;
    case Rectangle s when (s.Length == s.Height):
        ...
    case null:
        ...

    The null clause at the end is not unreachable: This is because type patterns follow the example of the current is expression and do not match null. This ensures that null values aren’t accidentally snapped up by whichever type pattern happens to come first; you have to be more explicit about how to handle them (or leave them for the default clause).

Pattern variables introduced by a case ...: label are in scope only in the corresponding switch section.

...

Tuples ... Item1 etc. are the default names for tuple elements, and can always be used. But they aren’t very descriptive, so you can optionally add better ones:

(string first, string middle, string last) LookupName(long id) // tuple elements have names

Now the recipient of that tuple has more descriptive names to work with:

var names = LookupName(id); WriteLine($"found {names.first} {names.last}.");

You can also specify element names directly in tuple literals: return (first: first, middle: middle, last: last); // named tuple elements in a literal

Generally you can assign tuple types to each other regardless of the names: as long as the individual elements are assignable, tuple types convert freely to other tuple types. Tuples are value types, and their elements are simply public, mutable fields. They have value equality, meaning that two tuples are equal (and have the same hash code) if all their elements are pairwise equal (and have the same hash code). ... Deconstruction ...

Deconstruction is not just for tuples. Any type can be deconstructed, as long as it has an (instance or extension) deconstructor method of the form: public void Deconstruct(out T1 x1, ..., out Tn xn) { ... }

...

Local functions

Sometimes a helper function only makes sense inside of a single method that uses it. You can now declare such functions inside other function bodies as a local function:

public int Fibonacci(int x)
{
    if (x < 0) throw new ArgumentException("Less negativity please!", nameof(x));
    return Fib(x).current;

    (int current, int previous) Fib(int i)
    {
        if (i == 0) return (1, 0);
        var (p, pp) = Fib(i - 1);
        return (p + pp, p);
    }}

Parameters and local variables from the enclosing scope are available inside of a local function, just as they are in lambda expressions.

As an example, methods implemented as iterators commonly need a non-iterator wrapper method for eagerly checking the arguments at the time of the call. (The iterator itself doesn’t start running until MoveNext is called). Local functions are perfect for this scenario:

public IEnumerable<T> Filter<T>(IEnumerable<T> source, Func<T, bool> filter)
{
    if (source == null) throw new ArgumentNullException(nameof(source));
    if (filter == null) throw new ArgumentNullException(nameof(filter));

    return Iterator();
    IEnumerable<T> Iterator()
    {
        foreach (var element in source) 
        {
            if (filter(element)) { yield return element; }
        }
    }}

...

Literal improvements

C# 7.0 allows _ to occur as a digit separator inside number literals: var d = 123_456; var x = 0xAB_CD_EF;

You can put them wherever you want between digits, to improve readability. They have no effect on the value.

Also, C# 7.0 introduces binary literals, so that you can specify bit patterns directly instead of having to know hexadecimal notation by heart. var b = 0b1010_1011_1100_1101_1110_1111;

Ref returns and locals

Just like you can pass things by reference (with the ref modifier) in C#, you can now return them by reference, and also store them by reference in local variables.

public ref int Find(int number, int[] numbers)
{
    for (int i = 0; i < numbers.Length; i++)
    {
        if (numbers[i] == number)
        {
            return ref numbers[i]; // return the storage location, not the value
        }
    }
    throw new IndexOutOfRangeException($"{nameof(number)} not found");
}

((my note: so it looks like "ref x" is like C's "&x"))

This is useful for passing around placeholders into big data structures. For instance, a game might hold its data in a big preallocated array of structs (to avoid garbage collection pauses). Methods can now return a reference directly to such a struct, through which the caller can read and modify it.

There are some restrictions to ensure that this is safe:

    You can only return refs that are "safe to return": Ones that were passed to you, and ones that point into fields in objects.
    Ref locals are initialized to a certain storage location, and cannot be mutated to point to another.

...

Generalized async return types

Up until now, async methods in C# must either return void, Task or Task<T>. C# 7.0 allows other types to be defined in such a way that they can be returned from an async method.

For instance we now have a ValueTask<T> struct type. It is built to prevent the allocation of a Task<T> object in cases where the result of the async operation is already available at the time of awaiting. For many async scenarios where buffering is involved for example, this can drastically reduce the number of allocations and lead to significant performance gains.

There are many other ways that you can imagine custom "task-like" types being useful. It won’t be straightforward to create them correctly, so we don’t expect most people to roll their own, but it is likely that they will start to show up in frameworks and APIs, and callers can then just return and await them the way they do Tasks today.
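((my note: a minimal sketch of the fast-path idea with ValueTask<T>; the class and method names are mine, not from the release notes. When the result is already cached, the method completes synchronously and no Task<T> object is allocated:))

    using System.Threading.Tasks;

    class CachedFetcher
    {
        private string _cached;   // result of a previous fetch, if any

        public async ValueTask<string> GetValueAsync()
        {
            if (_cached != null)
                return _cached;                  // hot path: synchronous completion, no allocation

            _cached = await FetchSlowlyAsync();  // cold path: a real await, normal Task machinery
            return _cached;
        }

        private Task<string> FetchSlowlyAsync() => Task.Run(() => "fetched");
    }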

...

More expression bodied members

Expression bodied methods, properties etc. are a big hit in C# 6.0, but we didn’t allow them in all kinds of members. C# 7.0 adds accessors, constructors and finalizers to the list of things that can have expression bodies:

    class Person
    {
        private static ConcurrentDictionary<int, string> names = new ConcurrentDictionary<int, string>();
        private int id = GetId();

        public Person(string name) => names.TryAdd(id, name); // constructors
        ~Person() => names.TryRemove(id, out _);              // finalizers
        public string Name
        {
            get => names[id];                                 // getters
            set => names[id] = value;                         // setters
        }
    }

...

Throw expressions

It is easy to throw an exception in the middle of an expression: just call a method that does it for you! But in C# 7.0 we are directly allowing throw as an expression in certain places:

    class Person
    {
        public string Name { get; }
        public Person(string name) => Name = name ?? throw new ArgumentNullException(nameof(name));
        public string GetFirstName()
        {
            var parts = Name.Split(' ');
            return (parts.Length > 0) ? parts[0] : throw new InvalidOperationException("No name!");
        }
        public string GetLastName() => throw new NotImplementedException();
    }

" -- [6]

---

summary of the previous section (C# 7.0 release notes) for oot: yes, we want:

---

dep_b 1 day ago [-]

One feature F# has that would be great is that you can declare primitives as a type. So you won't mix up your kilometer floats with your miles floats. And kilometers divided by hours ends up being a km/h value.

The other one we'll never see: Non null references as a default, optionals as a special case.

reply

d--b 1 day ago [-]

They specifically avoided that in the early days of designing the language. Typedefs, macros, meta-programming, overloading operators like () or -> are common in C++ but lead many people to completely change the language. So much so that there are C++ codebases that can't be read by standard C++ developers. The C# guys avoided that pit. As a C# developer I can read any codebase and know what's happening. That's not so true in F#, where people are prone to create their own operators (like |>) and types.

reply

oblio 1 day ago [-]

Operators, I get, but what's wrong with creating your own types?

reply

profquail 1 day ago [-]

The F# |> operator is provided by the core library. Over-use of custom operators is something I always push back on in F# codebases; they can sometimes be useful in making code more expressive, but the downside is it makes the code significantly more difficult to comprehend until you're familiar with what each operator does. I prefer readability over concision, and even without the custom operators F# code is fairly terse.

reply

((my note: ||> apparently means "Passes the tuple of two arguments on the left side to the function on the right side"))

---

F# 'operator reference' page -- probably a good idea for our documentation to have a page like this: https://docs.microsoft.com/en-us/dotnet/articles/fsharp/language-reference/symbol-and-operator-reference/

---

Pxtl 1 day ago [-]

The big impediment imho between C# and scripting-language use is the whole Visual Studio project experience. With Roslyn they introduced the csx format for C# scripts so you can have a stand-alone 1-file executable C# script, but the tooling hasn't caught up yet.

I'm using LinqPad for all kinds of one-off scripting jobs and loving it. I was trying to learn PowerShell for that task but I've finally given up on that. PowerShell has some great features, but the overall chaotic syntax just kills me.

reply
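((my note: for reference, a tiny hypothetical csx script; the file name and contents are mine. With the Roslyn scripting tooling installed it can be run directly, e.g. `csi hello.csx`:))

    // hello.csx: a stand-alone, one-file C# script; no project, no class, no Main required
    using System;
    using System.Linq;

    var squares = Enumerable.Range(1, 5).Select(n => n * n);
    Console.WriteLine(string.Join(", ", squares));   // prints: 1, 4, 9, 16, 25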

RyanHamilton 1 day ago [-]

FYI if any Java developers are looking for a one-file script running environment, check out http://jpad.io/ I made it to scratch my own itch. It lets you run Java snippets and even view results as HTML.

reply

sljd 1 day ago [-]

There can be some confusion between pipeline syntax and imperative syntax in Powershell, custom objects don't help either (i.e. `{...}` works differently than `[pscustomobject]{...}` for many cmdlets), but I'm curious what you consider chaotic enough to give up?

reply

---

cmurf 2 hours ago [-]

Metadata writes can be considered atomic, so an integrated checksum is written at the time the metadata block is written, or overwritten. With data, by contrast, you can't do overwrites of either the data or the checksum: the write isn't atomic, and any kind of crash or power failure will result in data that doesn't match its checksum. So unless you have something really clever to work around this, you need a copy-on-write file system to do data checksums.

reply

---

" Stage 1 development in Swift 4 will prioritize source stability as well as resilience, enabling public APIs to evolve over time even with ABI stability. "For example, we don't want the C++ 'fragile base class' problem to ever exist in Swift," Lattner said.

Swift advocates were pleased with the ABI plans. "Stabilizing the ABI was originally a goal for Swift 3, but it got kicked down the road," said Aaron Hillegass, CEO of Big Nerd Ranch, which builds applications for Apple's iOS platform. "Stabilizing the ABI is very important because at this time, every Swift app includes the entire standard library, which adds at least 10MB to every app."

... Other capabilities planned for this stage include generics improvements for the standard library and string re-evaluation. "String is one of the most important fundamental types in the language," ... Jessup believes that string processing is imperative for writing almost any piece of software today. "Swift's Unicode-correct-by-default string objects were a great move, but some of the APIs could be made more consistent and useful or powerful," he said. "Hopefully, after revisiting Swift strings, it will become easier to write parsers and other important tools using Swift." ... Apple is planning to add "first class" concurrency, including actors, async/await, atomicity, and memory model ... In the scripting space, regular expressions and multiline string literals also are considered stage 2 for Swift 4. Other capabilities for stage 2 include property behaviors, providing "powerful" abstractions over the existing property model; submodules; implicit promotions between numeric types; importing of C++ APIs; guaranteed tail calls; user-defined attributes; better SIMD (Single Instruction Multiple Data); and data parallelism. "

---

example of some cool Haskell code:

" We also use QuickCheck? for larger tests. A big part of our application can be summarized as a fold over events: Event -> State -> State. This means we can test invariants of our system by generating random events:

handleEvent :: Event -> State -> State

prop "the scheduler state is always consistent" $ \events -> let -- We do a scan so we observe all intermediate states. states :: [State] states = scanl (flip handleEvent) State.empty events isGood state = State.recomputeRedundantFields state === state in conjoin (fmap isGood states)

The State contains a few data structures to enable efficient lookup of jobs with certain properties. These are redundant and can be recomputed from other fields. With this test we ensure that the acceleration structures are always in sync with the main data. " -- [7]

---

probs with Haskell (overall they liked it though):

"

---

[9]

" In one case, Haskell did not quite deliver on its promise. Haskell is one of the few languages that can encode effects. Things like “Can this function access Redis?” and “Does this function log to the console?” can be encoded in the type system. We ended up with a mix of free monads and monad transformers, where we should have picked one. In practice the monad transformers are imposed on us by various libraries, and we should probably rewrite our code in the same style. The real issue, however, is that all of this breaks down when used with threads, because functions such as forkIO and async operate in the IO monad. Suppose we have these functions (slightly adapted from our source):

runWorkerLoop :: (MonadBackingStore m, MonadLogger m, MonadIO m) => WorkerState -> m WorkerState

runFetchLoop :: (MonadBackingStore m, MonadIO m) => m ()

runWorker :: (MonadBackingStore m, MonadLogger m, MonadIO m) => m ()

Now in runWorker we want to start two threads: one that runs the worker loop, and one that runs the fetch loop. (The fetch loop retrieves jobs from the backing store and puts them in a TBQueue, and the worker loop drains this queue. The queue is used to sequence events; for example a job being completed also enqueues an event.) The issue here is that we cannot run runWorkerLoop and runFetchLoop with forkIO, because we don’t have the right transformer stack. We tried various solutions, but in the end we removed most of the MonadX m constraints, and changed everything back to IO. "

---

wtbob 1 day ago [-]

> If you are really in a hurry and can't commit to learning the hundreds of forms in Common Lisp, Scheme is a minimalist alternative.

There are 25 special forms in Lisp, and the standard forbids the creation of more: http://www.lispworks.com/documentation/HyperSpec/Body/03_aba...

MIT Scheme has 30 special forms: https://www.gnu.org/software/mit-scheme/documentation/mit-sc...

R7RS doesn't seem to define special forms per se; it considers anything other than define-syntax, literals, variables, calls, lambda, if & set! to be derived expression types; I think quote & quasiquote probably have to be considered special forms in at least some senses.

Scheme itself is a minimalist language, but a practical Scheme must add in plenty of functionality not specified in the standard. A truly useful Scheme will thus be roughly as big as Common Lisp, or bigger — only much less of that bulk will necessarily be standardised & portable.

reply

---

https://medium.com/unbabel-dev/a-tale-of-three-kings-e0be17a16e2b#.4l8zexmz7

" In most scenarios, you can achieve the exact same end result with Python as you do with Go. With Python, you’ll probably be able to ship far faster, and have more developers available.

It’s also far more economical when it comes to the actual amount of code needing to be written; it took me around 100 lines of code in Go to do the same as with 20 lines with Elixir and 35 with Python.

That said, you shouldn’t really compare lines of code as a metric without strong context, because for scaling-sensitive scenarios, more lines of code is not relevant. In our testing, Go set the bar for performance very high, and while you may need to take the time to write better tests and keep an eye on code styling and documentation, it feels like it is worth the effort. "

---

fauigerzigerk 6 days ago [-]

The problem with "too clever" isn't just about cleverness that is actually in the code. It's also very much about cleverness that could potentially lurk behind any particular syntactical expression.

When you look at a line of code, what can you tell about its semantics without considering non-local information? What is invariant and what could potentially be redefined?

I think the answer to this question is extremely important for readability when you're not already familiar with a codebase.

reply

didibus 6 days ago [-]

That's a great call-out, though I see this in most languages; Haskell and Go equally have implicit non-local behaviours. But I find the functional paradigm tends to have less of this, because of side-effect-free functions.

reply

lilactown 6 days ago [-]

I find laziness (coming from more imperative, non-lazy languages) to be a huge source of implicit non-local behavior. Trying to figure out how efficient my code is without specific knowledge of how certain functions in the standard library work/how the compiler interprets my code is impossible.

reply

didibus 5 days ago [-]

You're right, the execution context is non local. I'm actually not a big fan of laziness as the default myself. I love the option of lazy evaluation, because sometimes, it makes things really easy, like for infinite sequences, but most of the time, it does add complexity in reasoning about the code.

There are functional alternatives to Haskell which adopt strict evaluation as the default, such as Elm, Rust, SML, Lisps, Fantom, Elixir, Scala, etc.

I have to admit though, this is a bit of a trade-off situation. Working with pure functions is very simple, but to map those to impure behavior, like IO, you need something that isolates it, and without laziness, I'm not sure how you can get that to be practical.

reply

fauigerzigerk 6 days ago [-]

I'm not yet familiar enough with Haskell to be honest, but languages like Scala, C++ (to the extreme) and to a lesser degree C# and Swift have a lot more support for non local redefinition of syntactic expressions than Go.

I totally agree with you about pure functions. They can be a real simplification, but only if any deviation from purity requires special syntax at the call site. Otherwise you're back in the guessing game.

reply

chongli 6 days ago [-]

but only if any deviation from purity requires special syntax at the call site.

Haskell makes it easy to write data and function types that enforce purity at the call site, throwing a type error if you make a mistake. Not just purity though, you can restrict these types to any arbitrary set of methods of your choosing. This lets you do things like parsing a blob of JSON and having all functions that depend on the result be guaranteed not to have to deal with a failed parse or otherwise invalid data. The fact that the data is good has been encoded in the types, preventing you from passing bad data by throwing a type error.
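((my note: the same trick works in most statically typed languages; here is a rough C# sketch of the idea, with made-up names. The only way to obtain a ParsedConfig is through a parse that succeeded, so downstream functions never have to deal with unvalidated data:))

    using System;

    // Private constructor: the only way to get a ParsedConfig is TryParse,
    // so any function that takes a ParsedConfig is guaranteed valid input.
    class ParsedConfig
    {
        public int Port { get; }
        private ParsedConfig(int port) { Port = port; }

        public static bool TryParse(string raw, out ParsedConfig config)
        {
            config = null;
            if (!int.TryParse(raw, out var port) || port <= 0) return false;
            config = new ParsedConfig(port);
            return true;
        }
    }

    static class Server
    {
        // never has to re-check the input: the type already says it's valid
        public static void Start(ParsedConfig config) =>
            Console.WriteLine($"listening on port {config.Port}");
    }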

reply

---

" Going back to dynamic languages, there are also languages like Clojure, which get by with sane defaults. For example in practice Clojure doesn't do OOP-style dynamic dispatching most of the time and functions are usually multi-variadic and capable of handling nil values gracefully. This is not enforced by the language, being simply a hygienic rule accepted by the community. However, relying on such conventions requires (a) capable developers that (b) agree on what the best practices should be. " ---