" If you like functional languages, you should be using Coffeescript instead of raw Javascript. And if you prefer promise based programming instead of callbacks, you should be using Iced Coffeescript. "
" Of that list, here’s what JS has:
Minimalism. Dynamic typing. First-class functions and closures.
Don’t tell me it’s got lexical scope, because JavaScript’s scoping is an abomination in the face of God. Guy Steele isn’t even dead and JS scope makes him pre-emptively roll in his not-yet-occupied grave.
...
At the same time, we’re ignoring the things about JavaScript that make it not Scheme. It’s got a much richer syntax including a great notation for data. I’m not a huge fan of prototypes (anymore), but it’s an interesting dispatch model that Scheme doesn’t have. "
-- http://journal.stuffwithstuff.com/2013/07/18/javascript-isnt-scheme/
"JavaScript's C-like syntax, including curly braces and the clunky for statement, makes it appear to be an ordinary procedural language. This is misleading because JavaScript has more in common with functional languages like Lisp or Scheme than with C or Java. It has arrays instead of lists and objects instead of property lists. Functions are first class. It has closures. You get lambdas without having to balance all those parens." -- http://www.crockford.com/javascript/javascript.html
"
danielparks • 4 days ago
Could you expand on your contention that JavaScript isn’t lexically scoped?
Calvin Metcalf • 4 days ago
It is functionally scoped instead of block scoped, and while it is mostly lexically scoped, 'this' is dynamically scoped.
munificent (Mod) • 4 days ago
Not just that, but thanks to with and the global object, you always have dynamically scoped variables."
"
Chris Howie • 4 days ago
(I mentioned this to you on Twitter already, but figured I'd repeat it here for the sake of discussion.)
Overall a very good read.
My only nitpick is that Java 8 does not have closures. It has these lambda things that resemble closures, but they are not any more closures than anonymous classes already were.
Anonymous classes can also reference variables in the parent scope if they are declared final. The only way in which lambda syntax differs meaningfully in this regard is that you don't have to declare the supposed upvalues as final -- but they *are implicitly final* and so are no different in that regard except that you don't have to go around slapping "final" on your locals. But just the same, you cannot modify them.
"For both lambda bodies and inner classes, local variables in the enclosing context can only be referenced if they are final or effectively final. A variable is effectively final if it is never assigned to after its initialization." -- JSR 335 http://cr.openjdk.java.net/~dl...
If one wants to call this "closure" then it's a pretty half-assed, mostly-useless form of it.
Alan Malloy • 4 days ago
Haskell's and Clojure's closures have the same limitation (because literally every local variable is final), and are pretty dang useful. Anonymous classes are tremendously verbose and painful to use, but they're perfectly serviceable, and I presume the syntax sugar in Java 8 will make them much more tolerable.
Chris Howie • 4 days ago
Agreed, it is syntax sugar. But to suggest that "now Java 8 has closures" ignores the fact that Java 8 lambdas are simply syntactic sugar for anonymous classes (which themselves were already syntactic sugar).
My point is that Java 8 either had closures before lambda was introduced or it doesn't have closures at all. The addition of lambda syntax doesn't change anything in this respect; it is simply a syntax specialization for the "anonymous class with one method" case. Rules regarding upvalues have not changed. There seem to be a lot of people treating "lambda" and "closure" as synonyms when they are anything but.
Ricky Clarkson • 4 days ago
I avoid the overloaded term 'closure', but Java 8 will have lambdas, and the 'effectively final' restriction will make using lambdas to create new control structures harder, but will avoid some confusion.
var list = [];
for (var a = 0; a < 10; a++) list.push(function() { return a; });
list[4].apply() - what does this give? Confusion.
Chris Howie • 4 days ago
It gives 10.
The confusion here is on the part of the programmer not understanding (a) how closures work, (b) the lifetime of variables in JavaScript, or (c) all of the above.
As an aside: In the case of C#, they fixed this case in version 5 of the language by having the foreach iteration variable logically exist inside of the body of the loop. (This compiler logic is only triggered when there is a closure capturing the variable, so there is no performance hit when not using closures.) This demonstrates that this problem can be fixed at the language level without shooting down a very useful feature.
I do understand your point, and this is something that has long been debated. But even if the language can't be changed, this problem can be "fixed" by simply adopting a discipline: don't close over iteration variables. I think it's rather silly to cripple the value of this feature because those unaccustomed to it make mistakes.
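my note: a sketch of that discipline and of the classic workaround, using Ricky's example from above (the IIFE creates a fresh scope per iteration):

var list = [];
for (var a = 0; a < 10; a++) list.push(function() { return a; });
list[4](); // 10 -- every closure shares the single hoisted `a`

var list2 = [];
for (var a = 0; a < 10; a++) {
    (function(captured) {
        list2.push(function() { return captured; }); // closes over the per-iteration copy
    })(a);
}
list2[4](); // 4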
Concurrency issues (deadlocking, race conditions, etc.) are considerably more complex than upvalue issues and are much harder to diagnose, and those can be solved by adopting disciplines (always acquire locks in the same order, for example). I fail to see how this case is so different.
Ricky Clarkson • a day ago
The point of the example is to show that a small misunderstanding can lead to a big surprise. It's presumably a big enough problem if C# made a backward incompatible change to protect its programmers from some of the impact.
To look at it from another angle, when do you need to close over mutable local variables? Implementing a 'timeThis' function that runs a block and dumps some execution time diagnostics is the classic example:
var timeIt = function(block) {
    var startTime = new Date().getTime();
    var result = block.apply();
    var endTime = new Date().getTime();
    dumpSomeStuff(startTime, endTime);
    return result;
};
then in calling code it would be useful to be able to write to mutable variables, as otherwise you need to change code to be able to use timeIt. You might also be annoyed that 'return', 'break' and 'continue' don't work inside the profiled block, but let's concentrate on variables.
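my note: a sketch of what "writing to mutable variables" from calling code looks like with the timeIt above (this is exactly the kind of capture Java 8's 'effectively final' rule forbids):

var numbers = [1, 2, 3];
var total = 0;
timeIt(function() {
    for (var i = 0; i < numbers.length; i++) {
        total += numbers[i]; // assigns to a mutable upvalue in the enclosing scope
    }
});
// total is now 6; a Java 8 lambda could not assign to `total` at all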
For Java, the earlier BGGA proposal found good ways to deal with such control structure uses of lambdas, but there were some valid and some immature reactions to that in the name of simplicity, and so the control structure uses disappeared from the scope as far as I can tell.
So assuming you're not writing something like timeIt or inventing a new kind of for loop using lambdas, I don't think Java 8's 'effectively final' restriction is going to hurt that much, and I expect it to prevent a number of bugs.
Mutability and flow control operators (return, continue, break) do cause some complications, and just saying no is probably about as good as trying to support everything."
" I’m happy that I chose Scheme-ish first-class functions and Self-ish (albeit singular) prototypes as the main ingredients. "
http://www.adequatelygood.com/JavaScript-Scoping-and-Hoisting.html
---
_pferreir_ 7 hours ago
Every language has its quirks. Python is not perfect either. But JS shows clear signs of bad design decisions, such as the behavior of the == operator.
woah 6 hours ago
What == does is pretty simple and easy to understand. If you have a hard time with it, use ===. Problem solved.
georgemcbay 5 hours ago
The problem is that JavaScript does not exist in isolation; there are other languages that use this operator. If you're familiar with any other C-derived language, the way == acts in JavaScript is very unexpected.
... Another example: the way 'this' scoping works is similarly busted in that while the rules for it are reasonably straightforward in isolation, it is different enough compared to other languages that share the same basic keywords and syntax that it should have been called something else.
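my note: an example of the 'this' surprise being described here -- the same function sees a different 'this' depending on how it is called:

var counter = {
    count: 0,
    increment: function() { this.count++; }
};
counter.increment();               // `this` is counter; count becomes 1
var inc = counter.increment;
inc();                             // `this` is the global object (in sloppy mode); counter.count is unchanged
document.body.onclick = counter.increment; // in the handler, `this` is the element, not counter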
---
jiaweihli 5 hours ago
That one is somewhat reasonable (and relatively obscure code you'd probably never write).
A more realistic example: inner classes can't see class variables from their enclosing classes. (Why enclose classes? - builder pattern)
-- " This is probably the reason for Javascript's popularity, but if you ignore some bad language design, Javascript is a very simple language with a very small number of concepts that you need to understand to start getting work done in it. Even Python (which I think is the simplest of the widely used dynamic languages) has a larger number of concepts that you need to understand than Javascript. "
--
Exactly like Javascript in this regard. I've been programming JS for over 15 years, and with the exception of cross browser support back in the day, there's nothing that had me startled after I learned the language -- except for a few issues related to floating point that I stumble upon now and then, but then that would be true for floating point math in any language.
--
samth 1 day ago
You should have a look at Typed Racket [1] which is a type system for an existing language (Racket) built entirely with macros [2] that satisfies all of the criteria you want -- works with the IDE, safe interop with untyped code, etc.
[1] http://docs.racket-lang.org/ts-guide/ [2] http://www.ccs.neu.edu/racket/pubs/pldi11-thacff.pdf
klibertp 20 hours ago
There's also Typed Clojure (https://github.com/clojure/core.typed), which is based on Typed Racket (at least according to previous discussions about it here).
--
innguest 23 hours ago
Maybe this will help see the full scope of the power of macros: macros receive their arguments as an AST (i.e. a list) and are free to transform it as they please; they also have full access to the host language and are free to implement a type system checker that walks code and checks that all the types match. There are no restrictions here - the type checker would be regular scheme code that would be wrapped into a macro simply to make it run at compile time. So there's nothing less powerful about macros at all. It's just code that runs at a different stage (compile-time instead of run-time). After macros, in terms of power, you have reader macros, that receive their arguments as raw text and are free to turn it into any AST.
--
bad_user 1 day ago
> For that matter, you could un-overload the + operator so that it only works with numbers
No you can't. First of all, you can't implement operator overloading, because Sweet.js is quite limited, as the macros have the form:
<macro-identifier> <expression...>
Also, because you don't have the types when Sweet.js compiles the code, it means that you can't distinguish between numbers and strings or what have you.
aaronem 1 day ago
Huh. This is what I get for talking something up before I've had a chance to play with it for myself -- you're right, Sweet.js doesn't allow for arbitrary syntax definitions, just those of the form keyword arguments body, with macro-expansion only on body, and no way of capturing the symbol prior to keyword. ...
disnet 1 day ago
Actually sweet.js just landed infix macros [1] so you can match on syntax that comes before the macro name. This just happened last week so we haven't updated the docs or made an official release just yet.
[1] https://github.com/mozilla/sweet.js/pull/162
--
aryastark 1 day ago
I barely trust Guy Steele with macros. You know who I don't trust with macros? Random JS coder dudes that haven't learned what true hardship is, when debugging a macro that is 5 levels deep and subtly changes the language because, hey, I have a macro! And having a macro means using a macro.
Language design is hard. Scheme has gotten it wrong. Over. And over. And over. They didn't get hygiene right for more than a decade. They still don't have it right. But people think they can easily invent new syntax that somehow doesn't do unexpected things. Your clever macro isn't so clever when some poor son of a bitch has been beating his head on a bug all day long just to find out your macro-that-looks-like-a-function isn't evaluating all its arguments because your macro-that-looks-like-a-function is really a function-that-is-a-macro. Don't get me started on missing IF branches and undefined behavior...
skrebbel 1 day ago
I strongly agree with everything the author writes, but I believe that sweet.js might not go far enough. For example, I am very fond of TypeScript. I'm also very fond of React's JSX. Yet, I cannot mix the two, and neither JavaScript extension could be expressed as Sweet.js macros.
--
Dewie 1 day ago
I think things like syntactic sugar are perfectly fine, as long as I can desugar them in a straightforward way. I want to be able to programmatically desugar some piece of code, not have to Google it each time I am curious.
I think that if it is easy to investigate things like syntactic sugar, rather than having it buried in something like a compiler or language spec, then DSL/language implementers (and anyone else, if the language permits it) could get away with implementing things that objectively make the language more complex to deal with, because deciphering them is only a query away, anyway.
--
kaoD 1 day ago
Don't be fooled by a legion of old-time lispers. Macros are a last resort in real-life Lisp (aka Clojure).
Hard to write (way harder than a function), hard to debug, hard to reason about, they're not functions (so fit worse in the rest of the language)... but powerful when you really need them.
"When you have a hammer...", and a desire to stand out from other languages made the lisp-macro myth flourish.
6cxs2hd6 1 day ago
Well actually, in a modern lisp (like Racket), you use macros in a variety of ways.
In addition to "deep" things, often you use them as practical, simple alternatives to using stuff like "snippets" or heavy IDEs. You can "DRY" annoying patterns, without resorting to external tooling.
Although macros used badly can be mysterious, so can any excessive pre-processing and build magic.
Macros provide "an API for the compiler", and since the compiler is involved, it can be smarter than external tools.
kaoD 1 day ago
Racket? I said real life! (just kidding :P)
As I said, when you really need them they're useful (e.g. in typed Clojure). Most (if not all) DRY patterns can be fixed using only functions.
The problem is, the macro mantra has been parroted for so long now it's part of the Lisp culture and its external image ("Lisp is homoiconic! You can modify code! Macros! MACROS!"), when it's actually one of the ugly (but powerful) parts of Lisp you should avoid most of the time. This confuses beginners and people interested in Lisp.
I see macros fitting mainly in DSLs (which is probably a code smell most of the time) and extending the language, as typed Clojure does. What other real use cases do you see for macros?
6cxs2hd6 1 day ago
The real use cases for macros boil down to 3 main areas:
1. Changing order of evaluation.
2. Creating new binding forms.
3. Implementing a data sub-language a.k.a. DSL
Although arguably #3 doesn't require macros, arguably it requires macros to do elegantly (Rubyists might argue not) and reliably (instead of monkey-patching, using hygiene and a macro-aware module system that can handle towers/layers of such systems).
chc 1 day ago
Well, 1 and 2 are the only cases where they are absolutely required, but I've seen other use cases. For example, the ClojureScript templating library Dommy uses macros to optimize selector code at compile time, which gives some impressive speedups (IIRC Prismatic found it to be twice the speed of jQuery).
--
aaronem 1 day ago
Modern Lispers (and modern Lisp learners' resources such as Seibel's Practical Common Lisp) talk about macros in exactly this fashion. I'd be willing to argue that the lesson has been learned.
--
inglor 11 hours ago
Some mistakes there, will fix. `null` is most certainly not an object.
notjosh 10 hours ago
I got this in an interview once as a curveball. It's most certainly an object in Javascript!
> typeof null
"object"
Contrast that with:
> typeof undefined
"undefined"
Go figure :)
saurik 10 hours ago
typeof null returns "object", but I do not believe it is correct to claim that null "is an object". It is my understanding (maybe a misunderstanding, I will happily state) from reading a ton of comments from Brendan Eich in various places that typeof null is "object" for historical reasons involving the way reference types were implemented in the original JavaScript VM.
raganwald 5 hours ago
It is correct to say that typeof(null) returns the string 'object' because of a bug that is now written into the standard to prevent old software from "breaking" if they fixed this.
It is not ever correct to say that null in JavaScript is an object. Just ask JavaScript itself:
Object.getPrototypeOf(null) //=> TypeError: Object.getPrototypeOf called on non-object
inglor 10 hours ago
Exactly, if people are interested I'll gladly find the reference to the relevant esdiscuss thread.
---
inglor 10 hours ago
That's a mistake in the specification. We tried to fix that experimentally but it broke too many sites.
If you check the language specification you can see that `null` is in fact not an object - it's something called a 'primitive value type', just like numbers, strings, undefined and booleans.
Here http://es5.github.io/#x8 :)
--
issues with javascript:
https://www.destroyallsoftware.com/talks/wat
---
masswerk 12 hours ago
What I personally really do enjoy is "var self = this;". Now you just overwrote the system variable pointing to the global object. What was this good for? Oh, the other language doesn't use "self" as a system variable, so it must be worthless in JS? Hmm.
Now we just have to define "var global = (function() {return this;}).apply(null);" and we have a reference pointing to the global object! Pure magic! Really? Are you serious? Sorry to hear so. (No, I was not suggesting to use some kind of more elaborate pattern for this. See, there is "self" ...)
(Whenever you see "var self = this;", take your things and run.)
mattmanser 9 hours ago
Err, that's totally ok. That's useful for keeping a reference to the object scope in a closure. You can't really write advanced javascript without it.
AFAIK in all the C-style languages `this` is the usual self-reference. Python & Ruby use self. VB.Net uses Me.
But JavaScript screwed `this` up, and this is especially apparent when you use events, where `this` ends up being the caller instead of the callee. Having a self reference in the closure allows you to fix the problem.
I think people ended up using self for a reason, it's strange enough in a C-style language that you're not expecting it to be anything, but familiar enough to be obvious. The other common ones over the years have been `_this` & `that`.
There are patterns in JavaScript, and then there are patterns. We need Crockford to write "JavaScript Patterns: The Good Ones".
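my note: the pattern being defended here, sketched -- `self` keeps the object reachable inside a callback where `this` has been rebound to the event target:

function Widget(element) {
    this.clicks = 0;
    var self = this; // capture the object; inside the handler `this` will be the element
    element.onclick = function() {
        self.clicks++; // `this.clicks++` here would increment a property of the element instead
    };
}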
--
kibwen 10 points 21 hours ago
No, let and var in Javascript don't convey any information about mutability. let just allows you to declare variables with a scope that you expect, as compared to var's ridiculous hoisted-function scoping.
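my note: what this means, sketched:

for (var i = 0; i < 3; i++) { /* ... */ }
console.log(i); // 3 -- `var` is hoisted to function scope, so it leaks out of the loop

for (let j = 0; j < 3; j++) { /* ... */ }
console.log(j); // ReferenceError -- `let` is scoped to the block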
--
https://github.com/codemix/fast.js
" What?
Fast.js is a collection of micro-optimisations aimed at making writing very fast JavaScript programs easier. It includes fast replacements for several built-in native methods such as .forEach, .map, .reduce etc, as well as common utility methods such as .clone.
...
How?
Thanks to advances in JavaScript engines such as V8 there is essentially no performance difference between native functions and their JavaScript equivalents, provided the developer is willing to go the extra mile to write very fast code. In fact, native functions often have to cover complicated edge cases from the ECMAScript specification, which puts them at a performance disadvantage.
An example of such an edge case is sparse arrays and the .map, .reduce and .forEach functions:
var arr = new Array(100); // a sparse array with 100 slots
arr[20] = 'Hello World';
function logIt (item) { console.log(item); }
arr.forEach(logIt);
In the above example, the logIt function will be called only once, despite there being 100 slots in the array. This is because 99 of those slots are empty. To implement this behavior according to spec, the native forEach function must check whether each slot in the array has ever been assigned or not (a simple null or undefined check is not sufficient), and if so, the logIt function will be called.
However, almost no one actually uses this pattern - sparse arrays are very rare in the real world. But the native function must still perform this check, just in case. If we ignore the concept of sparse arrays completely, and pretend that they don't exist, we can write a JavaScript? function which comfortably beats the native version:
var fast = require('fast.js');
var arr = [1,2,3,4,5];
fast.forEach(arr, logIt); // faster than arr.forEach(logIt)
By optimising for the 99% use case, fast.js methods can be up to 5x faster than their native equivalents.
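my note: a minimal sketch of the idea (not fast.js's actual source) -- iterate by index and skip the per-slot existence check the spec requires:

function fastForEach(subject, fn, thisContext) {
    var length = subject.length;
    for (var i = 0; i < length; i++) {
        // no `i in subject` hole check, so sparse slots are visited as undefined
        fn.call(thisContext, subject[i], i, subject);
    }
}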
Caveats
As mentioned above, fast.js does not conform 100% to the ECMAScript specification and is therefore not a drop in replacement 100% of the time. There are at least two scenarios where the behavior differs from the spec:
Sparse arrays are not supported. A sparse array will be treated just like a normal array, with unpopulated slots containing undefined values. This means that iteration functions such as .map() and .forEach() will visit these empty slots, receiving undefined as an argument. This is in contrast to the native implementations where these unfilled slots will be skipped entirely by the iterators. In the real world, sparse arrays are very rare. This is evidenced by the very popular underscore.js's lack of support.
Functions created using fast.bind() and fast.partial() are not identical to functions created by the native Function.prototype.bind(), specifically:
The partial implementation creates functions that do not have immutable "poison pill" caller and arguments properties that throw a TypeError upon get, set, or deletion.
The partial implementation creates functions that have a prototype property. (Proper bound functions have none.)
The partial implementation creates bound functions whose length property does not agree with that mandated by ECMA-262: it creates functions with length 0, while a full implementation, depending on the length of the target function and the number of pre-specified arguments, may return a non-zero length.
See the documentation for Function.prototype.bind() on MDN for more details.
In practice, it's extremely unlikely that any of these caveats will have an impact on real world code. These constructs are extremely uncommon.
"
---
danabramov 13 hours ago
Reminds me of this comment by Petka Antonov on native V8 Promises being way slower than Bluebird[1]:
>I'd expect native browser methods to be an order of magnitude faster.
Built-ins need to adhere to ridiculous semantic complexity which only gets worse as more features get added into the language. The spec is ruthless in that it doesn't leave any case as "undefined behavior" - what happens when you use splice on an array that has an indexed getter that calls Object.observe on the array while the splice is looping?
If you implemented your own splice, then you probably wouldn't even think of supporting holed arrays, observable arrays, arrays with funky setters/getters and so on. Your splice would not behave well in these cases but that's ok because you can just document that. Additionally, since you pretty much never need the return value of splice, you can just not return anything instead of allocating a wasted array every time (you could also make this controllable from a parameter if needed).
[1]: https://github.com/angular/angular.js/issues/6697#issuecomme...
[2]: https://github.com/v8/v8/blob/master/src/promise.js
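my note: a sketch of the kind of cut-down splice described above (removal only, no returned array, no bounds checks, no support for holes or getters/setters):

function fastSplice(arr, start, deleteCount) {
    for (var i = start; i + deleteCount < arr.length; i++) {
        arr[i] = arr[i + deleteCount]; // shift the tail left over the removed slots
    }
    arr.length -= deleteCount; // truncate the now-duplicated tail
}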
--
AshleysBrain 5 hours ago
This itself reminds me of similar performance issues hidden in C++. The float->int cast on x86 with Microsoft's compiler called a helper library function 'ftol'. Turns out this does loads of stuff to be spec compliant. Replacing it with a single not-spec-but-works-for-us x86 assembly instruction to convert was way faster.
So not just JS - it seems language built-ins are often slowed down by bureaucratic spec compliance, and hand-rolling code can help you get a speedup.
--
https://github.com/petkaantonov/bluebird/wiki/Optimization-killers
--
AshleysBrain 5 hours ago
Having written a major HTML5 game engine, I've ended up micro-optimizing JS code after small functions really did show up high in profiling measurements. One example: calculating a bounding box from a quad involved code along the lines of Math.min(a, b, c, d) followed by Math.max(a, b, c, d). Replacing that with a tree of ifs to determine both the minimum and maximum at once was faster and moved the bottleneck elsewhere.
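my note: a sketch of the min-and-max-in-one-pass idea (my names, not the engine's actual code):

function quadBounds(a, b, c, d) {
    var min = a, max = a;
    if (b < min) min = b; else if (b > max) max = b;
    if (c < min) min = c; else if (c > max) max = c;
    if (d < min) min = d; else if (d > max) max = d;
    return { min: min, max: max }; // one pass instead of Math.min(...) plus Math.max(...)
}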
--
throwaway_yy2Di 15 hours ago
I think in most cases where you'd worry about JS array performance you should use actual numeric arrays [0] rather than the kitchen sink Array(). Also, I think those function abstractions have a pretty significant overhead?
[0] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Type...
(edit): Yeah, the abstraction overhead is ridiculous. Here's the forEach() benchmark again, compared to an explicit for loop (no function calls):
// new benchmark in bench/for-each.js
exports['explicit iteration'] = function() {
    acc = 0;
    for (var j = 0; j < input.length; ++j) {
        acc += input[j];
    }
};
Native .forEach() vs fast.forEach() vs explicit iteration:
✓ Array::forEach() x 2,101,860 ops/sec ±1.50% (79 runs sampled)
✓ fast.forEach() x 5,433,935 ops/sec ±1.12% (90 runs sampled)
✓ explicit iteration x 28,714,606 ops/sec ±1.44% (87 runs sampled)
Winner is: explicit iteration (1266.15% faster)
(I ran this on Node "v0.11.14-pre", fresh from github).
thegeomaster 14 hours ago
I used this in a Firefox OS app, inside an implementation of Dijkstra's algorithm and an accompanying binary heap, and while I haven't run any rigorous benchmarks, I can say the runtime felt way better on my test phone when I rewrote the algorithm to use the typed arrays.
This is very often overlooked but extremely useful for implementations of fast algorithms in JavaScript that should scale to a lot of input data.
phpnode 14 hours ago
regarding your edit, you're exactly right, of course a for loop will be faster. Sometimes you really do need a function call though, in which case fast forEach and map implementations become more useful.
The next step for fast.js is a set of sweet.js macros which will make writing for loops a bit nicer, because it's pretty painful to write this every time you want to iterate over an object:
var keys = Object.keys(obj), length = keys.length, key, i;
for (i = 0; i < length; i++) {
    key = keys[i];
    // ...
}
I'd rather write:
every key of obj { // ... }
and have that expanded at compile time.
Additionally there are some cases where you must use inline for loops (such as when slicing arguments objects, see https://github.com/petkaantonov/bluebird/wiki/Optimization-k...) and a function call is not possible. These can also be addressed with sweet.js macros.
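my note: the inline arguments-copying loop the optimization-killers wiki recommends looks roughly like this (leaking `arguments` itself, e.g. via Array.prototype.slice.call(arguments), deoptimizes the function in V8):

function argsToArray() {
    var args = new Array(arguments.length);
    for (var i = 0; i < arguments.length; i++) {
        args[i] = arguments[i]; // copy by hand instead of slicing the arguments object
    }
    return args;
}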
throwaway_yy2Di 14 hours ago
To be fair, it's not obvious if you're not a JS expert: coming from some other language, you could naively assume that function call gets inlined, with no overhead.
--
throwaway41597 8 days ago
> We could achieve similar things in Node with generators, but in my opinion generators will only ever get us half way there
I wish more people realized this. Generators seem to be billed (ironically by TJ amongst others with his Koa project) as the solution to callback hell but, having tried Go, I got the same feeling that actors are much easier to reason about and debug (although concurrency is still hard).
On the client side, web workers allow parallelism but the API is inconvenient to use: you need to have a file for each web worker whereas Go's `go` routines are akin to a function call. In addition to this standardized conundrum, you have browsers discrepancies with the APIs available inside a worker varying between vendors [1].
On node, you have the cluster API, which allows managing several processes so it's even further from light threads. On the bright side, most (all?) APIs have both a sync and async version.
As a result, there's nothing you can use for multithreading in a module without coordinating with the whole project. I think JavaScript needs language support for light threads, not APIs for multiprocessing.
[1]: https://developer.mozilla.org/en-US/docs/Web/API/Worker/Func...
iambase 1 day ago
Rather than cluster API, I guess you meant child process API but otherwise I agree 100%.
---
" Go versus Node
If you’re doing distributed work then you’ll find Go’s expressive concurrency primitives very helpful. We could achieve similar things in Node with generators, but in my opinion generators will only ever get us half way there. Without separate stacks error handling & reporting will be mediocre at best. I also don’t want to wait 3 years for the community to defragment, when we have solutions that work now, and work well.
Error-handling in Go is superior in my opinion. Node is great in the sense that you have to think about every error, and decide what to do. Node fails however because:
you may get duplicate callbacks
you may not get a callback at all (lost in limbo)
you may get out-of-band errors
emitters may get multiple “error” events
missing “error” events sends everything to hell
often unsure what requires “error” handlers
“error” handlers are very verbose
callbacks suck
In Go when my code is done, it’s done, you can’t re-execute the statement. This is not true in Node, you could think a routine is completely finished, until a library accidentally invokes a callback multiple times, or doesn’t properly clear handlers, and causes code to re-execute. This is incredibly difficult to reason about in live production code, why bother? Other languages don’t make you go through this pain.
Personally I think it makes more sense for young startups to focus on reliability over raw performance, that’s what makes or breaks your relationship with customers. This is especially true with small teams, if you’re too busy patching brittle code then you can’t work on the real product.
Node’s sweet spot, to me at least, is that it’s written with JavaScript; I think capitalizing on that with usability makes the most sense.
...
Future Node
I still hope Node does well, lots of people have heavily invested in it, and it does have potential. I think Joyent and team need to focus on usability — performance means nothing if your application is frail, difficult to debug, refactor and develop.
The fact that 4-5 years in we still have vague errors such as “Error: getaddrinfo EADDRINFO” is telling of where the priorities are at. Understandably it’s easy to miss things like that when you’re so focused on building out the core of a system, but I think users have expressed this sort of thing over and over, and we’re not seeing results. We usually get poor responses advocating that what we have is perfect, when in practice it’s anything but.
Streams are broken, callbacks are not great to work with, errors are vague, tooling is not great, community convention is sort of there, but lacking compared to Go. That being said there are certain tasks which I would probably still use Node for, building web sites, maybe the odd API or prototype. If Node can fix some of its fundamental problems then it has good chance at remaining relevant, but the performance over usability argument doesn’t fly when another solution is both more performant and more user-friendly.
If the Node community decides to embrace generators and can implement them to the very core of node, to propagate errors properly then there’s a chance that it would be comparable in that area. This would drastically improve Node’s usability and robustness. " -- https://medium.com/code-adventures/4ba9e7f3e52b
---
" JavaScript? has two sets of equality operators: === and !==, and their evil twins == and !=. The good ones work the way you would expect. If the two operands are of the same type and have the same value, then === produces true and !== produces false. The evil twins do the right thing when the operands are of the same type, but if they are of different types, they attempt to coerce the values. the rules by which they do that are complicated and unmemorable. These are some of the interesting cases:
'' == '0'          // false
0 == ''            // true
0 == '0'           // true
false == 'false'   // false
false == '0'       // true
false == undefined // false
false == null      // false
null == undefined  // true
' \t\r\n ' == 0    // true
The lack of transitivity is alarming. My advice is to never use the evil twins. Instead, always use === and !==. All of the comparisons just shown produce false with the === operator." -- Douglas Crockford's JavaScript?: The Good Parts, quoted in http://stackoverflow.com/a/359509/171761
my note: the above examples alone only demonstrate a failure of transitivity if you assume symmetry (0 == '' and 0 == '0' only imply '' == '0' when == is symmetric), so symmetry is doing real work here. == does seem to be symmetric, though, which makes transitivity the property that actually fails.
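a quick console check:

0 == ''    // true
'' == 0    // true -- symmetric
0 == '0'   // true
'0' == 0   // true -- symmetric
'' == '0'  // false -- the chain through 0 does not carry over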
---
https://news.ycombinator.com/item?id=8713270
---
byroot 10 days ago
It's already done often, especially for frontend gems. See execjs [0].
[0] https://github.com/sstephenson/execjs
---
joev_ 9 days ago
Obligatory:
Compiled through emscripten to build a JS interpreter in a browser... at only 1.5MB!
---
grosskur 10 days ago
There's also otto, a JavaScript interpreter written in Go:
https://github.com/robertkrimen/otto
---
http://glenmaddern.com/articles/javascript-in-2015
https://news.ycombinator.com/item?id=8849907
peferron 1 day ago
My biggest issue with the recent additions to the language is that there's now a thousand different ways to do the same thing.
Iteration:
for (var i = 0; i < y.length; i++) { ... }
for (var x in y) { ... }
for (var x of y) { ... }
y.forEach(function(x, i) { ... })
Object.keys(y).forEach(function(x) { ... })
Comparison:
==
===
Object.is() (would have been a good laugh if introduced as ==== instead)
Of course, this doesn't matter much if you're a single developer. I've started writing a bit of ES6/ES7 and it's pretty cool. But it's going to be a PITA for projects built by many developers of varying experience levels. The nice things about smaller languages is that there's often only one way to do something, so when you write code or review other people's code, your mind is free from the minutiae and you can focus on the big picture instead.
It's a bit funny that it's when JS is, from the general consensus, finally getting "better" that I'm actually considering more and more switching to a small but well-built compile-to-JS language. I guess smallness and simplicity just matter a lot to me.
serve_yay 1 day ago
Object.is is a very silly addition to the language. It does the same thing as === except in the case of NaN and positive/negative zero.
I mean if you read a polyfill for it, it's such a silly bit of "functionality". And of course the name is terrible. Argh.
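my note: the two cases where Object.is() differs from ===:

NaN === NaN         // false
Object.is(NaN, NaN) // true
0 === -0            // true
Object.is(0, -0)    // false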
ender7 21 hours ago
The new Map and WeakMap classes use Object.is() to determine if two keys are the same (otherwise it would be impossible to use NaN as a key to a map).
Whether this algorithm should have been exposed to users is debatable, but it exists for a good reason.
lomnakkus 23 hours ago
Just looking at your example:
for (var i = 0; i < y.length; i++) { ... }
for (var x in y) { ... }
for (var x of y) { ... }
y.forEach(function(x, i) { ... })
Object.keys(y).forEach(function(x) { ... })
None of the "for" variations are considered good practice in ES6. You should be using "let" (or "const" if it's allowed here) to avoid var-hoisting of "i".
Personally, I'd advocate using "for" if you have a need for early return/break/continue -- otherwise I'd go for the first forEach() variant. Or, even better, use "map" and avoid side effects in the function you're mapping over the collection. Unless of course you're doing something imperative.
The fact that the last forEach() variant is possible is a good thing, though I wouldn't recommend its use in this case because it's needlessly complex -- it shows that the language/stdlib is becoming more compositional.
peferron 19 hours ago
Yes, "let" is better than "var". I could also have used a fat arrow in the forEach(). But my point was to list iteration variations, so outside of that I wrote traditional ES5.
This illustrates the issue though. "var" is like "let" but without block scoping, so you should almost never use "var", but it's still there to trip newcomers. The fat arrow is like the "function" keyword and most of the time you can use them interchangeably, but if you rely on "this" they're not interchangeable anymore.
This growing laundry list isn't exactly thrilling. I'm glad to have map(), filter(), every() and friends, though.
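my note: the non-interchangeable case, sketched -- a fat arrow keeps the enclosing `this`, a `function` gets its own:

var ticker = {
    count: 0,
    startBroken: function() {
        setInterval(function() { this.count++; }, 1000); // `this` here is NOT ticker
    },
    startFixed: function() {
        setInterval(() => { this.count++; }, 1000); // arrow inherits `this` from startFixed
    }
};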
lomnakkus 19 hours ago
Have a pre-commit linter disallow "var". (etc. for everything else.)
In a language like JS you cannot have it all, but at least appreciate the improvements! ;)
pcthrowaway 20 hours ago
> My biggest issue with the recent additions to the language is that there's now a thousand different ways to do the same thing
To be fair though, this has been an issue with Javascript since its creation (and has been getting worse as the language has been expanded while maintaining backwards-compatibility).
Many other languages have similar problems (Ruby is an offender that comes to mind, though it's perhaps not on the level of ES6+)
I'm not much of a polyglot, but one language which seems to have "one obvious way" as part of its design choices that springs to mind is Python. I have a hunch that pure-functional languages would be less choicy as well, though I have no familiarity with any.
frik 1 day ago
I am unsure about some ES6 additions.
Thanks to Crockford we got a decent ES5. Remember that several syntax changes got postponed to ES6. And don't forget about "E4X", a beast that was supposed to be JavaScript 2: http://www.ecma-international.org/publications/standards/Ecm... It got similar traction as XHTML 2. Both had no backwards compatibility - an insane idea. Some new features in ES6 look like Crockford's "good parts" movement lost and the Sun/Oracle Java evangelists won.
Hopefully Douglas Crockford updates his "JavaScript: The Good Parts" in time for JavaScript 6.
riffraff 22 hours ago
I really wish E4X had gotten traction. I wrote a firefox extension using it and it was awesome to do XUL + JS with it.
Years later
var foo = <foo>{item}</foo>;
is the new hotness in facebook's JSX.
lomnakkus 19 hours ago
AFAIUI from the React people E4X had a lot of incidental complexity and extraneous stuff relative to JSX. So there's that.
I'd argue that with ES6, JSX could just reserve the "jsx" prefix for interpolated strings and go with
jsx`<foo>blah</foo>`
but that's a typical hindsight-is-20/20-thing.
insin 9 hours ago
This blog post - "JSX: E4X The Good Parts" - covers what's in and out for JSX:
http://blog.vjeux.com/2013/javascript/jsx-e4x-the-good-parts...
chrisdotcode 1 day ago
What's interesting is that all of the 'niceness' seen is the result of a switch to the functional style.
Excluding the singleton class (which could have been a single function itself), you've got your map/filters in the gif processing, and encapsulation of async activities[0] through monads via promises.
Seems like JavaScript got good when it started embracing what functional programmers have been drowned out saying for years.
Looking good indeed.
[0] Async actions as a language-level construct, as opposed to a syntactic abstraction have always been A Bad Idea. Promises should have always been the default, and manually writing callbacks should have never been a thing.
lucian1900 1 day ago
Except for immutability. It's still nowhere near the norm.
nwienert 1 day ago
I'm working on a JS stack that leverages React, es6, functional style programming, and immutable data. Check my profile for more info.
_greim_ 1 day ago
> Async actions as a language-level construct, as opposed to a syntactic abstraction has always been A Bad Idea.
Are you saying that the upcoming async/await features of JS are a bad idea? Or maybe I'm not following; can you give an example?
chrisdotcode 1 day ago
This:
images <- get "http://example.com/images.json"
is objectively easier to reason about than:
var images;
get("http://example.com/images.json", (err, resp) => {
    if (err) throw err;
    images = resp;
});
even though the former might be internally implemented as the latter.
In addition, the former doesn't give the programmer the 'opportunity' to cause a race condition, and encapsulates the failure entirely for you (automatically and by default); if `images` ends up erroring, and you use it again, then it'll short-circuit without crashing, very similar to `then()`.
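my note: roughly what that looks like with promises (assuming a promise-returning get, and show is a hypothetical function):

get("http://example.com/images.json")
    .then(function(images) {
        return show(images); // only runs if the request succeeded
    })
    .catch(function(err) {
        console.error(err); // a failure at any step short-circuits the chain to here
    });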
zak_mc_kracken 1 day ago
It's mostly easier to read because it no longer deals with error cases. All code becomes easier to read if you ignore errors.
I can make my code arbitrarily short if it doesn't have to be correct.
chrisdotcode 1 day ago
That's the magic of monads: the errors are all handled for you (in an encapsulated, lossless way), and you can deal with them (if you choose) at the end of the chain, just the same as promises (because promises are a monad).
zak_mc_kracken 1 day ago
The same can be said of exceptions.
However, my comment above had nothing to do with how you handle errors, it was about the unsoundness of comparing code that handles errors to code that doesn't.
chrisdotcode 1 day ago
Right - What I'm saying is that the above former code does handle the error, in the exact same way as the latter code.
For clarification, the error is handled implicitly (but you'd know what kind of error it was due to the type signature), but you can always handle it in manner you choose to at any point.
tel 1 day ago
It's not really the same as exceptions in a typed language because it's forced to be delimited. Even in Javascript you'll have to do a little ceremony to "escape" the golden path driven by the monad, though you'll have many ways to take short-cuts and forget details.
arcatek 1 day ago
In their current states, Webpack is for production applications, JSPM is for development / little ES6 apps, and SystemJS is used for Node.js.
Guy Bedford has made a great tool with JSPM.
davedx 1 day ago
This seems so fragmented, even before you consider all the other module loaders, transpilers and toolchains.
I'm a senior frontend developer and I find this side of JavaScript truly bewildering.
mattdesl 1 day ago
Agreed on there being too many tools. Thankfully most of them are starting to embrace the same core feature: writing npm modules. Small bits of code that you publish once to npm, and then reuse across many projects and potentially many build/workflow systems.
arcatek 1 day ago
The main issue is really the lack of a module concept in JavaScript core. I hope things get better with ES6, because at this point it will be much, much easier to write a library to load them all, without relying on "proprietary" loaders.
Touche 1 day ago
It runs in the browser so less boilerplate to start a project, don't have to run a build daemon for every project or wait for rebuilds to finish, etc.
jnhasty 1 day ago
For anyone building projects utilizing GIFs, check out the GIPHY api.
Here's another project using GIFs and beat matching:
Some other cool projects:
williamcotton 1 day ago
What advantages does this offer over the more mature ecosystem that surrounds browserify?
lhorie 1 day ago
From what I can tell, it means you no longer need to write/maintain gulp scripts and you don't have a build step during development
khalilravanna 1 day ago
That's assuming you want all your scripts to be loaded on the front end asynchronously which seems to be only for development. For production I think you'd still want to compile this all down for a faster load time. From the jspm page: "For production, use the jspm CLI tool to download packages locally, lock down versions and build into a bundle." So I think you're correct that there'd be no build step for development but you'd still have some sort of script being run, whether it be gulp or otherwise, that builds it for production use.
lhorie 1 day ago
yep, the point though is that having a build step during dev adds some overhead (which can easily go into the several-seconds range in my experience) into the type-save-reload cycle, which is exacerbated by the hit-reload-before-build-finished-so-need-to-reload-again pattern.
My understanding is that this tool removes that entire class of annoyances, and gives a no-hassle live-reload ES6-enabled environment on top.
mattdesl 1 day ago
The build step is mitigated by incremental reloading, and the incremental reloading can be tied to a live-reload event.
So you end up with the same workflow as shown in this video; but most likely faster for many modules (browser requests 1 JS file rather than potentially hundreds) and also more realistic for production (aside from minification, the dev environment is the exact same as the production environment).
williamcotton 1 day ago
Substack's essay applies as much to this tool as it does to webpack:
https://gist.github.com/substack/68f8d502be42d5cd4942
Importantly, overloading require like this:
var collections = require('npm:lodash-node/modern/collections');
var $ = require('github:components/jquery');
... means that you can't publish this module to npm and have the require statements work as expected.
agmcleod 1 day ago
I think that's an okay compromise, especially if you typically have a CI process anyways.
mattdesl 1 day ago
You still need to maintain build scripts for production, though.
p.s. For prototyping like in the vid, you can use tools like beefy or wzrd to avoid setting up any browserify build step. :)
williamcotton 1 day ago
Who says you have to use gulp? Just use make.
https://github.com/williamcotton/makeify
nawazdhandala 1 day ago
Finally we have promises support built into JS.
eskimobloood 1 day ago
I wonder if there is a way to split the build into more than just one big file - into several smaller files that can be loaded at runtime.
illicium 1 day ago
Use Webpack and set up code splitting
atestu 1 day ago
What is the difference between this and require.js for loading modules? I feel like I'm missing something
onion2k 1 day ago
require.js loads modules that are written to the CommonJS and AMD standards. systemjs loads those too, but it also loads a few other things, including modules that just dump things into global scope and things written to the ES6 standard.
It's sort of like require.js on steroids.
atestu 1 day ago
gotcha, thanks!
_broody 1 day ago
Most of these WTF examples basically boil down to JS doing type coercion willy-nilly. This 'feature' makes writing conditionals slightly shorter, in exchange for introducing the possibility of massive bugs everywhere in your code at any moment. Seriously, f* JS type coercion.
The other misfeature I hate is that accessing undefined properties doesn't raise an error then and there - but you can be sure it will make your program blow up a bit later.
TypeScript helps to solve both.
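my note: the second misfeature, sketched -- the mistake is silent where it happens and blows up somewhere else (tick is a hypothetical callback):

var config = { timeout: 500 };
var t = config.timeuot;  // typo, but no error here; t is just undefined
// ... much later ...
setTimeout(tick, t * 2); // t * 2 is NaN, surfacing far from the actual mistake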
dccoolgai 1 day ago
http://leftoversalad.tumblr.com/post/103503118002
sarciszewski 1 day ago
Some WTF moments in Javascript, courtesy of Gary Bernhardt:
var foo = ["10", "10", "10"]; foo.map(parseInt); // Returns [ 10, NaN, 2 ]
[] + [] // "" [] + {} // {} {} + [] // 0 {} + {} // NaN
var a = {}; a[[]] = 2; alert(a[""]); // alerts 2
alert(Array(16).join("wat" - 1) + " Batman!");
Press F12 and use the Console to verify these if you're skeptical.
untog 1 day ago
While these are WTF moments, when is anyone actually going to run [] + [] in a real project? The map to parseInt is the only one that's even close to something you'd actually write.
lmkg 1 day ago
Writing literal [] + []? No.
Writing x + y where x and y are both arguments to a function, and some call site was passed an (empty?) array of integers instead of an integer? Believable.
path411 3 hours ago
So the real problem is passing wrong parameters to a function? Sounds like you should swap to something like TypeScript for strict typing then.
Or you know, stop acting like JavaScript is unique in that improper function calling breaks your code.
I can't believe in 2015 there are still people who follow the "JavaScript equalities are WTF" mentality. If you are running into equality operator problems in JS, you are probably going to run into a myriad of problems in any language.
untog 1 day ago
I've been writing JS professionally for, what, a decade now - and I've never once had this issue.
emehrkay 1 day ago
The first one is easy to understand though.
parseInt takes two arguments: $thing_to_change and $radix; map iterates over an array and feeds it $value and $index. You're getting parseInt("10", 0); parseInt("10", 1) and parseInt("10", 2);
The fix would be to partially apply parseInt with your defined radix;
var foo = ["10", "10", "10"]; var base10 = function(val){ return parseInt(val, 10); }; x = foo.map(base10) [10, 10, 10]
sarciszewski 1 day ago
A lot of the ones he presented are easy to understand. It's still a WTF when you run into it though.
beejiu 23 hours ago
All you have done is used a function without understanding what it was doing, or reading the documentation. Most JS developers know how parseInt works, and even if they run into this problem, would quickly discover the cause. I don't see how this is a flaw of Javascript; it could happen to a developer of any language, if their strategy is 'well, it looks like it'll work'.
sarciszewski 22 hours ago
> All you have done is used a function without understanding what it was doing, or reading the documentation
These aren't my examples. I haven't done anything. I credited the person who provided them: Gary Bernhardt.
https://www.destroyallsoftware.com/talks/wat
https://www.destroyallsoftware.com/talks/the-birth-and-death...
Next time before you make an accusation, reread the post before pressing the reply button.
emehrkay 1 day ago
I guess my question is: what would have to change, in the last example, to make it not WTF to you? To me it seems pretty straight-forward what is happening.
Bahamut 1 day ago
These are pretty bad examples. I can safely say that I have never run into these in the wild, or seen any other developer run into these.
hfsktr 1 day ago
I know what the batman example gives but I don't get how it's a WTF?
I assume the expected output is "wa" but why should a string less an integer produce that?
squeaky-clean 1 day ago
There is no expected output. It doesn't make sense to subtract the value 1 from the string "wat". But Javascript will cast both of them to numbers, then try to subtract 1 from NaN. However, if you do "wat" + 1, Javascript will cast both of them to strings, and append "1" to "wat".
It's not just the odd behavior, but the inconsistency.
hfsktr 1 day ago
Ok that makes a bit more sense. I didn't even think about how it would be if you tried to add them.
I like javascript but I don't do anything so complicated (or maybe not the right types of things) that I run into many of these situations.
moron4hire 1 day ago
No, it produces NaN (Not-a-Number), and then repeats it in a string 16 times. The WTF is that NaN can be concatenated to strings as "NaN".
insin 1 day ago
NaN is passed to Array.prototype.join() as the separator to join strings with, so it gets coerced to String ("NaN"). It's deliberately working backward from the behaviour of Array.prototype.join() to create a contrived example for giggles, not a WTF.
It's no more a WTF than this, which follows the same principle: https://gist.github.com/insin/1183916
moron4hire 1 day ago
Oh, I understand that. I was more trying to explain why someone might think it was a WTF.
And I'm pretty sure your example returns the decimal 15. Comma operator returns the last element in the list, and parseInt will truncate strings with non-number-like text to the number-like part. Here, the number-like part is a hexadecimal code, triggering another feature of parseInt that it can figure out the radix on the fly.
virmundi 1 day ago
I'm actually trying to write a book about that myself. It's called "JavaScript es basura caliente: Learning JavaScript in Anger".
The goal of the book is not to rag on JS. That's not new or really interesting in its own right. The goal is to walk people through the oddities of the language such as identity loss when passing a function from an object to something else (the good old this == window rather than self).
There are interesting, and annoying, things about JS that are non-obvious to people from a different language like Java or C#. There are other issues like testing and package management that are either assumed or glossed over. The JS community knows about them. Unfortunately for most of us, the responses to the language's weakness and ecosystem strengths are dispersed through the interblogs.
In fact, from personal use of JS and some research for the book, I'm moving to a functional approach with JS. OO in Ecma5 is a pain. Pure (or Clojure-like) functional can work with a bit of help from Underscore. That paradigm seems to fit the mentality of JS better too.
https://leanpub.com/javascriptesbasuracaliente
_random_ 21 hours ago
Here you go: http://wtfjs.com
collyw 1 day ago
It's well documented:
https://wiki.theory.org/YourLanguageSucks#JavaScript_sucks_b...
andyhmltn 1 day ago
You didn't link to it and it's still pretty irrelevant to the original discussion.
---
http://blog.npmjs.org/post/101775448305/npm-and-front-end-packaging
---
Replace CoffeeScript with ES6 https://news.ycombinator.com/item?id=8970081
---