proj-oot-ootNotes33

kccqzy 1 day ago [-]

One of the coolest things I've seen is to use Prolog with CLP(FD) to solve the 7-11 problem. The problem basically says the sum of the prices of four items is $7.11, and the product is $7.11 too (no rounding); find the prices of these four items.

This can be solved in two lines of code that give the (unique) solution in a second. Not even my expensive Mathematica can do this!

    ?- use_module(library(clpfd)).
    true.
    
    ?- Vs = [A,B,C,D], Vs ins 1..711, A * B * C * D #= 711000000, A + B + C + D #= 711, A #>= B, B #>= C, C #>= D, labeling([ff, down], Vs).
    Vs = [316, 150, 125, 120],
    A = 316,
    B = 150,
    C = 125,
    D = 120 ;
    false.

Another great thing Prolog is good at is type inference. After all, type inference, in its simplest form, is just syntax-directed constraint generation and then using unification to solve constraints—exactly what Prolog gives you by default. You can write a type inference engine for simply typed lambda calculus in Prolog in 10 minutes. Google used to have a project to do type inference for Python written in Prolog, although they've since [switched away](https://github.com/google/pytype).

reply
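((my note: the unification point is easy to appreciate if you try it without Prolog. Below is a rough Python sketch of syntax-directed constraint generation plus unification for the simply typed lambda calculus; in Prolog the whole unify() part comes for free from the language. All names here (TVar, infer, the tuple encoding of terms) are mine, just for illustration.))

    # Minimal type inference for the simply typed lambda calculus:
    # walk the syntax, generate equations, solve them by unification.

    class TVar:
        """A type variable; 'ref' points at whatever it was unified with."""
        count = 0
        def __init__(self):
            TVar.count += 1
            self.name, self.ref = f"t{TVar.count}", None

    def prune(t):
        # Follow unification links to the representative type.
        while isinstance(t, TVar) and t.ref is not None:
            t = t.ref
        return t

    def unify(a, b):
        a, b = prune(a), prune(b)
        if a is b:
            return
        if isinstance(a, TVar):
            a.ref = b
        elif isinstance(b, TVar):
            b.ref = a
        elif isinstance(a, tuple) and isinstance(b, tuple):   # ('->', arg, result)
            unify(a[1], b[1])
            unify(a[2], b[2])
        elif a != b:
            raise TypeError(f"cannot unify {a} and {b}")

    def infer(term, env):
        """Terms: ('lit', n) | ('var', x) | ('lam', x, body) | ('app', f, arg)."""
        tag = term[0]
        if tag == 'lit':
            return 'int'
        if tag == 'var':
            return env[term[1]]
        if tag == 'lam':
            t_arg = TVar()
            t_body = infer(term[2], {**env, term[1]: t_arg})
            return ('->', t_arg, t_body)
        if tag == 'app':
            t_fun, t_arg, t_res = infer(term[1], env), infer(term[2], env), TVar()
            unify(t_fun, ('->', t_arg, t_res))   # the one constraint an application generates
            return prune(t_res)

    # (\x. x) 42  ==>  int
    print(infer(('app', ('lam', 'x', ('var', 'x')), ('lit', 42)), {}))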

triska 1 day ago [-]

Hah! I used exactly this example to motivate the arbitrary precision CLP(FD) implementation, published as The Finite Domain Constraint Solver of SWI-Prolog, FLOPS 2012, LNCS 7294:

https://www.metalevel.at/swiclpfd.pdf

The observations on type inference are also spot on. The project you mentioned was previously discussed on HN, but the page is no longer available:

https://news.ycombinator.com/item?id=12108041

reply

---

" if Prolog had been easier to understand – perhaps with some stronger typing and some greater degree of declarativeness (such as can be found in some experimental descendants of Prolog such as Goedel) it might have survived.

Then again, perhaps not – Ada, after all, is pretty much dead too and it had none of these problems. "

---

https://www.lazarus-ide.org/

---

paganel 2 hours ago [-]

> in a language frowned upon by many programmers today.

Adding to the list, lambda-the-ultimate.org itself is written in Drupal/PHP.

reply

---

buserror 8 hours ago | on: Adobe Photoshop 1.0.1 Source Code (2013)

everything was written in pascal back then, not just Photoshop. I spent years writing pascal as a professional mac developer before Lightspeed C (which became THINK, then Symantec C later on) appeared... Then I never looked back.

MPW (Macintosh Programmer's Workshop) was pretty awesome, if slow as hell -- Turbo Pascal was a lot better to work with -- the only issue is that it 'tokenized' your source code, so it wasn't plain text anymore...

I still miss these 'one pass' compilers; I think it peaked with the Metrowerks toolchain (which kept a Pascal plugin for a long time!) that was IMO the best compiler/linker/debugger toolset ever made.

((my note: Metrowerks made CodeWarrior; I briefly googled it with terms like 'single pass' and 'one pass' and couldn't find anything))

---

" Some features of the language were omitted because of time but the omitted features do not directly relate to the analysis that is done to validate this work. The main features that are not supported yet are inheritance and exceptions. " -- [1]

---

chubot 23 days ago

parent [-] | on: Python startup time: milliseconds matter

This is disappointing to me too, but I think there are some problems baked in to the language that make it hard.

The import code in CPython was a mess, which was apparently cleaned up by importlib in Python 3, through tremendous effort. But unfortunately I think importlib made things slower?

I recall a PyCon talk saying that as of 3.6, essentially everything about Python 3 is now faster than Python 2, EXCEPT startup time!

This is a shame, because I would have switched to Python 3 for startup time ALONE. (As of now, most of my code and that of my former employer is Python 2.) That would have been the perfect time to address startup time, because getting a 2x-10x improvement (which is what's needed) requires breaking changes.

I don't think there's a lack of interest in the broader Python community, but there might be a lack of interest/manpower in the core team, which leads to the situation wonderfully summarized in the recent xkcd:

https://xkcd.com/1987/

FWIW I was the one who sent a patch to let Python run a .zip file back in 2007 or so, for Python 2.6 I think. This was roughly based on what we did at Google for self-contained applications. A core team member did a cleaner version of my patch, although this meant it was undocumented until Python 3.5 or so:

https://docs.python.org/3/library/zipapp.html

The .zip support at runtime was a start, but it's really the tooling that's a problem. And it's really the language that inhibits tooling.

Also, even if you distributed self-contained applications, the startup time is not great. It's improved a bit because you're "statting" a zip file rather than making syscalls, but it's still not great.

In other words, I have wondered about this "failure" for over a decade myself, and even tried to do something about it. I think the problem is that there are multiple parts to the solution, and the responsibility for these parts is distributed. I hate to throw everything on the core team, but module systems and packaging are definitely a case where "distributed innovation" doesn't work. There has to be a central team setting standards that everyone else follows.

Also, it's not a trivial problem. Go is a static language and is doing better in this regard, but still people complain about packaging. (vgo is coming out after nearly a decade, etc.)

I should also add that while I think Python packaging is in the category of "barely works", I would say the same is true of Debian. And Debian is arguably the most popular Linux package manager. They're cases of "failure by success".

...

FWIW I think importing is heavily bottlenecked by I/O, in particular stat() of tons of "useless" files. In theory the C to Python change shouldn't have affected it much. But I haven't looked into it more deeply than that.

chubot 23 days ago [-]

EDIT: I should also add that the length of PYTHONPATH as constructed by many package managers is a huge problem. You're doing O(m*n) stat()s -- random disk access -- which is the slowest thing your computer can do.

m is the number of libraries you're importing, and n is the length of the PYTHONPATH.

So it gets really bad, and it's not just one person's "fault". It's a collusion between the Python interpreter's import logic and how package managers use it.
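((my note: the O(m*n) claim is easy to see for yourself. A small Python sketch that roughly imitates what the path-based finder does: for each import it probes sys.path entries in order, so a module that lives late in the path, or does not exist at all, costs on the order of len(sys.path) stat()s. The probe pattern here is a simplification of the real import machinery, and "some_module_installed_last" is a made-up name.))

    # Rough illustration of why a long sys.path hurts startup: each top-level
    # import probes path entries in order until it finds the module, so misses
    # cost one or more stat()s per entry.
    import os
    import sys

    def count_probes(module_name):
        """Approximate the filesystem probes a path-based import would do."""
        probes = 0
        for directory in sys.path:
            directory = directory or "."
            for candidate in (os.path.join(directory, module_name),            # package dir
                              os.path.join(directory, module_name + ".py")):   # plain module
                probes += 1
                if os.path.exists(candidate):                                  # roughly one stat()
                    return probes, candidate
        return probes, None

    print("sys.path entries:", len(sys.path))
    for mod in ("json", "email", "some_module_installed_last"):
        print(mod, count_probes(mod))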

---

https://andrewkelley.me/post/full-time-zig.html

" Here are some of the bigger items that are coming up now that I have more time:

    remove more sigils from the language
    add tuples and remove var args
    self-hosted compiler
    http server and http client based on async/await
    decentralized package manager
    generate html documentation
    hot code swapping"

---

regarding removing sigils (see previous section), e says:

" andrewrk commented 12 days ago

    Replace a ?? b binary operator with a ifnull b. After this change, all control flow occurs with keywords only.
        other suggestions instead of ifnull are welcome. ifnull is nice because it is 2 lowercase words squished together so nobody will miss the identifier name.
    Remove ??x prefix operator. Replace it with x.?. After this change and #770, this symmetry exists:
        *T is used to make a pointer type, and x.* derefs
        ?T is used to make a nullable type, and x.? de-nullifies
        Both operations have assertions that are potentially unsafe - .* asserts the pointer is valid among other things and .? asserts the value is non-null.

Anecdotally, this will look a lot better syntactically, especially with C-translated code. "

---

" a lot of people using do not have a software development background and do not care that the language is not elegantly designed: they just want to get analytical work done. In that respect, R is far, far superior to Python. Even something as simple as installing a library is a conceptual leap for these people (why wouldn't the software just come with everything needed to work?). Have you ever tried explaining the various python package and environment management options to someone with a background in Excel/SQL? Just getting a basic environment set up can be days of frustrating effort (though Anaconda is getting better with this). Compared to R, where you install RStudio and are off to the races, with a helpful package installation GUI. Another great example: in R, data types are pretty fungible, everything is a vector, coercing things generally "just works". In pandas, it can be very confusing that you need to explicitly turn a 1x1 dataframe into a scalar value. Same thing with Python vs R datetimes. "

---

bokstavkjeks 2 days ago [-]

It's also worth noting that R becomes much more pleasurable with the Tidyverse libraries. The pipe alone makes everything more readable.

I'm also coming from more of an office setting where everything is in Excel. I've used R to reorganize and tidy up Excel files a lot. Ggplot2 (part of the Tidyverse) is also fantastic for plotting, the grammar of graphics makes it really easy to make nice and slightly complex graphs. Compared to my Matplotlib experiences, it's night and day.

...

That said, if anyone's interested in learning R from a beginner's level, I can recommend the book R for Data Science. It's available freely at http://r4ds.had.co.nz/ and the author also wrote ggplot2, RStudio, and several of the other Tidyverse libraries.

riskneutral 2 days ago [-]

Tidy features (like pipes) are detrimental to performance. The best things R has going for it are data.table, ggplot, stringr, RMarkdown, RStudio, and the massive, unmatched breadth and depth of special-purpose statistics libraries. Combined, this is a formidable and highly performant toolset for data analytics workflows, and I can say with some certainty that even though “base Python” might look prettier than “base R,” the combination of Python and NumPy is not necessarily more powerful or even a more elegant syntax. The data.table syntax is quite convenient and powerful, even if it does not produce the same “warm fuzzy” feeling that pipes might. NumPy syntax is just as clunky as anything in R, if not worse, largely because NumPy was not part of the base Python design (as opposed to languages like R and MATLAB that were designed for data frames and matrices).

What is probably not a good idea (which the article unfortunately does) is to introduce people to R by talking about data.frame without mentioning data.table. Just as an example, the article mentions read.table, which is a very old R function which will be very slow on large files. The right answer is to use fread and data.table, and if you are new to R then get the hang of these early on so that you don’t waste a lot of time using older, essentially obsolete parts of the language.

reply

roenxi 2 days ago [-]

> Tidy features (like pipes) are detrimental to performance.

Detrimental to the runtime performance; if you happen to be reading and processing tabular data from a csv (which is all I've ever used R for, I must admit), then you get real performance gains as a programmer. For one thing, it allows a functional style where it is much harder to introduce bugs. If someone is trying to write performant code they should be using a language with actual data structures (and maybe one that is a bit easier to parallelize than R). The vast bulk of the work done in R is not going to be time sensitive but is going to be very vulnerable to small bugs corrupting data values.

Tidyverse, and really anything that Hadley Wickham is involved in, should be the starting point for everyone who learns R in 2018.

> languages like R and MATLAB that were designed for data frames and matrices

Personal bugbear; the vast majority of data I've used in R has been 2-dimensional, often read directly out of a relational database. It makes a lot of sense why the data structures are as they are (language designed a long time ago in a RAM-lite environment), but it is just so unpleasant to work with them. R would be vastly improved by a /single/ standard "2d data" class with some specific methods for "all the data is numeric so you can matrix multiply" and "attach metadata to a 2d structure".

There are 3 different data structures used in practice amongst the R libraries (matrix, list-of-lists, data.frame). Figuring out what a given function returns and how to access element [i,j] is just an exercise in frustration. I'm not saying a programmer can't do what I want, but I am saying that R promotes a complicated hop-step-jump approach to working with 2d data that isn't helpful to anyone - especially non-computer engineers.

reply

rpier001 2 days ago [-]

I think what you're saying is mostly on point. I wanted to share a couple possible balms for your bugbears.

For attaching metadata to anything, why not use attributes()/attr() or the tidy equivalents? Isn't that what they're for?

It might not make you feel much better, but data.frame is just a special list, c.f. is.list(data.frame()). So, if you don't want to use the convenience layers for data.frame you can just pretend it is a list and reduce the ways of accessing data structures by one.

You can paper over the distinction between data.frames and matrices if it comes up for you often enough. E.g.

    `%matrix_mult%` <- function(x, y) {
      if ("data.frame" %in% class(x)) {
        x <- as.matrix(x)
        stopifnot(all(is.numeric(x)))
      }
      if ("data.frame" %in% class(y)) {
        y <- as.matrix(y)
        stopifnot(all(is.numeric(y)))
      }
      stopifnot(dim(x)[2] == dim(y)[1])
      x %*% y
    }

    d1 %matrix_mult% d2

... but I'll grant that isn't the language default.

reply

roenxi 1 day ago [-]

...

For my own work I just use tidyverse for everything. It solves all my complaints, mainly by replacing apply() with mutate(), data.frame with tibble, and getting access to the relational join commands from dplyr. I'm cool with the fact that my complaints are ultimately petty.

> For attaching metadata to anything, why not use attributes()/attr() or the tidy equivalents? Isn't that what they're for?

I've never met attr before, and so am unaware of any library that uses attr to expose data to me. The usual standard as far as I can tell is to return a list.

> It might not make you feel much better, but data.frame is just a special list, c.f. is.list(data.frame()). So, if you don't want to use the convenience layers for data.frame you can just pretend it is a list and reduce the ways of accessing data structures by one.

Well, I could. But data frames have the relational model embedded into them, so all the libraries that deal with relational data use data frames or some derivative. I need that model too, most of my data is relational.

The issue is that sometimes base R decides that since the data might not be relational any more it needs to change the data structure. Famously happens in apply() returning a pure list, or dat[x, y] sometimes being a data frame or sometimes a vector depending on the value of y. It has been a while since I've run into any of this, because as mentioned most of it was fixed up in the Tidyverse verbs and tibble (with things like its list-column thing).

> `%matrix_mult%` <- function(x,y) { if("data.frame" %in% class(x)) { x <- as.matrix(x) stopifnot(all(is.numeric(x))) } if("data.frame" %in% class(y)) { y <- as.matrix(y) stopifnot(all(is.numeric(y))) } stopifnot(dim(x)[2] == dim(y)[1]) x %*% y }

I have got absolutely no idea what that does in all possible edge cases, and to be honest the problem it is solving isn't one I confront often enough to look into it.

It just bugs me that I have to use as.matrix() to tell R that my 2d data is all made up of integers, when it already knows it is 2d data (because it is a data frame) and that it is made up of integers (because data frame is a list of vectors, which can be checked to be integer vectors). I don't instinctively see why it can't be something handled in the background of the data.frame code, which already has a concept of row and column number. Having a purpose-built data type only makes sense to me in the context that at one point they used it to gain memory efficiencies.

I mean, on the surface

    data %>% select(-date) %>% foreign_function()

and

    data %>% select(-date) %>% as.matrix %>% foreign_function()

look really similar, but changing data types half way through is actually adding a lot of cognitive load to that one-liner, because now I have to start thinking about converting data structures in the middle of what was previously high-level data manipulation. And you get situations that really are just weird and frustrating to work through, eg, [1].

[1] https://emilkirkegaard.dk/en/?p=5412

reply

rpier001 1 day ago [-]

scale() for example uses attributes to hold on to the parameters used for scaling. Most packages that use attributes provide accessor functions so that the useR doesn't need to concern themselves with how the metadata are stored. I'll grant that people do tend to use lists because the access semantics are easier.

reply

com2kid 2 days ago [-]

> Tidy features (like pipes) are detrimental to performance.

But they are some absolutely amazing features to use. After helping my wife learn R, and learning about all the dypler features, going back to other languages sucked. C#'s LINQ is about as close as I can get to dypler-like features in a mainstream language.

Of course R's data tables and data frames are what enable dypler to do its magic, but wow what magic it is.

reply

wodenokoto 1 day ago [-]

I think your autocorrect mangled up your `dplyer`s :)

reply

curiousgal 2 days ago [-]

From your experience what makes data.table so useful?

reply

vijucat 2 days ago [-]

Answering questions in a rapid, interactive way (while using C to be efficient enough that one can run it on millions of rows):

  1. Given a dataset that looks like this…
  > head(dt, 3)
      mpg cyl disp  hp drat    wt  qsec vs am gear carb          name
   1: 21.0   6  160 110 3.90 2.620 16.46  0  1    4    4     Mazda RX4
   2: 21.0   6  160 110 3.90 2.875 17.02  0  1    4    4 Mazda RX4 Wag
   3: 22.8   4  108  93 3.85 2.320 18.61  1  1    4    1    Datsun 710
  2. What's the mean hp and wt by number of carburettors?
  > dt[, list(mean(hp), mean(wt)), by=carb]
     carb    V1     V2
  1:    4 187.0 3.8974
  2:    1  86.0 2.4900
  3:    2 117.2 2.8628
  4:    3 180.0 3.8600
  5:    6 175.0 2.7700
  6:    8 335.0 3.5700
  3. How many Mercs are there and what's their median hp?
  > dt[grepl('Merc', name), list(.N, median(hp))]
     N  V2
  1: 7 123
  4. Non-Mercs?
  > dt[!grepl('Merc', name), list(.N, median(hp))]
      N  V2
  1: 25 113
  5. N observations and avg hp and wt per {num. cylinders and num. carburettors}
  > dcast(dt, cyl + carb ~ ., value.var=c("hp", "wt"), fun.aggregate=list(mean, length))
     cyl carb hp_mean  wt_mean hp_length wt_length
  1:   4    1    77.4 2.151000         5         5
  2:   4    2    87.0 2.398000         6         6
  3:   6    1   107.5 3.337500         2         2
  4:   6    4   116.5 3.093750         4         4
  5:   6    6   175.0 2.770000         1         1
  6:   8    2   162.5 3.560000         4         4
  7:   8    3   180.0 3.860000         3         3
  8:   8    4   234.0 4.433167         6         6
  9:   8    8   335.0 3.570000         1         1

I used slightly verbose syntax so that it is (hopefully) clear even to non-R users.

You can see that the interactivity is great at helping you compose answers step-by-step, molding the data as you go, especially when you combine with tools like plot.ly to also visualize results.

int_19h 1 day ago [-]

What a lot of people don't get is that this kind of code is what R is optimized for, not general purpose programming (even though it can totally do it). While I don't use R myself, I did work on R tooling, and saw plenty of real world scripts - and most of them looked like what you posted, just with a lot more lines, and (if you're lucky) comments - but very little structure.

I still think R has an atrocious design as a programming language (although it also has its beautiful side - like when you discover that literally everything in the language is a function call, even all the control structures and function definitions!). It can be optimized for this sort of thing, while still having a more regular syntax and fewer gotchas. The problem is that in its niche, it's already "good enough", and it is entrenched through libraries and existing code - so any contender can't just be better, it has to be much better.

reply

extr 2 days ago [-]

Completely agree. dplyr is nice enough but the verbose style gets old fast when you're trying to use it in an interactive fashion. imo data.table is the fastest way to explore data across any language, period.

reply

riskneutral 1 day ago [-]

I strongly agree, having worked quite a bit in several languages including Python/NumPy/Pandas, MATLAB, C, C++, C#, even Perl ... I am not sure about Julia, but last time I looked at it, the language designers seemed to be coming from a MATLAB-type domain (number crunching) as opposed to an R-type domain (data crunching), and so Julia seemed to have a solid matrix/vector type system and syntax, but was missing a data.table-style type system / syntax.

reply

ChrisRackauckas 1 day ago [-]

Julia v0.7-alpha dropped and it has a new system for missing data handling. JuliaDB and DataFrames are two tabular data stores (the first of which is parallel and allows out-of-core for big data). This has changed pretty dramatically over the last year.

reply

maxander 2 days ago [-]

No, you are wrong. R is terrible, and especially so for non-professional programmers, and it is an absolute disaster for the applications where it routinely gets used, namely statistics for scientific applications. The reason is its strong tendency to fail silently (and, with RStudio, to frequently keep going even when it does fail.) As a result, people get garbage results without realizing, and if they're unlucky, these results are similar enough to real results that they get put somewhere important. Source: I'm a CS grad working with biologists; I've corrected errors in the R code of PhD'd statisticians, in "serious" contexts.

rpier001 2 days ago [-]

many of the 'silent' failures are easily configured away (some examples, https://github.com/hadley/strict).

zrobotics 2 days ago [-]

While I kind of want to agree with you, I just don't see a better alternative. Do you really want biochemists to have to deal with the horrors of C compilation? In production code I'm very glad my makefile tells clang to fail on absolutely anything, but is that the best we can do? Other commenters have pointed out ways to avoid dangerous things like integer division, but if you think R is hostile then please offer a tenable alternative. The only ones I can think of are python and Matlab, and both are even worse for the intended use.

Yes, R is not my preferred language for anything heavy-duty, but I would guess ~95% of R usage is on datasets small enough to open in excel, and that is where the language truly shines (aside from being fairly friendly to non-programmers).

So yes, there are some problems with R, but what are your proposed improvements? Because if I have to analyze a .csv quickly, I'm going for R most of the time.

reply

teekert 1 day ago [-]

Python 3: Pandas and Seaborn?

I have very quick flows for data processing: load data, make long form, add metadata as categories, plot many things with Seaborn one-liners. I use JupyterLab and treat it like a full lab notebook, including headers, introduction, conclusion, discussion. Works very well for me.

reply

yread 1 day ago [-]

> install RStudio and are off to the races, with a helpful package installation GUI.

Unless the package needs a native component like libcurl of a particular version, in which case it can turn into a couple of hours of blindly trying everything you can think of.

> Another great example: in R, data types are pretty fungible, everything is a vector,

Unless it's a dataframe or factor or string or s3, s4 or s5 or a couple of other things.

And the documentation will tell you the reference paper that you can read and some completely impractical example.

Ugh, feels better now, sorry for the rant.

reply

---

YeGoblynQueenne 1 day ago [-]

Well, the "=" operator is easy to type since it's on pretty much every keyboard and probably has been forever. And it's faster than typing a two-symbol operator like := or <- so.

If you want "=" to (more or less) mean what it used to mean "long before computer programming", try Prolog.

  ?- a = a.
  true.
  ?- a = b.
  false.
  ?- A = a.
  A = a.
  ?- A = a, A = b.
  false.
  ?- A = a, A = B.
  A = B, B = a.
  ?- A == B.
  false.
  ?- A = B, A == B.
  A = B.

Nice? Most programmers would pull their hair from the root at all this :)

(hint: "=" is not assignment, neither is it equality and "==" is just a stricter version thereof).

reply

---

" The Linux people ran into a similar problem in 2016. Consider the following code:

    extern int _start[];
    extern int _end[];

    void foo(void) {
        for (int *i = _start; i != _end; ++i) {
            /* ... */
        }
    }

The symbols _start and _end are used to span a memory region. Since the symbols are externalized, the compiler does not know where the arrays are actually allocated in memory. Therefore, the compiler must be conservative at this point and assume that they may be allocated next to each other in the address space. Unfortunately GCC compiled the loop condition into the constant true rendering the loop into an endless loop as described in this LKML post where they make use of a similar code snippet. It looks like that GCC changed its behavior according to this problem. At least I couldn’t reconstruct the behavior with GCC version 7.3.1 on x86_64 Linux. "

---

" in Ada where you can't just take the address of anything you want, you need to declare it as aliased to begin with). In fact Ada tries hard to make pointers redundant which is a great thing. This and range types and bound check arrays( Ada has a lot lot of great ideas, and "C style" Ada is a treat to use).

So a C replacement should keep the procedural nature of C, with closures perhaps, but not go further in terms of paradigm. "

tzs 2 days ago [-]

Note that if the two pointers are passed to a function, and the comparison is done in the function, the results are different:

  #include <stdio.h>

  void pcmp(int *p, int *q)
  {
      printf("%p %p %d\n", (void *)p, (void *)q, p == q);
  }

  int main(void) {
      int a, b;
      int *p = &a;
      int *q = &b + 1;
      printf("%p %p %d\n", (void *)p, (void *)q, p == q);
      pcmp(p, q);
      return 0;
  }

That is giving me:

  0x7ffebac1483c 0x7ffebac1483c 0
  0x7ffebac1483c 0x7ffebac1483c 1

That's compiled with '-std=c11 -O1' as in the article. The result is the same if pcmp is moved into a separate file, so that when compiling it the compiler has no knowledge of the origins of the two pointers.

I don't like this at all. It bugs me that I can get different results comparing two pointers depending on where I happen to do the comparison.

reply

JoeAltmaier 2 days ago [-]

Yes, I agree. It makes far more sense for a language to guarantee that two values are compared using a metric that is invariant. Especially in a language like C, where we expect it to be a very thin abstraction over the machine.

reply

---

on Python:

sametmax 1 day ago [-]

It's going to be very interesting to see if things that were BDFL-blocked will go back to being debated on the mailing list in the coming months.

And if yes, will the community stand by its roots or create a new era?

The consequences of which we will only really see in 10 years.

Guido has done an incredible job at being the boogie man, keeping the language simple and readable. It's a hard job.

Can we pull it off ?

reply

stormbeta 1 day ago [-]

The big one I want to see, because it's one of my biggest frustrations with Python, is to finally make lambdas work like every other contemporary language instead of being inexplicably limited to a single expression simply because Guido couldn't come up with a syntax that agreed with him.

There's so many cases (arguably including the problem this PEP was designed to solve!) where having a real inline closure is just far more readable than having to arbitrarily break every single thing that happens to need 2+ expressions out into named blocks out of sequence.

Other things in Python are either simply a result of the language's age and history, or have real technical pros and cons, but that one irks me because it's an artificial limitation chosen for no reason except aesthetics.

reply

---

https://yarchive.net/comp/linux/everything_is_file.html

first few comments are a good read: https://news.ycombinator.com/item?id=17531875

---

" Dependencies (coupling) is an important concern to address, but it's only 1 of 4 criteria that I consider and it's not the most important one. I try to optimize my code around reducing state, coupling, complexity and code, in that order. I'm willing to add increased coupling if it makes my code more stateless. I'm willing to make it more complex if it reduces coupling. And I'm willing to duplicate code if it makes the code less complex. Only if it doesn't increase state, coupling or complexity do I dedup code. "

---

on Python:

riazrizvi 1 day ago [-]

The thing I love most about the language is its conciseness, on several levels.

It's syntactically concise because of its use of whitespace and indentation instead of curly braces.

Expressions are often concise because of things like list slicing and list expressions.

It can be linguistically concise, because it is so easy to fiddle with the data model of your classes, to customize their behavior at a deeper level. For example, it is so quick to design a suite of classes that have algebraic operators, which leads to elegant mathematical expressions of your custom GUI objects for example. But you can also change the way classes are instantiated by overriding Class.__new__ which I've used to make powerful libraries that are a joy to use.
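((my note: a tiny sketch of the two data-model hooks mentioned above, made up for illustration: __add__/__mul__ for algebraic operators on a custom class, and __new__ overridden to intern/cache instances.))

    class Vec:
        """Algebraic operators via __add__/__mul__, so expressions read like math."""
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __add__(self, other):
            return Vec(self.x + other.x, self.y + other.y)
        def __mul__(self, k):                   # scalar multiplication
            return Vec(self.x * k, self.y * k)
        def __repr__(self):
            return f"Vec({self.x}, {self.y})"

    class Color:
        """__new__ overridden so that the same argument yields the same object."""
        _cache = {}
        def __new__(cls, name):
            if name not in cls._cache:
                obj = super().__new__(cls)
                obj.name = name
                cls._cache[name] = obj
            return cls._cache[name]

    print(Vec(1, 2) + Vec(3, 4) * 2)        # Vec(7, 10)
    print(Color("red") is Color("red"))     # True -- one shared, cached instance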

Further elegance can be added through creative use of the flexible argument passing machinery, and the lovely way that functions are also objects.

Your application architecture tends to be concise because there are so many existing libraries, you can often design your application at a more interesting higher level. For example, you don't have to worry about the size of int's.

My big regret with Python 3.0 is that the scoping rules in list expressions changed to prohibit modification of variables in surrounding scope. This degraded the utility of list expressions. The work around is so long-winded.

Besides the odd gripe here and there, the language really gives me a warm glow.

reply

---

small program:

https://www.quaxio.com/bootable_cd_retro_game_tweet/

---

doomjunky 1 hour ago [-]

Functional programming!

Functional programming languages have several classic features that are now gradually being adopted by non-FP languages.

Lambda expressions [1] are one such feature, originating from FP languages such as Standard ML (1984) or Haskell (1990), that is now implemented in C# 3.0 (2007), C++11 (2011), Java 8 (2014) and even JavaScript (ECMAScript 6, 2015).

Pattern matching [2] is another feature that is now implemented in C# 7.0. My bet is that Java and others will follow in the next versions.

Here is a list of FP features, some of which are already adopted by non-FP languages: Lambda expressions, Higher order functions, Pattern matching, Currying, List comprehension, Lazy evaluation, Type classes, Monads, No side effects, Tail recursion, Generalized algebraic datatypes, Type polymorphism, Higher kinded types, First-class citizens, Immutable variables.

[1] https://en.wikipedia.org/wiki/Lambda_calculus

[2] https://en.wikipedia.org/wiki/Pattern_matching

reply

---

xroche 17 days ago [-]

This is IMHO by far NOT C's biggest mistake. Not even close. A typical compiler will even warn you when you do something stupid with arrays in function definitions (-Wsizeof-array-argument is the default nowadays).

On the other hand, UBE (undefined or unspecified behavior) is probably the nastiest stuff that can bite you in C.

I have been programming in C for a very, very long time, and I am still getting hit by UBE from time to time, because, eh, you tend to forget "this case".

Last time, it took me a while to realize the bug in the following code snippet from a colleague (not the actual code, but the idea is there):

    struct ip_weight {
        in_addr_t ip;
        uint64_t weight;
    };

    const struct ip_weight ipw1 = {0x7F000001, 1};
    const struct ip_weight ipw2 = {0x7F000001, 1};

    const uint32_t hash1 = hash_function(&ipw1, sizeof(ipw1));
    const uint32_t hash2 = hash_function(&ipw2, sizeof(ipw2));

The bug: hash1 and hash2 are not the same. For those who are fluent in C UBE, this is obvious, and you'll probably smile. But even for veterans, you tend to miss that after a long day of work.

This, my friends, is part of the real mistakes in C: leaving too many UBE. The result is coding in a minefield.

[You probably found the bug, right? If not: the obvious issue is that 'struct ip_weight' needs padding for the second field. And while all omitted fields are by the standard initialized to 0 when you declare a structure on the stack, the padding value is undefined; and gcc typically leaves the padding filled with dirty stack content.]

jcelerier 17 days ago [-]

> [You probably found the bug, right ? If not: the obvious issue is that 'struct ip_weight' needs padding for the second field.

No, the bug is thinking that hashing random bytes in your memory is correct. Why wouldn't you make a correct hash function for your struct ?!

kjeetgill 17 days ago [-]

I really have to second this one. It's still a rough point against C, but if padding isn't treated as "semantically unreachable" idiomatically, that IS the real gotcha.

The idiom here is to take the address of the struct and read the width of its whole footprint in memory, not just the field data. It's a weak idiom breaking under a common use case.

Too 16 days ago [-]

Maybe this is the biggest mistake of C, allowing users to access underlying raw memory so easily and misleading people with "convenient" functions like memcmp etc ?

Most "UB bugs" stem from users who think they know that a struct or data type will be laid out in a certain sequence in memory.

tonysdg 16 days ago [-]

Unless I'm mistaken, the low-level access to memory is one of the defining features of C. It's basically designed to be human-readable assembly (which is just human-readable machine code).

If anything, I'd blame compilers here -- IMO, they should automatically throw at least a warning any time they need to pad/rearrange a struct to make it explicitly clear to developers what's happening.

Too 16 days ago [-]

Access to low level memory such as registers is always explicitly requested with the volatile keyword. All other memory is implementation details. C is far from being a human readable assembly and has never been, except accidentally.

---

regarding https://www.digitalmars.com/articles/b44.html which suggests "C can still be fixed. All it needs is a little new syntax:

void foo(char a[..])

meaning an array is passed as a so-called “fat pointer”, i.e. a pair consisting of a pointer to the start of the array, and a size_t of the array dimension. "

WalterBright 17 days ago [-]

I happened to know the idea does work, and has been working in D for 18 years now.

> If nul-termination of strings is gone, does that mean that the fat pointers need to be three words long, so they have a "capacity" as well as a "current length"?

No. You'll still have the same issues with how memory is allocated and resized. But, once the memory is allocated, you have a safe and reliable way to access the memory without buffer overflows.

> If not, how do you manage to get a string variable on the stack if its length might change? Or in a struct? How does concatenation work such that you can avoid horrible performance (think Java's String vs. StringBuffer)?

As I mentioned, it does not address allocating memory. However, it does offer one performance advantage in not having to call strlen to determine the size of the data.

> On the other hand, if the fat pointers have a length and capacity, how do I get a fat pointer to a substring that's in the middle of a given string?

In D, we call those slices. They look like this:

    T[] array = ...
    T[] slice = array[lower .. upper];

The compiler can insert checks that the slice[] lies within the bounds of array[].

> Am I able to take the address of an element of an array?

Yes: `T* p = &array[3];`

> Will that be a fat pointer too?

No, it'll be regular pointer. To get a fat pointer, i.e. a slice:

    slice = array[lower .. upper];

> How about a pointer to a sequence of elements?

Not sure what you mean. You can get a pointer or a slice of a dynamic array.

> Can I do arithmetic on these pointers?

Yes, via the slice method outlined above.

> If not, am I forced to pass around fat array pointers as well as index values when I want to call functions to operate on pieces of the array?

No, just the slice.

> How would you write Quicksort? Heapsort?

Show me your pointer version and I'll show you an array version.

> And this doesn't even start to address questions like "how can I write an arena-allocation scheme when I need one"?

The arena will likely be an array, right? Then return slices of it.

---

m_mueller 17 days ago [-]

I’ll add to this that C having committed to this mistake ((context: the mistake is treating arrays as pointers rather than a separate type which is a 'fat pointer' with both the pointer and the array length)) is one of the main reasons some people (scientific programmers) are still using Fortran. Arrays with dimensions, especially multidimensional ones, allow for a lot of syntactic sugar that is very useful, such as slicing.

Athas 17 days ago [-]

Hell, you don't even have to go to slicing for language-supported multidimensional arrays to make sense. Simply being able to index with a[i][j] is so much nicer than the manual flat addressing a[i*n+j] that you end up with in C. (a[i][j] does work in C, but only if the array dimensions are constants.)

---

" Julia was first publicly announced with a number of strong demands on the language:

    We want a language that’s open source, with a liberal license. We want the speed of C with the dynamism of Ruby. We want a language that’s homoiconic, with true macros like Lisp, but with obvious, familiar mathematical notation like Matlab. We want something as usable for general programming as Python, as easy for statistics as R, as natural for string processing as Perl, as powerful for linear algebra as Matlab, as good at gluing programs together as the shell. Something that is dirt simple to learn, yet keeps the most serious hackers happy. We want it interactive and we want it compiled.

...

All told, we have built a language that is:

    Fast: Julia was designed from the beginning for high performance. Julia programs compile to efficient native code for multiple platforms via LLVM.
    General: It uses multiple dispatch as a paradigm, making it easy to express many object-oriented and functional programming patterns. The standard library provides asynchronous I/O, process control, logging, profiling, a package manager, and more.
    Dynamic: Julia is dynamically-typed, feels like a scripting language, and has good support for interactive use.
    Technical: It excels at numerical computing with a syntax that is great for math, many supported numeric data types, and parallelism out of the box. Julia’s multiple dispatch is a natural fit for defining number and array-like data types.
    Optionally typed: Julia has a rich language of descriptive data types, and type declarations can be used to clarify and solidify programs.
    Composable: Julia’s packages naturally work well together. Matrices of unit quantities, or data table columns of currencies and colors, just work — and with good performance."

---

" For me, the tty/pty, shells, screen/tmux/..., ssh, and so on, are the things that make Unix so powerful. The fact is that Win32 is far superior in a number of areas (SIDs >> UIDs/GIDs, security descriptors >> {owner, group, mode, [ACL]}, access tokens >> struct cred), but far inferior in the things that really matter to a power user trying to get things done.

...

 zvrba 12 hours ago [-]

> Why would you want to do that? Use RPC, shared memory, or some other IPC mechanism.

Yes, structured data exchange is the correct answer. When I have the opportunity to code something from scratch, this is the route I take. "

 pjc50 8 hours ago [-]

> I expected the simple encryption mechanism, over which whatever communicates

In the UNIX world, that's what it gives you - a stream of bytes. Hence things like rsync-over-ssh or git-over-ssh. It also has a port forwarding mode which has special support for X11, which gives you remote windowing over a stream of bytes too.

The main, huge, benefit is that the abstraction is pretty simple, it's discoverable, and you can use the same interface as a human. You can also plug any stream-of-bytes into any other stream-of-bytes, whereas API or RPC based systems have to be designed to interoperate.

reply

acqq 1 hour ago [-]

As I’ve tried to implement my minimal ssh client (just to connect, execute some command and get the result), I’ve had exactly the opposite impression of the “just a stream of bytes” that you mention -- exactly the lack of abstraction. Can you point to any source that does ssh without having to care about a lot of weird terminal and console ancient stuff? I’d be really glad to see it! To me it looked as if “everything and the kitchen sink” (that is, exactly the kind of things mentioned in the OP or the comments, like terminal signals and whatnot) has to be there.

SSL is straightforward compared to that, at least once the keys are set. But ssh... as seen in the OP, even the console or the terminal or however that part is called has to be very special, and they are obviously proud they implemented that too. In 2018. Probably decades after the last single hardware terminal was sold.

reply

zadjii 1 hour ago [-]

So this is a confusing situation on Windows.

Commandline applications on linux rely on a TERM setting (with termcaps) to be able to know what VT sequences the terminal is able to support. On Windows, we only really have one terminal, conhost.exe, and our goal there is to be compatible with TERM=`xterm-256color`. That's why you'll see that WSL has that set as the default term setting.

Now even with ConPTY, when a client writes VT sequences, they still need to be interpreted by conhost. This is because technically, a console application could use both VT and the console API, and we need to make sure the buffer is consistent. So clients should still assume that they should write out `xterm-256color` compatible sequences.

Now on the other side of things, the "master"/terminal side of conpty, we're going to "render" the buffer changes to VT. Fortunately, we don't really need a deep VT vocabulary to make this possible, so the VT that's coming out of a conpty is actually pretty straightforward, probably even vt100 level (or I guess vt100-256colors, as insane a termcap as that would be).

It's definitely a future feature that we'd like to add to make conpty support multiple different TERM settings, and change the sequences we emit based on what the terminal on the other side is going to expect.

We haven't really gotten into the nitty gritty of all of this quite yet, so if you find bugs or have feature requests, we're happy to take a look at them. You can file issues on [our github](https://github.com/microsoft/console) and we'll add them to our backlog

reply

---

" In the *NIX world, this problem was solved by the introduction of the Pseudo Terminal (PTY).

The PTY virtualizes a computer's serial communications hardware, exposing "master" and "slave" pseudo-devices: Terminal apps connect to a master pseudo-device; Command-Line applications (e.g. shells like Cmd, PowerShell, and bash) connect to a slave pseudo-device. When the Terminal client sends text and/or control commands (encoded as text) to the master, the text is relayed along to the associated "slave". Text emitted by the application is sent to the slave and is then routed back to the master and thus to the Terminal. Data is always sent/received asynchronously. (Diagram: Terminal <-> PTY <-> App/Shell.)

Importantly, the "slave" pseudo-device emulates the behavior of a physical Terminal device and converts command characters into POSIX signals. For example, if a user types CTRL+C into the Terminal, the ASCII value of CTRL+C (0x03) is sent via the master. When received by the slave, the 0x03 value is removed from the input stream and a SIGINT signal is generated.

This PTY infrastructure is used extensively by *NIX Terminal applications, text pane managers (like screen, tmux), etc. Such apps call openpty() which returns a pair of file descriptors (fd) for the PTY's master and slave. The app can then fork/exec the child Command-Line application (e.g. bash), which uses its slave fds to listen and return text to the attached Terminal.

This mechanism allows Terminal applications to "talk" directly to Command-Line applications running locally in the same way as the Terminal would talk with a remote Computer via a serial/network connection. " [2]
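((my note: the openpty()/fork/exec dance described above, sketched with Python's pty module on a Unix box; pty.fork() bundles the openpty + fork + "make the slave my controlling tty" steps, and the parent plays the role of the terminal on the master end.))

    # Run a command on the slave end of a PTY and read its output from the
    # master end -- the same plumbing a terminal emulator or screen/tmux uses.
    import os
    import pty

    pid, master_fd = pty.fork()        # openpty() + fork(); the child gets the slave as its tty
    if pid == 0:
        # Child: stdin/stdout/stderr are the slave device, so it thinks it's on a terminal.
        os.execvp("sh", ["sh", "-c", "tty; echo hello from the slave side"])
    else:
        # Parent: acts as the terminal, reading from the master end.
        output = b""
        try:
            while True:
                chunk = os.read(master_fd, 1024)
                if not chunk:
                    break
                output += chunk
        except OSError:                # some platforms raise EIO once the slave side closes
            pass
        os.waitpid(pid, 0)
        print(output.decode(errors="replace"))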

---

k__ 9 hours ago [-]

Can anyone explain why OCaml seems easier for people from non-FP backgrounds than Haskell?

I tried Haskell and PureScript and didn't understand a thing.

I tried OCaml and Reason and it felt rather easy to learn.

reply

andrepd 9 hours ago [-]

There are two major differences between OCaml and Haskell.

First: Haskell is a pure functional language, meaning that side effects, IO, keeping state, all that must be dealt with inside the type system, which while undoubtedly clean and powerful, often makes it cumbersome to do things that should be quick and simple. OCaml, on the other hand, is pragmatic. It promotes and facilitates functional programming but it also lets you easily "drop down" (so to speak) to an imperative way to write code. You have mutable variables, imperative loops, you can perform side effects anywhere, etc. As a rule of thumb, you can write fully functional code, yet sometimes when the best way to write a certain thing is imperatively, you can do so with no hassle.

Second: Haskell is lazy, and OCaml is strict. This makes it much easier to reason about performance and makes for much more predictable code. At the same time, it has first-class support for laziness when you do need it, but you have to explicitly "opt in", so to speak. Haskell also lets you force strict evaluation, but you will find it's much easier to build laziness in a strict language than the other way around.

There are also other important differences (like typeclasses vs functors) but I feel these two are the biggest ones.

reply

TuringTest 8 hours ago [-]

In Haskell, you have to opt-in to side-effects (monadic do-blocks).

In OCaml, you have to opt-in to lazy evaluation.

reply

the_duke 5 hours ago [-]

Do blocks don't allow side effects, they are just syntactic sugar that makes composing monadic operations easier.

Haskell does have escape hatches with unsafePerformIO and IORef/STRef

reply

cannabis_sam 7 hours ago [-]

In (GHC) haskell you also have opt-in strictness (per module or per variable)

reply

risto1 5 hours ago [-]

Haskell is a much more complicated language than OCaml; that's why OCaml is easier to learn and use.

reply

akavel 7 hours ago [-]

For me personally, the proliferation of custom operators/syntax is a serious obstacle. When reading, I seem to need to "hear/speak" what I'm reading in my mind, and seeing a line of operators that I have no idea how to read/spell "aloud" (and thus remember, even if I manage to find their definition, which is non-trivial too), cripples my mind and renders me unable to comprehend what's going on in the code.

reply

hellofunk 4 hours ago [-]

This was my #1 turnoff for Haskell as well. All these operators and functions, each with possibly varying precedence as well as different fixities. I was constantly having to look up how the function was actually being called -- which of its surrounding entities were a part of the function call, and in which order did the calls take place? In addition to this complexity, the functions are often cryptic little characters (operators) that just make the readability worse. Programming languages should make our lives as developers easier, right?

reply

shados 2 hours ago [-]

Ding ding, we have a winner.

A lot of languages support custom operators, but some really abuse the hell out of them. Googling for <%-/\-%> is very difficult.

reply

tylerhou 1 hour ago [-]

But you can always Hoogle for them: https://www.haskell.org/hoogle/?hoogle=%3C%24%3E

reply

---

 matharmin 2 hours ago [-]

This is why you (looking at frameworks) should never use a format that may contain code to store data, especially when the client has control over that data (even if signed). The same vulnerability has occurred in almost every language/framework that does this, including Rails and Java-based ones. Just use something like JSON, which completely avoids code execution vulnerabilities like this. Except of course for the early JavaScript JSON parsers that just used eval for parsing...

reply
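((my note: the difference in one screenful, as a Python sketch: pickle will happily call whatever the payload names via __reduce__, while json only ever builds dicts/lists/strings/numbers. The payload is made up and deliberately harmless.))

    # Why "data formats that may contain code" are dangerous to deserialize.
    import json
    import pickle

    class Payload:
        def __reduce__(self):
            # Tells pickle: "to rebuild me, call this function with these args".
            import os
            return (os.system, ("echo pwned -- arbitrary command ran on load",))

    attacker_bytes = pickle.dumps(Payload())

    # The victim only has to *load* the data for the command to run:
    pickle.loads(attacker_bytes)

    # JSON has no such hook; the worst a malicious document can do is fail to parse.
    print(json.loads('{"ip": "127.0.0.1", "weight": 1}'))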

rmetzler 57 minutes ago [-]

And "something like JSON" could mean YAML, which had its own share of RCE bugs. It's still better than these object deserialization bugs one can find in Java or Python Pickle, which seems to be even more permissive.

reply

---

nextos 9 hours ago [-]

GNU Parallel is the first thing I usually install on top of a standard Unix userland. It's almost a superset of xargs, with many interesting features. It depends on perl, though.

htop is also pretty much a great replacement for top. And ripgrep is a great replacement for the `find | xargs grep pattern` combination.

Aside from that, I'm pretty content with the Unix userland. It's remarkable how well tools have aged, thanks to being composable: doing one thing and communicating via plain text.

I'm less happy with the modern CLI-ncurses userland. Tools like e.g. mutt are little silos and don't compose that well. I've migrated to emacs, where the userland is much more composable.

reply

AnIdiotOnTheNet 9 hours ago [-]

I don't feel the plain text portion has aged well at all in regards to composability. It leads to a lot of headaches as the complexity of the task grows, because of the in-band signaling and the lack of a universal format. I think it is high time the standard unix tools were replaced with modern equivalents that had a consistent naming scheme and pipelined typed object data instead of plain text.

reply
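((my note: there is already a poor man's version of the "typed pipeline" idea that needs no new plumbing: one JSON object per line over an ordinary pipe, in the spirit of jq or PowerShell. A Python sketch of such a filter; the "status" field is made up.))

    #!/usr/bin/env python3
    # Read one JSON object per line on stdin, keep the matching records, and
    # write them back out as JSON lines -- structured data over a plain byte
    # pipe, with no ad-hoc column parsing.
    import json
    import sys

    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        if record.get("status") == "error":      # the "where" clause
            json.dump(record, sys.stdout)
            sys.stdout.write("\n")

(used as something like: cat events.jsonl | ./only_errors.py | ...)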

jolmg 5 hours ago [-]

Plain text was chosen to interface the various programs of Unix OSes because it's the least common denominator all languages share. It also forces all tools to be composable with each other. You can take output text that was obviously not formatted for easy consumption by another program and still use all the information it outputs as input into another program. Programs that were only thought to have users handling their input and output (ncurses apps) can also be forced to be used by programs through things like the Tcl expect program or Ruby's expect library.

If programs used typed data, they'd still need the option to output text to present results in a format the user can understand. To do this, a negotiation protocol could be established, like skissane said. This, in my opinion, is BAD, because then there's the possibility or probability that there will be differences in the information conveyed in the different formats.

I believe that the use of plain text as the universal format of communication between programs is one of the greatest design decisions for Unix and CLI.

reply

skissane 8 hours ago [-]

If Unix pipes gained support for exchanging some kind of out-of-band signalling messages, then CLI apps could tag their output as being in a particular format, or even the two ends of a pipe could negotiate about what format to use. If sending/receiving out-of-band messages was by some new API, then it could be done in a backwards compatible way. (e.g. if other end starts reading/writing/selecting/polling/etc without trying to send/receive a control message first, then the send/receive control message API returns some error code "other end doesn't support control messages")

I have suggested this before: https://news.ycombinator.com/item?id=14675847

(But I don't really care enough about the idea to try to implement it... it would need kernel changes plus enhancements to the user space tools to use it... but, hypothetically, if PTYs got this support as well as pipes, your CLI tool could mark its output as 'text/html', and then your terminal could embed a web browser right in the middle of your terminal window to display it.)

reply

vram22 7 hours ago [-]

>your CLI tool could mark its output as 'text/html', and then your terminal could embed a web browser right in the middle of your terminal window to display it.

Ha ha, I had dreamed up something like this in one of my wilder imaginings, a while ago: A command-line shell at which you can type pipelines, involving some regular CLI commands, but also GUI commands as components, and when the pipeline is run, those GUIs will pop up in the middle of the pipeline, allow you to interact with them, and then any data output from them will go to the next component in the pipeline :) Don't actually know if the idea makes sense or would be useful.

reply

---

barbegal 8 hours ago [-]

I feel like these tools very much go against the Unix philosophy of "Write programs that do one thing and do it well". They try to do the pretty user interface and the underlying operation in a single tool.

I prefer PowerShell in this respect where the output of each command is not text streams (as in the Unix world) but objects which can be operated on in a more object-oriented way. You spend less time thinking about text parsing and more time thinking about the data you're working with.

I think the command line is great as a way of manipulating data streams but it is incredibly lacking as a user interface. There is very little consistency of the interface between commands and new commands and options aren't easily discoverable.

reply

---

" I switched to TypeScript? after two years of being both heavily invested in Flow and advocating it.

The reason I initially chose Flow was the fact that their goals were more ambitious (trying to build a sound type system for example). And there were features that Flow had and TypeScript? didn't (tagged unions for example).

The reason I ultimately switched to TypeScript? was that after a couple of years, it had simply caught up and surpassed in the one area it was behind Flow (i.e. expressiveness of the type system and type-level programming), and that it had widened the lead in the areas that it was always better at, like much better tooling, bigger community, core team being more engaged with the community, releasing RCs to smooth out the rough edges, etc. "

---

"My favorite languages are, for this reason, multi-paradigm: Common Lisp, Mozart/Oz, Scala and C++." nextos

'There's something very seductive about languages like Rust or Scala or Haskell or even C++. These languages whisper in our ears "you are brilliant and here's a blank canvas where you can design the most perfect abstraction the world has ever seen.' [3]

---

https://www.info.ucl.ac.be/~pvr/Boftalk2005.pdf

"Complete set of concepts (so far)" in Oz/Mozart:

<s> ::=

    skip                                            (Empty statement)
    <s>1 <s>2                                       (Sequential composition)
    local <x> in <s> end                            (Variable creation)
    <x>1 = <x>2                                     (Variable binding)
    <x> = <record> | <number> | <procedure>         (Value creation)
    if <x> then <s>1 else <s>2 end                  (Conditional)
    case <x> of <pattern> then <s>1 else <s>2 end   (Pattern matching)
    {<x> <x>1 ... <x>n}                             (Procedure invocation)
    thread <s> end                                  (Thread creation)
    {WaitNeeded <x>}                                (By-need synchronization)
    {NewName <x>}                                   (Name creation)
    <x>1 = !!<x>2                                   (Read-only view)
    try <s>1 catch <x> then <s>2 end                (Exception context)
    raise <x> end                                   (Raise exception)
    {NewPort <x>1 <x>2}                             (Port creation)
    {Send <x>1 <x>2}                                (Port send)
    <space>                                         (Encapsulated search)

Alternatives to the two 'port' statements are:

    {NewCell <x>1 <x>2}                             (Cell creation)
    {Exchange <x>1 <x>2 <x>3}                       (Cell exchange)

---

Go vs C#:

pdeuchler 19 hours ago [-]

This is correct, but they quickly pivoted to targeting the Python codebases. Rob Pike has a talk where he talks about how Google used C++ and C to rewrite hot Python paths and how they wanted Go to be able to completely replace that whole pattern. Russ also has a blog post (maybe? it also could have been a comment in a github issue, tbh I can't remember) where he mentions converting python programmers was orders of magnitude easier within Google, as the C++ programmers often had rose colored glasses about their own abilities and the tradeoffs of C++.

It's interesting, as I don't think Go would have been successful without the pivot, but I also don't think it would have been as successful if they had started off trying to replace Python.

reply

lostmsu 16 hours ago [-]

Well, why didn't they just use C#? It already had most, maybe even all, of Go's current features.

Hell, the main thing of Go, the go op, is basically await.

reply

---

ken 23 hours ago [-]

> I'm happy to go on record claiming that Go is the mainstream language that gets this really right. And it does so by relying on two key principles in its core design...

The unmentioned third principle that it relies on is: "Curly braces, so it looks almost like C if you squint". That's what makes a language "mainstream" these days.

It looks like Go is very good at concurrency, but from everything I've read, I don't see how it's any better than Erlang or Clojure. The only controversial part of Eli's claim is the implication that other languages that get concurrency right aren't "mainstream". That's not a well-defined term and so naturally this is going to irk many people.

Perhaps the title would have been more accurate as "Go hits the concurrency nail on the head, using the hammer of K&R style". :-)

reply

pythonaut_16 23 hours ago [-]

I don't think Go has better concurrency than Erlang, but Go does have generally better raw performance. Ultimately I think both are good languages; they just make different trade-offs.

If I was designing a command line app I'd probably choose Go, if I was designing a web service I'd probably choose Elixir/Erlang. Of course those can easily flip; there are classes of command line apps where I might choose Elixir/Erlang and there are classes of web services where I might choose Go.

I can't speak to Clojure though.

reply

virmundi 23 hours ago [-]

Go doesn’t need a VM. That’s key. You get good concurrency with an easy to deploy binary that can target the major chips.

reply

ken 22 hours ago [-]

That's a good point. Modern programming languages are all pretty big and complex and whenever someone tries to nail down "this is why it's good/popular" there always seem to be other significant factors that got missed. After all, if it were just one factor that leads to programming language popularity, we could design the Next Big Language by just following that recipe!

In the case of Go, I can imagine many of its attributes are significant:

Actually, I think I'm changing my mind. I'd put "corporate backing" higher on the list. Some languages have gotten a huge boost by being backed by a major corporation (classic example: Objective-C), and I'm having trouble thinking of a general-purpose programming language backed by a major company that did not become popular (Dart would be my best guess but even that seems to be doing alright).

reply

weberc2 19 hours ago [-]

I like Go because everything is super simple. From dependency management to unit testing to performance profiling to its build/deploy story, everything just works. I don't need to learn a new configuration language and project configuration format and complex dependency management system just to build my project. I don't need to pick a unit test framework and test runner. I don't need to figure out how to wire said framework / test runner into my build tooling. I don't need to figure out how to ship my app along with its dependencies or make sure that my deployment target has the right version of a VM installed. I don't even need to worry about learning a new IDE. I could keep going, but it's things like this that matter to me even more than the language itself.

reply

matt_m 22 hours ago [-]

Go is also statically typed while Erlang/Clojure are dynamically typed. That seems like another pretty significant difference aside from the syntax.

reply

pjmlp 22 hours ago [-]

Modula-2 used to fit all those points, except being backed by a major corporation, unless we consider GM a major corporation.

reply

_ph_ 21 hours ago [-]

I consider Go as a Modula-2 successor. It shares most traits I liked a lot with Modula-2. I prefer the C-style syntax to the more long-winded one though. The familiarity is no surprise though, considering that Robert Griesemer is a student of Wirth.

reply

tralarpa 21 hours ago [-]

Oberon (which is a successor of Modula-2)

reply