proj-plbook-plChDynamics

Table of Contents for Programming Languages: a survey

Chapter: The dynamics of programming language design and adoption

The process of designing a programming language

"Even if you’re designing for professional programmers, in the end your programming language is basically a user-interface design." -- Alan Kay, queue.acm.org/detail.cfm?id=1039523

"Choosing the best notation is the entire problem of programming language design, and this problem is neither mathematical nor scientific. A programming language is a user interface for programmers, and if you can reduce UI design to math, science, or any formal process, call Apple, not me." -- http://unqualified-reservations.blogspot.com/2007/08/whats-wrong-with-cs-research.html

"Many times I have been asked how one "invents" a programming language. One cannot really tell, but it certainly is a matter of experience with the subject of programming, and of careful deliberation. Sometimes I answered: "Like one designs an airplane. One must identify a number of necessary building blocks and materials, and then assemble and combine them properly to a functioning whole". This answer may not be entirely satisfactory, but at least in both cases the result either flies or crashes." -- http://www.swissdelphicenter.ch/en/niklauswirth.php

"If there is one central theme is that the design of the core calculus should drive development, not the frontend language." -- http://dev.stephendiehl.com/fun/000_introduction.html

"There seems to be a long period of initial obscurity for any new language. Then after that comes a long period of semi-obscurity, followed by total obscurity." -- Paul Bissex

"There are just two kinds of languages: the ones everybody complains about and the ones nobody uses." -- Bjarne Stroustrup

"New languages are always claimed to be “simple” and to become useful in a wider range of real-world applications they increase in size and complexity." -- Bjarne Stroustrup

"When I read commentary about suggestions for where C should go, I often think back and give thanks that it wasn't developed under the advice of a worldwide crowd. C is peculiar in a lot of ways, but it, like many other successful things, has a certain unity of approach that stems from development in a small group." -- The future according to Dennis Ritchie (a 2000 interview)

http://people.cis.ksu.edu/~schmidt/705a/Lectures/history.html claims:

" When we look closely at the inventions of languages and description methods, we see certain points that stand out:

    A new language is developed to satisfy a need. For example, Fortran was developed to help physicists program their equations, Simula was developed to do simulations, Pascal was developed for portability, the C-languages were developed for systems programming, and Java was developed for embedded systems. There are many, many more examples, especially domain-specific languages (e.g., Javascript, Matlab, HTML, CSS, SQL, Ajax...) . When you work in an application area, it is inevitable you will want a language customized to the application area. You might end up inventing it yourself.
    A language is designed by a sole person or a small team with a common vision. The designer has both a deep understanding of the intended application area and its runtime (hardware) platform. The designer always has prior experience at implementing languages.
    The language's design begins with the runtime platform (runtime machine, virtual machine) that the language will manipulate. Indeed,
        the purpose of a programming language is to manipulate a machine 
    The language is prototyped (given a quick implementation) in terms of an interpreter coded in another language or in terms of a core subset of itself (``bootstrapping''). After case studies, efficient implementations are developed.
    Formal semantics techniques (operational, denotational, axiomatic) are used more for after-the-fact documentation and analysis and less for design and development." -- http://people.cis.ksu.edu/~schmidt/705a/Lectures/history.html

Design by committee

There is a general sense that languages that are 'designed by committee' adopt an 'agglomerative' style and turn out too big (see below for discussions of 'agglomerative' and 'big' languages).

Some hypothetical reasons for this could be:

Evidence and opinions:

Counter-evidence:

Generally the antithesis of 'design by committee' is thought to be a language designed by one person; however, languages designed by a small number of people, say two or three, are often also not said to be 'designed by committee'. In addition, even languages purportedly designed by one person often in reality acquire a small community of designers after the initial proposal is publicized but before a practical implementation becomes available, with one person retaining dictatorial control.

Later changes in a programming language

Method of adopting changes

Some languages get very conservative: the language is standardized, then new versions evolve by asking the compiler writers, "what non-standard things are you already doing that we can all agree on?" If this is the only type of change, then many kinds of change are unlikely to occur ( https://lobste.rs/s/ts266k/single-member_structs_for_type_safety_in_c )

Other languages allow 'breaking changes' to varying degrees. Some strategies are:

Links:

Types of compatibility

Forward-compatibility

Backwards-compatibility

new syntax looks new

When new syntactic constructs are added, consider: if a reader familiar with a previous version of the language were to read code containing the new construct, would it be immediately clear to them that this was a new construct that they didn't know?

If a language change is to be backwards-compatible, then old code must mean the same thing in the revised language, so new syntactic constructs must not appear in any old code; therefore they must be chosen from among things that were illegal in the old language's syntax. But it is still possible for something to be illegal in the old syntax and yet for this to be difficult for human readers to quickly see.
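To make this concrete, here is a small illustration of my own (Python is just an arbitrary example here, not a language discussed above): the 'walrus' assignment expression ':=' added in Python 3.8 was a syntax error in every earlier version, so introducing it could not silently change the meaning of existing code, and the unfamiliar token also makes it immediately visible to a reader who only knows older Python that something new is going on.

    # Python 3.8+ only; under Python 3.7 or earlier this file fails to compile
    # with a SyntaxError, so no pre-3.8 program could already contain this
    # construct with some other meaning.
    import random

    data = [random.random() for _ in range(20)]
    if (n := len(data)) > 10:   # ':=' binds n as a side effect of the test
        print(f"too many items ({n})")

By contrast, a new construct built only out of familiar tokens in a previously-illegal arrangement can still be backwards-compatible, yet a reader may misread it as ordinary old code rather than recognizing it as something they don't know.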

todo

where to put this: example of a protocol ignoring header fields it doesn't recognize (is that forward-compatibility? but it may not be backwards-compatible, see the new construct example above)
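As a rough sketch of that idea (my own illustration; the field names and format below are invented, not taken from any real protocol): a reader that keeps the header fields it recognizes and silently skips the rest can accept messages from newer senders that add fields, which is the forward-compatibility half; but if a newer reader requires a field that old senders never emit, old messages break, which is the separate backwards-compatibility question.

    # Hypothetical 'name: value' header format; the field names are made up
    # for illustration only.
    KNOWN_FIELDS = {"content-length", "content-type"}

    def parse_headers(lines):
        """Keep the fields this (old) reader knows about; ignore the rest."""
        headers = {}
        for line in lines:
            name, _, value = line.partition(":")
            name = name.strip().lower()
            if name in KNOWN_FIELDS:
                headers[name] = value.strip()
            # Unrecognized fields are skipped, so a message from a newer
            # sender that adds fields still parses cleanly.
        return headers

    print(parse_headers([
        "Content-Type: text/plain",
        "X-New-Field: 42",        # unknown to this old reader: ignored
        "Content-Length: 0",
    ]))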

Tendency to grow in size

For reasons of backwards compatibility, there might be a tendency for languages to grow in size, rather than shrink. This is certainly the case for hardware ISAs (Instruction Set Architectures), which do not generally eliminate old instructions.

[1] suggests that in addition, the larger a language gets, the less resistance there is to making it even larger.

The dynamics of adoption of programming languages

Why some languages succeed

aside from the stuff covered in the chapter Criteria

Lack of bugginess

Perhaps the most important attribute for a programming language to be adopted (in my opinion) is a lack of bugs; the language implementation should not crash, and it should behave correctly.

e.g. D has had some problems with this https://news.ycombinator.com/item?id=6532806 , https://news.ycombinator.com/item?id=6534390

libraries are the next most important thing

todo find quotes

"I have yet to meet a language feature that's more important than library availability, profiling, autocompletion, documentation, debugging, etc." -- https://news.ycombinator.com/item?id=7110629

(note about libraries; libraries are tied to languages: "Unfortunately, you can't just port your existing API to the new, higher-level language and runtime and expect it to be pleasing. One of the biggest benefits of moving to a higher-level language is the elimination of the most awkward and warty conventions, concepts, and entities from your previous API. " -- http://arstechnica.com/apple/2010/06/copland-2010-revisited/2/ )

budget

libraries, dev tools, marketing

e.g. java, c#

OS blessing

e.g. C and Unix, C++ and Windows

http://stackoverflow.com/questions/580292/what-languages-are-windows-mac-os-x-and-linux-written-in

"For instance, Basic would never have surfaced because there was always a language better than Basic for that purpose. That language was Joss, which predated Basic and was beautiful. But Basic happened to be on a GE timesharing system that was done by Dartmouth, and when GE decided to franchise that, it started spreading Basic around just because it was there, not because it had any intrinsic merits whatsoever." -- Alan Kay, http://queue.acm.org/detail.cfm?id=1039523

"Swift's popularity has undergone a meteoric rise for a young language...But that's not surprising given its ecosystem...language success has less to do about its theoretical benefits and more to do about what environments it allows the developer access to. Javascript is the case-in-point; I think few people would argue it is a well-designed language, but if you want to do web development, you're going to need at least a basic understanding of it, so it maintains brutal popularity." -- https://news.ycombinator.com/item?id=9681201

network effects (the rich get richer)

e.g. C

the elitist perspective

" One could actually argue—as I sometimes do—that the success of commercial personal computing and operating systems has actually led to a considerable retrogression in many, many respects.

You could think of it as putting a low-pass filter on some of the good ideas from the ’60s and ’70s, as computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were.

So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture. .... I think there’s this very long lag between what you might call the best practice in computing research over the years and what is able to leak out and be adapted in the much more expedient and deadline-conscious outside world." -- Alan Kay, http://queue.acm.org/detail.cfm?id=1039523

worse is better

"Let’s say the adoption of programming languages has very often been somewhat accidental, and the emphasis has very often been on how easy it is to implement the programming language rather than on its actual merits and features." -- Alan Kay, http://queue.acm.org/detail.cfm?id=1039523

worse is better essay

open source

For example, the language D has been criticized at times for not having a reference implementation that was sufficiently open. (e.g. https://news.ycombinator.com/item?id=6532481 ).

Since open-source became popular, it is rare to find a new popular programming language which does not satisfy at least one of the following:

http://blog.jacius.info/2012/05/29/a-personal-lisp-crisis/ is one example of someone who decided against using a language because its license was too strict (Racket, LGPL).

which factors are most important?

http://alarmingdevelopment.org/?p=826

http://web.archive.org/web/20130914000840/http://alarmingdevelopment.org/?p=826

The dynamics of adoption of programming languages: how long does it take?

" Scott Lawrence 8/14/10 Re: [go-nuts] Re: GO Vs D OTOH, taking a few quick glances around (http://www.complang.tuwien.ac.at/anton/comp.lang-statistics/ mainly), it looks like most popular programming languages of today spent a good deal of time in obscurity. In 1997, 8 years after the python project was begun, it was at level of obscurity comparable to ada today (if these statistics are to be trusted). Java did better - it was announced in 1995, and JDK1.0 was released in 1996. It rose out of obscurity within the next two years. C seems to have done about as well/poorly as python - it was working well in 1973 (when the Unix kernel for PDP-11 was written in it - I'm reading from ((dead link; see probable replacement https://www.bell-labs.com/usr/dmr/www/chist.html )), I wasn't alive then), but C wasn't "spreading widely" until the 1980s (again, semi-quoting). Of course, B was around for a good bit before C (two years at least), and the BCPL description was around in 1969, 4 years before any part of it was ready for handling the Unix kernel.

So counting the years elapsed seems a poor predictor of language adoption. "

Why do some language communities fragment less than others?

Lack of powerful metaprogramming

http://www.winestockwebdesign.com/Essays/Lisp_Curse.html

A single person or organization in charge

cite http://programmers.stackexchange.com/a/143240

A single implementation

http://programmers.stackexchange.com/a/147481

Young languages fragment less

cite http://programmers.stackexchange.com/a/143344

Free tools

http://www.ianbicking.org/where-smalltalk-went-wrong.html claims that commercial tools don't count towards the adoption of programming languages because many individual hackers don't buy them, and most corporations, who do pay for tools, don't use new/small languages.

Evolution in the programming language landscape

Over time, higher level languages become more popular

e.g. in the past we see machine language, assembly language, Fortran/C/etc., C++, Java

probably because over time hardware becomes more capable, making the expense of features like automatic memory management more tolerable (e.g. machine time becomes less valuable compared to developer time)

however there are still places where this doesn't hold, generally in embedded systems

others who agree with this hypothesis:

Process Case Studies

misc

"

    A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system." -- https://en.wikipedia.org/wiki/John_Gall_%28author%29#Gall.27s_law

" The next big programming language will be slower than what it replaces

Why?

    The incumbent language will have been optimized relentlessly
    To replace it, the new language must offer something new that will be valuable even if slow.

" -- http://www.reddit.com/r/programming/comments/pylx/ask_reddit_why_is_ocaml_faster_than_haskell/cpzqf

Minimalism

Minimalism in a programming language has various benefits:

However, there is a cost: a language which is very minimalistic can feasibly be reimplemented many times by different people. In addition, features which are desired by programmers but are not in the standard will be realized in different ways in the various implementations. This hurts 'communal gravity'. The result is that many libraries created for one implementation or another will tend to use one or more of these implementation-specific features, meaning that they cannot easily be reused in another implementation of what is supposedly the same language. Needed libraries must therefore be recreated multiple times for the various language implementations, causing duplication of effort which (assuming a fixed amount of labor) leads to less comprehensive libraries than there would otherwise have been. In addition, the different libraries are themselves a further kind of divergence between implementations, as applications written on top of those libraries cannot easily be ported to a different implementation.

When implementations diverge in this fashion, they might more properly be considered not implementations of the same language, but rather a family of loosely related dialects.

This appears to have happened in the case of Scheme [2] [3].

I hypothesize that this danger is present whenever:

Links


" ...a number of authors have not have financing, plans for new contributors, etc. in the beginning and have been successful; these are not requirement for success in the beginning.

For example, here is the story of how Python got its start from Guido van Rossum (quote from 1996):

"Over six years ago, in December 1989, I was looking for a "hobby" programming project that would keep me occupied during the week around Christmas. My office ... would be closed, but I had a home computer, and not much else on my hands. I decided to write an interpreter for the new scripting language I had been thinking about lately: a descendant of ABC that would appeal to Unix/C hackers. I chose Python as a working title for the project, being in a slightly irreverent mood (and a big fan of Monty Python's Flying Circus)."

And the history of how Ruby got its start by Yukihiro Matsumoto (quote from 1999) is similar:

"I was talking with my colleague about the possibility of an object-oriented scripting language. I knew Perl (Perl4, not Perl5), but I didn't like it really, because it had the smell of a toy language (it still has). The object-oriented language seemed very promising. I knew Python then. But I didn't like it, because I didn't think it was a true object-oriented language — OO features appeared to be add-on to the language. As a language maniac and OO fan for 15 years, I really wanted a genuine object-oriented, easy-to-use scripting language. I looked for but couldn't find one. So I decided to make it." "

-- kangar00

---

(speaking of Clojure and Elixir, both languages run on top of virtual machines originally made for other languages):

" pjmlp 1 day ago [-]

I will be blunt and state that neither of them matter.

Some people might hate Java and search for alternatives on the JVM, but 90% of the production code on the JVM, its commercial variants, and Android is plain Java.

Also ClojureScript is not 100% Clojure and most web shops across the world only care about JavaScript.

Likewise, the majority of developers targeting BEAM will be using Erlang.

If something I learned from my Turbo Pascal/Delphi, Oberon experience, is that programming languages that aren't considered a first class experience from the platform owners never manage to get a significant market share, long term.

reply

geodel 1 day ago [-]

I agree and add even 90% sounds like conservative estimate. One reason I think is Java is good enough and the incredibly loud communities of smaller JVM languages have not produced enough libraries, tooling etc to attain a 'just works' title which is needed for most common type of development done by salaried workers.

reply "

" city41 1 day ago [-]

But the problem is you really do need a "paredit" for your editor when developing in Clojure. It's a double whammy for beginners, and a huge one at that. Pretty much every other language ever is perfectly editable in everyone's editor as it exists today.

reply "

---

The C# Design Process

---

Some advice from Stroustrup to the C++ committee, in preparing for C++17:

" Bad (committee) habits to avoid, to avoid giving critics more ammunition: •Make something a library because that's easier to get accepted in the committee than a language feature (even if there is good argument that what is provided is fundamental) •Given a choice between two alternatives, choose both, add a third, and modify the first two "to please everybody who could affect the vote" (this is pure design-by- committee) •Oppose proposals seen as competing with your favorite proposal for time/resources •Push hard for the immediately useful (only) •Oppose proposals not relevant to your current job, stalling an improvement that would benefit others •Focus on the Working Paper (WP) text and choose among technical alternatives based on what fits best with the current text, rather than giving precedence to user needs •Think that more syntax equates to safety and ease of use for the majority of programmers •Think that “no keywords” implies simplicity and usability •Serve the library writers and other experts while ignoring the majority of current and potential C++ programmers •Chase “current fashion” and push for features from other popular languages without considering their impact on C++ programming styles or interactions with existing C++ facilities •Try to make a proposal self-contained to avoid interaction with existing facilities •Consider only solutions that are obvious or fashionable, rather than addressing root causes of programmers’ problems •Ignore other proposals being considered •Present “principles” as non-negotiable absolutes •Accept nothing that is not perfect •Consider minor improvements inherently of minor importance •Try to do “everything” •Add something just because we can

Try not to do any of that! " -- Thoughts about C++17 by Bjarne Stroustrup

---

" Keep simple things simple for the majority of programmers. Note that auto and range-for loops are invariably near the top of people’s list of useful C++11 features. They are also among the simplest facilities we provided. " -- Thoughts about C++17 by Bjarne Stroustrup

---

http://programmers.stackexchange.com/questions/143236/how-were-some-language-communities-eg-ruby-and-python-able-to-prevent-fragmen

discussion: https://news.ycombinator.com/item?id=3935251

---

" Overall strategy

In the abstract, reaching the kind of adoption we need means bringing people along a series of distinct steps:

    Public perception of Rust
    First contact
    Early play, toy projects
    Public projects
    Personal investment
    Professional investment

We need to (1) provide "drivers", i.e. strong motivation to continue through the stages and (2) avoid "blockers" that prevent people from progressing.

At the moment, our most immediate adoption obstacles are mostly about blockers, rather than a lack of drivers: there are people who see potential value in Rust, but worry about issues like productivity, tooling, and maturity standing in the way of use at scale. The roadmap proposes a set of goals largely angled at reducing these blockers.

However, for Rust to make sense to use in a significant way in production, it also needs to have a "complete story" for one or more domains of use. The goals call out a specific domain where we are already seeing promising production use, and where we have a relatively clear path toward a more complete story. " --- https://github.com/aturon/rfcs/blob/roadmap-2017/text/0000-roadmap-2017.md

---

"Languages generally become popular when they bring innovative runtime, or when they have runtime exclusivity. The quality of the language itself is secondary." -- https://matklad.github.io/2020/09/13/your-language-sucks.html

---

" debugnik 1 day ago

root parent prev next [–]

If only the CLR had followed CLR footsteps as well, but they didn't even try after the DLR. Most non-Roslyn languages can barely interact with modern C# or the newer build configurations; even F# is playing catch-up and they're part of the official toolchain.

reply

pjmlp 1 day ago

root parent next [–]

That is the sin of guest languages, that is why C rules on UNIX, JavaScript on Web, Java on the JVM, C# on the CLR, ....

Every guest language means additional IDE plugins with platform knowledge, since most communities want idiomatic libraries, an ecosystem on top of the ecosystem, as the actual platform is only implemented in a main "systems" language, mastering it is required anyway for all the leaky abstractions on the platform, additional build toolchains,...

In the end, each polyglot platform achieves a global maximum of main language, and possibly a winner among all guest languages, even if it takes a couple of years with projects fading away until this happens.

It sucks, however so it is the outcome of human nature attention span, and not being able to keep momentum for all languages across the whole lifetime of a given platform.

reply

DylanSp 1 day ago

root parent next [–]

It seems like the JVM has been more successful than most platforms at supporting other guest languages. Java is still dominant, but Kotlin's pretty popular (as other comments have mentioned) and Scala has a nontrivial amount of usage. I think JRuby's also used some, but I'm not sure.

reply

WorldMaker 1 day ago

root parent next [–]

Arguably a perspective here is not that the JVM is more successful but that simply that Java as a language is less successful/more deficient language than C# and leaves more evolutionary niches available for other languages to coexist in its ecosystem, whereas C# checks more boxes generally. One reading of the history of Kotlin is that is almost directly the reason for Kotlin's existence in the first place because Java itself fell so far behind C# state of the art.

The DLR made it much easier for dynamic languages on .NET to interact and share infrastructure on top of the CLR, but it also gave C# directly just enough dynamic language powers that filled most people's use cases for that niche.

Even in the current race between C# and F#, unless you are using features such as Units of Measure or Type Providers, there is often a sense of "you don't need to use F# because that feature will be eventually in C# anyway" and "F# is just a proving ground for C# ideas". The F# community continues to try to find marketing messages that aren't that and better highlight some of the niches that F# can fill that C# won't. The most recent attempts have been trying to position F# as a "data sciences" language useful for Notebook-driven development in direct comparison/competition to Python/R/Julia. C# actually is finally useful in Notebooks now and has some alright REPL support, but F# likely will always better fit that niche. It will be interesting to see if that marketing approach helps.

reply " -- https://news.ycombinator.com/item?id=38126263