proj-oot-ootDevelopmentProcessNotes2

terminal ads upon package install sound like a great idea to me! we should allow them, unlike npm:

https://www.zdnet.com/article/npm-bans-terminal-ads/

---

[1] lists 4 kinds of docs:

tutorial, explanatory, how-to/recipe, reference

[2] adds

README

i would also add

things like Python's tutorial, or the Ruby book,

which are mixtures of tutorial and explanatory, while also being comprehensive (to a point)

--- license

" I have talked to GCC developers about integrating a Rust frontend and they said, the main blocker is the fact that the language specification isn't stable and a fast moving target. ... I think the instability the GCC devs are concerned about is the release schedule. Rust has 9 releases per year while GCC has 4-5 releases. Unless you update your compiler quickly, you won't be able to use much of the crates.io ecosystem as crates are quick at requiring a new compiler version. Often the implementation of a new language feature is still getting last-minute fixes up to 6 weeks before the release, and sometimes even after that beta period. ... GCC can't miss out releases. The rustc 1.36.0 frontend needs at least a 1.35.0 frontend to compile itself. And most Rust programs in the ecosystem work with rustcs compiled with older LLVM releases, but most Rust programs that have dependencies do need newer rustc releases.

... As for stability of MIR: currently I think MIR is serialized to disk in a glorified mmap way. Basically following the Rust memory representation. It's great if both the creator of the MIR as well as the part that reads the MIR are written in Rust. Furthermore, there are libraries about how memory layout should look like that are provided to codegen backends. Those libraries are written in Rust and not really usable outside of Rust. So currently unless someone serializes MIR using e.g. bincode and provides C bindings for those layout libraries, there are good reasons to write the codegen backend in Rust itself, at least the part that translates MIR to the next stage. "

---

There's an organization called Blue Oak Council in which 3 FOSS lawyers work on licensing-ish stuff:

https://heathermeeker.com/2019/03/07/blue-oak-council-and-the-permissive-license-list/

Kyle Mitchell (executive director), Heather Meeker, Luis Villa. Kyle authors a zillion new licenses, so i'd want to see that this stuff was actually supported by the others, and it seems it was. According to that post by Heather, all three actively participated in the license list project and in the creation of the Blue Oak license.

Luis also publicly supports this work: " Luis Villa argues that the list of OSI-approved licenses isn't a comprehensive list of usable open source licenses. It should therefore be avoided in contracts or license clauses. But if not that, what is the purpose of the list? Would it make sense to create a smaller list of useful licenses? Villa points to his Blue Oak project as a list of useful permissive licenses. " -- [3]

"I'd happily submit the Blue Oak Model permissive license as an initial guinea pig for such a process...I have no current plans to submit the license primarily because I am too busy to have a massively inefficient discussion on license-review" -- http://lists.opensource.org/pipermail/license-discuss_lists.opensource.org/2019-March/020279.html

"Agree there definitely doesn’t need to be a flood of new licenses, of course, but the correct number is > 0 (or we should nuke everything other than Apache. Or Blue Oak Winking face)" -- https://twitter.com/luis_in_brief/status/1215503399648514048

"(I should disclose here that Kyle and I work together quite a bit, and I co-authored a license with him in 2019—the Blue Oak Model License—that I volunteered to submit to the OSI as a guinea pig to test any improved processes.)" -- https://blog.tidelift.com/open-source-licenses-2019-year-in-review

Luis Villa is the Apache 2.0 guy, i think.

Richard Fontana seems to like the Blue Oak License too: "Blue Oak is, as to its content, both extremely simple and extremely non-controversial" -- Richard Fontana (Mar 19 2019)

however, earlier he said:

"new putative FOSS licenses should be drafted in public and collaboratively, not in private as I gather Blue Oak was" (https://twitter.com/richardfontana/status/1104811186665721856) and mentions that here: https://lists.fedorahosted.org/archives/list/copyleft-next@lists.fedorahosted.org/thread/2YJ4COON2V33J7SF7B2DXE3EYUHVZXWA/ (Mar 16 2019)

The group of Meeker, Villa, Mitchell also works on the Polyform Project, a set of non-FOSS licenses: https://polyformproject.org/licenses/

" A group of attorneys also published a set of standard, but again very explicitly commercial/not-open, licenses as the Polyform Project.

While the lawyers involved in these (including Kyle, Heather Meeker, and me) would be the first to tell you that these licenses aren’t open source, I include them here because they bear two key similarities to open source licenses. " -- https://blog.tidelift.com/open-source-licenses-2019-year-in-review

and Villa claims that Mitchell's non-FOSS "License Zero" has some adoption [4]

https://licensezero.com/ (it seems the actual license is Parity); related forum: https://forum.artlessdevices.com/top

Villa is a little chillier towards other licenses there:

" Commercial “open source”

In late 2018, Mongo submitted the Server Side Public License to the OSI, intended to replace the AGPL with a license that was more aggressive and protected their business from cloud vendors. In 2019, this trend accelerated and turned into a movement of a sort, with Redis using a new source available license. These discussions eventually snowballed into something calling itself Commercial Open Source Software, centered around an Open Core Summit.

The entire thing was a little odd, given that open source has (from literally the time the phrase was coined!) been pro-commerce, and that also since the nominally pro-commerce COSS folks appeared to be arguing primarily for licenses that...oppose commercial use.

While I think some of the readings of this have been uncharitable, suffice to say that this messaging has been at best very confusing and at worst perceived as an active attack on the definition of open source by venture capitalists who want Red Hat-like returns without putting in the effort.

Regardless of the confused messaging, I expect we’ll see more of this in 2020—in both good and bad faith. "

The rest of the post speaks somewhat admiringly about CAL (a broader-than-GPL license that extends freedom to data as well as code) and Parity (another extremely broad license, but one with simplified wording; from Mitchell). These are noted on Blue Oak's copyleft guide: https://blueoakcouncil.org/copyleft

interestingly Blue Oak Council also has a model FOSS usage policy for use by businesses. Cool! https://blueoakcouncil.org/starter-policy for small companies and https://blueoakcouncil.org/company-policy for larger ones. And here's a policy for contractors: https://blueoakcouncil.org/development-use

here's a blog post about how blue oak council came to be: https://writing.kemitchell.com/2019/03/07/Blue-Oak-Council.html

interestingly Bruce Perens has split from OSI:

https://news.slashdot.org/story/20/01/05/208249/open-source-initiative-co-founder-bruce-perens-resigns-citing-move-toward-license-that-isnt-freedom-respecting#perens_coherent

and he now advocates for everyone to use one of AGPLv3, LGPLv3, or Apache 2.0, to reduce confusion.

github suggests GPLv3 and MIT: https://choosealicense.com/ but also is cool with AGPLv3, Apache 2.0, MIT: https://choosealicense.com/licenses/

GNU suggests Apache 2.0 for permissive: https://www.gnu.org/licenses/license-recommendations.html

here's mitchell's take on a minimal set of licenses: https://writing.kemitchell.com/2019/03/17/License-Utopia.html

" Stephen Paul Weber occasionally sees such “any OSI-approved license” terms in contests or in aggregators. Sometimes, any OSI- or FSF-approved license is allowed, to avoid choosing “sides”. Fontana thinks that approach is clever: both OSI and FSF are respected neutral authorities, unlike e.g. Blue Oak or Fedora. As a historical point, Fontana remembers that Fedora did not rely on the OSI license list because OSI was then seen as too commercially influenced. Nowadays, OSI criticism seems to be the opposite. Henrik Ingo thinks the OSI is now much more important than back then, because Linux distros no longer have the role of kingmakers: whether the software is packaged for Debian or Fedora is no longer crucial for an open source project. "

here's the FSF list:

https://www.gnu.org/licenses/license-list.en.html

here's the OSI list:

https://opensource.org/licenses

https://www.kiuwan.com/blog/comparison-popular-open-source-licenses/ cites WhiteSource data saying that the most popular permissive licenses are MIT and Apache 2.0

here's a License Picker on Meeker's website:

https://heathermeeker.com/license-picker-2-0/

here's wikipedia's comparison page:

https://en.wikipedia.org/wiki/Comparison_of_free_and_open-source_software_licences

here's blue oak's permissive license:

https://blueoakcouncil.org/license/1.0.0

here's blue oak's permissive license list:

https://blueoakcouncil.org/list

the only 'Gold' rated license is the BSD-2-Clause Plus Patent License (BSD-2-Clause-Patent) (and it looks like they consider their own license Gold or better than Gold). Apache 2.0 and MIT are both rated 'Silver'.

afaict BSD-2-Clause-Patent isn't well known and isn't even on FSF's list at https://www.gnu.org/licenses/license-list.en.html.

https://spdx.org/licenses/BSD-2-Clause-Patent.html

https://opensource.org/licenses/BSDplusPatent links to:

GNU License List, Wikipedia License List, OSSWatch License Diff, Choose a License (by GitHub)

here's some discussion on Blue Oak (Mitchell thinks it's better than MIT and BSD): https://writing.kemitchell.com/2019/03/09/Deprecation-Notice.html https://blueoakcouncil.org/2019/03/06/model.html https://news.ycombinator.com/item?id=19347898

Mitchell seems to think that Blue Oak is the plain language version of Apache, at least in some respects: "Licenses like Apache 2.0 show how lawyers do this in legal terms for private deals every day. Blue Oak shows the same job done in everyday English, without long lists, run-on sentences, or complex scope rules." That blog post is generally complimentary of Apache 2.0, except that he thinks the rules around contributors are too complex.

here's something else he says about apache 2.0:

kemitchell on Mar 10, 2019:

Both Blue Oak and Apache are relatively modern permissive licenses, but they differ intentionally in design. In the short to medium term, I foresee that many projects will continue to choose Apache where contributions from large software patent holders are essential, because of Apache's complex rules on patent scope. At the same time, I think large software patent holders will prefer to receive code under the uncomplicated Blue Oak patent grant, even as they insist on Apache's mazelike compromise for outbound contributions. -- https://news.ycombinator.com/item?id=19347898

so my summary:

---

user 'skade' on lobsters volunteers to be contacted by programming language community organizers to give advice, and is particularly happy to advise on conference organization and on surveys:

https://lobste.rs/s/xgquet/2020_state_haskell_survey#c_hp7nfs

---

regarding PL community surveys, user 'skade' on lobsters suggests starting by copying the battle-tested survey from Rust.

An example of the tricky stuff he mentions: rather than asking about trans-ness directly, for various reasons it may be better to just ask:

    “Do you consider yourself a member of an underrepresented demographic in technology?”
        I won’t list them all here, but it lists 14 characteristics that we are aware of, such as gender identity, race, but also language skill + a free form field
    “Do you feel your situation makes it difficult for you to participate in the Rust community?”
        Yes
        No
        Maybe

---

some more ideas for release methodology:

some excerpts from my old 'release staging' workflow:

"

Project roles

Releases owned by dev

Releases owned by QA/ops

Note that, until the previous version has made its way through preview and beta and is deployed to production, another alpha release won't be made.

Git Repositories

We will be using the git-flow workflow. Most of this page needs to be updated to be consistent with that terminology.

Following the git-flow branching model, we have three active git branches at any one time for each project:

The develop branch keeps on growing. When a feature-freeze release is made, the dev repo is branched into a release branch. When the version in the release branch is deemed stable enough for production, the release branch is merged into "master".

In addition, developers are encouraged to maintain OtherRepos for personal development and/or for developing features which do not yet qualify for acceptance into dev. Please see the page OtherRepos for a list of these.

Versions

We will use dotted version identifiers with 3 numbers. The first number is the major version number (used to signify a substantial overhaul of code or additional functionality), the second is the minor version number (incremented each time a feature freeze is instituted and the develop branch is branched into a new release branch), and the third is the revision number (incremented most times that a new alpha, preview, beta, or production version is deployed).

These internal version numbers are not the same as the public API version numbers.

Example

For example, development will begin in the "develop" branch until the dev team deems it time for a feature freeze, at which time the current version will be tagged in the develop branch as version 0.0.0. It will be branched into a release branch named 0.0.

In the release branch, the feature frozen version will be improved until the dev team deems it ready for alpha. These changes will generally be merged into the develop branch also.

When it is time to make an alpha the current version of the release branch will be tagged (presumably as 0.0.1), and also tagged as "alpha". The alpha version will be used internally. It will be improved until it is deemed ready for "preview" (closed beta), at which point it will be tagged with a version number, perhaps 0.0.2, and also with "preview". Then it will be offered to outside preview testers. At some point it will be deemed ready for "beta" (open beta), tagged with a version number, perhaps 0.0.3, and "beta". Then it will be offered to anyone who wants to try it out. At some point it will be deemed ready for "production", tagged with a version number, perhaps 0.0.4, and "production". At this point it will be merged into the branch "master".

Hopefully there will rarely need to be changes made to the branch "master" aside from merging in new release branches. Such changes will be made on so-called "hotfix" branches. "Hotfix" branches will be merged into the develop branch also.

While version 0.0.x is making its way through the release branch on its way to master, additional features may be being developed in the develop branch. These are being targeted at future alpha release 0.1.0. When a production release is released (i.e. when a release branch is merged into master), the dev team will try to make another feature freeze as soon as feasible, at which point the current contents of the develop branch will be tagged as 0.1.0 and branched into a new release branch, and the process will repeat.

Note that every release is associated with an increment of the revision number, and the minor version number is incremented when a feature freeze takes place and the contents of the develop branch is branched into a release branch. The major version number is subjective.

Demo and sandbox: other repos/version identifiers for special purposes

"

---

ok so my current synthesis/best guess for release methodology:

version numbers: marketing, break, normal, hotfix

prerelease tags: -next, -alpha, -preview, -beta, -rcN

repos: dev, next, testing, stable

- dev: 'committers' have access to this. There may be branches for issues, features, and lieutenants. The point of this repo is for all of the committers to be able to show their work to each other and merge their work into each others'. There is a 'master' branch which is only writable by the tech lead. Merge commits may be used within this repo. The 'master' branch is always based on the head 'next' version but otherwise may have its history rewritten. Any version number tags in this repo should have a prerelease suffix, which should be other than 'next/alpha/preview/beta/rcN'. The 'master' branch should always build but otherwise may not work. The 'master' branch is a throw-away integration branch; work should not be based on it.
- next: only the tech lead (or whoever has the final signoff on development) has access to this. The 'master' branch is the only one defined by this workflow; any version number tags have the staging tag '-next'. This branch has a linear history (rebase and then fast-forward merges only); however, this history is rebased on stable after each release. When the testing repo is unoccupied and the tech lead wishes, they tag a version '-alpha', and it is cloned into the 'testing' repo.
- testing: only the stable maintainer and the tech lead have access to this. An 'alpha' version is cloned from the 'next' repo and eventually contains versions progressing through the tags '-alpha', (possibly '-preview'), '-beta', '-rcN'. The 'master' branch is the only one defined by this workflow. This branch has a linear history. A release is merged from this repo into 'stable'.
- stable: only the stable maintainer and the tech lead have access to this. A release is fast-forward merged from the 'testing' repo into the 'master' branch of stable. This branch has a linear history. If bugfixes/hotfixes are required for old releases, a bugfix branch is made from the release.

approval dispositions:

roles:

Workflow:

Analogies to other workflows:

Note: if you prefer branches to repos, then 'stable/master' could just be the 'master' branch, 'testing/master' could be a 'testing' branch, 'next/master' could be a 'next' branch, and 'dev/master' could be a 'dev' branch. Alternately, you could have two repos, the main one and a '-dev' one, and put all of 'next', 'testing' and stable (called 'master') in the main one -- that way everything that the community shares control over is in -dev, and everything that the tech lead controls is in the other one.

You could further simplify by having 'stable' releases just be tags within the 'master' branch of the main repo, so things would look like this:

hmm, i kind of like that simplification. You could go even further, consolidating everything that is tech-lead-only into one repo, and then just having personal repos outside of that:

so the next, one-repo-plus-personal-repos system would be:

I bet ppl will just submit their pull requests to 'master' however, and develop from 'master', which isn't what we want (we want new development off 'next'). So how about:

i like that. Maybe 'try' is a confusing name tho. Mb 'trial' is better. Or 'edge'. So:

One repo. Two branches: 'master' and 'edge'. And then one branch for each release.

Eh, just make the releases in master. So sorta back to git flow. How about:

One repo. Two branches:

all of master/test/dev have linear histories, but dev is rebased off of each new release. master < test, master < dev.

Problems:

oh, i see, yes that's it:

" Decide what to base your work on.

In general, always base your work on the oldest branch that your change is relevant to.

    A bugfix should be based on maint in general. If the bug is not present in maint, base it on master. For a bug that’s not yet in master, find the topic that introduces the regression, and base your work on the tip of the topic.
    A new feature should be based on master in general. If the new feature depends on a topic that is in seen, but not in master, base your work on the tip of that topic.
    Corrections and enhancements to a topic not yet in master should be based on the tip of that topic. If the topic has not been merged to next, it’s alright to add a note to squash minor corrections into the series.
    In the exceptional case that a new feature depends on several topics not in master, start working on next or seen privately and send out patches for discussion. Before the final merge, you may have to wait until some of the dependent topics graduate to master, and rebase your work.
    Some parts of the system have dedicated maintainers with their own repositories (see the section "Subsystems" below). Changes to these parts should be based on their trees." [7]

So:

version numbers: marketing, break, normal, hotfix

prerelease tags: -0, -alpha, -preview, -beta, -rcN
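here's a minimal sketch (mine, hypothetical) of how these identifiers could be ordered, assuming semver-style rules where a prerelease tag sorts before the corresponding untagged release, and the stages order -0 < -alpha < -preview < -beta < -rcN:

    import re

    STAGES = {"0": 0, "alpha": 1, "preview": 2, "beta": 3, "rc": 4}
    FINAL = len(STAGES)  # an untagged release sorts after every prerelease

    def sort_key(version):
        # versions look like 'marketing.break.normal.hotfix' plus an optional
        # prerelease suffix, e.g. '1.0.0.0-beta' or '1.0.0.0-rc2'
        core, _, pre = version.partition("-")
        numbers = tuple(int(n) for n in core.split("."))
        if not pre:
            return numbers + (FINAL, 0)
        stage, n = re.fullmatch(r"(0|alpha|preview|beta|rc)(\d*)", pre).groups()
        return numbers + (STAGES[stage], int(n or 0))

    tags = ["1.0.0.0", "1.0.0.0-rc2", "1.0.0.0-0", "1.0.0.0-beta",
            "1.0.0.0-alpha", "1.0.0.0-rc1"]
    assert sorted(tags, key=sort_key) == [
        "1.0.0.0-0", "1.0.0.0-alpha", "1.0.0.0-beta",
        "1.0.0.0-rc1", "1.0.0.0-rc2", "1.0.0.0"]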

repos:

branches:

approval dispositions:

roles:

Workflow:

Analogies to other workflows:

Alternatives:

---

i spent some time recently looking into the recent arguments for an email-driven git flow instead of something like GitLab. I am not persuaded.

The most persuasive argument for an email-driven git workflow is that some famous people do it:

sr.ht also popularizes an email-driven git flow (i dunno if sr.ht is as famous though).

However i can't find arguments for it that persuade me:

https://kernel-recipes.org/en/2016/talks/patches-carved-into-stone-tablets/ (discussion: https://lwn.net/Articles/702177/ https://www.reddit.com/r/programming/comments/73gpys/why_linux_kernel_development_still_uses_email/ ) says:

this link from the LWN discussion disagrees with many of the points from the previous article: https://lwn.net/Articles/702357/

https://ipfs.io/ipfs/QmdA5WkDNALetBn4iFeSepHjdLGJdxPBwZyY47ir1bZGAK/comp/linux/git_basic.html suggests that email is simpler

https://blog.brixit.nl/git-email-flow-versus-github-flow/ (discussion: https://lobste.rs/s/kevlgd/git_email_flow_vs_github_flow ) says:

https://drewdevault.com/2018/07/02/Email-driven-git.html (discussion https://news.ycombinator.com/item?id=17441060 ) describes the email git workflow but doesn't give many reasons for it, besides 'email clients benefit from other work' above.

In the discussion though the author makes a good point, which is that sr.ht is merely trying to offer the best of both worlds, so that contributors who want a gitlab-style experience can have it, and contributors who want an email-driven workflow can have that too. He opines that technically, gitlab etc should have been based on email and extended it behind the scenes, rather than reinventing email (presumably he means via issues/pull request discussion threads), and he is merely trying to do it that way. I agree with this.

https://begriffs.com/posts/2018-06-05-mailing-list-vs-github.html gives some better reasons for the email-driven flow:

i suspect that the real reason email is used is just that gitlab/github didn't exist back in the day, and now these early projects have a workflow that works for them (with customizations that work for them, e.g. the 'patchwork' tool), so there's no reason to change.

---

on whether or not to have a linear commit history:

    - someone says: "... For this reason, when I merge PRs I avoid the GitHub UI, and use "git rebase -S" locally followed by "git push". This does what the PR rebase button should do."
    - someone else describes it as: "We use one merge commit per pull request, without squashing. However, we also rebase before merging, which results in a commit graph that looks like a cactus:

        o-o-o   o-o   o-o-o
       /     \ /   \ /     \
      o-------o-----o-------o-->
      " [https://news.ycombinator.com/item?id=27723435]
    - apparently Azure DevOps calls this a "semi-linear merge" [https://news.ycombinator.com/item?id=27725011]
    - someone else suggested in .gitconfig: l = log --graph --abbrev-commit --date=relative (an alias that prints the history as an ASCII commit graph with abbreviated hashes and relative dates) [https://news.ycombinator.com/item?id=27727474]
    - ppl note that you have to execute the merge while in the main branch, not the feature branch, in order for the --first-parent convention to hold. And that for reasons like that, you should have tooling to do the merge to enforce convention/prevent mistakes. [https://news.ycombinator.com/item?id=27724229]

---

so i just wrote the previous section. I don't have time to read the earlier sections right now, but here's what i'm thinking (to summarize the prev section, and integrate with what i remember of the prev ones):

(note: i got rid of what i used to call the 'dev' branch and renamed what i used to call 'unstable' to 'dev') (note: however if you want something like gitflow, where the 'master' branch only contains releases and 'dev' is the actual head; then you may want to reserve the word 'dev' for that and call what i call dev 'next'. But why would you want a master ('main') branch that only contains releases? Perhaps to make it convenient for tooling, e.g. autodeployment of commits to master sort of thing. But then maybe the branch in between 'main' and 'next' should be called 'testing', not 'dev'. Hmm. gitflow calls this a 'release branch', but imo that's a little confusing because the commits on a gitflow release branch aren't full releases, they are things like alpha releases)

---

" "feature proposals" rather than "feature requests". This is a subtle but important nuance.

A 'request' puts the onus on someone else to "fulfill" a request. Whereas a 'proposal' puts the onus on the initial person writing the issue. " [17]

(btw we don't want to 'start with an issue', we want to 'start with a wiki post'; issues are only for (a) remembering bugs/issues, and (b) MAYBE planning the smaller details of work that has already been committed to)

i note also that Hintjens's radical code review process accepted all correct changes that solved a previously agreed-upon problem; i guess the discussion about the problem is like a feature request.

So stuff might have a few stages for us:

---

community votes on libraries that should be considered for addition to the stdlib, and on language changes that should be considered; score -1/0/1 voting; any library or other proposal that gets >1/3 is at least considered by the core team
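a sketch (mine) of the tally; note that 'gets >1/3' is ambiguous, so this assumes it means a mean score greater than 1/3:

    # -1/0/+1 score voting; names are hypothetical, and the >1/3 reading
    # (mean score above 1/3) is my assumption about the rule above.
    def shortlist(proposals):
        """proposals maps a proposal name to its list of votes in {-1, 0, 1};
        returns the proposals the core team should at least consider."""
        return [name for name, votes in proposals.items()
                if votes and sum(votes) / len(votes) > 1 / 3]

    votes = {"add-regex-to-stdlib": [1, 1, 0, 0],   # mean 0.5 -> considered
             "new-ffi-syntax":      [1, 0, 0, -1]}  # mean 0.0 -> not
    assert shortlist(votes) == ["add-regex-to-stdlib"]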

---

mb the core team operates a world-read-only Clubhouse instance? Doesn't really seem needed tho; plaintext roadmaps on wikis, plus milestones in gitlab issue tracker, are probably enough, unless the project gets huge

---

Ruby has a 'gemified standard library', which means that in addition to the stdlib there are 'default gems' (shipped with Ruby, but also updatable from rubygems.org) and 'bundled gems' (ordinary gems that happen to be installed along with Ruby and can be removed).

---

in addition to having core-team-certified-good libraries, open this up to arbitrary 'certification authorities', of which the core team is only one.

---

still want a way to make it so that most libraries must be in 'strict' mode. Maybe don't promote libraries not in strict mode?

---

how to square our goal of 'gravity' with freedom?

for an example of a community that takes an extreme approach on the 'gravity' side, consider Elm. Apparently Elm used to have a policy of only allowing pure Elm in their packages repo, and then later they even modified the compiler to disallow custom native code in local projects. A guy who wanted to use native code proposed to fork the compiler, and a core dev said this felt like an attack [18].

Elm users are afraid of getting banned from community forums for moderate criticism (see edit at bottom of [19]) and for advocacy of un-idiomatic solutions ("Within my first week of starting to contribute I had a post deleted and was blocked from contributing for a week. My offense was that, after the core team had announced their plans to restrict native modules in Elm 0.19, I posted a solution to someone’s problem that made use of native modules." [20]).

i don't know about the Rust community in general but on HN some prominent Rust members feel that civil but harsh criticism of senior members' work should be discouraged (see reply to this comment [21]; you may also want to look at ancestor comments for context). I wonder if perhaps Elm feels the same way (that is, that this sort of criticism needlessly saps the time and energy of contributors, and should be considered uncivil). I don't know how Clojure handles this sort of thing, but Rich Hickey's response to one criticism (which contains some obscenity) [22] notes that "Every time I have to process such a diatribe and its aftermath, and its effects on myself, my family, and my co-workers, I have to struggle back from "Why should I bother?", and every time it gets harder to justify to myself and my family that it's worth the time, energy and emotional burden. Every time a community engages with such a diatribe without calling it out, and decrying its tone, the civility of our discourse and treatment of others heads further down the drain.".

There is (apparently, according to this blog post [23]?!) no defined governance process or channel for dissent and "there is no meta-process; any comments or ideas from the community about how things could be better are not wanted or appreciated.".

and on the other end of the spectrum, you probably have stuff like Perl, where the leader decided to use a light touch for Perl 6 and it took so long that the world forgot about Perl almost completely; and Haskell, where there was no blessed implementation and eventually one implementation (GHC) took over and now some ill-defined mishmash of GHC's extensions is effectively standard. And i wonder if there's some example of a project in which some more optimized implementation displaced the reference implementation and led to a loss of power over language direction by the founders?

and in another context, on a forum, i used to be very pro-freedom and supported a 'fork' but half the community (including the founder) considered the fork an attack, and it caused bad feelings and was (imo) a bad thing. Otoh before the fork a lot of everyone's time and energy was spent on consensus-building within the community on issues that were important, yes, but imo still did not deserve the amount of time being spent to build consensus on them. Also i used to support criticism even if it came off as troll-y and i think i ended up forcing the founder to take too much of their time and energy dealing with trolls, which i also regret.

so what do i think about all of this?

---

Red/yellow/green system for library status in oot: green means either pure oot or reviewed by the oot project; yellow means contains native code and is not reviewed by the oot project; red means known serious issues or not recommended by the oot project
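a minimal sketch (mine; the flag names are hypothetical) of that classification rule:

    # red/yellow/green status per the policy above; a library that isn't
    # pure oot is assumed to contain native code
    def library_status(pure_oot, reviewed, known_serious_issues, not_recommended):
        if known_serious_issues or not_recommended:
            return "red"     # known serious issues, or not recommended
        if pure_oot or reviewed:
            return "green"   # pure oot, or reviewed by the oot project
        return "yellow"      # contains native code and is not reviewed

    # an unreviewed native-code binding with no known issues lands in yellow
    assert library_status(False, False, False, False) == "yellow"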

---

rkangel on April 14, 2020:

> C's charter is to standardize existing practice (as opposed to invent new features)

Passing a pair of arguments (pointer and a length) is surely one of the more universal conventions among C programmers?

cperciva on April 14, 2020:

When they say "existing practice" they mean things already implemented in compilers -- not existing practice among developers.

apotheon on April 14, 2020:

This seems like a poor way to establish criteria for standardization. It essentially encourages non-standard practice and discourages portable code by saying that to improve the language standard we have to have mutually incompatible implementations.

---

so the previous section suggests: when dreaming up language additions, priority should be given to mere standardization of 'existing practice', further prioritized as:

1) stuff already implemented in multiple implementations, and widely used
2) language additions to make stuff in libraries that almost everyone uses more ergonomic
3) std libraries to standardize stuff that a lot of libraries already do
4) libraries to standardize stuff that a lot of projects already do

---

we SHOULD treat 'security' issues as more important than other issues, however:

we should have something like 'npm audit', but:

see also https://overreacted.io/npm-audit-broken-by-design/

note: i read some critiques of CVEs:

---

Everything is subject to change (unless there is an explicit backwards-compatibility policy to the contrary). If you spend time creating a major library or development tool, even if you are engaging with the core team, we might later decide to do things a different way.

---

you can avoid having a 4th number in the version to track backports as long as LTS releases are always followed by releases of the form x.y.0 (where 'LTS' means 'anything you might release an update for after it is no longer the latest release'). That could be annoying tho if there are no breaking changes and you bump the y just b/c you want to make an LTS release. But in practice it probably works, because you probably don't need to provide LTS for a version except when there is a breaking change after that version, and i think avoiding a 4th number in the primary version is worth the complexity.

to clarify, in this system, LTS updates must never be breaking changes, and they take the form of just bumping the last number. One issue might be if a security LTS update forces a breaking change.
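a worked example (mine) of why the x.y.0 rule avoids collisions: if 1.2.3 is cut as an LTS release, the next mainline release must be 1.3.0, leaving the whole 1.2.x series free for backport updates:

    # after LTS release 1.2.3, mainline jumps to 1.3.0 even if nothing broke;
    # backports then own 1.2.4, 1.2.5, ... and can never collide with mainline
    mainline  = ["1.2.3", "1.3.0", "1.3.1"]
    backports = ["1.2.4", "1.2.5"]
    assert not set(mainline) & set(backports)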

---

like Ada, we should have a Reference Manual, an Annotated Reference Manual, a Rationale document, and a Conformity Test Suite:

" Ada proves itself in reliability with a track record of nearly four decades of usage in embedded, safety, and critical systems. Over this timeframe, Ada was updated three times, each time with a new Reference Manual, a more in-depth Annotated Reference Manual, and a Rationale document, describing the reasoning for each feature. Backing each of these changes is the Ada Conformity Assessment Test Suite (ACATS), a battery of freely available tests to help Ada compilers or interpreters properly interpret the standard. Ada 2012 takes reliability further, by supporting inline usage of an Ada subset called SPARK, which provides functional specification and static verification. "

---

"If A won't connect to B, you check the spec. If A isn't compliant with the spec, A is wrong and the vendor of A has to fix it. Same for B. If you can't tell who's wrong from the spec, the spec is wrong." -- https://news.ycombinator.com/item?id=31227880 someone replied " The problem of software specification in that comprehensive sense is unsolved. " -- https://news.ycombinator.com/item?id=31230613

---

A goal of the reference implementation of oot is to be relatively quickly and easily understandable by developers new to the project who do not have a ton of programming language implementation experience. The level of expertise we are targeting is someone who knows what a parser is but not necessarily what an LL(1) grammar is, who knows about concepts like concrete and abstract syntax trees, compiler optimizations, and compiler front ends and back ends, but not necessarily about any particular compiler optimization. In the reference implementation we are willing to accept a less efficient compiler in order to make it easier to contribute to.

we also aim to provide a 'production' implementation for each officially supported platform. For reasons of gravity, we want both the reference and production implementations to be part of the main project; we don't want to be like haskell, where an externally produced implementation became the de facto production implementation on commonly used platforms.

---

nice todos/project organization here:

https://web.archive.org/web/20220723205832/https://hg.sr.ht/~icefox/garnet

---

sounds like maybe a good license to consider in addition to apache 2 is 0BSD:

https://lwn.net/Articles/902410/ https://lobste.rs/s/tyz9lt/fedora_disallow_cc0_licensed_code

---

jmillikin on lobste.rs:

The issues with library (“crate”) organization are already apparent, and unless something is done about it relatively soon I think we’ll see a fracturing of the Rust ecosystem within 5 years. IMO the fundamental problem is that crates.io is a flat namespace (similar to Hackage or PyPI).

For example, the other day I needed to create+manipulate CPIO files from within a Rust tool. The library at https://crates.io/crates/cpio has no documentation and limited support for various CPIO flavors, but it still gets the top spot on crates.io just due to the name. There’s also https://crates.io/crates/cpio-archive, which is slightly better (some docs, supports the “portable ASCII” flavor) but it’s more difficult to find and the longer name makes it seem less “official”.

mtset replies:

Personally, I think this is a great use case for a social-web system; we’ve already seen this with metalibraries like stdx and stdcli, though none have stood the test of time. I think a namespacing system with organizational reexports could really shine; I’d publish cpio (sticking with the same example) as mtset/cpio, and then it could be included in collections as stdy/cpio or embedded/cpio or whatever. Reviews and graph data would help in decisionmaking, too.

---