proj-oot-ootDevelopmentProcessNotes1

Of course, Python has a great dev guide:

https://docs.python.org/devguide/index.html

---

http://www.reddit.com/r/haskell/comments/2if0fu/on_concerns_about_haskells_prelude_favoring/cl1mv9k

[–]aseipp 32 points 1 day ago*

RE last point:

    More importantly, the wiki helped summarize and explain a discussion that was extremely laborious to read and impossible to understand by looking through a mail list.

Yes yes yes yes. I really think this is needed the more that I noodle about it. I believe it has helped immensely with GHC and I think it's crucial for several reasons to help keep things moving.

Look at the Python Enhancement Proposals (PEP) index: http://legacy.python.org/dev/peps/

These are beautiful, straightforward, have a relatively standard format, are indexed for historical records, are revision controlled, and go through a standardization process like ours. Some are big, some are small.

As it stands, we have a 'process' (the last part), but it falls flat in several ways. We frequently hit points like:

    What proposals occurred before?
    What status is a current proposal in?
    Did a specific proposal occur before - why, and what happened to it?
    What discussion started the idea?
    What objections or concerns do people have about it?
    What refinements did a proposal have?

A mailing list just doesn't scale for recording these particular things in an accessible way, honestly.

libraries@haskell.org is great for discussion and open forums, when we want to have an open discussion about what might happen, and let people voice concerns. But it is awful as a historical index/catalogue, and an awful place to get an overview - many times the discussion happens first, but rarely do refined abstracts of a proposal appear. Reading the full mail thread is almost always required to get a 'full, updated understanding' of what state something might be in, just as you said.

I think adopting a process like the PEPs - with the primary motivation being a better way to document and evolve proposals - would be a great idea.

[–]tibbe 11 points 1 day ago

We do have a PEP process for the Haskell platform, modeled after the Python PEP process:

    http://trac.haskell.org/haskell-platform/wiki/AddingPackages
    http://trac.haskell.org/haskell-platform/wiki/Proposals

I must say it hasn't been a major success however.

[–]hailmattyhall 3 points 1 day ago

It kind of sounds like no one knows about it. Could that be why it hasn't been successful or do you think there is something wrong with the proposal idea itself?

[–]tibbe 4 points 1 day ago

I think it's most likely a combination of issues, one of them is that it's not widely known.

[–]edwardkmett 16 points 1 day ago*

We have a proposal.

It is the Foldable/Traversable Proposal / Burning Bridges Proposal a year and a half ago. It garnered over 100+ comments, across 2-3 threads, heavily biased in the positive on the topic of Foldable/Traversable generalization.

A large part of the reason for the formation of the committee was to manage the sense of frustration that folks in the community had that nothing could ever change without complete universal agreement on any package that didn't have a dedicated maintainer. As more and more of the platform fell nominally into GHC HQ's hands there was nobody there who felt responsible for making decisions.

Mind you, the proposal itself just said "swap out the monomorphic versions of combinators in base with the ones from Foldable and Traversable" and after SPJ formed the committee, and once we collectively figured out how to work with GHC HQ we did.

That part is done.

It was easy except for a few knock-on effects that we found when we went to implement it. Data.List and Control.Monad re-exported monomorphic functions, and we had to restructure parts of base to put the code in the right place to allow the re-exports. Finally, since we wanted to do it without changing semantics in user code, we needed the raw material from the "extra members in the Foldable class" proposal from a few weeks back.

If we just want that proposal, it is summarized in that sentence directly, and already implemented.

Anything around haskell2010 or trying to make things better for folks who would prefer a more monomorphic Prelude is actually an extension to the scope.

If you want us to go farther and try to make it easier for folks to try to work with a monomorphic Prelude, these are the things we need help with, but they are technically out of scope of the original proposal.

I personally want that smoother upgrade path, but we could live without it.

[–]hailmattyhall 2 points 1 day ago

    It is the Foldable/Traversable Proposal / Burning Bridges Proposal a year and a half ago. It garnered over 100+ comments, across 2-3 threads, heavily biased in the positive on the topic of Foldable/Traversable generalization.

Where was it posted? Was it easy to miss?

[–]edwardkmett 6 points 1 day ago*

It was a proposal on the libraries@ mailing list under "Burning Bridges", "Foldable/flexible bridges", and several other titles that raged for a couple of months, and more or less overwhelmed the mailing list in May 2013.

The formation of the core libraries committee spun out of the "Making decisions" thread that SPJ formed in response to the furor.

[–]hailmattyhall 2 points 1 day ago

Ok, thanks

[–]rwbarton 4 points 1 day ago

The proposal "Swap out the monomorphic instances in base with the ones from Foldable and Traversable" doesn't say anything about exporting new names from Prelude like foldMap, mappend, and traverse. I don't see how it is logically necessitated by generalizing the functions which are already exported by Prelude. Someone who actually wants to write a Foldable or Traversable instance can import the necessary modules.

Was this simply a mistake? If not it looks exactly like committing the code into the repo before writing the proposal.

[–]edwardkmett 8 points 1 day ago

I would argue that it was a decision.

We have a Libraries Submission process.

    The maintainer is trusted to decide what changes to make to the package, and when. They are strongly encouraged to follow the guidance below, but the general principle is: the community offers opinions, but the maintainers decide.

The core libraries committee acts as a collective maintainer for the portions of the Haskell Platform that aren't maintained by anyone else.

We had a nuance to the proposal that required a decision, and so we made one.

Admittedly we also have,

    API changes should be discussed on the libraries mailing list prior to making the change, even if the maintainer is the proposer. The maintainer still has ultimate say in what changes are made, but the community should have the opportunity to comment on changes. However, unanimity (or even a majority) is not required.
    Changes that simply widen the API by adding new functions are a bit of a grey area. It's better to consult the community, because there may be useful feedback about (say) the order of arguments, or the name of the function, or whatnot. On the other hand few clients will actually break if you add a new function to the API. Use your judgment.

The precise space of names we want to add to the Prelude is still open for discussion. It is in the space of things we need to talk about -- we have as-yet unmerged changes into GHC around this question, but I do personally think the balancing act of utility vs. complete non-breakage is best served by making the classes that are exported from the Prelude implementable as given to you.

---

i find the Scheme development process to have various flaws.

footnote 1: "In 1971, largely in response to the need for programming language independence, the work was reorganized: development of the Data Description Language was continued by the Data Description Language Committee, while the COBOL DML was taken over by the COBOL language committee. With hindsight, this split had unfortunate consequences. The two groups never quite managed to synchronize their specifications, leaving vendors to patch up the differences. The inevitable consequence was a lack of interoperability among implementations." -- http://en.wikipedia.org/wiki/CODASYL

---


maybe use semantic versioning.

At any one time, development is going on in two places: in the previous major version, and in the experimental/development version (versioned via pre-release identifiers). At some point, open-ended development stops in the pre-release version, and it transitions into testing (alpha, beta, rc, etc) and then into release; at this point, (a) new releases stop being developed on the old major version (although whatever new release was currently in development is finished, and maybe the next one, if development had already started on it), (b) a new experimental/development version for the next major release is opened.

The experimental/development versions' prerelease numbers are in two parts, a 'minor revision' number and a 'patch number'. The minor revisions are incremented to indicate significant new ('public API'-changing, e.g. language changes, not just toolchain improvements) functionality, and the patches are incremented for everything else. Note the presence of 'significant' there; in the experimental/development versions, generally the minor version number should only be incremented when at least one of the changes since the previous minor version has been discussed in a PEP (and any change that has been discussed in a PEP warrants a minor version bump in the prerelease).

So e.g. you'd have Oot 2.2; and simultaneously Oot 3.0.0-22.3.

In the released series, you'd have the same thing going on with the next minor release.

So e.g. you'd have Oot 2.2; and simultaneously Oot 2.3.0-11.2; and simultaneously Oot 3.0.0-22.3.

At some point Oot 3.0.0 would start moving towards a release, at which point an 'alpha.' prefix would be introduced; a minor and patch pre-release number would still be there (because in alpha and beta we would probably still have occasion to change things, not just fix bugs, although we'd try not to, trying harder not to in beta than in alpha). Then it would go thru e.g. Oot 3.0.0-alpha.0.0, Oot 3.0.0-alpha.2.11, Oot 3.0.0-beta.0.0, Oot 3.0.0-beta.1.11, Oot 3.0.0-rc.0, Oot 3.0.0-rc.3, and finally Oot 3.0.0. When Oot 3.0.0-alpha.0.0 was created, creation of new minor development versions of Oot 2.x would not occur anymore (e.g. Oot 2.3.0-11.2 would still be developed into Oot 2.3.0, and that would still be patched, but no Oot 2.4.0-* would be created); that activity would move to Oot 3.1.0-*.
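
To make the intended ordering concrete, here is a minimal sketch in Python; the Oot version strings are just the hypothetical examples above, and the comparison rules assumed are plain SemVer 2.0.0 precedence (nothing Oot-specific): the development prereleases sort before alpha/beta/rc, which in turn sort before the final release.

    # A sketch of SemVer 2.0.0 precedence applied to the hypothetical Oot version
    # strings above; the parsing/comparison below follows the SemVer spec.

    def parse(version):
        """Split 'X.Y.Z-pre.release.ids' into ((X, Y, Z), [prerelease ids])."""
        core, _, pre = version.partition('-')
        triple = tuple(int(part) for part in core.split('.'))
        prerelease = pre.split('.') if pre else []
        return triple, prerelease

    def _id_key(ident):
        # SemVer: numeric identifiers compare numerically and rank below alphanumeric ones.
        return (0, int(ident), '') if ident.isdigit() else (1, 0, ident)

    def precedence_key(version):
        triple, prerelease = parse(version)
        # A version with no prerelease identifiers outranks any prerelease of the same triple.
        return (triple, 0 if not prerelease else -1, [_id_key(i) for i in prerelease])

    versions = ["3.0.0-22.3", "3.0.0-alpha.0.0", "3.0.0-beta.1.11", "3.0.0-rc.3", "3.0.0"]
    assert sorted(versions, key=precedence_key) == versions  # already in release order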

?: to which of these versions is a PEP-like process (sorta) strictly applied? i'm guessing, only the minor releases after the major release. A PEP-like process is sloppily applied to the next major release? Perhaps we should also have a 'sandbox' version without any PEP process being applied (more of a pure BD (benevolent dictator/dictatorial) process)? Or perhaps instead add another grouping to the pre-release versions that can only be incremented when everything included has been documented in a PEP-like thing?

A: the solution i like best is to have a 'bayle's sandbox' branch off of the experimental version; because the latest 'bayle's sandbox' will always include the changes from the latest experimental version. Then require at least a short PEP-like thing before inclusion of new stuff from bayle's sandbox into the actual experimental version. Recall that one of the ideas of a PEP is to alert the wider community that a change is being considered, so that people who have criticism can state it. Note also that stuff can be put in during one experimental pre-release version and then taken out in a later experimental pre-release version; so it's not crucial for the PEP discussion to terminate before the decision is made to put it in there. Instead, there is a lower standard: something goes from the sandbox into the experimental prereleases when the BD thinks the change is sufficiently likely to remain so as to make it worthwhile for contributors to experimental to merge with it now.

---

for major version 0, semver is silent. But we could use an (unevenly applied) system that mimics semver, but in multiple microcosms, in two layers. The previous sentence is gibberish, but what i mean is this: versions of the form 0.0.0-(X) work as if X was the whole version, and X was governed by the semver.org standard for major versions above 0 (so this is a 'microcosm' of the semver system). Similarly, versions of the form 0.0.1-(X) work as if X was the whole version, and X was governed by the semver.org standard for major versions above 0. Etc. (that's the first 'layer'). Similarly, versions of the form 0.1.0-(X) work as if X was the whole version, and X was governed by the semver.org standard for major versions above 0 (this is the second 'layer').

So, in other words, we'd start with version 0.0.0-0.0.1. The 0.0.1 part (the "pre-release version" according to semver) is interpreted using semver as if 0 was a major version above 0 (that is, major version 0, minor version 0, patch version 1). We keep on going with this until we feel that some sort of stability threshold has been breached, at which point we increment the patch version, and get to 0.0.1-0.0.1. Then we keep going with that, incrementing the patch version, until we get the urge to release a version 1.0. Instead, we increment the minor version.
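
A minimal sketch of how such a version string could be read, assuming the 'microcosm' format just described; the function and field names are made up for illustration.

    # Reads a major-version-0 version string as described above: the part after
    # the '-' is itself treated as a semver triple ('microcosm').

    def parse_microcosm(version):
        outer, _, inner = version.partition('-')
        major, minor, patch = (int(x) for x in outer.split('.'))
        assert major == 0, "this scheme only applies to major version 0"
        inner_major, inner_minor, inner_patch = (int(x) for x in inner.split('.'))
        return {
            'design_rc': (minor, patch),                            # progress towards 1.0
            'microcosm': (inner_major, inner_minor, inner_patch),   # its own little semver
        }

    print(parse_microcosm("0.0.0-0.0.1"))   # the very first version in the scheme
    print(parse_microcosm("0.0.1-0.0.1"))   # after the first stability threshold
    print(parse_microcosm("0.1.0-2.3.0"))   # a 'design release candidate' for 1.0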

Part of the motivation here is to defer our urge to release a version 1.0. Why? Because i see that Rust declared that they are moving towards a 1.0, but it seems to me that they are still making substantial changes in the language. So, the system i proposed has sort of a design-time variant of the idea of 'release candidate'; each minor version under major version 0 is sort of a 'release candidate' for the language design. This would allow a language to, like Rust, say "ok we're trying to move towards a 1.0 here, but we still have to finish the design in areas X and Y", but then instead of actually releasing a 1.0 when X and Y are finished, releasing a 'release candidate' with a version like 0.1.0; this would allow more time to actually evaluate the decisions made in X and Y as work on the language goes on. Once design changes seem to have settled down (determined by empirically looking backwards, not just because it seems like it will be time for a 1.0 soon), only then do you release a 1.0.

The motivation for the second layer is that you probably want a way to indicate progress towards a 1.0 before you hit these 'design release candidates'. The motivation for using the pre-release version as another semver is so that, while you are iterating, you can still express to people whether some change is a major backwards-incompatible change in how the language works, a significant backwards-compatible addition, or a minor bugfix or implementation change. We would interpret semver sloppily with respect to this pre-release microcosm, in that we might sometimes allow 'minor' backwards-incompatible changes to increment the prerelease-minor-version rather than the prerelease-major-version, or 'minor' additions to be prerelease-patch-version increments rather than prerelease-minor-version increments.

A defect with the idea of using the major version 0's minor and patch version in this manner is that it makes the rules for major version 0 different from the experimental development on later major versions; even though we probably want to apply a similar process in each case.

Perhaps we should just expand the above experimental pre-release versioning idea to have a prerelease-major, which fills the 'design release candidate' role noted here. That makes the minor and patch versions for major version 0 unused. However, for major version 0, we could still use them in the way described above, for publicity (e.g. to broadcast when changes are settling down for the purpose of attracting more users and contributors when we feel like we're almost done).

---

i think we need to modify this slightly: at any given point in time, there needs to be potentially 4 points of development, not 2 or 3:

the next major version is distinct from the experimental version because you might want to isolate a subset of the ideas from the experimental version and release them as a new major revision of the language. Eg in the experimental version you try all sorts of crazy stuff; at any given point in time, you'll probably only be sure about some of the stuff you've tried, while much of it seems good but you want more confidence before you bake it into a major version of the language, forcing adopters to actually use it for real. So you put the stuff you're confident about into the next major version, and leave the rest in the experimental branch.

This isn't too different from the 'Bayle's branch' idea above, but it is different in that you want to officially encourage the language developer community to all be trying it out, on a synchronized version; which is why now i call it an experimental version, not a branch. There might still be a Bayle's branch off of the experimental version, which everyone is not officially encouraged to try out.

---

the last minor version of the previous major version of the language receives security fixes, bug fixes, and possibly other small changes such as deprecations and helpers that could help incrementally convert code towards the next major version.

the last minor version of the second-previous major version of the language receives security fixes.

---

after each minor version is released, we immediately try to start a new release based on HEAD; we don't wait for sufficient changes to pile up

---

rust moved to a 'train' model:

http://blog.rust-lang.org/2014/10/30/Stability.html

and they provide some elegant justifications for not stabilizing specific things and for not allowing ppl to opt in to unstable features:

" ...we cannot stabilize syntax extensions, which are plugins with complete access to compiler internals. Stabilizing it would effectively forever freeze the internals of the compiler; we need to design a more deliberate interface between extensions and the compiler. So syntax extensions will remain behind a feature gate for 1.0. ...

What about stability attributes outside of the standard library?

Library authors can continue to use stability attributes as they do today to mark their own stability promises. These attributes are not tied into the Rust release channels by default. That is, when you’re compiling on Rust stable, you can only use stable APIs from the standard library, but you can opt into experimental APIs from other libraries. The Rust release channels are about making upgrading Rust itself (the compiler and standard library) painless.

Library authors should follow semver; we will soon publish an RFC defining how library stability attributes and semver interact.

Why not allow opting in to instability in the stable release?

There are three problems with allowing unstable features on the stable release.

First, as the web has shown numerous times, merely advertising instability doesn’t work. Once features are in wide use it is very hard to change them – and once features are available at all, it is very hard to prevent them from being used. Mechanisms like “vendor prefixes” on the web that were meant to support experimentation instead led to de facto standardization.

Second, unstable features are by definition work in progress. But the beta/stable snapshots freeze the feature at scheduled points in time, while library authors will want to work with the latest version of the feature.

Finally, we simply cannot deliver stability for Rust unless we enforce it. Our promise is that, if you are using the stable release of Rust, you will never dread upgrading to the next release. If libraries could opt in to instability, then we could only keep this promise if all library authors guaranteed the same thing by supporting all three release channels simultaneously.

It’s not realistic or necessary for the entire ecosystem to flawlessly deal with these problems. Instead, we will enforce that stable means stable: the stable channel provides only stable features.

What are the stability caveats?

We reserve the right to fix compiler bugs, patch safety holes, and change type inference in ways that may occasionally require new type annotations. We do not expect any of these changes to cause headaches when upgrading Rust.

The library API caveats will be laid out in a forthcoming RFC, but are similarly designed to minimize upgrade pain in practice. "

---

i don't like how Rust's 'alpha' release claims to be feature complete but also plans further breaking language changes:

http://blog.rust-lang.org/2015/01/09/Rust-1.0-alpha.html

---

"

heydenberk 8 hours ago

I'm impressed with how thoughtful the entire Rust development cycle has been. They've managed not just to strike a balance between stability and language redesign, but to approach API and syntax standardization in a way that made tracking the changes relatively painless. Some of this is done in the language itself, with opt-in statements and stability annotations in the API, but the tooling around compatibility in the ecosystem is quite useful as well. Now they're going to be doing CI on their own nightlies against packages in the ecosystem:

> To help ensure that we don’t accidentally introduce breakage as we add new features, we’ve also been working on an exciting new CI infrastructure to allow us to monitor which packages are building with the Nightly builds and detect regressions across the entire Rust ecosystem, not just our own test base.

idunning 8 hours ago

CI against package ecosystems is a really great idea. We do it for Julia too [1], and it can identify some really subtle issues that would otherwise take longer to become apparent.

http://pkg.julialang.org/pulse.html

"

---

from the Rust Beta release announcement:

" The Beta release also marks a turning point in our approach to stability. During the alpha cycle, the use of unstable APIs and language features was permitted, but triggered a warning. As of the Beta release, the use of unstable APIs will become an error (unless you are using Nightly builds or building from source).

The Rust ecosystem continues to grow. The crates.io repository just passed 1 million downloads and has over 1,700 crates available. Many of the top crates in crates.io can now be built using only stable Rust, and efforts to port the remainder are underway. Therefore, we are now recommending that new users start with the Beta release, rather than the Nightly builds, and the rustup script will be modified to install Beta by default. (However, it is easy to switch to the Nightly build if some of your dependencies aren’t updated yet. See the install page for details.)

What happens during the beta cycle?

The final Rust 1.0 release is scheduled for May 15th – exactly six weeks from now. In the interim, we expect to put most of our effort into fixing bugs, improving documentation and error messages, and otherwise improving the end-user experience. We don’t plan on making functional changes to stable content, though naturally we may make minor corrections or additions to the library APIs if shortcomings or problems are uncovered (but the bar for such changes is relatively high).

While we don’t expect to add any new features (or major new APIs) for the 1.0 release, that doesn’t mean we’re going to stop working on them altogether. In fact, quite the opposite! Per the train model, the plan is to continue development on new features on the master branch, in parallel with the beta. And of course, we’ll be issuing the beta for 1.1 release at the same time as we issue the final 1.0 release, so you shouldn’t have to wait long to start putting that work to use.

To help ensure that we don’t accidentally introduce breakage as we add new features, we’ve also been working on an exciting new CI infrastructure to allow us to monitor which packages are building with the Nightly builds and detect regressions across the entire Rust ecosystem, not just our own test base. This infrastructure is still in the development phase, but you can see a sample report here. "

---

rust's new governance and RFC process:

discussion:

summary:

previous governance structure:

the proposer of the new governance process thought the use of RFCs is very helpful. The proposer thinks it's important for all to understand that "... people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer."

the proposer of the new governance process thought the following things needed improvement:

the new system:

core team + several topically-focused subteams

initial subteams:

subteam composition:

subteams do these things (within their topical area):

subteam leader (core team member):

subteam deliberations must be transparent/public

composition of core team:

the core team does these things:

subteam/RFC mechanics:

moderation:

comparison to Mozilla's 'module' system:

the proposer of the current system says it can be seen as "an evolution of the current core team structure to include subteams", or alternately as "an evolution of (Mozilla's) module system where the subteam leaders (module owners) are integrated into an explicit core team"

my opinions:

i dont feel that this process would be disastrous, just that it may produce (a) a conservative rejection of big language changes, both de jure from the consensus requirements, and de facto from the separation of the language design and implementation teams, and (b) a creeping lack of coherency, as subteam visions diverge and approve RFCs for the nightlies with no central direction.

instead, i would recommend a system where (a) the 'subteams' are explicitly merely subordinate delegates of the core team: the subteam discusses RFCs, filters out the ones it doesn't like, and then presents the core team with the ones it does like for approval, and the core team would directly appoint subteam members; (b) the subteams are divided by language design topic, not by language design/library/compiler; and (c) decisions require less than full consensus, especially if outside experts and users are being given a formal seat at the table.

---

boatzart 1 day ago

I'm so happy they finally moved the official documentation to Doxygen (http://docs.opencv.org/master/index.html), which appear to be a huge step up from their old Sphinx docs (http://docs.opencv.org/).

---

not sure if i agree with this or not, i havent read it yet, but here's OpenCV's very detailed (old? it talks about Sphinx but i thought they used Doxygen?) style guide: http://code.opencv.org/projects/opencv/wiki/Coding_Style_Guide?version=1

---

this person has an interesting proposal for a 'rest period' for any potentially breaking changes, even 'minor' ones, after 1.0, after noting that eis company decided not to use Rust after seeing the discussion about a possibly breaking change, https://internals.rust-lang.org/t/pre-rfc-adjust-default-object-bounds/2199/52 .

MetaCosm 6 minutes ago

> Tell me what you wanted Rust to do instead...

Well -- if we are going into what I would have preferred. I would have preferred a rest period after release. No ongoing changes across 3 trains on a 6 week cycle. Give the community time to accept, adapt, bitch, moan, whine, create workarounds, and then better workarounds. Let enterprises have time to buy in, test, accept or reject and give feedback. Deal with critical issues and focus on stability as a deliverable... and don't break anything during the rest period (even if that means being on a single version of LLVM and no new traits or modules or functions or whatever).

---

A comment on https://internals.rust-lang.org/t/pre-rfc-adjust-default-object-bounds/2199/52 suggests (criticizes, actually, but i like the idea they're criticizing) an interesting middle ground between 'no breaking changes (with minor caveats)' and allowing breaking changes freely:

the idea is to not count something as a breaking change if it doesn't break any code that is on crates.io (Rust's CPAN)

btw what Rust said when they announced 1.0 was:

"The 1.0 release marks the end of that churn. This release is the official beginning of our commitment to stability, and as such it offers a firm foundation for building applications and libraries. From this point forward, breaking changes are largely out of scope (some minor caveats apply, such as compiler bugs)." minor caveats are hyperlinks to Rust RFC: Policy on semver and API evolution and Rust RFC: Semantic versioning for the language

https://github.com/rust-lang/rfcs/blob/master/text/1122-language-semver.md

on that discussion nikomatsakis said:

"One interesting question is whether we will accept other sorts of breaking changes that are not strictly tied to soundness, but just cases where we seem to have gotten something slightly wrong. I have exactly one example in mind at the moment (some details of trait object lifetime defaults), but I can easily imagine that this scenario will arise. Probably best is to just address such things on a case-by-case basis: every good rule needs an exception, after all."

on hackernews, pcwalton said that technically, any of the following is a potentially-breaking change:

---

although Oot itself has optional static typing, the canonical Oot package repo requires strict mode to be turned on for packages it hosts (but mb have a special section called 'vetted dynamic' for packages which can't use strict mode b/c the type system is not expressive enuf?)

---

i dunno where i wrote this (mb i just said it to Dana), but i once wrote that mb projects could have some major releases that just remove cruft and dont add new functionality. Both of us immediately said, yeah, but then users have no incentive to upgrade, eg Python 3. so just one more data point there, now Ember is doing the same thing: http://emberjs.com/blog/2015/08/13/ember-2-0-released.html

---

one reason Python is so slow to switch from Python 2 to Python 3: the official recommendation is that the "python" command defaults to Python 2, even in Q3 2015! (Python 3 was released at the end of 2008, 7 years ago!). They anticipate "that there will eventually come a time where the third party ecosystem surrounding Python 3 is sufficiently mature" to change this recommendation!

https://www.python.org/dev/peps/pep-0394/

---

Tyr42 4 days ago

Though, I want to make clear that it is permissible for some theoretical code to stop compiling when going from 1.4 to 1.5, for example. Check out https://github.com/nikomatsakis/rfcs/blob/projection-and-lif..., for an example of this.

This is because the compiler wasn't checking some cases strictly enough, and you could "hide" constraints from the compiler, causing them to fail to be enforced.

They checked against crates.io when raising this RFC, and found that there were 35 crates (packages) that would be able to keep compiling if this turned into a future-compat warning, to become an error in the release afterwards.

The example

    trait Test {
        fn test(&self) -> Option<Self>;
        //                ~~~~~~~~~~~~
        //            Incorrectly permitted before.
    }

Should be

    trait Test: Sized {
        fn test(&self) -> Option<Self>;
    }

Because `Option<T>` requires that `T` be `Sized`. Poking around on 1.3, I think that it would be an error to instantiate this trait with a [i8] or something that was not sized anyways, so you're not losing much by needing to declare the trait properly.

kibwen 4 days ago

To elaborate, Rust has a category of bugs called "soundness bugs" which are allowed to justify breaking backwards compatibility, though the team still endeavors to make breakage as minimal and painless as possible. The Rust developers have written a tool called Crater to gauge the impact of any potential change to the compiler, which is actually really neat: it compiles nightly versions of the compiler against every package on crates.io (the central third-party package repository) and generates a report of any packages that fail to compile due to changes to the compiler.

To answer the question of why they don't bump the major version when soundness fixes break backcompat, it's because soundness bugs generally have implications for memory safety/undefined behavior, and it's assumed that people using Rust would prefer unsound code to break rather than to continue being potentially unsafe. Also, thanks to Crater, the Rust developers generally take it upon themselves to file patches with third-party crates to update them such that the package developers themselves are (ideally) neither blindsided nor inconvenienced.

ColinDabritz 4 days ago

Neat! That's very good handling of "as intended". Being explicit about it, having a warning phase, then moving to "more correct" to match the intent, and missed edge cases.

---

 ant6n 4 days ago

How does it relate to writing C++11 without using raw pointers. I.e. using uniq_ptr (single owner), shared_ptr (multiple owners), references (not-an-owner). Are there benefits beyond that; and does it have the capabilities C++ has regarding making classes, operator overloading; templates?

lambda 4 days ago

Rust's model is similar to a cleaned-up and guaranteed safe use of modern C++ smart pointers and references.

A few of the improvements that Rust offers are compiler enforced move semantics by default, so assigning, returning, or passing a Box<T> (similar to unique_ptr<T> in C++) will move the ownership to the new location and statically check that you never try to access the old location. In C++, you have to explicitly call std::move(), and checks to make sure you don't reference the old one happen at runtime.

Rust also does safer checking of use of references. In C++, it's possible to return a reference to a stack variable which is no longer valid. In Rust, the lifetimes of references are tracked statically, so you can never have a dangling reference.

So, while the RAII pattern, use of smart pointers, and use of references, will feel fairly familiar to someone familiar with modern C++, they are actually generally more convenient and more safe in Rust.

Rust does not have classes. You define plain old data structures, and use traits to provide polymorphic behavior. Traits are somewhat like interfaces which can contain default implementations of methods. Traits can be used to constrain types. Traits can be used to provide either static (compile time) dispatch, or runtime dispatch, depending on whether the object is monomorphized at compile time or accessed through a fat pointer that includes a vtable to dispatch at runtime.

Rust provides generics which fill a similar niche to templates in C++. In some ways, they are more powerful as arguments can be constrained by traits; in some ways they are less powerful than C++ templates.

For an introduction to traits in Rust, take a look at the book: https://doc.rust-lang.org/book/traits.html

Rust doesn't provide operator overloading directly, but use of operators is equivalent to just calling a method defined on a particular trait in Rust. So, "a + b" will work for any types that implement the Add trait (https://doc.rust-lang.org/std/ops/trait.Add.html), and is equivalent to calling a.add(b).

ant6n 4 days ago

One thing I'd miss from C++ are the alternative operator representations for && and ||, i.e. "and" and "or".

 Manishearth 4 days ago

> templates

The vast majority of C++ template usage is achieved by generics, and in a much more type safe and debuggable manner.

Most of the remaining things can be achieved by Rust's hygienic macros (which are not like C++ macros, they're more like C++ templates in how they can be used and the guarantees they provide).

steveklabnik 4 days ago

  > How does it relate to writing C++11 without using raw pointers.

Rust's safety features are similar to those, but even stronger in a number of ways. There's two aspects to this: 1. more guarantees. You can write code with uniq_ptr that segfaults, but you cannot with Box<T>. 2. safety allows for patterns that would be dangerous in C++. For example, shared_ptr uses atomics for thread safety. Rust has two variants, Arc<T> with similar atomics, and Rc<T> without. If you're not using multiple threads, you can use Rc<T>, safe in knowing that Rust will give you a compile-time error if you introduce multithreading. In other words, there are ways that you need to be defensive in C++ that you do not in Rust.

  > does it have the capabilities C++ has regarding making classes, 

Rust does not really have OOP, exactly, we have structs and traits (concepts). One major difference is that we don't currently have inheritance, though proposals are being worked on. But since we already have the concept equivalent, we have extra power there. Tradeoffs.

  > operator overloading

Rust lets you overload certain pre-defined operators, but not create new ones.

  > templates?

AST-based, hygienic macros, instead.

---

RoR has a page called 'doctrine': http://rubyonrails.org/doctrine

i guess we should consider codifying our 'doctrine' and calling it that

---

colin_mccabe 1 day ago

While I agree that Java has been one of the most stable and backwards-compatible languages out there (especially compared to things like Scala, which regularly breaks source and binary backwards compat), there are still rough edges. I am not the original poster, but I can speak for why JVM deployments take time on Hadoop (which I work on).

---

http://aturon.github.io/blog/2016/07/05/rfc-refinement/

Proposal: Roadmap

At the heart of Rust’s open development is the RFC process ...

Idea: publish a roadmap on a regular cadence, e.g. every two release cycles (12 weeks).

The roadmap would contain, at a minimum, a small set of “major initiatives” for that period. An initiative might cover any phase of development, e.g.:

    Early investigation: For example, building out NDK support in rustup or exploring implications of various memory model choices.
    Design: For example, working out a revised design for rand or const generics.
    Implementation: For example, the MIR initiative or rustbuild.
    Documentation: For example, focused effort on updating API docs in a portion of the standard library.
    Community: For example, launching RustBridge....
    Clear scope: an initiative should have clear-cut goals that can actually be finished. So, an open-ended goal like “MIR” doesn’t fly, but “Get MIR-trans working on all of crates.io” does.
    Timeboxed: relatedly, an initiative should realistically last at most, say, 24 weeks (two roadmaps).
    Commitment: There should be some level of commitment from multiple people to actually work on the initiative. In particular, the initiative should list some primary points of contact, and ideally mentors.

...

We might want to mandate that we have a compiler in BOTH directions for most language changes at each major version. It doesn't have to be 100%; eg if a change is really enabled by the runtime and would require an entire interpreter layer to accomplish, forget it; also if a change is too hard to automatically compile away, some human intervention may be required, but the inter-version compiler should at least detect those locations at which a human should take a look for a potential additional change.

---

ok so:

version triple:

plus:

These are all non-negative integers between 0 and 32767, inclusive.

When either 'major' OR 'minor' is 0, all bets are off (these are called 'unstable releases'). Otherwise ('stable releases'), for any backwards-incompatible change, either major or minor must be incremented. Either 'major', 'minor', or 'patch' must be incremented for each new release.

'pre-release version' must be its maximal value, 32767, for any release. During development, however, it may be lower. A higher pre-release version indicates a 'later' pre-release, with 32767 (the actual release) coming after all lower pre-release versions. Pre-release version ranges indicate whether the pre-release is pre-alpha, alpha, beta, or release-candidate: 0-8191 are pre-alpha, 8192-16383 are alpha, 16384-24575 are beta, 24576-32766 are release-candidate, and 32767 (or greater) is a release (values greater than or equal to 32767 have no precedence between each other; this could potentially be used to store 'build metadata' for releases, but note the potential impact on reproducible builds).
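
A minimal sketch of the phase boundaries just described; the thresholds are the ones proposed in this note, not any existing standard.

    def phase(prerelease):
        # Thresholds as proposed above (not a standard scheme).
        assert 0 <= prerelease
        if prerelease <= 8191:
            return 'pre-alpha'
        elif prerelease <= 16383:
            return 'alpha'
        elif prerelease <= 24575:
            return 'beta'
        elif prerelease <= 32766:
            return 'release-candidate'
        else:
            return 'release'   # 32767 (or greater); no precedence among these

    assert phase(0) == 'pre-alpha'
    assert phase(16384) == 'beta'
    assert phase(32767) == 'release'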

A language change (or change in a module versioned with the language) is backwards-incompatible if:

Design choices/motivation: Note that our 'minor' acts like SemVer's 'major'. This is because we want to reserve 'major' to indicate subjectively 'major' changes in the language. All bets are off when a minor version is 0 because we want to have space for unstable initial development of each new major version, just as if it were a new project. We don't have anything equivalent to SemVer's 'minor'; new functionality is allowed to only increment the patch version in our system. We define 'backwards-incompatible change' as a conjunction of changes in behavior and in the spec in order to allow ourselves the freedom to introduce changes that break existing code which relied upon bugs, or upon behavior that we didn't guarantee (for example, if some library function happens to always return sorted lists in the existing reference implementation, but we didn't intend to guarantee that).

---

Between any two backwards-incompatible stable releases within the same 'major' version, we will provide a linter that detects/warns about some of the places in your source code that may have to be changed due to a backwards incompatibility (ideally, this tool would also transcompile your code to make the changes for you, but this won't always be the case). We don't commit to detecting 100% of these places.
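
A sketch of the simplest form such a linter could take, assuming a per-release table of renamed/removed identifiers; the table entries here are hypothetical, and a real tool would ship one per backwards-incompatible release.

    import re
    import sys

    RENAMED_OR_REMOVED = {
        'old_fn': 'new_fn',   # hypothetical: renamed in the newer release
        'gone_fn': None,      # hypothetical: removed in the newer release
    }

    def lint(path):
        # Flag places that may need changing due to a backwards incompatibility.
        text = open(path).read()
        for name, replacement in RENAMED_OR_REMOVED.items():
            for match in re.finditer(r'\b%s\b' % re.escape(name), text):
                line = text.count('\n', 0, match.start()) + 1
                if replacement:
                    print(f"{path}:{line}: '{name}' was renamed to '{replacement}'")
                else:
                    print(f"{path}:{line}: '{name}' was removed; manual change needed")

    if __name__ == '__main__':
        for p in sys.argv[1:]:
            lint(p)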

---

One principle of Oot development is 'gravity', including a canonical implementation. We don't want a situation where there is a spec and various implementations that implement the spec but add implementation-dependent stuff (or worse, that only implement part of the spec), with little possibility for painless code reuse between them.

But we do want to have implementations for all sorts of different platforms. And we do want to have a spec. And we do want to at least legally permit third-party devs to distribute their own implementations (even if we frown upon this). So how do we do this?

---

interestingly, beloved Lua is open source but not developed in the open:

http://lua-users.org/lists/lua-l/2008-06/msg00407.html

---

Yarn's RFC process (and descriptive document) looks pretty good:

https://github.com/yarnpkg/rfcs

---

"...creating a kick ass framework isn't just about writing great code. You need excellent documentation, a strong community helping each other learn, a supporting cast of libraries and plugins to help users solve the hard problems, and short feedback loops based on user feedback to keep the framework relevant. Vue.js is all of that, plus great code. " -- [1]

---

annual-ish public surveys, possibly along the lines of:

http://stateofjs.com/ https://blog.rust-lang.org/2016/06/30/State-of-Rust-Survey-2016.html http://blog.cognitect.com/blog/2016/1/28/state-of-clojure-2015-survey-results

and then annual-ish roadmaps (in Rust, these are first proposed via an RFC)

---

Rust has rust-internals; Rust's annual roadmaps are discussed first there, then on an RFC

Python has python-ideas; i've seen a proposal start as a github issue, then once a rather detailed proposal appeared that e liked, the python lead dev (van Rossum) suggested posting that proposal to python-ideas

---

Rust's proposed 2017 roadmap's non-goals [2] are pretty interesting in that they are diametrically opposed to my thoughts for Oot (where there is always an oot-next).

This makes me wonder if maybe we need not just an oot-next, but two levels of oot-next (oot-next and oot-experimental or something like that). The idea being that oot-next is stuff that is currently being beaten into shape to become the next stable version of oot, whereas oot-experimental is where we play around with new ideas in a less inhibited manner.

Maybe adopt the Debian naming: stable, testing, unstable, experimental:

Instead of just saying that a 0 in the first or second version part means instability, we could extend that to the third part too. Then the user need only remember "anything with a zero in it is not stable", which is what people think anyways.

And n.0.0 could indicate 'experimental'.

So we can get the essence of the Debian classes through version numbers:

i'm a little worried about n.1.0 indicating 'testing' instead of 'stable'; although users sometimes distrust versions with a 0 patch level (eg 1.3.0), i think these are usually expected to be stable, so my proposal would be a departure from the usual meaning of versions like this.
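
For concreteness, a sketch of one way the version-number-to-class mapping could work given the constraints above ('anything with a zero in it is not stable', n.0.0 = experimental, n.m.0 = testing); treating n.0.p with p > 0 as 'unstable' is my own guess, since the mapping list above was left unfinished.

    def debian_class(major, minor, patch):
        if minor == 0 and patch == 0:
            return 'experimental'   # n.0.0, as stated above
        if minor == 0:
            return 'unstable'       # n.0.p, p > 0 (assumption)
        if patch == 0:
            return 'testing'        # n.m.0, m > 0 (the n.1.0 case discussed above)
        return 'stable'             # no zero anywhere

    assert debian_class(2, 0, 0) == 'experimental'
    assert debian_class(1, 3, 0) == 'testing'
    assert debian_class(1, 3, 2) == 'stable'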

Should we have simultaneous work on three major versions? Eg when version 1.3.2 is the latest stable version:

That may be too complicated. What about:

Note that within the stable series, things like 1.2.0 are still testing or unstable; since a new minor revision means (potential) backwards incompatibility, this is where we test out those backwards-incompatible things before committing to them (?)

---

i dunno if i ever put a guide to reading my design notes style anywhere. So here it is:

---

" RFC's and the transparency with the community are a mess. Not everyone has the time or inclination to read them over and this leads to things like "When is MIR landing? Is it turned on?" Some RFC's were accepted long ago but no work has been done on them. Having a dashboard that shows what RFC's have been accepted, which ones are actually being worked on, and target dates would go a long way for Rust. Having a place to point users would really help people have a clear understanding with what's coming down the line. " -- https://internals.rust-lang.org/t/setting-our-vision-for-the-2017-cycle/3958/9

---

carussell 350 days ago, on "Rewrite Everything in Rust":

Do user studies.

It's pretty much unheard of, but nearly every open source project would benefit far more from a week of watching potential contributors trying to get up to speed than it would from making sure the project roadmap is delivered a week sooner.

That's my claim for projects in general, and what you're asking for re Rust is a little different, but the approach can be applied there, too. Focus on everything from navigating the project website and other docs, to interactions with the tools that make up the "ecosystem", to programmer expectations that could shape the language (with the latter actually being the least important).

Don't trust users to self-report. Don't overestimate your ability to make a combined diagnosis and prescription, and don't underestimate how much silent resignation is killing your adoption rate. Observe and work off real notes.

---

ppl like PHP's documentation:

"(thorough, heavily commented, and unparalleled) documentation."

http://docs.php.net/docs.php

---

i guess every big project should have something like this:

http://mattwarren.org/2017/03/23/Hitchhikers-Guide-to-the-CoreCLR-Source-Code/

---

the Common Lisp HyperSpec has something called a 'permuted symbol index'. It's an index, except that each function whose name contains more than one hyphenated word appears in the index under the first letter of EACH word in it. For example, 'get-decoded-time' appears under G, D, and T. To make reading through it easier, the text in the index is aligned at the letter that is causing the term to appear. Examples:
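
(The HyperSpec's own example entries aren't reproduced in this note; below is a minimal sketch of how such a permuted index could be generated and aligned.)

    from collections import defaultdict

    def permuted_index(symbols):
        """File each hyphenated symbol under the first letter of EACH of its words,
        keeping the character offset of that word so entries can be aligned on it."""
        index = defaultdict(list)
        for sym in symbols:
            offset = 0
            for word in sym.split('-'):
                index[word[0].upper()].append((sym, offset))
                offset += len(word) + 1   # +1 for the hyphen
        return index

    idx = permuted_index(['get-decoded-time', 'get-universal-time', 'decode-float'])
    for letter in sorted(idx):
        print(letter)
        for sym, offset in sorted(idx[letter]):
            # Pad so the word that caused this entry starts in the same column
            # (assumes offsets stay under 20 for this toy input).
            print(' ' * (20 - offset) + sym)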

---

"The source of truth for pretty much any project is almost exclusively Github, godoc.org documentation has links directly into the source-code for every single function and struct"

---

we should write an official style guide, eg:

https://vuejs.org/v2/style-guide/

---

for oot we'd have 'major versions' instead of 'epochs', but here's a good list of things that should cohesively support new features/changes in a new major version:

" aturon commented on Jul 3, 2017 •

Rust's ecosystem, tooling, documentation, and compiler are constantly improving. To make it easier to follow development, and to provide a clear, coherent "rallying point" for this work, this RFC proposes that we declare an epoch every two or three years. Epochs are designated by the year in which they occur, and represent a release in which several elements come together:

    A significant, coherent set of new features and APIs have been stabilized since the previous epoch.
    Error messages and other important aspects of the user experience around these features are fully polished.
    Tooling (IDEs, rustfmt, Clippy, etc) has been updated to work properly with these new features.
    There is a guide to the new features, explaining why they're important and how they should influence the way you write Rust code.
    The book has been updated to cover the new features.
        Note that this is already required prior to stabilization, but in general these additions are put in an appendix; updating the book itself requires significant work, because new features can change the book in deep and cross-cutting ways. We don't block stabilization on that.
    The standard library and other core ecosystem crates have been updated to use the new features as appropriate.
    A new edition of the Rust Cookbook has been prepared, providing an updated set of guidance for which crates to use for various tasks.

Sometimes a feature we want to make available in a new epoch would require backwards-incompatible changes, like introducing a new keyword. In that case, the feature is only available by explicitly opting in to the new epoch. Existing code continues to compile, and crates can freely mix dependencies using different epochs.

Update: there's a Request for Explanation podcast episode about this RFC, which is a good way to quickly get up to speed! "

" The status quo

Today (((without epochs))), Rust evolution happens steadily through a combination of several mechanisms:

    The nightly/stable release channel split. Features that are still under development are usable only on the nightly channel, preventing de facto lock-in and thus leaving us free to iterate in ways that involve code breakage before "stabilizing" the feature.
    The rapid (six week) release process. Frequent releases on the stable channel allow features to stabilize as they become ready, rather than as part of a massive push toward an infrequent "feature-based" release. Consequently, Rust evolves in steady, small increments.
    Deprecation. Compiler support for deprecating language features and library APIs makes it possible to nudge people toward newer idioms without breaking existing code.

All told, the tools work together quite nicely to allow Rust to change and grow over time, while keeping old code working (with only occasional, very minor adjustments to account for things like changes to type inference.) "

" Furthermore:

    As with today, each new version of the compiler may gain stabilizations and deprecations.
    When opting in to a new epoch, existing deprecations may turn into hard errors, and the compiler may take advantage of that fact to repurpose existing usage, e.g. by introducing a new keyword. This is the only kind of breaking change an epoch opt-in can make.

Thus, code that compiles without warnings on the previous epoch (under the latest compiler release) will compile without errors on the next epoch (modulo the usual caveats about type inference changes and so on).

...

Warning-free code on epoch N must compile on epoch N+1 and have the same behavior.

There are only two things a new epoch can do that a normal release cannot:

    Change an existing deprecation into a hard error.
        This option is only available when the deprecation is expected to hit a relatively small percentage of code.
    Change an existing deprecation to deny by default, and leverage the corresponding lint setting to produce error messages as if the feature were removed entirely."

" the current pattern of first linting and then turning that lint into error by default then into a hard error "

---

so to summarize my takeaways from the previous section, applied to Oot:

Even though backwards-incompatible changes are allowed between minor versions (after 1.0), we try to offer painless upgrades between minor versions, prioritizing backwards compatibility (at the cost of slower improvement). But between major versions, we try to aggressively improve the language, prioritizing fixing old mistakes (at the cost of backwards compatibility across major versions).

To reiterate, after version 1.0, we want to offer painless upgrades within any major version series. Within one major version, we usually value backwards compatibility and stability over language improvement, although we do permit backwards incompatible changes across minor version bumps. Between major versions (and before 1.0), however, anything goes, and the language may or may not change dramatically without providing an easy upgrade path. Between major versions, we usually value language improvement more than backwards compatibility and stability.

---

geofft 16 hours ago [-]

There is no review process or central restrictions on who can upload to the Ubuntu Snap Store, so in a sense, this isn't surprising. https://docs.snapcraft.io/build-snaps/publish

Does the name "Ubuntu Snap Store" carry a connotation that code is reviewed for malware by Ubuntu, the way that the Apple, Google, Amazon, etc. mobile app stores are? Or does its presence in the software center app imply a connotation that it's endorsed by the OS vendor?

I was at a PyCon BoF earlier today about security where I learned that many developers - including experienced developers - believe that the presence of a package on the PyPI or npm package registries is some sort of indicator of quality/review, and they're surprised to learn that anyone can upload code to PyPI/npm. One reason they believe this is that they're hosted by the same organizations that provide the installer tools, so it feels like it's from an official source. (And on the flip side, I was surprised to learn that Conda does do security review of things they include in their official repositories; I assumed Conda would work like pip in this regard.)

Whether or not people should believe this, it's clear that they do. Is there something that the development communities can do to make it clearer that software in a certain repository is untrusted and unreviewed and we regard this as a feature? The developers above generally don't believe that the presence of a package on GitHub, for instance, is an indicator of anything, largely because they know that they themselves can get code on GitHub. But we don't really want people publishing hello-worlds to PyPI, npm, and so forth the way they would to GitHub as part of a tutorial, and the Ubuntu Snap Store is targeted at people who aren't app developers at all.

eat_veggies 16 hours ago [-]

I like Arch's package management model, where sources are split into the official repositories, which are manually approved, and the AUR, which everyone knows are not officially endorsed or reviewed, and to check the sources and PKGBUILDS for anything sketchy before installing.

The processes for installing from the two are also different enough that the user can't mistake one for the other: official packages are a pacman -S away, but installing from the AUR either requires a git clone and a makepkg -sri, or an AUR helper that bugs you to review the PKGBUILD.


zootboy 15 hours ago [-]

Also, compare the wording on the snap store:

> Safe to run - Not only are snaps kept separate, their data is kept separate too. Snaps communicate with each other only in ways that you approve.

Versus the AUR:

> DISCLAIMER: AUR packages are user produced content. Any use of the provided files is at your own risk.


---

[3] has an interesting idea:

label individual features (or sub-APIs, in the case of the K8s project) as alpha, beta, or stable, even though they are all present in a release.

'alpha' features are unstable, may change or disappear at any time, have not been subjected to an 'API review' (a thorough design review, presumably undertaken AFTER community experience with an implementation), and may not have the usual CLI/API/UI tooling support. 'beta' features have been reviewed, have tooling support, and have a (shortened) minimum guaranteed deprecation schedule.

i think Rust does something similar.

furthermore, [4] K8s discourages new features in patch releases (presumably because new features increase the probability of new bugs).
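(a minimal sketch of what per-feature maturity labels could look like in code, assuming nothing beyond the alpha/beta/stable scheme described above; the feature names and the warning behaviour are invented for illustration, in Python:)

import warnings
from enum import Enum

class Maturity(Enum):
    ALPHA = "alpha"    # may change or disappear at any time; no API review yet
    BETA = "beta"      # reviewed, has tooling support, short guaranteed deprecation window
    STABLE = "stable"  # full compatibility promises apply

FEATURES = {
    "pattern_matching": Maturity.STABLE,
    "effect_handlers": Maturity.BETA,
    "dependent_records": Maturity.ALPHA,   # names are made up for illustration
}

def require_feature(name: str) -> None:
    maturity = FEATURES[name]
    if maturity is Maturity.ALPHA:
        warnings.warn(f"{name} is an alpha feature: unreviewed, may change or be removed")
    elif maturity is Maturity.BETA:
        warnings.warn(f"{name} is a beta feature: reviewed, but its API may still shift")

require_feature("dependent_records")  # emits an alpha warning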

---

Ethereum's EIP process, which was based on Bitcoin's BIP process, which was based on Python's PEP process:

https://eips.ethereum.org/EIPS/eip-1

i mostly like this (especially the definition of the metadata fields at the top of each EIP), except i would change:

---

athenot 2 hours ago [-]

As an outsider, I'd like to see somewhere near the home page a few short snippets of code to get a feel for Julia and hopefully show the kind of uses for which it is a natural choice.

Nim's home page¹ shows a piece of sample code right at the top. Perl6's page² has a few tabs quickly showing some patterns it's good at. Golang³ has a dynamic interpreter prepopulated with a Hello World.

Julia's home page shows a nice feature list and links to docs to deep dive but it doesn't do a good job of selling it.

¹ https://nim-lang.org

² https://perl6.org

³ https://golang.org


ChrisRackauckas 2 hours ago [-]

For scientific computing, showing the package ecosystem is the most important thing. When you look at this thread, people are asking about dataframes and differential equations. Julia's site reflects this: yes there are things like Pandas, and for plotting, etc.


---

Go has 'draft designs' which might evolve into 'proposals' (which are issue-tracked, i think). 'Draft designs' have a "problem overview" (think "cover letter"), a 'draft design', and a 'wiki feedback page'.

https://go.googlesource.com/proposal/+/master/design/go2draft.md

---

interesting things from the Rust development process:

"No New Rationale rule: decisions must be made only on the basis of rationale already debated in public (to a steady state)."

this is intended to prevent the core team from making decisions in a back room without really participating in the RFC process.

"At some point, a member of the subteam will propose a “motion for final comment period” (FCP), along with a disposition for the RFC (merge, close, or postpone). This step is taken when enough of the tradeoffs have been discussed that the subteam is in a position to make a decision. That does not require consensus amongst all participants in the RFC thread (which is usually impossible). However, the argument supporting the disposition on the RFC needs to have already been clearly articulated, and there should not be a strong consensus against that position outside of the subteam."

three things there: (1) a procedural phase which is a 'last call' for comments; by this point the subteam has more or less made up its mind, and this phase is a chance for others to try to change it, or at least "the team believes the discussion has reached a steady state" [5]; (2) three decision options: merge, close, postpone; (3) a constraint on the core team: they cannot overrule a 'strong consensus' against their position.

-- [6]
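(to make the FCP mechanics above concrete, here's a minimal sketch of the lifecycle as a toy state machine; this is my own illustration in Python, not anything taken from the actual Rust tooling:)

from enum import Enum, auto

class State(Enum):
    OPEN = auto()
    FCP = auto()        # final comment period, with a proposed disposition
    MERGED = auto()
    CLOSED = auto()
    POSTPONED = auto()

class RFC:
    def __init__(self, title: str):
        self.title, self.state, self.disposition = title, State.OPEN, None

    def propose_fcp(self, disposition: str) -> None:
        # a subteam member proposes FCP once the tradeoffs have been aired
        assert self.state is State.OPEN and disposition in ("merge", "close", "postpone")
        self.state, self.disposition = State.FCP, disposition

    def resolve_fcp(self, strong_consensus_against: bool) -> None:
        assert self.state is State.FCP
        if strong_consensus_against:
            # the subteam does not overrule a strong outside consensus
            self.state, self.disposition = State.OPEN, None
        else:
            self.state = {"merge": State.MERGED, "close": State.CLOSED,
                          "postpone": State.POSTPONED}[self.disposition]

rfc = RFC("example RFC")
rfc.propose_fcp("merge")
rfc.resolve_fcp(strong_consensus_against=False)
print(rfc.state)  # State.MERGED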

---

the 'last call' thing is an interesting procedural innovation. It's as if Congress voted on something, the thing passed, and then there was one more round of speeches for people to try to dissuade them, and then they voted again to finalize. Of course it isn't quite like that, because here there are two separate deliberative bodies, the subteam and the rest of the project, and the point is not for dissenting members of the subteam to dissent, but for people outside the subteam to tell it what everyone else thinks. You might say it's more like a Congressional committee having the final say on things within its remit, with the rest of Congress at least getting the chance to give these 'last call' speeches after the committee makes a tentative decision. This 'last call' procedure is also similar to the way regulatory agencies must put proposed regulations up for public comment.

---

regarding the No New Rationale rule and the 'there should not be a strong consensus against that position outside of the subteam' rule: the No New Rationale rule sounds good because it forces deliberation but does not constrain decisions, so why not adopt it? (i guess one reason why not: it leaves the core team with no ability to move fast)

regarding the cannot-overrule-strong-consensus rule, i chafe at this because i'm of the mind that (a) unlike Rust's Mozilla employees, hobbyists who aren't being paid full-time to work on their side project don't always have the time to debate the community, (b) good design requires holistic/coherent hard choices, which design-by-committee (or design-by-consensus in any form) makes difficult, and (c) this brings up the problem of what to do if you don't think the majority of the community is well-informed and shares your goals. So i probably wouldn't make this a rule. On the other hand, it's a good guideline -- if there's a strong consensus against your position you're probably wrong, so don't overrule that lightly.

---

http://dtrace.org/blogs/bmc/2018/09/18/falling-in-love-with-rust/ (a famous programmer who likes Rust) opines that "Rust feels like a distillation of the best work that came before it" and also that Rust is good at finding innovative solutions to tradeoffs that were traditionally considered zero-sum; "we can have nice things." E further says "This kind of inclusion is one that one sees again and again in the Rust community: different perspectives from different languages and different backgrounds. Those who come to Rust bring with them their experiences — good and bad — from the old country, and the result is a melting pot of ideas. This is an inclusiveness that runs deep: by welcoming such disparate perspectives into a community and then uniting them with shared values and a common purpose, Rust achieves a rich and productive heterogeneity of thought."

http://aturon.github.io/2018/06/02/listening-part-2/ (a leader of Rust) says similar things, "We developed a pattern of slogans that summarized our understanding at that point: Memory safety without garbage collection; Abstraction without overhead; Concurrency without data races; ... The common thread here is reconciling oppositions. Not just finding a balance in a tradeoff, but finding ways to reduce or eliminate the tradeoff itself... “knowing how to have our cake and eat it too”". He calls this 'positive-sum thinking', "A zero-sum view would assume that apparent oppositions are fundamental, e.g., that appealing to the JS crowd inherently hurts the C++ one. A positive-sum view starts by seeing different perspectives and priorities as legitimate and worthwhile, with a faith that by respecting each other in this way, we can find strictly better solutions than had we optimized solely for one perspective."

Aturon claims that this DESIGN result was achieved by pluralism in the COMMUNITY. He defines pluralism: "Pluralism is about who we target: Rust seeks to simultaneously appeal to die-hard C++ programmers and to empower dyed-in-the-wool JS devs, and to reach several other varied audiences. That’s uncomfortable! These audiences are very different, they have divergent needs and priorities..."

and connects the two: "I can’t tell you the number of times I’ve experienced positive-sum outcomes when working with the Rust community. Times when I’ve ended up with a design much better than the one I started with, and got there because I thought it was important to listen to people with different priorities."

(although he agrees that this pluralism and positive-sum thinking must be moderated in order to achieve a coherency; in the paragraph below that one, he says that the formal Rust teams must ultimately still make some final decisions and have a coherent vision; as he says, "pluralism happens at the level of community and goals, not at the level of the actual design.")

he also goes on to make some concrete recommendations about how to foster pluralism. It comes down to what he said above, "listen to people with different priorities". He also says that it's important to try and be "subtle"; he stands in opposition to a comment by Linus Torvalds:

"I honestly despise being subtle or “nice”. The fact is, people need to know what my position on things are. And I can’t just say “please don’t do that”, because people won’t listen. I say “On the internet, nobody can hear you being subtle”, and I mean it." -- Linus Torvalds

Aturon's issue with this is that he thinks that when someone in the community feels that the decision-makers aren't listening, "you reach for the only tool you have: raising your voice as loud as you can". So in a community that follows Linus's prescription, everyone is supposed to "yell loud to make sure you’re heard. “I’m against every idea in this proposal”. “This feature will ruin Rust”. “Rust is heading in the wrong direction”.". Aturon feels that such an environment "works directly against the principles of plurality and positive-sum thinking. Escalation encourages a zero-sum environment, an us-versus-them battle, completely at odds with the positive-sum thinking that has led to Rust’s best innovations. And it’s a vicious cycle: if everyone is yelling, truly listening becomes very painful, and you “grow a thicker skin” in part by learning to not take other people’s feelings so seriously… which means they need to yell louder..."

http://aturon.github.io/2018/05/25/listening-part-1/ talks about some parliamentary-ish rules:

(at this point i never finished my summary... what follows is an attempt months later to finish it)

" No New Rationale: decisions must be made only on the basis of rationale already debated in public (to a steady state)

    At some point, a member of the subteam will propose a “motion for final comment period” (FCP), along with a disposition for the RFC (merge, close, or postpone).
        This step is taken when enough of the tradeoffs have been discussed that the subteam is in a position to make a decision. That does not require consensus amongst all participants in the RFC thread (which is usually impossible). However, the argument supporting the disposition on the RFC needs to have already been clearly articulated, and there should not be a strong consensus against that position outside of the subteam. Subteam members use their best judgment in taking this step, and the FCP itself ensures there is ample time and notification for stakeholders to push back if it is made prematurely."

---

 "
 Embrace negative space. Make a process for defining features and concepts out of the future trajectory of the language. Allow (or encourage) RFCs that say "Rust will never have X" for some value of X. While this sounds "negative" -- and it is, the word is written on the label -- it's a one-time thing where objections can be honestly considered with a long-term horizon ("never" is quite some time!) and given fair discussion, but then put to rest rather than being a perennial source of lingering conflict. A few examples where one might want to find and articulate negative spaces: paint certain categories of expressivity permanently out of the type system (eg. dependent types, HKTs, etc.), or out of the grammar (eg. parameter qualifiers, positional or keyword args), or out of the set of kinds of item (eg. anonymous record types), or out of the set of inference obligations in the middle-end (eg. constant value synthesis, implicit arguments). Put some hard limits in place, both to avoid those features themselves, and also to avoid people "putting pieces in place" that exist only to eventually-enable them.

Front-load costs, make them explicit. Taking a page from webassembly's change process, make it clear that moving an RFC past a very early phase is going to require a commensurate investment in implementation, formalization, documentation revision, teaching-materials revision, test-writing, maintenance and so forth. Absent a way to cover the costs, defer changes "nobody has yet been found to pay for" at that stage.

Set a learning-rate and book-length target. Try to work backwards from the amount of time and number of pages it "should take" to learn the language, or become expert in it, then cut things that go beyond that. If "teach yourself Rust in 21 days" isn't going to work, figure out what should. Three months? Six? A year? Think about languages that "definitely take too long", and pick a number that's less than that. Is a thousand page manual healthy? Five hundred? Three hundred?

Set other mechanized limits: lines of code in the compiler, overall time to bootstrap, dollars spent per day on AWS instances, number of productions in the grammar, number of inference rules in the type system, percent test coverage, percent of documents allowed to be marked as "incomplete", etc. Get creative, figure out meaningful things you can measure and then put mechanisms in place to limit them.

Rate-limit activity based on explicit personal-time-budgeting: ballpark-figure how many hours (or paid hours) per person are realistically available without exhaustion or burnout -- including the least-privileged but still necessary participants -- and work backwards to figure out how many hours of participation and review "should be" digested per team, per release cycle, thus how much work gets scheduled. Then cut or defer things that go beyond that.

Allow the moderator team to apply rate limits or cooling-off periods to particular discussions as a whole. Sometimes an outside perspective that a discussion is just too heated overall is an easier way to de-escalate than shining a spotlight on a single person's behaviour.

As with the moderator team: grow an additional cross-project team that does budgeting and auditing of load levels in other teams. This can be effective because the auditing/budgeting team sees their work as helping people to say no to the right things, rather than the default stance most people will have when participating in a team, to say yes to too many things. "

-- [7]
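(the 'mechanized limits' idea above lends itself to a CI-style budget check; here's a minimal sketch in Python, where the metrics and budget numbers are invented examples rather than anything any project actually enforces:)

import sys

BUDGETS = {
    "compiler_loc": 250_000,
    "grammar_productions": 400,
    "bootstrap_minutes": 45,
    "incomplete_doc_pages_pct": 5,
}

def check(measurements: dict) -> int:
    failures = 0
    for metric, budget in BUDGETS.items():
        value = measurements.get(metric)
        if value is None:
            print(f"WARNING: no measurement for {metric}")
            continue
        over = value > budget
        failures += over
        print(f"{metric}: {value} (budget {budget}) {'OVER BUDGET' if over else 'OK'}")
    return failures

# in a real setup these numbers would come from tooling (cloc, build timers, coverage reports, ...)
sample = {"compiler_loc": 180_000, "grammar_productions": 412,
          "bootstrap_minutes": 38, "incomplete_doc_pages_pct": 3}
sys.exit(1 if check(sample) else 0)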

---

already in 2018/12 we see that ppl are having trouble finding all of Rust's features [8]. Inspired by the thread in that last link (no need to reread it tho), a language should ideally eventually have all of: a short tutorial, a longer tutorial (the 'book'), a cheat sheet, a reference, and a spec.

These things should link to each other, especially downward; eg each part of the short tutorial should link to the corresponding part in the 'book' (and should include little notes about things not otherwise mentioned which link downwards), the lines of the cheat sheet and the parts of the longer tutorial should link to the reference, and the reference should link to the spec.

see also http://cslibrary.stanford.edu/101/EssentialC.pdf as another example document
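(a minimal sketch of what checking the 'link downward' convention could look like; the layer names and sections below are invented for illustration, in Python:)

LAYER_ORDER = ["short-tutorial", "book", "reference", "spec"]

# each section lists the cross-links it contains, as "layer:section" strings
DOCS = {
    "short-tutorial": {"functions": ["book:functions"], "modules": []},
    "book": {"functions": ["reference:function-defs"], "modules": ["reference:modules"]},
    "reference": {"function-defs": ["spec:grammar"], "modules": ["spec:modules"]},
    "spec": {"grammar": [], "modules": []},
}

def missing_downward_links() -> list:
    problems = []
    for layer, below in zip(LAYER_ORDER, LAYER_ORDER[1:]):
        for section, links in DOCS[layer].items():
            if not any(link.startswith(below + ":") for link in links):
                problems.append(f"{layer}:{section} has no link into the {below} layer")
    return problems

print("\n".join(missing_downward_links()) or "all sections link downward")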

---

Rust has a build bot called 'homu' which maintains a queue of things to be merged. It used to be called 'Bors'. Motivation: https://bors.tech/devdocs/bors-ng/readme.html

https://buildbot2.rust-lang.org/homu/ https://buildbot2.rust-lang.org/homu/queue/rust https://bors.tech/documentation/getting-started/ http://huonw.github.io/blog/2015/03/rust-infrastructure-can-be-your-infrastructure/
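(the core trick -- always test the candidate merge against the latest master, one PR at a time, so master stays green -- is easy to sketch; this toy version in Python is my own illustration, not how homu is actually implemented:)

from collections import deque

def run_queue(master, queue, test, merge):
    while queue:
        pr = queue.popleft()
        candidate = merge(master, pr)   # tentative merge of the PR onto current master
        if test(candidate):             # CI runs on the tentative result, not the PR's stale base
            master = candidate          # land it; master is always green
        else:
            print(f"{pr} failed CI against {master!r}; kicked back to its author")
    return master

# toy usage: PRs are just labels, and the "CI" stub rejects pr-2
final = run_queue("master@abc123",
                  deque(["pr-1", "pr-2", "pr-3"]),
                  test=lambda candidate: "pr-2" not in candidate,
                  merge=lambda base, pr: f"{base}+{pr}")
print("final master:", final)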

---

" Use .editorconfig to standardise coding standards across your team

My final offering is that of .editorconfig. This file is checked into your source control and configures the IDE of your team members. This ensures that you are all using the correct formatting when working on the codebase. Saving you unnecessary code conflicts and office arguments on the type of tabs to use, and the size that they should be.

[*]
end_of_line = lf
insert_final_newline = true

[*.php]
indent_style = space
indent_size = 4

"

---

" Major changes

    The lockfile (and configuration) format will become a strict subset of YAML. In the process, the lockfile information will be slightly changed to account for some long-awaited changes (such as #5892).
    We'll add support for plugins, which will be able to alter various things - from adding new commands to hooking into the resolution / fetching / linking steps to add support for new package sources or install targets.
    Related to the plugin system, Yarn will become an API as much as a CLI. You can expect to be able to require it and start using its components in your script - no need to parse your package.json anymore, no need to run the resolution .. Yarn will abstract all those tedious tasks away.
    Support for both Node 4 and Node 6 will be dropped. We don't expect Berry to have stable releases until after Node 6 gets EOL (expected April 2019), by which point it won't matter anymore.
    The log system will be overhauled - one thing in particular we'll take from Typescript are diagnostic error codes. Each error, warning, and sometimes notice will be given a unique code that will be documented - with explanations to help you understand how to unblock yourself.
    Some features currently in the core (such as autoclean) will be moved into contrib plugins. They'll still be supported, but might have a different release cycle than the standard bundle.
    The codebase will be ported from Flow to TypeScript. To understand the rationale please continue reading, but a quick summary is that we hope this will help our community ramp up on Yarn, and will help you build awesome new features on top of it.
    The cache file format will switch from Tar to Zip, which offers better characteristics in terms of random access.

"

scrollaway 40 days ago [-]

Very happy to see yarn.lock will finally be a proper format that won't need its own parser. YAML subset is a pretty good choice, though I think canonicalized, indented JSON would be a better choice for this use case. Incidentally that's what npm uses as lockfile, I wonder if there's room to have the two package managers share the format (or even share the file itself).

Very excited to see shell compatibility guarantee in scripts as well. Using environment variables in scripts is a pain right now.

Finally one of the biggest news is the switch from Flow to Typescript. I think it's now clear that Facebook is admitting defeat with Flow; it brought a lot of good in the scene but Typescript is a lot more popular and gets overall much better support. Uniting the JS ecosystem around Typescript will be such a big deal.


wopian 40 days ago [-]

npm's lockfile is a pain to diff in PRs because of the JSON format where what was maybe 20 changed lines in yarn is upwards of 80 from the brackets.

With YAML and whatever format yarn.lock was in, the only changed lines are changes to the version resolutions, hash and dependencies.

donatj 40 days ago [-]

I'd say safely merging YAML diffs however could be trouble.

I don't know how restricted their YAML subset is, but in my experience it's so loose a format the only way to be sure YAML says what you think it says is to run it through a parser.

---

mjw1007 10 hours ago [-]

If your project has any third-party dependencies, and so (nowadays) you're going to set up requirements.txt and virtualenv and whatever anyway, I can see that you're going to think things like "this XML parser in the standard library is just getting in the way; I can get a better one from PyPI".

But I think a lot of the value of a large standard library is that it makes it possible to write more programs without needing that first third-party dependency.

This is particularly good if you're using Python as a piece of glue inside something that isn't principally a Python project. It's easy to imagine a Python script doing a little bit of code generation in the build system of some larger project that wants to parse an XML file.


rogerbinns 8 hours ago [-]

I think the biggest problem is going from zero third-party dependencies to one and more. Adding that very first one is a huge pain since there are many ways of doing it with many different trade offs. It is also time consuming and tedious. The various tools like you mention are best at adding even more dependencies, but are hurdles for the very first one.


tachyonbeam 7 hours ago [-]

Not true. The more external dependencies you add, the more likely it is that one of them will break. I try to have as few external dependencies as possible, and to pick dependencies that are robust and reliably maintained. There is so much Python code on GitHub that is just broken out of the box. When people try your software and it fails to install because your nth dependency is broken or won't build on their system, you're lucky if they open an issue. Most potential users will just end up looking for an alternative and not even report the problem.


Groxx 5 hours ago [-]

They're typically broken out of the box because they don't pin their dependencies. pip-tools[1] or pipenv[2], and tox[3] if it's a lib, should be considered bare minimum necessities - if a project isn't using them, consider abandoning it ASAP, since apparently they don't know what they're doing and haven't paid attention to the ecosystem for years.

[1] https://github.com/jazzband/pip-tools [2] https://docs.pipenv.org/en/latest/ [3] https://tox.readthedocs.io/en/latest/


tachyonbeam 3 hours ago [-]

It's trickier than just pinning dependencies because some libraries also need to build C code, etc. Once you bring in external build tools, you have that many more potential points of failure. It's great. Also, what happens if your dependencies don't pin their dependencies? Possibly, uploading a package to PyPI should require freezing dependencies, or do so automatically.

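(a minimal sketch, using only the Python standard library, of checking an environment against a fully pinned requirements file (one "name==version" per line); the file name and the whole approach are just an illustration of the pin-everything advice discussed above, not something from the thread:)

import sys
from importlib.metadata import PackageNotFoundError, version

def check_pins(requirements_path: str) -> list:
    problems = []
    with open(requirements_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue  # skip comments, blanks, and anything not strictly pinned
            name, _, pinned = line.partition("==")
            try:
                installed = version(name)
            except PackageNotFoundError:
                problems.append(f"{name} is pinned to {pinned} but not installed")
                continue
            if installed != pinned:
                problems.append(f"{name}: pinned {pinned}, installed {installed}")
    return problems

issues = check_pins(sys.argv[1] if len(sys.argv) > 1 else "requirements.txt")
print("\n".join(issues) or "environment matches the pins")
sys.exit(1 if issues else 0)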


stochastastic 9 hours ago [-]

The Python standard library has been a huge help for me. Evaluating which third party packages to trust and handling updates is a hassle. (Would love a solution for this. Does anyone have a curated version of PyPI?) I’m surprised that people want to slim it down other than for performance on a more constrained system.

As an aside, why doesn’t the Python standard library extend/replace features with code from successful packages like Requests? Tried it and it didn’t work? Too much bloat? Already got too much on the to-do list?
