proj-oot-old-150618-ootDevelopmentProcessNotes1

Of course, Python has a great dev guide:

https://docs.python.org/devguide/index.html

---

http://www.reddit.com/r/haskell/comments/2if0fu/on_concerns_about_haskells_prelude_favoring/cl1mv9k

[–]aseipp 32 points 1 day ago*

RE last point:

    More importantly, the wiki helped summarize and explain a discussion that was extremely laborious to read and impossible to understand by looking through a mail list.

Yes yes yes yes. I really think this is needed, the more that I noodle about it. I believe it has helped immensely with GHC and I think it's crucial for several reasons to help keep things moving.

Look at the Python Enhancement Proposals (PEP) index: http://legacy.python.org/dev/peps/

These are beautiful, straightforward, have a relatively standard format, are indexed for historical records, are revision controlled, and go through a standardization process like ours. Some are big, some are small.

As it stands, we have a 'process' (the last part), but it falls flat in several ways. We frequently hit points like:

    What proposals occurred before?
    What status is a current proposal in?
    Did a specific proposal occur before - why, and what happened to it?
    What discussion started the idea?
    What objections or concerns do people have about it?
    What refinements did a proposal have?

A mailing list just doesn't scale for recording these particular things in an accessible way, honestly.

libraries@haskell.org is great for discussion and open forums, when we want to have an open discussion about what might happen, and let people voice concerns. But it is awful as a historical index/catalogue, and an awful place to get an overview - many times the discussion happens first, but rarely do refined abstracts of a proposal appear. Reading the full mail thread is almost always required to get a 'full, updated understanding' of what state something might be in, just as you said.

I think adopting a process like the PEPs - with the primary motivation being a better way to document and evolve proposals - would be a great idea.

[–]tibbe 11 points 1 day ago

We do have a PEP process for the Haskell platform, modeled after the Python PEP process:

    http://trac.haskell.org/haskell-platform/wiki/AddingPackages
    http://trac.haskell.org/haskell-platform/wiki/Proposals

I must say it hasn't been a major success however.

[–]hailmattyhall 3 points 1 day ago

It kind of sounds like no one knows about it. Could that be why it hasn't been successful or do you think there is something wrong with the proposal idea itself?

[–]tibbe 4 points 1 day ago

I think it's most likely a combination of issues, one of them is that it's not widely known.

[–]edwardkmett 16 points 1 day ago*

We have a proposal.

It is the Foldable/Traversable Proposal / Burning Bridges Proposal a year and a half ago. It garnered over 100+ comments, across 2-3 threads, heavily biased in the positive on the topic of Foldable/Traversable generalization.

A large part of the reason for the formation of the committee was to manage the sense of frustration that folks in the community had that nothing could ever change without complete universal agreement on any package that didn't have a dedicated maintainer. As more and more of the platform fell nominally into GHC HQ's hands there was nobody there who felt responsible for making decisions.

Mind you, the proposal itself just said "swap out the monomorphic versions of combinators in base with the ones from Foldable and Traversable" and after SPJ formed the committee, and once we collectively figured out how to work with GHC HQ we did.

That part is done.

It was easy except for a few knock-on effects that we found when we went to implement it. Data.List and Control.Monad re-exported monomorphic functions, and we had to restructure parts of base to put the code in the right place to allow the re-exports. Finally, since we wanted to do it without changing semantics in user code, we needed the raw material from the "extra members in the Foldable class" proposal from a few weeks back.

If we just want that proposal, it is summarized in that sentence directly, and already implemented.

Anything around haskell2010 or trying to make things better for folks who would prefer a more monomorphic Prelude is actually an extension to the scope.

If you want us to go farther and try to make it easier for folks to try to work with a monomorphic Prelude, these are the things we need help with, but they are technically out of scope of the original proposal.

I personally want that smoother upgrade path, but we could live without it.

[–]hailmattyhall 2 points 1 day ago

    It is the Foldable/Traversable Proposal / Burning Bridges Proposal a year and a half ago. It garnered over 100+ comments, across 2-3 threads, heavily biased in the positive on the topic of Foldable/Traversable generalization.

Where was it posted? Was it easy to miss?

[–]edwardkmett 6 points 1 day ago*

It was a proposal on the libraries@ mailing list under "Burning Bridges", "Foldable/flexible bridges", and several other titles that raged for a couple of months, and more or less overwhelmed the mailing list in May 2013.

The formation of the core libraries committee spun out of the "Making decisions" thread that SPJ formed in response to the furor.

[–]hailmattyhall 2 points 1 day ago

Ok, thanks

[–]rwbarton 4 points 1 day ago

The proposal "Swap out the monomorphic instances in base with the ones from Foldable and Traversable" doesn't say anything about exporting new names from Prelude like foldMap, mappend, and traverse. I don't see how it is logically necessitated by generalizing the functions which are already exported by Prelude. Someone who actually wants to write a Foldable or Traversable instance can import the necessary modules.

Was this simply a mistake? If not it looks exactly like committing the code into the repo before writing the proposal.

[–]edwardkmett 8 points 1 day ago

I would argue that it was a decision.

We have a Libraries Submission process.

    The maintainer is trusted to decide what changes to make to the package, and when. They are strongly encouraged to follow the guidance below, but the general principle is: the community offers opinions, but the maintainers decide.

The core libraries committee acts as a collective maintainer for the portions of the Haskell Platform that aren't maintained by anyone else.

We had a nuance to the proposal that required a decision, and so we made one.

Admittedly we also have,

    API changes should be discussed on the libraries mailing list prior to making the change, even if the maintainer is the proposer. The maintainer still has ultimate say in what changes are made, but the community should have the opportunity to comment on changes. However, unanimity (or even a majority) is not required.
    Changes that simply widen the API by adding new functions are a bit of a grey area. It's better to consult the community, because there may be useful feedback about (say) the order of arguments, or the name of the function, or whatnot. On the other hand few clients will actually break if you add a new function to the API. Use your judgment.

The precise space of names we want to add to the Prelude is still open for discussion. It is in the space of things we need to talk about -- we have as-yet unmerged changes into GHC around this question, but I do personally think the balancing act of utility vs. complete non-breakage is best served by making the classes that are exported from the Prelude implementable as given to you.

---

i find the Scheme development process to have various flaws.

footnote 1: "In 1971, largely in response to the need for programming language independence, the work was reorganized: development of the Data Description Language was continued by the Data Description Language Committee, while the COBOL DML was taken over by the COBOL language committee. With hindsight, this split had unfortunate consequences. The two groups never quite managed to synchronize their specifications, leaving vendors to patch up the differences. The inevitable consequence was a lack of interoperability among implementations." -- http://en.wikipedia.org/wiki/CODASYL

---


maybe use semantic versioning.

At any one time, development is going on in two places: in the previous major version, and in the experimental/development version (versioned via pre-release identifiers). At some point, open-ended development stops in the pre-release version, and it transitions into testing (alpha, beta, rc, etc) and then into release; at this point, (a) new releases stop being developed on the old major version (although whatever new release was currently in development is finished, and maybe the next one, if development had already started on it), and (b) a new experimental/development version for the next major release is opened.

The experimental/development versions' prerelease numbers are in two parts, a 'minor revision' number and a 'patch number'. The minor revisions are incremented to indicate significant new ('public API'-changing, e.g. language changes, not just toolchain improvements) functionality, and the patches are incremented for everything else. Note the presence of 'significant' there; in the experimental/development versions, generally the minor version number should only be incremented when at least one of the changes since the previous minor version has been discussed in a PEP (and any change that has been discussed in a PEP warrants a minor version bump in the prerelease).

So e.g. you'd have Oot 2.2; and simultaneously Oot 3.0.0-22.3.

In the released series, you'd have the same thing going on with the next minor release.

So e.g. you'd have Oot 2.2; and simultaneously Oot 2.3.0-11.2; and simultaneously Oot 3.0.0-22.3.

At some point Oot 3.0.0 would start moving towards a release, at which point an 'alpha.' prefix would be introduced. A minor and patch pre-release number would still be there (because in alpha and beta we would probably still have occasion to change things, not just fix bugs, although we'd try not to, trying harder not to in beta than in alpha). It would then go through e.g. Oot 3.0.0-alpha.0.0, Oot 3.0.0-alpha.2.11, Oot 3.0.0-beta.0.0, Oot 3.0.0-beta.1.11, Oot 3.0.0-rc.0, Oot 3.0.0-rc.3, and finally Oot 3.0.0. When Oot 3.0.0-alpha.0.0 was created, creation of new minor development versions of Oot 2.x would stop (e.g. Oot 2.3.0-11.2 would still be developed into Oot 2.3.0, and that would still be patched, but no Oot 2.4.0-* would be created); that activity would move to Oot 3.1.0-*.
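
To make the ordering concrete, here is a minimal sketch in Python (the parse/precedes helpers are my own invention, not an existing tool, and nothing here is Oot-specific) checking that the version strings above sort in the intended order under semver 2.0.0 precedence rules:

    # Minimal sketch: order the proposed Oot version strings by semver 2.0.0
    # precedence. Helper names (parse, precedes) are invented for illustration.

    def parse(version):
        """Split '3.0.0-alpha.2.11' into ((3, 0, 0), ['alpha', 2, 11])."""
        core, _, pre = version.partition('-')
        major, minor, patch = (int(x) for x in core.split('.'))
        ids = [int(i) if i.isdigit() else i for i in pre.split('.')] if pre else []
        return (major, minor, patch), ids

    def precedes(a, b):
        """True if version a has lower precedence than version b."""
        core_a, pre_a = parse(a)
        core_b, pre_b = parse(b)
        if core_a != core_b:
            return core_a < core_b
        if not pre_a or not pre_b:
            # A pre-release sorts before the corresponding normal release.
            return bool(pre_a) and not pre_b
        for x, y in zip(pre_a, pre_b):
            if x == y:
                continue
            if isinstance(x, int) != isinstance(y, int):
                # Numeric identifiers sort before alphanumeric ones, which is
                # why the development prerelease '22.3' precedes 'alpha.0.0'.
                return isinstance(x, int)
            return x < y
        return len(pre_a) < len(pre_b)

    stages = ["3.0.0-22.3", "3.0.0-alpha.0.0", "3.0.0-alpha.2.11",
              "3.0.0-beta.0.0", "3.0.0-beta.1.11", "3.0.0-rc.0",
              "3.0.0-rc.3", "3.0.0"]
    assert all(precedes(a, b) for a, b in zip(stages, stages[1:]))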

?: to which of these versions is a PEP-like process (sorta) strictly applied? i'm guessing only the minor releases after the major release. A PEP-like process is sloppily applied to the next major release? Perhaps we should also have a 'sandbox' version without any PEP process being applied (more of a pure BD (benevolent dictator/dictatorial) process)? Or perhaps instead add another grouping to the pre-release versions that can only be incremented when everything included has been documented in a PEP-like thing?

A: the solution i like best is to have a 'bayle's sandbox' branch off of the experimental version; the latest 'bayle's sandbox' will always include the changes from the latest experimental version. Then require at least a short PEP-like thing before inclusion of new stuff from bayle's sandbox into the actual experimental version. Recall that one of the ideas of a PEP is to alert the wider community that a change is being considered, so that people who have criticism can state it. Note also that stuff can be added in one experimental pre-release version and then removed again in a later experimental pre-release version; so it's not crucial for the PEP discussion to terminate before the decision is made to put it in there. Instead, there is a lower standard: something goes from the sandbox into the experimental prereleases when the BD thinks the change is sufficiently likely to remain that it's worthwhile for contributors to experimental to merge with it now.

---

for major version 0, semver imposes almost no rules (anything may change at any time). But we could use an (unevenly applied) system that mimics semver, but in multiple microcosms, in two layers. The previous sentence is gibberish, but what i mean is this: versions of the form 0.0.0-(X) work as if X was the whole version, and X was governed by the semver.org standard for major versions above 0 (so this is a 'microcosm' of the semver system). Similarly, versions of the form 0.0.1-(X) work as if X was the whole version, and X was governed by the semver.org standard for major versions above 0. Etc. (that's the first 'layer'). Similarly, versions of the form 0.1.0-(X) work as if X was the whole version, and X was governed by the semver.org standard for major versions above 0 (this is the second 'layer').

So, in other words, we'd start with version 0.0.0-0.0.1. The 0.0.1 part (the "pre-release version" according to semver) is interpreted using semver as if 0 was a major version above 0 (that is, major version 0, minor version 0, patch version 1). We keep on going with this until we feel that some sort of stability threshold has been breached, at which point we increment the patch version, and get to 0.0.1-0.0.1. Then we keep going with that, incrementing the patch version, until we get the urge to release a version 1.0. Instead, we increment the minor version.
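
Continuing the earlier versioning sketch (this snippet reuses the precedes() helper defined there, so it is not self-contained on its own), the two-layer major-version-0 scheme described above also orders as intended under plain semver precedence:

    # Assumes precedes() from the earlier versioning sketch is in scope.
    layered = ["0.0.0-0.0.1", "0.0.0-0.1.0", "0.0.0-1.0.0",
               "0.0.1-0.0.1", "0.1.0-0.0.1", "1.0.0"]
    assert all(precedes(a, b) for a, b in zip(layered, layered[1:]))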

Part of the motivation here is to defer our urge to release a version 1.0. Why? Because i see that Rust declared that they are moving towards a 1.0, but it seems to me that they are still making substantial changes in the language. So, the system i proposed has sort of a design-time variant of the idea of 'release candidate'; each minor version under major version 0 is sort of a 'release candidate' for the language design. This would allow a language to, like Rust, say "ok we're trying to move towards a 1.0 here, but we still have to finish the design in areas X and Y", but then, instead of actually releasing a 1.0 when X and Y are finished, release a 'release candidate' with a version like 0.1.0; this would allow more time to actually evaluate the decisions made in X and Y as work on the language goes on. Once design changes seem to have settled down (determined by empirically looking backwards, not just because it seems like it will be time for a 1.0 soon), only then do you release a 1.0.

The motivation for the second layer is that you probably want a way to indicate progress towards a 1.0 before you hit these 'design release candidates'. The motivation for using the pre-release version as another semver is so that, while you are iterating, you can still express to people whether some change is a major backwards-incompatible change in how the language works, a significant backwards-compatible addition, or a minor bugfix or implementation change. We would interpret semver sloppily with respect to this pre-release microcosm, in that we might sometimes allow 'minor' backwards-incompatible changes to increment the prerelease-minor-version rather than the prerelease-major-version, or 'minor' additions to be prerelease-patch-version increments rather than prerelease-minor-version increments.

A defect with the idea of using the major version 0's minor and patch version in this manner is that it makes the rules for major version 0 different from the experimental development on later major versions; even though we probably want to apply a similar process in each case.

Perhaps we should just expand the above experimental pre-release versioning idea to have a prerelease-major, which fills the 'design release candidate' role noted here. That makes the minor and patch versions for major version 0 unused. However, for major version 0, we could still use them in the way described above, for publicity (e.g. to broadcast when changes are settling down for the purpose of attracting more users and contributors when we feel like we're almost done).

---

i think we need to modify this slightly: at any given point in time, there may need to be 4 points of development, not 2 or 3:

the next major version is distinct from the experimental version because you might want to isolate a subset of the ideas from the experimental version and release them as a new major revision of the language. Eg in the experimental version you try all sorts of crazy stuff; at any given point in time, you'll probably only be sure about some of the stuff you've tried, while much of it seems good but you want more confidence before you bake it into a major version of the language, forcing adopters to actually use it for real. So you put the stuff you're confident about into the next major version, and leave the rest in the experimental branch.

This isn't too different from the 'Bayle's branch' idea above, but it is different in that you want to officially encourage the language developer community to all be trying it out, on a synchronized version; which is why now i call it an experimental version, not a branch. There might still be a Bayle's branch off of the experimental version, which is not something everyone is officially encouraged to try out.

---

the last minor version of the previous major version of the language receives security fixes, bug fixes, and possibly other small changes such as deprecations and helpers that could help incrementally convert code towards the next major version.

the last minor version of the second-previous major version of the language receives security fixes.
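
As a compact restatement of the two paragraphs above (using invented version numbers purely for illustration), the support matrix at a time when 4.x is the current major series might look like:

    # Hypothetical support matrix implied by the policy above, assuming 4.x is
    # current and that 3.7 and 2.5 were the last minor versions of the previous
    # and second-previous major series (all version numbers invented).
    SUPPORT_MATRIX = {
        "4.x (current)":               {"new features", "bug fixes", "security fixes"},
        "3.7 (previous major)":        {"bug fixes", "security fixes",
                                        "deprecations / migration helpers"},
        "2.5 (second-previous major)": {"security fixes"},
    }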

---

after each minor version is released, we immediately try to start a new release based on HEAD; we don't wait for sufficient changes to pile up

---

rust moved to a 'train' model:

http://blog.rust-lang.org/2014/10/30/Stability.html

and they provide some elegant justifications for not stabilizing specific things and for not allowing people to opt in to unstable features:

" ...we cannot stabilize syntax extensions, which are plugins with complete access to compiler internals. Stabilizing it would effectively forever freeze the internals of the compiler; we need to design a more deliberate interface between extensions and the compiler. So syntax extensions will remain behind a feature gate for 1.0. ...

What about stability attributes outside of the standard library?

Library authors can continue to use stability attributes as they do today to mark their own stability promises. These attributes are not tied into the Rust release channels by default. That is, when you’re compiling on Rust stable, you can only use stable APIs from the standard library, but you can opt into experimental APIs from other libraries. The Rust release channels are about making upgrading Rust itself (the compiler and standard library) painless.

Library authors should follow semver; we will soon publish an RFC defining how library stability attributes and semver interact.

Why not allow opting in to instability in the stable release?

There are three problems with allowing unstable features on the stable release.

First, as the web has shown numerous times, merely advertising instability doesn’t work. Once features are in wide use it is very hard to change them – and once features are available at all, it is very hard to prevent them from being used. Mechanisms like “vendor prefixes” on the web that were meant to support experimentation instead led to de facto standardization.

Second, unstable features are by definition work in progress. But the beta/stable snapshots freeze the feature at scheduled points in time, while library authors will want to work with the latest version of the feature.

Finally, we simply cannot deliver stability for Rust unless we enforce it. Our promise is that, if you are using the stable release of Rust, you will never dread upgrading to the next release. If libraries could opt in to instability, then we could only keep this promise if all library authors guaranteed the same thing by supporting all three release channels simultaneously.

It’s not realistic or necessary for the entire ecosystem to flawlessly deal with these problems. Instead, we will enforce that stable means stable: the stable channel provides only stable features.

What are the stability caveats?

We reserve the right to fix compiler bugs, patch safety holes, and change type inference in ways that may occasionally require new type annotations. We do not expect any of these changes to cause headaches when upgrading Rust.

The library API caveats will be laid out in a forthcoming RFC, but are similarly designed to minimize upgrade pain in practice. "

---

i don't like how Rust's 'alpha' release claims to be feature complete but also plans further breaking language changes:

http://blog.rust-lang.org/2015/01/09/Rust-1.0-alpha.html

---

"

heydenberk 8 hours ago

I'm impressed with how thoughtful the entire Rust development cycle has been. They've managed not just to strike a balance between stability and language redesign, but to approach to API and syntax standardization in a way that made tracking the changes relatively painless. Some of this is done in the language itself, with opt-in statements and stability annotations in the API, but the tooling around compatibility in the ecosystem is quite useful as well. Now they're going to doing CI on their own nightlies against packages in the ecosystem:

> To help ensure that we don’t accidentally introduce breakage as we add new features, we’ve also been working on an exciting new CI infrastructure to allow us to monitor which packages are building with the Nightly builds and detect regressions across the entire Rust ecosystem, not just our own test base.

reply

idunning 8 hours ago

CI against package ecosystems is a really great idea. We do it for Julia too [1], and it can identify some really subtle issues that would otherwise take longer to become apparent.

http://pkg.julialang.org/pulse.html

reply "
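
The 'CI against the package ecosystem' idea is easy to sketch. The following is a hypothetical Python driver, not Rust's actual infrastructure; it assumes you already have local checkouts of the packages you care about and a rustup-managed nightly toolchain, and simply reports which packages stop building on nightly:

    # Hypothetical ecosystem-CI driver (not Rust's real infrastructure).
    # Builds each locally checked-out package with the nightly toolchain and
    # reports regressions; the package paths below are placeholders.
    import subprocess
    from pathlib import Path

    PACKAGE_DIRS = [Path("checkouts/serde"), Path("checkouts/rand")]  # placeholders

    def builds_on_nightly(pkg_dir):
        """Run `cargo +nightly build` in pkg_dir and report success/failure."""
        result = subprocess.run(["cargo", "+nightly", "build"],
                                cwd=pkg_dir, capture_output=True, text=True)
        return result.returncode == 0

    failures = [d for d in PACKAGE_DIRS if not builds_on_nightly(d)]
    for d in failures:
        print(f"regression: {d} no longer builds on nightly")
    print(f"{len(PACKAGE_DIRS) - len(failures)} of {len(PACKAGE_DIRS)} packages still build")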

--

from the Rust Beta release announcement:

" The Beta release also marks a turning point in our approach to stability. During the alpha cycle, the use of unstable APIs and language features was permitted, but triggered a warning. As of the Beta release, the use of unstable APIs will become an error (unless you are using Nightly builds or building from source).

The Rust ecosystem continues to grow. The crates.io repository just passed 1 million downloads and has over 1,700 crates available. Many of the top crates in crates.io can now be built using only stable Rust, and efforts to port the remainder are underway. Therefore, we are now recommending that new users start with the Beta release, rather than the Nightly builds, and the rustup script will be modified to install Beta by default. (However, it is easy to switch to the Nightly build if some of your dependencies aren’t updated yet. See the install page for details.)

What happens during the beta cycle?

The final Rust 1.0 release is scheduled for May 15th – exactly six weeks from now. In the interim, we expect to put most of our effort into fixing bugs, improving documentation and error messages, and otherwise improving the end-user experience. We don’t plan on making functional changes to stable content, though naturally we may make minor corrections or additions to the library APIs if shortcomings or problems are uncovered (but the bar for such changes is relatively high).

While we don’t expect to add any new features (or major new APIs) for the 1.0 release, that doesn’t mean we’re going to stop working on them altogether. In fact, quite the opposite! Per the train model, the plan is to continue development on new features on the master branch, in parallel with the beta. And of course, we’ll be issuing the beta for 1.1 release at the same time as we issue the final 1.0 release, so you shouldn’t have to wait long to start putting that work to use.

To help ensure that we don’t accidentally introduce breakage as we add new features, we’ve also been working on an exciting new CI infrastructure to allow us to monitor which packages are building with the Nightly builds and detect regressions across the entire Rust ecosystem, not just our own test base. This infrastructure is still in the development phase, but you can see a sample report here. "

---

rust's new governance and RFC process:

discussion:

summary:

previous governance structure:

the proposer of the new governance process thought the use of RFCs is very helpful. The proposer thinks it's important for all to understand that "... people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer."

the proposer of the new governance process thought the following things needed improvement:

the new system:

core team + several topically-focused subteams

initial subteams:

subteam composition:

subteams do these things (within their topical area):

subteam leader (core team member):

subteam deliberations must be transparent/public

composition of core team:

the core team does these things:

subteam/RFC mechanics:

moderation:

comparison to Mozilla's 'module' system:

the proposer of the current system says it can be seen as "an evolution of the current core team structure to include subteams", or alternately as "an evolution of (Mozilla's) module system where the subteam leaders (module owners) are integrated into an explicit core team"

my opinions:

i don't feel that this process would be disastrous, just that it may produce (a) a conservative rejection of big language changes, both de jure from the consensus requirements and de facto from the separation of the language design and implementation teams, and (b) a creeping lack of coherency, as subteam visions diverge and they approve RFCs for the nightlies with no central direction.

instead, i would recommend a system where (a) the 'subteams' are explicitly merely subordinate delegates of the core team: the subteam discusses RFCs, filters out the ones it doesn't like, and then presents the core team with the ones it does like for approval, and the core team directly appoints subteam members; (b) the subteams are divided by language design topic, not by language design/library/compiler; and (c) decisions require less than consensus, especially if outside experts and users are being given a formal seat at the table.

---

boatzart 1 day ago

I'm so happy they finally moved the official documentation to Doxygen (http://docs.opencv.org/master/index.html), which appear to be a huge step up from their old Sphinx docs (http://docs.opencv.org/).

---

not sure if i agree with this or not, i haven't read it yet, but here's OpenCV's very detailed (old? it talks about Sphinx but i thought they used Doxygen?) style guide: http://code.opencv.org/projects/opencv/wiki/Coding_Style_Guide?version=1


this person has an interesting proposal for a 'rest period' after 1.0, during which no potentially breaking changes, even 'minor' ones, would be made; the proposal came after noting that eis company decided not to use Rust after seeing the discussion about a possibly breaking change: https://internals.rust-lang.org/t/pre-rfc-adjust-default-object-bounds/2199/52 .

MetaCosm 6 minutes ago

> Tell me what you wanted Rust to do instead...

Well -- if we are going into what I would have preferred. I would have preferred a rest period after release. No ongoing changes across 3 trains on a 6 week cycle. Give the community time to accept, adapt, bitch, moan, whine, create workarounds, and then better workarounds. Let enterprises have time to buy in, test, accept or reject and give feedback. Deal with critical issues and focus on stability as a deliverable... and don't break anything during the rest period (even if that means being on a single version of LLVM and no new traits or modules or functions or whatever).

---

A comment on https://internals.rust-lang.org/t/pre-rfc-adjust-default-object-bounds/2199/52 suggests (criticizes, actually, but i like the idea they're criticizing) an interesting middle ground between 'no breaking changes (with minor caveats)' and freely allowing breaking changes:

the idea is to not count something as a breaking change if it doesn't break any code that is on crates.io (Rust's CPAN).

btw what Rust said when they announced 1.0 was:

"The 1.0 release marks the end of that churn. This release is the official beginning of our commitment to stability, and as such it offers a firm foundation for building applications and libraries. From this point forward, breaking changes are largely out of scope (some minor caveats apply, such as compiler bugs)." minor caveats are hyperlinks to Rust RFC: Policy on semver and API evolution and Rust RFC: Semantic versioning for the language

https://github.com/rust-lang/rfcs/blob/master/text/1122-language-semver.md

on that discussion nikomatsakis said:

"One interesting question is whether we will accept other sorts of breaking changes that are not strictly tied to soundness, but just cases where we seem to have gotten something slightly wrong. I have exactly one example in mind at the moment (some details of trait object lifetime defaults), but I can easily imagine that this scenario will arise. Probably best is to just address such things on a case-by-case basis: every good rule needs an exception, after all."

on hackernews, pcwalton said that technically, any of the following is a potentially-breaking change:

---

although Oot itself has optional static typing, the canonical Oot package repo requires strict mode to be turned on for packages it hosts (but maybe have a special section called 'vetted dynamic' for packages which can't use strict mode because the type system is not expressive enough?)
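
A repo-side gate for this policy might look something like the sketch below. It is entirely hypothetical: Oot has no implementation, and the strict-mode pragma syntax, the '.oot' file extension, and the 'vetted dynamic' allow-list are all invented here for illustration.

    # Hypothetical repo-side check: accept a package only if every source file
    # enables strict mode, unless the package is on a hand-curated 'vetted
    # dynamic' allow-list. All names and syntax here are invented.
    from pathlib import Path

    VETTED_DYNAMIC = {"some-untypeable-package"}  # hand-curated exceptions

    def file_is_strict(path):
        """Treat a file as strict if its first non-blank line is a strict pragma."""
        for line in path.read_text().splitlines():
            if line.strip():
                return line.strip() == "#pragma strict"  # invented syntax
        return False

    def vet_package(name, root):
        sources = list(Path(root).glob("**/*.oot"))  # invented extension
        if sources and all(file_is_strict(f) for f in sources):
            return "accepted"
        if name in VETTED_DYNAMIC:
            return "accepted into the 'vetted dynamic' section"
        return "rejected: strict mode required"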

---