proj-oot-ootPackagingNotes1

(mostly moved to [1])

---

" The one thing I wish it had would be that your dependencies were yours only. I think this is one of the big reasons Node.js got successful so fast. In node you can run different versions with-in the same app depending on the libraries. Elixir is more like ruby in that you can only have one. Well you can have two for swapping out without down time but that is it. I do think this is one of the limits to the Erlang VM. "

---

" Packages and build systems

In addition to GHC itself, we make use of a lot of open-source Haskell library code. Haskell has its own packaging and build system, Cabal, and the open-source packages are all hosted on Hackage. The problem with this setup is that the pace of change on Hackage is fast, there are often breakages, and not all combinations of packages work well together. The system of version dependencies in Cabal relies too much on package authors getting it right, which is hard to ensure, and the tool support isn't what it could be. We found that using packages directly from Hackage together with Facebook's internal build tools meant adding or updating an existing package sometimes led to a yak-shaving exercise involving a cascade of updates to other packages, often with an element of trial and error to find the right version combinations.

As a result of this experience, we switched to Stackage as our source of packages. Stackage provides a set of package versions that are known to work together, freeing us from the problem of having to find the set by trial and error. "

---

https://nylas.com/blog/packaging-deploying-python

summary: docker sounds cool but it's too new for us. Wound up using dh-virtualenv

discussion: https://news.ycombinator.com/item?id=9861127

discussion summary:

svieira 13 hours ago

Back when I was doing Python deployments (~2009-2013) I was:

Fast, zero downtime deployments, multiple times a day, and if anything failed, the build simply didn't go out and I'd try again after fixing the issue. Rollbacks were also very easy (just switch the symlink back and restart Apache again).

These days the things I'd definitely change would be:

Things I would consider:

reply

they should have used Docker anyway (paraphrased)

 Cieplak 17 hours ago

Highly recommend FPM for creating packages (deb, rpm, osx .pkg, tar) from gems, python modules, and pears.

https://github.com/jordansissel/fpm

reply

"http://pythonwheels.com/ solves the problem of building c extensions on installation. " "Pair this with virtualenvs in separate directories (so that "rollback" is just a ssh mv and a reload for whatever supervisor process)" "Also, are there seriously places that don't run their own PyPI? mirrors?"

localshop and devpi are local PyPI mirrors, apparently
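(a minimal Python sketch of the symlink-swap rollback idea quoted above; the release paths and the supervisor reload command here are made-up placeholders, not anything from the article)

    import os
    import subprocess

    def activate(release_dir, current_link="/srv/app/current"):
        """Point the 'current' symlink at release_dir, atomically."""
        tmp = current_link + ".tmp"
        if os.path.lexists(tmp):
            os.remove(tmp)
        os.symlink(release_dir, tmp)
        os.replace(tmp, current_link)  # rename(2) atomically swaps the link
        # made-up example of poking whatever supervises the app:
        subprocess.run(["systemctl", "reload", "myapp"], check=True)

    # deploy:   activate("/srv/app/releases/2015-07-01")
    # rollback: activate("/srv/app/releases/2015-06-30")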

 perlgeek 5 hours ago

Note that the base path /usr/share/python (that dh-virtualenv ships with) is a bad choice; see https://github.com/spotify/dh-virtualenv/issues/82 for a discussion.

You can set a different base path in debian/rules with export DH_VIRTUALENV_INSTALL_ROOT=/your/path/here

reply

 erikb 7 hours ago

No No No No! Or maybe?

Do people really do that? Git pull their own projects into the production servers? I spent a lot of time to put all my code in versioned wheels when I deploy, even if I'm the only coder and the only user. Application and development are and should be two different worlds.

reply

---

"The final question was about what he hates in Python. "Anything to do with package distribution", he answered immediately. There are problems with version skew and dependencies that just make for an "endless mess". He dreads it when a colleague comes to him with a "simple Python question". Half the time it is some kind of import path problem and there is no easy solution to offer." -- van Rossum, paraphrased in https://lwn.net/Articles/651967/

arturhoo 4 hours ago

I agree with Guido that the thing I hate the most in Python is packaging in general. I find Ruby's gems, bundler and Gemfile.lock to be a much more elegant solution.

On the other hand, I really like the explicit imports (when used properly). Less magic that makes code navigation way easier.

reply

davexunit 28 minutes ago

As a distro packager, I find Python's packages to be much better and easier to integrate than Ruby gems. I've had no shortage of troubles with Ruby gems: requiring on the git binary to build the gem (even in a release tarball), test suites not being included in the gem releases on rubygems.org, rampant circular dependencies, etc. Python's PyPI has caused me no such issues.

reply

---

dchesterton 2 hours ago

I find external dependencies much more reliable in the PHP world than JS. Most packages try to follow semver. Composer is one of the best dependency manager tools I've used and you can easily lock down dependency versions so you can install from a state which you know works.

People hate on PHP but there are a lot of quality packages out there and Composer is definitely considered best practice by most developers now.

reply

---

"Other points: Cargo is underrated. Traits over primitives is a huge win over Java's boxed collections."

---

JoshTriplett 11 hours ago

I'm curious if there were any significant responses to the tooling/ecosystem questions regarding packaging and integration with Linux distributions.

I'd like to provide an application written in Rust in major Linux distributions. "cargo install" will not work, both because it depends on network access, and because it pulls in dependencies from outside the distribution. Similarly, many distributions have policies against bundling library sources in the application. There needs to be a straightforward way to turn a source package with a Cargo.toml file into a package for Debian and Fedora that depends on other packages corresponding to its Cargo dependencies (and C library dependencies).

reply

bluejekyll 11 hours ago

There are people working on this, here's an announcement for a deb plugin to Cargo:

https://www.reddit.com/r/rust/comments/4ofiyr/announcing_car...

reply

wyldfire 10 hours ago

I hadn't seen this. Thank you for sharing.

reply

steveklabnik 11 hours ago

Cargo install was never intended to be used that way. It's for things useful for Rust developers, not for people who use a program that happens to be written in Rust.

There's a few things at play here. For "build from source" distros, we've been making some changes to make it easier to package rustc. Namely, instead of relying on a specific SHA of the compiler to bootstrap, starting with 1.10, it will build with 1.9, and 1.11 will build with 1.10. This is much, much easier for distros. As for Cargo dependencies on those packages, we'll see. There's a few ways it could go, but we need the compiler distribution sorted first.

A second is that we'd like to eventually have tooling to make giving you a .deb or .rpm or whatever easier: https://github.com/mmstick/cargo-deb is an example of such a tool. This won't necessarily be good enough to go into Debian proper; I know they tend to not like these kinds of tools, and want to do it by hand. But "Hey thanks for visiting my website, here is a thing you can download and install", these kinds of packages can be made with this kind of tooling.

In general, it's complex work, as then, these issues are also different per distro or distro family. We'll get there :)

reply

JoshTriplett 11 hours ago

I was more looking for a path to producing Policy-compliant packages for Debian and other distributions.

Do you know of anyone working on that right now, and how I can help?

reply

steveklabnik 11 hours ago

https://internals.rust-lang.org/t/perfecting-rust-packaging/2623 is the big internals thread about it; weighing in there is a good entry point. Thanks!

reply

---

some folks are making some noise about the TUF initiative for package signing:

https://theupdateframework.github.io/
http://freehaven.net/%7Earma/tuf-ccs2010.pdf
good overview: https://lwn.net/Articles/628842/
https://lwn.net/Articles/629478/
http://legacy.python.org/dev/peps/pep-0458/
http://legacy.python.org/dev/peps/pep-0480/
https://github.com/rust-lang/cargo/issues/1281
https://github.com/rust-lang/crates.io/issues/75

---

"

regularfry 8 days ago [-]

NPM would be hugely improved if:

1) `npm install` on two different machines at the same point in time was guaranteed to install the same set of dependency versions.

1b) shrinkwrap was standard.

2) it was faster.

3) you could run a private repo off a static file server.

reply

jdlshore 7 days ago [-]

Check in your modules.

reply "

---

https://github.com/pypa/pipfile https://news.ycombinator.com/item?id=13011932

---

discussion on https://www.kennethreitz.org/essays/announcing-pipenv

cderwin 13 hours ago [-]

This is great, but sometimes I think that python needs a new package manager from scratch instead of more tools trying to mix and mash a bunch of flawed tools together in a way that's palatable by most of us. Python packaging sucks, the whole lot of it. Maybe I'm just spoiled by rust and elixir, but setuptools, distutils, pip, ez_install, all of it is really subpar. But of course everything uses pypi and pip now, so it's not like any of it can actually be replaced. The state of package management in python makes me sad. I wish there was a good solution, but I just don't see it.

Edit: I don't mean to disparage projects like this and pipfile. Both are great efforts to bring the packaging interface in line with what's available in other languages, and might be the only way up and out of the current state of affairs.

reply

renesd 12 hours ago [-]

I think python packaging has gotten LOTS better in the last few years. I find it quite pleasurable to use these days.

From binary wheels (including on different linux architectures), to things like local caching of packages (taking LOTS of load off the main servers). To the organisation github of pypa [0], to `python -m venv` working.

Also lots of work around standardising things in peps, and writing documentation for people.

I would like to applaud all the hard work people have done over the years on python packaging. It really is quite nice these days, and I look forward to all the improvements coming up (like pipenv!).

I'd suggest people checkout fades [1] (for running scripts and automatically downloading dependencies in a venv), as well as conda [2] the alternative package manager.

[0] https://github.com/pypa/

[1] https://fades.readthedocs.io/en/release-5/readme.html#what-d...

[2] http://conda.pydata.org/docs/intro.html

reply

sametmax 11 hours ago [-]

+1. Relative to what we had before, it's so much better. But compared to the JS/Rust ecosystem, we are behind.

Now it's hard to compete with JS on some stuff: it's the only language on the most popular dev platform (the web) and it has one implicit standardized async model by default.

It's hard to compete with rust on some stuff: it's compiled and is fast, can provide stand alone binaries easily and has a checker that can avoid many bugs.

But this. The package manager. We can compete. And yet we are late.

It's partially my fault since it's a project I had in mind for years and never took the time to work on. It's partially everybody's fault I guess :)

reply

....

jessaustin 16 hours ago [-]

One suspects it's you who hasn't distributed or installed many modules on either python or node. So many of the problems that python has, simply don't exist for node, because it finds modules in a bottom-up hierarchical fashion. That allows a single app or module to use modules that in turn use different versions of other modules, and not to worry about what other modules are doing, or how other modules are installed, or how node is installed, or what version of node is installed. This prevents the traditional "dependency hell" that has plagued devs for decades. Thanks to tools like browserify and webpack, the browser may also benefit from this organization.

On top of all that, npm itself just does so many things right. It's quite happy to install from npm repos, from dvcs repos, from regular directories, or from anything that looks like a directory. It just needs to find a single file called "package.json". It requires no build step to prepare a module for upload to an npm repo, but it easily allows for one if that's necessary. package.json itself is basically declarative, but provides scripting hooks for imperative actions if necessary. At every opportunity, npm allows devs to do what they need to do, the easy way.

In a sense, node and npm are victims of their own quality. The types of "issues" (e.g. too many deps, too many layers of deps, too many versions of a particular dep, deps that are too trivial, etc.) about which anal code puritans complain with respect to node simply couldn't arise on other platforms, because dependency hell would cause the tower of module dependencies to collapse first. node happily chugs along, blithely ignoring the "problems".

Personally, I used to be able to build python packages for distribution, but since I've been spoiled by node and npm for several years I've found I simply can't do that for python anymore. It is so much harder.

reply
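(to make the "bottom-up hierarchical" lookup above concrete, here is a rough Python sketch of how node resolves require('foo') by walking up the directory tree; it's a simplification of the real algorithm, which also handles core modules, package.json "main" fields, index files, etc.)

    import os

    def resolve(module_name, importing_file):
        """Return the first node_modules/<module_name> directory found while
        walking from the importing file's directory up toward the root."""
        directory = os.path.dirname(os.path.abspath(importing_file))
        while True:
            candidate = os.path.join(directory, "node_modules", module_name)
            if os.path.isdir(candidate):
                return candidate
            parent = os.path.dirname(directory)
            if parent == directory:  # reached the filesystem root
                raise ImportError("Cannot find module %r" % module_name)
            directory = parent

    # Because each package looks in its *own* node_modules first, two packages in
    # the same app can each load a different installed version of the same dependency.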

philsnow 15 hours ago [-]

npm has its own special problems. disclaimer: what I'm talking about in this post is at least six months old, which in node/npm/js world is ancient history.

> it finds modules in a bottom-up hierarchical fashion. That allows a single app or module to use modules that in turn use different versions of other modules, and not to worry about what other modules are doing

To my understanding, if your app transitively depends on package foo-1.2 in thirty different places [0], there will be thirty copies of foo-1.2 on disk under node_modules/ . Each package reads its very own copy of foo-1.2 when it require()s foo.

On a large app, that adds up to a lot of inodes ("why does it say my filesystem is full? there's only 10G of stuff on my 80G partition!" because it's used up all its inodes, not its bytes.) and a _lot_ of unnecessary I/O.

((note: the next comment is referring to how npm 3 ditches the deep directories and puts everything in one directory, de-duplicated, as a post further below explains))

jessaustin 15 hours ago [-]

...what I'm talking about in this post is at least six months old...

Haha npm@3 was out June 2015. b^)

I agree that it would have been better, on balance, for previous versions to have created hard links to already-installed modules. Actually that wouldn't be a bad option to have even now, since debugging is often easier when one has a deep directory structure to explore rather than hundreds of random names in the top-level node_modules directory. That is, if I know the problem is in module foo, I can pushd to node_modules/foo, find the problematic submodule again, and repeat until I get all the way to the bottom. [EDIT: it occurs to me that having all these hard links would make e.g. dependency version updates easier, since un-updated dependencies wouldn't have to be recopied, unix-stow-style.]

To me, the more amusing file descriptor problem is caused by the module "chokidar", which when used in naive fashion tries to set up watches on all 360 files and directories created by itself and its own 55 dependencies. At that point it's real easy to run out of file watches altogether. Some of the utilities that call chokidar do so while ignoring node_modules, but many do not.

reply

...

sjellis 22 hours ago [-]

This is actually one of the big problems, I think: Python packaging involves knowing a number of different things and reading various resources to get the full picture.

Recently, I built a small CLI tool in Python, and learned all of the bits needed to build, test and package my application "the right way". I knew Python syntax before, but it was a lot of effort to set this up. The difference in the experience between Python and Rust or .NET Core is actually shocking, and most of it isn't down to anything that Python couldn't do, just the current state of the tooling.

reply

d0mine 17 hours ago [-]

Could you provide some specific examples of the "shocking" difference?

reply

sjellis 5 hours ago [-]

Python best practice: figure out the correct directory structure by reading docs and looking at GitHub repositories, learn how to write setup.py & setup.cfg & requirements.txt & MANIFEST.in files, setup py.test and tox (because Python 2 still lives), write your README in RST format (as used by nothing else ever), and for bonus points: write your own Makefile. Get depressed when you realize that target platforms either don't have Python or have the wrong version.

Rust: type "cargo new", README and doc comments in Markdown, type "cargo test" and "cargo build".

I'm being deliberately snarky, but you get the point: there has been a slow accretion of complexity over a very long time, and most of it is not the language itself.

reply
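(for reference, a sketch of the kind of setup.py boilerplate being complained about here and in the comments below; the package name and metadata are invented)

    # setup.py for a hypothetical package "mytool"
    import os
    from setuptools import setup, find_packages

    here = os.path.abspath(os.path.dirname(__file__))

    # the classic contortions: read the README and the version out of the source tree
    with open(os.path.join(here, "README.rst")) as f:
        long_description = f.read()

    about = {}
    with open(os.path.join(here, "mytool", "__version__.py")) as f:
        exec(f.read(), about)

    setup(
        name="mytool",
        version=about["__version__"],
        description="Example CLI tool",
        long_description=long_description,
        packages=find_packages(exclude=["tests"]),
        install_requires=["requests"],  # abstract deps; pins live in requirements.txt
        entry_points={"console_scripts": ["mytool=mytool.cli:main"]},
    )

    # ...and that's before setup.cfg, MANIFEST.in, requirements.txt, and tox.ini.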

bastawhiz 13 hours ago [-]

`setup.py` is shockingly awful compared to most other solutions.

reply

...

ghshephard 1 day ago [-]

Okay - maybe I'm missing something, but pip is the only Python package manager I've ever used. And it's basically "pip install xxx", "pip install --upgrade xxx", "pip show xxx", "pip list", "pip uninstall xxx"

I'm curious what I've been missing about pip that makes it problematic - I've never used the other tools you mentioned (setuptools/distutils/ez_install) - so I can't comment on them, but, on the flip side, I've never had to use them, so maybe my requirements are somewhat more simple than yours.

reply

sametmax 1 day ago [-]

One thing is good dependency management. Right now if you want to upgrade your Python version, or one of your packages, it's a mountain of manual work. There is nothing in the stack helping you with the dependency graph.

Another thing is providing a stand alone build. Something you can just ship without asking the client to run commands in the terminal to make it work. I use Nuitka (http://nuitka.net/) for this. It's a fantastic project, but man it's a lot of work for something that works out of the box in Go or Rust.

One last thing is to generate packages for OS (msi/deb/rpm/dmg/snap). Your sysadmin will like you. Pex (https://pypi.python.org/pypi/pex) is the closest, but not very standard.

Other pet peeves of mine:

reply

scrollaway 19 hours ago [-]

Oh my god, you've described every single one of my issues with Python packaging.

The whole setup.py/setup.cfg situation really is ridiculous. Having to import the __version__, read() the README, no markdown support on pypi, MANIFEST / MANIFEST.in files, tox.ini, what a mess.

reply

schlumpf 1 day ago [-]

This. Particularly the need for a minimum standard project structure.

Pipenv shows its pedigree and looks like a great tool...that also overlaps significantly with conda. What are the use cases that Pipenv addresses better than/in lieu of conda?

reply

mdeeks 11 hours ago [-]

It looks like Pipenv does not handle the python install itself or associated non-python libraries. With Conda I can tell it to install Python 3.6 along with FreeTDS (for mssql). Conda lets me do this in one environment.yml file and have it work cross platform. Separate homebrew or apt-get steps are no longer necessary.

That said pipenv still looks awesome. Any improvement to the python packaging world is a welcome gift.

reply

sametmax 1 day ago [-]

pipenv allows you to completely ignore the virtualenv, like node_modules. It seems like a detail, but giving a lot of Python and JS trainings, I came to realize newcomers need help with little things like this.

reply

daenney 1 day ago [-]

You don't need to install (ana/mini)conda just to get a package manager, which would be why I would use Pipenv over Conda. Miniconda alone requires somewhere close to 400MB of space and comes with a whole bunch of extra things I don't need just to manage packages and virtualenvs.

reply

kalefranz 19 hours ago [-]

The miniconda bootstrap of conda is ~20-30 MB (compressed) depending on platform. It contains only conda and its dependencies, like python and requests. It's how you install conda if you want only conda. The 400 MB number is for the Anaconda Distribution, which is a self contained, single-install, get-all package primarily aimed at scientists and engineers.

reply

sametmax 21 hours ago [-]

pip-tools doesn't solve the problem at all. It will update things to the latest version, cascading from package to package.

That doesn't guarantee your setup will work.

Dependency management is supposed to create a graph of all requirements, with lower and upper version bounds for the runtime and the libs, and find the most up to date combination of those.

If a combination can't be found, it should let you know that either you can't upgrade, or suggest alternative upgrade paths.

pip-tools will just happily upgrade your package and leave you with something broken, because it's based on pip which does that. They don't check mutually exclusive dependency versions, deprecation, runtime compatibility and such. And they don't build a graph of their relations.

reply
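(a toy Python sketch of the kind of resolution being asked for here: build the whole constraint graph and search for the newest mutually compatible combination, instead of upgrading packages one at a time; the package index below is invented, and real resolvers add error reporting, caching, and much more)

    # index: package -> {version: {dependency: set of acceptable versions}}
    INDEX = {
        "app":  {"1.0": {"web": {"2.0", "2.1"}, "orm": {"1.0", "1.1"}}},
        "web":  {"2.0": {"json": {"1.0"}},
                 "2.1": {"json": {"2.0"}}},
        "orm":  {"1.0": {"json": {"1.0"}},
                 "1.1": {"json": {"1.0"}}},
        "json": {"1.0": {}, "2.0": {}},
    }

    def resolve(todo, chosen=None):
        """Pick one version per package so every constraint is satisfied,
        preferring newer candidates; returns None if no combination works."""
        chosen = dict(chosen or {})
        if not todo:
            return chosen
        (pkg, allowed), rest = todo[0], todo[1:]
        if pkg in chosen:
            return resolve(rest, chosen) if chosen[pkg] in allowed else None
        for version in sorted(allowed, reverse=True):  # newest first (string sort is fine for this toy)
            deps = INDEX[pkg][version]
            result = resolve(rest + list(deps.items()), {**chosen, pkg: version})
            if result is not None:
                return result
        return None  # every candidate failed: backtrack

    print(resolve([("app", {"1.0"})]))
    # -> {'app': '1.0', 'web': '2.0', 'orm': '1.1', 'json': '1.0'}
    # web 2.1 is rejected because it needs json 2.0, which orm cannot accept.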

ProblemFactory? 20 hours ago [-]

It would be even better if the tool ran your project's tests when checking upgrade combinations.

Something that would say: "You can safely upgrade to Django 1.9.12. Upgrading to latest Django 1.10.5 breaks 20 tests."

reply
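(a rough sketch of the tool being described: for each candidate upgrade, install it into a throwaway virtualenv next to the project and run the test suite; the package names are placeholders, and a real tool would reuse environments, parallelize, and report which tests broke)

    import os
    import subprocess
    import tempfile
    import venv

    def breaks_tests(requirement):
        """Install `requirement` (e.g. 'Django==1.10.5') plus the current project
        into a scratch virtualenv and return True if pytest fails."""
        with tempfile.TemporaryDirectory() as env_dir:
            venv.create(env_dir, with_pip=True)
            pip = os.path.join(env_dir, "bin", "pip")      # Scripts\ on Windows
            python = os.path.join(env_dir, "bin", "python")
            subprocess.run([pip, "install", "-e", ".", requirement], check=True)
            return subprocess.run([python, "-m", "pytest", "-q"]).returncode != 0

    for candidate in ["Django==1.9.12", "Django==1.10.5"]:
        verdict = "breaks tests" if breaks_tests(candidate) else "safe to upgrade"
        print(candidate, "->", verdict)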

StavrosK 20 hours ago [-]

How can you have an upper bound on compatibility? When a library is released, it knows that it works with version 1.3.2 of its dependency, but how can it ever know it doesn't work with 1.4, unless the developer goes back and re-releases the app?

reply

AgentME 16 hours ago [-]

If the library follows semantic versioning, then you can always declare that you work with everything from the current version to before the next major version.

reply
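(in code, the rule described above is just "anything at or above the version you tested against, but below the next major release"; a small sketch, ignoring pre-releases and semver's special rules for 0.x versions)

    def compatible(tested_against, candidate):
        """Caret-style check: candidate satisfies '^tested_against', i.e. it is
        >= the version the author tested with and < the next major version."""
        tested = tuple(int(part) for part in tested_against.split("."))
        cand = tuple(int(part) for part in candidate.split("."))
        next_major = (tested[0] + 1, 0, 0)
        return tested <= cand < next_major

    assert compatible("1.3.2", "1.4.0")      # new minor: assumed compatible
    assert not compatible("1.3.2", "2.0.0")  # new major: assumed breaking
    assert not compatible("1.3.2", "1.3.1")  # older than what was tested against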

StavrosK 12 hours ago [-]

That's what I usually do (although I pin minor, because you never know). I should also be better about following semver, but it just feels wrong to have your library be at version 5.43.55 :/

reply

ploggingdev 1 day ago [-]

Just curious, what aspects of pip/virtualenv specifically do you find subpar in comparison to other languages' package managers?

reply

cderwin 1 day ago [-]

I would look at this comment[0] by sametmax for a critique of pip. My main gripe with virtualenv is that it's required at all: other interpreted languages, like node and elixir for example, have figured out how to handle non-global dependencies without a third-party package. Beyond that, it's frustrating to deploy because it's non-relocatable (in our build/deploy scripts at my last python job we had to use sed all over the place to fix paths), and I find it semi-annoying to have a bunch of different copies of the interpreter and all that goes with it (though this is mostly a minor annoyance -- it doesn't take up that much space and it doesn't matter if it gets out of sync).

Also notable, IMO, is the lack of a tool like rbenv or rustup for python. I can't tell you how many times I have had to try to figure out which python version a given pip executable worked with.

[0] https://news.ycombinator.com/item?id=13460490

reply

rplnt 23 hours ago [-]

> like node [...] have figured out how to handle non-global dependencies

Node would be the last place I'd look for a good solution in. Not sure if there was some progress recently, but it was hell some time back. Modules were huge, taking thousands of other modules with them, majority of those being duplicates. There was no deduplication, no version wildcards I believe either. It wouldn't even work with some tools because the path would end up being hundreds of characters long.

reply

nilliams 20 hours ago [-]

Since npm 3 (about 18 months ago), the node_modules dir tree is now 'flat' and de-duped (where possible).

There have always been version wildcards as far as I know. Long paths caused by the deeply nested tree were a problem in Windows only, addressed (I believe, I can't find open issues on it) by the flattening in npm 3.

reply

brennebeck 1 day ago [-]

> Also notable, IMO, is the lack of a tool like rbenv or rustup for python

Does pyenv not meet your needs there?

reply

cderwin 16 hours ago [-]

Oh cool, I actually hadn't seen pyenv before. Looks like it does indeed solve my problems (from a glance anyway, though I didn't see anything about pip in the readme).

reply

brennebeck 1 hour ago [-]

`pyenv which pip` would be the command that answers the specific point you mentioned :). That also works for any bin that gets shimmed by pyenv.

It also has plugins to automatically work with venv, if you don't mind some 'magic' in your workflow.

Overall it's a solid setup.

reply

kyrias 17 hours ago [-]

> My main gripe with virtualenv is that it's required at all: other interpreted languages, like node and elixir for example, have figured out how to handle non-global dependencies without a third-party package.

venv is in the stdlib since 3.3. (Though I agree with the annoyance at the need.)

reply

crdoconnor 22 hours ago [-]

>it's frustrating to deploy because its non-relocatable

I've tried relocating node_modules. It's a recipe for pain and tears.

I don't see why it's a big problem that virtualenv is a package rather than built in.

I also haven't had much of a problem with virtualenv not being relocatable. If you want it somewhere else, just build it there.

>Also notable, IMO, is the lack of a tool like rbenv

Ummmmm the creator of rbenv also created pyenv.

reply

therealmarv 1 day ago [-]

For people who want to do it right without using an additional tool read this: setup.py vs. requirements.txt by Donald Stufft https://caremad.io/posts/2013/07/setup-vs-requirement/

reply

yeukhon 1 day ago [-]

I gave up populating requirements in setup.py. I just use multiple requirements.txt. This article has been debated for years already and there is absolutely no right / wrong.

reply

conradev 16 hours ago [-]

I'm surprised that no one has mentioned pip-tools: https://github.com/nvie/pip-tools

It's a very similar set of tools. I use pip-compile which allows me to put all of my dependencies into a `requirements.in` file, and then "compile" them to a `requirements.txt` file as a lockfile (so that it is compatible with pip as currently exists).

This looks great, though, I'm excited to check it out!

reply

FabioFleitas 15 hours ago [-]

We use pip-tools on all our Python projects and it works great. I believe the requirements.in compiled to requirements.txt approach is much more sane and less error-prone.

reply

command_tab 1 day ago [-]

See also: https://github.com/pypa/pipfile

I'm glad to see Python getting the same attention as other modern package managers. This is all great work!

reply

georgeaf99 18 hours ago [-]

LinkedIn has a similar open source project that is much more mature. It builds on Gradle features to manage complex dependencies and build Python artifacts. If you include this LinkedIn Gradle Plugin [1] you can automatically run tests in a virtual env and source a file to enter the project's virtual env.

PyGradle [2]: "The PyGradle build system is a set of Gradle plugins that can be used to build Python artifacts"

[1] https://github.com/linkedin/pygradle/blob/01d079e2b53bf9933a...

[2] https://github.com/linkedin/pygradle

reply

zoul 1 day ago [-]

I always wonder if this could be done once and for all languages, instead of Ruby making bundler, Haskell Cabal sandboxes or stack, Perl Brew, etc. Is this where Nix is going?

reply

olejorgenb 19 hours ago [-]

In some way, absolutely.

You can easily get a nice isolated python environment with some packages in nix without using pip, pyenv, etc. `nix-shell -p python pythonPackages.numpy ...`

So far I think it works quite well for most languages as long as the needed packages are in nixpkgs.

Some of the tooling could be better, but the underlying model seems sound.

I'm not really convinced language-specific package managers are needed. Nix isn't perfect yet, but it has come a long way.

reply

 jaybuff 1 day ago [-]

See also, pex: https://www.youtube.com/watch?v=NmpnGhRwsu0

reply

renesd 1 day ago [-]

Neat. Now for questions and comments.

Often people have a requirements.live.txt, or other packages depending on the environment. Is that handled somehow? Can we use different files or sections? [ED: yes, different sections]

Still wondering to myself if this is worth the fragmentation for most people using requirements.txt ? Perhaps the different sections could have a "-r requirements.txt" in there, like how requirements.dev.txt can have "-r requirements.txt". [ED: the pipfile idea seems to have quite some people behind it, and pip will support it eventually. Seems it will be worth it to standardise these things. requirements.txt is a less jargony name compared to Pipfile though, and has a windows/gui friendly extension.]

Other tools can set up an environment, download stuff, and run the script. Will pipenv --shell somescript.py do what I want? (run the script with the requirements it needs). ((I guess I could just try it.)) [ED: doesn't seem so]

Why Pipfile with Caps? Seems sort of odd for a modern python Thing. It looks like a .ini file? [ED: standard still in development it seems. TOML syntax.]

With a setup.py set up, all you need to do is `pip install -e .` to download all the required packages. Or `pip install somepackage`. Lots of people make the setup.py file read the requirements.txt. Do you have some command for handling this integration? Or does this need to be done manually? [ED: seems there's no consideration of this/out of scope.]

Is there a pep? [ED: too early it seems.]

reply

choxi 1 day ago [-]

Is this like Ruby's Bundler for Python? I've just been getting into Python and am really glad to see this, thanks for creating it!

reply

uranusjr 1 day ago [-]

Without some shim like bundler exec, but yeah, you can say that.

reply

igravious 1 day ago [-]

Very similar. I think Pipenv improves on Bundler by leveraging Virtualenv, and Ruby doesn't have a per-project equivalent to Virtualenv that I'm aware of. You can set the path config variable of Bundler to not place the project Gems in a central location, which I think is cleaner and try to remember to always do now.

It would be _super_ interesting if the Python and Ruby communities got together to harmonize every last detail of their packaging toolchain. Who is in?

reply

sametmax 1 day ago [-]

Actually we could also learn from:

reply

K0nserv 1 day ago [-]

Bundler does proper dependency resolution using Molinillo [0] which is also used by CocoaPods [1]. This is definitely something that other package managers can stand to adopt.

0: https://github.com/CocoaPods/Molinillo 1: http://cocopods.org/

reply

mhw 1 day ago [-]

> Ruby doesn't have a per project equivalent to Virtualenv that I'm aware of.

The nearest equivalent is to place a file called '.ruby-version' in the top level directory, containing the version number of the Ruby you want to use. Version numbers come from https://github.com/rbenv/ruby-build/tree/master/share/ruby-b.... rbenv, chruby and rvm all support .ruby-version.

One difference from virtualenv is that the Ruby version managers share single installations of each version of Ruby. My understanding from occasional use of Virtualenv is that it copies the python installation into a new per-project subdirectory, which seems a bit wasteful to me.

> You can set the path config variable of Bundler to not place the project Gems in a central location which I think is cleaner and try to remember to always do now.

Yes, this is what I do. It gives me a shared 'clean' Ruby installation of the right version, plus a project-specific copy of all the gems the project depends on. To me this provides the best trade off between project isolation and not duplicating the whole world. You can set bundler up so this is done automatically by creating '~/.bundle/config' containing

    ---
    BUNDLE_PATH: "vendor/bundle"
    BUNDLE_BIN: ".bundle/bin"

(The BUNDLE_PATH one is the important one; see 'bundle config --help' for other options.)

reply

pkd 22 hours ago [-]

> It gives me a shared 'clean' Ruby installation of the right version, plus a project-specific copy of all the gems the project depends on

You can also accomplish the same using gemsets which are provided by rvm.

reply

igravious 22 hours ago [-]

Using RVM is not always an option and some might consider it an anti-pattern.

reply

---

" I tried building a Rails app from scratch in November 2016. It was the first time in months that I attempted any Ruby programming on my machine. A brew upgrade bumped rbenv and thus threw away all my Ruby installs and I didn't even notice.

...

To clone and run one sample app I needed to upgrade XCode, upgrade the command line tools for XCode (>6GB in total), install a new Ruby version and bundler and then bundle install in the sample app... Simple right? The sample app, like the majority of Rails apps, depends on libv8 somewhere in the dependency graph and that alone is more than 1GB in size.

That whole exercise took hours.

Playing with the very impressive demo I realized it was bringing an HCMB to a game of rock-paper-scissors. I decided to build the frontend with Ember instead, since I know Ember and was running out of time.

Same thing again, need to update nvm, install a respectable node version, install ember-cli, generate the app and install the dependencies via npm and bower. "-- https://www.opensourcery.co.za/2017/01/05/the-jvm-is-not-that-heavy/

---

see "Typical Gulp config vs Typical Brunch config" on http://wayback.archive.org/web/20170125155235/http://brunch.io/

---

"

sfifs 115 days ago [-]

The main reason I like Hugo is almost all other engines require me to install an entire Ruby or JavaScript ecosystem of packages on my machine just to be able to preview my blog post. I don't want to manage the dependency hell when I don't have to.

edem 115 days ago [-]

I have never succeeded at installing any Ruby software in one go.

cryptos 114 days ago [-]

Ruby is terrible in this regard. Even Rails was a pain to setup last time I tried it (some years ago).

scriptstar 115 days ago [-]

Me too :-( "

---

" > Since we now have collisions in valid PDF files, collisions in valid git > commit and tree objects are probably able to be constructed.

I haven't seen the attack yet, but git doesn't actually just hash the data, it does prepend a type/length field to it. That usually tends to make collision attacks much harder, "

---

" What did Haskell get us

Building, packaging, and shipping Python packages – in a reproducible way – has always been somewhat painful. We managed to do it with various freeze files and virtualenv hacks, but it was not pretty. Stackage snapshots completely solved this problem for us. With a curated set of packages that work, the only thing we have to do is to upgrade to a new snapshot occasionally. We are confident that when we do a stack build, we get a working artifact, always. And we get a single binary to ship, which depends only on a few system libraries. "

---

Lev1a 11 days ago [-]

The whole "Do you have the dependencies and a Python env installed? Noß Then you can't run this script/program." was one of the main reasons I switched from Python to Rust, where cargo as the (very good) package manager comes with the language and, because Rust is a compiled language, you build all the dependencies into your executable you aren't dependent(heh.) on the user having installed a runtime that maybe or maybe not has all the dependencies at the required versions.

reply

Ruud-v-A 11 days ago [-]

Indeed, Rust + Cargo and Haskell + Stack are very similar in this regard. Both have great package managers, and both produce a shippable executable with only a few dependencies on system libraries. One notable difference is that Stack downloads the compiler, whereas for Rust, every version of the language comes with a compiler and a Cargo. This ensures that you can check out a year-old commit and still build your project with Stack (modulo breaking changes in Stack, which so far I have not encountered), whereas for Rust the compiler version is not pinned.

reply

---

seagreen 11 days ago [-]

Consider this situation: three different developers are working on the same application. They should all have the exact same dependencies installed, right? Therefore they should be working off of a freeze file of some kind.

Why use an entirely ad-hoc freeze file when you can start from a known-working snapshot (that some of them might already have installed on their machines!) and modify it from there. I find this the perfect option in this kind of situation, and so object to saying that stack is just for non-experts.

reply

baldfat 11 days ago [-]

Cabal made me never use Haskell ever again. I work in two different locations and at home. All three locations never worked the same and all had different issues with Cabal. After hours and hours of trying different things I walked away into the wonderland of Racket.

reply

arianvanp 11 days ago [-]

They're working really hard on improving it though. Cabal 2.0 will have a nix-style build system, in which multiple versions of the same dependency can be installed globally (so no separate sandbox per project). This will solve most problems of where cabal breaks down. This gives us almost the same usefulness as Stack. However, you will have to make sure that there is actually a feasible build plan, by setting up your version bounds correctly. With stack, other people take care of this for you, and you never touch the version bounds, which is relaxing but also gives you less control.

reply

sjakobi 11 days ago [-]

A nice feature of stack that cabal AFAIK will not provide is that it takes care of installing GHC in multiple versions. I think that's very important for newcomers.

reply

---

dllthomas 11 days ago [-]

> relaxing but also gives you less control

More like "leads you to typically exercise less control". You can override versions of packages in a stack snapshot.

reply

tazjin 11 days ago [-]

...

> Stack is a mysterious "solution" to a problem

There's nothing mysterious about stack. It's just a group of people who step up and say "I am responsible for package $x" and then work together to find stable sets of versions that are guaranteed to work together.

chrisdone 11 days ago [-]

Stack requires package sets (aka "snapshots"), for which some kind of CI system (Stackage: http://stackage.org/) has to do a daily build job to see if they all build and pass tests together. That requires some money to keep running, and buy-in from package authors as there is a maintenance overhead each release. It took a few years for Stackage to get enough packages in it to be generally useful, and then we wrote the Stack tool which was able to default to using Stackage snapshots.

There was (and still is, a little bit) resistance to the whole idea of Stackage from the community; people liked the idea of build plans magically being figured out on demand, it's an interesting research problem (it can be hard to let go of an interesting problem when a solution side-steps it altogether). I believe eventually many people changed their minds after experiencing the simplicity of using Stack and not having build hell be a substantial part of their development cycle.

Python would likely have to go through the same process. Although with Stack and Yarn providing frozen builds (and Quicklisp), the idea has some precedent, which makes it an easier idea to sell. I mean, Debian and a bunch of other OSes do it like this, but experience shows programmers don't pay attention to that.

reply

runeks 11 days ago [-]

Stack enables application development in Haskell, as opposed to just library development. A proper library doesn't have more than 20-ish dependencies, in my opinion, and manually handling these and their version bounds is not a problem.

But when writing applications with hundreds of dependencies, manually figuring out a mutually compatible dependency range for all packages just isn't an option.

---

issue to avoid:

" As you can see, install order actually affects the bits that will be run. The key here is that test-a has an absolute dependency on test-c 1.0.0, while test-b has a semver range dependency on test-c ^1.0.0. That means if test-b is installed first, it correctly picks up 1.0.1, then a will of course want 1.0.0. However, if a installs first, then 1.0.0 will be installed, which satisfies b, and thus both get 1.0.0. " -- https://github.com/npm/npm/issues/10999

---

"Cargo is a solid package manager and build tool. Adding a new library dependency: a one line addition to Cargo.toml. Forking a dependency: hit “fork” in GitHub?; tweak forked repo; add git attribute to dependency in Cargo.toml. Building someone else’s Rust project: 99% chance all you need to do is run cargo build

...

I’m a huge fan of vendoring dependencies in applications, and cargo-vendor worked well for that task. It’s not quite up to par with bundler’s vendoring (upgrading dependencies requires a slightly awkward dance of commenting cargo-vendor’s config out, running cargo update, deleting the outdated vendored files, rerunning cargo vendor, and then restoring cargo-vendor’s config), but it gets the job done.

...

The include_* macros are great for “packaging”. Being able to compile small assets directly into the binary and eschewing run time file loading is fantastic for a small game. " [2]

---

" cfg is much nicer than #ifdef for managing conditionally compiled code. (It turns out that first class support for something is better than a processor layer glued onto the top.) I do have a few gripes about cfg, the biggest of which is that there’s no way to derive custom configurations (I would love to be able to replace all my #[cfg(any(target_os = "ios", target_os = "android", target_os = "emscripten"))]s with something like #[cfg(opengles)].) It turns out this is possible!. It was also frustrating to occasionally break the build on platforms other than the one I was primarily developing on but not have a way to determine that without doing a full build on every target platform. (It seems like the forthcoming portability lint will solve some or all of this pain.) "

---

" It’s fantastic that rustup and cargo made compiling for a whole slew of targets (10 in total) incredibly easy, but I was definitely in uncharted territory with regards to actually packaging up a Rust application to be distributed on all of them. With the bulk of existing Rust applications being targeted at servers or the command-line, tasks like “Add an icon to my .exe” don’t have ready-made solutions. iOS and Android were particularly tough, as almost all existing literature on getting Rust onto iOS or Android assume you’re building a library (rather than an application). My current solutions to packaging are very, very duct-tape-y, but I hope to clean them up and make them publicly available at some point. (I wrote up a few small details about my current solutions for a friend, and if you’re actively attempting to package a game up for distribution and are running into difficulties, feel free to email me and I’d be happy to share whatever other tips I can.) "

---

raverbashing 247 days ago [-]

> We're professionals, after all, and TypeScript and React were not built by some teenage hackers.

Overall correct

> The reason is that we started to build complex applications instead of enhancing grandma's blog using jQuery.animate, get over it.

And this is where we disagree

Complexity is needed sometimes, needless complexity only brings the overall value down

If I want to do a website using Django I need to get: Django. Period.

I may need some other libraries, but they're much fewer than any basic node.js project, even with things like Flask

I have one package manager: pip. It works

With express.js you need a library to parse an HTTP request body ffs. https://github.com/expressjs/body-parser

veeti 247 days ago [-]

> I have one package manager: pip. It works

Funny, because this is not the general sentiment in the Python community. See links like [1], [2], [3]. Thankfully, things are improving.

[1] http://lucumr.pocoo.org/2012/6/22/hate-hate-hate-everywhere/

[2] https://www.reddit.com/r/Python/comments/zrm3h/there_have_be...

[3] https://blog.ionelmc.ro/2015/02/24/the-problem-with-packagin...

witty_username 247 days ago [-]

I found Python packaging very easy. pip install works for tons of packages.

---

heydonovan 1 day ago [-]

Package management was so frustrating coming from Ruby to Python. There are so many programs, and version conflicts that I run into all the time. With gems and bundler, it was rare if things didn't work.

reply

cshenton 1 day ago [-]

If you get into the habit of making a virtual environment and requirements.txt in every repo then it's smooth. Just activate the env when you're using the repo and use vanilla pip install to install dependencies. Then pip freeze them into the requirements file. It's a very similar workflow to a gemfile and bundler.

reply

dasil003 1 day ago [-]

I worked on python for a few months last year, and although I didn't find anything representing a canonical description of that workflow, I did eventually come to this conclusion. The only thing was that pip freeze > requirements.txt didn't really give the same power as I was used to with Gemfile.lock and the various permutations of bundle update. I forget the specifics, but I remember being unable to get the finer points of optimistic version locking to work in a way I found acceptable.

That said, I have been doing ruby since before Bundler, and I really have to take my hat off to what Yehuda and company accomplished with Bundler. It was both a technical and open source community triumph to get Bundler done, stable and covering the breadth of use cases it applies to.

reply

yes_or_gnome 21 hours ago [-]

You're going to have a hard time supporting multiple versions of Python that way. If I started a new package today I would target 3.4, 3.5, 3.6, 3.7-dev, and likely, 2.7. A lot of well maintained packages have different dependencies based on the exact version of Python. If you're crazy enough to support 2.6, then there will be a lot of additional packages in pip freeze.

Ideally, packages would have just one set of dependencies and the packages would be version locked, but that's just not the case in the Python community.

reply

highd 1 day ago [-]

I highly recommend switching to the conda distributions and package ecosystem. A lot of those issues have gotten a lot better for me since I did that.

reply

joshuamorton 1 day ago [-]

I'm curious, what issues do you run into with python3 incompatibility?

Most scientific stuff I've done works in python3. In fact, it's now possible to run a tensorflow/opencv/scipy environment entirely from pip in a virtualenv, though it won't be the fastest, which is amazing. Doing that kind of thing in any other language would require docker.

In general, if you're sticking to python3, the answer will always be "use pip", everything should work with pip, and everything should install as a binary without needing to build source.

Edit:

There's exactly one context where I use python2, and its for a robotics project where some transitive dependencies are python2 only.

reply

---

" Cargo is Rust’s package manager and build tool and it’s great. I think this is pretty well known. This is especially apparent to me right now because I’ve been using Go recently – There are lots of things I like about Go but Go package management is extremely painful and Cargo is just so easy to use.

My dependencies in my Cargo.toml file look something like this. So simple!

    [dependencies]
    libc = "0.2.15"
    clap = "2"
    elf = "0.0.10"
    read-process-memory = "0.1.0"
    failure = "0.1.1"
    ruby-bindings = { path = "ruby-bindings" } # internal crate inside my repo

"

-- [3]

---

package metadata should include the version of Oot that the package is for

---

	GitHub shouldn't allow username reuse (donatstudios.com)
214 points by donatj, 147 comments
	

dasil003 3 days ago [+35] ...

I get that it's a real issue that needs fixing, but a GitHub URL is not a secure package identifier, it was never designed this way, and it's an unfortunate hack that it's become a de facto standard.

niftich 3 days ago [-]

This situation arises solely from Go's myriad dependency management tools [1], including the latest "official experiment" 'go dep', not having the notion of a package repository. Instead, package identity is tightly bound to package location [2], and doing "package management" on this identity results in HTTP GETs or equivalents to that location. This is documented as a core convention [2] of the Go world, by the designers themselves.

I was always bewildered by this choice [3][4], because many, many other package management systems have independently realized this to be a bad idea -- and that it's valuable to have an extra layer of indirection between package identity and package location to prevent this exact situation. The component that solves this indirection is called a package repository, which can then institute strict rules about immutability, deletions, and naming, if so desired. NPM didn't, for a long time, until they got burned. GitHub? is not a package repository, but due to the Go community's guidance, is effectively being used as such in a large volume of Go code in the wild.

[1] https://github.com/golang/go/wiki/PackageManagementTools [2] https://golang.org/doc/articles/go_command.html#tmp_1 [3] https://news.ycombinator.com/item?id=12189356 [4] https://news.ycombinator.com/item?id=15677338

reply

brown9-2 3 days ago [-]

What would a Go package repository host? Bundles of source code? Object files?

reply

petre 2 days ago [-]

Bundles of source code, like CPAN and npm uses. Otherwise you'd have to ship binaries for every platform.

reply

---

resoluteteeth 2 days ago [-]

I've tried messing around with haskell a few times, but the biggest problem I have is the tooling.

I just tried to get haskell set up a few days ago and I couldn't get it working with either vscode or Idea. I also couldn't get ihaskell (jupyter notebook) installed successfully.

I might give it another shot based on these instructions, but the fact that pretty much every program seems to make different assumptions about whether you're using cabal or stack and how you have them set up is really annoying. I assume if you are really knowledgeable about these tools it's not that bad, but as a haskell beginner they are completely impenetrable. (Even just the yaml files used by stack seem completely incomprehensible compared to the formats used by most languages' build tools.)

Considering that the language itself has a pretty steep learning curve, it's really frustrating wasting several hours just trying to get the haskell environment set up without success and not even getting the point of being able to try to actually use it.

I've had a pretty bad opinion of the Ocaml and F# tooling in the past, but the haskell tooling is just so, so much worse. (F# seems to have gotten to a point where the tooling is fine if you can just use .net core, although it gets horrible again if you then need to also use mono at the same time so you can use the interpreter.)

It's especially bad if you compare it to something like rust where cargo works so well it's actually in itself a reason to use rust.

reply

---

allenleein 9 days ago [-]

For people who wanna learn Haskell:

1. Don't touch Haskell platform. Install Stack.

why? https://mail.haskell.org/pipermail/haskell-community/2015-September/000014.html

[4] (this link also provides uninstall instructions to uninstall Haskell Platform)

---

Golang is proposing a package management/versioning system where the package manager selects the MINIMUM version of each dependency that meets the constraints of all of the parts of the program. Furthermore, different parts of the program can use different dependency versions but only if they request different major versions. [5] [6]

There are two important design choices here: (1) by default, prefer the minimum (oldest) version that satisfies the constraints rather than the newest, and (2) allow multiple major versions of the same dependency to coexist in one build.

(also note that 'oldest' does not necessarily mean 'smallest version'; eg Python 2.7 was released after Python 3.0)

(also note that if a library author publishes something with only a tertiary version bump, and then realizes that there was a backwards incompatibility after all, there needs to be a mechanism to renumber that version in the repo!)

(also note that Go's proposal is a little more nuanced than that; the 'oldest by default' is only when resolving transitive dependencies, direct dependencies without specified versions are newest by default [7])

i think even if we go with Go's "oldest by default" (which i'm not sure we should), we should allow the specification of a MAX version, not just a min version (unlike Go's proposal)

this discussion also shows an advantage for SemVer over SdVer: by distinguishing patch versions from minor versions (non-breaking with no new functionality from non-breaking with new functionality), it provides an opportunity for a package manager to say "latest patch version of the same minor version", on the assumption that patch versions are probably stuff like bugfixes and security fixes. But i'm not aware of any package manager that actually does that; eg cargo updates to the latest minor version [8]. Also, of course there are plenty of reasons that a library maintainer might release an unimportant-to-upgrade-to patch version; in SemVer a patch version is just anything that is backwards-compatible and that does not add new functionality, so it might be a bugfix or security fix, but it might also just be a performance improvement, a documentation improvement, or a code refactoring. We could imagine that in SdVer we have a 'patch' version that specifically means that the maintainer is saying 'hey this is super important please upgrade to it if possible' and the package manager would then take the latest patch version available but the earlier tertiary version given that; but this adds complexity and still doesn't help with the cases where (a) there is a bug or security issue but the maintainer only fixes it in HEAD and doesn't backport the fix, or (b) the maintainer accidentally makes a new patch release backwards-incompatible.

---

mb the answer is some or all of these:

1) by default, update dependencies to the newest possible version, but have a flag where the end-user (or end-builder, if compiling) can choose to use the oldest version instead. The idea being that if stuff isn't working, they can try the oldest version.

2) have a way in the repo for library authors to mark versions in which they fixed an important bug (like, security issue or something else equally or more important -- "trust me, you really don't want to be running any version before this one" important). Then the packaging system by default selects the oldest version GIVEN that it's newer than the last such mark. Yeah, this empowers library authors to just mark every version if they want to encourage you to stay with the newest version

3) as in the Go proposal, for new direct dependencies select the newest compatible version; the oldest-version stuff is only for transitive dependencies.

4) unlike the Go proposal, i want to allow package authors to be able to specify a min version, a max version, individual versions in that range to exclude, and possibly even preferred versions in that range (see the sketch after this list)

5) Even Rust's Cargo uses lockfiles and will only change a version when you add, remove, or update a package in your cargo [9]. We should use lockfiles, and when changing the lockfile, rotate the old one into lockfile.old.1 etc, so the user can easily 'go back' if everything breaks. Have a command-line argument to override the default behavior of sticking with the minimum compatible version, and instead select the newest SemVer-compatible version for everything, or for just a given (possibly transitive) dependency, and possibly for its dependencies as well.

6) do we really want LOWEST version, or do we want OLDEST version? "I think this is confusing older versions and lower. You could, I suppose, build a package manager that forbids publishing a version number lower than any previously published version of the package and thus declare this to be true by fiat. But, in practice, I don't think most package managers do this. In particular, it's fairly common for a package to have multiple simultaneously supported major or minor versions. For example, Python supports both the 2.x and 3.x lines. 2.7 was released two years after 3.0. When a security issue is found in a package, it's common to see point releases get released for older major/minor versions. So if foo has 1.1.0 and 1.2.0 out today and a security bug that affects both is found, the maintainers will likely release 1.1.1 and 1.2.1. This means 1.1.1 is released later than 1.2.0." [10]
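(to make proposal 4 above concrete, here is a sketch, in Python purely for illustration, of what a per-dependency spec with a min version, max version, exclusions, and preferences might look like, and how a resolver could filter and order candidates from it; the names and field choices are invented, not a settled design)

    from dataclasses import dataclass

    @dataclass
    class DepSpec:
        """Hypothetical constraints a package author could declare for one dependency."""
        name: str
        min_version: tuple                 # inclusive lower bound
        max_version: tuple = None          # inclusive upper bound, if any
        exclude: frozenset = frozenset()   # known-bad versions inside the range
        prefer: tuple = ()                 # versions to try first, in order

        def candidates(self, available):
            """Filter the available versions by the constraints, then order them:
            preferred versions first, then the rest oldest-first (per proposals 3 and 6)."""
            ok = [v for v in available
                  if v >= self.min_version
                  and (self.max_version is None or v <= self.max_version)
                  and v not in self.exclude]
            preferred = [v for v in self.prefer if v in ok]
            rest = sorted(v for v in ok if v not in preferred)
            return preferred + rest

    spec = DepSpec(name="libfoo",
                   min_version=(1, 2, 0),
                   max_version=(1, 9, 0),
                   exclude=frozenset({(1, 4, 0)}),  # e.g. a known-broken release
                   prefer=((1, 6, 3),))
    print(spec.candidates([(1, 1, 0), (1, 2, 0), (1, 4, 0), (1, 6, 3), (1, 8, 1), (2, 0, 0)]))
    # -> [(1, 6, 3), (1, 2, 0), (1, 8, 1)]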

---

[11]

"

For years I’ve noodled around with various setups for a Python development environment, and never really found something I loved – until now.

The Setup

1. pyenv

Why? I need to run multiple Python versions, isolated from the system Python. pyenv makes it easy to install, manage, and switch between those multiple Pythons. As a bonus, pipenv integrates with pyenv and will automatically install missing Python versions if they’re required by a Pipfile.

...

pyenv install 3.6.4

...

2. pipsi

Why? pipsi lets me install Python-based CLI stuff (like youtube-dl, awscli, doc2dash, etc.) without those projects’ dependencies messing up my global Python.

...

3. pipenv

Why? pipenv handles dependency- and virtual-environment-management...

...

With this all together, all my use-cases are handled simply:

To start new projects, I just make a directory and type pipenv install ... to start installing my dependencies. Pipenv creates a Pipfile for me, and manages it, and I’m up and running.

To work on existing projects, I clone a repository and either run pipenv install (for projects that already have a Pipfile), or pipenv install -r requirements.txt (which as a side-effect automatically converts the requirements file to a Pipfile).

If I need to switch Python versions, I run pyenv local <version> in my project directory. I can also add:

[requires]
python_version = "<version>"

to my Pipfile, and pipenv will enforce that version requirement.

When I want to install CLI stuff, I use pipsi:

pipsi install awscli
pipsi install doc2dash

... etc

When it comes time to deploy, both Heroku and cloud.gov will read and understand my Pipfile. If I need to deploy to something that doesn’t do Pipfile-based installs, I create a requirements.txt by running pipenv lock --requirements.

"

---

" the import compatibility rule: “If an old package and a new package have the same import path, the new package must be backwards compatible with the old package.” ... If we adopt semantic versioning for Go packages, as most Go developers expect, then the import compatibility rule requires that different major versions must use different import paths. This observation led us to semantic import versioning, in which versions starting at v2.0.0 include the major version in the import path: my/thing/v2/sub/pkg. ... partial code upgrades. In a large program, it’s unrealistic to expect all packages in the program to update from v1 to v2 of a particular dependency at the same time. Instead, it must be possible for some of the program to keep using v1 while other parts have upgraded to v2. But then the program’s build, and the program’s final binary, must include both v1 and v2 of the dependency. Giving them the same import path would lead to confusion, violating what we might call the import uniqueness rule: different packages must have different import paths. The only way to have partial code upgrades, import uniqueness, and semantic versioning is to adopt semantic import versioning as well. ... import compatibility simplifies version selection, which is the problem of deciding which package versions to use for a given build. The constraints of Cargo and Dep make version selection equivalent to solving Boolean satisfiability, meaning it can be very expensive to determine whether a valid version configuration even exists. And then there may be many valid configurations, with no clear criteria for choosing the “best” one. Relying on import compatibility can instead let Go use a trivial, linear-time algorithm to find the single best configuration, which always exists. This algorithm, which I call minimal version selection, in turn eliminates the need for separate lock and manifest files. It replaces them with a single, short configuration file, edited directly by both developers and tools, that still supports reproducible builds. " -- [12]

---

" In addition to the core ideas of import compatibility, semantic import versioning, and minimal version selection, the vgo prototype introduces a number of smaller but significant changes motivated by eight years of experience with goinstall and go get: the new concept of a Go module, which is a collection of packages versioned as a unit; verifiable and verified builds; and version-awareness throughout the go command, enabling work outside $GOPATH and the elimination of (most) vendor directories. ... The result of all of this is the official Go proposal, which I filed last week

https://golang.org/design/24301-versioned-go

" -- [13]

---

"

bigdubs 6 hours ago [-]

The lack of support for private repos is still a dealbreaker for both `dep` and `vgo`.

reply

teabee89 5 hours ago [-]

I wanted to find out, so I looked at vgo's source code, and found that vgo does indeed have some solution to this using Github Access Tokens and ~/.netrc: https://github.com/golang/vgo/blob/b6ca6ae975e2b066c002388a8...

reply

JepZ 6 hours ago [-]

How is it different from using 'go get'?

reply

Groxx 1 hour ago [-]

`go get` has so many other blockers for enterprise use that "support for mirrors" barely registers. But yes: it's no different, `go get`'s lack of mirror support is also a blocker for tons of businesses.

reply

bigdubs 4 hours ago [-]

govendor anyway will use local versions of deps if you have them in your gopath

reply "

---

i haven't watched this yet:

Things I Regret About Node.js [video] (youtube.com) by Ryan Dahl - https://www.youtube.com/watch?v=M3BM9TB-8yA

tnolet 23 hours ago [-]

Having worked with Maven, Gradle, Ruby Gems, Pip and the non-existing Go package management I must say I actually really like the Node / NPM combo. I guess artists are their own worst critics.

edit: forgot Scala's SBT, admittedly a builder using Maven repo's but still an excellent example of how bad UX in this area can get.

reply

stickfigure 22 minutes ago [-]

I've worked with all of these as well, and npm is probably my least favorite. Above all else, I expect my build system to do ONE THING:

Exactly reproduce a build at a later date

Part of it is technological (npm didn't have package-lock.json until very recently), part of it is organizational (the npm repository is surprisingly fluid), and part of it is cultural (the JS community likes zillions of tiny constantly-changing libraries). The net result is that I cannot walk away from a JS build for three weeks without something breaking. It breaks all the time. UGH.

reply

CSMastermind 6 hours ago [-]

npm is the worst software I use daily.

And let's not forget how terrible things were pre-yarn with the nested folder structure of node_modules and no lock file.

Compare that to NuGet where I've literally never had any of these problems.

reply

darzu 5 hours ago [-]

Here's one thing Node and npm are great at and NuGet fails at completely: local development of two packages. With npm, I can use "npm link" to redirect a package reference to a local folder. With NuGet, the best you can do is edit the .csproj and change the nuget reference to a project reference (if you can find the original source code). This makes simple step-through debugging across package boundaries a chore every time, whereas a source-based package system doesn't have this issue.

reply

n0us 1 hour ago [-]

"npm link" only establishes a symlink between the two directories and doesn't respect .npmignore or behave in any comparable way to publishing a package and installing it. Sometimes the only way to debug is to repeatedly publish and re-install the package you are developing.

reply

kenhwang 18 hours ago [-]

Recently, when I use npm, it mostly just works. There's still the occasional node/npm version mix and match to get certain libraries to work and accidental sudo; the former might just be the poor quality of the ecosystem, and the latter is almost just user error.

I'd put it par with rubygems, ahead of pip, gradle, maven, a little bit behind mix, and far behind cargo. Not a bad spot to be by any means.

reply

saurik 17 hours ago [-]

> far behind cargo

For the purposes of this discussion, it is useful to note that cargo was written by Yehuda Katz (wycats), who had previously written Bundler, and so actually had some concept of what mistakes he had made before and experience specifically in this area, in order to apparently (I haven't used it yet, but I have heard lots of good things) finally have built something truly great.

reply

jergason 7 hours ago [-]

He also helped write yarn, an alternative client to the official npm one.

reply

 jonny_eh 23 hours ago [-]

You'll find much harsher critics of Node/NPM in these parts!

I thought Ryan did a great job of explaining his regrets without giving the impression that Node was a "mistake", is "inferior", or anything so drastic.

reply

allover 19 hours ago [-]

> You'll find much harsher critics of Node/NPM in these parts!

They're ill-informed. GPP is correct that, for example pip is fundamentally inferior to npm [1], and those that insist on throwing shade at npm on HN should be corrected. They're wrong, and insulting a sound, well maintained project, without basis.

[1] https://github.com/pypa/pip/issues/988

reply

kbenson 15 hours ago [-]

> those that insist on throwing shade at npm on HN should be corrected.

Preferably by giving them better ammunition, since I do see NPM as substandard in quite a few ways, which is inexcusable when there do exist examples to learn from (whether it be a positive or negative influence).

First, it helps to clarify whether we are talking about npm the client or NPM the repository and ecosystem. Client issues are generally easily resolved, just use a different client. For npm, this could be yarn. For cpan, this could be cpanm, or cpanplus, etc.

If it's indeed the repository we are talking about, there are some obvious things that could be done to greatly improve the NPM module ecosystem. For example, how about automating module tests against different versions of Node to determine whether it's in a good running status now for the current and prior interpreter versions, on the platforms it can be run on? [1] How about a prior version, in case you're trying to figure out if the version you're on has a known problem on the platform combo you're running on? [2] Or perhaps you want to know what the documentation and module structure looked like for a module a long time ago, like 20 published versions and over a decade ago, because sometimes you run across old code? [3] Or as an author, the ability to upload a version, even for testing, and getting an automated report a couple days later about how well it runs on that entire version/architecture matrix with any problems you might want to look into?

In case you didn't notice the trend, I'm talking about CPAN here, which has been in existence for over two decades, and many of the features I've noted have been around for at least half that time. All in and for a language that most JS devs probably think isn't in use anymore, and on encountering a professional Perl developer would probably think they just encountered a unicorn or dinosaur.

Sure, NPM isn't all that bad compared to some of the examples that were put forth, but the problem is that those examples are a limited subset of what exists. Given the current popularity of JS and the massive corporate interest and sponsorship, I frankly find the current situation somewhat disgusting. The only thing keeping JS from having an amazing module ecosystem is ambition. Sure, NPM might be a sound, well maintained project (points I think are debatable), but it could be so much more, and that's what we should be talking about, not the almost-annual fuckups [4] they seem content with dealing with.

1: http://matrix.cpantesters.org/?dist=DBIx-Class+0.082841

2: http://matrix.cpantesters.org/?dist=DBIx-Class+0.08271

3: https://metacpan.org/pod/release/MSTROUT/DBIx-Class-0.08000/...

4: https://hn.algolia.com/?query=npm&sort=byPopularity&prefix=f...

reply

innocentoldguy 22 hours ago [-]

I've worked with Maven, Ruby's gems, Python's pip, whatever Go's non-existent package management is called, and Node, via npm and yarn. I'd have to say my favorite tooling and package management is found in Elixir's mix utility though. I don't mind the others. They are all decent enough, but I think the Elixir team really nailed it with mix.

reply

---

don't do this:

" The problem is that when their system decides that a package shouldn't be up it completely removes the package, as if it never existed, and allows the namespace to be reused immediately. "

" The problem is that NPM allowed packages to be re-uploaded by new authors after the initial versions had been spam filtered. Especially since allowing packages to be re-uploaded by new authors was the core issue of the left-pad debacle, and the one thing NPM said they'd fixed in response. "

" Of course, NPM's response to the kik/left-pad problem was also pretty awful. Make it so users can't delete packages. "

---

also there was an incident once where a company claimed a trademark on a package in a repo and demanded that it be taken down:

http://azer.bike/journal/i-ve-just-liberated-my-modules/ https://blog.npmjs.org/post/141577284765/kik-left-pad-and-npm

we should probably have a policy to deal with that. imo the policy should give the platform broad discretion to take down packages that might be violating trademarks (but the namespace should then be blocked forever, not made available for reuse by anyone). Unless, i guess, the original package owner indemnifies us (and is credit-worthy for that indemnification) while they fight it out legally, or something like that. Maybe in fact also require a letter from the original package owner's attorney, advising us that they are qualified to practice trademark law in the relevant jurisdictions, that they think we have no liability, that they are indemnifying us, etc.

---

ubernostrum 5 months ago [-]

This is your occasional reminder that package signing is not a panacea, and as typically proposed for community package repositories like npm, PyPI, etc. would likely do absolutely nothing.

For example, people often insist in the Python world that PyPI should support package signing. But it already does -- you can generate a signature for a package and upload the signature with the package. Django does this, and has been doing it for years. You can also get package download/install tools that will check the signature. But then what?

What people really mean when they say there should be "signed packages" is that there should be a whole bunch of invisible infrastructure (set up by... who, exactly? Maintained by... who, exactly?) to decide which PGP keys are authorized to sign releases of which packages. And that's close to an intractable problem for an anyone-can-contribute community repository like npm or PyPI.

tragic 5 months ago [-]

This is a very important point. I work for a company that publishes client libs for many different package indexes (although not npm). This is a fairly well automated process, but it takes minutes (if that) to push a new version to pypi, rubygems etc, but at least a few hours of fiddling about to get something on maven, which of course has this security infrastructure.
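For concreteness, the mechanical half that already exists looks something like the sketch below: verify a detached signature with gpg (the file names are hypothetical). The missing half -- the infrastructure that decides which keys are allowed to sign which packages -- is exactly what the comment above says nobody has built for npm/PyPI-style repositories:

    # Sketch of verifying a detached signature with gpg; assumes `gpg` is installed and
    # the signer's key is already in the local keyring. Deciding *which* keys should be
    # trusted for *which* packages is the unsolved part, not this step.
    import subprocess

    def verify_detached_signature(archive="Django-2.0.tar.gz",
                                  signature="Django-2.0.tar.gz.asc"):
        """Return True if gpg accepts the detached signature for the archive."""
        result = subprocess.run(["gpg", "--verify", signature, archive],
                                capture_output=True, text=True)
        return result.returncode == 0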

---

Giroflex 2 days ago [-]

> Even something as simple as installing a library is a conceptual leap for these people (why wouldn't the software just come with everything needed to work?). > Have you ever tried explaining the various python package and environment management options to someone with a background in Excel/SQL?

I don't understand the difficulty I've often seen voiced against this. Why would a newbie or someone who just wants to get analytical work done need anything beyond installing Python and doing `pip install library`? It's certainly orders of magnitude easier and faster than, say, using a C library. The only trouble I can see a newbie running into is if they want to install a library which doesn't have precompiled wheels and they need some dependencies to build it, but that's rarely an issue for popular packages.

reply

CJefferson 2 days ago [-]

Well pip install library needs root, which you probably don't have. So now you have to teach them about making, and activating, virtual environments.

Also, they can't easily search through the packages in a nice GUI and just click on the one they want to install.

reply

Giroflex 2 days ago [-]

>pip install library needs root

Hmm, not really. It's actually advised against [1].

[1] - https://askubuntu.com/questions/802544/is-sudo-pip-install-s...

reply

newen 2 days ago [-]

Pip install needs root on my ubuntu install, my lab's and university's old redhat servers and my windows for linux install. I've had to install anaconda python to get any real work done on all three systems. Anaconda works fine for me but I've not even had to think about anything to install packages in R.

reply

sdabdoub 1 day ago [-]

try

    pip install --user

or virtualenvs

reply

rspeer 1 day ago [-]

Ubuntu doesn't ship with pip or virtualenv. In fact it ships with a version of Python where the built-in equivalent to virtualenv, pyvenv, is explicitly disabled.

So you have to install extra Python packages, as root. You have to have that Python experience that guides you to install as few of them as you can, just enough so you can get started with a virtualenv, so you don't end up relying on your system Python environment.

And this is really hard to explain to people who aren't deeply familiar with Python. "Never use sudo to install Python packages! Oh, you got errors. We obviously meant use sudo for two particular packages and never again after that."

In the terrible case where you don't have root, you have to ignore Ubuntu's version of Python and compile it yourself from scratch. Hope the right development libraries are installed!

Maybe I'm wrong and there's a method I've overlooked. If there is: please show me how to install a Python package on a fresh installation of Ubuntu 16.04, without ever using sudo, and I will happily spread the good news.

reply

int_19h 1 day ago [-]

That sounds like a major problem with Ubuntu, rather than with Python or pip.

On Windows, meanwhile, the standard Python installer gets all this set up properly in like three clicks. Better yet, because it installs per-user by default, "pip install" just works. And if you still choose to install it globally, it will fail, but it will tell you exactly what you need to do to make it work:

    Could not install packages due to an EnvironmentError: [WinError 5] Access is denied: ...
    Consider using the `--user` option or check the permissions.

One can't help but wonder how we ended up in a situation where the most popular Linux distro somehow does Python worse than Windows.

reply

angry_octet 1 day ago [-]

Don't despair, in the Anaconda installed with visual studio (now a default) you can't update or install packages without being admin! And if you install Anaconda again it merges the start menu entries and you can't tell which is which...

reply

nzjrs 1 day ago [-]

This is bad advice. Pip should be used in virtual environments, and not to install system packages

reply

txcwpalpha 1 day ago [-]

While you're right that it's bad advice, it also highlights the problem with pip that these less experienced people have. The ideal way to deal with Python packages is virtualenvs, but setting up a virtualenv, and then activating it every time you want to use it (or setting up tools to do it for you) is an incredibly huge headache for less experienced people to deal with. R doesn't require that whatsoever.

reply

jupiter90000 1 day ago [-]

Neither language requires an isolated dev environment, but it can help with avoiding headaches. As python has things like virtualenv and buildout, fortunately R has 'packrat' available, which provides a similar isolated/reproducible dev environment solution.

https://rstudio.github.io/packrat/

reply

int_19h 1 day ago [-]

There's nothing wrong with pip for installing per-user packages outside of virtual environments.

reply
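For reference, the no-root route this subthread is pointing at is either a per-user install (pip install --user) or a virtual environment. A minimal sketch using only the standard library's venv module, assuming a Python that actually ships venv and pip (which, as rspeer notes above, a stock Ubuntu may not):

    # Sketch of installing a package without root: create a virtualenv-equivalent with the
    # stdlib venv module, then use its private pip. Paths assume POSIX; on Windows the
    # interpreter lives at env\Scripts\python.exe instead of env/bin/python.
    import subprocess
    import venv

    venv.create("env", with_pip=True)      # creates ./env with its own python and pip
    subprocess.run(["env/bin/python", "-m", "pip", "install", "requests"], check=True)
    # The per-user alternative, with no environment at all:
    #   python -m pip install --user requests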

YeGoblynQueenne? 1 day ago [-]

>> Why would a newbie or someone who just wants to get analytical work done need anything beyond installing Python and doing `pip install library`? It's certainly orders of magnitude easier and faster than, say, using a C library.

Except when it isn't. For instance, because some wheel fails to build because you're lacking the VC++ redistributable (or it's not where pip thinks it should be):

  C:\Users\YeGoblynQueenne\Documents\Python> pip install -U spacy
  Collecting spacy
    Downloading spacy-1.2.0.tar.gz (2.5MB)
      100% |################################| 2.5MB 316kB/s
  Collecting numpy>=1.7 (from spacy)
    Downloading numpy-1.11.2-cp27-none-win_amd64.whl (7.4MB)
      100% |################################| 7.4MB 143kB/s
  Collecting murmurhash<0.27,>=0.26 (from spacy)
    Downloading murmurhash-0.26.4-cp27-none-win_amd64.whl
  Collecting cymem<1.32,>=1.30 (from spacy)
    Downloading cymem-1.31.2-cp27-none-win_amd64.whl
  Collecting preshed<0.47.0,>=0.46.0 (from spacy)
    Downloading preshed-0.46.4-cp27-none-win_amd64.whl (55kB)
      100% |################################| 61kB 777kB/s
  Collecting thinc<5.1.0,>=5.0.0 (from spacy)
    Downloading thinc-5.0.8-cp27-none-win_amd64.whl (361kB)
      100% |################################| 368kB 747kB/s
  Collecting plac (from spacy)
    Downloading plac-0.9.6-py2.py3-none-any.whl
  Requirement already up-to-date: six in c:\program files\anaconda2\lib\site-packages (from spacy)
  Requirement already up-to-date: cloudpickle in c:\program files\anaconda2\lib\site-packages (from spacy)
  Collecting pathlib (from spacy)
    Downloading pathlib-1.0.1.tar.gz (49kB)
      100% |################################| 51kB 800kB/s
  Collecting sputnik<0.10.0,>=0.9.2 (from spacy)
    Downloading sputnik-0.9.3-py2.py3-none-any.whl
  Collecting ujson>=1.35 (from spacy)
    Downloading ujson-1.35.tar.gz (192kB)
      100% |################################| 194kB 639kB/s
  Collecting semver (from sputnik<0.10.0,>=0.9.2->spacy)
    Downloading semver-2.7.2.tar.gz
  Building wheels for collected packages: spacy, pathlib, ujson, semver
    Running setup.py bdist_wheel for spacy ... error
    Complete output from command "c:\program files\anaconda2\python.exe" -u -c "import setuptools, tokenize;__file__='c:\\users\\yegobl~1\\appdata\\local\\temp\\pip-build-7o0roa\\spacy\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'ex
  ec'))" bdist_wheel -d c:\users\yegobl~1\appdata\local\temp\tmpypkonqpip-wheel- --python-tag cp27:
    running bdist_wheel
    running build
    running build_py
    creating build
    creating build\lib.win-amd64-2.7
    creating build\lib.win-amd64-2.7\spacy
    copying spacy\about.py -> build\lib.win-amd64-2.7\spacy
    [217 lines truncated for brevity]
    copying spacy\tests\sun.tokens -> build\lib.win-amd64-2.7\spacy\tests
    running build_ext
    building 'spacy.parts_of_speech' extension
    error: Microsoft Visual C++ 9.0 is required (Unable to find vcvarsall.bat). Get it from http://aka.ms/vcpython27
  
    ----------------------------------------
    Failed building wheel for spacy
    Running setup.py clean for spacy
    Running setup.py bdist_wheel for pathlib ... done
    Stored in directory: C:\Users\YeGoblynQueenne\AppData\Local\pip\Cache\wheels\2a\23\a5\d8803db5d631e9f391fe6defe982a238bf5483062eeb34e841
    Running setup.py bdist_wheel for ujson ... error
    Complete output from command "c:\program files\anaconda2\python.exe" -u -c "import setuptools, tokenize;__file__='c:\\users\\yegobl~1\\appdata\\local\\temp\\pip-build-7o0roa\\ujson\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'ex
  ec'))" bdist_wheel -d c:\users\yegobl~1\appdata\local\temp\tmp8wtgikpip-wheel- --python-tag cp27:
    running bdist_wheel
    running build
    running build_ext
    building 'ujson' extension
    error: Microsoft Visual C++ 9.0 is required (Unable to find vcvarsall.bat). Get it from http://aka.ms/vcpython27
  
    ----------------------------------------
    Failed building wheel for ujson
    Running setup.py clean for ujson
    Running setup.py bdist_wheel for semver ... done
    Stored in directory: C:\Users\YeGoblynQueenne\AppData\Local\pip\Cache\wheels\d6\df\b6\0b318a7402342c6edca8a05ffbe8342fbe05e7d730a64db6e6
  Successfully built pathlib semver
  Failed to build spacy ujson
  Installing collected packages: numpy, murmurhash, cymem, preshed, thinc, plac, pathlib, semver, sputnik, ujson, spacy
    Found existing installation: numpy 1.11.0
      Uninstalling numpy-1.11.0:
        Successfully uninstalled numpy-1.11.0
    Running setup.py install for ujson ... error
      Complete output from command "c:\program files\anaconda2\python.exe" -u -c "import setuptools, tokenize;__file__='c:\\users\\yegobl~1\\appdata\\local\\temp\\pip-build-7o0roa\\ujson\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, '
  exec'))" install --record c:\users\yegobl~1\appdata\local\temp\pip-ibtvwu-record\install-record.txt --single-version-externally-managed --compile:
      running install
      running build
      running build_ext
      building 'ujson' extension
      error: Microsoft Visual C++ 9.0 is required (Unable to find vcvarsall.bat). Get it from http://aka.ms/vcpython27
  
      ----------------------------------------
  Command ""c:\program files\anaconda2\python.exe" -u -c "import setuptools, tokenize;__file__='c:\\users\\yegobl~1\\appdata\\local\\temp\\pip-build-7o0roa\\ujson\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --recor
  d c:\users\yegobl~1\appdata\local\temp\pip-ibtvwu-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in c:\users\yegobl~1\appdata\local\temp\pip-build-7o0roa\ujson\

Now that's newbie scary.

Note that this is just one case where I was trying to install one particular package. I got a couple more examples like this in my installation diary, notably one when I tried to install matplotlib, this time on Windows Subsystem for Linux, a.k.a. Ubuntu, and hit a conda bug that meant I had to use an older version of QT until upstream fixed it, and other fun times like that.

reply

---

want reproducible builds
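One concrete mechanism behind reproducibility in lockfile-based tools: pin every resolved artifact to a cryptographic hash and refuse to build if a later download doesn't match. A small sketch, with a hypothetical lockfile represented as a {filename: sha256 hexdigest} dict:

    # Sketch of hash pinning for reproducible builds: the lockfile records a SHA-256 for
    # every resolved artifact; the build aborts if a downloaded file doesn't match.
    import hashlib

    def check_artifact(path, expected_sha256):
        with open(path, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()
        if actual != expected_sha256:
            raise RuntimeError(f"{path}: hash mismatch (expected {expected_sha256}, got {actual})")

    def check_all(lockfile):
        """`lockfile` is a hypothetical {filename: expected sha256} mapping."""
        for path, digest in lockfile.items():
            check_artifact(path, digest)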

---

"

bobjordan 1 day ago [-]

Recently found a command line library written in go that was so useful to me in a python webapp I'm building, I decided to just wrap the go app and call it from a python subprocess. It is a bit slow to do it this way, but premature optimization, and all that. Anyhow, the bottom line is due to this I was forced to get a handle on Go this week so I could at least build the go library and make a few changes as needed. Wow, was I absolutely shocked to find out current package management in this go app is just pulling libraries from master on github. It really made me feel like go is far behind python in maturity. Pulling all these dependency libs from master definitely does not feel production ready. Maybe this ( https://roberto.selbach.ca/intro-to-go-modules/ ) will help resolve things.

reply

tomohawk 1 day ago [-]

I was likewise shocked at how good go compared to python in this regard.

Resolve your deps once, at build time, and you're done.

Compare that to dealing with the externalized deps that is deploying a python app on a machine that might have the correct version of python and all dependencies, or not. virtualenv? good luck. You end up bringing in docker and other tools just to deal with it. But what do you do if the machine is locked down by corporate IT and they don't allow docker or the installation of other python deps?

reply

jacques_chester 1 day ago [-]

> Resolve your deps once, at build time, and you're done.

I have ... rarely had this experience.

Every go package manager has found new and frankly fascinating ways to lead me into a corner where I think I have committed and pushed working code, but which causes highly-visible CI barfing.

I should add as an aside: I've done two tours of duty as a Cloud Foundry Buildpacks maintainer. Everyone's package system sucks, but some suck a lot less.

Only Bundler is basically sane. We hit lots of corner cases but for the 20/80 cases it essentially worked.

reply "

---

regarding https://roberto.selbach.ca/intro-to-go-modules/

dom96 1 day ago [-]

Isn't it a little odd that these are called modules when they sound very much like packages? In many languages, a module is just a source code file that can be imported. Why is Go redefining this term?

reply

masklinn 1 day ago [-]

Go's terminology matches java, and is the exact reverse of your comments: a package is a namespace, a module is a bundle of 1..n namespaces (packages).

reply

paulddraper 1 day ago [-]

Java "packages" are namespaces, Java "modules" are modules, and Maven/Ivy "modules" are packages.

reply

---

https://roberto.selbach.ca/intro-to-go-modules/

---

rukenshia 1 day ago [-]

Do I have the ability to somehow specify "use git+ssh for this dependency" with the new modules system?

Right now it seems near impossible with go to do that other than manually cloning the repositories into the correct path. We can't host our things publicly and have to use SSH to clone the repositories at my company.

It is especially frustrating in our CI/CD process if we need to manually clone our packages for setting everything up.

reply

thwarted 1 day ago [-]

That's one thing that frustrated me about dep. go get is deficient in this area too. I understand the need to namespace packages, but requiring them to be hosted (or have metatags on a page) at the location the import path specifies in order to be able to pull them down is insanity.

To make matters worse, dep tried to stuff too much of a DSL into the package specification on the command line. example.com/path/pkg@hashish made it impossible to specify git@example.com/path/pkg as the location because the parser wasn't robust enough, and the package location parser wasn't/isn't smart enough to honor ssh:git@example.com/path/path as a way to be explicit about how you wanted this done.

dep did work for our use case if you edited the toml file directly, once I made a 2 character change to a regular expression in v0.3.0. We use dep and stopped upgrading with that version; I'm hoping go modules make non-public repos easier, but I'm not holding my breath.

reply

 kromem 1 day ago [-]

The v2 library suffix on the package path seems rather hackish.

Feels like it would have been better to use a different delineating character instead of a slash to make it abundantly clear at a glance that it's effectively a major version tag and not a subpackage.

Other than that, seems relatively straightforward. Hope this is the last "new and completely different" attempt to tackle dependency management in Go.

reply

BillinghamJ 1 day ago [-]

Particularly the fact that it only applies after the second version. If we're going to require /v2, /v3, etc - shouldn't we also require /v1 for the first?

Also it's unclear how initial development (as per semver) is to be handled. If the first few versions are v0.1.0, v0.2.0, etc. - how are these handled in the import paths?

What if a package URL actually contains "/v2" at the end, but this isn't actually referring to the version and should be used when fetching the package. Seems very odd. Agree that it should have been more like "@2".

What if you want to include two different minor versions, for example, as you can include two different major ones. Is that possible/allowed?

reply
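The /v2 convention being debated here reduces to a simple rule: from major version 2 onward, the major version is part of the import path, so two major versions get distinct paths and can coexist in one build. A sketch of that mapping, as an illustration of the convention rather than the actual go tooling:

    # Sketch of Go-style semantic import versioning: v0 and v1 keep the bare module path,
    # v2 and later append /vN, giving each major version a distinct import path.
    def semantic_import_path(module_path, major):
        if major >= 2:
            return f"{module_path}/v{major}"
        return module_path

    print(semantic_import_path("github.com/my/thing", 1))   # github.com/my/thing
    print(semantic_import_path("github.com/my/thing", 2))   # github.com/my/thing/v2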

---

https://xkcd.com/1987/

---

" Module systems (UMD, AMD, CommonJS?) proliferated. (ES6 also came along and invented its own module system that is incompatible with all the others for some reason, sigh.) npm unified the way tools and libraries are shared. Webpack can dynamically swap modules into a running application while you develop it. "

" careful to compile incrementally, which is critical for large apps; changes in one module that don't affect its exported API don't cause downstream modules to recompile " [14]

---

something about packaging (skimmed it): https://github.com/yarnpkg/rfcs/pull/101

wildpeaks 7 hours ago [-]

Thing is, node_modules isn't just thirdparty modules.

It's just what the Node path resolution standard uses, which is also how Local Modules (the "/src/node_modules" structure) allows you to import other files with clean paths, without having to add a config in every tool of the toolchain, post-install symlinks, or any other non-crossplatform tricks. It just works because it's literally what Node uses to resolve paths, and all build tools are based on Node, so when you stick to the standard resolution, you can add new tools to the toolchain without needing a bunch of configs for them to find your files. For example, it's now also the default resolution in Typescript as well.

The only time /src/node_modules doesn't work is when a tool goes out of its way to break it, and wrongly assumes that node_modules can only ever be used for external thirdparty code (e.g. Jest).

So best of luck to make Node + NPM + Yarn to agree on a new path resolution syntax, but I hope we won't end up with another tool-specific resolution that only works in Yarn.

reply

cprecioso 7 hours ago [-]

This doesn't break that, it specifically says it will fall back to Node's module resolution algorithm when the module you're looking for isn't in the static resolutions table. That means you can keep using that technique as you have been.

As an aside, you can also use lerna[1], yarn workspaces[2] or pnpm workspaces[3] to achieve the same effect, depending on your package manager of choice. You might get additional boosts to code organization/productivity, it's explained in the links.

[1]: https://lernajs.io [2]: https://yarnpkg.com/lang/en/docs/workspaces/ [3]: https://pnpm.js.org/docs/en/workspace.html

reply

arcatek 7 hours ago [-]

> when the module you're looking for isn't in the static resolutions table

The fallback will kick in when the package that makes the request isn't in the static resolution table. Since those packages aren't part of the dependency tree we assume their dependencies have been installed independently, hence the fallback.

That said, I think the use case described by the parent post is neatly solved by the `link:` protocol, which basically 'aliases' a module to a specific name.

reply

https://news.ycombinator.com/item?id=17977698
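The /src/node_modules trick described in this thread works because of Node's resolution rule: starting from the importing file's directory, walk up the tree and check each node_modules directory until the package is found. A rough sketch of that lookup, ignoring package.json "main" fields, file-extension resolution, and core modules:

    # Rough sketch of Node's node_modules lookup: walk up from the importing file's
    # directory, checking <dir>/node_modules/<name> at each level. This is why a
    # /src/node_modules directory is found by every tool that follows the standard rule.
    import os

    def resolve_node_module(name, from_dir):
        current = os.path.abspath(from_dir)
        while True:
            candidate = os.path.join(current, "node_modules", name)
            if os.path.exists(candidate):
                return candidate
            parent = os.path.dirname(current)
            if parent == current:           # reached the filesystem root: give up
                raise ImportError(f"cannot resolve '{name}' from {from_dir}")
            current = parent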