proj-oot-ootPackagingNotes2

" Ensuring Reproducible Builds with the Cargo.lock File

Cargo has a mechanism that ensures you can rebuild the same artifact every time you or anyone else builds your code: Cargo will use only the versions of the dependencies you specified until you indicate otherwise. For example, what happens if next week version v0.3.15 of the rand crate comes out and contains an important bug fix but also contains a regression that will break your code?

The answer to this problem is the Cargo.lock file, which was created the first time you ran cargo build and is now in your guessing_game directory. When you build a project for the first time, Cargo figures out all the versions of the dependencies that fit the criteria and then writes them to the Cargo.lock file. When you build your project in the future, Cargo will see that the Cargo.lock file exists and use the versions specified there rather than doing all the work of figuring out versions again. This lets you have a reproducible build automatically. In other words, your project will remain at 0.3.14 until you explicitly upgrade, thanks to the Cargo.lock file.

Updating a Crate to Get a New Version

When you do want to update a crate, Cargo provides another command, update, which will ignore the Cargo.lock file and figure out all the latest versions that fit your specifications in Cargo.toml. If that works, Cargo will write those versions to the Cargo.lock file.

But by default, Cargo will only look for versions larger than 0.3.0 and smaller than 0.4.0. If the rand crate has released two new versions, 0.3.15 and 0.4.0, you would see the following if you ran cargo update:

$ cargo update
    Updating registry `https://github.com/rust-lang/crates.io-index`
    Updating rand v0.3.14 -> v0.3.15

At this point, you would also notice a change in your Cargo.lock file noting that the version of the rand crate you are now using is 0.3.15.

If you wanted to use rand version 0.4.0 or any version in the 0.4.x series, you’d have to update the Cargo.toml file to look like this instead:

[dependencies]
rand = "0.4.0"

The next time you run cargo build, Cargo will update the registry of crates available and reevaluate your rand requirements according to the new version you have specified. "
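my note: a minimal sketch of the caret-requirement rule the excerpt describes — Cargo treats a bare version like "0.3.14" as ">= 0.3.14 and < 0.4.0" (the leftmost nonzero component may not change). This is just the matching rule, not Cargo's actual resolver:

def parse(v):
    return tuple(int(x) for x in v.split("."))

def caret_matches(requirement, candidate):
    # Cargo-style caret rule: candidate must be >= requirement and must not
    # change the leftmost nonzero component of the requirement.
    req, cand = parse(requirement), parse(candidate)
    if cand < req:
        return False
    for i, part in enumerate(req):
        if part != 0:
            return cand[:i] == req[:i] and cand[i] == part
    return cand == req   # "0.0.0" only matches itself

assert caret_matches("0.3.14", "0.3.15")       # bug-fix release: accepted
assert not caret_matches("0.3.14", "0.4.0")    # breaking release: requires editing Cargo.toml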

---

[1] is a review article that says:

├─ src
│  └─ packagename
│     ├─ __init__.py
│     └─ ...
├─ tests
│  └─ ...
└─ setup.py

[2] adds:

---

Twirrim 32 days ago [-]

One routine source of pain was when one of your upstream dependencies changed its dependencies. That would happen quite routinely. All was fine, unless you actually had two packages that had dependencies on different versions of a library.

You could work around it by pinning the version of the dependency, but of course that's risky. You don't know if you're exposing yourself to bugs, because you're making software run with a dependency version it hasn't been tested against.

Pretty much every build file I ever saw in Amazon had at least a half dozen or more dependencies pinned. Every now and then you'd find you were getting a new dependency conflict, and that things had become an impossible knot to untangle. All you could do is unpin _everything_ and then figure out all the version pins you needed again from scratch.

I swear I would lose at least one to two days a quarter doing that. The development focussed teams would spend way more than that on fixing conflicts.

pornel 32 days ago [-]

For native Rust libraries this is a solved problem. Cargo finds one common compatible version of each library that satisfies requirements of all dependencies, and only when that isn't possible, it allows more than one copy of the library (and everything is strongly namespaced, so there's no conflict).

And it has a lockfile, so your dependencies won't update if you don't want them to.

The only problem is C dependencies like OpenSSL that do break their API, and don't support multiple copies per executable, so there's nothing that Rust/Cargo can do about it (you as a user can avoid them and choose native Rust libs where possible).
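my sketch of the dedup policy described here: unify requirements that fall on the same semver-compatible line, otherwise keep separate (namespaced) copies side by side. Simplified and assumed — not Cargo's real algorithm, which also handles version ranges, lockfiles, features, etc.:

from collections import defaultdict

def compat_key(version):
    # semver compatibility line: all 1.x share a line; 0.3.x and 0.4.x do not
    major, minor, _patch = (int(x) for x in version.split("."))
    return (major,) if major > 0 else (0, minor)

def plan(requirements):
    # requirements: {dependent crate: version it asks for}
    groups = defaultdict(list)
    for dependent, wanted in requirements.items():
        groups[compat_key(wanted)].append(wanted)
    # one copy per compatible group: the newest version requested in that group
    newest = lambda versions: max(versions, key=lambda v: tuple(int(x) for x in v.split(".")))
    return [newest(vs) for vs in groups.values()]

print(plan({"a": "1.2.0", "b": "1.10.1", "c": "2.0.3"}))
# -> ['1.10.1', '2.0.3']: a and b share one copy; c gets its own, namespaced copy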

---

 Waterluvian 2 days ago [-]

As a newcomer to Go, by an immensely wide margin, the hardest, most frustrating thing, which soured the language for me, is whatever the heck package management is in Go.

There's like three or four different angles, all of which overlap. Some are official. Some aren't. The unofficial ones seem more popular. They're all kind of incomplete in different ways. And it was all such a frustrating migraine to try and figure out. I haven't felt so viscerally aggressive about software like that whole experience made me feel in a long time.

I hope Go2 makes something concrete from the start and sticks with it, for better or worse.

reply

atombender 2 days ago [-]

Having used the new Go module system (introduced in Go 1.11 as an option, to be the default choice in 1.12) since August, it's my opinion that this is now a solved problem.

The biggest source of pain moving forward is going to be the projects that haven't transitioned, including the various command-line tools that work on parsing, generating and manipulating Go code (e.g. linters, code generators). Most of the important ones are already there, and I've transitioned several myself.

As an added bonus, word is that the Go team wants an official package repository system (similar to Cargo, RubyGems, etc.). I wouldn't be surprised if this happens rather quickly.

reply

jniedrauer 2 days ago [-]

> it's my opinion that this is now a solved problem.

It's starting to look like a viable solution, but it's not even close to actually solved yet. Why does `go mod why <module>` make changes to your module? How do you run go get to install a remote package when modules are enabled (without explicitly running `GO111MODULE=off go get`)? Why isn't the module cache concurrency safe? Why does the module cache sometimes mysteriously cause compile errors until you `go clean -modcache`? There are so many little bugs and oddities.

And as you mentioned, a lot of things have side effects now that didn't use to, which has catastrophically broken a lot of the tooling surrounding the language. Autocomplete using gocode used to be nearly instant. Now it sometimes triggers downloads and takes 30+ seconds.

I'm hopeful that go 1.12 will be the first release where this problem is really solved.

reply

atombender 2 days ago [-]

There are bugs, but I was referring to the design of the whole thing.

By the way, your "go get" bug was fixed today [1], if I understand your complaint correctly: With the new modules turned on, you could no longer do "go get" globally.

(I would agree that it's a little weird that "go get" outside a module installs it globally, while inside a module it installs it locally; that's going to trip up scripts and Dockerfiles, and it should really be two separate commands. "go mod add" to add a new dependency, for example.)

[1] https://github.com/golang/go/issues/24250#event-1996119923

reply

skywhopper 1 day ago [-]

There are still some rough edges, but I agree that the new module system is very good. And it's given me the confidence in Go to start using it far more broadly than I had before, when every new project meant hand-wringing over how to handle dependencies. Now it Just Works well enough for 90+% of use cases, with no extra steps required. It just works.

reply

I tried Go a while ago. I was hooked by the performance, the community around it, and the vendor support (AWS, Heroku, GCloud, etc.) but I got quickly fed up by the awkward package management system, the weird syntax and the horrible idea of $GOPATH, especially on Windows.

Haven’t tried it since.

Hope lots of this changes to make the language more welcoming for newcomers.

reply

smudgymcscmudge 2 days ago [-]

If $GOPATH was your biggest complaint, now may be a good time to give it another look. As of 1.11, there's an experimental feature called go modules that lets you avoid using GOPATH. I believe it's going to be non-experimental starting in 1.12.

reply

---

on a new Rust 'edition': "I really like the changes in the module system. Having to use extern crate was a drag, so it's nice that that isn't necessary anymore. Importing macros with use is both nicer and more intuitive than macro_use."

-- https://news.ycombinator.com/item?id=18621247

---

https://www.kablamo.com.au/blog-1/2018/12/10/just-tell-me-how-to-use-go-modules

---

neuland 1 day ago [-]

A lot of people are mentioning Apple removing scripting languages at the same time as Windows is adding it. The stub is nice for learners. But ultimately I like Apple's decision to remove them. You should not be relying on system installed interpreters, because you end up with an ancient, unmaintained version. And you are co-mingling all your dependencies with whatever else is installed. Everyone should be using virtualenv. And it should be easier to use virtualenv.

reply
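my note: the stdlib already covers the basic "isolated environment" workflow; a minimal sketch (the paths and the pinned package below are just examples):

import subprocess
import venv
from pathlib import Path

env_dir = Path(".venv")                      # example location
venv.create(env_dir, with_pip=True)          # stdlib venv: no co-mingling with the system interpreter

# use the environment's own pip so nothing touches the global site-packages
pip = env_dir / "bin" / "pip"                # on Windows: .venv\Scripts\pip.exe
subprocess.run([str(pip), "install", "requests==2.31.0"], check=True)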

---

my summary of

https://jacobian.org/2019/nov/11/python-environment-2020/#atom-entries https://news.ycombinator.com/item?id=21510262

OP uses: pyenv, pipx, poetry

pipenv is a more popular competitor to poetry

some others use: pip and virtualenv, sometimes with virtualenvwrapper

some use Nix

regarding 'just use pip and virtualenv', some comments were:

ph2082 1 day ago [-]

For the small Python projects I did, I used venv and pip. Learned my lesson after wasting a couple of hours fighting through dependency issues.

Being from a Java shop for a long time, if I have to switch between different versions of Java, all I do is change JAVA_HOME to point to the correct version, go to the base project directory, and "mvn clean install" does the job. :).

reply

globular-toast 1 day ago [-]

Yes, definitely. But other tools can be used in addition to solve their own problems.

Need to have multiple versions of python installed and easily accessible? Use pyenv.

Need to run tests across multiple versions of python? Use tox.

Need to freeze environments for deployment purposes? Use pip-tools.

Need to freeze the entire operating system? Use docker or vagrant.

Don't use tools you don't need. That would be silly.

reply

thesuperbigfrog 1 day ago [-]

pip and virtualenv are my preferred solution too because they are simple and easy to use.

However, you can run into issues if you are using different versions of Python, or Python on different operating systems.

reply

j88439h84 1 day ago [-]

Nah, Poetry does the work of both and more.

reply

ropans808 1 day ago [-]

That's my general solution as well, but when pip fails at dependency resolution, tools like poetry become really nice.

reply

aequitas 1 day ago [-]

I would add pip-tools to that so the requirements update process can be automated.

reply

also note:

wp381640 1 day ago [-]

If you're going to be using pyenv + poetry you should be aware of #571 that causes issues with activating the virtualenv

https://github.com/sdispater/poetry/issues/571

the OP himself has a fix for this in his own dotfiles repo:

https://github.com/jacobian/dotfiles/commit/e7889c5954daacfe...

reply

angrygoat 1 day ago [-]

The big gap is management of the full dependency tree. With yarn I can get a yarn.lock which pretty well ensures I'll have the same exact version of everything, with no unexpected changes, every time I run yarn install. I get the same thing in the Rust world with Cargo.

In Python it's a mess. Some packages specify their deps in setup.py; some in a requirements file, which may or may not be read in by their setup.py. It's not rare to need multiple 'pip install' commands to get a working environment, especially when installing fairly boutique extensions to frameworks like Django.

There just isn't a holistic, opinionated approach to specifying dependencies, especially at a project level. Which leaves unexpected upgrades of dependencies (occasionally leading to regressions) as a reality for Python devs.

reply

Rotareti 1 day ago [-]

Since you are familiar with nodejs:

poetry == npm

pyenv == nvm

pipx == npx

No big difference, IMO.

reply

_AzMoo 1 day ago [-]

Right? It's literally the same set of issues.

Package Management - npm? bower? yarn? Which should I use this week?

Interpreter Versions - Revisiting a project I last touched 2 years ago on node 8 has a load of locked dependencies that only work on that node version. OK, let's bring in nvm so I can get a dev environment going.

Executable Packages - oh no I've got two different projects with different and incompatible versions of apollo, what do I do? Oh right, sure npx for executable isolation so we don't pollute the global namespace.

Every ecosystem has these problems, and if they don't it's probably because they're still relatively esoteric.

reply

Rotareti 1 day ago [-]

> Every ecosystem has these problems, and if they don't it's probably because they're still relatively esoteric.

Exactly! I'm not aware of any non-compiled language where (all) these issues are solved much better. I can be very productive with the tools I mentioned above and I'm glad that they work almost identical for both my main drivers (Python and JS/TS).

reply

dec0dedab0de 1 day ago [-]

...

For what it's worth I've been using pipenv for over a year and it works well enough. I think npm's approach is better, but not perfect. I've heard good things about yarn. I know CPAN is the grandfather of all of them. I've barely used gems but they seem like magic, and Go get your repo URI out of my code please and thank you. :-) All fun aside, what languages have it right? and is there maybe a way to come up with a unified language dependency manager?

reply

---

" A long term solution to the problem of support for platforms not originally considered by project authors is going to be two-pronged:

    Builds need to be observable and reviewable: project maintainers should be able to get the exact invocations and dependencies that a build was conducted with and perform automatic triaging of build information. This will require environment and ecosystem-wide changes: object and packaging formats will need to be updated; standards for metadata and sharing information from an arbitrary distributor to a project will need to be devised. Reasonable privacy concerns about the scope of information and its availability will need to be addressed.
    Reporting needs to be better directed: individual (minimally technical!) end users should be able to figure out what exactly is failing and who to phone when it falls over. That means rigorously tracking the patches that distributors apply (see build observability above) and creating mechanisms that deliver information to the people who need it. Those same mechanisms need to have some mechanism for interaction: there’s nothing worse than a flood of automated bug reports with insufficient context." [3]

---

"Unlike npm, Python mainstream packaging tools have no concept of vulnerable versions."

---

https://new.pythonforengineers.com/blog/python-in-2021-the-good-the-bad-and-the-ugly/ https://lobste.rs/s/otshxn/python_2021_good_bad_ugly

---

https://medium.com/knerd/the-nine-circles-of-python-dependency-hell-481d53e3e025

---

https://r2c.dev/blog/2022/the-best-free-open-source-supply-chain-tool-the-lockfile/

---

https://blog.izs.me/2021/10/my-favorite-npm-commit/ summary: for reproducible builds, if you use .tar or .zip, the mtime and ctime of files must be invariant. One way to do this is to override whatever they naturally are and set them all to a constant time. The zip format does not support file timestamps with dates before 1980-01-01. NPM chose to use 1985-10-26T08:15:00.000Z as the constant time, as a homage to an old scifi movie about time travel.
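my sketch of the same trick in Python: clamp every tar member's mtime (and the gzip header's own mtime) to a constant so the archive bytes depend only on file names and contents. The timestamp constant is npm's; the rest is illustrative:

import gzip
import io
import tarfile
from datetime import datetime, timezone

# npm's constant: 1985-10-26T08:15:00Z
EPOCH = int(datetime(1985, 10, 26, 8, 15, tzinfo=timezone.utc).timestamp())

def deterministic_targz(paths, out_path):
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for p in sorted(paths):                  # stable member ordering
            info = tar.gettarinfo(p)
            info.mtime = EPOCH                   # clamp timestamps
            info.uid = info.gid = 0              # drop owner info too
            info.uname = info.gname = ""
            with open(p, "rb") as f:
                tar.addfile(info, f)
    # gzip embeds its own mtime in the header; pin it as well
    with open(out_path, "wb") as out, gzip.GzipFile(fileobj=out, mode="wb", mtime=0) as gz:
        gz.write(buf.getvalue())

# deterministic_targz(["package/index.js", "package/package.json"], "out.tgz")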

in a comment someone pointed out that you don't want to just ALWAYS override mtime because then 'make' doesn't work:

 0x2ba22e11 3 hours ago | link | flag |

FWIW a tangential nitpick about this quote:

    git (like most other source control systems) does not view it that way, gleefully letting the timestamps be whatever the system decides they should be.

Version control systems let the mtime of each file get bumped along on purpose. Otherwise, make; git checkout HEAD^; make would fail to do the right rebuilding on the second “make” invocation.

There was a question about supporting “record the mtimes and restore them upon checkout” as an svn feature way back. The svn maintainers read it carefully, said it was a neat idea, but they wouldn’t be putting it in because it broke “make” and friends. They did suggest using svn’s plugin mechanism to implement it instead.

---

https://go.dev/blog/supply-chain https://news.ycombinator.com/item?id=30869261

---

reproducible builds

https://fossa.com/blog/three-pillars-reproducible-builds/

---

builds should be possible to do offline

---

~ doug-moen edited 1 hour ago

link flag

I’m really impressed by Zig’s cross-platform hermetic build feature. It’s valuable, unique, and strongly positions Zig as a “better C”.

What do I really want from a better C? The ability to write once, run anywhere, for real. The ability to lock in all my dependencies, then build for any platform, confident that the binaries will work, and that my code won’t bitrot after a few years. My experience with C, C++ and Java projects on github is that if they haven’t been maintained for the last five years, then you cannot build and run them without updating the code, which requires a lot of expertise.

Zig can produce distro-independent portable Linux binaries that are statically linked, as long as you don’t have the “wrong dependencies”. I have a forthcoming project that will use WebGPU for cross-platform graphics, and glfw or an equivalent for windows and input events. I’m pretty sure you can’t create static platform-independent Linux binaries with these dependencies, because Vulkan on Linux requires dynamic linking, and there is no portable ABI that a statically linked binary can use to interface with Vulkan. Either the Linux Vulkan people need to provide this, or there has to be a separate portable runtime (with a portable ABI) that is ported to each platform, that Zig binaries can dynamically link to.

I can see that @andrewrk has done some experiments with portable Vulkan apps in Zig. Although I think there is a lot of work left to be done, I have hope that Zig will mature into the kind of system I’m asking for.

---

    13
    andrewrk Zig Creator edited 13 hours ago | link | flag | 

Note that all of the following components, which are essential to the function of these toolchain features, are completely written in Zig:

    a cross platform compiler-rt implementation that is lazily compiled from source for the chosen target. All C code depends on this library. (Also all code that uses LLVM, GCC, or MSVC as a backend depends on this library.)
    a Mach-O linker (macOS)
    glue code for building lazily from source musl, mingw-w64, glibc stubs, libc++, libc++abi, libunwind. Basically the equivalent of these build systems ported over to imperative .zig and integrated with the job queue & caching system.
    zig CLI & glue code for passing appropriate flags to clang, such as which C header directories are needed for the selected target

That said, I am interested in trying to cooperate with other open source projects and sharing the maintenance burden together. One such example is this project: glibc-abi-tool

    ~
    ac 9 hours ago | link | flag | 

Thanks for the reply, it is certainly much more involved than I thought. It is impressive you are able to deliver such an improvement over the status-quo as a sub project.

    ~
    andrewrk 6 hours ago | link | flag | 

Thanks for the compliment. The trick is that almost all of these components are also used by the Zig language itself, or as prerequisites for the upcoming package manager so that C/C++/Zig projects can depend on C/C++/Zig libraries in a cross-platform manner. Mainly the only “extra” sub-project that has to be maintained is the integration with Clang’s command line options (which are helpfully stored in a declarative table in the LLVM codebase).

---

https://jakstys.lt/2022/how-uber-uses-zig/

---

    6
    snej edited 10 hours ago | link | flag | 

I’m finding that the most complex part of learning yet another language isn’t the language, it’s the tooling and ecosystem.

TypeScript itself was fun and easy … but figuring out that config file the translator uses was annoying, and ditto for TSLint and Jester, and don’t get me started on the seventeen flavors of JS modules. Stuff like Grunt can die in a fire. (I now understand why some folks go nuts for radical simplicity and end up living in a cave running Gopher on their 6502-based retro computer: Node and the web ecosystem drove them to it. So much worse than C++, even with CMake.)

-- https://lobste.rs/s/zdtzbs/minimalism_programming_language_design#c_dyvd8q

---

https://lucumr.pocoo.org/2022/7/9/congratulations/ has some suggestions (and some things he mentions tangentially) for a language package repository:

Also:

---

https://jazzband.co/

---

https://matt-rickard.com/the-unreasonable-effectiveness-of-makefiles

https://lobste.rs/s/sq9h3p/unreasonable_effectiveness_makefiles

" carlmjohnson 4 hours ago

link flag

Yeah, it’s not a good enough make system. Mtime sucks. A good make system should have content hashing at the least and probably a proper system for effects on top of that. "
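my sketch of what "content hashing at the least" means in practice: hash the inputs, remember the hash in a stamp file, and rerun the command only when the hash changes — mtime never enters into it. Not any particular tool, just the idea:

import hashlib
import subprocess
from pathlib import Path

def inputs_digest(paths):
    h = hashlib.sha256()
    for p in sorted(paths):                      # stable order
        h.update(p.encode())
        h.update(Path(p).read_bytes())
    return h.hexdigest()

def build_if_changed(inputs, stamp, command):
    digest = inputs_digest(inputs)
    stamp = Path(stamp)
    if stamp.exists() and stamp.read_text() == digest:
        return False                             # contents unchanged: skip the command
    subprocess.run(command, check=True)
    stamp.write_text(digest)
    return True

# example (hypothetical file names):
# build_if_changed(["foo.c", "foo.h"], ".foo.o.stamp", ["cc", "-c", "foo.c"])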

---

4 colindean 17 days ago

link

I subscribe to the belief that every project should have a Makefile with tasks:

    deps installs all dependencies as automated as they can be (it’s OK if it outputs some manual steps, but a rerun should detect those things having been done and not output the manual nudges if they’re not needed)
    check runs all linting and static analysis without modifying anything
    test runs all unit tests minimally and any integration tests that are low impact
    build produces some artifact
    all does all of the above
    clean deletes artifacts and, if possible, restores the codebase to as close to its original state as possible
    help outputs a list of tasks

For my Scala and Rust projects, this yields a ~14 line Makefile that just executes sbt or cargo, respectively. For my Python and Ruby projects, there’s a lot more to it. Any sufficiently advanced system of build scripts or documentation eventually just reimplements make.

All of this in pursuit of the idea that someone should be able to clone and verify a build representing a clean starting point for development and troubleshooting with three commands: git clone whatever && cd whatever && make all.

---

david_chisnall 17 days ago

link
    BSD make is great for small projects which don’t have a lot of files and do not have any compile time option. For larger projects in which you want to enable/disable options at compilation time, you might have to use a more complete build system.

Here’s the problem: Every large project was once a small project. The FreeBSD build system, which is built on top of bmake, is an absolute nightmare to use. It is slow, impossible to modify, and when it breaks it’s completely incomprehensible trying to find out why.

For small projects, a CMake build system is typically 4-5 lines of CMake, so bmake isn’t really a win here, but CMake can grow a lot bigger before it becomes an unmaintainable mess and it’s improving all of the time. Oh, and it can also generate the compile_commands.json that your LSP implementation (clangd or whatever) uses to do syntax highlighting. I have never managed to make this work with bmake (@MaskRay published a script to do it but it never worked for me).

    17
    mort edited 17 days ago | link | 

The problem is that cmake is actually literal hell to use. I would much rather use even the shittiest makefile than cmake.

Some of the “modern” cmake stuff is slightly less horrible. Maybe if the cmake community had moved on to using targets, things would’ve been a little better. But most of the time, you’re still stuck with ${FOO_INCLUDE_DIRS} and ${FOO_LIBRARIES}. And the absolutely terrible syntax and stringly typed nature won’t ever change.

Give me literally any build system – including an ad-hoc shell script – over cmake.

    6
    zk 17 days ago | link | 

Agreed. Personally, I also detest meson/ninja in the same way. The only thing that I can tolerate writing AND using are BSD makefiles, POSIX makefiles, and plan9’s mkfiles

    2
    calvin 17 days ago | link | 

You are going to have a very fun time dealing with portability. Shared libraries, anyone?

    2
    mort 17 days ago | link | 

Not really a problem, pkg-config tells your makefile what cflags and ldflags/ldlibs to add.

    2
    calvin 17 days ago | link | 

Using it is less the problem - creating shared libraries is much harder. Every linker is weird and special, even with ccld. As someone dealing with AIX in a dayjob…

...

5 orib edited 17 days ago

link
    The problem is that cmake is actually literal hell to use. I would much rather use even the shittiest makefile than cmake.

Yes. The last time I seriously used cmake for cross compiles (trying to build third-party non-android code to integrate into an Android app) I ended up knee deep in strace to figure out which of the hundreds of thousands of lines of cmake scripts were being included from the system cmake directory, and then using gdb on a debug build of cmake to try to figure out where it was constructing the incorrect strings, because I had given up on actually being able to understand the cmake scripts themselves, and why they were double concatenating the path prefix.

Using make for the cross compile was merely quite unpleasant.

Can we improve on make? Absolutely. But cmake is not that improvement.

    2
    david_chisnall 17 days ago | link | 

What were you trying to build? I have cross-compiled hundreds of CMake things and I don’t think I’ve ever needed to do anything other than give it a cross-compile toolchain file on the command line. Oh, and that was cross-compiling for an experimental CPU, so no off-the-shelf support from anything, yet CMake required me to write a 10-line text file and pass it on the command line.

    2
    orib edited 17 days ago | link | 

This was in 2019-ish, so I don’t remember which of the ported packages it was. It may have been some differential equation packages, opencv, or some other packages. There was some odd interaction between their cmake files and the android toolchain’s cmake helpers that lead to duplicated build directory prefixes like:

 /home/ori/android/ndk//home/ori/android/ndk/$filepath

which was nearly impossible to debug. The fix was easy once I found the mis-expanded variable, but tracking it down was insanely painful. The happy path with cmake isn’t great but the sad path is bad enough that I’m not touching it in any new software I write.

    2
    david_chisnall 17 days ago | link | 
    The happy path with cmake isn’t great but the sad path is bad enough that I’m not touching it in any new software I write.

The sad path with bmake is far sadder. I spent half a day trying to convince a bmake-based build system to compile the output from yacc as C++ instead of C before giving up. There was some magic somewhere but I have no idea where and a non-trivial bmake build system spans dozens of include files with syntax that looks like line noise. I’ll take add_target_option over ${M:asdfasdfgkjnerihna} any day.

david_chisnall 17 days ago

link

Modern CMake is a lot better and it’s being aggressively pushed because things like vcpkg require modern CMake, or require you to wrap your crufty CMake in something with proper exported targets. Importing external dependencies.

I’ve worked on projects with large CMake infrastructure, large GNU make infrastructure, and large bmake infrastructure. I have endured vastly less suffering as a result of the CMake infrastructure than the other two. I have spent entire days trying to change things in make-based build systems and given up, whereas CMake I’ve just complained about how ugly the macro language is.

borisk 16 days ago

link

Would you be interested to try build2? I am willing to do some hand-holding (e.g., answer “How do I ..?” questions, etc) if that helps.

To give a few points of comparison based on topics brought up in other comments:

    The simple executable buildfile would be a one-liner like this:
    exe{my-prog}: c{src1} cxx{src2}
    With the libzstd dependency:
    import libs = libzstd%lib{zstd}
    exe{my-prog}: c{src1} cxx{src2} $libs
    Here is a buildfile from a library (Linux Kconfig configuration system) that uses lex/yacc: https://github.com/build2-packaging/kconfig/blob/master/liblkc/liblkc/buildfile
    We have a separate section in the manual on the available build debugging mechanisms: https://build2.org/build2/doc/build2-build-system-manual.xhtml#intro-diag-debug
    We have a collection of HOWTOs that may be of interest: https://github.com/build2/HOWTO/#readme
    3
    david_chisnall 16 days ago | link | 

I like the idea of build2. I was hoping for a long time that Jon Anderson would finish Fabrique, which had some very nice properties (merging of objects for inheriting flags, a file type in the language that was distinct from a string and could be mapped to a path or a file descriptor on invocation).

2 ruki 17 days ago

link
    you can also try xmake. It is fast and lightweight and contains a package manager.

---

for Python, ppl seem to like Poetry at [5]

---

some people like 'just' instead of 'make':

https://github.com/casey/just

---

akselmo 14 hours ago

link flag

Genuine question from a newbie: I’ve mostly seen CMake being used in projects, is Make still useful to learn?

    11
    snej 13 hours ago | link | flag | 

Make is OK for simple projects, but its lack of dependency analysis becomes really painful, and the need to manually update a file’s dependencies in the makefile leads to exactly those stale-binary bugs described in that article’s intro quote.

In more detail: Say you have a C project. You add each of your .c files to the makefile. But for each .c file you have to list all the .h files it includes, as dependencies. Worse, this applies transitively, so you also have to list all the .h files included by those .h files, and so on.

Any time you add/remove an include directive in your source code, you have to update the makefile accordingly. If you don’t, you can end up in a state where your binary contains stale code, which can be an absolute nightmare to debug.

(This is also necessary with languages that use other mechanisms than direct inclusion. Basically you have to follow all the imports and duplicate them in the makefile.)

Every non-tiny project I’ve worked on that used make also used some adjunct tool that scanned all the source files to identify dependencies and wrote the result as a makefile to be used by the build system. This was fairly kludgy. Eventually these tools became flexible enough that they took over the job — as with CMake, you work only with the more powerful tool and let it generate and run a makefile for you.
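my note: that "adjunct tool that scans all the source files" boils down to something like the sketch below — it only follows local #include "..." lines next to each source file, ignores <...> system headers and include paths, and real setups use the compiler itself (e.g. cc -MMD) instead:

import re
from pathlib import Path

INCLUDE = re.compile(r'^\s*#\s*include\s+"([^"]+)"', re.MULTILINE)

def headers_of(source: Path, seen=None):
    # transitively collect locally-included headers living next to the source file
    seen = set() if seen is None else seen
    for name in INCLUDE.findall(source.read_text(errors="ignore")):
        header = source.parent / name
        if header.exists() and header not in seen:
            seen.add(header)
            headers_of(header, seen)
    return seen

# emit makefile-style dependency lines for every .c file in the current directory
for src in sorted(Path(".").glob("*.c")):
    deps = " ".join(str(h) for h in sorted(headers_of(src)))
    print(f"{src.with_suffix('.o')}: {src} {deps}")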

    5
    ubernostrum 9 hours ago | link | flag | 

I think it really is worth noting the distinction between:

    make as interface to compiler/linker toolchain where the make targets are specific files to build/link
    make as generic nearly-universally-available task runner not tied to compilation or even to languages which have explicit compile steps

The latter is what a lot of modern “use Makefiles” articles are actually doing.

    ~
    dijit 8 hours ago | link | flag | 

Except make isn’t actually installed by default on most desktop or server operating systems.

It’s an additional package, which just means it has been arbitrarily chosen and isn’t actually a lowest common denominator. It might as well have been bazel (bloated) or Procfiles

    ~
    ubernostrum 7 hours ago | link | flag | 
        I don’t know the situation on Windows, but on Linux and on macOS desktop, just bootstrapping a dev environment for lots of different languages will basically always pull in some form of make as a transitive dependency. I know, for example, that basically every language’s “set up dev environment on macOS” instructions begin with xcode-select --install, which I believe will implicitly install make.

~ akselmo 13 hours ago

link flag

Thanks for the advice. I’ve noticed CMake is popular in projects that have a lot of libraries, I guess that’s why.

8 kornel 11 hours ago

link flag

Make is nice for very simple things, but as your project grows and requires non-trivial things, Makefile becomes a patchwork of tricks and hacks with obscure syntax impossible to google, and it’s an endless maintenance burden.

I recommend sticking to basics and treating make as a launcher for scripts. If you need more, don’t dig deeper, but run away to a more maintainable system.

~ viraptor 13 hours ago

link flag

Depends what your goal is. If you’re interfacing with / building other projects then definitely. You seem to be in a happy environment where people use cmake, but globally we’re still 95% on autotools and make in the c/c++ world. But if you only work with your own things, yeah, maybe you can get away without learning makefiles.

(just keep in mind, you can learn 90% of makefile functionality in a couple hours - basic functional knowledge is not a huge investment here)

    ~
    akselmo edited 13 hours ago | link | flag | 

Thanks. Could probably switch CMake to plain Makefiles in my project and see how it works out.

Edit: Now that I think of it, I may be having easier time using CMake for this project since it uses external libraries, and CMake makes that pretty easy. Could try Makefiles with a smaller project without external libraries.

    ~
    ane 11 hours ago | link | flag | 

You should also have a look at Meson, I’ve had pleasant experiences with it, especially coming from autotools

---

qbasic_forever 12 hours ago

prev next [–]

For a task runner I really like just and its Justfile format: https://github.com/casey/just It is heavily inspired by make but doesn't focus on the DAG stuff (but does support tasks and dependencies). Crucially it has a much better user experience for listing and documenting tasks--just comment your tasks and it will build a nice list of them in the CLI. It also supports passing CLI parameters to task invocations so you can build simple CLI tools with it too (no need to clutter your repo with little one-off CLI tools written in a myriad of different languages).

If most of your make usage is a bunch of .PHONY nonsense and tricks to make it so developers can run a simple command to get going, check out just. You will find it's not difficult to immediately switch over to its task format.

reply

dahfizz 8 hours ago

parent next [–]

I don't understand the use case of `just`. It drops every useful feature from `make`. It doesn't look like it has parallelism or the ability to not needlessly re-run tasks.

Even if `just` was installed on a standard Linux box, I don't see the benefit of it over a bash script.

reply

psibi 2 hours ago

root parent next [–]

I don't think just is trying to be a build system. Its major focus is as a task runner and in that space it does its job well IMO.

reply

niedzielski 9 hours ago

parent prev next [–]

Just looks soooo promising! I don't think I can use it until conventional file target and dependencies are supported though. Right now everything's tasks (phonies) so conventional makefile rules like the following are impractical:

  tic-tac-toe: tic.o tac.o toe.o
    cc -o '$@' $^
  
  %.o: %.c; cc -c $^

reply

qbasic_forever 9 hours ago

root parent next [–]

You might find checkexec useful to pair with just, it is basically a tool that only does the file-based dependency part of make: https://github.com/kurtbuilds/checkexec

reply

---

(in response to a comment saying that make is a good build system)

david_chisnall 7 hours ago (unread)

link flag

Except if the things you need to make:

    Have single build steps that produce more than one output,
    Have dependencies that are dynamically discovered during a build step,
    Have outputs whose freshness is not simply a function of the timestamp, or
    Have build steps that may need to be run more than once to reach a fixed point.

For example, Make cannot build LaTeX documents well. You need to run latex first to generate all of the cross-reference targets, then typically bibtex once to generate the bibliography with the targets for all of the cross-references, then latex once more to generate the cross references. The second latex invocation may generate more unresolved references (and more reference targets) and so you may need to rerun it. The latexmk tool can do this by parsing the output of the various tools to determine whether they need to be rerun, but make cannot, by itself.

You can drive latexmk from make but by the time it’s run for the first time the rest of the dependency tree in make’s view of the world is computed and so if the first run of pdflatex tells you that you need a PDF file and make has a build rule that can construct a PDF from, say, an SVG file, then there’s no way for the build rule that invoked latexmk to add the dependency without resorting to hacks like modifying a file that is included by make (using either a non-standard GNU or a different non-standard BSD extension) and then re-invoking make with the top-level target.

---

anordal edited 9 hours ago (unread)

link flag

Oh, Make is indeed a flawed and broken language to its core. That’s the real critique! But has anyone tried to make a proper once-and-for-all Make killer successor language (beyond just a command runner) that’s as expressive (not just meant to be generated)?

I’m not talking about tabs: That’s but a syntactic surprise, not a flaw of the kind that makes it impossible to use correctly. For that, one only needs to think of what flaw it shares with the POSIX shell: Lack of a proper list datatype. That’s impressively broken for an implicitly parallel language whose reason to exist is to express relations between files in plural. I also agree about the implicit shelling out. And recursive Make needs to be fundamentally rethought if correctness is to be preserved.

I haven’t actually tried writing Ninja, although I use it more than Make these days (as a CMake/Meson backend). I know it can express things that Make can’t, such as rules with multiple outputs, but I get the impression that it’s not as expressive and is really meant to be generated. I would rather write 1 generic rule in Make than N special rules in Ninja.

    ~
    andyc edited 4 hours ago (unread) | link | flag | 

Yup, all agreed. But you should try generating Ninja :) Ninja is a good replacement for Make because most people generate Make anyway – like autotools does, CMake does, and kconfig does for the Linux kernel.

Make isn’t powerful enough on its own. It lulls you into thinking you can use one tool, and then you end up with a mess :)

I just copied Ninja’s own 197 line wrapper API, which makes it perfectly reasonable to do in Python:

https://github.com/oilshell/oil/blob/master/vendor/ninja_syntax.py

There are basically only two functions to learn – rule() and build(), which generate the rule and build statements in Ninja. That’s it :) You can learn Ninja in 20 minutes. A typical way I use it is something like this:

for compiler in ['cxx', 'clang']:
    for variant in ['dbg', 'opt', 'asan', 'coverage']:
        out = f'_obj/{compiler}-{variant}/foo.o'
        v = [('compiler', compiler), ('variant', variant)]
        n.build(out, 'compile-one', ['foo.c'], variables=v)

(not tested)

So it’s a simple nested loop that generates a build rule on every iteration.

It’s trivial in Python, but doing that in GNU make is HUGE hassle!

And this is extremely useful because you need clang and ASAN/UBSAN to help with your C code :)

I hope to write a blog post about this, because it’s come up a lot lately …

The generation can make more sense if you think of it as a “staged execution model” – you’re using imperative code to generate a parallel graph. That’s how GPU code and AI code works too. (And it’s how Bazel works too – it “lowers” your code to a target graph)

http://www.oilshell.org/blog/2021/04/build-ci-comments.html#language-design-staged-execution-models

This model is how the recent Hay feature of Oil works, and I explicitly mentioned CMake/Ninja there:

https://lobste.rs/s/phqsxk/hay_ain_t_yaml_custom_languages_for_unix

The generator / graph split can make sense for other reasons too. One thing I am going to do is experiment with ./NINJA_config.p --sandbox=X, which may help accomplish what the recent “Landlock make” is doing: https://news.ycombinator.com/item?id=32377264

---

~ lcapaldo 5 hours ago (unread)

link flag
    I asked him a few times to explain what Procfiles were for that couldn’t be done equally well with existing tooling using Makefiles and phony targets
Have a target named web and a web process type. You’d at least need 2 separate Makefiles then.

As far as I can tell you’d have to implement a Makefile parser to implement the scaling UI. There might be a way to get make(1) to tell you the phony targets but if so it is not obvious to me how. That seems awfully complicated for the use case.

---

package repo should include vuln management

ppl seem to like Golang's:

https://go.dev/blog/vuln

---

don't allow user-created packages to have a 'flat' namespace; that is, the first person to create an 'sql' package at the official package website doesn't get the name 'sql'. Rather, packages have qualified names such as bshanks/sql. There are also collections of packages that alias other packages, e.g. stdrecommended/sql could be an alias for bshanks/sql.

stdlib packages get the flat namespace at the top.

However, that's only in the analog of Cargo.toml that specifies dependencies. In source code files, the import statement can use the unqualified name, e.g. 'sql', unless the Cargo.toml analog points at multiple 'sql' packages.
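a sketch of how that lookup could behave (hypothetical manifest format and names, just to pin the idea down): the manifest lists qualified names, imports may use the bare name only while it is unambiguous, and stdlib names own the flat namespace:

STDLIB = {"math", "io"}                          # hypothetical stdlib names
manifest_deps = ["bshanks/sql", "alice/json", "bob/json"]   # the Cargo.toml analog

def resolve_import(name, deps=manifest_deps):
    if "/" in name:                              # already qualified: use as-is
        return name
    if name in STDLIB:                           # stdlib gets the flat namespace
        return f"std/{name}"
    matches = [d for d in deps if d.split("/")[1] == name]
    if len(matches) == 1:
        return matches[0]                        # 'sql' -> 'bshanks/sql'
    if not matches:
        raise ImportError(f"no dependency provides '{name}'")
    raise ImportError(f"'{name}' is ambiguous ({', '.join(matches)}); qualify it")

print(resolve_import("sql"))          # bshanks/sql
print(resolve_import("math"))         # std/math
print(resolve_import("bob/json"))     # bob/json
# resolve_import("json") raises: ambiguous between alice/json and bob/json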

---

https://chriswarrick.com/blog/2023/01/15/how-to-improve-python-packaging/

---

https://www.bitecode.dev/p/whats-up-python-new-packaging-proposal notes that Python PEP 723 is proposing a way to put the pyproject.toml stuff inside a single-file script. Cool!
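for reference, the inline block PEP 723 proposes looks roughly like this: TOML carried in a comment block at the top of the script, so a PEP-723-aware runner can build an environment with these dependencies before executing it (the package choices here are just examples):

# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "requests",
#     "rich",
# ]
# ///

import requests
import rich

rich.print(requests.get("https://example.com").status_code)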

---

https://github.com/crev-dev/crev/

---