Table of Contents for Programming Languages: a survey
Part of what draws people to (or repels people from) a programming language is its tooling. So if you are implementing a programming language, you may wonder, what does good tooling look like? For now, we'll settle for just surveying what tooling is out there, good or bad.
Note that many tools span multiple categories; in such cases we made a more-or-less arbitrary choice as to which category to put the tool in, so don't get too hung up on that.
Our focus here is language-specific tooling, rather than language-independent tools. However, there are many tools that, although they can in theory be used upon codebases of any language, in practice are associated with a particular language (typically the same language that the tool is written in). In some cases such tools may have a better fit with one language, in others they are just more popular in one language community than in others. In any case, we'll list many of these as well.
In addition, we sometimes list the most popular language-independent tools in each category.
todo? https://docs.google.com/document/d/1SLk36YRjjMgKqe490mSRzOPYEDe0Y_WQNRv-EiFYUyw/view
"This extension adds rich language support for the Go language to VS Code, including:
- Colorization
- Completion Lists (using gocode)
- Signature Help (using godoc)
- Snippets
- Quick Info (using godef)
- Goto Definition (using godef)
- Find References (using go-find-references)
- File outline (using go-outline)
- Workspace symbol search (using go-symbols)
- Rename (using gorename)
- Build-on-save (using go build and go test)
- Format (using goreturns or goimports or gofmt)
- Add Imports (using gopkgs) [partially implemented]
- Debugging (using delve)" -- https://github.com/Microsoft/vscode-go
https://www.sublimetext.com/docs/3/syntax.html
http://www.kythe.io/docs/schema/
https://plus.google.com/115849739354666812574/posts/WUgoSr8VVsq
summary: this is a rant against the cloud-based nature of Grok as it was at the time. It mentions some other related systems and features:
desired features:
another blog post with lists of similar projects (todo take notes on this; i think i read it years ago already tho):
See also [1].
https://github.com/Microsoft/language-server-protocol https://github.com/Microsoft/language-server-protocol/blob/master/protocol.md
Supported by MS (VS Code), Red Hat, Codenvy, the Rust Language Server, and others (see https://github.com/Microsoft/language-server-protocol/wiki/Protocol-Implementations ).
Supports features such as (list from [2]):
an implementation for VS Code:
https://github.com/Microsoft/vscode-languageserver-node
https://github.com/apple/swift/blob/master/tools/SourceKit/docs/Protocol.md
note: some commentators observe that SourceKit appears to push syntax highlighting across a process boundary, and say that this is too slow: [3]
(list from [4])
"
-- [5]
https://en.wikipedia.org/wiki/Build_automation
https://en.wikipedia.org/wiki/GNU_build_system
make debugger: https://github.com/rocky/remake tips: http://www.cl.cam.ac.uk/~srk31/blog/2014/11/19/#writing-makefiles
"...This does make me wonder how things went so badly with make, makemaker, autoconf, aclocal, and the rest of the Texas Toolchain Massacre." [6]
"I work on a lot of Javascript projects. The fashion in Javascript is to use build tools like Gulp or Webpack that are written and configured in Javascript. I want to talk about the merits of Make (specifically GNU Make). Make is a general-purpose build tool that has been improved upon and refined continuously since its introduction over forty years ago. Make is great at expressing build steps concisely and is not specific to Javascript projects. It is very good at incremental builds, which can save a lot of time when you rebuild after changing one or two files in a large project. Make has been around long enough to have solved problems that newer build tools are only now discovering for themselves." [7]
"I used make heavily in the 80s and 90s, but haven't much since then. Recently I started a project that had source files getting processed into PDF files, for use by humans. Since this is the 21st century, those files have spaces in their names. At a certain point, I realized that I should be managing this processing somehow, so I thought of using a simple Makefile. A little searching reveals that the consensus on using make with files with spaces in their names is simply "don't even try." In the 21st century, this is not an acceptable answer." [8]
"I think the reason make is both so controversial and also long-lived is that despite how everyone thinks of it, it isn't really a build tool. It actually doesn't know anything at all about how to build C, C++, or any other kind of code. (I know this is obvious to those of us that know make, but I often get the impression that a lot of people think of make as gradle or maven for C, which it really isn't.) It's really a workflow automation tool, and the UX for that is actually pretty close to what you would want. You can pretty trivially just copy tiresome sequences of shell commands that you started out typing manually into a Makefile and automate your workflow really easily without thinking too much. Of course that's what shell scripts are for too, but make has an understanding of file based dependencies that lets you much more naturally express the automated steps in a way that's a lot more efficient to run. A lot of more modern build tools mix up the workflow element with the build element (and in some cases with packaging and distribution as well), and so they are "better than make", but only for a specific language and a specific workflow." [9]
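A sketch of that workflow-automation use: a couple of shell steps copied into a (made-up) Makefile, where the file-based dependencies mean only out-of-date steps re-run (file names and URL are invented for illustration):

```make
# data.csv is fetched only if it doesn't exist yet;
# report.pdf is rebuilt only when data.csv or render.py changes.
# (Recipe lines must begin with a hard tab.)
data.csv:
	curl -o data.csv https://example.com/data.csv

report.pdf: data.csv render.py
	python render.py data.csv report.pdf
```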
"> and the UX for that is actually pretty close to what you would want.
That is so not true. Make has deeply woven into it the assumption that the product of workflows are files, and that the way you can tell the state of a file is by its last modification date. That's often true for builds (which is why make works reasonably well for builds), but often not true for other kinds of workflows...Anything where the relevant state lives in a database, or is part of a config file, or is an event that doesn't leave a file behind (like sending a notification)... " [10] , [11]
" send: foo.log
	tail foo.log
	touch send
" [12] (in response to the previous comment about 'make' not being able to record events like the sending of notifications)
"But regardless of that, a tool that makes a semantic distinction between tabs and spaces is NEVER the UX you want unless you're a masochist. " [13]
"GNU make has had an option (.RECIPEPREFIX) to change this..." [14] (in response to the previous comment regarding spaces and tabs)
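For example (.RECIPEPREFIX is real GNU Make, available since 3.82; the rule itself is made up):

```make
.RECIPEPREFIX = >
hello:
> echo "no tabs required"
```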
" Another issue with Make is that it's not smart enough to know that intermediate files may change without those changes being important. Consider that I change the comments in foo.c or reformat for some reason. This generates a new foo.o because the foo.c timestamp is updated. Now it wants to rebuild everything that uses foo.o because foo.o is newer than those targets. Problem, foo.o didn't actually change and a check of its hash would reveal that. Make doesn't know about this. So you end up making a trivial change to a source file and could spend the afternoon rebuilding the whole system because your build system doesn't understand that nothing in the binaries are actually changing. ... With regard to my last comment (the problem with small changes in a file resulting in full-system recompilation), see Tup. It maintains a database of what's happened. So when foo.c is altered it will regenerate foo.o. But if foo.o is not changed, you can set it up to not do anything else. The database is updated to reflect that the current foo.c maps to the current foo.o, and no tasks depending on foo.o will be executed. Tup also handles the case of multiple outputs from a task. There are probably others that do this, it's the one I found that worked well for my (filesystem-based) workflows. " [15] , [16]
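For flavor, Tup's rule syntax for the foo.c/foo.o example above, as a hypothetical Tupfile (the `: inputs |> command |> outputs` form is Tup's actual rule shape; file names are invented):

```
: foo.c |> gcc -c foo.c -o foo.o |> foo.o
: foo.o |> gcc foo.o -o app |> app
```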
" The reason I don't like (((make))) is portability. Since the steps within the makefile are going to be run through a shell, it is going to behave differently on different systems.
If your makefile fixes up a file using sed and your system has gnu sed, your makefile may fail on a system with BSD sed (e.g., a mac). If you rely on bash-isms, your makefile may not work on a debian system where it will be run with dash instead of bash. And so on. " [17]
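The classic instance of this is in-place editing, where GNU and BSD sed disagree about the -i flag; a minimal demonstration (file name invented):

```shell
printf 'foo\n' > config.txt
# GNU sed (Linux): the backup suffix is optional and attached to the flag:
sed -i 's/foo/bar/' config.txt
# BSD sed (macOS) instead requires a separate (possibly empty) suffix argument:
#   sed -i '' 's/foo/bar/' config.txt
cat config.txt   # on a GNU system: prints "bar"
```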
"Make's interface is horrible. Significant tabs. Syntax which relies on bizarre punctuation... If only whoever authored Make 40 years ago had had the design acumen of a Ken Thompson or a Dennis Ritchie!" [18]
"I've seen plenty of unmanageable Makefiles, but I haven't seen another system that would make them inherently cleaner. (I love CMake, but it's a beast, and even harder to debug than make. If it weren't for its nice cross-platform capabilities, I'm not sure it would see much use. It's also too specialized for a generic build tool. Then again, I definitely prefer it to raw Makefiles for a large C++ project.) " [19]
" "In all seriousness, what's wrong with it?"
1. Claiming a rule makes a target, but then fails to make that target, ought to be a runtime fatal error in the makefile. I can hardly even guess at how much time this one change alone would have saved people.
2. String concatenation as the fundamental composition method is a cute hack for the 1970s... no sarcasm, it really is... but there's better known ways to make "templates" nowadays. It's hard to debug template-based code, it's hard to build a non-trivial system without templates.
3. Debugging makefiles is made much more difficult than necessary by make's default expansion of every target to about 30 different extensions for specific C-based tools (many of which nobody uses anymore), so make -d output is really hard to use. Technically once you learn to read the output it tends to have all the details you need to figure out what's going wrong, but it is simply buried in piles of files that have never and will never be found in my project.
4. The distinction between runtime variables and template-time variables is really difficult and annoying.
5. I have read the description of what INTERMEDIATE does at least a dozen times and I still don't really get it. I'm pretty sure it's basically a hack on the fact the underlying model isn't rich enough to do what people want.
6. Sort of related to 2, but the only datatype being strings makes a lot of things harder than it needs to be. " [20]
" With the debugging expansion thing you're mentioning, now I'm craving a make built on some minimalist functional programming language like Racket where "expand the call tree" is a basic operation. " [21]
" I've been writing Makefiles regularly for maybe 15 years and I always end up on this page every time I need to write a new one: https://www.gnu.org/software/make/manual/html_node/Automatic...
$< $> $* $^ ... Not particularly explicit. You also have the very useful substitution rules, like $(SRC:.c=.o) which are probably more arcane than they ought to be. You can make similar complaints about POSIX shell syntax but at least the shell has the excuse of being used interactively so it makes sense to save on the typing I suppose.
That's my major qualm with it however, the rest of the syntax is mostly straightforward in my opinion, at least for basic Makefiles. " [22]
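For reference, the automatic variables and substitution references in question, in a minimal invented Makefile (recipe lines must begin with a hard tab):

```make
SRC = foo.c bar.c
OBJ = $(SRC:.c=.o)       # substitution reference: becomes "foo.o bar.o"

app: $(OBJ)
	$(CC) $^ -o $@       # $^ = all prerequisites, $@ = the target

%.o: %.c
	$(CC) -c $< -o $@    # $< = the first prerequisite
```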
" give pmake a shot sometime.. the syntax/semantics are much more 'shell-like' imho and some things are just much more possible.. (e.g. looping rather than recursive calls to function definitions)" [23]
" ...some great features of make:
"it could be a lot worse. (see also: m4, autoconf, sendmail.cf)" [25]
"Make is the tried and true, but very rapidly becomes difficult to manage for larger projects, especially in a monorepo." -- [26] (that commenter prefers Bazel)
(do these even belong in this section?)
"an F# "Make" system...a build DSL in F# scripts." [27]
Links:
cons:
" Cmake is more of a replacement for autotools than for make.
Advantages: Good Windows support.
Disadvantages: Dictates the directory structure, much less flexible than autotools.
If you know shell, an existing autotools project can be modified easily.
If you want to do something special in cmake, first you need to go to Stackoverflow. If you are lucky, the thing you want to do is supported (often it is not).
All in all, I feel locked in by cmake - to the point that once the build works I'm less inclined to refactor because the directory structure cannot be changed easily. " -- [34]
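For comparison, a minimal CMakeLists.txt (an illustrative sketch; the project name and paths are invented):

```cmake
cmake_minimum_required(VERSION 3.10)
project(hello C)
add_executable(hello src/main.c)
target_include_directories(hello PRIVATE include)
```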
Used by:
Opinions:
https://github.com/premake/premake-core/wiki/Your-First-Script
https://en.wikipedia.org/wiki/SCons
"When to stick with Webpack. The job that Webpack does is quite specialized. If you are writing a frontend app and you need code bundling you should absolutely use Webpack (or a similar tool like Parcel). On the other hand if your needs are more general Make is a good go-to tool. I use Make when I am writing a client- or server-side library, or a Node app. Those are cases where I do not benefit from the specialized features in Webpack." [35]
"Maven, though at first rather overwhelming, turned out to have a ton of features I’d often wished for in other build/dependency management systems." -- https://medium.com/@octskyward/why-kotlin-is-my-next-programming-language-c25c001e26e3
"I find Leinigen a bit bloated respect to Mix. Mix is faster, lighter and integrated with Elixir. Lein is not. I find Lein a bit slow..." [36]
eg with Clojure Boot installed:

$ curl https://raw.githubusercontent.com/alda-lang/alda/master/bin/alda

; this is a minimal dispatch script that fetches the latest version of
; alda from clojars (a maven repository for clojure projects) and runs the
; alda.cli/-main method, passing along any command-line arguments.

; this script will automatically update your version of alda as newer
; versions are released.

(set-env! :dependencies '[[alda "LATEST"]])
(require '[alda.cli])
(defn -main [& args]
  (apply (resolve 'alda.cli/-main) args))

$ sudo curl https://raw.githubusercontent.com/alda-lang/alda/master/bin/alda -o /usr/local/bin/alda
$ sudo chmod +x /usr/local/bin/alda
$ alda
Retrieving boot-2.2.0.jar from https://clojars.org/repo/
Retrieving clojure-1.7.0.jar from https://repo1.maven.org/maven2/
Retrieving dynapath-0.2.3.jar from https://clojars.org/repo/
Retrieving pod-2.2.0.jar from https://clojars.org/repo/
Retrieving shimdandy-impl-1.1.0.jar from https://repo1.maven.org/maven2/
Retrieving core-2.2.0.jar from https://clojars.org/repo/
...
"emphasis has been placed on build speed, correctness, and reproducibility.[2][4] The tool uses parallelization to speed up parts of the build process...One of the goals of Bazel is to create a build system where build target inputs and outputs are fully specified and therefore precisely known to the build system.[7] This allows a more accurate analysis and determination of out-of-date build artifacts in the build system's dependency graph. Making the dependency graph analysis more deterministic leads to potential improvements in build times by avoiding re-executing unnecessary build targets...One of Bazel's goals is to enable distributed and parallel builds on a remote cloud infrastructure. Bazel is also designed to scale up to very large build repositories which may not be practical to download to an individual developer's work machine.[9]
Bazel provides tooling which helps developers to create bit-identical reproducible build outputs. Bazel's implemented rules avoid typical pitfalls such as embedding timestamps in generated outputs to ensure content digest matches. This in turn allows the build system to reliably cache (memoize) the outputs of intermediate build steps. Furthermore, reproducible build makes it possible to share intermediate build results between teams or departments in an organization, using dedicated build servers or distributed caches...Bazel was designed as a multi-language build system. Many commonly used build system are designed with a preference towards a specific programming language. Examples of such systems include Ant and Maven for Java, Leiningen for Clojure, sbt for Scala, etc...Bazel also provides sand-boxed build execution. This can be used to ensure all build dependencies have been properly specified and the build does not depend, for example, on libraries installed only locally on a developer's work computer" -- https://en.wikipedia.org/wiki/Bazel_(software)
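For flavor, a minimal BUILD file (Bazel's configuration language is Starlark, a Python dialect; the target and file names here are invented, but cc_library and cc_binary are real Bazel rules):

```python
# BUILD -- Starlark build configuration
cc_library(
    name = "greet",
    srcs = ["greet.c"],
    hdrs = ["greet.h"],
)

cc_binary(
    name = "hello",
    srcs = ["main.c"],
    deps = [":greet"],  # fully specified inputs enable caching and sandboxing
)
```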
Opinions:
bazel competitor (both inspired by Google's internal 'blaze')
https://please.build/ https://news.ycombinator.com/item?id=25238453
tup opinions:
https://en.wikipedia.org/wiki/Category:Build_automation
" Specifically, we propose that, as an experiment for Go 1.5, we add a temporary “-vendor” flag that causes the go command to add these semantics:
If there is a source directory d/vendor, then, when compiling a source file within the subtree rooted at d, import "p" is interpreted as import "d/vendor/p" if that exists.
When there are multiple possible resolutions, the most specific (longest) path wins.
The short form must always be used: no import path can contain “/vendor/” explicitly.
Import comments are ignored in vendored packages. " -- https://groups.google.com/forum/#!msg/golang-dev/74zjMON9glU/4lWCRDCRZg0J
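A hypothetical layout illustrating those rules (all names invented):

```
$GOPATH/src/d/
  code.go          // import "p" here resolves to d/vendor/p
  vendor/
    p/
      p.go
    q/
      vendor/
        r/         // within q's subtree, import "r" resolves to
                   // d/vendor/q/vendor/r (longest matching path wins)
```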
0xdeadbeefbabe 10 hours ago
A vendors B and C, but C vendors D and E. According to this proposal do you lay out the files like this: (A (vendor (B C D E)))?
Edit: I guess HN tree markup is not working :)
jooon 9 hours ago
Yes. That would work. However, I also believe this will work: (A (vendor (B (C (vendor (D E)))))).
https://groups.google.com/d/msg/golang-dev/74zjMON9glU/LjrVU...
https://groups.google.com/d/msg/golang-dev/74zjMON9glU/0e7M7...
https://groups.google.com/d/msg/golang-dev/74zjMON9glU/qGzsS...
jooon 9 hours ago
and if you have both: (A (vendor (B (C (vendor (D E))) D E)))
C would pick D E in its C/vendor before D E in A/vendor before D E in $GOPATH/src
ps. I have edited these comments about 5 times to get the parentheses right. :)
for 'monorepos'
"Pants can’t seem to figure out what it wants to be and is a massive pain to work with in my experience." -- [43] (that commenter prefers Bazel)
combined build system/packaging system
See also:
" iainmerrick 27 days ago
...
I think Xcode is what a lot of other IDEs and build systems are moving towards. Xcode is nice as long as you're working with normal code in permitted languages (C++, Obj-C, Swift) and permitted resource formats (NIBs). But if you need to do something slightly unusual, like calling a shell script to generate resources, it's horrible.
Oh, and I didn't even mention package managers! Having those tightly coupled to the other tools is horrible too.
bluetomcat 27 days ago
> But if you need to do something slightly unusual, like calling a shell script to generate resources, it's horrible.
Not quite true. Xcode provides a "Run Script" build phase that lets you enter your shell script right into the IDE. A lot of handy environment variables are also there. You can easily reach your project via $SRCROOT, or modify the resources of the output bundle via "${CONFIGURATION_BUILD_DIR}/${PRODUCT_NAME}".
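A sketch of such a "Run Script" phase (the script body and paths are invented; SRCROOT, CONFIGURATION_BUILD_DIR and PRODUCT_NAME are the Xcode build-setting variables mentioned above):

```shell
# Hypothetical Run Script build phase: generate a resource and
# copy it into the built product.
./scripts/gen_resources.sh "$SRCROOT/assets" out/
cp out/config.json "${CONFIGURATION_BUILD_DIR}/${PRODUCT_NAME}.app/"
```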
iainmerrick 26 days ago
That's the sort of stuff I mean when I say "horrible". :)
It'll just run the script every time, rather than doing anything smart with dependencies. Output from the script might or might not be picked up and tracked properly by the IDE. If you accidentally mess something up nothing will detect or prevent that.
(Edit: should add that I haven't given it a proper try in recent Xcode versions. I probably should.)
"
-- https://news.ycombinator.com/item?id=10695646
"- SBT is amongst the best build tools ever available. I could rant all day about the clusterfuck of Javascript (npm, Bower, Grunt, Gulp, Brunch.io, etc.) or Python (easy_install, setuptools, virtualenv) or .NET (MSBuild, Nuget, dotnet) or Haskell (cabal). For all its quirks, SBT is by far the sanest dependency and build management tool I've worked with. In fact, amongst the best reasons for preferring Scala.js is being able to work with SBT and avoid Javascript's clusterfuck completely. " -- [44]
" ...here is what I dislike about SBT:
"If you like Make (as opposed to shell bits embedded in JSON!), you’ll probably like Ninja even more. " -- https://lobste.rs/s/mtw9kb/makefile_websh_tconfig_json_js#c_oaxeke
Well-liked [45]
apparently has a problem with making it too easy to publish dotfiles in the same directory as other code:
https://news.ycombinator.com/item?id=10686676 https://github.com/ChALkeR/notes/blob/master/Do-not-underestimate-credentials-leaks.md
tjholowaychuk 9 hours ago
I'm sure I've done this in the past haha, the npm workflow isn't great at times in this regard. If you have something (to test etc) that is not checked into Git, but still in the directory, it can still make its way into a publish. That's definitely what I'd advise people to be most careful of, use npm-link and use credentials elsewhere etc.
Koa I'm curious of, I've seen almost every pull-request go in there, anyway nice post.
doublerebel 9 hours ago
Npm package "irish-pub" has definitely saved my ass a few times. (It shows a dry run of "npm publish".)
mofle 8 hours ago
There's an easy way to prevent credential leakage when publishing to npm => Explicitly list the files to include in the package through the `files` property in package.json.
Docs: https://docs.npmjs.com/files/package.json#files
Example: https://github.com/sindresorhus/got/blob/2f5d5ba94d625802880...
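That `files` whitelist looks like this (an illustrative package.json; `files` is a real npm field — paths not listed are excluded from the published tarball, apart from a few always-included files such as package.json and README):

```json
{
  "name": "my-pkg",
  "version": "1.0.0",
  "main": "index.js",
  "files": [
    "index.js",
    "lib/"
  ]
}
```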
https://code.facebook.com/posts/1840075619545360
Alternative to npm
http://yehudakatz.com/2016/10/11/im-excited-to-work-on-yarn-the-new-js-package-manager-2/
http://lucumr.pocoo.org/2012/6/22/hate-hate-hate-everywhere/
https://news.ycombinator.com/item?id=10072460 (and read the replies, there may be some errors in that comment)
" The thing that really stands out to me though is the poor state of python tooling and the library ecosystem. Having used Rubygems and bundler pip feels like taking an enormous step back. It's much less expressive. It doesn't handle the difference between production only dependencies, regular dependencies and development only dependencies in a good way. It's difficult to differentiate between locked dependencies vs desired dependencies (the Gemfile/Gemfile.lock distinction). PyPI, and especially using private PyPI registries, is more complex than it is in Ruby. There seem to be fewer nice libraries and they seem to be spread across the web whereas Ruby centralises around GitHub. I also find that Python libraries have lacking or hard to find documentation in many cases. " -- [46]
"gem is superior to pip. Take a look at "pip-tools" if you haven't already, it eases some of the pain." [47]
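The pip-tools workflow, as a sketch (pip-compile and pip-sync are the tool's actual commands; the dependency is invented):

```shell
# requirements.in holds the *desired* dependencies:
echo "requests" > requirements.in
# pip-compile resolves and pins them into requirements.txt
# (the Gemfile.lock analogue):
pip-compile requirements.in
# pip-sync installs exactly what the lockfile says, nothing more:
pip-sync requirements.txt
```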
Links:
" Daniel Holth wrote a PEP for the wheel format, which allows for binary redistribution of libraries. In other words, it lets authors of packages which need a C compiler to build give their users a way to not have one." -- [48]
https://tox.readthedocs.io/en/latest/
https://www.kennethreitz.org/essays/announcing-pipenv
"Freeze (package) Python programs into stand-alone executables "
https://github.com/pyinstaller/pyinstaller
Opinions:
http://go-talks.appspot.com/github.com/davecheney/presentations/reproducible-builds.slide#1 discussion: https://news.ycombinator.com/item?id=9456931
Stack:
https://help.github.com/en/articles/about-github-package-registry https://github.blog/2019-05-10-introducing-github-package-registry/
"GitHub Package Registry currently supports these clients and formats":
Ruby Bundler, Javascript NPM, Rust Cargo, Javascript Yarn, and the need for determinism and lockfiles and also for supporting multiple versions of a dependency used within the same project:
https://news.ycombinator.com/item?id=12684980
Need for install from git repos and private packages: https://news.ycombinator.com/item?id=12685986
This book isn't about this but since it's related to language-associated configuration and deployment systems, we'll briefly list some:
Links:
todo: add stuff from http://www.tiobe.com/index.php/content/TICS/FactSheet.html
recommended by [50]
recommended by [51]
recommended by [52]
" Recent security stories confirm that errors like buffer overflow and use-after-free can have serious, widespread consequences when they occur in critical open source software. These errors are not only serious, but notoriously difficult to find via routine code audits, even for experienced developers. That's where fuzz testing comes in. By generating random inputs to a given program, fuzzing triggers and helps uncover errors quickly and thoroughly. In recent years, several efficient general purpose fuzzing engines have been implemented (e.g. AFL and libFuzzer), and we use them to fuzz various components of the Chrome browser. These fuzzers, when combined with Sanitizers, can help find security vulnerabilities (e.g. buffer overflows, use-after-free, bad casts, integer overflows, etc), stability bugs (e.g. null dereferences, memory leaks, out-of-memory, assertion failures, etc) and sometimes even logical bugs. OSS-Fuzz's goal is to make common software infrastructure more secure and stable by combining modern fuzzing techniques with scalable distributed execution. OSS-Fuzz combines various fuzzing engines (initially, libFuzzer) with Sanitizers (initially, AddressSanitizer) and provides a massive distributed execution environment powered by ClusterFuzz. " -- [53]
https://parasol.tamu.edu/pivot/
Uses an IL (intermediate language) called IPR, which is serialized as XPR, for which an overview is given in https://parasol.tamu.edu/pivot/publications/user-guide.pdf
"Currently, the Pivot does not support an annotation language. Pivot programs can annotate IPR nodes, but there is no facility for the programmer to embed annotations in the C++ source text. Providing such a facility is easy, but once programmers starts to depend on such annotations, they have created a new special-purpose language. We want to explore how much can be done with the SELL approach, relying only on standard conforming C++ source text." -- [54]
Links:
Seems to be inactive as of 2015?
https://news.ycombinator.com/item?id=7802005
--
lbotos 15 hours ago
Current Go users, What's the state of package versioning right now? Is vendorization still the answer?
dayjah 12 hours ago
We use Godep, it is very good. As per another answer to your question: with '-copy=false' it behaves a lot like bundler.lock. Having spent a lot of time working with it we've found a few areas where you can get burned a little; particularly if you've structured your repos as a set of libraries, as seems to be the encouraged golang pattern.
When you have multiple libraries you have to be very specific about when you run godep, lest you find yourself with two libraries needing different versions of a common library, for example Main imports Foo and Bar, which both import Baz. Godep provides a mechanism for handling this: each dependency is explicitly locked into a fixed revision (e.g. commit sha, in the case of git). The pain comes about when during debugging as it can be very hard to reason which version of a library you're using.
Additionally the revision aspect is also a bit of a PITA, we use a development flow which rebases our small commits into a big commit and then merges that into our master branch; if you ran godep prior to that you're now referencing a commit that no longer exists. Given the chain of references that can exist this can go a very long way down. This same pattern also forces you into needing to push your dev branches to an origin server, as godep checks out the repos during the build, which while pretty benign a concern is a PITA if you forget and your build breaks because of it.
We're strongly considering moving to "one big repo" to help combat this issue (as well as a few others) for our internal golang repositories. Referencing "published commits" in 3rd party libraries is an acceptable level of pain. We're not entirely sold on this yet... just considering it.
leef 15 hours ago
No need to vendor. Use Godep without copying (godep save -copy=false) to create the equivalent of a bundler.lock file and check that into source.
AYBABTME 15 hours ago
There are other ways around, but I'd say the community is solidifying towards godep. Someone correct me if I'm wrong.
chimeracoder 1 hour ago
There seem to be a lot of comments here recommending godep, but just to throw my experience in: none of the projects I've interacted with use godep (other than the Heroku buildpack, which was written by the author of Godep).
It seems to be a solution for some (not all) projects that are released in binary form, but that isn't relevant to most projects out there[0]. I have never felt the need for what godep provides; vendoring myself has been sufficient for the (very rare) case in which I need specific versions of dependencies other than tip/trunk.
I asked around on #go-nuts, and (though the sample size was small), the other regular contributors who idle in the channel seemed to have the same experience.
YMMV obviously.
[0] https://botbot.me/freenode/go-nuts/2014-06-19/?msg=16563763&...
---
---
mimog 1 day ago
I like how it says "Rock-Solid Ecosystem", yet I have had the exact opposite experience trying to install even the most basic things with Cabal. I still can't get the Sublime text haskell plugin to work due to a dependency that fails to compile.
rhizome31 1 day ago
I had a similar experience with Cabal. On one computer I haven't been able to install Yesod whereas on another one, it finally worked after I had wiped my ~/.cabal. It gave me the impression that Cabal's dependency resolution mechanism is still a bit brittle.
Also I found that installing stuff through Cabal was pretty slow. It's probably partly because Haskell libraries tend to be kept narrow in scope so it's necessary to install a lot of small packages to get a piece of functionality (take for instance the dependency list of Aeson, which seems to be the recommended choice for working with JSON : http://hackage.haskell.org/package/aeson ). Another reason is that Cabal compiles Haskell code into native code.
---
---
dbaupp 3 days ago
As others have said, having a package management system that deeply understands the language and tooling is awesome. Examples:
I'm sure all of this is possible with other systems, but it seems unlikely to be so nice to use.
seabee 2 days ago
It's very similar to Racket, and yes, it is nice to use!
Other systems can get you much of the way there (node, Python are the only ones I'm really familiar with) but I suspect you need a little language help to achieve the same kind of convenience.
---
pestaa 3 days ago
Curious to hear more about language-specific (though OS-agnostic!) package management systems. IMO composer is the best thing ever happened to PHP, Ruby gems are huge, Python eggs also make a very useful ecosystem.
openSUSE's Open Build System would be great to ship independent packages, but those are again heavily tied to Unices, hence leaving other platforms behind.
JoshTriplett 3 days ago
> Curious to hear more about language-specific (though OS-agnostic!) package management systems.
As far as I can tell, one of the main justifications for most language package management systems is "we also run on Windows/OSX, which has no package management, so we'll invent our own". As a result, users of systems that do have sane package management get stuck with multiple package management systems, one for the distro and one for every language. Even then, I find it disappointing that nobody has built a cross-platform package management system for arbitrary languages to unify those efforts.
reply
smacktoward 3 days ago
link |
The other justification is generally a clash of cultures: the people who maintain distro/OS package managers generally come out of the culture of sysadmins, who value stability over feature-richness, while the people working in the language communities generally come out of the culture of developers, whose priorities are the exact opposite.
When languages try to hook into existing OS-level systems, the people on the language end get frustrated by the way the people on the package-manager end don't hurry to rush out bleeding-edge versions of packages the second they hit Github. To the package-manager people, that's no big deal; their orientation towards stability and predictability makes them comfortable with waiting a little for the coffee to cool. But to the developers, who want to get their hands on the Latest and Greatest Right Now!, it feels like slogging through molasses.
So the developers eventually end up blowing their stacks and stomping off yelling "Yeah? Well fine, we'll build our own package manager then! With blackjack! And hookers!"
reply
DennisP 3 days ago
link |
Maybe OS-level package managers should default to stable, but let the user check a box to get the latest and greatest. Developers want a stable system like everyone else, but for the stuff we're hacking on, we have a legitimate need to get the most recent, so our software isn't obsolete by the time we finish it.
reply
djur 3 days ago
link |
Most OS-level package managers also aren't designed to install more than one version of a package at a time. They don't tend to integrate with build systems as well, either.
reply
mercurial 2 days ago
link |
That's not so simple. A distro is a fine-tuned collection of packages which work more or less well together. Debian, for instance, comes in stable/testing/unstable/experimental flavours, depending on how daring you are. But even this isn't a universal solution. If you are deploying, for instance, a web application, you will want to deploy a locked-down set of dependencies as well, regardless of what is present on the target system. And you may need to deploy multiple applications side by side. Few system package managers have an answer to this.
reply
DennisP 2 days ago
link |
So developers end up installing later versions manually. And in many cases it's no big deal. If the distro has Julia 0.2.1 and Emacs 23, I can upgrade to Julia 0.3 and Emacs 24 and it's not likely to damage anything. It'd just be nice if I could do it with the package manager instead.
But just because I'm doing that doesn't necessarily mean I want, say, the latest unstable version of the window manager.
reply
mercurial 2 days ago
link |
Debian will let you do that. You can run, say, your machine on testing but get the latest Firefox from experimental if you want. This may, however, upgrade other dependencies on your system, but it's pretty much unavoidable.
reply
yxhuvud 2 days ago
link |
I'd be happy enough if the OS-level packagers stopped modifying the package-level packages they packaged.
reply
kungfooguru 3 days ago
link |
The problem with language package management systems is they've been used for installing user facing software. As a developer tool I think it is the perfect way to go.
And you should add Linux to your Windows/OSX as being an issue: which Linux package management tool would you build packages for? All of them?
The end user package management provided by the OS should be for installing end user packages and the language tool for installing and publishing libraries and dev tools.
reply
wycats 3 days ago
link |
> The problem with language package management systems is they've been used for installing user facing software. As a developer tool I think it is the perfect way to go.
Precisely so.
reply
pjmlp 3 days ago
link |
> As a result, users of systems that do have sane package management
Given the diversity of OS in the IT landscape, which systems are those?
reply
stefantalpalaru 3 days ago
link |
I met only one package manager that I don't need to fight in order to get what I want: Gentoo's Portage. With a local overlay and language specific functionality concentrated in eclasses it's trivial to add new packages, do version bumps, have fine grained control over installed versions, enabled features, etc.
reply
kryptiskt 3 days ago
link |
The distro only contains a small selection of the packages (even if there are hundreds or thousands of them) and the language package system is usually the source the distro maintainers use to find the packages anyway.
reply
yla92 3 days ago
link |
Lately, IMHO, Gradle in Android development (applicable to Java development as well) is a huge improvement over managing dependencies with pom.xml (Maven) and linking jar files manually. Besides, you can totally customize build.gradle too.
reply
dscrd 2 days ago
link |
> Disappointing to see yet another language-specific package management system (Cargo), though.
As a packager in a Linux distro, I'm disappointed every time somebody tries to cram in PL-specific packages inside distro packages.
reply
pjmlp 3 days ago
link |
> Disappointing to see yet another language-specific package management system (Cargo), though
So what is the solution to have portable packages for:
reply
adrusi 3 days ago
link |
The goal of the [nix](http://nixos.org/) project is to solve this, and every time anyone brings up a package manager on HN, someone has to mention nix. The reality is that nix is really nice, but isn't any better than making a new package manager until it has wide adoption, so no one is using it.
reply
steveklabnik 3 days ago
link |
Nix was brought up during the discussion that led to Cargo, but no Windows support is a deal breaker.
reply
pmahoney 3 days ago
link |
I would probably make the same decision, but I hope in the end Cargo is easy to wrap with Nix, which is a breath of fresh air, particularly when needing to mix dependencies that cross language boundaries and share those build recipes with a team.
Previously, I wrote shell scripts and worried whether everyone on the team had rsync installed, or xmlstarlet, or some other less common tool. Now I wrap those scripts in a Nix package that explicitly depends on all those and distribute with confidence. It's fantastic.
Bundler and rubygems, for example, do various things that make good support within Nix rough. Two examples: 1. rubygems has no standard way of declaring dependencies on C libraries; 2. as far as I know there is no way to ask Bundler to resolve dependencies, create a Gemfile.lock, but not install any gems (I realize github gems must be downloaded to see the gemspec...)
reply
steveklabnik 3 days ago
link |
Cargo has the second, and there's a plan for the first.
That said, the reason that you want it to do the installation is that a lockfile is supposed to represent the way to do a build successfully. Without building everything, you can't actually be sure that the lockfile is correct. In theory, it should be...
reply
pmahoney 3 days ago
link |
> reason that you want it to do the installation is that a lockfile is supposed to represent the way to do a build successfully
Sure, and I'd like to do that build within Nix (and someone else might want to do it with another packager), which gives a stronger guarantee than Bundler since it incorporates C library dependencies and more. Anyway, the specifics aren't relevant to this discussion, and it seems you have a grasp of the issues, so carry on!
reply
derefr 3 days ago
link |
Wouldn't it still have been less effort to port Nix to Windows, than to write an entirely new package manager and then port it to every OS?
reply
steveklabnik 3 days ago
link |
If that were the only downside, possibly. I don't really do Windows development, so I can't tell you how difficult porting Nix would be. There's a large advantage to having a packaging system that knows your language well: it's going to have tighter integration than a generic one ever could.
reply
pjmlp 3 days ago
link |
It seems to be only for GNU/Linux systems, what about all other OSs out there?
reply
BruceM 3 days ago
link |
I've been experimenting with Nix on Mac OS X lately and it works fine. I've heard that it works on FreeBSD as well. The big gap is Windows.
The good news is that you can integrate your language-specific tools with Nix as well, such as has been done for Haskell, node.js and other things. (I'm looking at it so that we can integrate our Dylan stuff with it.)
reply
pjmlp 3 days ago
link |
When these discussions happen on HN, I always see a narrow discussion of Mac OS X, GNU/Linux, Windows and, with luck, *BSD.
But the world of operating systems is so much bigger than the desktop under the desk.
Good work on Dylan by the way.
reply
BruceM 2 days ago
link |
I'd love to have the time and the resources to deal with more OSes. :) 20 years ago, I had to keep stuff running on Solaris and lots of other platforms. About 20 years ago, I still did some work on VMS on actual VAX hardware! It wasn't that long ago that we had the possibility of BeOS either. Comparatively, we have quite a monoculture (of POSIX) these days, with Windows being the non-POSIX representative.
Maybe unikernels like OpenMirage will help make things interesting.
And thanks! The work on Dylan is a lot of fun and keeps me semi-sane by keeping me busy.
reply
CMCDragonkai 3 days ago
link |
Nix is much more than just a package manager though.
reply
doe88 3 days ago
link |
> Disappointing to see yet another language-specific package management system (Cargo), though.
Coming from Python, I find Cargo very, very smart and very well thought out so far. It is not feature-heavy, but everything has a very clear and useful purpose. For instance, today I found that if I created a file .cargo/config I could override my dependencies to make Cargo search for projects on my fs instead of grabbing them from GitHub; while doing development, that's a big thing I think.
reply
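The override mechanism mentioned above is Cargo's path-override support; a sketch, with a hypothetical local path:

```toml
# .cargo/config (sketch; the path below is a hypothetical local clone)
# Make Cargo use a local checkout instead of fetching the dependency.
paths = ["/home/me/src/some-dependency"]
```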
Ygg2 3 days ago
link |
> Disappointing to see yet another language-specific package management system (Cargo), though.
I don't think it is. You need support for Rust modules on various platforms: Linux/Mac/Windows (possibly Android). No single tool works on all those platforms. Cargo does, and it has minimal dependencies.
Not having to juggle three different configurations (CMake, Makefile, etc.) on different platforms is actually pretty great.
reply
---
"
Rust’s build system, Cargo, is very good. It’s not perfect, but it is a breath of fresh air after Java’s Gradle.
Cargo’s trick is that it doesn’t try to be a general-purpose build system. It can only build Rust projects, and it has rigid expectations about the project structure. It’s impossible to opt out of the core assumptions. Configuration is a static, non-extensible TOML file.
In contrast, Gradle allows free-form project structure, and is configured via a Turing-complete language. I feel like I’ve spent more time learning Gradle than learning Rust! Running wc -w gives 182_817 words for the Rust book, and 280_506 for Gradle’s user guide.
Additionally, Cargo is just faster than Gradle in most cases.
Of course, the biggest downside is that custom build logic is not expressible in Cargo. " [55]
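The rigid structure the quote praises means a typical Cargo project is fully described by a small static manifest (a sketch; name and dependency are hypothetical), with sources expected under src/ (src/main.rs or src/lib.rs):

```toml
# Cargo.toml: the whole build configuration for a conventional project
[package]
name = "myapp"
version = "0.1.0"

[dependencies]
serde = "1.0"
```

Custom build logic, where it is needed at all, is confined to an optional build.rs program rather than the manifest itself.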
--- "No disagreement there, I hate Gradle :) Dynamically typed DSLs that have arbitrary context changes are not my cup of tea. I much prefer bazel, though I haven’t really used Maven." [56] ---
" The problem is that Visual Studio doesn’t work on non-Windows systems, and Unix tools like autotools and pkg-config don’t work on Windows (or have quirky ports in mingw/cygwin), so it’s hard to make a project that builds on both. Package management on Windows is a fragmented mess. MSVC has different flags, pragmas and system headers than gcc and clang. C support in MSVC is incomplete and buggy, because Microsoft thinks C (not ++) is not worth supporting. It’s nothing insurmountable, but these are thousands of paper cuts.
OTOH: cargo build works on Windows the same way as on any other platform. Unless you’re doing something very system-specific, it just works. And even if you touch something that’s system-specific, chances are there’s already a dependency you can use to abstract that away.
Cross-compilation in Rust is not as nice as I’d like. While Rust itself can cross-compile object files and static libraries easily, they need linking and system libraries. Rust uses C toolchain for linking, so it inherits many of C’s cross-compilation pains. " kornel
--- http://doc.crates.io/guide.html
---
https://www.quora.com/Node-js-Whats-so-great-about-npm
(the following are from HN, not the previous link)
" carrja99 1 day ago
link |
I'd have to say the biggest thing that npm has over module systems found in Java, Ruby, Python etc. is the complete isolation of transitive dependencies. It is nice to use two dependencies and not waste a day or two because, say, module A requires one version of module C while module B requires another.
In all the languages you mentioned it becomes a pain because you can only use one version of module C, meaning either module A or B simply will not work until you find a way around it.
reply "
" dragonwriter 1 day ago
link |
> Semantic versioning's raison d'être is to prevent these sorts of issues.
Semver may surface them by making it very clear (assuming all involved libraries use semver) where they can occur, but a package management/loading system that only allows one version of a particular package to be loaded obviously can't do anything to prevent the situation where different dependencies rely on incompatible versions of the same underlying library.
Sure, with semver it won't happen if A depends on C v.1.0.1 and B depends on C v.1.4.3 (as A and B can both use C v.1.4.3), but it will still happen if A depends on C v.1.0.1 and B depends on C v.2.0.0.
To actually avoid the problem, you need to isolate dependencies so that they aren't included globally but only into the package, namespace, source file, or other scope where they are required.
reply "
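The isolation described above comes from npm nesting each package's dependencies under the package itself, so incompatible versions of C can coexist. A hypothetical node_modules layout:

```
node_modules/
  A/
    node_modules/
      C/          <- v1.0.1, visible only to A
  B/
    node_modules/
      C/          <- v2.0.0, visible only to B
```

Each require of C inside A resolves to the nearest enclosing node_modules, so A and B never see each other's copy.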
" No1 1 day ago
link |
NPM's way of managing dependencies still can waste a day or two (or more) of your time. For example: get a C object from B, then pass it into A; if A bundles a different version of C, the object may not behave as A expects.
Things are even more twisted when you have a half dozen versions of C floating around in your node_modules, and the problem isn't in your code, but a dependency of a dependency.
Another issue I've run into is patching a bug in a module, and then having to figure out how to get that patch into all of the other versions that cropped up in node_modules.
NPM is one way to solve the modules problem, but it's no panacea. "
" k3n 1 day ago
link |
That's great, but it's not without cost. Here, the cost is that you end up with deeply-nested directories (which breaks Jenkins' ability to properly purge the directory after a job). Node modules are also extremely liberal in the number of files they create -- even a "simple" app using just a few common modules could end up with 1k+ extra files. This can produce problems in your IDE, as well as with your source control or continuous delivery systems, among other things.
So, it solves some headaches, and creates others.
reply "
npm shrinkwrap
"
nobleach 1 day ago
link |
While I like using NuGet packages with C#, I'm not really wild about how they can get magically linked into a project, and then required. I had NUnit and Fluent Assertions become inextricable from a project I was working on even after all the tests were removed. Just a total mind-f*ck. Python when using pip is a whole lot better, but I've had some issues finding things there too. Ruby... it depends. Are we talking a Rails Gemfile or "gem install $package"? Conflicting versions can become an issue. Java with Gradle has been pretty cool so far. NPM, as a whole, has just worked. Packages are referenced in ONE place (package.json) and I can do an "npm install $package --save" during development and it gets included automatically. "
"
clintonb11 1 day ago
link |
I agree. pip in Python is great, but has some extra overhead and difficulty with it, like having to set up virtual environments for each project. NPM by default installs to the local project only, and with a quick --save will put it in package.json dependencies (similar to requirements.txt with pip). Node package management is awesome because it is so simple.
reply
rhelmer 1 day ago
link |
Virtual environments are optional though, right? You could have one big virtualenv for all projects, or simply install things into the system path (although I wouldn't recommend either)
reply "
"
PuercoPop 1 day ago
link |
I should probably just say: clearly you haven't seen Common Lisp's defpackage; modules are actually first-class objects there and are completely decoupled from the file system.
But most importantly, as Barbara Liskov mentions in this video[1], we don't know what a module is exactly or how to use them yet. Which is a specific statement aligned with Alan Kay's famous "We don't know how to design systems so let's not turn it into a religion yet."[2]
tl;dr: 1) Innovation is good. 2) Javascript's module is a half-assed implementation of Common Lisp's defpackage. (Don't get me wrong, it's still way better than Python's abhorrent ninja goto: import.)
[1]: http://www.infoq.com/presentations/programming-abstraction-l... [2]: https://www.youtube.com/watch?v=oKg1hTOQXoY
reply "
"
coolsunglasses 1 day ago
link |
You have not used a good module system. Clojure's namespace system for example is really nice.
reply
rafekett 1 day ago
link |
Have you used ML?
reply "
" Man, global system-wide installations that require admin rights by default? That's certainly something! Quite the stark comparison to Node.js and npm, where everything is installed locally into the current directory (under node_modules) by default, and "global" installation is actually a per-user installation. Tricking pip with virtualenv seems to get you pretty close to what you get by default with npm, albeit still somewhat more clunky. But to be fair, most other package managing solutions seem to pale in comparison to npm :-)"
"
phren0logy 2 days ago
link |
Nice article, but after using leiningen (the clojure solution to a similar problem, based on maven), it's really hard to go back to something like this. I really, really wish there was an equivalent in python (really, every language I use). "
http://jacobian.org/writing/django-apps-with-buildout/
"
arnarbi 2 days ago
link |
I find it best to keep virtual envs completely away from the project (I use http://virtualenvwrapper.readthedocs.org/en/latest/ which puts them by default in ~/.virtualenvs). A virtualenv is completely machine-specific.
If your project is a package itself (i.e. it has a setup.py file), then use that file to specify dependencies. On a new machine I check out a copy, create a virtual env and activate it. Then in the local copy I run "pip install -e .". This installs all the requirements from setup.py in the virtualenv, and links the local copy of my project to it as well. Now your package is available in the virtual env, but fully editable.
If your python project is not a package, you can install its dependencies in a virtual env with pip. Then run "pip freeze" to generate a list of all installed packages. Save that to a text file in your repository, e.g. ``requirements.txt``. On a different machine, or a fresh venv, you can then do "pip install -r requirements.txt" to set everything up in one go.
reply "
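The virtualenv-plus-requirements workflow above, minus the network-dependent install steps, as a minimal shell session (a fresh venv freezes to an empty file):

```shell
# Create a project-local virtual environment; it is machine-specific,
# so keep it out of the repository.
python3 -m venv .venv
# Calling the venv's pip directly is equivalent to activating it first.
.venv/bin/pip freeze > requirements.txt   # pin the installed versions
# Later, on another machine (needs network):
#   .venv/bin/pip install -r requirements.txt
```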
"
"pip is vastly superior to easy_install for lots of reasons, and so should generally be used instead."
Unless you are using Windows, as pip doesn't support binary packages. "
virtualenv
extensions:
"The only debug tools that we found that were better than XDebug or NuSphere were Studio and C#. Perl, Python and Ruby were all a bit rubbish in comparison." -- https://news.ycombinator.com/item?id=10797344
Elixir "Mix is like gems/bundler/rails {console, server, etc.} in one neat package done right and minus the headaches." [57]
"I find Leiningen a bit bloated with respect to Mix. Mix is faster, lighter and integrated with Elixir. Lein is not. I find Lein a bit slow..." [58]
Elixir ExUnit diffing:
"
ExUnit will now include diffing output every time a developer asserts assert left == right in their tests. For example, the assertion:
assert "fox jumps over the lazy dog" == "brown fox jumps over the dog"
will fail with
[screenshot: ExUnit diff output]
such that “lazy” in “lhs” will be shown in red to denote it has been removed from “rhs” while “brown” in “rhs” will be shown in green to denote it has been added to the “rhs”. "
---
yodsanklai 4 days ago [-]
I've been working on a side project in OCaml and I can totally relate. I'm an OCaml old-timer and I find the amount of development that has happened recently amazing. There's a lot of ongoing development in the libs and the surrounding tools (more so than in the language). I've spent a lot of time just setting up my environment, and I had to pin several packages to their development version to make things work (jbuilder, merlin, ppx...). Moreover, a lot of these tools lack proper documentation, and it's difficult to get answers to your questions since it's a very small community.
reply
djs55 4 days ago [-]
I'm also an OCaml old timer and I think I can relate too. I believe the recent tooling changes are going in the right direction and will eventually fix several of these problems, for example:
There's a push to remove "optional dependencies" which are the reason why opam dependencies rebuild again and again: http://rgrinberg.com/posts/optional-dependencies-considered-... For example in the Mirage project we've been working on this https://discuss.ocaml.org/t/ann-major-releases-of-cohttp-con... but it has caused some breakage here and there.
jbuilder (from Jane Street) is excellent: expressive, easy to understand, builds packages extremely quickly, is actively developed, has minimal dependencies and a lovely manual http://jbuilder.readthedocs.io/en/latest/ It takes care of generating boilerplate for other tools like merlin (which to be honest I never got around to manually configuring). There's also work to integrate it with utop https://github.com/janestreet/jbuilder/issues/114
jbuilder also supports building multiple libraries in one big source tree, so we could switch to a package lockfile model: the author uses opam to create a solution to the package constraints and checks in the specific versions known to work, the build clones the dependency sources and jbuilder builds it all simultaneously. I'm keen to try this on one of my larger projects so that "git clone; make" just works, irrespective of where the host OCaml comes from.
PPX syntax extensions depend on specific compiler versions, so when (for example) homebrew updates to OCaml 4.05 you might find that extensions you need have not been ported yet. ocaml-migrate-parsetree aims to fix this problem http://ocamllabs.io/projects/2017/02/15/ocaml-migrate-parset...
There's obviously still plenty of work to do, but I think things are improving!
reply
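For flavor, a jbuilder build description is a short declarative s-expression rather than a script; a sketch in the later dune spelling, with hypothetical names:

```
(executable
 (name main)
 (libraries lwt cohttp-lwt-unix))
```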
---
"
platz 1 day ago [-]
The only build systems that I'm aware of that are monadic are redo, SCons and Shake-inspired build systems (including Shake itself, Jenga in OCaml, and several Haskell alternatives).
One realistic example (from the original Shake paper), is building a .tar file from the list of files contained in a file. Using Shake we can write the Action:
    contents <- readFileLines "list.txt"
    need contents
    cmd "tar -cf" [out] contents
There are at least two aspects I'm aware of that increase the power of Make:
It seems every "applicative" build system contains some mechanism for extending its power. I believe some are strictly less powerful than monadic systems, while others may turn out to be an encoding of monadic rules. However, I think that an explicitly monadic definition provides a clearer foundation.
http://neilmitchell.blogspot.com/2014/07/applicative-vs-mona...
reply "
---
" We don't want to remember and execute the build commands by hand (at least I don't). That's why we have build tools:
Make, Meson, Autotools, Bazel, CMake, Visual Studio, bash scripts, etc.
A build tool usually:
- has a list of source files,
- knows how to build each source file,
- keeps a dependency graph to rebuild only files that change,
- keeps a list of directories containing header files,
- keeps a list of external libraries to link to (static/dynamic),
- manages compiler flags (optimization level, warning level),
- knows which files to link into executables and libraries.
Some build tools offer additional features:
- program installation,
- cross-platform support,
- cross-compilation,
- dependency installation." [59]
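Most of the responsibilities in that list are visible even in a minimal Makefile (a sketch; the file names are hypothetical):

```make
CC ?= cc
CFLAGS ?= -O2 -Wall           # compiler flags kept in one place

# which objects link into the executable
app: main.o util.o
	$(CC) -o $@ main.o util.o

# dependency graph: rebuild objects when the shared header changes
main.o util.o: util.h

# how to build each source file
%.o: %.c
	$(CC) $(CFLAGS) -c $<
```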
---
implementing code formatters/prettyprinters
http://journal.stuffwithstuff.com/2015/09/08/the-hardest-program-ive-ever-written/ https://news.ycombinator.com/item?id=22706242
---
https://maxmcd.com/posts/bramble/ discussion: https://lobste.rs/s/g1tqfe/bramble_purely_functional_build_system
---
some types of attack vectors against packaging systems:
https://kerkour.com/rust-crate-backdoor/ discussion: https://news.ycombinator.com/item?id=29265765
---
reproducible builds
https://fossa.com/blog/three-pillars-reproducible-builds/
---
"I’m finding that the most complex part of learning yet another language isn’t the language, it’s the tooling and ecosystem." -- snej
---
blog post with some trends and opinions: https://earthly.dev/blog/programming-language-improvements/ discussion: https://lobste.rs/s/qlb7iy/slow_march_progress_programming
---
" ... I tried rewriting it in xmake. This was a bit better. We could pass thread descriptions through as Lua objects with a decent set of properties and add rules for building compartments and libraries. There were a few annoyances though:
- 90% of what I wanted to do involved undocumented interfaces (the docs are really lacking for xmake).
- xmake really likes building in the source directory, which is unacceptable in a secure build environment (source will be mounted read-only; the only writeable FS is a separate build FS), and you have to fight it every step if you don't want to do this.
- I hit a lot of bugs. A clean rebuild always tries to link the firmware image before it has linked some of the compartments that it depends on, and I have no idea why. Specifying the build directory usually works, but then xmake sometimes forgets and starts scattering things in the source tree again.
Overall, it feels like a 0.2 release and I'm not sure I'd want to handle problems users will have with it.
That sounds really negative, but I liked a lot about xmake and I’d probably be very happy with the project in a couple of years, it just isn’t there yet. For example, the build process in xmake is a map and fold sequence for every target (apply some transform to every input independently, then apply a transform to all of the sources). There is no doc with high level concepts explaining this, you need to figure it out. " -- [60]
thread about cmake that also mentions some problems with xmake: https://lobste.rs/s/fjmwaz/things_surprised_me_about_cmake
---
https://determinate.systems/posts/introducing-riff
---
https://nex3.medium.com/pubgrub-2fb6470504f
---
https://rust-analyzer.github.io/blog/2020/07/20/three-architectures-for-responsive-ide.html
---
https://www.tweag.io/blog/2023-03-09-announcing-topiary/
---
" eitland 12 hours ago
root | parent | next [–] |
Then again, Gradle has come to show why that is a terrible idea.
I think at some point 24 out of 28 Gradle projects I had access to at a certain customer had variations in either Kotlin/Groovy style, or the way they did or didn't use variables, or how they did or didn't do loops or maps, and what not.
With Maven you (or someone who knows Maven) can immediately look at a rather small, very standardized file and start making educated guesses, and so can an IDE.
With Gradle you sometimes have to run it to actually know what it will do.
reply
aftoprokrustes 10 hours ago
root | parent | next [–] |
I had the same experience with Maven vs SBT (the Scala build tool, whose config is Scala). At first it is really cool to have access to a full programming language (in particular when it is the same as the one the project is in, which means that you do not need to "switch brains" when working on the config), but quickly people start trying to be smart or cute, and it becomes a big mess. In particular in Scala, where people _love_ defining new DSLs and favor cuteness over readability. After two years working with SBT I still do not really understand some of the DSLish constructs used in there (and I tried to read the docs).
On the other side I fell in the trap of trying to overcome the limitations of purely declarative config formats by using jinja templates, which also ended up being a very bad idea and a maintenance nightmare.
For most projects, my approach is now to try to be as standard as possible compared to the particular community in the tech at hand, and resist the urge to be smart or cute (hard!). Configuration always sucks, and I now prefer to just suck it up and get done with the config part, rather than losing time reinventing the wheel, ending with a config that still sucks _and_ no one understands.
reply
eitland 17 minutes ago
root | parent | next [–] |
The good thing about Maven is it is XML so everyone wants to keep it as short as possible ;-)
(More seriously: with Maven, shorter and more boring is a sign that everything is correctly configured. Maven works by the convention-over-configuration principle, so if you don't configure something it means it follows the standard. Which again means that if you see someone has configured, for example, a folder or something that usually isn't configured, it means they have put something in a non-standard location.) " -- https://news.ycombinator.com/item?id=37594735
---
random forum discussion with some opinions on build systems:
https://lobste.rs/s/qnb7xt/ninja_is_enough_build_system
---