Maybe the make authors could compile a list of options somewhere and ship it with their program, so users could read them? Something like a text file or using some typesetting language. This would make that knowledge much more accessible.
(`make --help` will only print the most common options)
`man make` will give you the command line options. And GNU make has decent documentation online for everything else:
https://www.gnu.org/software/make/manual/html_node/index.htm...
However, if `make -j` saturates a machine and this is unintentional, I'd generally assume PEBKAC, or "holding it wrong".
I get that the OS could mitigate this, but that's often not an option in professional settings. The reality is that most of the time users expect `make -j` to behave like `make -j $(N_PROC)`, get bit in the ass, and then the GNU maintainers say PEBKAC, wasting hundreds of hours of junior dev time.
I would put that in the “using it improperly” category. I never use⁰ --jobs without specifying a limit.
Perhaps there should have been a much more cautious default instead of ∞: something like four¹, or even just two. If people wanted infinite parallelism they could specify a number big enough to cover every task that could possibly run in the current build. Or perhaps --load-average should have defaulted to something like min(2, CPUs×2) whenever --jobs was in effect⁴.
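For reference, specifying a limit is just a number after the flag (four here is illustrative, not a recommendation):

    make -j4        # at most four jobs in flight at once
    make -j1000     # effectively "infinite" for most builds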
The biggest bottleneck when using --jobs back then wasn't RAM or CPU, though; it was random IO on traditional high-latency drives. A couple of parallel jobs could make much better use of even a single single-core CPU, because the CPU crunching of one or two CPU-bound tasks would overlap with the IO of the others. But too many concurrent tasks produced a flood of random IO that could practically stall the affected drives for a time, putting the CPU back into a state of waiting ages for IO (probably longer than it would wait with no parallel jobs at all). This would throttle a machine² before it ran out of RAM, even with how little RAM we had back then compared to today. With modern IO and core counts, I can imagine RAM being the bigger issue now.
--------
[0] Well, used; I've not touched make for quite some time.
[1] Back when I last used make much at all, small USB sticks and SD cards were not uncommon, but SSDs big, quick, and hardy enough for system or work drives were an expensive dream. With frisbee-based drives I found a four-job limit was often a good compromise: approaching, but not hitting, significantly diminishing returns if you had sufficient otherwise-unused RAM, while keeping a near-zero chance of effectively stalling the machine completely with a flood of random IO.
[2] Or every machine… I remember some fool³ bogging down the file server shared by most of the department with a vast parallel job, ignoring the standing request to run large jobs on local filesystems where possible anyway.
[3] Not me, I learned the lesson by DoSing my home PC!
[4] Though in the case of causing an IO storm on a remote filesystem, a load-average limit might be much less effective.
Personally, I don’t think these footguns need to exist.
They are junior because they are inexperienced, but being junior is the best time to make mistakes and learn good habits.
If somebody asks what is the most important thing I have learnt over the years, I’d say “read the manual and the logs”.
Make does not provide a sane way to run in parallel. You shouldn’t have to compose a command that parses /proc/cpuinfo to get the desired behavior of “fully utilize my system please”. This is not a detail that is particularly relevant to conditional compilation/dependency trees.
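For the record, these are the incantations people actually end up writing for "use all my cores" (each line assumes the tool in its comment is available):

    make -j"$(nproc)"                             # GNU coreutils
    make -j"$(getconf _NPROCESSORS_ONLN)"         # works on most Unixes
    make -j"$(grep -c ^processor /proc/cpuinfo)"  # the Linux-only /proc parse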
This feels like it’s straight out of the Unix Haters Handbook.
[0]: https://web.mit.edu/~simsong/www/ugh.pdf see p186
But not portable. Please don't use them outside of your own non-distributable toy projects.
EDIT: There's one exception, and that would be using Guile as an extension language, since it is often not available. However, thanks to conditionals (also not in POSIX, of course), it can be used optionally. I once sped up a Windows build by an order of magnitude by implementing certain things in Guile instead of calling the shell (which is notoriously slow on Windows).
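A minimal sketch of that optional pattern: GNU Make advertises Guile support in $(.FEATURES), so you can guard the $(guile ...) function behind a conditional (the `upcase` helper is just an invented example):

    ifneq ($(filter guile,$(.FEATURES)),)
    # Built with Guile: evaluate Scheme in-process, no shell fork
    upcase = $(guile (string-upcase "$(1)"))
    else
    # Fall back to the shell (notoriously slow on Windows)
    upcase = $(shell echo '$(1)' | tr a-z A-Z)
    endif

    all:
    	@echo $(call upcase,hello)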
GNU Make is feature rich and is itself portable. It's also free software, as in freedom. Just use it.
Sounds like it's not overrated, then. You just prefer that other people write portable C and package GNU Make for all systems instead of you writing POSIX Make.
Just like optimization, it has its place and time.
Some GNU Make constructs, like pattern rules, are indispensable in all but the simplest projects, but can also be overused.
For some reason there's a strong urge to programmatically generate build rules. But as with SQL queries, going beyond the parameterization already built into the language can be counterproductive. A good Makefile, like a good SQL query, should be easy to read on its face. Yes, that often means greater verbosity and even repetition, but that can be a benefit to be embraced (at least more than is instinctively common).
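As an illustration of the distinction (file and module names made up): the built-in parameterization is a plain pattern rule, while the "programmatic" style reaches for $(foreach)/$(eval):

    # Built-in parameterization: one pattern rule, readable at a glance
    %.o: %.c
    	$(CC) $(CFLAGS) -c -o $@ $<

    # Programmatic generation: powerful, but you can no longer see
    # the actual rules without expanding them in your head
    MODULES := foo bar baz
    define module_rule
    $(1).o: $(1).c $(1).h
    	$$(CC) $$(CFLAGS) -c -o $$@ $(1).c
    endef
    $(foreach m,$(MODULES),$(eval $(call module_rule,$(m))))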
EDIT: Computed variable references are well-defined as of POSIX-2024, including (AFAICT) on the left-hand side of a definition. In the discussion it was shown the semantics were already supported by all extant implementations.
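For anyone unfamiliar with the term, a computed variable reference is a reference whose name is itself the result of an expansion (a toy example):

    TARGET_linux := app-linux
    TARGET_darwin := app-mac
    OS := linux
    NAME := $(TARGET_$(OS))   # computed reference: expands to app-linux

    # And on the left-hand side of a definition:
    PREFIX := FOO
    $(PREFIX)_FLAGS := -O2    # this line defines FOO_FLAGS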
Otherwise you face an ocean of choices that can be overwhelming, especially if you're not very experienced in the problem space. It's like the common refrain with C++: most developers settle on a subset of C++ to minimize code complexity; but which subset? (They can vary widely across projects and time.) In the case of Make, you can just pick the POSIX and/or de facto portable subset as your target, avoiding a lot of choice paralysis/anxiety (though you still face it when deciding when to break out of that box to leverage GNU extensions).
But there's another huge category: people who are automating something that's not open-source. Maybe it stays within the walls of their company, where it's totally fine to say "build machines will always be Ubuntu" or whatever other environment their company prefers.
GNU Make has a ton of powerful features, and it makes sense to take advantage of them if you know that GNU Make will always be the one you use.
Output synchronization, which makes `make` print a target's stdout/stderr only once that target finishes. Otherwise parallel output is typically interleaved and hard to follow:
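That's the --output-sync family of flags (GNU Make 4.0 and later); for example:

    make -j"$(nproc)" --output-sync=target   # buffer output per target
    make -j"$(nproc)" -Oline                 # buffer per line instead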
On busy / multi-user systems, a fixed `-j` jobs count may not be the best fit; you can instead (or additionally) limit parallelism based on load average (first example below).

Another one: randomizing the order in which targets are scheduled. This is useful in CI to harden your Makefiles and see if you're missing dependencies between targets (second example below).
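Load-based limiting is the -l / --load-average flag: make won't start new jobs while the load average exceeds the limit (numbers illustrative):

    make -j"$(nproc)" -l8        # cap jobs, and back off while loadavg > 8
    make -j --load-average=4     # unlimited jobs, but none started above load 4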
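Scheduling order is the --shuffle option (GNU Make 4.4 and later):

    make --shuffle            # random target order each run
    make --shuffle=reverse    # deterministic: reverse order
    make --shuffle=1664       # fixed seed, for reproducible failures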