So makefiles. They suck. The original idea behind make was to do dependency calculation to speed up builds during development and avoid rebuilding the entire project when you've only touched one file.

This was long before open source and the widespread distribution of source code. People who download source code do a "make all" once and then delete the source directory, so for 99% of the code's actual users the dependency calculation is a relic.

This was also long before dual-processor 64-bit systems running at 2GHz. A modern laptop is orders of magnitude faster than the original PDP-11 that C was invented on, yet compiles take longer, because compilers like gcc have followed Gates' Law and slowed themselves down to absorb the extra cycles. (They claim it's because they're doing better optimizations, but disabling the optimizer doesn't speed them back up much.) Luckily, there are compilers written for speed (like tinycc or pcc) that can theoretically build an entire Linux kernel in under 10 seconds on modern hardware.

These days, if you're building a C++ project you wind up recompiling everything whenever you touch anything, because headers include headers and templates live in headers, so one change dirties most of the tree. And if you're using C, compilers like tinycc can theoretically rebuild the whole Linux kernel in seconds anyway. Plus you wind up doing "make clean; make all" on a lot of projects because the dependency calculations are subtly broken, and actually trying to use them causes weird bugs from stale dependencies. You wind up debugging the makefile.
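Here's a minimal sketch of how that happens, with hypothetical files main.c and util.h: the header isn't listed as a prerequisite, so editing it leaves a stale object file behind, and only a full rebuild reliably clears it.

    prog: main.o
    	$(CC) main.o -o prog

    # util.h is missing from the prerequisite list, so after editing it
    # "make" still reports main.o as up to date -- the kind of stale
    # dependency that only "make clean; make all" reliably clears.
    main.o: main.c
    	$(CC) -c main.c -o main.o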

Makefiles themselves mix declarative and imperative code, which is never a good idea. The imperative code needs to know and control what order things happen in, and the only way to do that is to work out how the declarative code is actually implemented under the covers. Sometimes you actually _do_ have circular dependencies, which make can't handle. Sometimes you want to change a make variable from within a target. The widespread use of recursive make attempts to mitigate some of this, at the expense of causing the "make" invocation to sometimes take longer than the _compile_.
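A small sketch of the collision, in GNU make syntax with a hypothetical version.txt and targets: variable assignments happen at parse time, before any rule runs, and each recipe runs in its own shell, so there's no sane way to update a make variable from inside a target.

    # Parse time: this is evaluated before any target is considered.
    VERSION := $(shell cat version.txt)

    release: build
    	tar czf release-$(VERSION).tar.gz build/

    build:
    	mkdir -p build
    	# This assignment lives and dies inside one child shell; make's
    	# own VERSION variable never sees it.
    	VERSION=$$(date +%Y%m%d); echo "building $$VERSION"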

Debugging makefiles is a pain. If it doesn't do what you expect, it's hard to get a makefile to show you what it actually _did_. What dependencies were evaluated? What implicit $(shell blah) functions were run while setting variables behind the scenes?
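For instance (a hypothetical fragment), the commands below run as a side effect of merely reading the makefile, before any target is considered, and even a dry run with "make -n" fires them; nothing in make's normal output tells you they happened.

    # Both $(shell ...) calls execute while the makefile is being parsed.
    GITREV := $(shell git rev-parse --short HEAD)
    CFLAGS := $(shell ./guess-flags.sh)   # hypothetical helper script

    all:
    	@echo "built with $(CFLAGS) at revision $(GITREV)"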

Makefiles don't scale. As a project gets larger, you either wind up reimplementing your own build system on top of make (a la the Linux kernel's kbuild) or you replace it entirely (imake, cmake). But small projects are the ones that least benefit from the dependency calculation, because "make all" on a 100k-line project takes five seconds on modern hardware, even using slow compilers.

Makefiles also deprive modern compilers of their full optimization capabilities, i.e. "build at once" mode. With gigabytes of RAM, a compiler can parse all of a project's C files at once to create a giant parse tree containing the entire program, and then run the optimizer against the entire program as a single unit to find optimization opportunities (such as common code) otherwise hidden by the boundaries between files. The -fwhole-program mode of gcc is one example. This means that compiling each .c file independently reduces the effectiveness of modern optimizers, producing a larger and slower program, so it's not the way you want to create the final binaries to ship and/or install. Once again, make hurts the "make all" case.
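A sketch of the difference, with a hypothetical file list: the first rule is the traditional make-friendly build that optimizes each file in isolation, while the second hands every .c file to gcc in a single invocation so the optimizer sees the whole program (exact flags vary by compiler and version).

    SRCS := main.c parse.c output.c    # hypothetical file list
    OBJS := $(SRCS:.c=.o)

    # Incremental, make-friendly build: each file is optimized alone.
    incremental: $(OBJS)
    	$(CC) $(OBJS) -o prog

    # "Build at once": gcc parses and optimizes the whole program in one go.
    shipping:
    	$(CC) -O2 -fwhole-program $(SRCS) -o prog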

Make wasn't designed to handle configuration, either. Users have to implement configuration handling themselves, adding makefile logic by hand to indicate which code is included in the final result and which isn't.
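In practice that means hand-rolled conditionals like the following (a hypothetical CONFIG_NET switch, GNU make syntax), which every project reinvents slightly differently:

    # Hypothetical configuration knob, set on the command line:
    #   make CONFIG_NET=y
    SRCS := main.c core.c
    ifeq ($(CONFIG_NET),y)
    SRCS += net.c
    CFLAGS += -DCONFIG_NET
    endif

    prog: $(SRCS:.c=.o)
    	$(CC) $(CFLAGS) $^ -o $@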