They work on a different model, where only packages that are deemed "worthy" are included, and there's a smallish set of maintainers who are authorized to make changes and/or accept change requests from the community. In contrast, programming-language package managers like cargo, pip, or npm let anybody upload new packages with little to no prior verification, and place the responsibility of maintaining them solely on their authors.
The distribution way of doing things is sometimes necessary, as different distributions have different policies on what they allow in their repositories, and might want to change compilation options or installation paths, backport bug and security fixes from newer upstream versions for compatibility, or even introduce small code changes to make the program work better (or work at all) on that system.
One example of such a repository, for the Alpine Linux distribution, is at https://github.com/alpinelinux/aports
The problem with it is that you need the full git URL in every file where you import the package, which is a pain if the repo changes location, or if you want to use a fork or a local version. Versioning is also tricky, to the point that Go recommends creating a separate branch for a major/breaking version, which requires updating every import statement.
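Concretely (the module path here is made up), under Go's semantic import versioning the major version is baked into the import path itself, so a breaking release touches every importing file:

```go
package main

// Hypothetical module path, purely for illustration.
// Before the breaking release, every importing file says:
//
//	import "github.com/example/widget"
//
// After v2, the major version becomes part of the import path,
// so each of those files has to be edited to:
import _ "github.com/example/widget/v2"
```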
I think a good middle ground would be a central repository and/or package configuration file that maps package names to git repos, and versions to commits (possibly via tags), and of course uses hashes to lock each version to specific contents.
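A minimal sketch of what that mapping could look like, written as Go data purely for illustration; the package name, URL, commit, and hash below are all made up:

```go
// Package registry is a hypothetical sketch, not a real tool: it shows
// the shape of a central file mapping names to repos and versions to commits.
package registry

// Pin ties a human-readable version to an exact commit plus a content
// hash, so a lockfile can verify what was actually fetched.
type Pin struct {
	Commit string // git commit (or tag) this version resolves to
	SHA256 string // hash of the source tree, checked at fetch time
}

// Repos maps short package names to their current git location, so
// source files never need to embed the full URL.
var Repos = map[string]string{
	"widget": "https://git.example.com/alice/widget", // made-up URL
}

// Versions pins each (package, version) pair to exact contents.
var Versions = map[string]map[string]Pin{
	"widget": {
		"1.2.0": {Commit: "a1b2c3d", SHA256: "0f3a..."}, // made-up values
	},
}
```

Imports would then refer to the short name only; moving the repo or switching to a fork means editing one entry in this file instead of every source file.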
Bazel kind of does this, but it doesn't have any built-in version resolution or transitive dependency resolution (although in some cases there are other tools that help). And it can add a lot of complexity that you may not need.
I haven't tried them, but on paper Bazel modules (bzlmod) look like a reasonable dependency-handling solution: each module can declare its own dependencies and Bazel will figure the graph out for you, like a package manager (see the sketch just below). The old WORKSPACE way of doing it was a nightmare: patterns emerged where repos would export a function to register their dependencies, but the first declaration of any name would win, so you weren't guaranteed to end up with a compatible set of workspaces.
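For reference, this is roughly what the module approach looks like in a MODULE.bazel file (the module() and bazel_dep() calls are real bzlmod syntax; the version numbers are just illustrative):

```starlark
# MODULE.bazel -- each module declares only its direct dependencies.
module(name = "myapp", version = "1.0.0")

# Bazel resolves the transitive graph itself and, unlike the old
# WORKSPACE pattern, deterministically picks one version per module
# (by default, the highest version anyone in the graph requires).
bazel_dep(name = "rules_go", version = "0.46.0")
bazel_dep(name = "gazelle", version = "0.35.0")
```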
Because what you want exists and has a thriving community, and a package set that outclasses, well, statistically every other package manager in existence.
I swear, it's a daily occurrence for me to see software engineering challenges posited here as damn near impossible that Nix has been solving for over a decade.
What if you could run a single command and get exact insight into the source you're using for every single package on your system, with the context of the dependency graph it exists in?
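That part is concrete today. For example, on a NixOS system the classic CLI can already print the full runtime closure as a tree (the exact store path varies by setup):

```sh
# Show every store path the current system depends on, as a dependency tree.
nix-store --query --tree /run/current-system
```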
I cannot wait for this wave to crash and for people to realize how much engineering effort is saved by using Nix, and that all of these things they've wanted for years already exist. But hey, the syntax takes time to get used to. And how do you compare that against the countless blog posts, hours, and institutional knowledge you need to actually use Docker properly? And then, later on, some Go-based SBOM tool made by a VC-backed startup that fundamentally still does an inferior job to Nix. Sigh.
Well, anyway, I guess Nix will keep being used by hedge funds, algorithmic traders, "advanced defensive capabilities" companies, literal (launched, in space) satellites, wallet manufacturers, etc., while everyone else listens to the syntax decriers.
But the source code that is sent to crates.io is not necessarily the same as what's in the public repo linked from the crate.
It definitely does contain generated files: at least one crate ships Rust code generated by a Python script, and that script is not in the crate itself, only in the upstream Git repository.
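From the author's side you can at least see exactly what will be uploaded before publishing; auditing an already-published crate means downloading the .crate archive and diffing it against the repo yourself:

```sh
# Run in the crate's source checkout: lists exactly which files
# Cargo will include in the published .crate archive.
cargo package --list
```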