But it's kind of cheating, because the Nix daemon actually handles per-machine scheduling and cross-machine orchestration for you.
Just set up some self-hosted runners with Nix installed and an appropriately configured set of remote builders to get started.
If you really want to, you can graduate after that to a Kubernetes cluster where Nix is available on the nodes. Pass the Nix daemon socket through to your rootless containers, and you'll get caching in the Nix store for free even with your ephemeral containers. But you probably don't need all that anyway. Just buy or rent a big build server. Nix will use as many cores as you have by default. It will be a long time before you can't easily buy or rent a build server big enough.
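For anyone who hasn't set this up before, a sketch of what a remote builders configuration can look like (the hostnames, user, and key path below are made up for illustration). Nix reads builders from `/etc/nix/machines`, one per line:

```
# /etc/nix/machines — fields are:
# URI  platforms  ssh-key  max-jobs  speed-factor  supported-features  mandatory-features
ssh://builder@build1.example.com  x86_64-linux   /root/.ssh/id_builder  8   2  kvm,big-parallel  -
ssh://builder@build2.example.com  aarch64-linux  /root/.ssh/id_builder  16  1  big-parallel      -
```

With that in place (and `builders = @/etc/nix/machines` in `nix.conf`), the daemon farms derivations out across machines on its own, which is exactly the cross-machine scheduling mentioned above.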
In my experience, when it comes time to actually build the thing, a one-liner (with args if you need them) is the best approach. If you really REALLY need to, you can have more than one script, depending on which path down the pipeline you take. Maybe it's
1) ./build.sh -config Release
2) ./deploy.sh -docker -registry=<$REGISTRY> --kick
Just try not to go too crazy. The larger the org, the larger this wrangling task can be. Look at Google and gclient/gn. Not saying it's bad, just saying it's complicated for a reason. You don't need that (you'll know if you do). The point I'm making is that I hate seeing 42 lines of shell in a build workflow YAML, not syntax highlighted because it's been `|`'d in there. The YAMLs of your pipelines should be configuration for the pipeline; the actual execution should be outsourced to a script you provide.
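To make that concrete, here's a minimal sketch of what such a `build.sh` might look like. The `-config` flag matches the example above; the commented-out cmake line is a placeholder assumption, since the real build command is project-specific:

```shell
#!/usr/bin/env sh
# Minimal build.sh sketch: all build logic lives here, so the CI YAML
# shrinks to a single step like `run: ./build.sh -config Release`.
set -eu

CONFIG=Debug  # sensible default for local builds
while [ $# -gt 0 ]; do
  case "$1" in
    -config) CONFIG="$2"; shift 2 ;;
    *) echo "unknown argument: $1" >&2; exit 1 ;;
  esac
done

echo "building with configuration: $CONFIG"
# The actual build invocation is project-specific, e.g.:
# cmake -S . -B build && cmake --build build --config "$CONFIG"
```

The payoff is that CI, your laptop, and a colleague's machine all run the exact same entry point, and the workflow YAML stays a thin trigger-and-environment layer.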
Over multiple machines? I'm not sure a sh script can do that with GitHub.