I've had good success with machines that have NVMe storage (especially on cloud providers), but you're still paying the cost of fsync there, even if it's a lot faster.
edit: Or, even easier, just use the pre-built fail_function infrastructure (with retval = 0 instead of an error): https://docs.kernel.org/fault-injection/fault-injection.html
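For the curious, driving fail_function is just a few debugfs writes. A minimal sketch, assuming CONFIG_FUNCTION_ERROR_INJECTION=y, debugfs mounted, root, and that the function you target (vfs_fsync here, purely as an example) is on your kernel's error-injection allowlist, which I haven't checked:

    # Sketch: tell the kernel's fail_function machinery to make
    # vfs_fsync return 0 (success) without doing the actual sync.
    from pathlib import Path

    ff = Path("/sys/kernel/debug/fail_function")

    ff.joinpath("inject").write_text("vfs_fsync\n")    # register the target function
    ff.joinpath("vfs_fsync/retval").write_text("0\n")  # inject success, not an errno
    ff.joinpath("probability").write_text("100\n")     # fire on every call
    ff.joinpath("interval").write_text("1\n")
    ff.joinpath("times").write_text("-1\n")            # no limit on injections

    # To undo: ff.joinpath("inject").write_text("!vfs_fsync\n")

Whether a given function accepts 0 as a retval depends on how it's annotated for error injection, so treat this as a starting point rather than something guaranteed to work on your kernel.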
Actually, in my experience pulling very large images to run with Docker, it turns out Docker doesn't really do any fsyncing itself. The sync happens when it creates an overlayfs mount while creating a container, because the overlayfs driver in the kernel does it.
A volatile flag was added to the kernel driver a while back, but I don't think Docker uses it yet: https://www.redhat.com/en/blog/container-volatile-overlay-mo...
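The mount option itself is easy to play with outside Docker. A rough sketch, assuming a 5.10+ kernel with overlayfs volatile support, root, and made-up paths:

    # Sketch: mount an overlayfs whose upper layer skips syncing
    # entirely ("volatile"). All paths here are hypothetical.
    import os
    import subprocess

    lower, upper, work, merged = "/tmp/lo", "/tmp/up", "/tmp/work", "/tmp/merged"
    for d in (lower, upper, work, merged):
        os.makedirs(d, exist_ok=True)

    opts = f"lowerdir={lower},upperdir={upper},workdir={work},volatile"
    subprocess.run(["mount", "-t", "overlay", "overlay", "-o", opts, merged], check=True)

The tradeoff is the obvious one: after a crash the upper layer is suspect and should be thrown away, which is usually fine for CI.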
Unpacking the Docker image tarballs can be a bit expensive, especially with things like Node.js where you have tons of tiny files.
Tearing down overlayfs is a huge issue, though.
If you corrupt a CI node, whatever. Just rerun the step.