
IIRC it was implemented to drive fast VM bootup/filesystem passthrough in some distributed HPC environments at IBM: https://landley.net/kdocs/ols/2010/ols2010-pages-109-120.pdf

While of course not as fast as a block device mapping, the reduction of copies (virtfs provides zero-copy data passthrough) makes virtfs considerably faster than NFS or CIFS from the guest. This means that even if you're pointing the virtfs at a network mount, you'll still see a speed improvement from the reduction in copies needed to get data on/off the wire.
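For anyone who hasn't tried it, wiring up a virtfs share looks roughly like this (the path, mount_tag, and memory size are placeholders, and the right security_model depends on your use case):

    qemu-system-x86_64 -enable-kvm -m 2048 \
      -fsdev local,id=fsdev0,path=/srv/share,security_model=mapped-xattr \
      -device virtio-9p-pci,fsdev=fsdev0,mount_tag=hostshare \
      ...

    # inside the guest
    mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/host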

Of course, the security model depends on getting the programming right, which is why the most common QEMU execution paths use SELinux/AppArmor to implement access control on top of the virtualization itself.
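If you're going through libvirt, that confinement is basically a config knob (assuming your distro ships the corresponding policy for libvirt-managed guests):

    # /etc/libvirt/qemu.conf
    security_driver = "apparmor"   # or "selinux", depending on the distro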

If you want a real fun mind bender, try making QEMU/KVM work from inside Docker without running a fully privileged container. It's doable, but something of a challenge. FWIW, QEMU itself doesn't need root, only /dev/kvm access (which has also needed security attention in the past).
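The core of it is just passing the KVM device node through rather than using --privileged; something like the following (the image name is hypothetical, and you'll likely still have to sort out the kvm group ID inside the container):

    docker run --rm -it --device /dev/kvm my-qemu-image \
      qemu-system-x86_64 -enable-kvm -m 1024 -nographic ...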

In terms of speed-sensitive HPC workloads though, I bet v9fs is used in production. Hopefully those people are also careful enough to use mandatory access control to sandbox QEMU, since that's definitely not a default libvirt-style setup.


throwaway7767
Surprisingly, when I tried it, NFS outperformed virtio-9p handily across the board. I really was not expecting that result. Perhaps there is a way to tell the host to assume the VM is the only writer to the exposed path, so the client can do efficient caching?
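Something along those lines might be the client-side cache mode; I haven't verified it closes the gap, but in principle cache=loose relaxes coherence on the assumption nobody else is modifying the files, and a larger msize cuts down on round trips:

    mount -t 9p -o trans=virtio,version=9p2000.L,cache=loose,msize=262144 \
        hostshare /mnt/host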
