devicenull
The v9fs code has been a major source of bugs. Hopefully no one's using that in production...

https://www.cvedetails.com/vulnerability-list.php?vendor_id=...

vetrom
IIRC it was implemented to drive fast vm bootup/filesystem passthrough on some distributed HPC environments at IBM: https://landley.net/kdocs/ols/2010/ols2010-pages-109-120.pdf

While of course not as fast as a block device mapping, the reduction of copies (virtfs provides zero-copy data passthrough) makes virtfs considerably faster than NFS or CIFS from the guest. This means that even if you're pointing the virtfs at a network mount, you'll still see a speed improvement from the reduction in copies needed to get data on/off the wire.
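For anyone who hasn't set it up, a minimal passthrough configuration looks roughly like the sketch below (the path, mount tag, and image name are placeholders):

    # Host: export /srv/share into the guest over virtio-9p
    qemu-system-x86_64 -enable-kvm -m 2048 \
      -drive file=guest.qcow2,format=qcow2 \
      -virtfs local,path=/srv/share,mount_tag=hostshare,security_model=mapped-xattr,id=share0

    # Guest: mount the exported tag as a 9p filesystem over virtio
    mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt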

Of course, the security model depends on getting the programming right, which is why the most common QEMU execution paths use SELinux/AppArmor to implement access control on top of the virtualization itself.

If you want a real fun mind bender, try making qemu-kvm work from inside Docker without running a fully privileged container. It's doable, but something of a challenge. FWIW, QEMU itself doesn't need root, only /dev/kvm access (which has also needed security attention in the past).
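A rough sketch of the Docker case, assuming an image with QEMU installed (the image and disk names are made up); the point is that only the /dev/kvm node crosses the container boundary, not full privileges:

    # Pass through only the KVM device node; no --privileged needed
    docker run --rm -it --device /dev/kvm my-qemu-image \
      qemu-system-x86_64 -enable-kvm -m 1024 \
      -drive file=/images/guest.qcow2,format=qcow2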

For speed-sensitive HPC workloads, though, I bet v9fs definitely is used in production. Hopefully those people are also careful enough to use mandatory access control to sandbox QEMU, since that's definitely not a default libvirt-style setup.

throwaway7767
Surprisingly, when I tried it, NFS outperformed virtio-9p handily across the board. I really was not expecting that result. Perhaps there is a way to tell the host to assume the VM is the only writer to the exposed path, so the client can do efficient caching?
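(For what it's worth, the Linux 9p client does have a knob roughly along those lines: the cache=loose mount option caches aggressively on the assumption that nothing else is modifying the files behind the guest's back, and a larger msize helps throughput. Something like the line below, with the mount tag a placeholder; whether it closes the gap with NFS is another question.)

    mount -t 9p -o trans=virtio,version=9p2000.L,cache=loose,msize=262144 hostshare /mnt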
problems
I think most people running in production wind up using network-based protocols instead (NFS, SMB, etc.), which are far more security- and time-tested.

I've never heard of anyone using this kind of thing outside of personal VMs or development.

koverstreet
v9fs has some major advantages - the protocol is much simpler and more lightweight, which makes accelerating it over virtio much more tractable and effective. And it's nicely integrated into qemu so you don't need to bother with setting up a server.

Really, it's not competing with NFS/SMB, it's just aimed at fast passthrough for local filesystems.

However, having read a decent amount of the code myself, I'd say it's pretty crappy/sketchy.

That said, it's not a lot of code, the basic idea is sound, and the protocol is simple and well designed; it really wouldn't take someone with the right skills long to rototill it and make some drastic improvements (there are performance gains to be had in there, too).

sitkack
Couldn't one start by linking a scrubbing bridge for the protocol in Rust and grow from there?
I'd like to be able to use Rust in QEMU one day, but just at the moment it doesn't support all the platforms QEMU does. The sheer range supported by a standard GCC C toolchain is very hard for a new language to catch up to.

(It mostly has support for the CPUs we support and for the OSes, but not always in all combinations, e.g. no ARM FreeBSD support. It also sometimes doesn't support the range of OS versions we do, e.g. no OS X 10.6 support, no support for Windows before Windows 7. And some things are only in Rust's tier 3, like OpenBSD support. We'd also need to dump some of the "probably nobody's using this really" older platform support code, like ia64 or Solaris.)

Oh, and Rust would need to be in the stable/LTS releases of distros before we could use it. That's going to take time to percolate through.

koverstreet
It's under 7k lines of code, and what it needs to be doing is pretty straightforward. I'm very much hoping to be able to use Rust in the kernel someday, but really it's overkill for this.
vbernat
In this case, the bug is a logic error, not a resource-related error. I have a hard time seeing how Rust would have helped.
pwdisswordfish
> However, having read a decent amount of the code myself, it is pretty crappy/sketchy.

Have a look at vvfat.

sofaofthedamned
We used it for security reasons, as we didn't want the guests to have a network link to the host. I know, I know.
sofaofthedamned
Virtfs is a cluster fuck. Using rsync over it creates hundreds of thousands of file handles that never close. I reported it to Ubuntu months ago; nothing was fixed. Red Hat had the better idea of going nowhere near it. Canonical produces so much shovelware that they don't support; I won't get bit by this again.

https://bugs.launchpad.net/qemu/+bug/1336794
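If you want to watch the leak happen, polling the QEMU process's fd table while rsync runs makes it obvious (assumes a single qemu-system process that you have permission to inspect):

    watch 'ls /proc/$(pgrep -f qemu-system | head -n1)/fd | wc -l'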

Yet another example where SELinux could have mitigated the impact.
ajross
Strictly, that's not true. The exploit here is access to the filesystem environment of the host process from the guest. That happens entirely within the QEMU process and is invisible to SELinux.

What you're saying, I think, is that SELinux could have been used to encapsulate the host process from the rest of the system. Which is true, but sort of specious. Traditional Unix security paradigms (e.g. running every guest instance under its own UID, whatever) would have worked fine to mitigate the damage too.

vbernat
I think you are mistaken. Anything syscall-related is seen by SELinux, which may accept or deny the request. Checking file access is one of the major uses of SELinux. libvirt comes with the appropriate integration with SELinux (and AppArmor) to ensure that a QEMU process won't be able to read or write a file it shouldn't (and to prevent it from spawning processes, transitioning state, or poking into the memory or disk of a neighboring QEMU).

SELinux/AppArmor defeat most of the recent vulnerabilities in QEMU.
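On a stock libvirt/SELinux host you can see this in action: each guest's QEMU process runs confined with a unique MCS category pair, and its image files are labeled to match, so one compromised QEMU cannot touch another guest's disk. Roughly:

    # QEMU processes run confined as svirt_t with per-guest categories
    ps -eZ | grep qemu

    # ...and each guest's disk image carries a matching svirt_image_t label
    ls -Z /var/lib/libvirt/images/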

ajross
You're talking about SELinux within the guest then? The exploit requires kernel execution, so it's not relevant. Anyone capable of running this would be capable of evading LSM hooks by definition.

Outside the guest, sure, you could use SELinux to firewall VMs from access to anything more than a minimal filesystem. But you could use any of a zillion other techniques too. It's a specious point not relevant to this exploit, which certainly cannot have been prevented by SELinux.

vbernat
I am talking about SELinux on the host, which would totally prevent this exploit, as SELinux would not allow you to browse the host. You seem to say that nobody does that; libvirt uses SELinux out of the box to do exactly that.
theamk
Maybe? I am not convinced. I have used qemu (with virtfs) in my interactive session to test possibly suspicious third-party apps, running as my regular interactive user.

With this vulnerability, the app might have escaped the sandbox and added evil stuff to my .bashrc. SELinux would be totally useless there.

In other circumstances, the damage would be much more localized: if you run qemu as a daemon, you are likely doing it as a non-privileged user anyway, so even regular Unix permissions would be effective.
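Even without SELinux, a throwaway account goes a long way for the interactive case too; a rough sketch, with the username and image as placeholders:

    # Run the suspect VM under a dedicated unprivileged user, so a
    # guest escape can't reach your own $HOME or .bashrc
    sudo useradd -m vmsandbox
    sudo usermod -aG kvm vmsandbox   # grant access to /dev/kvm on most distros
    sudo -u vmsandbox qemu-system-x86_64 -enable-kvm -m 1024 \
      -drive file=/tmp/suspect.qcow2,format=qcow2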

Anyone using qemu has had 9p disabled for a really long time.
bonzini
No one using qemu in production with untrusted VMs should be using 9pfs, but does anyone except RHEL/CentOS actually leave it out of the binary?
chrisper
How do I disable it or is it disabled by default?
bonzini
It is a device that you have to add explicitly to the virtual machine.
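If you want to check, a binary built with virtfs will list the device, and packagers can drop it at build time (the configure flag below is the standard one):

    # Does this binary know about virtio-9p at all?
    qemu-system-x86_64 -device help 2>&1 | grep -i 9p

    # Distro packagers can leave it out entirely when building
    ./configure --disable-virtfs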
gbrown_
Semi hopping on the QEMU-bashing train, but recall that Google ripped it out for GCE.

https://cloudplatform.googleblog.com/2017/01/7-ways-we-harde...

theamk
Well, Google rightfully said that GCE does not need most QEMU features, so they made a simplified version. It is sad they did not release it.

On the other hand, there are people who need QEMU features: we have a QEMU-based test script for boot images which actually uses a few more exotic features to simulate the target environment.

The Google version is supposedly coupled heavily to their infrastructure and therefore would give little benefit to open source without also open-sourcing a large portion of their infrastructure.
kuschku
> without also open sourcing a large portion of their infrastructure.

Well, that’d not necessarily be bad for the dev community.

bonzini
Red Hat also rips virtfs out of QEMU in RHEL, for what it's worth.
sofaofthedamned
Exactly. Ubuntu didn't, and it wasted weeks of development time for me.

https://bugs.launchpad.net/qemu/+bug/1336794

"Ripped it out" implies that at one point they used it.
stonogo
No, it means they run binaries compiled with source code from which this feature set has been removed.
brandon
That's not a correct interpretation either; it's noted in the linked article that GCE uses a proprietary VMM instead of QEMU, which itself was "ripped out" of the standard KVM+QEMU arrangement.

I probably would have chosen gentler phrasing.

codebeaker
From the title I understood this was a feature that had arrived in QEMU; seems like it could have its place on development machines etc., where the only reason you're using a VM is to get access to some alt. architecture.
Um, silly question. Does this affect people using QEMU to run ReactOS, FreeDOS, and their proprietary counterparts to play old games and use old programs that don't work as well in DOSBox?
chungy
If you run malicious binaries attacking this, sure.

If you're just playing old games, you are probably fine. (Likely wouldn't need to worry about other security exploits either for that case.)

bonzini
Neither ReactOS nor FreeDOS nor their proprietary counterparts have drivers for virtio-9p, so most likely you are not using the device and are not vulnerable.
9p as in Plan 9 from Bell Labs? I only use that from a LiveCD, and mostly as a goof and a basic test for a machine.
Well then, Plan 9 finally becomes useful for something!
