My understanding was from Qubes choosing Xen and also AWS (they both deal with Xen advisories instead). The Qubes Architecture Specification goes into detail starting on page 11: https://www.qubes-os.org/attachment/wiki/QubesArchitecture/a...
KVM uses the open source qemu emulator for [I/O emulation]. [...] The I/O emulator is a complex piece of software, and thus it is reasonable to assume that it contains bugs and that it can be exploited by the attacker. In fact both Xen and KVM assume that the I/O emulator can be compromised and they both try to protect the rest of the system from a potentially compromised I/O emulator.
Also pointed out elsewhere on thread: Google skipping QEMU.
Edit: I am digging into it more, but I don't see KVM+QEMU on any top-tier provider (GCE, AWS, Azure [Hyper-V-ish])? My understanding was that QEMU was only required to emulate processor architectures, e.g. x86 on ARM or vice versa. QEMU is also used by some reverse-engineering/anti-malware emulation tools.
Xen also requires a hardware emulator to run HVM guests (including, but not limited to, Windows VMs). I don't know about now, but it definitely used to be QEMU for AWS.
QEMU can do full-CPU emulation, but with KVM the guest code runs at native speed on the hardware and only traps out to QEMU when it has to interact with the emulated hardware.
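To make that concrete, the same QEMU binary runs a guest either way; the only difference is which accelerator is selected (a sketch, the disk image path is a placeholder):

```shell
# Pure emulation: every guest instruction goes through QEMU's TCG translator.
qemu-system-x86_64 -machine accel=tcg -m 2048 -drive file=guest.img,format=raw

# KVM: guest code runs natively via /dev/kvm; QEMU is only entered on
# I/O exits, i.e. when the guest touches an emulated device.
qemu-system-x86_64 -machine accel=kvm -m 2048 -drive file=guest.img,format=raw
```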
The OpenStack aspect is true. Xen lacks support there.
A new Xen guest mode called PVH will remove QEMU when running Linux -- it is basically HVM without QEMU. Windows still requires QEMU.
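For reference, a PVH guest is configured by booting the kernel directly, so no device-model process is launched for it at all. A hypothetical xl guest config (names and paths are placeholders, and the exact syntax depends on the Xen version):

```
# PVH mode: kernel is booted directly, no QEMU device model for this guest.
name    = "linux-pvh"
type    = "pvh"
kernel  = "/boot/vmlinuz"
ramdisk = "/boot/initrd.img"
memory  = 2048
disk    = [ "format=raw, vdev=xvda, target=/srv/guest.img" ]
```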
I didn't dig too far into the AWS vulnerability list to try to find QEMU; Xen shows up right away! Ok: QEMU is last mentioned July 2015, and in none of those mentions is AWS vulnerable.
https://www.google.com/?q=site:https://aws.amazon.com/securi...
Yep, that's because most bugs are found in legacy devices that never appear in production. The big exception was a buffer overflow in the floppy device emulation (the "VENOM" vulnerability).
A lot of AWS security bulletins say "AWS customers' data and instances are not affected by these issues". I read it as "we knew about it a couple weeks in advance and have done a rolling upgrade". :)
Without a hardened kernel, LSM can be trivially bypassed, and seccomp seems to whitelist everything under the sun. This leaves only QEMU code quality to rely on. Since grsecurity is no longer available, this becomes even more urgent.
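For what it's worth, QEMU does ship a seccomp mode that can be tightened beyond the default; whether the default whitelist is too broad is a fair criticism, but the knobs exist. A sketch (the sub-options depend on the QEMU version, and the image path is a placeholder):

```shell
qemu-system-x86_64 \
  -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
  -machine accel=kvm -m 2048 -drive file=guest.img,format=raw
```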
Xen relies on stub domains to isolate QEMU from its TCB, which leaves bugs in the hypervisor itself as the only avenue of attack. Xen by itself has had far fewer bugs than Linux. Please correct me if I'm wrong.
@bonzini I use virtio-9p for shared folders all the time; why did you dismiss that as a non-issue? https://www.hackerneue.com/item?id=13755021
If you are a KVM dev, please look seriously into using an advanced, intelligent fuzzer like Shellphish's Mechanical Phish, a DARPA Cyber Grand Challenge finalist. It can find security bugs and propose patches for them:
https://github.com/shellphish http://angr.io/
Security aside, I find libvirt more wanting in UX. The single biggest roadblock is the lack of a virtual-appliance implementation that newcomers can simply point Virt-Manager at and import. I hope this gets resolved down the line.
It would clear things up if you had a table on your site showing which QEMU vulnerabilities affect the default libvirt configuration of an out-of-the-box RHEL/Debian guest. See this for example: https://www.qubes-os.org/security/xsa/
What I want to see:
* Adoption of QEMU-lite as the default mode for Linux guests. There's no point in running Linux on almost any of the emulated hardware.
* A built-in monitoring solution, like Google has, that detects excessive DRAM bitflips [1] and cache misses [2] and terminates the offending guests to foil Rowhammer and covert-channel attacks.
* A redesign of KSM that's not prone to Rowhammer abuse [3]
[1] https://cloudplatform.googleblog.com/2017/01/7-ways-we-harde...
[2] https://www.usenix.org/system/files/conference/usenixsecurit...
* Rowhammer detection is interesting, but not really related to virtualization. Thanks to KVM's design, any such monitoring solution would apply equally to Linux containers. This is not the case for Xen, for example.
* Besides Rowhammer, memory dedup is highly subject to side channel attacks. I think this is a much more important issue, and it already pretty much forces you to disable KSM in multi-tenant applications.
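Disabling KSM is a one-liner against the sysfs interface (must be run as root; the multi-tenant recommendation above is the commenter's reading, not official guidance):

```shell
# Stop KSM scanning and unmerge all currently shared pages.
echo 2 > /sys/kernel/mm/ksm/run

# Or merely stop scanning, leaving existing merged pages in place:
echo 0 > /sys/kernel/mm/ksm/run
```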
Most QEMU CVEs are related to devices that should never be used in cloud provider scenarios (you'll often find that they are disabled in RHEL for this exact reason). If anything, prompt handling of vulnerabilities in those devices is a sign of taking security seriously...
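One way to audit this on a given host is to ask the binary what it was compiled with; a distro build that compiles out legacy devices simply won't list them (a sketch, the grep pattern is just an example):

```shell
# List every device model this QEMU build can instantiate...
qemu-system-x86_64 -device help

# ...and check whether a notorious legacy device (e.g. the floppy
# controller behind VENOM) was compiled in at all:
qemu-system-x86_64 -device help | grep -i floppy
```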
No doubt in my mind that Google has top-tier people working on this, at least now that GCE is public-facing. I was impressed to read that they actively monitor and mitigate Rowhammer, something I've not seen mentioned anywhere else (though that could just be my ignorance).
Xen uses a stripped-down QEMU to boot unpatched guest OSes. However, even Xen doesn't test its qemu-xen components extensively. Writing a new purpose-built emulator (assuming you know what you're doing) is a better idea.
edit: Or use PV guests, and skip all potential QEMU flaws.