That is incorrect. novm just implements the virtio-9p device that QEMU has supported for years.
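For the curious, the guest side of a virtio-9p share is just a mount. A minimal sketch in C, assuming the host exposed a share under the mount tag "hostshare" (the tag name is made up for illustration) and run with CAP_SYS_ADMIN in the guest:

    /* Mount a virtio-9p share exported by the host. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void) {
        /* "hostshare" must match the mount_tag the host configured. */
        if (mount("hostshare", "/mnt", "9p", 0,
                  "trans=virtio,version=9p2000.L") != 0) {
            perror("mount 9p");
            return 1;
        }
        puts("host directory now visible under /mnt");
        return 0;
    }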
Clear Linux does add something new: a pmem (NVDIMM) device that bypasses the guest kernel's page cache. This involves host kernel, guest kernel, and kvmtool changes.
The advantage of pmem is that short-lived VMs can access data directly from the host instead of copying it in. But this feature needs to be added to QEMU/KVM anyway to support new persistent memory hardware (see http://pmem.io/), so it won't be unique for long.
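To make the "direct access" part concrete: with a DAX-capable filesystem, a process can map persistent memory straight into its address space with no page-cache copy in between. A rough sketch, assuming a DAX mount and a recent kernel/glibc; the path is hypothetical:

    /* Map a file on a DAX filesystem; stores reach the media
     * without going through the page cache. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/mnt/pmem/data", O_RDWR); /* hypothetical path */
        if (fd < 0) { perror("open"); return 1; }

        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Plain loads and stores, no read()/write() copies. */
        memcpy(p, "hello", 5);

        munmap(p, 4096);
        close(fd);
        return 0;
    }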
Is it just me or does this sound really, really exploitable from the VM-to-host direction? I'm hoping there's some way to safeguard such a process.
I've heard that from others, but I wonder why that is. I would expect a file-level abstraction to be faster thanks to fewer I/O round trips per transfer and the removal of the block abstraction. Is it just an inefficient protocol, or are there inherent bottlenecks in a file-level sharing protocol that I'm missing?
How slow is it really? Slower than NFS for example?
That makes sense, though it's surprising then that there's no way to explicitly enable caching when that's the bottleneck and a coherent view from the host side isn't a necessity for a given workload.
They started with HVM and PV, and have since evolved HVM toward PV by removing legacy support and software emulation. They have now settled (for now?) on doing everything the PV way, except where hardware virtualization assistance is faster on modern hardware. Some of this shifting has been due to changes in hardware capabilities, and some of it has been due to earlier efforts being developed from an incomplete understanding of which techniques are faster.
VMware is closed source. The real Xen alternative is KVM, which is better than Xen in pretty much every way: kernel integration, tooling, performance, and so on. There's a very big cost for big Xen shops to switch to KVM, but if you're not already tied to Xen I can't imagine why you'd choose it.
With that being said, there have been exploits in the Xen hypervisor. As more hardware integration gets added, dom0 starts to look a lot more like a traditional kernel.
Personally, I use KVM for all my virtual machines, since I don't want to run everything under dom0.
Did you mean Xen?
Except for every single "prepackaged developer's workstation" solution I've seen so far. Seriously, it works more or less the same on all systems, so I see it used all over the place.
Xen is meant for running a potentially large number of server VMs headless. VirtualBox is meant for running desktop VMs. You could make VirtualBox run headless (exposing a pseudo-screen over VRDP) to do what Xen does, but... eww.
Just look at the kinds of vulnerabilities regularly found in it. They're mostly run-of-the-mill buffer overflows or missing range checks in emulation. Simple stuff that should have been caught if they were serious about security.
Compare that to Xen or KVM, which have of course also had vulnerabilities, but you can see attackers usually have to get a lot more creative when going after those.
If you wouldn't run a program on your actual machine, you probably should not run it in a VirtualBox VM either.
Isn't it good that so many smart people are trying to solve these problems in so many different ways?
For example: containerization is a bit like taking a boat with a hole in its hull, and building a new boat to carry the old boat.
When instead, the real problem is that people's applications should be able to run in-place without having to take control of the entire operating system.
We have plenty of mechanisms, including cgroups, that allow you to achieve that.
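To illustrate how little machinery that takes: with cgroup v2 mounted at /sys/fs/cgroup, capping a process's memory is a couple of file writes. A minimal sketch (the "demo" group name and 256 MiB limit are arbitrary, and this needs sufficient privilege):

    /* Put ourselves in a new cgroup with a 256 MiB memory cap. */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int write_file(const char *path, const char *value) {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return -1; }
        fputs(value, f);
        return fclose(f);
    }

    int main(void) {
        char pid[32];

        mkdir("/sys/fs/cgroup/demo", 0755);          /* create group */
        write_file("/sys/fs/cgroup/demo/memory.max", /* 256 MiB cap  */
                   "268435456");

        /* Move the current process into the group. */
        snprintf(pid, sizeof pid, "%d", getpid());
        write_file("/sys/fs/cgroup/demo/cgroup.procs", pid);

        /* From here on, this process and its children are capped. */
        return 0;
    }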
What containerisation solutions actually provide is a convenient build and packaging workflow with a decent level of isolation, including keeping state from polluting the surrounding system.
The biggest problem is not lack of isolation mechanisms, but that most developers have no clue they even exist.
Try to get the average Linux developer to tell you what seccomp is, for example, and if they know what it is, try to get them to tell you how to use it [1]. There's plenty of room for innovation here, and plenty of room for more different solutions, but the biggest problem they will need to solve is how to make these mechanisms easy enough to use.
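The gap is real: the whole API fits in a handful of calls, yet almost nobody reaches for it. A quick taste using libseccomp (compile with -lseccomp); a real policy would need a much longer allowlist than this sketch:

    /* Kill the process on any syscall outside the allowlist. */
    #include <seccomp.h>
    #include <unistd.h>

    int main(void) {
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
        if (!ctx) return 1;

        /* Allow just enough to print and exit cleanly. */
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

        if (seccomp_load(ctx) != 0) { seccomp_release(ctx); return 1; }
        seccomp_release(ctx);

        write(STDOUT_FILENO, "sandboxed\n", 10);
        /* An open() or socket() attempted here would be fatal. */
        return 0;
    }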
[1] An example here: http://blog.viraptor.info/post/seccomp-sandboxes-and-memcach...
This doesn't seem right, honestly. Half of these projects have the same approach and goals (quick boot times and no legacy support), so why do they die and get reinvented every single time?
Something is wrong...