> it doesn't really support the light IO solutions (like novm's file, rather than block access)

That is incorrect. novm just implements the virtio-9p device that QEMU has supported for years.
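For reference, exercising QEMU's virtio-9p support looks roughly like this (the export path and mount tag are placeholder names):

```shell
# Host: export a directory to the guest over virtio-9p.
# /srv/guestfs and the tag "hostshare" are made up for illustration.
qemu-system-x86_64 \
    -enable-kvm -m 1024 \
    -virtfs local,path=/srv/guestfs,mount_tag=hostshare,security_model=mapped-xattr,id=fs0 \
    -drive file=guest.img,if=virtio

# Guest: mount the share via the 9p protocol over the virtio transport.
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt
```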

Clear Linux does add something new: a pmem (NVDIMM) device that bypasses the guest kernel's page cache. This involves host kernel, guest kernel, and kvmtool changes.

The advantage of pmem is that short-lived VMs can map data directly from the host instead of copying it in. But this feature needs to be added to QEMU/KVM anyway to support upcoming persistent memory hardware (see http://pmem.io/), so it won't be unique for long.
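For QEMU versions that have grown NVDIMM emulation, wiring a file-backed pmem device into a guest looks roughly like this (file paths and device IDs are illustrative):

```shell
# Host: back a guest NVDIMM with a plain file.
# /var/lib/vm/pmem.img is a placeholder path.
qemu-system-x86_64 \
    -enable-kvm -machine pc,nvdimm=on \
    -m 2G,slots=2,maxmem=4G \
    -object memory-backend-file,id=mem0,share=on,mem-path=/var/lib/vm/pmem.img,size=1G \
    -device nvdimm,memdev=mem0,id=nvdimm0 \
    -drive file=guest.img,if=virtio

# Guest: the device appears as /dev/pmem0; mounting with DAX
# bypasses the guest page cache, as described above.
mkfs.ext4 /dev/pmem0
mount -o dax /dev/pmem0 /mnt
```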


sporkenfang
> The advantage of pmem is that short-lived VMs can directly access data from the host instead of copying in.

Is it just me or does this sound really, really exploitable from the VM-to-host direction? I'm hoping there's some way to safeguard such a process.

antocv
Also virtio-9p is slow as hell.
throwaway7767
> Also virtio-9p is slow as hell.

I've heard that from others, but I wonder why that is. I would expect a file-level abstraction to be faster, due to fewer I/O round trips per transfer and the removal of the block abstraction. Is it just an inefficient protocol, or are there inherent bottlenecks in a file-level sharing protocol that I'm missing?

How slow is it really? Slower than NFS for example?
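If anyone wants to put rough numbers on it, something like this would do for a first pass (the 9p and NFS mount points are placeholders):

```shell
# Sequential throughput over each mount.
dd if=/dev/zero of=/mnt/9p/testfile bs=1M count=256 conv=fsync
dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=256 conv=fsync

# Small-file metadata workload, where per-operation
# round trips tend to hurt 9p the most.
time sh -c 'for i in $(seq 1 1000); do echo x > /mnt/9p/f$i; done'
time sh -c 'for i in $(seq 1 1000); do echo x > /mnt/nfs/f$i; done'
```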

dezgeg
If you want both the host and the VM to have a coherent view of the filesystem, the VM can't really do efficient caching.
throwaway7767
> If you want both the host and the VM to have a coherent view of the filesystem, the VM can't really do efficient caching.

That makes sense, though it's surprising then that there's no way to explicitly enable caching when coherence from the host side isn't a necessity for a given workload.
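Although, skimming the docs, the Linux 9p client does seem to expose a knob for this: the `cache=` mount option. Something like this might be the trade-off I was asking about (mount tag is a placeholder):

```shell
# cache=loose lets the guest serve reads/writes from its own
# page cache, trading coherence with host-side changes for speed.
# The default behaves (mostly) coherently but caches little.
mount -t 9p -o trans=virtio,version=9p2000.L,cache=loose hostshare /mnt
```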

antocv
It was slower than NFS for me.

I blame it on a poor implementation; there must be a bug somewhere, but I don't have the skills to find out.