Date: Tue, 21 Apr 2009 13:41:20 +0200
From: Gerd Hoffmann <>
Subject: Re: Xenner design and kvm msr handling
On 04/21/09 12:14, Avi Kivity wrote:
> Gerd Hoffmann wrote:
>> xenner & pv-on-hvm
>> ==================
>>
>> Once we have all this in qemu it is just a small step to also support
>> xenish pv-on-hvm drivers in qemu using the xenner emulation bits.
>> Hypercalls are handled by a small PIC binary loaded into the hypercall
>> pages. Loading of the binary is triggered by the msr writes discussed.
>> The binary is only two pages in size: one for the hypercall entry
>> points, one for the code. The communication path is the very same
>> ioport interface also used by emu, i.e. it does *not* use vmcall and
>> thus no opcode changes are needed on migration.
>
> This gives a good case for exporting MSRs to userspace.
>
> Can you explain the protocol used by this MSR? How does the guest know
> how many pages to load? How does the kernel know which type of page to
> put where?
Sure.
 (1) cpuid 0x40000000, check vmm signature
 (2) cpuid 0x40000002 -> returns # of pages (eax) and msr (ebx)
 (3) allocate pages (normal ram)
 (4) foreach page: wrmsr "guest physical address | pageno"
Xen uses msr 0x40000000. Due to the msr being queried via cpuid it should be possible to use another one. Modulo guest bugs of course ...
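In guest code the whole dance looks roughly like this. A minimal sketch
only: the signature string and the cpuid/wrmsr/allocation helpers are
illustrative, not taken from the xenner source.

#include <stdint.h>
#include <string.h>

/* illustrative helpers, not a real kernel API */
static inline void cpuid(uint32_t leaf, uint32_t *a, uint32_t *b,
                         uint32_t *c, uint32_t *d)
{
    asm volatile("cpuid" : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                         : "a"(leaf));
}

static inline void wrmsr(uint32_t msr, uint64_t val)
{
    asm volatile("wrmsr" :: "c"(msr),
                 "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
}

/* hypothetical: returns the guest physical address of n page-aligned
 * pages of normal ram */
extern uint64_t alloc_pages_aligned(uint32_t n);

static int setup_hypercall_pages(void)
{
    uint32_t a, b, c, d, npages, msr, i;
    char sig[13];
    uint64_t gpa;

    /* (1) check the vmm signature returned in ebx/ecx/edx */
    cpuid(0x40000000, &a, &b, &c, &d);
    memcpy(sig + 0, &b, 4);
    memcpy(sig + 4, &c, 4);
    memcpy(sig + 8, &d, 4);
    sig[12] = 0;
    if (strcmp(sig, "XenVMMXenVMM") != 0)   /* assumed signature */
        return -1;

    /* (2) number of hypercall pages (eax) and the msr to use (ebx) */
    cpuid(0x40000002, &a, &b, &c, &d);
    npages = a;
    msr = b;

    /* (3) allocate the pages as normal ram */
    gpa = alloc_pages_aligned(npages);

    /* (4) one wrmsr per page: page address OR'd with the page number;
     * the vmm fills the page with the hypercall stubs */
    for (i = 0; i < npages; i++)
        wrmsr(msr, (gpa + (uint64_t)i * 4096) | i);

    return 0;
}

After step (4) the guest calls through the filled-in hypercall pages
exactly as it would on real Xen.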
> Note that it would still be interesting to have the guest call the
> kernel, so it can kick the host kernel Xen netback driver directly
> instead of going through qemu (and the userspace netback + tap).
With the current codebase netback isn't involved at all. The backend lives in qemu, like virtio-net. It is a very simple one (no GSO support, ...), so there are tons of opportunities for optimizations. I don't feel like tackling that right now though; my patch queue is deep enough as-is. Also, as time passes the qemu network layer tweaks for virtio-net should make that job easier ;)
cheers,
  Gerd