Date: Mon, 16 Dec 2019
From: Peter Xu
Subject: Re: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking
On Mon, Dec 16, 2019 at 11:08:15AM +0100, Paolo Bonzini wrote:
> > Although now, because we have kvm_get_running_vcpu(), all the [&] cases
> > should be fine without changing anything, I tend to add another patch
> > in the next post to convert all the [&] cases to explicitly pass a
> > vcpu pointer instead of a kvm pointer if no one disagrees, then we can
> > verify that against kvm_get_running_vcpu().
>
> This is a good idea but remember not to convert those to
> kvm_vcpu_write_guest, because you _don't_ want these writes to touch
> SMRAM (most of the addresses are OS-controlled rather than
> firmware-controlled).

OK. I think I only need to pass in vcpu* instead of kvm* in
kvm_write_guest_page(), just like kvm_vcpu_write_guest(); however we'll
still keep writing only to address space id==0 there.
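
Roughly something like below (the helper name is hypothetical, just to
illustrate passing vcpu* while still forcing address space 0; not the
actual patch):

#include <linux/kvm_host.h>

/*
 * Hypothetical helper: take a vcpu pointer so the write can be checked
 * against kvm_get_running_vcpu() (and later routed to the per-vcpu
 * dirty ring), but keep writing through address space id 0, unlike
 * kvm_vcpu_write_guest() which follows the vcpu's current address
 * space and could touch SMRAM.
 */
static int kvm_write_guest_page_vcpu(struct kvm_vcpu *vcpu, gfn_t gfn,
                                     const void *data, int offset, int len)
{
        /* The caller should be in this vcpu's context. */
        WARN_ON_ONCE(vcpu != kvm_get_running_vcpu());

        /* kvm_write_guest_page() resolves the slot from address space 0. */
        return kvm_write_guest_page(vcpu->kvm, gfn, data, offset, len);
}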

>
> > init_rmode_tss or init_rmode_identity_map. But I've marked them as
> > unimportant because they should only happen once at boot.
>
> We need to check if userspace can add an arbitrary number of entries by
> calling KVM_SET_TSS_ADDR repeatedly. I think it can; we'd have to
> forbid multiple calls to KVM_SET_TSS_ADDR which is not a problem in general.

Will do that altogether with the series. I can further change both of
these calls to not track dirty at all, which shouldn't be hard; after
all, userspace doesn't even know about them, as you mentioned below.
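
For the repeated-call issue, I'm thinking of something along the lines
of the below in vmx_set_tss_addr() (only a sketch, with field names as
in today's arch/x86/kvm/vmx/vmx.c; the exact fix may end up different):

static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr)
{
        if (enable_unrestricted_guest)
                return 0;
        if (addr > (unsigned int)(-3 * PAGE_SIZE))
                return -EINVAL;

        /*
         * Refuse a second call, so userspace cannot make KVM re-create
         * the real mode TSS pages arbitrarily often.
         */
        if (to_kvm_vmx(kvm)->tss_addr)
                return -EEXIST;

        to_kvm_vmx(kvm)->tss_addr = addr;

        return init_rmode_tss(kvm);
}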

Is there anything that explains what KVM_SET_TSS_ADDR is used for? The
closest thing to useful that I could find is this (from api.txt):

This ioctl is required on Intel-based hosts. This is needed
on Intel hardware because of a quirk in the virtualization
implementation (see the internals documentation when it pops
into existence).

So... has it really popped into existence somewhere? It would be good
to at least know why it does not need to be migrated.
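
From the userspace side, my understanding is that it's a one-shot ioctl
done at VM setup time, something like the below (illustrative only; the
guest physical address is arbitrary, it just needs to be three pages
below 4G that the guest will never touch):

#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>

int main(void)
{
        int kvm_fd = open("/dev/kvm", O_RDWR);
        if (kvm_fd < 0) { perror("open /dev/kvm"); return 1; }

        int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);
        if (vm_fd < 0) { perror("KVM_CREATE_VM"); return 1; }

        /* Done once per VM on Intel hosts, before the vcpus run. */
        if (ioctl(vm_fd, KVM_SET_TSS_ADDR, 0xfffbd000UL) < 0) {
                perror("KVM_SET_TSS_ADDR");
                return 1;
        }

        return 0;
}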

> >> I don't think that's possible, most writes won't come from a page fault
> >> path and cannot retry.
> >
> > Yep, maybe I should say it the other way round: we only wait if
> > kvm_get_running_vcpu() == NULL. Then somewhere near
> > vcpu_enter_guest(), we add a check to wait if the per-vcpu ring is
> > full. Would that work?
>
> Yes, that should work, especially if we know that kvmgt is the only case
> that can wait. And since:
>
> 1) kvmgt doesn't really need dirty page tracking (because VFIO devices
> generally don't track dirty pages, and because kvmgt shouldn't be using
> kvm_write_guest anyway)
>
> 2) the real mode TSS and identity map shouldn't even be tracked, as they
> are invisible to userspace
>
> it seems to me that kvm_get_running_vcpu() lets us get rid of the per-VM
> ring altogether.

Yes, it would be perfect if so.
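
(For the check near vcpu_enter_guest(), I'm currently thinking of
something like the below; the helper and the exit reason name are
placeholders until the next version:)

/*
 * Placeholder sketch: in vcpu context we never wait, we just stop
 * entering the guest and let userspace collect the per-vcpu ring.
 * Waiting would only ever be needed for writers with
 * kvm_get_running_vcpu() == NULL, which per the above we may be able
 * to drop together with the per-VM ring.
 */
static int kvm_vcpu_check_dirty_ring(struct kvm_vcpu *vcpu)
{
        if (!kvm_dirty_ring_full(&vcpu->dirty_ring))
                return 1;       /* OK to enter the guest */

        vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
        return 0;               /* exit to userspace to collect the ring */
}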

--
Peter Xu
