Subject: Re: [EXTERNAL] [PATCH v2] KVM: Don't actually set a request when evicting vCPUs for GFN cache invd
On Fri, 2022-02-25 at 17:27 +0000, Sean Christopherson wrote:
> On Fri, Feb 25, 2022, David Woodhouse wrote:
> > On Fri, 2022-02-25 at 16:13 +0000, Sean Christopherson wrote:
> > > On Fri, Feb 25, 2022, Woodhouse, David wrote:
> > > > Since we need an active vCPU context to do dirty logging (thanks, dirty
> > > > ring)... and since any time vcpu_run exits to userspace for any reason
> > > > might be the last time we ever get an active vCPU context... I think
> > > > that kind of fundamentally means that we must flush dirty state to the
> > > > log on *every* return to userspace, doesn't it?
> > >
> > > I would rather add a variant of mark_page_dirty_in_slot() that takes a vCPU, which
> > > we would have in all cases. I see no reason to require use of kvm_get_running_vcpu().
> >
> > We already have kvm_vcpu_mark_page_dirty(), but it can't use just 'some
> > vcpu' because the dirty ring is lockless. So if you're ever going to
> > use anything other than kvm_get_running_vcpu() we need to add locks.
>
> Heh, actually, scratch my previous comment. I was going to respond that
> kvm_get_running_vcpu() is mutually exclusive with all other ioctls() on the same
> vCPU by virtue of vcpu->mutex, but I had forgotten that kvm_get_running_vcpu()
> really should be "kvm_get_loaded_vcpu()". I.e. as long as KVM is in a vCPU-ioctl
> path, kvm_get_running_vcpu() will be non-null.
>
> > And while we *could* do that, I don't think it would negate the
> > fundamental observation that *any* time we return from vcpu_run to
> > userspace, that could be the last time. Userspace might read the dirty
> > log for the *last* time, and any internally-cached "oh, at some point
> > we need to mark <this> page dirty" is lost because by the time the vCPU
> > is finally destroyed, it's too late.
>
> Hmm, isn't that an existing bug? I think the correct fix would be to flush all
> dirty vmcs12 pages to the memslot in vmx_get_nested_state(). Userspace _must_
> invoke that if it wants to migrate a nested vCPU.

Yes, AFAICT it's an existing bug in the way the kvm_host_map code works
today. Your suggestion makes sense as *long* as we consider it OK to
retrospectively document that userspace must extract the nested state
*before* doing the final read of the dirty log.
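
In VMM terms that ordering would be something like the below; this is
purely illustrative, the buffer sizing and error handling are
simplified, and 'state_size', 'slot' and 'bitmap' are placeholders for
whatever the VMM already tracks:

        /* Pull the nested state *first*, which (with your fix) flushes
         * any dirty vmcs12 pages into the memslots... */
        struct kvm_nested_state *state = calloc(1, state_size);
        state->size = state_size;
        if (ioctl(vcpu_fd, KVM_GET_NESTED_STATE, state) < 0)
                err(1, "KVM_GET_NESTED_STATE");

        /* ...and only then take the final pass over the dirty log. */
        struct kvm_dirty_log log = {
                .slot = slot,
                .dirty_bitmap = bitmap,
        };
        if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log) < 0)
                err(1, "KVM_GET_DIRTY_LOG");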

I'm not aware that we clearly document "the dirty log may keep
changing until XXX" anywhere today, but you're effectively proposing
that we define it, I think. There may well be VMMs which assume that no
pages will be dirtied unless they are actually *running* a vCPU.

Which is why I was proposing that we flush the dirty status to the log
*every* time we leave vcpu_run back to userspace. But I'll not die on
that hill, if you make a good case for your proposal being OK.
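
To make that concrete, the shape I had in mind was a hook in the exit
path of vcpu_run(), along these lines. This is only a sketch: the
helper name is invented, kvm_gfn_to_pfn_cache_mark_dirty() is the
hypothetical "mark it dirty now" API I sketch further down, and I'm
using the Xen shinfo cache as the example of a long-lived mapping; the
real thing would walk whatever caches the arch keeps:

        /* Called just before returning to userspace from KVM_RUN, while
         * kvm_get_running_vcpu() is still valid for the dirty ring. */
        static void kvm_vcpu_flush_dirty_caches(struct kvm_vcpu *vcpu)
        {
                struct gfn_to_pfn_cache *gpc = &vcpu->kvm->arch.xen.shinfo_cache;

                read_lock(&gpc->lock);
                if (gpc->valid)
                        kvm_gfn_to_pfn_cache_mark_dirty(vcpu->kvm, gpc);
                read_unlock(&gpc->lock);
        }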

> > I think I'm going to rip out the 'dirty' flag from the gfn_to_pfn_cache
> > completely and add a function (to be called with an active vCPU
> > context) which marks the page dirty *now*.
>
> Hrm, something like?
>
> 1. Drop @dirty from kvm_gfn_to_pfn_cache_init()
> 2. Rename @dirty => @old_dirty in kvm_gfn_to_pfn_cache_refresh()
> 3. Add an API to mark the associated slot dirty without unmapping
>
> I think that makes sense.

Except I'll drop 'dirty' from kvm_gfn_to_pfn_cache_refresh() too.
There's no scope for a deferred "oh, I meant to tell you that was
dirty" even in that case, is there? Use the API we add in your #3.

