Subject: Re: [PATCH 1/2] KVM: MMU: Do not treat ZONE_DEVICE pages as being reserved
On Wed, Nov 06, 2019 at 10:09:29PM +0100, Paolo Bonzini wrote:
> On 06/11/19 19:04, Dan Williams wrote:
> > On Wed, Nov 6, 2019 at 9:07 AM Sean Christopherson
> > <sean.j.christopherson@intel.com> wrote:
> > This is racy unless you can be certain that the pfn and resulting page
> > has already been pinned by get_user_pages().
>
> What is the race exactly?

The check in kvm_get_pfn() is guaranteed to be racy, but that's already
broken with respect to ZONE_DEVICE.
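
For context, kvm_get_pfn() currently looks like this (roughly, paraphrasing
virt/kvm/kvm_main.c rather than quoting it):

void kvm_get_pfn(kvm_pfn_t pfn)
{
	/* inspects page state and takes a reference without any prior pin */
	if (!kvm_is_reserved_pfn(pfn))
		get_page(pfn_to_page(pfn));
}

None of its callers are guaranteed to hold a gup() reference, so an
is_zone_device_page() check added here would be looking at metadata that may
already be in the process of being torn down, which is exactly Dan's concern.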

> In general KVM does not use pfn's until after having gotten them from
> get_user_pages (or follow_pfn for VM_IO | VM_PFNMAP vmas, for which
> get_user_pages fails, but this is not an issue here). It then creates
> the page tables and releases the reference to the struct page.
>
> Anything else happens _after_ the reference has been released, but still
> from an mmu notifier; this is why KVM uses pfn_to_page quite pervasively.
>
> If this is enough to avoid races, then I prefer Sean's patch. If it is
> racy, we need to fix kvm_set_pfn_accessed and kvm_set_pfn_dirty first,
> and second at transparent_hugepage_adjust and kvm_mmu_zap_collapsible_spte:

transparent_hugepage_adjust() would be ok; it's called before the original
reference is put back.
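
Roughly, the ordering in the x86 fault path is (sketch only, not verbatim
code; arguments elided):

	/* pfn was obtained via gup() in try_async_pf(), reference still held */
	spin_lock(&vcpu->kvm->mmu_lock);
	if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
		goto out_unlock;
	transparent_hugepage_adjust(...);	/* may promote @pfn to a huge page */
	__direct_map(...);			/* installs SPTEs for @pfn */
out_unlock:
	spin_unlock(&vcpu->kvm->mmu_lock);
	kvm_release_pfn_clean(pfn);		/* original reference dropped here */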

> - if accessed/dirty state need not be tracked properly for ZONE_DEVICE,
> then I suppose David's patch is okay (though I'd like to have a big
> comment explaining all the things that went on in these emails). If
> they need to work, however, Sean's patch 1 is the right thing to do.
>
> - if we need Sean's patch 1, but it is racy to use is_zone_device_page,
> we could first introduce a helper similar to kvm_is_hugepage_allowed()
> from his patch 2, but using pfn_to_online_page() to filter out
> ZONE_DEVICE pages. This would cover both transparent_hugepage_adjust
> and kvm_mmu_zap_collapsible_spte.

After more thought, I agree with Paolo that adding kvm_is_zone_device_pfn()
is preferable if blocking with mmu_notifier is sufficient to avoid races.
The only caller that isn't protected by mmu_notifier (or holding a gup()-
obtained reference) is kvm_get_pfn(), but again, that's completely borked
anyways. Adding a WARN_ON(page_count()) in kvm_is_zone_device_pfn() would
help with the auditing issue.
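
Something like the below (completely untested, and it obviously needs a big
comment explaining the pinning requirement):

bool kvm_is_zone_device_pfn(kvm_pfn_t pfn)
{
	/*
	 * The ZONE_DEVICE metadata is only stable while the page is pinned,
	 * so WARN if a caller hands us a pfn without holding a reference
	 * (gup() or mmu_notifier protection).
	 */
	if (!pfn_valid(pfn) || WARN_ON(!page_count(pfn_to_page(pfn))))
		return false;

	return is_zone_device_page(pfn_to_page(pfn));
}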

The scope of the changes required to completely avoid is_zone_device_page()
is massive, and probably would result in additional trickery :-(

> > This is why I told David
> > to steer clear of adding more is_zone_device_page() usage, it's
> > difficult to audit. Without an existing pin the metadata to determine
> > whether a page is ZONE_DEVICE or not could be in the process of being
> > torn down. Ideally KVM would pass around a struct { struct page *page,
> > unsigned long pfn } tuple so it would not have to do this "recall
> > context" dance on every pfn operation.
>
> Unfortunately once KVM has created its own page tables, the struct page*
> reference is lost, as the PFN is the only thing that is stored in there.

Ya, the horrible idea I had in mind was to steal a bit from KVM_PFN_ERR_MASK
to track whether or not the PFN is associated with a struct page.
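
Purely illustrative (the bit position and name below are made up, and it'd
have to be carved out of the existing KVM_PFN_ERR_* encoding):

/* hypothetical flag: this pfn is backed by a refcounted struct page */
#define KVM_PFN_HAS_PAGE	BIT_ULL(62)

static inline bool kvm_pfn_has_page(kvm_pfn_t pfn)
{
	return pfn & KVM_PFN_HAS_PAGE;
}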
