    Date: 2023-12-11
    Subject: Re: [PATCH v10 23/50] KVM: SEV: Make AVIC backing, VMSA and VMCB memory allocation SNP safe
    From: Vlastimil Babka <vbabka@suse.cz>
    On 10/16/23 15:27, Michael Roth wrote:
    > From: Brijesh Singh <brijesh.singh@amd.com>
    >
    > Implement a workaround for an SNP erratum where the CPU will incorrectly
    > signal an RMP violation #PF if a hugepage (2mb or 1gb) collides with the
    > RMP entry of a VMCB, VMSA or AVIC backing page.
    >
    > When SEV-SNP is globally enabled, the CPU marks the VMCB, VMSA, and AVIC
    > backing pages as "in-use" via a reserved bit in the corresponding RMP
    > entry after a successful VMRUN. This is done for _all_ VMs, not just
    > SNP-Active VMs.
    >
    > If the hypervisor accesses an in-use page through a writable
    > translation, the CPU will throw an RMP violation #PF. On early SNP
    > hardware, if an in-use page is 2mb aligned and software accesses any
    > part of the associated 2mb region with a hugepage, the CPU will
    > incorrectly treat the entire 2mb region as in-use and signal a spurious
    > RMP violation #PF.
    >
    > The recommended workaround is to not use the hugepage for the VMCB, VMSA or
    > AVIC backing page for similar reasons. Add a generic allocator that will
    > ensure that the page returned is not a hugepage (2mb or 1gb) and is safe to

    This wording is a bit confusing, as we are not avoiding "using a
    hugepage" but, AFAIU, avoiding using a (4k) page that has a
    hugepage-aligned physical address, right?

    > be used when SEV-SNP is enabled. Also implement similar handling for the
    > VMCB/VMSA pages of nested guests.
    >
    > Co-developed-by: Marc Orr <marcorr@google.com>
    > Signed-off-by: Marc Orr <marcorr@google.com>
    > Reported-by: Alper Gun <alpergun@google.com> # for nested VMSA case
    > Co-developed-by: Ashish Kalra <ashish.kalra@amd.com>
    > Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
    > Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
    > [mdr: squash in nested guest handling from Ashish]
    > Signed-off-by: Michael Roth <michael.roth@amd.com>
    > ---

    <snip>

    > +
    > +struct page *snp_safe_alloc_page(struct kvm_vcpu *vcpu)
    > +{
    > +        unsigned long pfn;
    > +        struct page *p;
    > +
    > +        if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
    > +                return alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
    > +
    > +        /*
    > +         * Allocate an SNP safe page to workaround the SNP erratum where
    > +         * the CPU will incorrectly signal an RMP violation #PF if a
    > +         * hugepage (2mb or 1gb) collides with the RMP entry of VMCB, VMSA
    > +         * or AVIC backing page. The recommended workaround is to not use the
    > +         * hugepage.

    Same here "not use the hugepage"

    > +         *
    > +         * Allocate one extra page, use a page which is not 2mb aligned
    > +         * and free the other.

    This makes more sense.

    > +         */
    > +        p = alloc_pages(GFP_KERNEL_ACCOUNT | __GFP_ZERO, 1);
    > +        if (!p)
    > +                return NULL;
    > +
    > +        split_page(p, 1);

    Yeah I think that's a sensible use of split_page(), as we don't have
    support for forcefully non-aligned allocations or specific "page
    coloring" in the page allocator.
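
    For anyone not familiar with the API, a minimal sketch of the pattern
    being acked here (nothing new, just the technique the patch relies on,
    with error handling omitted): an order-1 allocation yields two physically
    contiguous 4k pages, of which at most the first can be 2MB-aligned, and
    split_page() is what makes it legal to free the unwanted one on its own.

        /* Two contiguous 4k pages; only the first can sit on a 2MB boundary. */
        struct page *p = alloc_pages(GFP_KERNEL_ACCOUNT | __GFP_ZERO, 1);

        /*
         * Before the split the pair can only be freed as a whole with
         * __free_pages(p, 1). split_page() gives each constituent page its
         * own reference, so p and p + 1 become independent order-0 pages
         * that can be freed (or kept) individually.
         */
        split_page(p, 1);

        /* Free whichever page is 2MB-aligned, keep the other one. */
        __free_page(IS_ALIGNED(page_to_pfn(p), PTRS_PER_PMD) ? p : p + 1);
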
    So even with my wording concerns:

    Acked-by: Vlastimil Babka <vbabka@suse.cz>

    > +
    > +        pfn = page_to_pfn(p);
    > +        if (IS_ALIGNED(pfn, PTRS_PER_PMD))
    > +                __free_page(p++);
    > +        else
    > +                __free_page(p + 1);
    > +
    > +        return p;
    > +}
    > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
    > index 1e7fb1ea45f7..8e4ef0cd968a 100644
    > --- a/arch/x86/kvm/svm/svm.c
    > +++ b/arch/x86/kvm/svm/svm.c
    > @@ -706,7 +706,7 @@ static int svm_cpu_init(int cpu)
    >          int ret = -ENOMEM;
    >
    >          memset(sd, 0, sizeof(struct svm_cpu_data));
    > -        sd->save_area = alloc_page(GFP_KERNEL | __GFP_ZERO);
    > +        sd->save_area = snp_safe_alloc_page(NULL);
    >          if (!sd->save_area)
    >                  return ret;
    >
    > @@ -1425,7 +1425,7 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu)
    >          svm = to_svm(vcpu);
    >
    >          err = -ENOMEM;
    > -        vmcb01_page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
    > +        vmcb01_page = snp_safe_alloc_page(vcpu);
    >          if (!vmcb01_page)
    >                  goto out;
    >
    > @@ -1434,7 +1434,7 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu)
    >                   * SEV-ES guests require a separate VMSA page used to contain
    >                   * the encrypted register state of the guest.
    >                   */
    > -                vmsa_page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
    > +                vmsa_page = snp_safe_alloc_page(vcpu);
    >                  if (!vmsa_page)
    >                          goto error_free_vmcb_page;
    >
    > @@ -4876,6 +4876,16 @@ static int svm_vm_init(struct kvm *kvm)
    >          return 0;
    > }
    >
    > +static void *svm_alloc_apic_backing_page(struct kvm_vcpu *vcpu)
    > +{
    > +        struct page *page = snp_safe_alloc_page(vcpu);
    > +
    > +        if (!page)
    > +                return NULL;
    > +
    > +        return page_address(page);
    > +}
    > +
    > static struct kvm_x86_ops svm_x86_ops __initdata = {
    >          .name = KBUILD_MODNAME,
    >
    > @@ -5007,6 +5017,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
    >
    >          .vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,
    >          .vcpu_get_apicv_inhibit_reasons = avic_vcpu_get_apicv_inhibit_reasons,
    > +        .alloc_apic_backing_page = svm_alloc_apic_backing_page,
    > };
    >
    > /*
    > diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
    > index c13070d00910..b7b8bf73cbb9 100644
    > --- a/arch/x86/kvm/svm/svm.h
    > +++ b/arch/x86/kvm/svm/svm.h
    > @@ -694,6 +694,7 @@ void sev_es_vcpu_reset(struct vcpu_svm *svm);
    > void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
    > void sev_es_prepare_switch_to_guest(struct sev_es_save_area *hostsa);
    > void sev_es_unmap_ghcb(struct vcpu_svm *svm);
    > +struct page *snp_safe_alloc_page(struct kvm_vcpu *vcpu);
    >
    > /* vmenter.S */
    >
