    Subject: Re: [PATCH Part2 v6 07/49] x86/sev: Invalid pages from direct map when adding it to RMP table
    On Wed, Jul 27, 2022 at 07:01:34PM +0200, Borislav Petkov wrote:
    > On Mon, Jun 20, 2022 at 11:03:07PM +0000, Ashish Kalra wrote:
    >
    > > Subject: x86/sev: Invalid pages from direct map when adding it to RMP table
    >
    > "...: Invalidate pages from the direct map when adding them to the RMP table"
    >
    > > +static int restore_direct_map(u64 pfn, int npages)
    > > +{
    > > +        int i, ret = 0;
    > > +
    > > +        for (i = 0; i < npages; i++) {
    > > +                ret = set_direct_map_default_noflush(pfn_to_page(pfn + i));
    >
    > set_memory_p() ?

    We implemented this approach for v7, but it causes a fairly significant
    performance regression, particularly for the npages > 1 case that this
    change was meant to optimize.

    I still need to dig in a bit, but I'm guessing it's related to flushing
    behavior.
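
    For reference, the range-based form is roughly the sketch below
    (paraphrased, not the exact v7 hunk; the 'vaddr' local and using
    set_memory_np() for the invalidate side are my assumptions here).
    Unlike the *_noflush() helpers, set_memory_np() goes through the CPA
    machinery and flushes on every call, which would line up with the
    regression being flush-related:

        static int invalid_direct_map(unsigned long pfn, int npages)
        {
                /* Direct-map virtual address covering the whole range. */
                unsigned long vaddr = (unsigned long)__va(pfn << PAGE_SHIFT);

                /* Clears the present bit for all npages in a single CPA call. */
                return set_memory_np(vaddr, npages);
        }

    The restore path would be the mirror image using the set_memory_p()
    counterpart suggested above.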

    It would, however, be nice to have a set_direct_map_default_noflush()
    variant that accepts an 'npages' argument, since that would be more
    performant here and would also potentially allow for restoring the 2M
    direct mapping in some cases. Will look into this more for v8.
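
    On x86, set_direct_map_default_noflush() is just a wrapper around
    __set_pages_p(page, 1), so a first cut at a range variant could be as
    small as the sketch below (function names are made up for illustration,
    and this assumes simply passing the wider count through to
    __set_pages_p()/__set_pages_np() is safe):

        /* Hypothetical additions to arch/x86/mm/pat/set_memory.c */
        int set_direct_map_default_noflush_range(struct page *page, int npages)
        {
                /* Same as set_direct_map_default_noflush(), but over a contiguous range. */
                return __set_pages_p(page, npages);
        }

        int set_direct_map_invalid_noflush_range(struct page *page, int npages)
        {
                /* Same as set_direct_map_invalid_noflush(), but over a contiguous range. */
                return __set_pages_np(page, npages);
        }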

    -Mike

    >
    > > +                if (ret)
    > > +                        goto cleanup;
    > > +        }
    > > +
    > > +cleanup:
    > > +        WARN(ret > 0, "Failed to restore direct map for pfn 0x%llx\n", pfn + i);
    >
    > Warn for each pfn?!
    >
    > That'll flood dmesg mightily.
    >
    > > +        return ret;
    > > +}
    > > +
    > > +static int invalid_direct_map(unsigned long pfn, int npages)
    > > +{
    > > +        int i, ret = 0;
    > > +
    > > +        for (i = 0; i < npages; i++) {
    > > +                ret = set_direct_map_invalid_noflush(pfn_to_page(pfn + i));
    >
    > As above, set_memory_np() doesn't work here instead of looping over each
    > page?
    >
    > > @@ -2462,11 +2494,38 @@ static int rmpupdate(u64 pfn, struct rmpupdate *val)
    > >          if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
    > >                  return -ENXIO;
    > >
    > > +        level = RMP_TO_X86_PG_LEVEL(val->pagesize);
    > > +        npages = page_level_size(level) / PAGE_SIZE;
    > > +
    > > +        /*
    > > +         * If page is getting assigned in the RMP table then unmap it from the
    > > +         * direct map.
    > > +         */
    > > +        if (val->assigned) {
    > > +                if (invalid_direct_map(pfn, npages)) {
    > > +                        pr_err("Failed to unmap pfn 0x%llx pages %d from direct_map\n",
    >
    > "Failed to unmap %d pages at pfn 0x... from the direct map\n"
    >
    > > +                               pfn, npages);
    > > +                        return -EFAULT;
    > > +                }
    > > +        }
    > > +
    > >          /* Binutils version 2.36 supports the RMPUPDATE mnemonic. */
    > >          asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFE"
    > >                       : "=a"(ret)
    > >                       : "a"(paddr), "c"((unsigned long)val)
    > >                       : "memory", "cc");
    > > +
    > > +        /*
    > > +         * Restore the direct map after the page is removed from the RMP table.
    > > +         */
    > > +        if (!ret && !val->assigned) {
    > > +                if (restore_direct_map(pfn, npages)) {
    > > +                        pr_err("Failed to map pfn 0x%llx pages %d in direct_map\n",
    >
    > "Failed to map %d pages at pfn 0x... into the direct map\n"
    >
    > Thx.
    >
    > --
    > Regards/Gruss,
    > Boris.
    >
    > https://people.kernel.org/tglx/notes-about-netiquette
