    Subject: Re: [RFC PATCH v2 2/3] x86/mm/tlb: Defer PTI flushes
    On Tue, Aug 27, 2019 at 4:55 PM Nadav Amit <namit@vmware.com> wrote:
    >
    > > On Aug 27, 2019, at 4:13 PM, Andy Lutomirski <luto@kernel.org> wrote:
    > >
    > > On Fri, Aug 23, 2019 at 11:13 PM Nadav Amit <namit@vmware.com> wrote:
    > >> INVPCID is considerably slower than INVLPG of a single PTE. Using it to
    > >> flush the user page-tables when PTI is enabled therefore introduces
    > >> significant overhead.
    > >>
    > >> Instead, unless page-tables are released, it is possible to defer
    > >> the flushing of the user page-tables until the kernel returns to
    > >> userspace. These page tables are not in use while the kernel runs,
    > >> so deferring their flushes is not a security hazard.
    > >
    > > I agree and, in fact, I argued against ever using INVPCID in the
    > > original PTI code.
    > >
    > > However, I don't see what freeing page tables has to do with this. If
    > > the CPU can actually do speculative page walks based on the contents
    > > of non-current-PCID TLB entries, then we have major problems, since we
    > > don't actively flush the TLB for non-running mms at all.
    >
    > That was not my concern.
    >
    > >
    > > I suppose that, if we free a page table, then we can't activate the
    > > PCID by writing to CR3 before flushing things. But we can still defer
    > > the flush and just set the flush bit when we write to CR3.
    >
    > This was my concern. I can change the behavior so the code would flush the
    > whole TLB instead. I just tried not to change the existing behavior too
    > much.
    >

    We do this anyway if we don't have INVPCID_SINGLE, so it doesn't seem
    so bad to also do it if there's a freed page table.
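
    For anyone following along, here is a rough sketch of the scheme being
    discussed: defer the user-PCID flush and fold it into the CR3 write on
    return to userspace, clearing the NOFLUSH bit (so the write flushes
    that PCID) whenever anything is pending, including the freed-page-table
    case. This is illustrative C only, not the kernel's actual code; the
    helper names (defer_user_flush, switch_to_user_cr3) and the flags are
    made up for the example, and the real logic lives in arch/x86/mm/tlb.c
    and the entry code.

    #include <stdbool.h>
    #include <stdint.h>

    /* CR3 bit 63: keep the TLB entries tagged with this PCID. */
    #define CR3_PCID_NOFLUSH (1ULL << 63)

    static bool user_flush_pending;  /* user-PTI flush owed, not yet done */
    static bool freed_page_table;    /* a page-table page was freed       */

    /*
     * Flush path: instead of an immediate INVPCID on the (currently
     * unused) user page-tables, just record that a flush is owed.  A
     * freed page table is recorded too, so the eventual CR3 write is
     * forced to flush.
     */
    static void defer_user_flush(bool freed_pt)
    {
        user_flush_pending = true;
        if (freed_pt)
            freed_page_table = true;
    }

    /* Stand-in for the privileged MOV-to-CR3 done in the exit path. */
    static void write_cr3(uint64_t cr3)
    {
        (void)cr3;
    }

    /*
     * Return-to-userspace path: build the user CR3 value.  With nothing
     * pending, set the NOFLUSH bit and keep the TLB; otherwise leave the
     * bit clear so the CR3 write flushes the user PCID, which also covers
     * the freed-page-table case that a NOFLUSH switch could not.
     */
    static void switch_to_user_cr3(uint64_t user_pgd_pa, uint16_t user_pcid)
    {
        uint64_t cr3 = user_pgd_pa | (user_pcid & 0xfff);

        if (!user_flush_pending && !freed_page_table)
            cr3 |= CR3_PCID_NOFLUSH;

        user_flush_pending = false;
        freed_page_table = false;

        write_cr3(cr3);
    }

    The point of the exchange above is that the full-flush branch is the
    one already taken when INVPCID_SINGLE is unavailable, so taking it for
    freed page tables as well does not add a new kind of cost.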
