    From: Andy Lutomirski
    Date: Fri, 22 Jun 2018
    Subject: Re: [PATCH 4/7] x86,tlb: make lazy TLB mode lazier
    On Fri, Jun 22, 2018 at 8:15 AM Rik van Riel <riel@surriel.com> wrote:
    >
    > On Fri, 2018-06-22 at 08:04 -0700, Andy Lutomirski wrote:
    > > On Wed, Jun 20, 2018 at 12:57 PM Rik van Riel <riel@surriel.com>
    > > wrote:
    > > >
    > > > Lazy TLB mode can result in an idle CPU being woken up by a TLB
    > > > flush,
    > > > when all it really needs to do is reload %CR3 at the next context
    > > > switch,
    > > > assuming no page table pages got freed.
    > > >
    > > > This patch deals with that issue by introducing a third TLB state,
    > > > TLBSTATE_FLUSH, which causes %CR3 to be reloaded at the next
    > > > context
    > > > switch.
    > > >
    > > > Atomic compare and exchange is used to close races between the TLB
    > > > shootdown code and the context switch code. Keying off just the
    > > > tlb_gen is likely to not be enough, since that would not give
    > > > lazy_tlb_can_skip_flush() information on when it is facing a race
    > > > and has to send the IPI to a CPU in the middle of a LAZY -> OK
    > > > switch.
    > > >
    > > > Unlike the 2016 version of this patch, CPUs in TLBSTATE_LAZY are
    > > > not
    > > > removed from the mm_cpumask(mm), since that would prevent the TLB
    > > > flush IPIs at page table free time from being sent to all the CPUs
    > > > that need them.
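
    (For reference, a minimal sketch of the LAZY -> FLUSH transition being
    described above. This is illustrative only, not the patch itself; the
    tlbstate_word per-CPU variable and the TLBSTATE_* constants are assumed
    names:)

        /* Assumed per-CPU state word, not the real cpu_tlbstate layout. */
        DEFINE_PER_CPU(atomic_t, tlbstate_word) = ATOMIC_INIT(TLBSTATE_OK);

        /*
         * Flush side: try to move a lazy CPU to "reload %CR3 at the next
         * context switch" instead of sending it an IPI.  The cmpxchg closes
         * the race with the context switch path, which takes the CPU from
         * LAZY back to OK; if that already happened, we must send the IPI.
         */
        static bool lazy_tlb_try_defer_flush(int cpu)
        {
                atomic_t *state = &per_cpu(tlbstate_word, cpu);

                if (atomic_read(state) != TLBSTATE_LAZY)
                        return false;           /* CPU is active, IPI it */

                /* LAZY -> FLUSH: %CR3 is reloaded before running user code */
                return atomic_cmpxchg(state, TLBSTATE_LAZY, TLBSTATE_FLUSH)
                                == TLBSTATE_LAZY;
        }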
    > >
    > > Eek, this is so complicated. In the 2016 version of the patches, you
    > > needed all this. But I rewrote the whole subsystem to make it easier
    > > now :) I think that you can get rid of all of this and instead just
    > > revert the relevant parts of:
    > >
    > > b956575bed91ecfb136a8300742ecbbf451471ab
    > >
    > > All the bookkeeping is already in place -- no need for new state.
    >
    > I looked at using your .tlb_gen stuff, but we need a
    > way to do that race free. I suppose setting the
    > tlbstate to !lazy before checking .tlb_gen might do
    > the trick, if we get the ordering right at the tlb
    > invalidation site, too?

    Oh, right.

    >
    > Something like this:
    >
    > context switch                      tlb invalidation
    >
    >                                     advance mm->context.tlb_gen
    >                                     send IPI to cpus with !is_lazy tlb
    >
    >
    > tlbstate.is_lazy = FALSE
    > *need_flush = .tlb_gen < next_tlb_gen
    >
    > Do you see any holes in that?

    Logically, is_lazy is (with your patches) just like mm_cpumask in
    terms of ordering. So I think your idea above is fine. But I think
    you need to make sure there's a full barrier between is_lazy = false
    and reading .tlb_gen.
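
    Roughly, the pairing would look like this (sketch only -- the
    cpu_tlbstate field names and helpers below are assumptions, not the
    actual code, and declarations are omitted):

        /* context switch path on the (formerly) lazy CPU */
        this_cpu_write(cpu_tlbstate.is_lazy, false);
        smp_mb();       /* pairs with the barrier on the flush side */
        need_flush = this_cpu_read(cpu_tlbstate.ctxs[asid].tlb_gen) <
                     atomic64_read(&next->context.tlb_gen);

        /* flush side */
        atomic64_inc(&mm->context.tlb_gen);     /* advance the generation */
        smp_mb();       /* pairs with the barrier in the context switch path */
        for_each_cpu(cpu, mm_cpumask(mm))
                if (!per_cpu(cpu_tlbstate.is_lazy, cpu))
                        cpumask_set_cpu(cpu, &cpus_to_ipi);

    Either the flush side sees is_lazy == false and sends the IPI, or the
    context switch path sees the advanced tlb_gen and sets need_flush; the
    full barrier on both sides guarantees at least one of the two happens.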

    --Andy
