Subject: Re: [PATCH 0/7] x86/mm/tlb: make lazy TLB mode even lazier
On Mon, 2018-09-24 at 14:37 -0400, Rik van Riel wrote:
> Linus asked me to come up with a smaller patch set to get the
> benefits of lazy TLB mode, so I spent some time trying out various
> permutations of the code, with a few workloads that do lots of
> context switches, and also happen to have a fair number of TLB
> flushes a second.
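
Purely for illustration, here is a hypothetical sketch of a workload in
that style (not one of the benchmarks actually used for these
measurements): two threads ping-pong a byte over a pair of pipes, so
every round trip forces context switches, while the main thread
periodically calls madvise(MADV_DONTNEED) on a private anonymous
mapping, which forces TLB flushes on the shared mm.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#define MAP_SIZE (4UL << 20)	/* 4 MiB region to dirty and zap */
#define ITERS	 100000

static int ping[2], pong[2];
static char *buf;

/* Worker: block on the pipe, touch a page of the mapping, reply. */
static void *worker(void *arg)
{
	char byte;

	for (int i = 0; i < ITERS; i++) {
		if (read(ping[0], &byte, 1) != 1)
			break;
		buf[(i * 4096UL) % MAP_SIZE] = (char)i;
		if (write(pong[1], &byte, 1) != 1)
			break;
	}
	return NULL;
}

int main(void)
{
	pthread_t tid;
	char byte = 0;

	if (pipe(ping) || pipe(pong)) {
		perror("pipe");
		return 1;
	}

	buf = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	if (pthread_create(&tid, NULL, worker, NULL)) {
		perror("pthread_create");
		return 1;
	}

	for (int i = 0; i < ITERS; i++) {
		/* Each round trip forces a context switch on both sides. */
		if (write(ping[1], &byte, 1) != 1 ||
		    read(pong[0], &byte, 1) != 1)
			break;
		/*
		 * Periodically zap the page-table entries; both threads
		 * share the mm, so this sends TLB flushes to the other
		 * thread's CPU as well.
		 */
		if ((i & 63) == 0)
			madvise(buf, MAP_SIZE, MADV_DONTNEED);
	}

	pthread_join(tid, NULL);
	return 0;
}

The blocking pipe reads are what let a CPU drop into the idle task (and
thus lazy TLB mode) between messages, which is the case this series is
trying to make cheaper.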

I made a nice list of which patches this code is based on, but I
forgot to copy it into my intro email.

The patches are based on current -tip, plus:
- tip x86/core: 012e77a903d ("x86/nmi: Fix NMI uaccess race against
  CR3 switching")
- the arm64 tlb/asm-generic branch, including:
  - faaadaf315b4 ("asm-generic/tlb: Guard with #ifdef CONFIG_MMU")
  - 22a61c3c4f13 ("asm-generic/tlb: Track freeing of page-table
    directories in struct mmu_gather")
  - a6d60245d6d9 ("asm-generic/tlb: Track which levels of the page
    tables have been cleared")

--
All Rights Reversed.