Subject: [tip:x86/mm] mm/mmu_gather: enable tlb flush range in generic mmu_gather
    Commit-ID:  6d80ee50c281d02be090cf429294b32dab0b23b7
    Gitweb: http://git.kernel.org/tip/6d80ee50c281d02be090cf429294b32dab0b23b7
    Author: Alex Shi <alex.shi@intel.com>
    AuthorDate: Mon, 25 Jun 2012 14:08:24 +0800
    Committer: H. Peter Anvin <hpa@zytor.com>
    CommitDate: Mon, 25 Jun 2012 20:53:19 -0700

    mm/mmu_gather: enable tlb flush range in generic mmu_gather

This patch enables tlb flush range support in the generic mmu layer.

Most architectures have their own tlb flush range support, like ARM and
IA64. The x86 architecture has no such support in hardware yet, but the
'invlpg' instruction can implement it to some degree. So, enable this
feature in the generic layer for x86 now; it may also be useful for
other architectures in the future.
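
As a rough illustration of the 'invlpg' approach (a minimal sketch only,
not the x86 implementation from this series; the helper name and the
64-page cut-off are made up for the example):

/*
 * Illustrative sketch only: flush a virtual address range one page at a
 * time with invlpg, falling back to a full TLB flush when the range is
 * large.  The helper name and the 64-page cut-off are hypothetical.
 */
static inline void example_flush_tlb_range(unsigned long start, unsigned long end)
{
	unsigned long addr;

	if (end - start > 64 * PAGE_SIZE) {
		__flush_tlb();		/* full flush is cheaper here */
		return;
	}
	for (addr = start; addr < end; addr += PAGE_SIZE)
		asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
}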

The generic mmu_gather struct is protected by the macro
HAVE_GENERIC_MMU_GATHER. Architectures that already support flush
ranges define their own mmu_gather struct, so this change is safe for
them.

In the future we may unify this struct and the related functions across
architectures.
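
As a rough sketch of how an architecture-specific tlb_flush() could
consume the range tracked in the generic struct (illustrative only;
flush_one_range() is a hypothetical helper, and the real x86 hook-up is
done elsewhere in this series):

/*
 * Illustrative sketch only: use the start/end recorded in the generic
 * mmu_gather instead of always flushing the whole mm.  start > end
 * means no range was recorded (start is initialized to -1UL, end to 0).
 * flush_one_range() is a hypothetical helper.
 */
static inline void tlb_flush(struct mmu_gather *tlb)
{
	if (tlb->fullmm || tlb->start > tlb->end)
		flush_tlb_mm(tlb->mm);
	else
		flush_one_range(tlb->mm, tlb->start, tlb->end);
}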

Thanks to Peter Zijlstra for his repeated reminders to keep the code
safe across multiple architectures!

    Signed-off-by: Alex Shi <alex.shi@intel.com>
    Link: http://lkml.kernel.org/r/1340604507-11214-7-git-send-email-alex.shi@intel.com
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    ---
 include/asm-generic/tlb.h |    2 ++
 mm/memory.c               |    9 +++++++++
 2 files changed, 11 insertions(+), 0 deletions(-)

    diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
    index 75e888b..ed6642a 100644
    --- a/include/asm-generic/tlb.h
    +++ b/include/asm-generic/tlb.h
@@ -86,6 +86,8 @@ struct mmu_gather {
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	struct mmu_table_batch	*batch;
 #endif
+	unsigned long		start;
+	unsigned long		end;
 	unsigned int		need_flush : 1,	/* Did free PTEs */
 				fast_mode  : 1; /* No batching */

    diff --git a/mm/memory.c b/mm/memory.c
    index 1b7dc66..32c9943 100644
    --- a/mm/memory.c
    +++ b/mm/memory.c
@@ -206,6 +206,8 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, bool fullmm)
 	tlb->mm = mm;
 
 	tlb->fullmm     = fullmm;
+	tlb->start	= -1UL;
+	tlb->end	= 0;
 	tlb->need_flush = 0;
 	tlb->fast_mode  = (num_possible_cpus() == 1);
 	tlb->local.next = NULL;
@@ -248,6 +250,8 @@ void tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long e
 {
 	struct mmu_gather_batch *batch, *next;
 
+	tlb->start = start;
+	tlb->end   = end;
 	tlb_flush_mmu(tlb);
 
 	/* keep the page table cache within bounds */
@@ -1204,6 +1208,11 @@ again:
 	 */
 	if (force_flush) {
 		force_flush = 0;
+
+#ifdef HAVE_GENERIC_MMU_GATHER
+		tlb->start = addr;
+		tlb->end = end;
+#endif
 		tlb_flush_mmu(tlb);
 		if (addr != end)
 			goto again;
