 
From: Catalin Marinas <catalin.marinas@arm.com>
Subject: [PATCH] mm: Flush the TLB for a single address in a huge page
Date: 22 Jul 2015
When the page table entry is a huge page (and not a pointer to a
next-level table), the mapping is covered by a single TLB entry, so
there is no need to flush the TLB by range. This patch changes
flush_tlb_range() to flush_tlb_page() in functions where we know the
pmd entry is a huge page.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
---

Hi,

This is only a minor improvement, but it avoids iterating over each
small page of a huge page when a single TLB entry covers the whole
mapping (we already make a similar assumption in __tlb_adjust_range).
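
For a rough sense of the saving (illustrative only, not part of the patch):
with 4KB base pages and 2MB pmd-level huge pages, a by-range flush that
walks the range in PAGE_SIZE steps (as some architectures' flush_tlb_range
implementations effectively do) would issue 512 invalidations for a mapping
the TLB holds as a single entry. A minimal user-space sketch of that
arithmetic, assuming those page sizes:

#include <stdio.h>

/* Illustrative values only, not kernel code: 4KB base pages and 2MB
 * pmd-level huge pages, as on x86_64 or arm64 with a 4KB granule. */
#define PAGE_SIZE	4096UL
#define HPAGE_PMD_SIZE	(2UL * 1024 * 1024)

int main(void)
{
	unsigned long addr, flushes = 0;

	/* A by-range flush implemented as a per-page loop visits every
	 * PAGE_SIZE step within the huge page. */
	for (addr = 0; addr < HPAGE_PMD_SIZE; addr += PAGE_SIZE)
		flushes++;

	printf("by-range flush:     %lu invalidations\n", flushes);
	printf("single-entry flush: 1 invalidation\n");
	return 0;
}

The single-entry case is the same assumption __tlb_adjust_range relies on.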

Thanks.

mm/pgtable-generic.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 6b674e00153c..ff17eca26211 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -67,7 +67,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma,
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 	if (changed) {
 		set_pmd_at(vma->vm_mm, address, pmdp, entry);
-		flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+		flush_tlb_page(vma, address);
 	}
 	return changed;
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -101,7 +101,7 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma,
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 	young = pmdp_test_and_clear_young(vma, address, pmdp);
 	if (young)
-		flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+		flush_tlb_page(vma, address);
 	return young;
 }
 #endif
@@ -128,7 +128,7 @@ pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 	VM_BUG_ON(!pmd_trans_huge(*pmdp));
 	pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
-	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+	flush_tlb_page(vma, address);
 	return pmd;
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -143,7 +143,7 @@ void pmdp_splitting_flush(struct vm_area_struct *vma, unsigned long address,
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 	set_pmd_at(vma->vm_mm, address, pmdp, pmd);
 	/* tlb flush only to serialize against gup-fast */
-	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+	flush_tlb_page(vma, address);
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif
@@ -195,7 +195,7 @@ void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
 {
 	pmd_t entry = *pmdp;
 	set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(entry));
-	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+	flush_tlb_page(vma, address);
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif
