Date: Wed, 4 Dec 2019
From: Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH v6 10/18] sh/tlb: Convert SH to generic mmu_gather

On Wed, Dec 04, 2019 at 04:07:53PM +0100, Geert Uytterhoeven wrote:
> On Wed, Dec 4, 2019 at 2:35 PM Peter Zijlstra <peterz@infradead.org> wrote:

> > Does this fare better?
>
> Yes. Migo-R is happy again.
> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
>
> > --- a/arch/sh/include/asm/pgalloc.h
> > +++ b/arch/sh/include/asm/pgalloc.h
> > @@ -36,9 +36,7 @@ do { \
> > #if CONFIG_PGTABLE_LEVELS > 2
> > #define __pmd_free_tlb(tlb, pmdp, addr) \
> > do { \
> > - struct page *page = virt_to_page(pmdp); \
> > - pgtable_pmd_page_dtor(page); \
> > - tlb_remove_page((tlb), page); \
> > + pmd_free((tlb)->mm, (pmdp)); \
> > } while (0);
> > #endif

OK, so I was going to write a changelog to go with that, but then I
realized that while this works and matches what we had before the
patch, I'm not sure it is in fact correct.

With this applied (and also before the patch) we're freeing the PMD
before we've done the TLB invalidate, which seems wrong!
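
Roughly, the difference between the two orderings looks like this (an
illustrative sketch only, not actual call sites):

	/* immediate free, as pmd_free() does -- sketch */
	pmd_free(tlb->mm, pmdp);	/* memory goes back to the allocator now */
	tlb_flush_mmu(tlb);		/* TLB invalidate only happens here      */

	/* deferred free, as the generic mmu_gather intends -- sketch */
	tlb_remove_page(tlb, virt_to_page(pmdp));	/* page is merely queued */
	tlb_flush_mmu(tlb);		/* invalidate first, then free the queued pages */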

Looking at the size of that pmd_cache, the objects work out to
2^(30-(12+12-3)+3) == 2^12 bytes, which is exactly 1 page for
PAGE_SIZE_4K, and less for the larger page sizes.
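
Spelled out (the individual constants below are inferred from those
exponents, so treat them as an assumption rather than values quoted
from the tree):

	PTRS_PER_PMD     = 1 << (30 - (12 + 12 - 3)) = 1 << 9 = 512 entries
	entry size       = 1 << PTE_MAGNITUDE        = 1 << 3 = 8 bytes
	pmd_cache object = 512 * 8                   = 4096 bytes == PAGE_SIZE_4K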

I'm thinking perhaps we should do something like the below instead?


---
arch/sh/mm/pgtable.c | 16 ++--------------
1 file changed, 2 insertions(+), 14 deletions(-)

diff --git a/arch/sh/mm/pgtable.c b/arch/sh/mm/pgtable.c
index 5c8f9247c3c2..fac7e822fd0c 100644
--- a/arch/sh/mm/pgtable.c
+++ b/arch/sh/mm/pgtable.c
@@ -5,9 +5,6 @@
#define PGALLOC_GFP GFP_KERNEL | __GFP_ZERO

static struct kmem_cache *pgd_cachep;
-#if PAGETABLE_LEVELS > 2
-static struct kmem_cache *pmd_cachep;
-#endif

void pgd_ctor(void *x)
{
@@ -23,11 +20,6 @@ void pgtable_cache_init(void)
pgd_cachep = kmem_cache_create("pgd_cache",
PTRS_PER_PGD * (1<<PTE_MAGNITUDE),
PAGE_SIZE, SLAB_PANIC, pgd_ctor);
-#if PAGETABLE_LEVELS > 2
- pmd_cachep = kmem_cache_create("pmd_cache",
- PTRS_PER_PMD * (1<<PTE_MAGNITUDE),
- PAGE_SIZE, SLAB_PANIC, NULL);
-#endif
}

pgd_t *pgd_alloc(struct mm_struct *mm)
@@ -48,11 +40,7 @@ void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)

pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
{
- return kmem_cache_alloc(pmd_cachep, PGALLOC_GFP);
-}
-
-void pmd_free(struct mm_struct *mm, pmd_t *pmd)
-{
- kmem_cache_free(pmd_cachep, pmd);
+ BUILD_BUG_ON(PTRS_PER_PMD * (1<<PTE_MAGNITUDE) > PAGE_SIZE);
+ return (pmd_t *)__get_free_page(PGALLOC_GFP);
}
#endif /* PAGETABLE_LEVELS > 2 */
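
For completeness, with pmd_alloc_one() handing out a full page, the
__pmd_free_tlb() side could presumably go back to deferring the free
through the gather, and anything still calling pmd_free() would want a
plain free_page(); an untested sketch, not part of the patch above
(note there is no pgtable_pmd_page_dtor() since pmd_alloc_one() never
ran the ctor):

	/* sketch only: queue the page on the gather, freed after the invalidate */
	#define __pmd_free_tlb(tlb, pmdp, addr)			\
	do {							\
		tlb_remove_page((tlb), virt_to_page(pmdp));	\
	} while (0)

	/* sketch only: page-backed counterpart to pmd_alloc_one() above */
	void pmd_free(struct mm_struct *mm, pmd_t *pmd)
	{
		free_page((unsigned long)pmd);
	}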