Subject: Re: [PATCH 01/20] mm: mmu_gather rework
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
Date: 2011-04-20

On Tue, 2011-04-19 at 13:06 -0700, Andrew Morton wrote:
> On Fri, 01 Apr 2011 14:12:59 +0200
> Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
>
> > Remove the first obstacle towards a fully preemptible mmu_gather.
> >
> > The current scheme assumes mmu_gather is always done with preemption
> > disabled and uses per-cpu storage for the page batches. Change this to
> > try and allocate a page for batching and in case of failure, use a
> > small on-stack array to make some progress.
> >
> > Preemptible mmu_gather is desired in general and usable once
> > i_mmap_lock becomes a mutex. Doing it before the mutex conversion
> > saves us from having to rework the code by moving the mmu_gather
> > bits inside the pte_lock.
> >
> > Also avoid flushing the tlb batches from under the pte lock;
> > this is useful even without the i_mmap_lock conversion, as it
> > significantly reduces pte lock hold times.
>
> There doesn't seem to be much point in reviewing this closely, as a
> lot of it gets tossed away later in the series...

That's a result of breaking patches along concept boundaries :/
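
For reference, the scheme the changelog describes boils down to
something like the following. This is a condensed sketch of the patch,
not the code itself; the structure and names follow the patch, but most
details are elided:

#define MMU_GATHER_BUNDLE	8	/* small on-stack fallback */

struct mmu_gather_batch {
	struct mmu_gather_batch	*next;
	unsigned int		nr;	/* pages queued in this batch */
	unsigned int		max;	/* capacity of pages[] */
	struct page		*pages[0];
};

struct mmu_gather {
	struct mm_struct	*mm;
	struct mmu_gather_batch	*active;  /* batch being filled */
	struct mmu_gather_batch	local;    /* header for the fallback */
	struct page	*__pages[MMU_GATHER_BUNDLE]; /* backs local.pages[] */
};

/*
 * Try to extend the gather with a freshly allocated page; on failure
 * the caller flushes and reuses the small on-stack array, so unmap
 * still makes (slower) progress under memory pressure.
 */
static bool tlb_next_batch(struct mmu_gather *tlb)
{
	struct mmu_gather_batch *batch;

	batch = (void *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
	if (!batch)
		return false;

	batch->next = NULL;
	batch->nr   = 0;
	batch->max  = (PAGE_SIZE - sizeof(*batch)) / sizeof(struct page *);

	tlb->active->next = batch;
	tlb->active = batch;
	return true;
}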

> > free_pages_and_swap_cache(tlb->pages, tlb->nr);
>
> It seems inappropriate that this code uses
> free_page[s]_and_swap_cache(). It should go direct to put_page() and
> release_pages()? Please review this code's implicit decision to pass
> "cold==0" into release_pages().

Well, that isn't new with this patch; it does, however, look to be
correct. We're freeing user pages, and those could indeed still be
part of the swapcache. Furthermore, the PAGEVEC_SIZE split in
free_pages_and_swap_cache() alone makes it worth calling that over
release_pages().

As to the cold==0, I think that too is correct, since we don't
actually touch the pages themselves and have no inkling as to their
cache state; we're simply wiping out user pages.
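
For completeness, the PAGEVEC_SIZE split referred to above is the
chunking done in mm/swap_state.c; paraphrasing the code of that era
(the trailing 0 in the release_pages() call is the cold argument
Andrew asks about):

void free_pages_and_swap_cache(struct page **pages, int nr)
{
	struct page **pagep = pages;

	lru_add_drain();
	while (nr) {
		int todo = min(nr, PAGEVEC_SIZE);
		int i;

		/* drop the swapcache reference where one exists */
		for (i = 0; i < todo; i++)
			free_swap_cache(pagep[i]);
		/* batched put_page(); the third argument is cold */
		release_pages(pagep, todo, 0);
		pagep += todo;
		nr -= todo;
	}
}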

> > -static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
> > +static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
>
> I wonder if all the inlining which remains in this code is needed and
> desirable.

Probably not; the big plan was to make everybody use the generic code
and then move it into mm/memory.c or so.
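
The only bit that really wants to stay inline is a thin wrapper, with
the worker free to move out of line; roughly this (a sketch of the
intended shape, where __tlb_remove_page() returns nonzero while the
batch still has room):

static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
{
	if (!__tlb_remove_page(tlb, page))
		tlb_flush_mmu(tlb);	/* batch full: flush and start over */
}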

But I guess I can have asm-generic/tlb.h define HAVE_GENERIC_MMU_GATHER
and make the compilation in mm/memory.c conditional on that (or generate
lots of Kconfig churn).
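
Something along these lines (a sketch of the conditional-compilation
idea, not an actual patch; the tlb_gather_mmu() body is omitted):

/* include/asm-generic/tlb.h */
#define HAVE_GENERIC_MMU_GATHER

/* mm/memory.c */
#ifdef HAVE_GENERIC_MMU_GATHER

void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, bool fullmm)
{
	/* out-of-line gather setup, shared by every architecture
	 * that picks up the generic asm-generic/tlb.h code */
}

#endif /* HAVE_GENERIC_MMU_GATHER */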

