lkml.org 
[lkml]   [2010]   [Apr]   [9]   [last100]   RSS Feed
Views: [wrap][no wrap]   [headers]  [forward] 
 
Messages in this thread
/
SubjectRe: [PATCH 06/13] mm: Preemptible mmu_gather
From
Date
On Fri, 2010-04-09 at 13:25 +1000, Nick Piggin wrote:
> On Thu, Apr 08, 2010 at 09:17:43PM +0200, Peter Zijlstra wrote:
> > @@ -39,30 +33,48 @@
> > struct mmu_gather {
> > struct mm_struct *mm;
> > unsigned int nr; /* set to ~0U means fast mode */
> > + unsigned int max; /* nr < max */
> > unsigned int need_flush;/* Really unmapped some ptes? */
> > unsigned int fullmm; /* non-zero means full mm flush */
> > - struct page * pages[FREE_PTE_NR];
> > +#ifdef HAVE_ARCH_MMU_GATHER
> > + struct arch_mmu_gather arch;
> > +#endif
> > + struct page **pages;
> > + struct page *local[8];
>
> Have you done some profiling on this? What I would like to see, if
> it's not too much complexity, is to have a small set of pages to
> handle common size frees, and then use them up first by default
> before attempting to allocate more.
>
> Also, it would be cool to be able to chain allocations to avoid
> TLB flushes even on big frees (overridable by arch of course, in
> case they're doing some non-preeemptible work or you wish to break
> up lock hold times). But that might be just getting over engineered.

Did no profiling at all, back when I wrote this I was in a hurry to get
this working for -rt.

But yes, those things do look like something we want to look into, we
can easily add a head structure to these pages like we did for the RCU
batches.

But as it stands I think we can do those things as incrementals on top
of this, no?

What kind of workload would you recommend I use to profile this?



\
 
 \ /
  Last update: 2010-04-09 10:21    [W:0.279 / U:0.232 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site