Subject: Re: [patch] SLQB slab allocator
On Thu, 2009-01-22 at 11:27 +0200, Pekka Enberg wrote:
> Hi Christoph,
>
> On Mon, 19 Jan 2009, Nick Piggin wrote:
> > > > > You only go to the allocator when the percpu queue goes empty though, so
> > > > > if memory policy changes (eg context switch or something), then subsequent
> > > > > allocations will be of the wrong policy.
> > > >
> > > > The per cpu queue size in SLUB is limited by the queues only containing
> > > > objects from the same page. If you have large queues like SLAB/SLQB(?)
> > > > then this could be an issue.
> > >
> > > And it could be a problem in SLUB too. Chances are that several allocations
> > > will be wrong after every policy switch. I could describe situations in which
> > > SLUB will allocate with the _wrong_ policy literally 100% of the time.
>
> On Wed, 2009-01-21 at 19:13 -0500, Christoph Lameter wrote:
> > No it cannot because in SLUB objects must come from the same page.
> > Multiple objects in a queue will only ever require a single page and not
> > multiple like in SLAB.
>
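To make that concrete, here is a toy user-space model of a queue-style fast
path (invented names and sizes, not actual SLAB/SLQB/SLUB code). The queue is
filled according to whatever policy is in effect at refill time, and the fast
path never re-checks it, so after a policy switch up to the remaining queue
length of objects can still be handed out from the "old" node:

#include <stdlib.h>

#define QUEUE_SIZE 64

/* Toy model of a per-cpu object queue (illustrative only, not kernel code). */
struct percpu_queue {
	void *objects[QUEUE_SIZE];	/* cached free objects, filled in one batch */
	int nr;				/* objects currently queued */
	int refill_node;		/* node the last refill honoured */
};

/* Stand-in for "grab a batch of objects from the node the policy asks for". */
static int refill_queue(struct percpu_queue *q, int policy_node)
{
	while (q->nr < QUEUE_SIZE) {
		void *obj = malloc(128);	/* pretend this came from policy_node */
		if (!obj)
			break;
		q->objects[q->nr++] = obj;
	}
	q->refill_node = policy_node;	/* only the refill consults the policy */
	return q->nr;
}

static void *queue_alloc(struct percpu_queue *q, int policy_node)
{
	/*
	 * Fast path: pop without looking at policy_node.  Everything still
	 * queued was placed according to q->refill_node, which may no longer
	 * match the caller's policy after a context switch or mbind().
	 */
	if (q->nr)
		return q->objects[--q->nr];

	/* Slow path: only here is the current policy actually applied. */
	return refill_queue(q, policy_node) ? q->objects[--q->nr] : NULL;
}

In the SLUB case the analogous "queue" is just the free objects of the current
per-cpu page, so QUEUE_SIZE is effectively bounded by objects-per-page; with a
SLAB/SLQB-style queue it is an independent, typically larger, tunable.
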
> There's one potential problem with "per-page queues", though. The bigger
> the object, the smaller the "queue" (i.e. less objects per page). Also,
> partial lists are less likely to help for big objects because they get
> emptied so quickly and returned to the page allocator. Perhaps we should
> do a small "full list" for caches with large objects?
That definitely helps. We could use a batch count to control the list size.
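
Roughly what I have in mind (a user-space toy sketch with invented names, not
a patch): keep fully free slabs of a large-object cache on a short per-cache
list instead of returning them to the page allocator straight away, and let
the batch count cap how many we hold on to:

#include <stdlib.h>

#define MAX_KEPT 16

/* Toy model of a capped "keep a few fully free slabs" list (not kernel code). */
struct free_slab_list {
	void *slabs[MAX_KEPT];	/* fully free slabs kept back from the page allocator */
	int nr;			/* how many are currently kept */
	int batch;		/* cap: beyond this, pages go straight back */
};

/* Called when the last object on a slab is freed. */
static void retire_slab(struct free_slab_list *l, void *slab)
{
	if (l->nr < l->batch && l->nr < MAX_KEPT) {
		/* Keep it so the next large-object allocation avoids the page allocator. */
		l->slabs[l->nr++] = slab;
	} else {
		/* Over the batch limit: give the memory back immediately. */
		free(slab);	/* stands in for returning pages to the buddy allocator */
	}
}

/* Try to reuse a kept slab; NULL means "ask the page allocator". */
static void *reuse_slab(struct free_slab_list *l)
{
	return l->nr ? l->slabs[--l->nr] : NULL;
}

Presumably the batch would only need to be non-zero for caches whose object
size makes per-page queues too short, so small-object caches would keep the
current behaviour.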



