Subject: Re: [patch 2/3] slub: scan partial list for free slabs when thrashing
On Sun, 29 Mar 2009, David Rientjes wrote:
> > Whenever a cpu cache satisfies a fastpath allocation, a fastpath counter
> > is incremented. This counter is cleared whenever the slowpath is
> > invoked. This tracks how many fastpath allocations the cpu slab has
> > fulfilled before it must be refilled.

On Mon, 2009-03-30 at 10:37 -0400, Christoph Lameter wrote:
> That adds fastpath overhead and it shows for small objects in your tests.

Yup, and looking at this:

+ u16 fastpath_allocs; /* Consecutive fast allocs before slowpath */
+ u16 slowpath_allocs; /* Consecutive slow allocs before watermark */
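
In other words, the scheme boils down to something like this (just a
sketch, the helper name is mine and the actual allocation paths are
elided):

	/* Bookkeeping described in the changelog; u16 as in the patch. */
	static inline void note_alloc(struct kmem_cache_cpu *c, bool fastpath)
	{
		if (fastpath)
			c->fastpath_allocs++;	/* another cpu slab hit */
		else
			c->fastpath_allocs = 0;	/* slowpath refill resets it */
	}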

How much do operations on u16 hurt on, say, x86-64? It's nice that
sizeof(struct kmem_cache_cpu) is capped at 32 bytes, but on CPUs that
have bigger cache lines the types could be wider.
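
If we went that way, I'd imagine something along these lines (sketch
only; the 32-byte threshold and the typedef name are made up,
L1_CACHE_BYTES is the usual macro for the cache line size):

	#if L1_CACHE_BYTES > 32
	typedef unsigned int fastpath_count_t;	/* room for 32-bit ops */
	#else
	typedef u16 fastpath_count_t;	/* keeps the struct at 32 bytes */
	#endif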

Christoph, why is struct kmem_cache_cpu not __cacheline_aligned_in_smp
btw?
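
That is, something like this (field list from slub_def.h off the top
of my head, so treat it as a sketch; the struct attribute form is the
four-underscore ____cacheline_aligned_in_smp):

	struct kmem_cache_cpu {
		void **freelist;	/* first free object in cpu slab */
		struct page *page;	/* slab we are allocating from */
		int node;		/* node of the page (-1 for debug) */
		unsigned int offset;	/* free pointer offset in words */
		unsigned int objsize;	/* object size from kmem_cache */
	} ____cacheline_aligned_in_smp;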

Pekka


