Subject: Re: kmalloc optimization
Hi,

> > What do you have in mind? Still using slabs for kmalloc, but adding
> > non-power-of-two-sizes? Adding buckets of size (2^n)*1.5 would be
> > straightforward, and should get most of the benefit, if there is
> > benefit to be had.
>
> The best numbers have to be determined; that requires extensive profiling runs.

It is possible to add the necessary data-capture abilities to
kmalloc() and friends behind a build-time option. This would allow users
to gather the stats for their own workloads.
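
Something like this would do (a sketch only - CONFIG_KMALLOC_STATS and
the histogram are made-up names, and the unlocked counters are only
approximate on SMP, which is good enough for profiling):

#ifdef CONFIG_KMALLOC_STATS
/* One counter per requested byte size, up to a page.  The
 * table would be exported via /proc for post-processing.
 */
static unsigned long kmalloc_size_hist[PAGE_SIZE + 1];

#define kmalloc_stat(size)					\
	do {							\
		if ((size) <= PAGE_SIZE)			\
			kmalloc_size_hist[(size)]++;		\
	} while (0)
#else
#define kmalloc_stat(size)	do { } while (0)
#endif

kmalloc() itself then just calls kmalloc_stat(size) before the normal
bucket lookup.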

My latest version of the slab allocator (I still haven't got round to
finishing the thing off) allows new sizes to be added dynamically, without
needing a lock to guard the linkages between the different general cache
size nodes (gnodes).
The standard (power-of-2) general caches' gnodes are allocated from
contiguous memory, so they can be indexed into after a find-highest-set-bit
operation.
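
Roughly (illustrative - the helper and the gnode layout here are
stand-ins for what is actually in my tree):

#include <linux/types.h>

typedef struct gnode {
	size_t	objsize;	/* stand-in fields only */
	void	*slabs;
} gnode_t;

/* Eight power-of-2 gnodes, 32 bytes up to 4K, in one contiguous
 * array - no lock is needed to find them.  Larger sizes go to
 * the page allocator (not shown).
 */
#define MAX_GENERAL_GNODES	8
static gnode_t general_gnodes[MAX_GENERAL_GNODES];

static inline gnode_t *size_to_gnode(size_t size)
{
	int bit = 0;

	if (size <= 32)			/* smallest general cache */
		return &general_gnodes[0];
	size--;
	while (size >>= 1)		/* find-highest-set-bit */
		bit++;
	/* sizes in (2^bit, 2^(bit+1)] map to gnode (bit - 4) */
	return &general_gnodes[bit - 4];
}
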
Also, slabs can be shared between caches where the objects are of
similar size. This allows a specific cache to also be exported as a
general cache (still giving correct object usage counts for each sharing
cache). The only disadvantage is increased lock contention inside
my slab_info structures, but heavily used caches can ensure they don't
share via a SLAB_PERFORMANCE creation flag.
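
Creating such a cache would look something like this (hypothetical -
SLAB_PERFORMANCE only exists in my tree, and the cache shown is just
an example):

#include <linux/slab.h>

static kmem_cache_t *hot_cache;

void hot_cache_init(void)
{
	/* A heavily used cache opts out of slab sharing, so it
	 * never touches the shared slab_info locks.
	 */
	hot_cache = kmem_cache_create("hot_objects", 256, 0,
				      SLAB_PERFORMANCE | SLAB_HWCACHE_ALIGN,
				      NULL, NULL);
}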

> Also, the heaviest users are probably better converted to direct calls
> to kmem_cache_alloc (looking at my /proc/slabinfo, there must be some
> heavy user who doesn't do that for a size <= 32 bytes)

Many of the remaining general-size allocations are not performance
critical. Of those that are, networking (the data buffers for skbufs)
is by far the most important to tackle.
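
The conversion itself is mechanical (a sketch; the cache name and the
1536-byte size are illustrative - the point is that an MTU-sized
buffer gets an exact cache instead of kmalloc's 2048-byte bucket,
which wastes about a quarter of every allocation):

#include <linux/slab.h>

static kmem_cache_t *skb_data_cache;

void skb_data_cache_init(void)
{
	skb_data_cache = kmem_cache_create("skb_data", 1536, 0,
					   SLAB_HWCACHE_ALIGN,
					   NULL, NULL);
}

void *skb_data_alloc(int gfp_flags)
{
	return kmem_cache_alloc(skb_data_cache, gfp_flags);
}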

> A 1500-byte slab unfortunately is not too useful, because it does not fit
> well into 4K pages (you would need 8K or 16K page allocations, which the
> mm system does not like much due to fragmentation)

That doesn't matter (too much).
I have a scheme where slabs of different orders can be mixed within a
cache (even a shared cache). Alternating between the different orders, and
falling back to a lower order on allocation failure, allows large slabs to
be used while still protecting the smaller ones when the VM can't cope.
Coupled with some "background" slab shuffling (to reduce fragmentation),
this seems to work well on both high- and low-memory machines.
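
For 1500-byte objects the arithmetic works out as: a 4K slab holds two
(1096 bytes, ~27%, wasted) while an 8K slab holds five (692 bytes, ~8%,
wasted), ignoring bookkeeping, so the higher order pays off whenever the
VM can deliver it. The growth path is roughly this (names illustrative,
not lifted from my actual patch; the order-alternation policy is
omitted):

#include <linux/mm.h>

/* Try the preferred (large) order first, then fall back, so small
 * slabs keep the cache alive when memory is fragmented.
 */
static void *cache_grow_mixed(int gfp_flags, int pref_order)
{
	int order;
	unsigned long pages;

	for (order = pref_order; order >= 0; order--) {
		pages = __get_free_pages(gfp_flags, order);
		if (pages)
			return (void *)pages;	/* caller carves objects */
	}
	return NULL;				/* VM can't cope at any order */
}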

Mark

