Subject: Re: [PATCH 1/3] Slab infrastructure for array operations
On Fri, 13 Feb 2015, Joonsoo Kim wrote:
>
> I also think that this implementation is slub-specific. For example,
> in the slab case, it is always better to access the local cpu cache first
> rather than the page allocator, since slab doesn't use a list to manage
> free objects and there is no cache line overhead like there is in slub.
> I think that, in kmem_cache_alloc_array(), just calling the
> allocator-defined __kmem_cache_alloc_array() is the better approach.

What do you mean by "better"? Please be specific as to where you would see
a difference. And SLAB definitely manages free objects, although
differently from SLUB. SLAB manages per-cpu (local) objects, per-node
partial lists etc., same as SLUB. The cache line overhead is there, but it
is not that big a difference in terms of choosing which objects to get first.

For a large allocation it is beneficial for both allocators to first reduce
the list of partially allocated slab pages on a node.
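
To make that concrete, here is a rough sketch of what such a strategy could
look like for a generic entry point. This is not the code from the patch:
__array_from_partial() is an assumed helper standing in for whatever pulls
free objects out of partially allocated slab pages on the local node.

#include <linux/slab.h>

/*
 * Rough sketch only. Satisfy as much of a large request as possible from
 * the node's partially allocated slab pages, then fill the remainder one
 * object at a time through the regular allocation path. Returns the
 * number of objects actually placed in p[].
 */
static size_t alloc_array_sketch(struct kmem_cache *s, gfp_t gfp,
				 size_t nr, void **p)
{
	size_t allocated;

	/* First reduce the node's list of partially allocated slab pages. */
	allocated = __array_from_partial(s, gfp, nr, p);

	/* Top up from the (cache-hot) per-cpu objects / regular fast path. */
	while (allocated < nr) {
		void *obj = kmem_cache_alloc(s, gfp);

		if (!obj)
			break;
		p[allocated++] = obj;
	}
	return allocated;
}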

Going to the local objects first is enticing since these are cache hot, but
only a limited number of them are available, and there are issues with
acquiring a large number of objects. For SLAB the objects are dispersed
and not spatially local. For SLUB the number of objects is usually much
more limited than in SLAB (but that is configurable these days via the cpu
partial pages). SLUB allocates spatially local objects from one page
before moving on to the next, which is an advantage. However, it has to
traverse a linked list instead of an array (as SLAB does).
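
As a simplified illustration of that last point (not the kernel's actual
data structures): grabbing N objects SLUB-style means chasing a linked list
threaded through the free objects of a page, while a SLAB-style per-cpu
array cache is popped as a plain pointer array.

#include <stddef.h>

/*
 * SLUB-style: the free list is threaded through the free objects of a
 * page, so collecting objects walks one link (and potentially one cache
 * miss) per object, but all objects come from the same page.
 */
struct free_obj {			/* hypothetical layout */
	struct free_obj *next;
};

static size_t grab_from_freelist(struct free_obj **freelist,
				 size_t nr, void **p)
{
	size_t i;

	for (i = 0; i < nr && *freelist; i++) {
		p[i] = *freelist;
		*freelist = (*freelist)->next;	/* pointer chase per object */
	}
	return i;
}

/*
 * SLAB-style: the per-cpu array cache is an array of pointers to free
 * objects, so collecting objects is a straight array pop, but the objects
 * may be scattered across many pages.
 */
static size_t grab_from_array_cache(void **entries, size_t *avail,
				    size_t nr, void **p)
{
	size_t i;

	for (i = 0; i < nr && *avail; i++)
		p[i] = entries[--(*avail)];
	return i;
}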
