Date: Wed, 28 Jan 2015 20:29:22 +0200
From: Pekka Enberg <>
Subject: Re: [PATCH -mm v2 1/3] slub: never fail to shrink cache
On 1/28/15 6:31 PM, Christoph Lameter wrote:
> On Wed, 28 Jan 2015, Vladimir Davydov wrote:
>
>> This patch therefore makes __kmem_cache_shrink() allocate the array on
>> stack instead of calling kmalloc, which may fail. The array size is
>> chosen to be equal to 32, because most SLUB caches store not more than
>> 32 objects per slab page. Slab pages with <= 32 free objects are sorted
>> using the array by the number of objects in use and promoted to the head
>> of the partial list, while slab pages with > 32 free objects are left at
>> the end of the list without any ordering imposed on them.
>
> Acked-by: Christoph Lameter <cl@linux.com>

Acked-by: Pekka Enberg <penberg@kernel.org>
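[Editor's note: for readers following the thread, below is a minimal user-space sketch of the bucketing scheme the quoted commit message describes; it is not the kernel patch itself. A fixed on-stack array of 32 buckets stands in for the kmalloc'ed array, slabs with few free objects are sorted by free count and promoted to the head of the partial list, and slabs with more free objects stay unsorted at the tail. All identifiers here (slab_page, shrink_partial, push) are illustrative; the real implementation lives in mm/slub.c and uses the kernel's list_head primitives.]

#include <stdio.h>

#define SHRINK_PROMOTE_MAX 32	/* most SLUB caches hold <= 32 objects/slab */

struct slab_page {
	int objects;			/* total objects in the slab page */
	int inuse;			/* objects currently allocated */
	struct slab_page *next;		/* singly linked for simplicity */
};

/* Push a page onto the front of a list. */
static void push(struct slab_page **list, struct slab_page *page)
{
	page->next = *list;
	*list = page;
}

/*
 * Reorder @partial so that nearly-full slabs come first, sorted by
 * free-object count; slabs with more than SHRINK_PROMOTE_MAX free
 * objects are left unsorted at the tail. Empty slabs are dropped
 * (the real code would free them back to the page allocator).
 */
static struct slab_page *shrink_partial(struct slab_page *partial)
{
	/*
	 * The on-stack bucket array: promote[i] collects slabs with
	 * i + 1 free objects. This fixed array is what replaces the
	 * kmalloc() that could fail under memory pressure.
	 */
	struct slab_page *promote[SHRINK_PROMOTE_MAX] = { 0 };
	struct slab_page *tail = NULL, *result = NULL, *page;

	while ((page = partial) != NULL) {
		int free = page->objects - page->inuse;

		partial = page->next;
		if (free == page->objects)
			continue;		/* empty: would be discarded */
		else if (free <= SHRINK_PROMOTE_MAX)
			/* a slab on the partial list has >= 1 free object */
			push(&promote[free - 1], page);
		else
			push(&tail, page);	/* unsorted leftovers */
	}

	/* Splice buckets back, fullest slabs (fewest free) at the head. */
	result = tail;
	for (int i = SHRINK_PROMOTE_MAX - 1; i >= 0; i--)
		while ((page = promote[i]) != NULL) {
			promote[i] = page->next;
			push(&result, page);
		}
	return result;
}

int main(void)
{
	struct slab_page pages[] = {
		{ 16, 15, NULL },	/* 1 free -> promoted to the head */
		{ 16,  0, NULL },	/* empty  -> discarded */
		{ 16,  8, NULL },	/* 8 free -> promoted after it */
	};
	struct slab_page *list = NULL;

	for (int i = 0; i < 3; i++)
		push(&list, &pages[i]);
	for (struct slab_page *p = shrink_partial(list); p; p = p->next)
		printf("slab: %d/%d objects in use\n", p->inuse, p->objects);
	return 0;
}

The point of the fixed 32-slot array is that __kmem_cache_shrink() is typically invoked under memory pressure, which is precisely when a kmalloc could fail; keeping the buckets on the stack makes the shrink path allocation-free at the cost of leaving slabs with more than 32 free objects unsorted.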