Subject: [PATCH v2] mm: don't warn about large allocations for slab
From: Dmitry Vyukov <>

Slub does not call kmalloc_slab() for sizes > KMALLOC_MAX_CACHE_SIZE;
instead it falls back to kmalloc_large().
Slab, however, goes to kmalloc_slab() for all allocations,
relying on the NULL return value for over-sized allocations.
This inconsistency leads to unwanted warnings from kmalloc_slab()
for over-sized allocations under slab. Returning NULL for failed
allocations is the expected behavior.
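
For reference, the slub-side fast path looks roughly like this
(a simplified, paraphrased sketch of __kmalloc() in mm/slub.c around
this kernel version, shown only for illustration):

	void *__kmalloc(size_t size, gfp_t flags)
	{
		struct kmem_cache *s;
		void *ret;

		/* over-sized requests never reach kmalloc_slab() */
		if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
			return kmalloc_large(size, flags);

		s = kmalloc_slab(size, flags);
		if (unlikely(ZERO_OR_NULL_PTR(s)))
			return s;

		ret = slab_alloc(s, flags, _RET_IP_);
		/* ... tracing and kasan hooks elided ... */
		return ret;
	}

Slab has no such early return, so the over-sized request reaches
kmalloc_slab() and trips the warning there.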

Make slub and slab code consistent by checking size >
KMALLOC_MAX_CACHE_SIZE in slab before calling kmalloc_slab().

While we are here, also fix the check in kmalloc_slab():
we should check against KMALLOC_MAX_CACHE_SIZE rather than
KMALLOC_MAX_SIZE. It all kinda worked because for slab the two
constants are the same, and slub always checks the size against
KMALLOC_MAX_CACHE_SIZE before calling kmalloc_slab().
But if we ever get there with size > KMALLOC_MAX_CACHE_SIZE,
bad things will happen, for example in case of a newly
introduced bug in slub code.
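
For context, the two limits come from include/linux/slab.h; condensed
and paraphrased, the definitions are roughly:

	#ifdef CONFIG_SLAB
	/* slab: the largest kmalloc cache also covers the largest size */
	#define KMALLOC_SHIFT_MAX	KMALLOC_SHIFT_HIGH
	#endif

	#ifdef CONFIG_SLUB
	/* slub: caches stop at two pages, larger sizes use the page allocator */
	#define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
	#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
	#endif

	#define KMALLOC_MAX_SIZE	(1UL << KMALLOC_SHIFT_MAX)
	#define KMALLOC_MAX_CACHE_SIZE	(1UL << KMALLOC_SHIFT_HIGH)

So for slab KMALLOC_MAX_SIZE == KMALLOC_MAX_CACHE_SIZE, while for slub
KMALLOC_MAX_SIZE is much larger than KMALLOC_MAX_CACHE_SIZE.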

Also move the check in kmalloc_slab() from function entry
to the size > 192 case. This partially compensates for the additional
check in slab code and makes slub code a bit faster
(at least theoretically).

Also drop the __GFP_NOWARN check from the warning condition.
This warning means a bug in slab code itself;
user-passed flags have nothing to do with it.

Nothing of this affects slob.

Signed-off-by: Dmitry Vyukov <>
Cc: Christoph Lameter <>
Cc: Pekka Enberg <>
Cc: David Rientjes <>
Cc: Joonsoo Kim <>
Cc: Andrew Morton <>


Changes since v1:
- everything has changed, re-review
mm/slab.c | 4 ++++
mm/slab_common.c | 12 ++++++------
2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 9515798f37b2d..2a5654bb3b3ff 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3675,6 +3675,8 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
 	struct kmem_cache *cachep;
 	void *ret;
 
+	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
+		return NULL;
 	cachep = kmalloc_slab(size, flags);
 	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
 		return cachep;
@@ -3710,6 +3712,8 @@ static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
 	struct kmem_cache *cachep;
 	void *ret;
 
+	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
+		return NULL;
 	cachep = kmalloc_slab(size, flags);
 	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
 		return cachep;
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 1f903589980f9..7eb8dc136c1cb 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1023,18 +1023,18 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
 {
 	unsigned int index;
 
-	if (unlikely(size > KMALLOC_MAX_SIZE)) {
-		WARN_ON_ONCE(!(flags & __GFP_NOWARN));
-		return NULL;
-	}
-
 	if (size <= 192) {
 		if (!size)
 			return ZERO_SIZE_PTR;
 
 		index = size_index[size_index_elem(size)];
-	} else
+	} else {
+		if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
+			WARN_ON(1);
+			return NULL;
+		}
 		index = fls(size - 1);
+	}
 
 	return kmalloc_caches[kmalloc_type(flags)][index];
 }