 
Subject: Re: [PATCH v4 5/7] mm, kasan: Stackdepot implementation. Enable stackdepot for SLAB


On 02/29/2016 08:12 PM, Dmitry Vyukov wrote:

>>> diff --git a/lib/Makefile b/lib/Makefile
>>> index a7c26a4..10a4ae3 100644
>>> --- a/lib/Makefile
>>> +++ b/lib/Makefile
>>> @@ -167,6 +167,13 @@ obj-$(CONFIG_SG_SPLIT) += sg_split.o
>>> obj-$(CONFIG_STMP_DEVICE) += stmp_device.o
>>> obj-$(CONFIG_IRQ_POLL) += irq_poll.o
>>>
>>> +ifeq ($(CONFIG_KASAN),y)
>>> +ifeq ($(CONFIG_SLAB),y)
>>
>> Just try to imagine that another subsystem wants to use stackdepot. How is this going to look?
>>
>> We have Kconfig to describe dependencies. So, this should be under CONFIG_STACKDEPOT.
>> So any user of this feature can just do 'select STACKDEPOT' in Kconfig.
>>
>>> + obj-y += stackdepot.o
>>> + KASAN_SANITIZE_slub.o := n
_stackdepot.o (i.e. this should be KASAN_SANITIZE_stackdepot.o, not _slub.o)

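To spell out what I meant with CONFIG_STACKDEPOT -- just a sketch, the exact
symbol name and wiring are up to you:

	# lib/Kconfig
	config STACKDEPOT
		bool
		select STACKTRACE

	# lib/Makefile
	obj-$(CONFIG_STACKDEPOT) += stackdepot.o
	KASAN_SANITIZE_stackdepot.o := n

	# lib/Kconfig.kasan
	config KASAN
		...
		select STACKDEPOT if SLAB

Then stackdepot doesn't need to know anything about KASAN or SLAB.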

>>
>>> +
>>> + stack->hash = hash;
>>> + stack->size = size;
>>> + stack->handle.slabindex = depot_index;
>>> + stack->handle.offset = depot_offset >> STACK_ALLOC_ALIGN;
>>> + __memcpy(stack->entries, entries, size * sizeof(unsigned long));
>>
>> s/__memcpy/memcpy/
>
> memcpy should be instrumented by asan/tsan, and we would like to avoid
> that instrumentation here.

KASAN_SANITIZE_* := n already takes care of this.
__memcpy() is a special thing solely for KASAN internals and some assembly code,
and it's not generally available.


>>> + if (unlikely(!smp_load_acquire(&next_slab_inited))) {
>>> + if (!preempt_count() && !in_irq()) {
>>
>> If you're trying to detect atomic context here, then this doesn't work. E.g. you can't know
>> about spinlocks held on a non-preemptible kernel.
>> And I'm not sure why you need this. You know the gfp flags here, so allocation in atomic context shouldn't be a problem.
>
>
> We don't have gfp flags for kfree.
> I wonder how CONFIG_DEBUG_ATOMIC_SLEEP handles this. Maybe it has the answer.

It hasn't. It doesn't guarantee that atomic context will always be detected.

> Alternatively, we can always assume that we are in atomic context in kfree.
>

Or do this allocation in a separate context, e.g. put it on a workqueue.
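Roughly (just a sketch, not against your patch; init_stack_slab() here stands
for whatever initializes the preallocated slab in depot_save_stack()):

	static void depot_prealloc_workfn(struct work_struct *work)
	{
		void *prealloc;

		if (smp_load_acquire(&next_slab_inited))
			return;

		prealloc = (void *)__get_free_pages(GFP_KERNEL, STACK_ALLOC_ORDER);
		if (prealloc)
			init_stack_slab(&prealloc);
	}
	static DECLARE_WORK(depot_prealloc_work, depot_prealloc_workfn);

	/* in depot_save_stack(), instead of allocating inline: */
	if (unlikely(!smp_load_acquire(&next_slab_inited)))
		schedule_work(&depot_prealloc_work);

The downside is that the first traces after boot could be lost until the work
has run, so you'd still want some initial preallocation.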

>
>
>>> + alloc_flags &= (__GFP_RECLAIM | __GFP_IO | __GFP_FS |
>>> + __GFP_NOWARN | __GFP_NORETRY |
>>> + __GFP_NOMEMALLOC | __GFP_DIRECT_RECLAIM);
>>
>> I think a blacklist approach would be better here.
>>
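I.e. strip only the flags that must not reach the page allocator from here and
keep everything else, instead of enumerating the allowed ones. The exact set
below is just an example:

	/* example only: drop what the depot can't deal with, keep the rest */
	alloc_flags &= ~(__GFP_ZERO | __GFP_COMP | __GFP_HIGHMEM);
	alloc_flags |= __GFP_NOWARN;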
>>> + page = alloc_pages(alloc_flags, STACK_ALLOC_ORDER);
>>
>> STACK_ALLOC_ORDER = 4 - that's a lot. Do you really need that much?
>
> Part of the issue is the atomic context above. When we can't allocate
> memory we still want to save the stack trace. When we have less than
> STACK_ALLOC_ORDER memory, we try to preallocate another
> STACK_ALLOC_ORDER in advance. So in the worst case, we have
> STACK_ALLOC_ORDER memory and that should be enough to handle all
> kmalloc/kfree in the atomic context. 1 page does not look like enough. I
> think Alex did some measuring of the failure rate (when we are out of
> memory and can't allocate more).
>

A lot of order-4 pages will lead to high fragmentation. You don't need physically contiguous memory here,
so try to use vmalloc(). It is slower, but fragmentation won't be a problem.
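E.g. (sketch; note that vmalloc() can sleep, so this has to happen on the
non-atomic path, which also fits the workqueue idea above):

	/* instead of alloc_pages(alloc_flags, STACK_ALLOC_ORDER): */
	void *prealloc = vmalloc(PAGE_SIZE << STACK_ALLOC_ORDER);
	...
	vfree(prealloc);	/* on the error/unused path */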

And one more thing. Take a look at mempool, because it's generally used to solve the problem you have here
(guaranteed allocation in atomic context).
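Roughly (a sketch, numbers and names are just examples):

	static mempool_t *depot_pool;

	/* set up from process context, e.g. an initcall: */
	depot_pool = mempool_create_page_pool(4 /* min_nr */, STACK_ALLOC_ORDER);

	/* allocation path: usable in atomic context, the pool keeps min_nr
	 * preallocated elements around and refills itself on free: */
	struct page *page = mempool_alloc(depot_pool, GFP_NOWAIT);
	...
	mempool_free(page, depot_pool);

The guarantee only holds as long as elements eventually get freed back, but for
topping up an occasional preallocation it should be enough.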

