Subject: Re: [PATCH v5 1/2] mm, kasan: improve double-free detection


On 06/10/2016 08:09 PM, Dmitry Vyukov wrote:
> On Fri, Jun 10, 2016 at 7:03 PM, Andrey Ryabinin
> <aryabinin@virtuozzo.com> wrote:
>>
>>
>> On 06/09/2016 08:00 PM, Andrey Ryabinin wrote:
>>> On 06/07/2016 09:03 PM, Kuthonuzo Luruo wrote:
>>>
>>> Next time, when/if you send a patch series, send the patches in one thread, i.e. the patches should be replies to the cover letter.
>>> Your patches are not linked together, which makes them harder to track.
>>>
>>>
>>>> Currently, KASAN may fail to detect concurrent deallocations of the same
>>>> object due to a race in kasan_slab_free(). This patch makes double-free
>>>> detection more reliable by serializing access to KASAN object metadata.
>>>> New functions kasan_meta_lock() and kasan_meta_unlock() are provided to
>>>> lock/unlock per-object metadata. Double-free errors are now reported via
>>>> kasan_report().
>>>>
>>>> Per-object lock concept from suggestion/observations by Dmitry Vyukov.
>>>>
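(A minimal user-space sketch of the per-object metadata lock idea described above; the status-word layout, names, and CAS loop are illustrative assumptions, not code from the actual patch.)

/*
 * Illustrative sketch of "serialize access to per-object metadata":
 * a lock bit in a per-object status word is taken with a CAS loop
 * around the free path, so two concurrent frees of the same object
 * cannot both observe it as allocated. All names are hypothetical.
 */
#include <stdatomic.h>
#include <stdbool.h>

#define OBJ_LOCK_BIT	(1u << 31)
#define OBJ_STATE_MASK	(~OBJ_LOCK_BIT)

enum obj_state { OBJ_ALLOCATED = 1, OBJ_FREED = 2 };

struct obj_meta {
	_Atomic unsigned int status;	/* object state plus lock bit */
};

static void meta_lock(struct obj_meta *m)
{
	unsigned int old = atomic_load(&m->status) & OBJ_STATE_MASK;

	/* Spin until the lock bit is observed clear and we manage to set it. */
	while (!atomic_compare_exchange_weak(&m->status, &old,
					     old | OBJ_LOCK_BIT))
		old &= OBJ_STATE_MASK;
}

static void meta_unlock(struct obj_meta *m)
{
	atomic_fetch_and(&m->status, OBJ_STATE_MASK);
}

/* Returns false if the object was already freed (double-free detected). */
static bool check_and_mark_freed(struct obj_meta *m)
{
	bool ok;

	meta_lock(m);
	ok = (atomic_load(&m->status) & OBJ_STATE_MASK) == OBJ_ALLOCATED;
	if (ok)
		atomic_store(&m->status, OBJ_FREED | OBJ_LOCK_BIT);
	meta_unlock(m);
	return ok;	/* caller would invoke kasan_report() when !ok */
}
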
>>>
>>>
>>> So, I still don't like this; it's way too hacky and complex.
>>> I have some thoughts about how to make this lockless and robust enough.
>>> I'll try to sort this out tomorrow.
>>>
>>
>>
>> So, I think something like this should work.
>> Tested very briefly.
>>
>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>> index ac4b3c4..8691142 100644
>> --- a/include/linux/kasan.h
>> +++ b/include/linux/kasan.h
>> @@ -75,6 +75,8 @@ struct kasan_cache {
>>  int kasan_module_alloc(void *addr, size_t size);
>>  void kasan_free_shadow(const struct vm_struct *vm);
>>
>> +void kasan_init_slab_obj(struct kmem_cache *cache, const void *object);
>> +
>>  size_t ksize(const void *);
>>  static inline void kasan_unpoison_slab(const void *ptr) { ksize(ptr); }
>>
>> @@ -102,6 +104,9 @@ static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
>>  static inline void kasan_poison_object_data(struct kmem_cache *cache,
>>  					void *object) {}
>>
>> +static inline void kasan_init_slab_obj(struct kmem_cache *cache,
>> +					const void *object) { }
>> +
>>  static inline void kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags) {}
>>  static inline void kasan_kfree_large(const void *ptr) {}
>>  static inline void kasan_poison_kfree(void *ptr) {}
>> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
>> index 6845f92..ab0fded 100644
>> --- a/mm/kasan/kasan.c
>> +++ b/mm/kasan/kasan.c
>> @@ -388,11 +388,9 @@ void kasan_cache_create(struct kmem_cache *cache, size_t *size,
>>  	*size += sizeof(struct kasan_alloc_meta);
>>
>>  	/* Add free meta. */
>> -	if (cache->flags & SLAB_DESTROY_BY_RCU || cache->ctor ||
>> -	    cache->object_size < sizeof(struct kasan_free_meta)) {
>> -		cache->kasan_info.free_meta_offset = *size;
>> -		*size += sizeof(struct kasan_free_meta);
>> -	}
>> +	cache->kasan_info.free_meta_offset = *size;
>> +	*size += sizeof(struct kasan_free_meta);
>> +
>
>
> Why?!
> Please don't worsen runtime characteristics of KASAN. We run real
> systems with it.
> Most objects are small. This can lead to significant memory consumption.
>

Yeah, this is a temporary hack actually, because I haven't finished this part yet.
Basically, I want to make the free stack always available (i.e. always save it in the redzone),
because it is always better to have more information. This also makes the bug
report code a bit simpler.

Of course, increasing memory usage is not what we want, so my plan is the following:
- Remove alloc_size, because we already know the object size, i.e. cache->object_size.
For kmalloc()'ed objects, object_size is the rounded-up size, but the exact allocation size
is usually not valuable information (personally, I can't remember it ever being useful).

- Unify the allocation stack and free stack and keep them both in the redzone (see the sketch
below). Together this is exactly 16 bytes, so it won't increase memory usage. Then only the
quarantine pointer needs to be stored in the freed object.

The proposed changes will actually decrease memory usage, because 8-byte objects will occupy less space.
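
(Sketched concretely, the layout described above might look roughly like this; it reuses the kasan_track naming already present in mm/kasan/kasan.h, but the exact structures are an illustration, not the eventual patch.)

#include <linux/types.h>
#include <linux/stackdepot.h>	/* depot_stack_handle_t: 4-byte stack handle */

/* One saved stack trace: which task and which stack. 8 bytes. */
struct kasan_track {
	u32 pid;
	depot_stack_handle_t stack;
};

/*
 * Alloc and free stacks unified in the redzone: together they are
 * exactly 16 bytes, so the per-object redzone does not grow.
 */
struct kasan_alloc_meta {
	struct kasan_track alloc_track;
	struct kasan_track free_track;
};

/* Singly-linked quarantine list node. */
struct qlist_node {
	struct qlist_node *next;
};

/*
 * Only the quarantine link has to live in the freed object itself,
 * and only while the object sits in the quarantine.
 */
struct kasan_free_meta {
	struct qlist_node quarantine_link;
};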





