Date: Mon, 1 Jun 2015
Subject: Re: Crash in crc32_le during kmemleak_scan()
From: Vignesh Radhakrishnan <vigneshr@codeaurora.org>

> On Thu, May 21, 2015 at 06:45:17AM +0100, vigneshr@codeaurora.org wrote:
>> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
>> index 5ec8b71..4455bb8 100644
>> --- a/mm/kmemleak.c
>> +++ b/mm/kmemleak.c
>> @@ -959,7 +959,7 @@ void __ref kmemleak_free(const void *ptr)
>> {
>> pr_debug("%s(0x%p)\n", __func__, ptr);
>>
>> - if (kmemleak_enabled && ptr && !IS_ERR(ptr))
>> + if (kmemleak_enabled && ptr && !IS_ERR(ptr) && !kmemleak_error)
>> delete_object_full((unsigned long)ptr);
>> else if (kmemleak_early_log)
>> log_early(KMEMLEAK_FREE, ptr, 0, 0);
>
> That's the problem we try to avoid, if we block kmemleak_free on
> kmemleak_error (that was the same as the kmemleak_enabled case before),
> scanning may still be in progress for an object but the object unmapped
> by something like vfree.


Oops! Yeah, my bad. Sorry about that.

> So for the error case, we want:
>
> 1. Allow object freeing during a memory scan
> 2. Block kmemleak_free() being entered once the scanning stops and the
> clean-up starts
>
> What I missed is that the clean-up calls delete_object_full() and this
> can race with a kmemleak_free() on the same object. The same could
> probably happen if buggy kernel code would call kfree() on the same
> object from different CPUs. Covering this case is more complicated, I
> have to properly think of the locking.
>
> But assuming that the callers are safe, we need to disable kmemleak
> before the clean-up starts. We can safely set kmemleak_enabled to 0
> after the scanning thread is stopped. So on top of my previous patch:
>
> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
> index dcba05812678..52a38eed50e2 100644
> --- a/mm/kmemleak.c
> +++ b/mm/kmemleak.c
> @@ -1757,16 +1757,15 @@ static void kmemleak_do_cleanup(struct work_struct *work)
> mutex_lock(&scan_mutex);
> stop_scan_thread();
>
> + /* stop any memory operation tracing */
> + kmemleak_enabled = 0;
> +
> if (!kmemleak_found_leaks)
> __kmemleak_do_cleanup();
> else
> pr_info("Kmemleak disabled without freeing internal data. "
> "Reclaim the memory with \"echo clear >
> /sys/kernel/debug/kmemleak\"\n");
> mutex_unlock(&scan_mutex);
> -
> - /* stop any memory operation tracing */
> - kmemleak_enabled = 0;
> -
> }
>
> static DECLARE_WORK(cleanup_work, kmemleak_do_cleanup);
>
>> @@ -1019,7 +1019,7 @@ void __ref kmemleak_not_leak(const void *ptr)
>> {
>> pr_debug("%s(0x%p)\n", __func__, ptr);
>>
>> - if (kmemleak_enabled && ptr && !IS_ERR(ptr))
>> + if (kmemleak_enabled && ptr && !IS_ERR(ptr) && !kmemleak_error)
>> make_gray_object((unsigned long)ptr);
>> else if (kmemleak_early_log)
>> log_early(KMEMLEAK_NOT_LEAK, ptr, 0, 0);
>
> That's needed as well. Actually, all the kmemleak entry points apart
> from kmemleak_free() need to bail out on kmemleak_error (e.g.
> kmemleak_ignore).
>
> So I think we need a separate kmemleak_free_enabled. Can you try the
> patch below against mainline please (so revert the previous one)? I
> haven't bothered with kmemleak_free_part() since this is only called
> during early memblock allocations, so we don't have any scanning thread
> running.


Ah! Thanks for the explanation. This makes sense.
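
Just to double-check that I am reading the final approach correctly, here is
a rough sketch of my understanding (my own summary against mainline, not
taken verbatim from your patch): kmemleak_free() is now gated on the new
kmemleak_free_enabled flag, which is only cleared after stop_scan_thread()
returns in kmemleak_do_cleanup(), while the other callbacks such as
kmemleak_ignore() remain gated on kmemleak_enabled, which kmemleak_disable()
clears straight away:

void __ref kmemleak_free(const void *ptr)
{
	/* frees are still traced while the scan thread may be running */
	if (kmemleak_free_enabled && ptr && !IS_ERR(ptr))
		delete_object_full((unsigned long)ptr);
	else if (kmemleak_early_log)
		log_early(KMEMLEAK_FREE, ptr, 0, 0);
}

void __ref kmemleak_ignore(const void *ptr)
{
	/* bails out as soon as kmemleak_disable() clears kmemleak_enabled */
	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
		make_black_object((unsigned long)ptr);
	else if (kmemleak_early_log)
		log_early(KMEMLEAK_IGNORE, ptr, 0, 0);
}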

> BTW, I'll be on holiday for a week, back on the 1st of June.
>
> ----8<------------------
>
> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
> index 5405aff5a590..7913386ca506 100644
> --- a/mm/kmemleak.c
> +++ b/mm/kmemleak.c
> @@ -194,6 +194,8 @@ static struct kmem_cache *scan_area_cache;
>
> /* set if tracing memory operations is enabled */
> static int kmemleak_enabled;
> +/* same as above but only for the kmemleak_free() callback */
> +static int kmemleak_free_enabled;
> /* set in the late_initcall if there were no errors */
> static int kmemleak_initialized;
> /* enables or disables early logging of the memory operations */
> @@ -941,7 +943,7 @@ void __ref kmemleak_free(const void *ptr)
> {
> pr_debug("%s(0x%p)\n", __func__, ptr);
>
> - if (kmemleak_enabled && ptr && !IS_ERR(ptr))
> + if (kmemleak_free_enabled && ptr && !IS_ERR(ptr))
> delete_object_full((unsigned long)ptr);
> else if (kmemleak_early_log)
> log_early(KMEMLEAK_FREE, ptr, 0, 0);
> @@ -981,7 +983,7 @@ void __ref kmemleak_free_percpu(const void __percpu *ptr)
>
> pr_debug("%s(0x%p)\n", __func__, ptr);
>
> - if (kmemleak_enabled && ptr && !IS_ERR(ptr))
> + if (kmemleak_free_enabled && ptr && !IS_ERR(ptr))
> for_each_possible_cpu(cpu)
> delete_object_full((unsigned long)per_cpu_ptr(ptr,
> cpu));
> @@ -1749,6 +1751,12 @@ static void kmemleak_do_cleanup(struct work_struct *work)
> mutex_lock(&scan_mutex);
> stop_scan_thread();
>
> + /*
> + * Once the scan thread has stopped, it is safe to no longer track
> + * object freeing.
> + */
> + kmemleak_free_enabled = 0;
> +
> if (!kmemleak_found_leaks)
> __kmemleak_do_cleanup();
> else
> @@ -1775,6 +1783,8 @@ static void kmemleak_disable(void)
> /* check whether it is too early for a kernel thread */
> if (kmemleak_initialized)
> schedule_work(&cleanup_work);
> + else
> + kmemleak_free_enabled = 0;
>
> pr_info("Kernel memory leak detector disabled\n");
> }
> @@ -1839,8 +1849,10 @@ void __init kmemleak_init(void)
> if (kmemleak_error) {
> local_irq_restore(flags);
> return;
> - } else
> + } else {
> kmemleak_enabled = 1;
> + kmemleak_free_enabled = 1;
> + }
> local_irq_restore(flags);
>
> /*
>
> -------------------8<-----------------
>

We have tested the patch provided above and got a clean report, with
none of the crashes that were seen earlier. I guess we can go ahead
with this one if it's okay with you.

Tested-by: Vignesh Radhakrishnan <vigneshr@codeaurora.org>

Thanks and regards,
Vignesh Radhakrishnan

--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux
Foundation Collaborative Project




