Subject: Re: [PATCH] locking/lockdep: Fix potential buffer overrun problem in stack_trace[]
From: Waiman Long <longman@redhat.com>
Date: 2019-12-20
On 12/19/19 9:57 PM, Bart Van Assche wrote:
> On 2019-12-19 10:28, Waiman Long wrote:
>> If the lockdep code is really running out of the stack_trace entries,
>> there is a possiblity that buffer overrun can happen and corrupt the
>             ^^^^^^^^^^
> possibility?
>> data immediately after stack_trace[].
>>
>> If there are fewer than LOCK_TRACE_SIZE_IN_LONGS entries left when
>> save_trace() is called, the max_entries computation wraps around to
>> a very large positive number because of its unsigned type. The
>> subsequent call to stack_trace_save() will then corrupt the data after
>> stack_trace[]. Fix that by changing max_entries to a signed integer
>> and checking for a negative value before calling stack_trace_save().
>>
>> Fixes: 12593b7467f9 ("locking/lockdep: Reduce space occupied by stack traces")
>> Signed-off-by: Waiman Long <longman@redhat.com>
>> ---
>> kernel/locking/lockdep.c | 7 +++----
>> 1 file changed, 3 insertions(+), 4 deletions(-)
>>
>> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
>> index 32282e7112d3..56e260a7582f 100644
>> --- a/kernel/locking/lockdep.c
>> +++ b/kernel/locking/lockdep.c
>> @@ -482,7 +482,7 @@ static struct lock_trace *save_trace(void)
>>          struct lock_trace *trace, *t2;
>>          struct hlist_head *hash_head;
>>          u32 hash;
>> -        unsigned int max_entries;
>> +        int max_entries;
>>
>>          BUILD_BUG_ON_NOT_POWER_OF_2(STACK_TRACE_HASH_SIZE);
>>          BUILD_BUG_ON(LOCK_TRACE_SIZE_IN_LONGS >= MAX_STACK_TRACE_ENTRIES);
>> @@ -490,10 +490,8 @@ static struct lock_trace *save_trace(void)
>>          trace = (struct lock_trace *)(stack_trace + nr_stack_trace_entries);
>>          max_entries = MAX_STACK_TRACE_ENTRIES - nr_stack_trace_entries -
>>                  LOCK_TRACE_SIZE_IN_LONGS;
>> -        trace->nr_entries = stack_trace_save(trace->entries, max_entries, 3);
>>
>> -        if (nr_stack_trace_entries >= MAX_STACK_TRACE_ENTRIES -
>> -            LOCK_TRACE_SIZE_IN_LONGS - 1) {
>> +        if (max_entries < 0) {
>>                  if (!debug_locks_off_graph_unlock())
>>                          return NULL;
>>
>> @@ -502,6 +500,7 @@ static struct lock_trace *save_trace(void)
>>
>>                  return NULL;
>>          }
>> +        trace->nr_entries = stack_trace_save(trace->entries, max_entries, 3);
>>
>>          hash = jhash(trace->entries, trace->nr_entries *
>>                       sizeof(trace->entries[0]), 0);
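
For illustration, here is a minimal userspace sketch of the wraparound
described in the changelog above. It is not kernel code; the constant
values and the nearly-full counter value are made up for the example:

#include <stdio.h>

/* Stand-ins for the lockdep constants; values are illustrative only. */
#define MAX_STACK_TRACE_ENTRIES         128
#define LOCK_TRACE_SIZE_IN_LONGS        8

int main(void)
{
        /* Pretend the stack_trace[] buffer is nearly full. */
        unsigned long nr_stack_trace_entries = 125;

        /* Unsigned arithmetic: 128 - 125 - 8 wraps to a huge number. */
        unsigned int bad_max = MAX_STACK_TRACE_ENTRIES -
                               nr_stack_trace_entries -
                               LOCK_TRACE_SIZE_IN_LONGS;

        /*
         * Computed in signed arithmetic, the same expression simply
         * goes negative, which is what the patch's "max_entries < 0"
         * check can then catch.
         */
        int good_max = MAX_STACK_TRACE_ENTRIES -
                       (int)nr_stack_trace_entries -
                       LOCK_TRACE_SIZE_IN_LONGS;

        printf("unsigned max_entries = %u\n", bad_max);  /* 4294967291 */
        printf("signed   max_entries = %d\n", good_max); /* -5 */
        return 0;
}

Passing that huge unsigned value as the size argument is what would let
stack_trace_save() write far past the end of stack_trace[].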
> I'm not sure whether it is useful to call stack_trace_save() if
> max_entries == 0. How about changing the "max_entries < 0" test into
> "max_entries <= 0"?

I have actually added some instrumentation code to check the
distribution of stack trace lengths. I did get hits (about 40) on
zero-length stack traces after system bootup, but I am fine with
changing the test to <= 0.

Cheers,
Longman
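
For reference, a rough sketch of how the tail of save_trace() would
read with the "<= 0" check suggested above folded in. This is not the
final committed patch, and the error-reporting lines between the two
hunks stay elided here, as they are in the diff:

        max_entries = MAX_STACK_TRACE_ENTRIES - nr_stack_trace_entries -
                LOCK_TRACE_SIZE_IN_LONGS;

        if (max_entries <= 0) {
                if (!debug_locks_off_graph_unlock())
                        return NULL;

                /* ... error reporting elided, as in the hunks above ... */

                return NULL;
        }
        trace->nr_entries = stack_trace_save(trace->entries, max_entries, 3);

The only change relative to the patch above is "<" becoming "<=", so a
pointless zero-length stack_trace_save() call is skipped as well.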

