Subject: Re: [PATCHv3 1/1] locking/qspinlock/x86: Avoid test-and-set when PV_DEDICATED is set
2017-11-10 15:59 GMT+08:00 Peter Zijlstra <peterz@infradead.org>:
> On Fri, Nov 10, 2017 at 10:07:56AM +0800, Wanpeng Li wrote:
>
>> >> Also, you should not put cpumask_t on stack, that's 'broken'.
>>
>> Thanks for pointing this out. I found a useful comment in arch/x86/kernel/irq.c:
>>
>> /*
>>  * These two declarations are only used in check_irq_vectors_for_cpu_disable()
>>  * below, which is protected by stop_machine(). Putting them on the stack
>>  * results in a stack frame overflow. Dynamically allocating could result in a
>>  * failure so declare these two cpumasks as global.
>>  */
>> static struct cpumask affinity_new, online_new;
>
> That code no longer exists. Also not entirely sure how it would be
> helpful.
>
> What you probably want to do is have a per-cpu cpumask, since
> flush_tlb_others() is called with preemption disabled. But you probably
> don't want an unconditionally allocated one, since most kernels will not
> in fact be PV.
>
> So you'll want something like:
>
> static DEFINE_PER_CPU(cpumask_var_t, __pv_tlb_mask);
>
> And then you need something like:
>
> 	for_each_possible_cpu(cpu) {
> 		zalloc_cpumask_var_node(per_cpu_ptr(&__pv_tlb_mask, cpu),
> 					GFP_KERNEL, cpu_to_node(cpu));
> 	}
>
> before you set the pv-op or so.

Thanks Peterz. :) I will give it a try.

Regards,
Wanpeng Li
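
For reference, a minimal sketch of how the pieces Peter describes could
fit together, assuming a hypothetical kvm_flush_tlb_others() pv-op hooked
from the guest's init path; the __pv_tlb_mask name is taken from his
snippet, everything else is illustrative rather than the actual patch:

#include <linux/cpumask.h>
#include <linux/percpu.h>
#include <linux/gfp.h>
#include <linux/topology.h>
#include <asm/paravirt.h>
#include <asm/tlbflush.h>

/*
 * Illustrative sketch only: kvm_flush_tlb_others() and
 * kvm_setup_pv_tlb_flush() are hypothetical names, not the patch
 * under discussion.
 */

/* One mask per CPU, only populated when running as a PV guest. */
static DEFINE_PER_CPU(cpumask_var_t, __pv_tlb_mask);

static void kvm_flush_tlb_others(const struct cpumask *cpumask,
				 const struct flush_tlb_info *info)
{
	/*
	 * flush_tlb_others() is called with preemption disabled, so
	 * this CPU's mask cannot be pulled out from under us.
	 */
	struct cpumask *flushmask = this_cpu_cpumask_var_ptr(__pv_tlb_mask);

	cpumask_copy(flushmask, cpumask);
	/* ... drop vCPUs the hypervisor reports as preempted ... */
	native_flush_tlb_others(flushmask, info);
}

static void __init kvm_setup_pv_tlb_flush(void)
{
	int cpu;

	/* Allocate for every possible CPU before installing the pv-op. */
	for_each_possible_cpu(cpu) {
		zalloc_cpumask_var_node(per_cpu_ptr(&__pv_tlb_mask, cpu),
					GFP_KERNEL, cpu_to_node(cpu));
	}

	pv_mmu_ops.flush_tlb_others = kvm_flush_tlb_others;
}

Allocating up front for every possible CPU means the flush path itself
never has to allocate (or handle allocation failure) while preemption
is disabled.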
