From: Mathieu Desnoyers
Date: Wed, 4 Sep 2019
Subject: Re: [RFC PATCH 1/2] Fix: sched/membarrier: p->mm->membarrier_state racy load
----- On Sep 4, 2019, at 7:49 AM, Peter Zijlstra peterz@infradead.org wrote:

> On Wed, Sep 04, 2019 at 01:28:19PM +0200, Peter Zijlstra wrote:
>> @@ -196,6 +198,17 @@ static int membarrier_register_global_expedited(void)
>>  		 */
>>  		smp_mb();
>>  	} else {
>> +		struct task_struct *g, *t;
>> +
>> +		read_lock(&tasklist_lock);
>> +		do_each_thread(g, t) {
>> +			if (t->mm == mm) {
>> +				atomic_or(MEMBARRIER_STATE_GLOBAL_EXPEDITED,
>> +					  &t->membarrier_state);
>> +			}
>> +		} while_each_thread(g, t);
>> +		read_unlock(&tasklist_lock);
>> +
>>  		/*
>>  		 * For multi-mm user threads, we need to ensure all
>>  		 * future scheduler executions will observe the new
>
> Arguably, because this is exposed to unpriv users and is a potential
> preemption latency issue, we could do it in 3 passes:
>
> - RCU, mark all found lacking, count
> - RCU, mark all found lacking, count
> - if the last pass still counted any, take tasklist_lock
>
> That way, it becomes much harder to trigger the bad case.
>
> Do we worry about that?
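
For concreteness, the three-pass scheme Peter describes might look
roughly like the sketch below. This is an untested illustration, not
code from any posted patch: for_each_process_thread(), rcu_read_lock()
and tasklist_lock are existing kernel APIs, but the helper split and
the membarrier_mark_threads*() names are hypothetical.

/*
 * Sketch: one RCU pass over all threads of @mm, marking any thread
 * that does not yet carry MEMBARRIER_STATE_GLOBAL_EXPEDITED, and
 * returning how many were found lacking.
 */
static int membarrier_mark_threads_rcu(struct mm_struct *mm)
{
	struct task_struct *g, *t;
	int missed = 0;

	rcu_read_lock();
	for_each_process_thread(g, t) {
		if (t->mm == mm &&
		    !(atomic_read(&t->membarrier_state) &
		      MEMBARRIER_STATE_GLOBAL_EXPEDITED)) {
			atomic_or(MEMBARRIER_STATE_GLOBAL_EXPEDITED,
				  &t->membarrier_state);
			missed++;
		}
	}
	rcu_read_unlock();
	return missed;
}

static void membarrier_mark_threads(struct mm_struct *mm)
{
	struct task_struct *g, *t;

	/* Passes 1 and 2: preemptible, no tasklist_lock held. */
	membarrier_mark_threads_rcu(mm);
	if (!membarrier_mark_threads_rcu(mm))
		return;

	/*
	 * Pass 3: the second RCU pass still found unmarked threads
	 * (e.g. due to concurrent clone()), so take tasklist_lock to
	 * get a stable view.
	 */
	read_lock(&tasklist_lock);
	for_each_process_thread(g, t) {
		if (t->mm == mm)
			atomic_or(MEMBARRIER_STATE_GLOBAL_EXPEDITED,
				  &t->membarrier_state);
	}
	read_unlock(&tasklist_lock);
}

The point being that a thread missed by the first pass is almost
always caught by the second, so the tasklist_lock walk only runs in a
pathological race window.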

Allowing unprivileged processes to iterate over all processes/threads
with the tasklist lock held is something I try to avoid.
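
For context, the racy load named in the subject sits on the membarrier
fast path: membarrier_global_expedited() inspects each remote CPU's
current task and, before this series, dereferenced that task's mm,
which is not guaranteed to still exist. A simplified before/after
sketch (hand-written here for illustration, not quoted from patch 2/2;
the fallback/IPI bookkeeping of the real function is elided):

/* Simplified sketch of the CPU scan in membarrier_global_expedited(). */
static void membarrier_scan_cpus(struct cpumask *tmpmask)
{
	int cpu;

	for_each_online_cpu(cpu) {
		struct task_struct *p;

		rcu_read_lock();
		p = task_rcu_dereference(&cpu_rq(cpu)->curr);
		/*
		 * Racy before the fix: dereferencing p->mm here is
		 * unsafe, because the remote task can exit and drop
		 * its mm concurrently:
		 *
		 *   if (p && p->mm &&
		 *       (atomic_read(&p->mm->membarrier_state) &
		 *        MEMBARRIER_STATE_GLOBAL_EXPEDITED))
		 *
		 * With a per-task copy set at registration time, the
		 * check only touches p itself, whose task_struct is
		 * RCU-protected:
		 */
		if (p && (atomic_read(&p->membarrier_state) &
			  MEMBARRIER_STATE_GLOBAL_EXPEDITED))
			__cpumask_set_cpu(cpu, tmpmask);
		rcu_read_unlock();
	}
}

This is why registration has to mark every thread of the mm, as the
hunk quoted above does: the reader side no longer goes through mm at
all.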

Thanks,

Mathieu


--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com