Subject: Re: [PATCH] sched: Fix race between task_group and sched_task_group
On 29.10.2014 01:52, Oleg Nesterov wrote:
> On 10/28, Kirill Tkhai wrote:
>>
>> Shouldn't we do that in separate patch? How about this?
>
> Up to Peter, but I think a separate patch is fine.
>
>> [PATCH] sched: Remove lockdep check in sched_move_task()
>>
>> sched_move_task() is the only interface to change sched_task_group:
>> cpu_cgrp_subsys methods and autogroup_move_group() use it.
>
> Yes, but...
>
>> Everything is synchronized by task_rq_lock(), so cpu_cgroup_attach()
>> is ordered with other users of sched_move_task(). This means we do
>> no need RCU here: if we've dereferenced a tg here, the .attach method
>> hasn't been called for it yet.
>>
>> Thus, we should pass "true" to task_css_check() to silence lockdep
>> warnings.
>
> In theory, I am not sure.
>
> However, I never really understood this code and today I forgot everything,
> please correct me.
>
>> @@ -7403,8 +7403,12 @@ void sched_move_task(struct task_struct *tsk)
>> if (unlikely(running))
>> put_prev_task(rq, tsk);
>>
>> - tg = container_of(task_css_check(tsk, cpu_cgrp_id,
>> - lockdep_is_held(&tsk->sighand->siglock)),
>> + /*
>> + * All callers are synchronized by task_rq_lock(); we do not use RCU
>> + * which is pointless here. Thus, we pass "true" to task_css_check()
>> + * to prevent lockdep warnings.
>> + */
>> + tg = container_of(task_css_check(tsk, cpu_cgrp_id, true),
>> struct task_group, css);
>
> Why can't this race with cgroup_task_migrate() if it is called by
> cgroup_post_fork() ?

It can race, but what problem is there? The only consequence is that
cgroup_post_fork()'s or ss->attach()'s call of sched_move_task() will be
a NOOP, as the interleaving below shows:

  attach/migration path                        fork path
  ---------------------                        ---------
  cgroup_migrate_add_src()

  cgroup_task_migrate()
                                               cgroup_post_fork();
      rcu_assign_pointer(tsk->cgroups, new_cset);
                                                   sched_move_task();
  css->ss->attach(css, &tset);
      sched_move_task();

  cgroup_migrate_finish()
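
For reference, a heavily abridged sketch of the relevant part of
sched_move_task() (the task_css_check() line is from the patch above; the
other helpers are from memory of the code of that era, so details may
differ). The point is that the whole body runs under the rq lock and just
re-installs whatever tsk->cgroups points to at that moment, so the call
that loses the race is simply overwritten by the later one:

	/* Abridged sketch, not the verbatim kernel function. */
	void sched_move_task(struct task_struct *tsk)
	{
		struct task_group *tg;
		unsigned long flags;
		struct rq *rq;

		rq = task_rq_lock(tsk, &flags);		/* serializes every mover */

		/* re-read the task's css under the rq lock */
		tg = container_of(task_css_check(tsk, cpu_cgrp_id, true),
				  struct task_group, css);
		tg = autogroup_task_group(tsk, tg);
		tsk->sched_task_group = tg;		/* idempotent assignment */

		set_task_rq(tsk, task_cpu(tsk));	/* switch cfs_rq/rt_rq pointers */

		task_rq_unlock(rq, tsk, &flags);
	}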

> And cgroup_task_migrate() can free ->cgroups via call_rcu(). Of course,
> in practice raw_spin_lock_irq() should also act as rcu_read_lock(), but
> we should not rely on implementation details.

Do you mean cgroup_task_migrate()->put_css_set_locked()? Freeing is not
possible there, because old_cset->refcount is larger than 1: we take a
reference in cgroup_migrate_add_src(), and the real freeing can only happen
in cgroup_migrate_finish(). Those two functions bracket cgroup_task_migrate().
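
To spell the bracketing out (a sketch of the ordering only, not compilable
in isolation; get_css_set()/put_css_set() are assumed helper names, the rest
are the functions named above):

	get_css_set(old_cset);			/* cgroup_migrate_add_src(): take a reference */

	/* cgroup_task_migrate(): */
	rcu_assign_pointer(tsk->cgroups, new_cset);
	put_css_set_locked(old_cset);		/* refcount stays >= 1, nothing is freed here */

	/* ... ->attach() runs, sched_move_task() re-reads tsk->cgroups ... */

	put_css_set(old_cset);			/* cgroup_migrate_finish(): the last reference may go now */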

> task_group = tsk->cgroups[cpu_cgrp_id] can't go away because yes, if we
> race with migrate then ->attach() was not called. But it seems that in
> theory it is not safe to dereference tsk->cgroups.

old_cset can't be freed in cgroup_task_migrate(), so we can safely
dereference it. If we picked up old_cset in
cgroup_post_fork()->sched_move_task(), the right sched_task_group will be
installed later by ->attach()->sched_move_task().

Kirill

