From: Ben Blum
Subject: Re: + cgroups-more-safe-tasklist-locking-in-cgroup_attach_proc.patch added to -mm tree
On Thu, Sep 08, 2011 at 07:35:59PM +0200, Oleg Nesterov wrote:
> On 09/07, Ben Blum wrote:
> >
> > On Fri, Sep 02, 2011 at 05:55:34PM +0200, Oleg Nesterov wrote:
> > > On 09/02, Ben Blum wrote:
> > > >
> > > > But I don't think the check becomes pointless? If a sub-thread execs
> > > > right before read_lock(&tasklist_lock) (but after the find_task_by_vpid
> > > > in attach_task_by_pid), that is exactly the case the comment refers to.
> > >
> > > How so? The comment says:
> > >
> > > * a race with de_thread from another thread's exec() may strip
> > > * us of our leadership, making while_each_thread unsafe
> > >
> > > This is not true.
> >
> > Sorry, the comment is unclear.
>
> No, the comment is clear. In fact it was me who pointed out we can't
> do while_each_thread() blindly. And now I have tried to confuse you ;)
>
> So, sorry for the noise, and thanks for correcting me. Somehow I forgot
> this is not safe even under tasklist_lock.
>
> Partly I was confused because I was thinking about the patch I suggested:
> if we use ->siglock we are safe. If lock_task_sighand(task) succeeds,
> the task must still be on the thread list.
>
> Anyway, I was wrong, sorry.
>
> Oleg.

All right, no problem.
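
To spell out the interleaving for the archives (this is my understanding
of the race, not something new):

    cgroup_attach_proc()                 a sub-thread T of leader L
    --------------------                 --------------------------
    leader = find_task_by_vpid(pid)
                                         exec() -> de_thread():
                                           T takes over as leader, and
                                           L gets unhashed from the
                                           thread-group list
    read_lock(&tasklist_lock)
    while_each_thread(leader, tsk)
      starts from a task that is no
      longer on the list, so the walk
      is unsafe even under tasklist_lock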

As for the patch below (which is the same as it was last time?): did you
mean for Andrew to replace the old tasklist_lock patch with this one, or
should one of us rewrite this against it? Either way, it should have
something like the comment I proposed in the first thread.
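
For anyone following along, the locking pattern the new version relies on
looks roughly like this (a sketch only, not the patch itself; use() is a
hypothetical stand-in for the per-thread work):

        unsigned long flags;
        struct task_struct *tsk = leader;

        if (!lock_task_sighand(leader, &flags))
                return -EAGAIN;         /* whole group already exited */

        do {
                /*
                 * Safe: __exit_signal() needs ->siglock to unhash a
                 * thread, so the list can't change under us, and
                 * lock_task_sighand() succeeding means leader is
                 * still on it.
                 */
                use(tsk);               /* hypothetical per-thread work */
        } while_each_thread(leader, tsk);

        unlock_task_sighand(leader, &flags);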

Thanks,
Ben

>
> --- x/kernel/cgroup.c
> +++ x/kernel/cgroup.c
> @@ -2000,6 +2000,7 @@ int cgroup_attach_proc(struct cgroup *cg
>          /* threadgroup list cursor and array */
>          struct task_struct *tsk;
>          struct flex_array *group;
> +        unsigned long flags;
>          /*
>           * we need to make sure we have css_sets for all the tasks we're
>           * going to move -before- we actually start moving them, so that in
> @@ -2027,19 +2028,10 @@ int cgroup_attach_proc(struct cgroup *cg
>                  goto out_free_group_list;
>
>          /* prevent changes to the threadgroup list while we take a snapshot. */
> -        rcu_read_lock();
> -        if (!thread_group_leader(leader)) {
> -                /*
> -                 * a race with de_thread from another thread's exec() may strip
> -                 * us of our leadership, making while_each_thread unsafe to use
> -                 * on this task. if this happens, there is no choice but to
> -                 * throw this task away and try again (from cgroup_procs_write);
> -                 * this is "double-double-toil-and-trouble-check locking".
> -                 */
> -                rcu_read_unlock();
> -                retval = -EAGAIN;
> +        retval = -EAGAIN;
> +        if (!lock_task_sighand(leader, &flags))
>                  goto out_free_group_list;
> -        }
> +
>          /* take a reference on each task in the group to go in the array. */
>          tsk = leader;
>          i = 0;
> @@ -2055,9 +2047,9 @@ int cgroup_attach_proc(struct cgroup *cg
>                  BUG_ON(retval != 0);
>                  i++;
>          } while_each_thread(leader, tsk);
> +        unlock_task_sighand(leader, &flags);
>          /* remember the number of threads in the array for later. */
>          group_size = i;
> -        rcu_read_unlock();
>
>          /*
>           * step 1: check that we can legitimately attach to the cgroup.
>
>

