    Subject: Re: [PATCH] sched: fix another race when reading /proc/sched_debug
    On Mon, 2008-12-15 at 09:25 +0800, Li Zefan wrote:
    > Peter Zijlstra wrote:
    > > On Sun, 2008-12-14 at 10:54 +0800, Li Zefan wrote:
    > >>> i merged it up in tip/master, could you please check whether it's ok?
    > >>>
    > >> Sorry, this patch avoids accessing a half-created cgroup, but I found that the
    > >> current code may access a cgroup which has already been destroyed.
    > >>
    > >> The simplest fix is to take cgroup_lock() before for_each_leaf_cfs_rq.
    > >>
    > >> Could you revert this patch and apply the following new one? My box has
    > >> survived for 16 hours with it applied.
    > >>
    > >> ==========
    > >>
    > >> From: Li Zefan <lizf@cn.fujitsu.com>
    > >> Date: Sun, 14 Dec 2008 09:53:28 +0800
    > >> Subject: [PATCH] sched: fix another race when reading /proc/sched_debug
    > >>
    > >> I fixed an oops with the following commit:
    > >>
    > >> | commit 24eb089950ce44603b30a3145a2c8520e2b55bb1
    > >> | Author: Li Zefan <lizf@cn.fujitsu.com>
    > >> | Date: Thu Nov 6 12:53:32 2008 -0800
    > >> |
    > >> | cgroups: fix invalid cgrp->dentry before cgroup has been completely removed
    > >> |
    > >> | This fixes an oops when reading /proc/sched_debug.
    > >>
    > >> The above commit fixed a race in which reading /proc/sched_debug could access a
    > >> NULL cgrp->dentry if a cgroup was being removed (via cgroup_rmdir) but had not
    > >> yet been destroyed (via cgroup_diput).
    > >>
    > >> But I found another, different race: reading sched_debug may access a cgroup
    > >> which is still being created or has already been destroyed, and thus
    > >> dereference a NULL cgrp->dentry!
    > >>
    > >> A task_group is added to the global list while its cgroup is being created,
    > >> and is removed from the list while the cgroup is being destroyed. So walking
    > >> the list must be protected by cgroup_lock() whenever cgroup data will be
    > >> accessed (here, by calling cgroup_path()).
    > >
    > > Can't we detect a dead task-group and skip those instead of adding this
    > > global lock?
    > >
    >
    > I tried it, but I don't think it's feasible without lock synchronization:
    >
    >                               | print_cfs_rq()
    >                               |   check task_group is dead
    >   cgroup_diput()              |
    >     ..                        |
    >     mark task_group as dead   |
    >     ..                        |
    >     kfree(cgrp)               |
    >                               | call cgroup_path()

    rcu free cgrp
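    For reference, a minimal sketch of the locking approach described in the quoted
    patch, assuming the 2.6.28-era print_cfs_stats() walker in kernel/sched_fair.c;
    this illustrates the idea only and is not the actual patch:

    /* Sketch only: take cgroup_lock() around the leaf cfs_rq walk so that
     * cgroup_path() in print_cfs_rq() cannot run against a cgroup that is
     * still being created or is already being destroyed.  Assumes kernel
     * context with <linux/cgroup.h> available; this is reached from a
     * seq_file read in process context, so sleeping on cgroup_mutex is fine. */
    void print_cfs_stats(struct seq_file *m, int cpu)
    {
            struct cfs_rq *cfs_rq;

            cgroup_lock();          /* excludes cgroup creation/removal */
            rcu_read_lock();
            for_each_leaf_cfs_rq(cpu_rq(cpu), cfs_rq)
                    print_cfs_rq(m, cpu, cfs_rq);
            rcu_read_unlock();
            cgroup_unlock();
    }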

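    And a rough sketch of the "rcu free cgrp" alternative suggested above: defer the
    final kfree() of the cgroup through RCU, so a reader that found the task_group
    under rcu_read_lock() can still call cgroup_path() without touching freed memory.
    The rcu_head field and the free_cgroup_rcu() callback are hypothetical additions,
    not existing 2.6.28 code:

    /* Sketch only: in kernel/cgroup.c, give struct cgroup an rcu_head
     * (hypothetical new field) and defer the free from cgroup_diput(). */
    static void free_cgroup_rcu(struct rcu_head *head)
    {
            struct cgroup *cgrp = container_of(head, struct cgroup, rcu_head);

            kfree(cgrp);
    }

    /* ... then in cgroup_diput(), instead of calling kfree(cgrp) directly: */
            call_rcu(&cgrp->rcu_head, free_cgroup_rcu);

    Readers would still need to stay inside rcu_read_lock() across cgroup_path(),
    which the for_each_leaf_cfs_rq() walk above already does.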



