Subject: Re: query: [PATCH 2/2] cgroup: Remove call to synchronize_rcu in cgroup_attach_task
From: Mike Galbraith <>
Date: Wed, 13 Apr 2011 18:56:59 +0200
On Wed, 2011-04-13 at 15:16 +0200, Paul Menage wrote:
> On Wed, Apr 13, 2011 at 5:11 AM, Mike Galbraith <efault@gmx.de> wrote:
> > If the user _does_ that rmdir(), it's more or less back to square one.
> > RCU grace periods should not impact userland, but if you try to do
> > create/attach/detach/destroy, you run into the same bottleneck, as does
> > any asynchronous GC, though that's not such a poke in the eye.  I tried
> > a straight forward move to schedule_work(), and it seems to work just
> > fine.  rmdir() no longer takes ~30ms on my box, but closer to 20us.
> >
> > +	/*
> > +	 * Release the subsystem state objects.
> > +	 */
> > +	for_each_subsys(cgrp->root, ss)
> > +		ss->destroy(ss, cgrp);
> > +
> > +	cgrp->root->number_of_cgroups--;
> > +	mutex_unlock(&cgroup_mutex);
> > +
> > +	/*
> > +	 * Drop the active superblock reference that we took when we
> > +	 * created the cgroup
> > +	 */
> > +	deactivate_super(cgrp->root->sb);
> > +
> > +	/*
> > +	 * if we're getting rid of the cgroup, refcount should ensure
> > +	 * that there are no pidlists left.
> > +	 */
> > +	BUG_ON(!list_empty(&cgrp->pidlists));
> > +
> > +	kfree(cgrp);
>
> We might want to punt this through RCU again, in case the subsystem
> destroy() callbacks left anything around that was previously depending
> on the RCU barrier.
>
> Also, I'd be concerned that subsystems might get confused by the fact
> that a new group called 'foo' could be created before the old 'foo'
> has been cleaned up?  (And do any subsystems rely on being able to
> access the cgroup dentry up until the point when destroy() is called?
Yeah, I already have head-scratching sessions planned for these; that's why I said it 'seems' to work fine, and why it's Not-signed-off-by: :)
-Mike
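
For reference, a minimal sketch of the "punt through RCU, then a workqueue"
teardown pattern Paul suggests above, using a hypothetical struct foo rather
than the real cgroup code: call_rcu() returns immediately, its callback runs
after the grace period and hands the final teardown to process context via
schedule_work(), so the caller (e.g. the rmdir() path) never blocks in
synchronize_rcu().

/*
 * Sketch only: "struct foo" stands in for the object being torn down;
 * the real cgroup code differs.
 */
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct foo {
	struct rcu_head rcu;			/* for call_rcu() */
	struct work_struct destroy_work;	/* for schedule_work() */
	/* ... subsystem state ... */
};

static void foo_destroy_work(struct work_struct *work)
{
	struct foo *f = container_of(work, struct foo, destroy_work);

	/* Process context: sleeping teardown work is safe here. */
	kfree(f);
}

static void foo_destroy_rcu(struct rcu_head *head)
{
	struct foo *f = container_of(head, struct foo, rcu);

	/* Grace period has elapsed; defer the rest to a workqueue. */
	INIT_WORK(&f->destroy_work, foo_destroy_work);
	schedule_work(&f->destroy_work);
}

static void foo_release(struct foo *f)
{
	/* Asynchronous: returns immediately instead of blocking. */
	call_rcu(&f->rcu, foo_destroy_rcu);
}

Whether the subsystem destroy() callbacks also need to run after the grace
period (Paul's concern above) depends on what they tear down; this sketch
only defers the final kfree().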