Subject: Re: [PATCH 1/2] memcg: dirty pages accounting and limiting infrastructure
On Mon, Feb 22, 2010 at 09:22:42AM +0900, KAMEZAWA Hiroyuki wrote:

[snip]

> > +static unsigned long get_dirty_bytes(struct mem_cgroup *memcg)
> > +{
> > +	struct cgroup *cgrp = memcg->css.cgroup;
> > +	unsigned long dirty_bytes;
> > +
> > +	/* root ? */
> > +	if (cgrp->parent == NULL)
> > +		return vm_dirty_bytes;
> 
> We have the mem_cgroup_is_root() macro for this.
> 
> > +
> > +	spin_lock(&memcg->reclaim_param_lock);
> > +	dirty_bytes = memcg->dirty_bytes;
> > +	spin_unlock(&memcg->reclaim_param_lock);
> > +
> > +	return dirty_bytes;
> > +}
> Hmm... do we need the spinlock here? dirty_bytes is an "unsigned long",
> so a plain read or write of it is atomic as long as it is not a
> read-modify-write.

I think I simply copy&pasted the memcg->swappiness case. But I agree, a
plain read/write of an unsigned long should be atomic, so the lock can go.
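
For reference, a minimal sketch of how the helper could look with both
review comments applied (mem_cgroup_is_root() instead of the explicit
parent check, and no spinlock around the word-sized read); the
dirty_bytes field layout is assumed from this patch, not from mainline:

static unsigned long get_dirty_bytes(struct mem_cgroup *memcg)
{
	/* Use the existing helper instead of checking cgrp->parent. */
	if (mem_cgroup_is_root(memcg))
		return vm_dirty_bytes;

	/*
	 * dirty_bytes is an unsigned long; an aligned, word-sized load
	 * is atomic on its own, so reclaim_param_lock is not needed for
	 * a lookup that is not part of a read-modify-write.
	 */
	return memcg->dirty_bytes;
}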

-Andrea

