Date: Fri, 29 Jan 2016
From: Johannes Weiner
Subject: Re: [PATCH 1/5] mm: memcontrol: generalize locking for the page->mem_cgroup binding
On Wed, Jan 27, 2016 at 05:30:45PM +0300, Vladimir Davydov wrote:
> On Tue, Jan 26, 2016 at 04:00:02PM -0500, Johannes Weiner wrote:
>
> > @@ -683,17 +683,17 @@ int __set_page_dirty_buffers(struct page *page)
> >  		} while (bh != head);
> >  	}
> >  	/*
> > -	 * Use mem_group_begin_page_stat() to keep PageDirty synchronized with
> > -	 * per-memcg dirty page counters.
> > +	 * Lock out page->mem_cgroup migration to keep PageDirty
> > +	 * synchronized with per-memcg dirty page counters.
> >  	 */
> > -	memcg = mem_cgroup_begin_page_stat(page);
> > +	memcg = lock_page_memcg(page);
> >  	newly_dirty = !TestSetPageDirty(page);
> >  	spin_unlock(&mapping->private_lock);
> >
> >  	if (newly_dirty)
> >  		__set_page_dirty(page, mapping, memcg, 1);
>
> Do we really want to pass memcg to __set_page_dirty and then to
> account_page_dirtied, increasing stack/register usage even when memory
> cgroups are disabled? Maybe it'd be better to make
> mem_cgroup_update_page_stat take a page instead of a memcg?
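
(For illustration, the change suggested above would look roughly like the
sketch below: the stat helper derives the memcg from the page itself instead
of having every caller thread it through, and the body roughly mirrors what
the existing memcg-based helper does. This is a hypothetical sketch based on
the helpers named in this thread, not code from the patch series.)

	/*
	 * Hypothetical page-based variant of mem_cgroup_update_page_stat():
	 * callers no longer pass a memcg, the helper reads page->mem_cgroup
	 * itself.  This is only safe if the binding cannot change while the
	 * stats are updated, i.e. the caller holds lock_page_memcg() and
	 * migration leaves the binding of live pages alone.
	 */
	static inline void mem_cgroup_update_page_stat(struct page *page,
						       enum mem_cgroup_stat_index idx,
						       int val)
	{
		struct mem_cgroup *memcg = page->mem_cgroup;

		if (memcg)
			this_cpu_add(memcg->stat->count[idx], val);
	}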

I'll look into that. It will require changing migration to leave the
page->mem_cgroup binding of live pages alone, but that's something
worth doing anyway. It's beyond the scope of these patches, though.
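
(Again purely for illustration: with a page-based stat helper, and with
migration leaving the page->mem_cgroup binding of live pages alone, the
__set_page_dirty_buffers() hunk quoted above could drop the memcg plumbing
entirely. A hypothetical sketch, not part of this series; the memcg-free
__set_page_dirty() and the page-taking unlock_page_memcg() are assumed
variants of the current helpers:)

	lock_page_memcg(page);	/* locks out page->mem_cgroup changes */
	newly_dirty = !TestSetPageDirty(page);
	spin_unlock(&mapping->private_lock);

	if (newly_dirty)
		__set_page_dirty(page, mapping, 1);	/* no memcg argument */

	unlock_page_memcg(page);	/* hypothetical page-based unlock */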

Thanks
