Subject: Re: [PATCH][RFC] dirty balancing for cgroups
On Wed, 2008-08-06 at 17:20 +0900, YAMAMOTO Takashi wrote:
> hi,
>
> > On Fri, 11 Jul 2008 17:34:46 +0900 (JST)
> > yamamoto@valinux.co.jp (YAMAMOTO Takashi) wrote:
> >
> > > hi,
> > >
> > > > > my patch penalizes heavy-writer cgroups as task_dirty_limit does
> > > > > for heavy-writer tasks. i don't think that it's necessary to be
> > > > > tied to the memory subsystem because i merely want to group writers.
> > > > >
> > > > Hmm, maybe what I need is different from this ;)
> > > > Does not seem to be a help for memory reclaim under memcg.
> > >
> > > to implement what you need, i think that we need to keep track of
> > > the numbers of dirty-pages in each memory cgroups as a first step.
> > > do you agree?
> > >
> > yes, I think so, now.
> >
> > may be not difficult but will add extra overhead ;( Sigh..
>
> the following is a patch to add the overhead. :)
> any comments?
>
> YAMAMOTO Takashi

It _might_ (depends on the ugliness of the result) make sense to try and
stick the mem_cgroup_*_page_dirty() stuff into the *PageDirty() macros.
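Something along these lines, perhaps (a completely untested sketch;
whether every *PageDirty() caller can tolerate taking the page_cgroup
lock is exactly where the ugliness would show up):

	/* Sketch only: would replace the PAGEFLAG()-generated helpers,
	 * hooking the accounting on the actual 0<->1 transitions. */
	#define SetPageDirty(page) do {				\
		if (!TestSetPageDirty(page))			\
			mem_cgroup_set_page_dirty(page);	\
	} while (0)

	#define ClearPageDirty(page) do {			\
		if (TestClearPageDirty(page))			\
			mem_cgroup_clear_page_dirty(page);	\
	} while (0)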


> @@ -485,7 +502,10 @@ unsigned long mem_cgroup_isolate_pages(unsigned long nr_to_scan,
> 		if (PageUnevictable(page) ||
> 		    (PageActive(page) && !active) ||
> 		    (!PageActive(page) && active)) {
> -			__mem_cgroup_move_lists(pc, page_lru(page));
> +			if (try_lock_page_cgroup(page)) {
> +				__mem_cgroup_move_lists(pc, page_lru(page));
> +				unlock_page_cgroup(page);
> +			}
> 			continue;
> 		}

This chunk seems unrelated and lost....


> @@ -772,6 +792,38 @@ void mem_cgroup_end_migration(struct page *newpage)
> 	mem_cgroup_uncharge_page(newpage);
> }
>
> +void mem_cgroup_set_page_dirty(struct page *pg)
> +{
> +	struct page_cgroup *pc;
> +
> +	lock_page_cgroup(pg);
> +	pc = page_get_page_cgroup(pg);
> +	if (pc != NULL && (pc->flags & PAGE_CGROUP_FLAG_DIRTY) == 0) {
> +		struct mem_cgroup *mem = pc->mem_cgroup;
> +		struct mem_cgroup_stat *stat = &mem->stat;
> +
> +		pc->flags |= PAGE_CGROUP_FLAG_DIRTY;
> +		__mem_cgroup_stat_add(stat, MEM_CGROUP_STAT_DIRTY, 1);
> +	}
> +	unlock_page_cgroup(pg);
> +}
> +
> +void mem_cgroup_clear_page_dirty(struct page *pg)
> +{
> +	struct page_cgroup *pc;
> +
> +	lock_page_cgroup(pg);
> +	pc = page_get_page_cgroup(pg);
> +	if (pc != NULL && (pc->flags & PAGE_CGROUP_FLAG_DIRTY) != 0) {
> +		struct mem_cgroup *mem = pc->mem_cgroup;
> +		struct mem_cgroup_stat *stat = &mem->stat;
> +
> +		pc->flags &= ~PAGE_CGROUP_FLAG_DIRTY;
> +		__mem_cgroup_stat_add(stat, MEM_CGROUP_STAT_DIRTY, -1);
> +	}
> +	unlock_page_cgroup(pg);
> +}
> +
> /*
>  * A call to try to shrink memory usage under specified resource controller.
>  * This is typically used for page reclaiming for shmem for reducing side


I personally dislike the != 0 and == 0 comparisons for bitmask
operations; they somehow make the code harder to read. I prefer to write
!(flags & mask) and (flags & mask) instead.
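
That is, the check in mem_cgroup_set_page_dirty() above would become:

	if (pc != NULL && !(pc->flags & PAGE_CGROUP_FLAG_DIRTY)) {

and the one in mem_cgroup_clear_page_dirty():

	if (pc != NULL && (pc->flags & PAGE_CGROUP_FLAG_DIRTY)) {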

I guess tastes differ...

