From: KAMEZAWA Hiroyuki
Subject: Re: [BUGFIX][PATCH 5/5] memcg: fix percpu cached charge draining frequency
On Tue, 14 Jun 2011 12:04:12 +0200
Johannes Weiner <jweiner@redhat.com> wrote:

> On Mon, Jun 13, 2011 at 12:16:48PM +0900, KAMEZAWA Hiroyuki wrote:
> > @@ -1670,8 +1670,8 @@ static int mem_cgroup_hierarchical_reclaim(struct mem_cgroup *root_mem,
> >  		victim = mem_cgroup_select_victim(root_mem);
> >  		if (victim == root_mem) {
> >  			loop++;
> > -			if (loop >= 1)
> > -				drain_all_stock_async();
> > +			if (!check_soft && loop >= 1)
> > +				drain_all_stock_async(root_mem);
>
> I agree with Michal, this should be a separate change.
>

Hm, ok, I'll do that.
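
To make the split concrete, I guess it would look something like this
(hypothetical division of the hunk above): the soft-limit guard becomes
its own patch,

	-		if (loop >= 1)
	+		if (!check_soft && loop >= 1)
	 			drain_all_stock_async(root_mem);

while the drain_all_stock_async(root_mem) signature change stays here
with the percpu draining rework.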

> > @@ -2008,26 +2011,50 @@ static void refill_stock(struct mem_cgroup *mem, unsigned int nr_pages)
> >   * expects some charges will be back to res_counter later but cannot wait for
> >   * it.
> >   */
> > -static void drain_all_stock_async(void)
> > +static void drain_all_stock_async(struct mem_cgroup *root_mem)
> >  {
> > -	int cpu;
> > -	/* This function is for scheduling "drain" in asynchronous way.
> > -	 * The result of "drain" is not directly handled by callers. Then,
> > -	 * if someone is calling drain, we don't have to call drain more.
> > -	 * Anyway, WORK_STRUCT_PENDING check in queue_work_on() will catch if
> > -	 * there is a race. We just do loose check here.
> > +	int cpu, curcpu;
> > +	/*
> > +	 * If someone calls draining, avoid adding more kworker runs.
> >  	 */
> > -	if (atomic_read(&memcg_drain_count))
> > +	if (!mutex_trylock(&percpu_charge_mutex))
> >  		return;
> >  	/* Notify other cpus that system-wide "drain" is running */
> > -	atomic_inc(&memcg_drain_count);
> >  	get_online_cpus();
> > +
> > +	/*
> > +	 * Get a hint for avoiding draining charges on the current cpu,
> > +	 * which must be exhausted by our charging. This is not required
> > +	 * to be a precise check; we use raw_smp_processor_id() instead
> > +	 * of get_cpu()/put_cpu().
> > +	 */
> > +	curcpu = raw_smp_processor_id();
> >  	for_each_online_cpu(cpu) {
> >  		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
> > -		schedule_work_on(cpu, &stock->work);
> > +		struct mem_cgroup *mem;
> > +
> > +		if (cpu == curcpu)
> > +			continue;
> > +
> > +		mem = stock->cached;
> > +		if (!mem)
> > +			continue;
> > +		if (mem != root_mem) {
> > +			if (!root_mem->use_hierarchy)
> > +				continue;
> > +			/* check whether "mem" is under tree of "root_mem" */
> > +			rcu_read_lock();
> > +			if (!css_is_ancestor(&mem->css, &root_mem->css)) {
> > +				rcu_read_unlock();
> > +				continue;
> > +			}
> > +			rcu_read_unlock();
>
> css_is_ancestor() takes the rcu read lock itself already.
>

You're right.

I'll post an update.
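
For reference, the loop body above would then shrink to something like
this (just a sketch, dropping the explicit rcu_read_lock()/rcu_read_unlock()
pair since css_is_ancestor() takes the rcu read lock itself):

		mem = stock->cached;
		if (!mem)
			continue;
		if (mem != root_mem) {
			if (!root_mem->use_hierarchy)
				continue;
			/* check whether "mem" is under tree of "root_mem" */
			if (!css_is_ancestor(&mem->css, &root_mem->css))
				continue;
		}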

Thanks,
-Kame


