Date: Thu, 31 Mar 2011 15:28:15 +0200
From: Oleg Nesterov <>
Subject: Re: [PATCH,RFC] perf: panic due to inclied cpu context task_ctx value
On 03/30, Peter Zijlstra wrote:
>
> -atomic_t perf_sched_events __read_mostly;
> +atomic_t perf_sched_events_in __read_mostly;
> +atomic_t perf_sched_events_out __read_mostly;
>  static DEFINE_PER_CPU(atomic_t, perf_cgroup_events);
>
> +static void perf_sched_events_inc(void)
> +{
> +	jump_label_inc(&perf_sched_events_out);
> +	jump_label_inc(&perf_sched_events_in);
> +}
> +
> +static void perf_sched_events_dec(void)
> +{
> +	jump_label_dec(&perf_sched_events_in);
> +	JUMP_LABEL(&perf_sched_events_in, no_sync);
> +	synchronize_sched();
> +no_sync:
> +	jump_label_dec(&perf_sched_events_out);
> +}
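Just to recap the intent as I understand it: the two keys let us keep the
_sched_out path enabled after _sched_in is already disabled. So I assume
the call sites in perf_event.h would end up looking something like this
(my sketch, guessing at the hooks, not taken from the patch):

	/* sketch only, guessing at the call sites */
	static inline void perf_event_task_sched_in(struct task_struct *task)
	{
		COND_STMT(&perf_sched_events_in,
			  __perf_event_task_sched_in(task));
	}

	static inline void perf_event_task_sched_out(struct task_struct *task,
						     struct task_struct *next)
	{
		COND_STMT(&perf_sched_events_out,
			  __perf_event_task_sched_out(task, next));
	}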
OK, synchronize_sched() can't work: it only waits until every cpu passes
through a quiescent state, it doesn't force every cpu to actually do the
context switch which calls _sched_out. How about
	static int force_perf_event_task_sched_out(void *unused)
	{
		struct task_struct *curr = current;

		__perf_event_task_sched_out(curr, task_rq(curr)->idle);

		return 0;
	}

	void synchronize_perf_event_task_sched_out(void)
	{
		stop_machine(force_perf_event_task_sched_out,
				NULL, cpu_possible_mask);
	}
instead? (How it slots into your patch is sketched after the notes below.)
- stop_machine(cpu_possible_mask) ensures that each cpu does the context switch and calls _sched_out
- force_perf_event_task_sched_out() is only needed because the migration thread can have the counters too.
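IOW, with stop_machine() doing the waiting, perf_sched_events_dec() from
your patch would simply become (sketch, untested):

	static void perf_sched_events_dec(void)
	{
		jump_label_dec(&perf_sched_events_in);
		JUMP_LABEL(&perf_sched_events_in, no_sync);
		/* every cpu schedules and calls _sched_out before we return */
		synchronize_perf_event_task_sched_out();
	no_sync:
		jump_label_dec(&perf_sched_events_out);
	}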
Note: I am not sure this is the best solution; it is only a fallback in case we don't find something better.
In any case, do you think this can work, or did I miss something again?
Oleg.