Subject: Re: [PATCH RFC] remove jump_label optimization for perf sched events
On Thu, Nov 17, 2011 at 01:49:19PM +0100, Peter Zijlstra wrote:
> On Thu, 2011-11-17 at 14:30 +0200, Gleb Natapov wrote:
> > jump_label patching is a very expensive operation that involves pausing
> > all CPUs. The patching of the perf_sched_events jump_label is easily
> > controllable from userspace by an unprivileged user. When a user runs a
> > loop like "while true; do perf stat -e cycles true; done", the
> > performance of my test application, which just increments a counter for
> > one second, drops by 4%. This is on a 16-CPU box with my test
> > application using only one of them. The impact on a real server doing
> > real work will be much worse. Performance of the KVM PMU drops nearly
> > 50% due to jump_label patching during "perf record", since the KVM PMU
> > implementation creates and destroys perf events frequently.
>
> Ideally we'd fix text_poke() to not use stop_machine(); we know how to, but
> we haven't had the green light from Intel/AMD yet.
>
> Rostedt was going to implement it anyway and see if anything breaks.
>
Hmm, interesting.

> Also, virt might be able to pull something smart on text_poke() dunno.
>
The problem with virt is not text_poke() in the guest, but the one in the
host. The guest I am testing with has only one CPU. Basically, creating the
first perf event / destroying the last perf event is very expensive
currently, and when "perf record" is running in a guest this happens a lot
on the host.
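
To show where the cost comes from, this is roughly what the current code
does (paraphrased from memory, not verbatim; the account/unaccount helper
names below are made up, the real inc/dec sits in perf_event_alloc() and
free_event()):

#include <linux/jump_label.h>
#include <linux/perf_event.h>

/* Scheduler hot path: compiles to a patched nop/jmp, essentially free
 * while no events exist (this is the part Peter wants to keep). */
static inline void perf_event_task_sched_in(struct task_struct *prev,
					    struct task_struct *task)
{
	if (static_branch(&perf_sched_events))
		__perf_event_task_sched_in(prev, task);
}

/* Event creation/destruction.  The 0 <-> 1 transitions rewrite kernel
 * text, which today goes through stop_machine() and pauses every CPU --
 * exactly what a guest running "perf record" triggers on the host over
 * and over. */
static void account_sched_event(void)		/* illustrative name */
{
	jump_label_inc(&perf_sched_events);
}

static void unaccount_sched_event(void)		/* illustrative name */
{
	jump_label_dec(&perf_sched_events);
}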

> That said, I'd much rather throttle this particular jump label than
> remove it altogether; some people really don't like all this scheduler
> hot path crap.
What about moving perf_event_task_sched() to the sched_(in|out)_preempt_notifiers
hooks? The preempt notifier check is already on the scheduler hot path, so
there would be no additional overhead for the perf case.
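
Something along these lines is what I have in mind (completely untested
sketch, only to show the shape; it depends on CONFIG_PREEMPT_NOTIFIERS, the
sched_in callback does not get prev so the cgroup switch-in path would need
extra plumbing, and the notifier would have to be attached per task that has
events):

#include <linux/preempt.h>
#include <linux/perf_event.h>
#include <linux/sched.h>

/* Called from finish_task_switch(); 'current' is the incoming task.
 * prev is not available here, which __perf_event_task_sched_in() wants
 * for the cgroup case, so that part still needs sorting out. */
static void perf_preempt_sched_in(struct preempt_notifier *pn, int cpu)
{
	__perf_event_task_sched_in(current, current);
}

/* Called from prepare_task_switch(); 'current' is the outgoing task. */
static void perf_preempt_sched_out(struct preempt_notifier *pn,
				   struct task_struct *next)
{
	__perf_event_task_sched_out(current, next);
}

static struct preempt_ops perf_preempt_ops = {
	.sched_in	= perf_preempt_sched_in,
	.sched_out	= perf_preempt_sched_out,
};

/* Illustrative only: would run in the context of the task that got its
 * first event (preempt_notifier_register() attaches to 'current'),
 * instead of flipping the global perf_sched_events jump label. */
static void perf_attach_preempt_notifier(struct preempt_notifier *pn)
{
	preempt_notifier_init(pn, &perf_preempt_ops);
	preempt_notifier_register(pn);
}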

--
Gleb.

