Subject: Re: [PATCH] sched: recover sched_yield task running time increase
On Wed, 2011-04-06 at 14:15 +0800, Alex,Shi wrote:
> On Wed, 2011-04-06 at 13:07 +0800, Rik van Riel wrote:
> > On 04/05/2011 06:33 PM, Alex Shi wrote:
> > > commit ac53db596cc08ecb8040c removed the sched_yield task running
> > > time (vruntime) increase, so a yielded task gets more opportunity to
> > > be launched again. That may not be what the caller wants, and it also
> > > causes the volano benchmark to drop 50~80 percent in performance on
> > > core2/NHM/WSM machines. This patch restores the sched_yield vruntime
> > > increase.
> > >
> > > Signed-off-by: alex.shi@intel.com
> >
> > NACK
> >
> > This was switched off by default and under
> > the sysctl sched_compat_yield for a reason.
> >
> > Reintroducing it under that sysctl option
> > may be acceptable, but by default it would
> > be doing the wrong thing for other workloads.
>
> I can implement this as a sysctl option. But when I checked the man
> page of sched_yield again, I have some concerns about this.
>
> ----
> int sched_yield(void);
>
> DESCRIPTION
> A process can relinquish the processor voluntarily without blocking by calling sched_yield().
> The process will then be moved to the end of the queue for its static priority and a new process
> gets to run.
> ----
>
> If an application calls the sched_yield system call, most of the time
> it does not want to be launched again right away, so the man page says
> "the caller process will then be moved to the _end_ of the queue..."

Moving a yielding nice 0 task behind a SCHED_IDLE (or nice 19) task
could be incredibly painful.
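
(As an illustration only, a minimal sketch of the call the quoted man page
describes: two busy pthreads taking turns via sched_yield(). This is not
from the patch or this thread; the thread names and iteration count are
made up.)

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define ITERATIONS 5

static void *worker(void *arg)
{
	const char *name = arg;
	int i;

	for (i = 0; i < ITERATIONS; i++) {
		printf("%s: iteration %d, yielding\n", name, i);
		/*
		 * Relinquish the CPU without blocking.  Whether this really
		 * sends the caller to the back of its runqueue (the old
		 * vruntime-increase behaviour) or lets it run again almost
		 * immediately is exactly what is being argued about above.
		 */
		sched_yield();
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, worker, "thread-A");
	pthread_create(&b, NULL, worker, "thread-B");
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Built with cc -pthread; how soon each yielder runs again after
sched_yield() depends on the scheduler behaviour under discussion.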

-Mike


