Date: 7 Sep 2003
Subject: Re: [PATCH] Minor scheduler fix to get rid of skipping in xmms


John Yau wrote:

>>The rationale behind Ingo's patch is to "break up" the timeslices to give
>>better scheduling latency to multiple tasks at the same priority.
>>So it is not "unnecessary context switches," just "extra context switches."
>>
>
>Hmm...my reasoning is that those switches are unnecessary because the
>interactivity bonus/penalty will take care of breaking the timeslices up in
>case of a CPU hog, albeit not at precise 25 ms granularity. Though having
>regularity in scheduling is nice, I think Ingo's patch somewhat negates the
>purpose of having heterogeneous time slice lengths. I suspect Ingo's
>approach will thrash the caches quite a bit more than mine; we should
>definitely test this a bit to find out for sure. Any suggestions on how to
>go about that?
>
>If we're going to do a context switch every 25 ms no matter what, we might
>as well just make the scheduler a true real time scheduler, dump having
>different time slice lengths and interactivity recalculations, and go
>completely round robin with strictly enforced priorities and a single class
>of time slice somewhere between 1 and 5 ms long.
>

Heh, your logic is entertaining. I don't know how you got from step 1
to step 3 ;)

Anyway, you don't have to dump different timeslice lengths because you
don't really have them to begin with. See how "Nick's scheduler policy
v12" fixes your problems by mostly reducing complexity, not adding to
it.
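
To make the tradeoff being argued over concrete, here is a toy userspace
simulation of two equal-priority CPU hogs sharing one CPU. Everything in it
is illustrative: the 100 ms timeslice, the 25 ms granularity and the task
names are made up for the sketch, and it is not Ingo's patch or any real
kernel code, only the shape of the policy being discussed.

#include <stdio.h>

#define TIMESLICE_MS	100	/* full timeslice for a CPU hog (illustrative) */
#define GRANULARITY_MS	25	/* round-robin granularity among equal-priority tasks */
#define NTASKS		2
#define SIM_MS		400

struct task {
	const char *name;
	int slice_left;		/* ms left in the current timeslice */
	int ran;		/* ms run since the task was last (re)queued */
};

int main(void)
{
	struct task tasks[NTASKS] = {
		{ "A", TIMESLICE_MS, 0 },
		{ "B", TIMESLICE_MS, 0 },
	};
	int rq[NTASKS] = { 0, 1 };	/* run queue: indices into tasks[], head runs */
	int switches = 0;

	for (int now = 0; now < SIM_MS; now++) {
		struct task *curr = &tasks[rq[0]];

		curr->slice_left--;
		curr->ran++;

		if (curr->slice_left <= 0 || curr->ran >= GRANULARITY_MS) {
			if (curr->slice_left <= 0)
				curr->slice_left = TIMESLICE_MS;	/* refill expired slice */
			curr->ran = 0;

			/* Move the current task to the back of its queue. */
			int head = rq[0];
			for (int i = 0; i < NTASKS - 1; i++)
				rq[i] = rq[i + 1];
			rq[NTASKS - 1] = head;

			if (rq[0] != head) {
				switches++;
				printf("%3d ms: switch to %s\n",
				       now + 1, tasks[rq[0]].name);
			}
		}
	}
	printf("total: %d context switches in %d ms\n", switches, SIM_MS);
	return 0;
}

With GRANULARITY_MS set equal to TIMESLICE_MS this degenerates to plain
timeslice expiry (a switch every 100 ms); at 25 ms the same two hogs switch
four times as often, which is exactly the extra-context-switches-for-better-
latency trade being weighed against cache warmth above.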
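
For contrast, here is an equally toy sketch of the strict-priority,
single-timeslice round robin described in the quoted mail: no interactivity
bonus or penalty, one fixed slice length, strictly enforced priorities.
Task names and numbers are again made up.

#include <stdio.h>

#define NPRIO		4	/* 0 = highest; strictly enforced */
#define SLICE_MS	5	/* the single timeslice length everyone gets */
#define NTASKS		4

struct task {
	const char *name;
	int prio;
	int runnable;
};

/* Made-up runnable tasks; no interactivity estimation anywhere. */
static struct task tasks[NTASKS] = {
	{ "xmms",       1, 1 },
	{ "X",          1, 1 },
	{ "gcc",        3, 1 },
	{ "setiathome", 3, 1 },
};

/* Round-robin cursor per priority level. */
static int rr[NPRIO];

static struct task *pick_next(void)
{
	for (int prio = 0; prio < NPRIO; prio++) {
		struct task *level[NTASKS];
		int n = 0;

		for (int i = 0; i < NTASKS; i++)
			if (tasks[i].runnable && tasks[i].prio == prio)
				level[n++] = &tasks[i];
		if (n == 0)
			continue;

		/* Strict priorities: never look below the highest runnable
		 * level.  Within the level: plain round robin. */
		return level[rr[prio]++ % n];
	}
	return NULL;	/* nothing runnable: idle */
}

int main(void)
{
	for (int now = 0; now < 40; now += SLICE_MS) {
		struct task *t = pick_next();

		printf("%2d ms: run %-10s for %d ms\n",
		       now, t ? t->name : "idle", SLICE_MS);
	}
	return 0;
}

Run as-is, xmms and X alternate every 5 ms while gcc and setiathome never get
the CPU at all: with strictly enforced priorities, lower levels only run when
every higher level is blocked, which is the behaviour a true real-time policy
accepts by design.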


