Subject: Re: 2.6.0-test2-mm1 results
On Fri, 1 Aug 2003 02:01, Martin J. Bligh wrote:
> > On Fri, 1 Aug 2003 01:19, Martin J. Bligh wrote:
> >> >> Does this help interactivity a lot, or was it just an experiment?
> >> >> Perhaps it could be less aggressive or something?
> >> >
> >> > Well basically this is a side effect of selecting out the correct cpu
> >> > hogs in the interactivity estimator. It seems to be working ;-) The
> >> > more of a cpu hog a task is, the lower the dynamic priority (higher
> >> > number) it gets, and the more likely it is to be removed from the
> >> > active array if it uses up its full timeslice. The scheduler in its
> >> > current form makes it more costly to resurrect things from the
> >> > expired array and restart them, so the cpu hogs have to wait till
> >> > other, less cpu-hogging tasks run.
> >> >
> >> > How do we get around this? I'll be brave here and say I'm not sure we
> >> > need to, as cpu hogs have a knack of slowing things down for everyone,
> >> > and it is best for this to happen not just for interactivity's sake,
> >> > but for fairness.
> >> >
> >> > I suspect a lot of people will have something to say on this one...
> >>
> >> Well, what you want to do is prioritise interactive tasks over cpu hogs.
> >> What *seems* to be happening is you're just switching between cpu hogs
> >> more ... that doesn't help anyone really. I don't have an easy answer
> >> for how to fix that, but it doesn't seem desirable to me - we need some
> >> better way of working out what's interactive, and what's not.
> >
> > Indeed, and now that I've thought about it some more, there are two
> > other possible contributors:
> >
> > 1. Tasks also round robin at 25ms. Ingo said he's not sure if that's
> > too low, and it definitely drops throughput measurably, but only
> > slightly. A simple experiment is to change the timeslice granularity
> > in sched.c and see if that's the cause.
> >
> > 2. Tasks waiting for 1 second are considered starved, so a cpu hog
> > that has used up its full timeslice will be expired whenever something
> > has been waiting that long. That limit used to be 10 seconds. Changing
> > the starvation limit will show whether that contributes.
>
> Ah. If I'm doing a full "make -j" I have almost 100 tasks per cpu.
> With a 25ms or 100ms timeslice, that's 2.5s or 10s for every task to
> complete one timeslice. Won't that make *everyone* seem starved? Not
> sure that's a good idea ... reminds me of Dilbert: "we're going to
> focus particularly on ... everything!" ;-)
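
To put numbers on that rotation, a quick sketch (the 25ms/100ms figures
and the 100-tasks-per-cpu count are the ones quoted above, not values
read out of sched.c):

#include <stdio.h>

int main(void)
{
	const int nr_tasks = 100;           /* Martin's "make -j" estimate per cpu */
	const int slice_ms[] = { 25, 100 }; /* granularity vs. full timeslice */

	for (int i = 0; i < 2; i++)
		printf("%d tasks x %3d ms = %4.1f s for every task to run once\n",
		       nr_tasks, slice_ms[i], nr_tasks * slice_ms[i] / 1000.0);
	return 0;
}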

The starvation thingy is also dependent on the number of running tasks.

I quote from the master engineer Ingo's codebook:

#define EXPIRED_STARVING(rq) \
	(STARVATION_LIMIT && ((rq)->expired_timestamp && \
		(jiffies - (rq)->expired_timestamp >= \
			STARVATION_LIMIT * ((rq)->nr_running) + 1)))

Where STARVATION_LIMIT is 1 second.
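
Plugging Martin's numbers into that, a rough sketch of the threshold the
macro gives (assuming HZ=1000 so the one-second STARVATION_LIMIT comes to
1000 jiffies; the nr_running values are made up for illustration):

#include <stdio.h>

#define HZ 1000
#define STARVATION_LIMIT (1 * HZ)	/* one second, in jiffies */

int main(void)
{
	/* The expired array only counts as starving once something has
	 * been waiting STARVATION_LIMIT * nr_running + 1 jiffies. */
	for (unsigned long nr_running = 1; nr_running <= 100; nr_running *= 10) {
		unsigned long threshold = STARVATION_LIMIT * nr_running + 1;
		printf("nr_running=%3lu -> starving after %6lu jiffies (%.1f s)\n",
		       nr_running, threshold, threshold / (double)HZ);
	}
	return 0;
}

So with ~100 runnable tasks, nothing counts as starved until it has waited
on the order of 100 seconds, well past the 2.5-10s rotation above.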

Con
