On Thu, Jan 28, 2010 at 08:26:08PM -0800, Paul Turner wrote:
> On Thu, Jan 28, 2010 at 7:49 PM, Bharata B Rao <bharata.rao@gmail.com> wrote:
> > On Sat, Jan 9, 2010 at 2:15 AM, Paul Turner <pjt@google.com> wrote:
> >>
> >> What are your thoughts on using a separate mechanism for the general
> >> case?  A draft proposal follows:
> >>
> >> - Maintain a global run-time pool for each tg.  The runtime specified
> >>   by the user represents the value that this pool will be refilled to
> >>   each period.
> >> - We continue to maintain the local notion of runtime/period in each
> >>   cfs_rq, and continue to accumulate locally here.
> >>
> >> Upon locally exceeding the period, acquire new credit from the global
> >> pool (either under lock, or more likely using atomic ops).  This can
> >> either be in fixed steppings (e.g. 10ms, could be tunable) or follow
> >> some quasi-curve variant based on historical demand.
> >>
> >> One caveat here is that there is some over-commit in the system: the
> >> local differences of runtime vs period represent additional time over
> >> the global pool.  However it should not be possible to consistently
> >> exceed limits, since the rate of refill is gated by the runtime being
> >> input into the system via the per-tg pool.
> >>
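
To make sure I understand the refill path: I picture something like the
sketch below, with a fixed slice and an atomic counter for the global
pool.  This is just my reading of the proposal with made-up names, not
code from any patch.

/* Sketch only; all names are invented. */

#define BW_SLICE_NS     (10 * NSEC_PER_MSEC)   /* fixed 10ms stepping */

struct tg_bandwidth {
        atomic64_t      global_runtime; /* refilled to the quota each period */
};

struct local_bandwidth {
        s64             runtime;        /* per-cfs_rq runtime, consumed locally */
};

/*
 * Called when a cfs_rq has used up its local runtime: try to pull one
 * slice from the group's global pool.  Returns 0 on success, -1 if the
 * pool is exhausted and the cfs_rq must throttle until the next period
 * refill.
 */
static int refill_local_runtime(struct tg_bandwidth *tg_bw,
                                struct local_bandwidth *local)
{
        if (atomic64_sub_return(BW_SLICE_NS, &tg_bw->global_runtime) < 0) {
                /* undo; not enough left in this period */
                atomic64_add(BW_SLICE_NS, &tg_bw->global_runtime);
                return -1;
        }

        local->runtime += BW_SLICE_NS;
        return 0;
}

Is that the basic idea?
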
> >
> > We borrow from what is actually available as spare (spare = unused or
> > remaining).  With a global pool, I see that would be difficult.  Is the
> > inability/difficulty of keeping the global pool in sync with the actual
> > available spare time the reason for the over-commit?
> >
>
> We maintain two pools, a global pool (new) and a per-cfs_rq pool
> (similar to the existing rt_bw).
>
> When consuming time you charge against your local bandwidth until it is
> expired; at this point you must either refill from the global pool, or
> throttle.
>
> The "slack" in the system is the sum of unconsumed time in local pools
> from the *previous* global pool refill.  This is bounded above by the
> amount of time you refill a local pool with at each expiry.  We call
> the size of a refill a 'slice'.
>
> e.g.
>
> Task limit of 50ms, slice=10ms, 4 cpus, period of 500ms.
>
> Task A runs on cpus 0 and 1 for 5ms each, then blocks.
>
> When A first executes on each cpu we take slice=10ms from the global
> pool of 50ms and apply it to the local rq.  Execution then proceeds
> against the local pool.
>
> Current state is: 5ms in the local pools on {0,1}, 30ms remaining in
> the global pool.
>
> Upon period expiration we issue a global pool refill.  At this point we
> have: 5ms in the local pools on {0,1}, 50ms remaining in the global
> pool.
>
> That 10ms of slack time is over-commit in the system.  However it
> should be clear that this can only be a local effect, since over any
> period of time the rate of input into the system is limited by the
> global pool refill rate.
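
Just to convince myself of the accounting in your example, a trivial
userspace check of the numbers (obviously nothing to do with the real
code):

#include <stdio.h>

int main(void)
{
        const int quota = 50, slice = 10;       /* ms */
        int global = quota;
        int local[2] = { 0, 0 };
        int cpu;

        /* Task A's first execution on cpus 0 and 1: each pulls a slice,
         * runs for 5ms, then blocks. */
        for (cpu = 0; cpu < 2; cpu++) {
                global -= slice;
                local[cpu] += slice - 5;
        }
        printf("before refill: local={%d,%d}ms, global=%dms\n",
               local[0], local[1], global);     /* {5,5}, 30 */

        /* Period expiry: refill the global pool to the quota; whatever
         * is left in the local pools is the slack. */
        global = quota;
        printf("after refill:  local={%d,%d}ms, global=%dms, slack=%dms\n",
               local[0], local[1], global, local[0] + local[1]);
        return 0;
}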

With the same setup as above, consider 5 such tasks which block after
consuming 5ms each.  Now we have 25ms of slack time.  If 5 cpu hogs start
running in the next bandwidth period, they would consume this 25ms plus
the 50ms from that period.  So we gave 50% extra to a group within one
bandwidth period.  Just wondering how common such scenarios could be.
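
Spelling that worst case out (this assumes each of the 5 tasks ends up
pulling its own fresh 10ms slice):

        slack carried over   = 5 * (10ms - 5ms)         = 25ms
        usable next period   = 25ms + 50ms (new quota)  = 75ms

i.e. 1.5x the configured 50ms in a single period.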

>
> There are also some strategies that we are exploring to improve
> behavior further here.  One idea is that if we maintain a generation
> counter, then on voluntary dequeue (e.g. when tasks block) we can
> return local time to the global period pool, or expire it if the
> generations don't match.  This greatly reduces the noise (via slack vs
> the ideal limit) that a bursty application can induce.
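
If I understand the idea, it would look something like this, extending
the refill sketch earlier in this mail (again, invented names, just to
check my understanding):

struct tg_bandwidth {
        atomic64_t      global_runtime;
        u64             generation;     /* bumped at every period refill */
};

struct local_bandwidth {
        s64             runtime;
        u64             generation;     /* generation the runtime was pulled in */
};

/* On voluntary dequeue, i.e. the group has no runnable task left here. */
static void return_local_runtime(struct tg_bandwidth *tg_bw,
                                 struct local_bandwidth *local)
{
        if (local->generation == tg_bw->generation)
                /* unused time from the current period: give it back */
                atomic64_add(local->runtime, &tg_bw->global_runtime);
        /* else it is stale time from an older period: just drop it */

        local->runtime = 0;
}

(refill_local_runtime() above would also record tg_bw->generation in
local->generation when it pulls a slice.)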

Why not clear the remaining runtime during bandwidth refresh?

>
> >> This would also naturally associate with an interface change that
> >> would mean the runtime limit for a group is the effective cpu rate
> >> within the period.
> >>
> >> e.g. by setting a runtime of 200000us on a 100000us period it would
> >> effectively allow you to use 2 cpus' worth of wall-time on a multicore
> >> system.
> >>
> >> I feel this is slightly more natural than the current definition,
> >> which, due to being local, means that the values set will not result
> >> in consistent behavior across machines of different core counts.  It
> >> also has the benefit of being consistent with observed exports of time
> >> consumed, e.g. rusage, (indirectly) time, etc.
> >
> > Though runtimes are enforced locally per-cpu, that's only the
> > implementation.  The definition of runtime and period is still
> > system-wide/global.  A runtime/period = 0.25/0.5 will mean 0.25s of
> > system-wide runtime within a period of 0.5s.  Talking about consistent
> > definitions, I would say this consistently defines half of the
> > system-wide wall-time on all configurations :)
>
> This feels non-intuitive when you have a non-homogeneous fleet of
> systems.  It is also difficult to express limits in terms of cores;
> suppose I'm an admin trying to jail my users (maybe I rent out virtual
> time a la EC2, for example).  The fractions I have to use to represent
> integer core amounts are going to become quite small on large systems.
> For example, 1 core on a 64 core system would mean about 3.906ms per
> 250ms period.  What's the dependency here between your time and the
> current cpuset topology also; if I'm only allowed on half the system,
> does this fraction then refer to global resources or to what I'm
> constrained to?  This seems a difficult data dependency to manage.
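
(Checking the arithmetic: one core's worth here is period/ncpus =
250ms / 64 = 3.90625ms of runtime per 250ms period.)
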
>
> My (personal) ideology is that we are metering at the cpu level as
> opposed to the system level -- which means N seconds of cpu-time makes
> more sense to me.  I feel it has advantages in that it can be specified
> more directly relative to the period and is independent of system
> partitioning.
>
> I'd be interested to hear other opinions on this.

We need a consensus here; I will wait to see what others think about this.

>
> > If it means 2 CPUs' worth of wall-time on a 4 core machine, it would
> > mean 4 CPUs' worth on an 8 CPU machine.  At this point, I am inclined
> > to go with this and let the admins/tools work out the actual CPUs part
> > of it.  However, I would like to hear what others think about this
> > interface.
> >
> >>
> >> For future scalability as machine size grows, this could potentially
> >> be partitioned below the tg level along the boundaries of
> >> sched_domains (or something similar).  However, for an initial draft,
> >> given current machine sizes, the contention on the global pool should
> >> hopefully be fairly low.
> >
> > One of the alternatives I have in mind is to be more aggressive while
> > borrowing.  While keeping the current algorithm (of iterating through
> > all CPUs when borrowing) intact, we could potentially borrow more from
> > those CPUs which don't have any running task from the given group.  I
> > just experimented with borrowing half of the available runtime from
> > such CPUs and found that the number of iterations is greatly reduced
> > and the source runtime quickly converges to its max possible value.
> > Do you see any issues with this?
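
For reference, the tweak I experimented with looks roughly like the
following (simplified, with placeholder names rather than the actual
field names from the patchset):

struct cpu_bandwidth {
        spinlock_t      lock;
        s64             runtime;        /* runtime assigned to this cpu */
        s64             used;           /* runtime consumed so far */
        int             nr_runnable;    /* group's runnable tasks on this cpu */
};

/*
 * One step of the borrowing loop: move spare runtime from 'iter' into
 * 'dst'.  The caller holds dst->lock; 'weight' is the number of cpus
 * being iterated over and 'max_runtime' the per-cpu cap.
 */
static void borrow_from(struct cpu_bandwidth *dst, struct cpu_bandwidth *iter,
                        int weight, s64 max_runtime)
{
        s64 spare;

        spin_lock(&iter->lock);
        spare = iter->runtime - iter->used;
        if (spare > 0) {
                if (!iter->nr_runnable)
                        spare /= 2;     /* group idle there: grab half */
                else
                        spare = div_s64(spare, weight); /* usual 1/weight share */

                if (dst->runtime + spare > max_runtime)
                        spare = max_runtime - dst->runtime;

                iter->runtime -= spare;
                dst->runtime += spare;
        }
        spin_unlock(&iter->lock);
}
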
> >
>
> I strongly believe that this is going to induce significant lock
> contention and is not a scalable solution over time.  While using a
> faster-converging series for time may help, I think there are
> confounding factors that limit the effect here.  Consider the 1 core on
> a 64 core system example above.  With only ~3.906ms per pool we are
> going to quickly hit the case where we are borrowing not-useful periods
> of time while thrashing locks.
>
> We are in the midst of an implementation of the proposal above, which
> we'll have ready to post here for consideration next week.  We have
> maintained your existing approach with respect to handling throttled
> entities and layered the proposed alternate local/global bandwidth
> scheme on top of that.  Initial tests show very promising results!

Nice. Look forward to your patches.

Regards,
Bharata.
