Subject: Re: [RFC][PATCH] per-task I/O throttling

On Sat, 2008-01-12 at 16:27 +0530, Balbir Singh wrote:
> * Peter Zijlstra <a.p.zijlstra@chello.nl> [2008-01-12 10:46:37]:
>
> >
> > On Fri, 2008-01-11 at 23:57 -0500, Valdis.Kletnieks@vt.edu wrote:
> > > On Fri, 11 Jan 2008 17:32:49 +0100, Andrea Righi said:
> > >
> > > > The interesting feature is that it allows setting a priority for
> > > > each process container, but AFAIK it doesn't allow "partitioning"
> > > > the bandwidth between different containers (that would be a nice
> > > > feature IMHO). For example, it would be great to be able to define
> > > > per-container limits, like assigning 10MB/s to processes in
> > > > container A, 30MB/s to container B, 20MB/s to container C, etc.
> > >
> > > Has anybody considered allocating based on *seeks* rather than bytes
> > > moved, or counting seeks as "virtual bytes" for the purposes of
> > > accounting (if the disk can do 50 MB/s, and a seek takes 5 ms, then
> > > count a seek as 250K of data)?
> >
> > I was considering a time-based scheduler: you can fill your time slot
> > with either seeks or data. That might be what CFQ does, but I've never
> > even read the code.
> >
>
> So far the definition of I/O bandwidth has been w.r.t. time. Not all
> I/O devices have sectors; I'd prefer bytes over a period of time.

Doing a time-based one would only require knowing the (avg) delay of
seeks, whereas doing a bytes-based one would also require knowing the
(avg) speed of the device.

That is, if you're also interested in providing a latency guarantee,
because that would force you to convert bytes back to time again.
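
Concretely (a little userspace sketch; the 5 ms and 50 MB/s numbers
are just Valdis's example figures, not measured):

/*
 * Estimate how long a request occupies the device. A pure time budget
 * only needs the average seek delay; turning a byte budget into a
 * latency bound additionally needs the device's transfer speed.
 */
#include <stdio.h>

#define AVG_SEEK_NS		5000000ULL	/* assumed: 5 ms */
#define AVG_BYTES_PER_SEC	50000000ULL	/* assumed: 50 MB/s */

static unsigned long long request_time_ns(unsigned long long bytes)
{
	/* one seek plus the transfer itself */
	return AVG_SEEK_NS + bytes * 1000000000ULL / AVG_BYTES_PER_SEC;
}

int main(void)
{
	/* a 256 KiB request: ~5 ms seek + ~5.2 ms transfer */
	printf("%llu ns\n", request_time_ns(256 * 1024));
	return 0;
}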

I'm not sure that's a good way to go as long as the majority of
devices still have a non-zero seek penalty (SSDs just aren't there yet
for most of us).
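
For reference, the seek-as-virtual-bytes accounting Valdis suggested
and the time-slot idea are pretty much duals of each other. A rough
userspace sketch (constants and names all made up):

#include <stdbool.h>
#include <stdio.h>

#define AVG_SEEK_NS	5000000ULL	/* assumed: 5 ms */
#define BYTES_PER_SEC	50000000ULL	/* assumed: 50 MB/s */

/* Valdis's scheme: charge each seek as its byte equivalent
 * (5 ms at 50 MB/s == 250000 bytes). */
static unsigned long long virtual_bytes(unsigned long long bytes,
					unsigned long seeks)
{
	return bytes + (unsigned long long)seeks *
		       (AVG_SEEK_NS * BYTES_PER_SEC / 1000000000ULL);
}

/* Time-slot scheme: charge each request against a time budget;
 * once the slot is exhausted the task gets throttled. */
struct io_slot {
	unsigned long long budget_ns;
};

static bool charge_request(struct io_slot *slot,
			   unsigned long long bytes, bool seeked)
{
	unsigned long long cost = bytes * 1000000000ULL / BYTES_PER_SEC;

	if (seeked)
		cost += AVG_SEEK_NS;
	if (cost > slot->budget_ns)
		return false;		/* out of budget: throttle */
	slot->budget_ns -= cost;
	return true;
}

int main(void)
{
	struct io_slot slot = { .budget_ns = 100000000ULL };	/* 100 ms */

	printf("1 MiB + 4 seeks -> %llu virtual bytes\n",
	       virtual_bytes(1024 * 1024, 4));
	printf("1 MiB with a seek: %s\n",
	       charge_request(&slot, 1024 * 1024, true) ? "ok" : "throttled");
	return 0;
}

Either way the same two device parameters show up as soon as you
convert between bytes and time; making the slot (or the byte budget)
per-container rather than per-task would give the 10/30/20 MB/s style
partitioning Andrea asked about.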


