Subject: Re: [PATCH] cgroup: limit block I/O bandwidth
On Sun, Jan 20 2008, Andrea Righi wrote:
> Andrea Righi wrote:
> > Naveen Gupta wrote:
> >>> Paul Menage wrote:
> >>>> On Jan 18, 2008 7:36 AM, Dhaval Giani <dhaval@linux.vnet.ibm.com> wrote:
> >>>>> On Fri, Jan 18, 2008 at 12:41:03PM +0100, Andrea Righi wrote:
> >>>>>> Allow limiting the block I/O bandwidth for specific process containers
> >>>>>> (cgroups) by imposing additional delays on I/O requests from processes
> >>>>>> that exceed the limits defined in the control group filesystem.
> >>>>>>
> >>>>>> Example:
> >>>>>> # mkdir /dev/cgroup
> >>>>>> # mount -t cgroup -oio-throttle io-throttle /dev/cgroup
> >>>>> Just a minor nit, can't we name it as io, keeping in mind that other
> >>>>> controllers are known as cpu and memory?
> >>>> Or maybe "blockio"?
> >>> Agreed, blockio seems better. Not all I/O is performed on block devices,
> >>> and in this case we're considering block devices only.
> >> Here we want to rate limit in the block layer; I would think the I/O
> >> scheduler is the place where we are in a much better position to do
> >> this kind of limiting.
> >>
> >> Also, we are changing the behavior of the application by adding sleeps
> >> to it during request submission. Moreover, we will prevent requests
> >> from being merged, since we won't allow them to be submitted in this
> >> case.
> >>
> >> Since the bulk of write submission is done by background kernel
> >> threads and we throttle based on limits on current, we will end up
> >> throttling those threads and not the actual processes submitting I/O.
> >
> > Yep, that's true! This works for read operations only... at the very
> > least, if I've understood correctly, we could throttle I/O reads in the
> > submit_bio() path and write operations in __set_page_dirty(). But this
> > would change the application's behavior, so probably the best approach
> > would be to just get I/O statistics from the TASK_IO_ACCOUNTING stuff
> > and implement task delays at the I/O scheduler layer...
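
(As a rough illustration of the read-side idea only: a throttle hook in the
submit_bio() path could look something like the sketch below. The iothrottle
struct and the task_to_iothrottle() lookup are hypothetical placeholders, not
code from the posted patch.)

/*
 * Hypothetical sketch, not the code from the posted patch: charge each
 * submitted read bio to the submitting task's cgroup and sleep when the
 * configured bandwidth is exceeded.
 */
#include <linux/bio.h>
#include <linux/sched.h>
#include <linux/jiffies.h>

struct iothrottle {                     /* assumed per-cgroup state */
	unsigned long bw_limit;         /* allowed bytes per second */
	unsigned long bytes;            /* bytes charged so far */
	unsigned long window_start;     /* jiffies when accounting began */
};

/* placeholder for the cgroup lookup */
extern struct iothrottle *task_to_iothrottle(struct task_struct *tsk);

static void io_throttle_bio(struct bio *bio)
{
	struct iothrottle *iot = task_to_iothrottle(current);
	unsigned long elapsed, allowed;

	if (!iot || !iot->bw_limit || bio_data_dir(bio) != READ)
		return;

	iot->bytes += bio->bi_size;
	elapsed = jiffies - iot->window_start;
	allowed = iot->bw_limit * (elapsed + 1) / HZ;

	/* sleep long enough to bring the rate back under the limit */
	if (iot->bytes > allowed)
		schedule_timeout_interruptible(
			(iot->bytes - allowed) * HZ / iot->bw_limit);
}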
>
> OK, to better illustrate the concept I tried to put the I/O throttling
> mechanism inside the simplest scheduler: noop. But this raises a new
> problem: under certain conditions (typically with write requests) a
> delay imposed on an I/O request of one process can impact all the
> other I/O requests of other processes, causing the whole system to hang
> until the first request is completed, so it seems that we have to deal
> with a classic priority inversion problem.
>
> Obviously the problem doesn't occur if the limited cgroup performs read
> operations only.
>
> I'm posting the modified patch below only for discussion purposes; you
> can test it if you want, but be warned that the whole system could
> hang for certain amounts of time.
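
(For context, the inversion is easy to see in noop's dispatch path. The sketch
below is illustrative only, based on the stock noop scheduler of that era, with
a hypothetical request_is_throttled() check bolted on.)

/*
 * Illustrative only: noop's single-FIFO dispatch, plus a hypothetical
 * request_is_throttled() check to show the priority-inversion problem.
 */
static int noop_dispatch(struct request_queue *q, int force)
{
	struct noop_data *nd = q->elevator->elevator_data;

	if (!list_empty(&nd->queue)) {
		struct request *rq;

		rq = list_entry(nd->queue.next, struct request, queuelist);

		/*
		 * If the head request belongs to a throttled cgroup and we
		 * refuse to dispatch it, every request queued behind it --
		 * from any task -- is stalled as well: a classic priority
		 * inversion on a single FIFO.
		 */
		if (request_is_throttled(rq))
			return 0;

		list_del_init(&rq->queuelist);
		elv_dispatch_sort(q, rq);
		return 1;
	}
	return 0;
}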

Your approach is totally flawed, imho. For instance, you don't want a
process to be able to dirty memory at foo mb/sec but only actually
write it out at bar mb/sec.

The noop-iosched changes are also very buggy. The queue back pointer
breaks reference counting, and storing the task pointer assumes the task
will always be around. That's of course not the case.
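
(For reference, pinning a stored task pointer would normally look something
like the sketch below; the throttle_entry structure is hypothetical and this
is not a fix for the posted patch.)

/*
 * Illustrative only: if an I/O scheduler stores a task pointer, it must
 * take a reference, otherwise the task may exit and its task_struct be
 * freed while the pointer is still in use.
 */
#include <linux/sched.h>

struct throttle_entry {			/* hypothetical per-request record */
	struct task_struct *task;
};

static void throttle_entry_set_task(struct throttle_entry *te,
				    struct task_struct *tsk)
{
	get_task_struct(tsk);		/* pin the task_struct */
	te->task = tsk;
}

static void throttle_entry_release(struct throttle_entry *te)
{
	put_task_struct(te->task);	/* drop the reference when done */
	te->task = NULL;
}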

IOW, you are doing this at the wrong level.

What problem are you trying to solve?

--
Jens Axboe


