Subject: Re: [RFC v1] add new io-scheduler to use cgroup on high-speed device
On Tue, Jun 04, 2013 at 08:03:37PM -0700, Tejun Heo wrote:
> (cc'ing Kent. Original posting at
> http://thread.gmane.org/gmane.linux.kernel/1502484 )
>
> Hello,
>
> On Wed, Jun 05, 2013 at 10:09:31AM +0800, Robin Dong wrote:
> > We want to use blkio.cgroup on high-speed devices (like fusionio) for our mysql clusters.
> > After testing different io-schedulers, we found that cfq is too slow and deadline doesn't support cgroups.
> > So we developed a new io-scheduler: tpps (Tiny Parallel Proportion Scheduler). It dispatches requests
> > using only each group's individual weight and the total weight (its proportion), so it is simple and efficient.
> >
> > Test case: fusionio card, 4 cgroups, iodepth-512
>
> So, while I understand the intention behind it, I'm not sure a
> separate io-sched for this is what we want. Kent and Jens have been
> thinking about this lately so they'll probably chime in. From my POV,
> I see a few largish issues.
>
> * It has to be scalable to relatively large SMP / NUMA
> configurations. It had better integrate with the blk-mq support
> currently being brewed.
>
> * It definitely has to support hierarchy. Nothing which doesn't
> support full hierarchy can be added to cgroup at this point.
>
> * We already have separate implementations in blk-throtl and
> cfq-iosched. Maybe it's too late and too different for cfq-iosched
> given that it's primarily targeted at disks, but I wonder whether we
> can make blk-throtl generic and scalable enough to cover all other
> use cases.

A generic implementation at the block layer also has the advantage that
we can use it for block devices which do not use an IO scheduler (dm, md),
and we can enforce the algorithm higher up in the stack.

Thanks
Vivek
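
For readers following the thread, here is a minimal userspace sketch of the
weight-proportional dispatch that tpps describes above: in each dispatch
round, every group with pending requests gets a share of the round's budget
equal to its weight divided by the total weight of all backlogged groups.
This is not the posted scheduler code; the names (tpps_group,
dispatch_round) and the round-based structure are hypothetical illustration
only.

/*
 * Sketch of proportional dispatch by weight / total weight.
 * Not the tpps kernel code; all names are hypothetical.
 */
#include <stdio.h>

struct tpps_group {
	const char *name;
	unsigned int weight;   /* blkio.weight-style value, e.g. 100..1000 */
	unsigned int pending;  /* requests queued in this group */
};

/* Dispatch up to 'budget' requests, split in proportion to weight. */
static void dispatch_round(struct tpps_group *grp, int ngroups,
			   unsigned int budget)
{
	unsigned int total_weight = 0;
	int i;

	/* Only groups with pending requests count toward the proportion. */
	for (i = 0; i < ngroups; i++)
		if (grp[i].pending)
			total_weight += grp[i].weight;
	if (!total_weight)
		return;

	for (i = 0; i < ngroups; i++) {
		unsigned int quota;

		if (!grp[i].pending)
			continue;
		/* Group's share of this round, rounded down. */
		quota = budget * grp[i].weight / total_weight;
		if (quota > grp[i].pending)
			quota = grp[i].pending;
		grp[i].pending -= quota;
		printf("%s: dispatched %u (weight %u/%u)\n",
		       grp[i].name, quota, grp[i].weight, total_weight);
	}
}

int main(void)
{
	/* Four groups, mirroring the "4 cgroups, iodepth-512" test above. */
	struct tpps_group groups[] = {
		{ "cg1", 100, 512 },
		{ "cg2", 200, 512 },
		{ "cg3", 300, 512 },
		{ "cg4", 400, 512 },
	};

	dispatch_round(groups, 4, 512);
	return 0;
}

Because the per-group quota is rounded down, a round may dispatch slightly
less than the full budget; a real scheduler would carry the remainder over
or hand it to the next backlogged group.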

