Subject: Re: [RFC 0/3] block: An IOPS based ioscheduler
From: Shaohua Li
Date: 2012-01-08
On Fri, 2012-01-06 at 15:37 +0100, Jan Kara wrote:
> On Fri 06-01-12 13:12:29, Shaohua Li wrote:
> > On Thu, 2012-01-05 at 14:50 +0800, Shaohua Li wrote:
> > > On Wed, 2012-01-04 at 18:19 +1100, Dave Chinner wrote:
> > > > On Wed, Jan 04, 2012 at 02:53:37PM +0800, Shaohua Li wrote:
> > > > > An IOPS based I/O scheduler
> > > > >
> > > > > Flash based storage has some characteristics that differ from rotating disks:
> > > > > 1. no I/O seek cost.
> > > > > 2. read and write I/O costs are usually quite different.
> > > > > 3. the time a request takes depends on the request size.
> > > > > 4. high throughput and IOPS, low latency.
> > > > >
> > > > > The CFQ iosched does well for rotating disks, for example fair dispatching
> > > > > and idling for sequential reads. It also has optimizations for flash based
> > > > > storage (for item 1 above), but overall it's not designed for it: it's a
> > > > > slice based algorithm. Since the per-request cost of flash based storage is
> > > > > very low, and drives with a big queue_depth are quite popular now, which
> > > > > makes the dispatching cost even lower, CFQ's jiffy based slice accounting
> > > > > doesn't work well. CFQ also doesn't consider items 2 & 3 above.
> > > > >
> > > > > The FIOPS (Fair IOPS) ioscheduler tries to fix these gaps. It's IOPS based,
> > > > > so it only targets drives without I/O seek. It's quite similar to CFQ, but
> > > > > the dispatch decision is made according to IOPS instead of time slices.
> > > > >
> > > > > The algorithm is simple. The drive has a service tree, and each task lives
> > > > > in the tree. The key into the tree is called vios (virtual I/O). Every
> > > > > request has a vios, which is calculated according to its ioprio, request
> > > > > size and so on. A task's vios is the sum of the vios of all requests it
> > > > > dispatches. FIOPS always selects the task with the minimum vios in the
> > > > > service tree and lets it dispatch a request. The dispatched request's vios
> > > > > is then added to the task's vios and the task is repositioned in the
> > > > > service tree.
> > > > >
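As a rough illustration of that bookkeeping, here is a minimal userspace
sketch (not the actual patches: fiops_select() and fiops_charge() are just
illustrative names, and a plain array with a linear minimum search stands in
for the rbtree based service tree):

/*
 * Minimal sketch of the vios accounting described above.  Each task
 * carries the vios it has consumed so far; dispatch always goes to the
 * task with the smallest vios, and the cost of the dispatched request
 * is charged back to it.
 */
#include <stdio.h>

struct fiops_task {
	const char *name;
	unsigned long long vios;	/* virtual I/O consumed so far */
};

/* pick the task that has received the least service so far */
static struct fiops_task *fiops_select(struct fiops_task *tasks, int nr)
{
	struct fiops_task *min = &tasks[0];
	int i;

	for (i = 1; i < nr; i++)
		if (tasks[i].vios < min->vios)
			min = &tasks[i];
	return min;
}

/* charge the dispatched request's cost back to the task; in the real
 * scheduler this is where ioprio and request size scaling would apply */
static void fiops_charge(struct fiops_task *t, unsigned long long vios)
{
	t->vios += vios;
}

int main(void)
{
	struct fiops_task tasks[] = { { "reader", 0 }, { "writer", 0 } };
	int i;

	for (i = 0; i < 6; i++) {
		struct fiops_task *t = fiops_select(tasks, 2);

		/* pretend every request costs one vios unit */
		fiops_charge(t, 1);
		printf("dispatch from %s (vios now %llu)\n", t->name, t->vios);
	}
	return 0;
}
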
> > > > > The series is organized as follows:
> > > > > Patch 1: separate out CFQ's io context management code. FIOPS will use it too.
> > > > > Patch 2: the core FIOPS.
> > > > > Patch 3: read/write vios scaling for requests. This demonstrates how the vios scaling works.
> > > > >
> > > > > To keep the code simple and easy to review, some of the scaling code isn't
> > > > > included here, and some of it isn't implemented yet.
> > > > >
> > > > > TODO:
> > > > > 1. ioprio support (have patch already)
> > > > > 2. request size vios scale
> > > > > 3. cgroup support
> > > > > 4. tracing support
> > > > > 5. automatically select default iosched according to QUEUE_FLAG_NONROT.
> > > > >
> > > > > Comments and suggestions are welcome!
> > > >
> > > > Benchmark results?
> > > I don't have data yet. The patches are still at an early stage; I want
> > > to focus on the basic idea first.
> > Since you asked, I tested on a 4-socket machine with a JBOD of 12 Intel
> > X25-M SSDs; the filesystem is ext4.
> >
> > Percentage change with FIOPS relative to CFQ, per workload:
> > fio_sync_read_4k -2
> > fio_mediaplay_64k 0
> > fio_mediaplay_128k 0
> > fio_mediaplay_rr_64k 0
> > fio_sync_read_rr_4k 0
> > fio_sync_write_128k 0
> > fio_sync_write_64k -1
> > fio_sync_write_4k -2
> > fio_sync_write_64k_create 0
> > fio_sync_write_rr_64k_create 0
> > fio_sync_write_128k_create 0
> > fio_aio_randread_4k -4
> > fio_aio_randread_64k 0
> > fio_aio_randwrite_4k 1
> > fio_aio_randwrite_64k 0
> > fio_aio_randrw_4k -1
> > fio_aio_randrw_64k 0
> > fio_tpch 9
> > fio_tpcc 0
> > fio_mmap_randread_4k -1
> > fio_mmap_randread_64k 1
> > fio_mmap_randread_1k -8
> > fio_mmap_randwrite_4k 35
> > fio_mmap_randwrite_64k 22
> > fio_mmap_randwrite_1k 28
> > fio_mmap_randwrite_4k_halfbusy 24
> > fio_mmap_randrw_4k 23
> > fio_mmap_randrw_64k 4
> > fio_mmap_randrw_1k 22
> > fio_mmap_randrw_4k_halfbusy 35
> > fio_mmap_sync_read_4k 0
> > fio_mmap_sync_read_64k -1
> > fio_mmap_sync_read_128k -1
> > fio_mmap_sync_read_rr_64k 5
> > fio_mmap_sync_read_rr_4k 3
> >
> > fio_mmap_randread_1k shows a regression against the 3.2-rc7 kernel, but no
> > regression against 3.2-rc6; I'm still checking why. FIOPS shows an
> > improvement for mixed read/write workloads, where CFQ is known to do
> > poorly.
> Nice, but this is only about throughput, isn't it? Part of the reason why
> CFQ takes a hit in throughput is that it prefers sync IO (i.e. reads and
> synchronous writes) over other IO. Does your scheduler do anything like
> that?
It does. The vios is scaled based on whether the request is a read or a
write. By default, reads get 2.5 times more share than writes, which
matches CFQ. The ratio is easy to change via sysfs knobs.
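
For reference, the scaling could be sketched roughly like this (illustrative
only, not the posted patch 3; the names and the read_scale/write_scale
defaults below are just one way to express the 2.5:1 ratio, and in the real
code they would be sysfs tunables):

/*
 * Sketch of read/write vios scaling.  With read_scale = 2 and
 * write_scale = 5, a write is charged 5/2 = 2.5 times the vios of an
 * equally sized read, i.e. reads get 2.5 times more share.
 */
#include <stdbool.h>
#include <stdio.h>

#define VIOS_SCALE_SHIFT	10
#define VIOS_SCALE		(1 << VIOS_SCALE_SHIFT)

static unsigned int read_scale = 2;	/* would be a sysfs knob */
static unsigned int write_scale = 5;	/* would be a sysfs knob */

/* base vios for one request, scaled up for writes relative to reads */
static unsigned long long request_vios(bool is_write)
{
	unsigned long long vios = VIOS_SCALE;

	if (is_write)
		vios = vios * write_scale / read_scale;
	return vios;
}

int main(void)
{
	printf("read vios %llu, write vios %llu\n",
	       request_vios(false), request_vios(true));
	return 0;
}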

> Could you, for example, compare read latency under heavy background writing
> between CFQ and your scheduler? Loads like this were the original
> motivation for CFQ, I believe.
CFQ supports preemption and FIOPS doesn't, so I suppose CFQ's read latency
is still better in such a workload.
In this initial post I just want to demonstrate the basic idea of the
ioscheduler. I'll post more data, for both latency and throughput, in the
next round.

Thanks,
Shaohua


