From: Vivek Goyal <vgoyal@redhat.com>
Subject: Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
On Tue, Jul 13, 2010 at 04:30:23PM -0400, Jeff Moyer wrote:
> Vivek Goyal <vgoyal@redhat.com> writes:
>
> > On Tue, Jul 13, 2010 at 03:38:11PM -0400, Jeff Moyer wrote:
> >> Corrado Zoccolo <czoccolo@gmail.com> writes:
> >>
> >> > Can you test the attached patch, where I also added your changes to
> >> > make jbd(2) to perform sync writes?
> >>
> >> I got new storage, so I have new numbers. I only re-ran deadline and
> >> vanilla cfq for the fs_mark only test. The average of 10 runs comes out
> >> like so:
> >>
> >> deadline: 571.98
> >> vanilla cfq: 107.42
> >> patched cfq: 460.9
> >>
> >> Mixed workload results with your suggested patch:
> >>
> >> fs_mark: 15.65 files/sec
> >> fio: 132.5 MB/s
> >>
> >> So, again, not looking great for the mixed workload, but the patch
> >> does improve the fs_mark only case. Looking at the blktrace data shows
> >> that the jbd2 thread preempts the fs_mark thread at all the right
> >> times. The only thing holding throughput back is the whole notion that
> >> we need to only dispatch from one queue (even though the storage is
> >> capable of serving both the reads and writes simultaneously).
> >>
> >> I added in the patch that allows the simultaneous dispatch of both reads
> >> and writes, and here are the results from that run:
> >>
> >> fs_mark: 15.975 files/sec
> >> fio: 132.4 MB/s
> >>
> >> So, it looks like that didn't help. The reason this patch doesn't come
> >> close to the yield patch in the mixed workload is because the yield
> >> patch set allows the fs_mark process to continue to issue I/O. With
> >> your patch, the fs_mark process does 64KB of I/O, the jbd2 thread does
> >> the journal commit, and then the fio process runs again. Given that the
> >> fs_mark process typically only uses a small fraction of its time slice,
> >> you end up with an unfair balance.
> >
> > Hi Jeff,
> >
> > This is a little strange. Given that both the fs_mark and jbd threads
> > are now on the sync-noidle tree, we should have idled on the sync-noidle
> > tree to provide fairness, and that should have ensured that fs_mark/jbd
> > do more IO and the slice is not lost to the fio thread.
> >
> > Not sure what is happening in practice, though. Only you can look at the
> > traces more closely and see whether the idle timer is being armed or not.
>
> Vivek, if you want to look at traces, just ask. I'd be happy to show
> them to you, upload them, whatever. I'm not sure why you think
> otherwise (though I wouldn't blame you for not wanting to look at
> them!).

I don't mind looking at the traces. Do let me know where I can access them.
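
If you want to check whether the idle timer is being armed, the cfq log
messages in the trace should show it. Assuming this kernel's cfq_log
emits "arm_idle" messages (and with /dev/sdX as a placeholder for your
device), something like:

  blktrace -d /dev/sdX -o - | blkparse -i - | grep arm_idle

should print a line each time cfq arms the idle timer on a queue.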

>
> Now, to answer your question, the jbd2 thread runs and issues a barrier,
> which causes a forced dispatch of requests. After that a new queue is
> selected, and since the fs_mark thread is blocked on the journal commit,
> it's always the fio process that gets to run.

Ok, that explains it. So after the barrier, fio always wins, as it
issues its next read request before fs_mark is able to issue its next
set of writes.
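
One way to confirm this from the traces: assuming this kernel's cfq_log
emits a "set_active" message when a new queue is selected (the trace
file name below is just a placeholder):

  blkparse -i trace | grep set_active

If the above is right, the queue going active right after each barrier
should always be the fio one.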

>
> This, of course, raises the question of why the blk_yield patches didn't
> run into the same problem. Looking back at some saved traces, I don't
> see WBS (write barrier sync) requests, so I wonder if barriers weren't
> supported by my last storage system.

I think the blk_yield patches will also run into the same issue if
barriers are enabled.
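
It should be easy to check whether barriers were actually issued on the
old storage: the RWBS column of blkparse output carries the barrier
flag, so something like (again, the trace file name is a placeholder):

  blkparse -i trace | grep ' WBS '

should turn up the write-barrier-sync requests, if any were issued.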

Thanks
Vivek

