Date:    Wed, 8 Apr 2009 08:44:10 +0800
From:    Wu Fengguang <>
Subject: Re: [PATCH 0/7] Per-bdi writeback flusher threads
[CC Jens]
On Tue, Apr 07, 2009 at 10:03:38PM +0800, Jos Houtman wrote:
>
> I tried the write-back branch from the 2.6-block tree.
>
> And I can at least confirm that it works, at least in relation to the
> writeback not keeping up when the device was congested before it wrote
> 1024 pages.
>
> See: http://lkml.org/lkml/2009/3/22/83 for a bit more information.
Hi Jos, you said that this simple patch solved the problem, but you also
mentioned somewhat suboptimal performance. Can you elaborate on that, so
that I can push the patch or improve it?
Thanks,
Fengguang
---
 fs/fs-writeback.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--- mm.orig/fs/fs-writeback.c
+++ mm/fs/fs-writeback.c
@@ -325,7 +325,8 @@ __sync_single_inode(struct inode *inode,
 		 * soon as the queue becomes uncongested.
 		 */
 		inode->i_state |= I_DIRTY_PAGES;
-		if (wbc->nr_to_write <= 0) {
+		if (wbc->nr_to_write <= 0 ||
+		    wbc->encountered_congestion) {
 			/*
 			 * slice used up: queue for next turn
 			 */

> But the second problem seen in that thread, a write-starve-read problem,
> does not seem to be solved. In this problem the writes of the writeback
> algorithm starve the ongoing reads, no matter what I/O scheduler is
> picked.
>
> For good measure I also applied the blk-latency patches on top of the
> writeback branch; this did not improve anything. Nor did lowering
> max_sectors_kb, as Linus suggested in the IO latency thread.
>
> As for a reproducible test case: the simplest I could come up with was
> modifying the fsync-tester not to fsync, but letting the normal writeback
> handle it, and starting a separate process that tries to sequentially
> read a file from the same device. The read performance drops to a bare
> minimum as soon as the writeback algorithm kicks in.
>
> Jos
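
For reference, a minimal userspace sketch of such a write-starve-read test
case might look like the program below. This is not Jos' actual modified
fsync-tester; the file paths and buffer size are made up for illustration.
One process keeps dirtying pages without ever calling fsync(), while a
second process sequentially reads an existing file on the same device and
reports its average throughput.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define BUF_SZ (1 << 20)	/* 1 MB per request */

/* Dirty pages as fast as possible, never calling fsync(). */
static void writer(const char *path)
{
	char *buf = malloc(BUF_SZ);
	int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (!buf || fd < 0)
		exit(1);
	memset(buf, 'w', BUF_SZ);
	for (;;)
		if (write(fd, buf, BUF_SZ) < 0)
			exit(1);
}

/* Sequentially read an existing file and report average throughput. */
static void reader(const char *path)
{
	char *buf = malloc(BUF_SZ);
	int fd = open(path, O_RDONLY);
	struct timeval t0, t1;
	long long total = 0;
	double secs;
	ssize_t n;

	if (!buf || fd < 0)
		exit(1);
	gettimeofday(&t0, NULL);
	while ((n = read(fd, buf, BUF_SZ)) > 0) {
		total += n;
		gettimeofday(&t1, NULL);
		secs = (t1.tv_sec - t0.tv_sec) +
		       (t1.tv_usec - t0.tv_usec) / 1e6;
		if (secs > 0)
			fprintf(stderr, "read %lld MB, avg %.1f MB/s\n",
				total >> 20, total / 1048576.0 / secs);
	}
}

int main(void)
{
	/* Both paths are assumed to live on the same device. */
	if (fork() == 0)
		writer("/mnt/test/dirty-file");
	else
		reader("/mnt/test/big-existing-file");
	return 0;
}

With the reader running against a file large enough not to fit in the page
cache, the reported throughput should drop sharply once background
writeback starts flushing the writer's dirty pages.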