 
From:	Wu Fengguang
Date:	Sun, 18 Sep 2011
Subject:	Re: [PATCH 10/18] writeback: dirty position control - bdi reserve area
> BTW, I also compared the JBOD performance of the IO-less patchset and
> the vanilla kernel. Basically, performance is slightly improved with
> large memory, and reduced a lot on small-memory servers.
>
>  vanilla   IO-less    change  case
> --------------------------------------------------------------------------------
[...]
> 26508063 17706200 -33.2% JBOD-10HDD-thresh=100M/xfs-100dd-1M-16p-5895M-100M
> 23767810 23374918 -1.7% JBOD-10HDD-thresh=100M/xfs-10dd-1M-16p-5895M-100M
> 28032891 20659278 -26.3% JBOD-10HDD-thresh=100M/xfs-1dd-1M-16p-5895M-100M
> 26049973 22517497 -13.6% JBOD-10HDD-thresh=100M/xfs-2dd-1M-16p-5895M-100M
>
> There are still some rough spots in the JBOD case..

OK, in the dirty_bytes=100M case, I find that the bdi threshold _and_
writeout bandwidth may drop close to 0 for long periods. This change
may keep a bdi from getting stuck:

	/*
	 * bdi reserve area, safeguard against dirty pool underrun and disk idle
	 *
	 * It may push the desired control point of global dirty pages higher
	 * than setpoint. It's not necessary in the single-bdi case because a
	 * minimal pool of @freerun dirty pages will already be guaranteed.
	 */
-	x_intercept = min(write_bw, freerun);
+	x_intercept = min(write_bw + MIN_WRITEBACK_PAGES, freerun);
	if (bdi_dirty < x_intercept) {
		if (bdi_dirty > x_intercept / 8) {
			pos_ratio *= x_intercept;
			do_div(pos_ratio, bdi_dirty);
		} else
			pos_ratio *= 8;
	}
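
To see what the bumped x_intercept does, here is a minimal userspace
sketch (not the kernel code) of the boost logic above. The write_bw and
freerun numbers, the 1024 fixed-point scale for pos_ratio, and the
4k-page value of MIN_WRITEBACK_PAGES are illustrative assumptions:

	/*
	 * Userspace sketch of the pos_ratio boost -- illustrative only.
	 * Assumptions: 4k pages, MIN_WRITEBACK_PAGES = 1024 pages (4MB),
	 * pos_ratio kept in fixed point with 1024 == 1.0.  Build: cc -std=c99
	 */
	#include <stdio.h>

	#define MIN_WRITEBACK_PAGES	1024UL		/* assumed: 4MB in 4k pages */
	#define RATIO_UNIT		1024ULL		/* assumed fixed-point scale */

	static unsigned long min_ul(unsigned long a, unsigned long b)
	{
		return a < b ? a : b;
	}

	int main(void)
	{
		unsigned long write_bw = 512;	/* assumed: pages per period */
		unsigned long freerun  = 4096;	/* assumed: freerun pool, pages */
		unsigned long x_intercept =
			min_ul(write_bw + MIN_WRITEBACK_PAGES, freerun);

		for (unsigned long bdi_dirty = 64; bdi_dirty <= 2048; bdi_dirty *= 2) {
			unsigned long long pos_ratio = RATIO_UNIT;	/* neutral: 1.0 */

			if (bdi_dirty < x_intercept) {
				if (bdi_dirty > x_intercept / 8)
					/* proportional boost; do_div() in-kernel */
					pos_ratio = pos_ratio * x_intercept / bdi_dirty;
				else
					pos_ratio *= 8;	/* 8x floor for a nearly idle bdi */
			}
			printf("bdi_dirty=%4lu  pos_ratio=%llu/1024\n",
			       bdi_dirty, pos_ratio);
		}
		return 0;
	}

With these assumed numbers, pos_ratio ramps from 8x at very low
bdi_dirty down to 1x once bdi_dirty reaches x_intercept. With the old
x_intercept = min(write_bw, freerun), a bdi whose write_bw has decayed
toward 0 gets almost no boost; adding MIN_WRITEBACK_PAGES guarantees a
floor, which matches the report above of the bdi threshold and writeout
bandwidth dropping close to 0 for long periods.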

Thanks,
Fengguang

