Subject: Re: per-cpu blk_plug_list

jbarnes@sgi.com (Jesse Barnes) wrote:
>
> On Mon, Mar 01, 2004 at 01:18:40PM -0800, Chen, Kenneth W wrote:
> > blk_plug_list/blk_plug_lock manage the plug/unplug actions. When lots
> > of CPUs submit I/O simultaneously, there is a lot of traffic moving
> > device queues on and off that global list. Our measurements showed that
> > blk_plug_lock contention prevents the linux-2.6.3 kernel from scaling
> > beyond 40 thousand I/Os per second in the I/O submit path.
>
> This helped out our machines quite a bit too. Without the patch, we
> weren't able to scale above 80000 IOPS, but now we exceed 110000 (and
> reach parity with our internal XSCSI-based tree).
>
> Maybe the plug lists and locks should be per-device though, rather than
> per-cpu? That would make the migration case easier I think. Is that
> possible?

It's possible, yes. It is the preferred solution. We need to identify all
the queues which need to be unplugged to permit a VFS-level IO request to
complete. It involves running down the device stack and running around all
the contributing queues at each level.
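
Roughly, the walk would look something like the sketch below.  Only
->unplug_fn is assumed to be the existing per-queue unplug hook; 'struct
stacked_queue' and its 'lower'/'peer' links are invented here to stand in
for whatever dm/md would have to export.

#include <linux/blkdev.h>

/*
 * Sketch only: pretend each stacking level hands out the set of queues
 * that feed it.  'stacked_queue', 'lower' and 'peer' are illustrative,
 * not 2.6.3 structures.
 */
struct stacked_queue {
	request_queue_t		*q;	/* the queue at this level */
	struct stacked_queue	*lower;	/* first contributor underneath, NULL at a leaf */
	struct stacked_queue	*peer;	/* next contributor at the same level */
};

static void unplug_stack(struct stacked_queue *sq)
{
	for (; sq; sq = sq->peer) {
		if (sq->q->unplug_fn)
			sq->q->unplug_fn(sq->q);	/* kick this level's queue */
		unplug_stack(sq->lower);		/* then everything feeding it */
	}
}

A RAID1 with several legs would simply show up as several 'peer' entries
at one level of that walk.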

Relatively straightforward, but first those dang semaphores in device
mapper need to become spinlocks. I haven't looked into what difficulties
might be present in the RAID implementation.
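
For reference, the per-cpu change whose numbers are quoted above essentially
gives each CPU its own list and lock in place of the single global
blk_plug_list/blk_plug_lock, so concurrent submitters stop bouncing one lock
cacheline around.  A minimal sketch of that idea (not Kenneth's actual patch;
it assumes the q->plug_list link that 2.6.3 already uses to thread queues
onto the global list):

#include <linux/blkdev.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

struct plug_cpu {
	spinlock_t		lock;
	struct list_head	list;
};
static DEFINE_PER_CPU(struct plug_cpu, plug_cpu);

static void __init plug_cpu_init(void)
{
	int i;

	for (i = 0; i < NR_CPUS; i++) {
		spin_lock_init(&per_cpu(plug_cpu, i).lock);
		INIT_LIST_HEAD(&per_cpu(plug_cpu, i).list);
	}
}

/* plug path: park 'q' on this CPU's list instead of the global one */
static void plug_device_percpu(request_queue_t *q)
{
	struct plug_cpu *pc = &get_cpu_var(plug_cpu);
	unsigned long flags;

	spin_lock_irqsave(&pc->lock, flags);
	if (list_empty(&q->plug_list))
		list_add_tail(&q->plug_list, &pc->list);
	spin_unlock_irqrestore(&pc->lock, flags);
	put_cpu_var(plug_cpu);
}

The cost shows up on the flush side: the unplug path now has to visit every
CPU's list, and a queue plugged on one CPU may have to be unplugged from
another, which is the migration awkwardness Jesse refers to.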