    Date: Mon, 4 Feb 2008
    From: Jens Axboe
    Subject: Re: [rfc] direct IO submission and completion scalability issues

    On Mon, Feb 04 2008, Nick Piggin wrote:
    > On Mon, Feb 04, 2008 at 11:12:44AM +0100, Jens Axboe wrote:
    > > On Sun, Feb 03 2008, Nick Piggin wrote:
    > > > On Fri, Jul 27, 2007 at 06:21:28PM -0700, Suresh B wrote:
    > > >
    > > > Hi guys,
    > > >
    > > > Just thought of another way we might do this: migrate the completions
    > > > out to the submitting CPUs, rather than migrating submission into the
    > > > completing CPU.
    > > >
    > > > I've got a basic patch that passes some stress testing. It seems fairly
    > > > simple to do at the block layer, and the bulk of the patch involves
    > > > introducing a scalable smp_call_function for it.
    > > >
    > > > Now it could be optimised more by looking at batching up IPIs or
    > > > optimising the call function path or even migrating the completion event
    > > > at a different level...
    > > >
    > > > However, this is a first cut. It actually seems like it might be taking
    > > > slightly more CPU to process block IO (~0.2%)... however, this is on my
    > > > dual-core system, whose cores share an LLC, which means there are very
    > > > few cache benefits to the migration but non-zero overhead. So on
    > > > multi-socket systems it will hopefully get into positive territory.
    > >
    > > That's pretty funny, I did pretty much the exact same thing last week!
    >
    > Oh nice ;)
    >
    >
    > > The primary difference between yours and mine is that I used a more
    > > private interface to signal a softirq raise on another CPU, instead of
    > > allocating call data and exposing a generic interface. That put the
    > > locking in blk-core instead, turning blk_cpu_done into a structure with
    > > a lock and a list_head instead of just a list_head, and it intercepts
    > > at blk_complete_request() time instead of waiting for an already raised
    > > softirq on that CPU.
    >
    > Yeah, I was looking at that... I didn't really want to add the spinlock
    > overhead to the non-migration case. Anyway, I guess that sort of fine
    > implementation detail is going to have to be sorted out with results.
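
    The migration Nick describes (tag each request with its submitting CPU
    and bounce the completion back with an IPI) might look roughly like the
    minimal sketch below. It uses smp_call_function_single() in its modern
    four-argument form; rq->submit_cpu and __finish_request() are
    illustrative names, not the actual patch.

        /* Runs on the submitting CPU, via IPI from the completing CPU. */
        static void remote_complete(void *info)
        {
                struct request *rq = info;

                __finish_request(rq);
        }

        static void complete_request(struct request *rq)
        {
                int cpu = get_cpu();

                if (rq->submit_cpu == cpu) {
                        /* Already on the submitter: just complete here. */
                        __finish_request(rq);
                } else {
                        /* wait=0: fire-and-forget IPI to the submitter. */
                        smp_call_function_single(rq->submit_cpu,
                                                 remote_complete, rq, 0);
                }
                put_cpu();
        }

    Batching the IPIs, as suggested above, would then amount to queueing
    several requests per call rather than one, which is where a scalable
    smp_call_function implementation comes in.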

    As Andi mentions, we can look into making that lockless. For the initial
    implementation I didn't really care, just wanted something to play with
    that would nicely allow me to control both the submit and complete side
    of the affinity issue.
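
    A lockless version of that per-CPU done list could push completions with
    cmpxchg() and drain them from BLOCK_SOFTIRQ on the submitting CPU. In
    the minimal sketch below, rq->submit_cpu, rq->next_done,
    __finish_request() and raise_softirq_on_cpu() (standing in for the
    private softirq-raise interface mentioned above) are illustrative names,
    not existing kernel code.

        struct blk_cpu_done {
                struct request *head;   /* lockless LIFO of completions */
        };

        static DEFINE_PER_CPU(struct blk_cpu_done, blk_cpu_done);

        /* IRQ side: runs on whichever CPU took the completion interrupt. */
        void blk_complete_request(struct request *rq)
        {
                struct blk_cpu_done *d = &per_cpu(blk_cpu_done, rq->submit_cpu);
                struct request *old;

                do {
                        old = d->head;
                        rq->next_done = old;
                } while (cmpxchg(&d->head, old, rq) != old);

                /*
                 * Only the push that found the list empty needs to kick the
                 * submitter's softirq; later pushes piggyback on that raise.
                 */
                if (!old)
                        raise_softirq_on_cpu(rq->submit_cpu, BLOCK_SOFTIRQ);
        }

        /* Softirq side: runs on the submitting CPU, takes the whole list. */
        static void blk_done_softirq(struct softirq_action *h)
        {
                struct request *rq = xchg(&__get_cpu_var(blk_cpu_done).head, NULL);

                while (rq) {
                        struct request *next = rq->next_done;

                        __finish_request(rq);
                        rq = next;
                }
        }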

    --
    Jens Axboe


