    Subject: Re: [PATCH] improved locking performance in rt_run_flush()
    From: Dave Johnson <>
    Date: Sat, 12 May 2007 12:36:47 -0400

    > While testing adding/deleting large numbers of interfaces, I found
    > rt_run_flush() was the #1 cpu user in a kernel profile by far.
    > The patch below changes rt_run_flush() to take each spinlock
    > protecting the rt_hash_table only once, instead of taking a spinlock
    > for every hash table bucket (and ending up taking the same small set
    > of locks over and over).
    > Deleting 256 interfaces on a 4-way SMP system with 16K buckets reduced
    > overall CPU time by more than 50% and wall time by about 33%. I
    > suspect systems with large amounts of memory (and more buckets) will
    > see an even greater benefit.
    > Note there is one small behavioural change: rt_free() is now called
    > while the lock is held, where before it was called without the lock
    > held. I don't think this should be an issue.
    > Signed-off-by: Dave Johnson <>

    Thanks for this patch.

    I'm not ignoring it; I'm just trying to brainstorm whether there
    is a better way to resolve this inefficiency. :-)
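
    To make the quoted description concrete, here is a minimal user-space
    sketch of the two flush strategies, assuming a bucket/lock layout like
    the kernel's (many buckets sharing a small array of locks). The names
    and counts used here (NR_BUCKETS, NR_LOCKS, bucket_lock(), populate())
    are invented for the illustration and are not taken from the actual
    patch.

/*
 * Minimal user-space sketch of the idea (not the kernel patch itself):
 * a hash table whose buckets share a small array of locks, flushed
 * either by locking per bucket (the old rt_run_flush() behaviour) or by
 * taking each lock exactly once and sweeping all of the buckets it
 * protects.  All names and sizes here are illustrative.
 */
#include <pthread.h>
#include <stdlib.h>

#define NR_BUCKETS	(16 * 1024)
#define NR_LOCKS	256		/* must be a power of two */

struct entry {
	struct entry *next;
};

static struct entry *hash_table[NR_BUCKETS];
static pthread_mutex_t hash_locks[NR_LOCKS] = {
	[0 ... NR_LOCKS - 1] = PTHREAD_MUTEX_INITIALIZER  /* GNU C range initializer */
};

/* Many buckets map onto one lock, much like rt_hash_lock_addr(). */
static pthread_mutex_t *bucket_lock(int i)
{
	return &hash_locks[i & (NR_LOCKS - 1)];
}

static void free_chain(struct entry *e)
{
	struct entry *next;

	for (; e; e = next) {
		next = e->next;
		free(e);
	}
}

/* Old scheme: one lock/unlock per bucket, so each lock is taken
 * NR_BUCKETS / NR_LOCKS times. */
static void flush_per_bucket(void)
{
	int i;

	for (i = 0; i < NR_BUCKETS; i++) {
		struct entry *e;

		pthread_mutex_lock(bucket_lock(i));
		e = hash_table[i];
		hash_table[i] = NULL;
		pthread_mutex_unlock(bucket_lock(i));

		free_chain(e);		/* chain freed outside the lock */
	}
}

/* New scheme: take each lock once and sweep every bucket it covers.
 * The chains are freed while the lock is held, matching the small
 * behavioural change noted in the patch description. */
static void flush_per_lock(void)
{
	int l, i;

	for (l = 0; l < NR_LOCKS; l++) {
		pthread_mutex_lock(&hash_locks[l]);
		for (i = l; i < NR_BUCKETS; i += NR_LOCKS) {
			free_chain(hash_table[i]);
			hash_table[i] = NULL;
		}
		pthread_mutex_unlock(&hash_locks[l]);
	}
}

static void populate(void)
{
	int i;

	for (i = 0; i < NR_BUCKETS; i++) {
		struct entry *e = malloc(sizeof(*e));

		e->next = hash_table[i];
		hash_table[i] = e;
	}
}

int main(void)
{
	populate();
	flush_per_bucket();

	populate();
	flush_per_lock();

	return 0;
}

    With the illustrative numbers above (16K buckets, 256 locks), the
    per-bucket loop takes each lock 64 times while the per-lock loop takes
    it exactly once; eliminating that repeated lock traffic is where the
    quoted CPU-time reduction would come from.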