Date: 2006-11-30
From: Ingo Molnar
Subject: Re: [patch 1/4] - Potential performance bottleneck for Linux TCP

* David Miller <davem@davemloft.net> wrote:

> > yeah, i like this one. If the problem is "too long locked section",
> > then the most natural solution is to "break up the lock", not to
> > "boost the priority of the lock-holding task" (which is what the
> > proposed patch does).
>
> Ingo, you've misread the problem :-)

yeah, the problem isn't a too-long locked section but "too much time
spent holding a lock", which opens us up to possible negative
side-effects of the scheduler's fairness algorithm when it forces a
preemption of the process context while that lock is held (forcing all
subsequent packets to be backlogged).
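for readers following along, here is a minimal user-space sketch of the
pattern being discussed (plain C with illustrative names, not the
actual kernel code, though it loosely mirrors what lock_sock() /
release_sock() and the sk_backlog list do): while a task owns the
socket lock, the softirq receive path cannot process packets for that
socket directly and must queue them on a per-socket backlog, which the
lock owner drains when it releases the lock. if the scheduler preempts
the lock owner mid-section, that backlog only grows:

/*
 * sketch only: single-threaded model of the lock-owner/backlog
 * hand-off. the real kernel backlog is FIFO; a LIFO list keeps
 * this example short.
 */
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

struct packet { int id; struct packet *next; };

struct sock {
	bool owned;             /* process context holds the lock */
	struct packet *backlog; /* packets deferred by softirq */
};

/* softirq receive path: defer the packet if the lock is owned */
static void net_rx(struct sock *sk, struct packet *p)
{
	if (sk->owned) {
		p->next = sk->backlog;  /* queue instead of processing */
		sk->backlog = p;
		printf("packet %d backlogged\n", p->id);
	} else {
		printf("packet %d processed in softirq\n", p->id);
	}
}

/* process context: enter the locked section */
static void lock_sock_sketch(struct sock *sk)
{
	sk->owned = true;
}

/* process context: drain whatever softirq deferred, then unlock */
static void release_sock_sketch(struct sock *sk)
{
	while (sk->backlog) {
		struct packet *p = sk->backlog;
		sk->backlog = p->next;
		printf("packet %d processed from backlog\n", p->id);
	}
	sk->owned = false;
}

int main(void)
{
	struct sock sk = { .owned = false, .backlog = NULL };
	struct packet p1 = { .id = 1 }, p2 = { .id = 2 };

	lock_sock_sketch(&sk);    /* long locked section begins */
	net_rx(&sk, &p1);         /* softirq must backlog */
	net_rx(&sk, &p2);
	/* if we are preempted here, the backlog only grows */
	release_sock_sketch(&sk); /* backlog drained at unlock */
	return 0;
}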

but please read my last mail - i think i'm slowly starting to wake up
;-) I don't think there is any real problem: a tweak to the scheduler
that in essence gives TCP-using tasks a preference changes the balance
of workloads. Such an explicit tweak is already possible.

furthermore, the tweak shifts processing from a prioritized process
context into the highest-priority softirq context. (it has not been
proven that there is any significant /net/ win in performance: all that
was proven is that if we shift TCP processing from process context into
softirq context, then the TCP throughput of that otherwise-penalized
process context increases.)

Ingo
