Subject: Re: >10% performance degradation since 2.6.18
On Fri, Jul 03, 2009 at 09:22:35PM +0200, Jens Axboe wrote:
> On Fri, Jul 03 2009, Matthew Wilcox wrote:
> > Yes, but the irqs/sec increase doesn't appear to be due to MPT interrupts.
> > In the /proc/interrupts summaries, RH5 did 388666895 IOC interrupts and
> > 2.6.30 did 378419042. As a percentage of interrupts, the IOC interrupts
> > were 59.4% with RH and 51.8% with 2.6.30.
>
> OK. So where are the extra irqs from?

Let's see:

Source   2.6.18   2.6.30   Delta
qla        0.8%     0.8%       0
eth       20.0%    27.6%   +7.6%
ioc       59.4%    51.8%   -7.6%
NMI        7.6%     7.9%   +0.3%
LOC       12.2%    10.0%   -2.2%
RES          -      1.8%   +1.8%

I wouldn't be surprised to find out that 2.6.18 accounted rescheduling
interrupts as 'LOC'. So the difference in interrupts is all about
the ethernet card. I believe these systems have an igb card.

The big difference between 2.6.18 and 2.6.30 is that each card now has
eight interrupts in use instead of one (four for RX queues and four for
TX queues). Distressingly, each card's interrupts are all affine to the
same CPU (eth1's eight interrupts are all on CPU 9 and eth0's interrupts
are all on CPU 1). That would seem to be a fruitful avenue of
investigation: whether it is better to limit each card to a single RX/TX
interrupt, or to spread the eight interrupts out over the CPUs.

--
Matthew Wilcox Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours. We can't possibly take such
a retrograde step."

