Date: Tue, 2 Oct 2001 22:34:45 +0200 (CEST)
From: Ingo Molnar <>
Subject: [patch] auto-limiting IRQ load, IRQ-polling, irq-rewrite-2.4.11-D9
On Mon, 1 Oct 2001, Linus Torvalds wrote:
> And how do you select max_rate sanely? [...]
> Saying "hey, that's the users problem", is _not_ a solution. It needs
> to have some automatic cut-off that finds the right sustainable rate
> automatically, instead of hardcoding random default values and asking
> the user to know the unknowable.
good point. I did not ignore this problem, i was just unable to find any solution that felt robust, so i convinced myself that max_rate is the best idea :-)
but a fresh start today, and a good idea from Arjan resulted in a pretty good 'irq load' estimator that implements the above cut-off dynamically:
the method is to detect in do_IRQ() whether we have interrupted an interrupt context of any sort or not. The numbers of 'total' and 'irq' interruptions are counted, and the 'irq load' is "irq / total". The irq code goes into per-irq and per-cpu 'overload mode' if the 'irq load' is higher than ~97%.
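in code, the estimator is conceptually something like this (a sketch only - names like total_count, irq_hit_count and overloaded are illustrative, not necessarily the identifiers used in the patch):

    /*
     * sketch: per-irq, per-cpu load estimation. Called from
     * do_IRQ() before irq_enter(), so in_interrupt() still
     * describes the context this interrupt has hit.
     */
    static inline void note_irq_load(irq_desc_t *desc, int cpu)
    {
        desc->total_count[cpu]++;
        if (in_interrupt())
            desc->irq_hit_count[cpu]++;

        /*
         * 'irq load' == irq_hit_count/total_count; go into
         * overload mode for this irq on this cpu above ~97%:
         */
        if (desc->irq_hit_count[cpu] * 100 >
                desc->total_count[cpu] * 97)
            desc->overloaded[cpu] = 1;
    }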
There is one case of 'underestimation': hardirqs that do not enable interrupts (SA_INTERRUPT handlers) will not be interrupted, and thus are never counted. Fortunately, 99.9% of the network drivers (and other high-rate irq drivers) enable interrupts in their hardirq handlers. The only significant SA_INTERRUPT user is the SCSI layer, which is not an 'external' device.
but in the loads we care about, the irqs are irq-enabled and trigger softirqs - both are measured precisely via the in_interrupt() method.
this load calculation method has a few 'false triggers': eg. syscall-level code that uses local_bh_disable() will be counted as interrupt context - but such code is not very common, and slightly overestimating irq load is always better than underestimating it.
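the reason for this particular false trigger is visible in the (roughly quoted) 2.4 definitions - local_bh_disable() bumps the very counter that in_interrupt() looks at:

    #define in_interrupt()                                          \
        ({ int __cpu = smp_processor_id();                          \
           (local_irq_count(__cpu) + local_bh_count(__cpu) != 0); })

    #define local_bh_disable()                                      \
        do { local_bh_count(smp_processor_id())++; barrier(); } while (0)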
Other estimators, like context switches or rdtsc, have more serious problems i believe. Context switches are imo not a 'perfect' indicator of 'progress' in a number of important situations - eg. when a userspace process is running but not scheduling at all. In that case there is no indicator of progress other than the fact that it's userspace we interrupt.
RDTSC, while a 'perfect' indicator of actual system, irq and user load, is not generic and adds quite some overhead to the lowlevel code. The only 'generic' and accurate time measurement method, do_gettimeofday(), has way too much overhead to be used in lowlevel irq code.
another advantage of the in_interrupt() method is fine-grained metrics: the counters are per-irq and per-cpu, so low-frequency interrupts like the keyboard or mouse interrupt are much less likely to be mitigated needlessly. Separate mitigation does not mean the global effects are not measured correctly: if eg. 16 devices each produce a 10k irqs/sec load (which, in isolation, is not enough to trigger an overload), together they will starve non-irq contexts and will cause an overload in all 16 active irq handlers.
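as a sketch, the estimator state simply hangs off the irq descriptor, indexed by cpu (field names illustrative again):

    typedef struct {
        /* estimator state, one slot per cpu: */
        unsigned int total_count[NR_CPUS];
        unsigned int irq_hit_count[NR_CPUS];
        int overloaded[NR_CPUS];
        /* ... plus the usual handler/status/lock fields ... */
    } irq_desc_t;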
unfortunately there is also another, new problem, which got reported and which i can reproduce as well: due to the milliseconds-long disabling of ethernet interrupts, the receiver can easily overflow, producing overruns and lost packets. The result of this phenomenon is an effectively frozen network: no incoming or outgoing TCP connection ever makes any reasonable progress. So by auto-mitigation alone we only exchanged a 'box lockup' for a 'network lockup' - a different kind of DoS, but still a DoS.
the new patch (attached) provides a solution for this problem too, by introducing a hardirq-polling kernel thread: 'kpolld'. (kpolld is significantly different from ksoftirqd: it gets triggered only in truly hopeless situations, and it handles hardirq load in such cases. I've never seen it run under any 'good' loads i care about. Plus, polling can have significant performance advantages in dedicated networking environments.)
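conceptually the thread is simple - a sketch (kpolld itself is in the patch, but poll_overloaded_devices(), irqs_overloaded() and kpolld_wait below are made-up helpers for illustration):

    static DECLARE_WAIT_QUEUE_HEAD(kpolld_wait);

    static int kpolld(void *unused)
    {
        daemonize();
        strcpy(current->comm, "kpolld");

        for (;;) {
            /*
             * poll the devices whose irq is in overload mode,
             * with their irq line kept disabled:
             */
            poll_overloaded_devices();

            /* let non-irq contexts make progress: */
            if (current->need_resched)
                schedule();

            /* sleep once no irq is overloaded anymore: */
            if (!irqs_overloaded())
                interruptible_sleep_on(&kpolld_wait);
        }
        return 0;
    }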
while this inevitably meant introducing a device-polling framework, it's hard to separate the two things - auto-mitigation alone is not useful without going into poll mode, unfortunately. Only the networking code uses the polling framework currently.
Another option would be to use the interrupt handlers themselves to do the polling - but this puts certain assumptions into existing IRQ handlers, which we cannot do for 2.4 i believe. Plus, the ->poll_controller() driver extension is also used by the netconsole, so we could get free testing for it :-) Another reason is that i think subsystems should have close control over the way they do polling. I also added a few 'synchronous polling' points to the networking code: the device will be polled only once, and only if we are in IRQ overload mode.
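for a driver, the ->poll_controller() hook can be as simple as running the existing irq handler from the polling context - a sketch for eepro100 (the hook name is from the patch, the body is only illustrative):

    static void eepro100_poll_controller(struct net_device *dev)
    {
        /*
         * do the rx/tx work the hardirq handler would have done,
         * with the irq line masked to avoid re-entry:
         */
        disable_irq(dev->irq);
        speedo_interrupt(dev->irq, dev, NULL);
        enable_irq(dev->irq);
    }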
about performance: while it certainly can be tuned further, the estimator works pretty well under the loads i tested. 97% proved to be a reasonable limit, which i was unable to reach via 'normal' loads - it took dedicated tools like updspam to trigger the overload. In overload mode, performance is still pretty good, and TCP connections over the network are snappy. But it would be very nice if those who reported packet drops and bad network performance when using yesterday's patch could re-test the same load situation with this patch applied to 2.4.11-pre2.
note: i still kept max_rate, but it's now scaled along cpu_khz - a 300 MHz box will have a default limit of 30k irqs/sec, a 1 GHz box will have a 100k irqs/sec limit. This limit still has the advantage of potentially catching runaway devices that cause irq storms. Another reason to keep max_rate was to enable router & appliance vendors to set it to a low value to force the system into 'polling mode'. For dedicated boxes this makes perfect sense.
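ie. the default is derived from cpu_khz - the divisor below is inferred from the numbers above, not read out of the patch:

    /* 300 MHz => cpu_khz == 300000 => 30k irqs/sec default: */
    unsigned int max_rate = cpu_khz / 10;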
note2: the patch includes the eepro100 driver patches from the -ac tree as well (Arjan's receiver(?)-hangup fix) - without those fixes i could easily hang my eepro100 cards after a few minutes of extreme load.
the patch can also be downloaded from:
http://redhat.com/~mingo/irq-rewrite/
i've stress-tested the patch on 2.4.11-pre1, on UP-PIC, UP-APIC, and SMP-APIC systems.
Comments, testing feedback welcome,
	Ingo