Date:    Wed, 21 Aug 2019 12:35:59 +0200
From:    Peter Zijlstra <>
Subject: Re: [PATCH 3/3] nvme: complete request in work queue on CPU with flooded interrupts
On Wed, Aug 21, 2019 at 08:37:55AM +0000, Long Li wrote:
> >>>Subject: Re: [PATCH 3/3] nvme: complete request in work queue on CPU
> >>>with flooded interrupts
> >>>
> >>>On Mon, Aug 19, 2019 at 11:14:29PM -0700, longli@linuxonhyperv.com wrote:
> >>>> From: Long Li <longli@microsoft.com>
> >>>>
> >>>> When an NVMe hardware queue is mapped to several CPU queues, it is
> >>>> possible that the CPU this hardware queue is bound to is flooded by
> >>>> returning I/O for other CPUs.
> >>>>
> >>>> For example, consider the following scenario:
> >>>> 1. CPU 0, 1, 2 and 3 share the same hardware queue
> >>>> 2. the hardware queue interrupts CPU 0 for I/O response
> >>>> 3. processes from CPU 1, 2 and 3 keep sending I/Os
> >>>>
> >>>> CPU 0 may be flooded with interrupts from the NVMe device that are I/O
> >>>> responses for CPU 1, 2 and 3. Under heavy I/O load, it is possible
> >>>> that CPU 0 spends all its time serving NVMe and other system
> >>>> interrupts, but doesn't have a chance to run in process context.
> >>>
> >>>Ideally -- and there is some code to effect this -- the load-balancer
> >>>will move tasks away from this CPU.
> >>>
> >>>> To fix this, CPU 0 can schedule a work to complete the I/O request
> >>>> when it detects the scheduler is not making progress. This serves
> >>>> multiple purposes:
> >>>
> >>>Suppose the task waiting for the I/O completion is an RT task, and
> >>>you've just queued it to a regular work. This is an instant priority
> >>>inversion.
>
> This is a choice. We can either not "lock up" the CPU, or finish the I/O
> on time from the IRQ handler. I think throttling only happens in extreme
> conditions, which is rare. The purpose is to make the whole system
> responsive and happy.
Can you please use a sane MUA.. this is unreadable garbage.
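
For readers following the thread, below is a minimal sketch of the mechanism
being debated: deferring completion from hard-IRQ context to a work item when
the CPU looks saturated. It is not the patch under review; demo_queue,
saturation_detected() and complete_one_request() are hypothetical stand-ins
for the real driver structures, the real "scheduler is not making progress"
heuristic, and the real NVMe completion path.

/*
 * Minimal sketch, assuming a driver-private queue structure. The point
 * of contention is visible in demo_irq_handler(): the deferred path
 * trades IRQ-context latency for process-context fairness.
 */
#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct demo_queue {
	struct work_struct complete_work;
	/* real code would also track the pending completions here */
};

static bool saturation_detected(void)
{
	/* Hypothetical stub; the patch's heuristic checks whether the
	 * scheduler has made progress on this CPU. */
	return false;
}

static void complete_one_request(struct demo_queue *q)
{
	/* Hypothetical stand-in for the driver's completion handling. */
}

static void demo_complete_work(struct work_struct *work)
{
	struct demo_queue *q = container_of(work, struct demo_queue,
					    complete_work);

	/* Runs in process context: preemptible, so other tasks can run. */
	complete_one_request(q);
}

static irqreturn_t demo_irq_handler(int irq, void *data)
{
	struct demo_queue *q = data;

	if (saturation_detected()) {
		/*
		 * Defer instead of completing inline. This is Peter's
		 * objection: a task waiting on this I/O may be an RT
		 * task, while the work item runs at normal priority --
		 * an instant priority inversion.
		 */
		schedule_work(&q->complete_work);
		return IRQ_HANDLED;
	}

	complete_one_request(q);	/* fast path: complete inline */
	return IRQ_HANDLED;
}

/* at setup time: INIT_WORK(&q->complete_work, demo_complete_work); */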