Subject: Re: [PATCH 0/2] genirq/affinity: try to make sure online CPU is assgined to irq vector
On Tue, Jan 16, 2018 at 12:25:19PM +0100, Thomas Gleixner wrote:
> On Tue, 16 Jan 2018, Ming Lei wrote:
>
> > On Mon, Jan 15, 2018 at 09:40:36AM -0800, Christoph Hellwig wrote:
> > > On Tue, Jan 16, 2018 at 12:03:43AM +0800, Ming Lei wrote:
> > > > Hi,
> > > >
> > > > These two patches fix the IO hang issue reported by Laurence.
> > > >
> > > > 84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs")
> > > > may cause one irq vector to be assigned only to offline CPUs, and then
> > > > this vector can't handle irq any more.
> > >
> > > Well, that very much was the intention of managed interrupts. Why
> > > does the device raise an interrupt for a queue that has no online
> > > cpu assigned to it?
> >
> > It is because of irq_create_affinity_masks().
>
> That still does not answer the question. If the interrupt for a queue is
> assigned to an offline CPU, then the queue should not be used and never
> raise an interrupt. That's how managed interrupts have been designed.

Sorry for not answering it in the first place, but I later realized that:

https://marc.info/?l=linux-block&m=151606896601195&w=2
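
To make the failure mode concrete, here is a minimal user-space sketch
(not the actual kernel code; the CPU counts and vector count are made-up
example values) of how spreading vectors over all possible CPUs can leave
a vector whose affinity mask contains only offline CPUs:

#include <stdio.h>
#include <stdbool.h>

#define NR_POSSIBLE_CPUS 8	/* assumed: CPUs 0-7 are possible */
#define NR_ONLINE_CPUS   4	/* assumed: only CPUs 0-3 are online */
#define NR_VECTORS       4

int main(void)
{
	/*
	 * Spread the possible CPUs evenly over the vectors, roughly what
	 * a post-84676c1f21 style spread does when it works on the
	 * possible mask instead of the online mask.
	 */
	bool vec_mask[NR_VECTORS][NR_POSSIBLE_CPUS] = { { false } };
	int cpu, v;

	for (cpu = 0; cpu < NR_POSSIBLE_CPUS; cpu++)
		vec_mask[cpu * NR_VECTORS / NR_POSSIBLE_CPUS][cpu] = true;

	for (v = 0; v < NR_VECTORS; v++) {
		bool has_online = false;

		for (cpu = 0; cpu < NR_ONLINE_CPUS; cpu++)
			if (vec_mask[v][cpu])
				has_online = true;

		printf("vector %d: %s\n", v, has_online ?
		       "has an online CPU" :
		       "only offline CPUs, can't handle irq");
	}
	return 0;
}

With 8 possible CPUs but only 4 online, vectors 2 and 3 end up with
offline CPUs only, which is the situation the two patches try to avoid.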

Also, wrt. HPSA's queues, it looks like they are not the usual IO queues
(such as NVMe's hw queues), which are supposed to follow a C/S model.
HPSA's queues look more like management queues, I guess, since HPSA is
still a single-queue HBA from the blk-mq point of view.

Cc HPSA and SCSI guys.

Thanks,
Ming
