Subject: [RFC PATCH] genirq: Exclude managed irq during irq migration
    A managed IRQ is shut down rather than migrated to other CPUs
    when its CPU goes offline. When that CPU comes back online, the
    managed IRQ is re-enabled on it. Managed IRQs can therefore be
    used to reduce IRQ migration during CPU hotplug.
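    For reference, managed IRQs typically come from drivers that allocate
    their MSI-X vectors with an affinity descriptor. A minimal sketch,
    with a hypothetical driver and vector count (not part of this patch):

    #include <linux/pci.h>
    #include <linux/interrupt.h>

    /* Ask the PCI core for managed MSI-X vectors: PCI_IRQ_AFFINITY spreads
     * the vectors across CPUs and marks them managed, so they are shut down
     * instead of migrated when their target CPUs go offline.
     */
    static int foo_setup_irqs(struct pci_dev *pdev, unsigned int nr_queues)
    {
    	struct irq_affinity affd = { .pre_vectors = 1 };	/* one non-managed admin vector */
    	int nvec;

    	nvec = pci_alloc_irq_vectors_affinity(pdev, 1, nr_queues + 1,
    					       PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
    					       &affd);
    	return nvec < 0 ? nvec : 0;
    }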

    Before a CPU is taken offline, the number of IRQs already allocated
    on that CPU is compared against the total number of IRQ vectors still
    available on the remaining online CPUs. If there are not enough free
    slots for those IRQs to be migrated to, the CPU offline operation is
    aborted. However, the current code also counts managed IRQs as
    migratable, which is not true, and this produces false negatives
    during CPU hotplug and hibernation stress tests.
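    On x86 this check compares the per-CPU allocation count from the vector
    matrix against the vectors still available elsewhere. A simplified
    sketch of that comparison (abridged for illustration, not the verbatim
    kernel code):

    /* Vectors allocated on the outgoing CPU that would need a new home. */
    tomove = irq_matrix_allocated(vector_matrix);
    /* Vectors still free on the remaining online CPUs. */
    avl = irq_matrix_available(vector_matrix, true);
    /* Not enough room elsewhere: refuse to offline the CPU. Counting
     * managed IRQs in 'tomove' is what causes the false negative, since
     * they are shut down rather than moved.
     */
    if (avl < tomove)
    	return -ENOSPC;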

    For example:

    cat /sys/kernel/debug/irq/domains/VECTOR

    name: VECTOR
    size: 0
    mapped: 338
    flags: 0x00000103
    Online bitmaps: 168
    Global available: 33009
    Global reserved: 83
    Total allocated: 255 <------
    System: 43: 0-21,50,128,192,233-236,240-242,244,246-255
    | CPU | avl | man | mac | act | vectors
        0   180     1     1    18  32-49
        1   196     1     1     2  32-33
      ...
      166   197     1     1     1  32
      167   197     1     1     1  32

    # put CPU167 offline
    pepc.standalone cpu-hotplug offline --cpus 167

    cat /sys/kernel/debug/irq/domains/VECTOR

    name: VECTOR
    size: 0
    mapped: 338
    flags: 0x00000103
    Online bitmaps: 167
    Global available: 32812
    Global reserved: 83
    Total allocated: 254 <------
    System: 43: 0-21,50,128,192,233-236,240-242,244,246-255
    | CPU | avl | man | mac | act | vectors
        0   180     1     1    18  32-49
        1   196     1     1     2  32-33
      ...
      166   197     1     1     1  32

    After CPU167 goes offline, the number of allocated vectors decreases
    from 255 to 254. The only IRQ on CPU167 is a managed one (the 'mac'
    column), so it is not migrated anywhere. The current code nevertheless
    assumes that there is 1 IRQ to be migrated.
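    For context, the per-CPU bookkeeping in kernel/irq/matrix.c already
    tracks managed allocations separately (struct abridged; comments added
    here for illustration):

    struct cpumap {
    	unsigned int	available;		/* free vectors on this CPU */
    	unsigned int	allocated;		/* all allocated vectors, incl. managed */
    	unsigned int	managed;		/* vectors reserved for managed IRQs */
    	unsigned int	managed_allocated;	/* managed vectors actually in use */
    	...
    };

    In the example above CPU167 has allocated == 1 and managed_allocated == 1,
    so the number of vectors that actually need to move is 0.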

    Fix the check by subtracting the number of allocated managed IRQs
    from the total allocated count.

    Fixes: 2f75d9e1c905 ("genirq: Implement bitmap matrix allocator")
    Reported-by: Wendy Wang <wendy.wang@intel.com>
    Signed-off-by: Chen Yu <yu.c.chen@intel.com>
    ---
    kernel/irq/matrix.c | 2 +-
    1 file changed, 1 insertion(+), 1 deletion(-)

    diff --git a/kernel/irq/matrix.c b/kernel/irq/matrix.c
    index 1698e77645ac..d245ad76661e 100644
    --- a/kernel/irq/matrix.c
    +++ b/kernel/irq/matrix.c
    @@ -475,7 +475,7 @@ unsigned int irq_matrix_allocated(struct irq_matrix *m)
     {
     	struct cpumap *cm = this_cpu_ptr(m->maps);
    
    -	return cm->allocated;
    +	return cm->allocated - cm->managed_allocated;
     }
    
     #ifdef CONFIG_GENERIC_IRQ_DEBUGFS
    --
    2.25.1