Subject: Re: IRQ affinities

Paul Jackson wrote:
> Peter wrote:
>> That's a new feature; and it's quite common that new features require
>> code changes.
>
> It's common for new features to require code changes to take advantage
> of the new features.
>
> It's less desirable that taking advantage of such new features breaks
> existing, basically unrelated, code.
>
> My gut sense is that, in a misguided effort to find a "simple" answer
> to irq distribution, we (well, y'all) are trying to attach this
> feature to cpusets or cgroups.
>
> Let me ask a different question:
>
> What solutions would you (Max, Peter, Ingo, lurkers, ...) be
> suggesting for this 'IRQ affinity' problem if cpusets and
> cgroups didn't exist in any form whatsoever?

As Peter explained, I'm focusing on the "CPU isolation" aspect, i.e. shielding
a CPU (or a set of CPUs) from various kernel activities (load balancing, soft
and hard IRQ handling, workqueues, etc.).

For the IRQs specifically, all I need is a way to tell the kernel not to route
IRQs to certain CPUs. That mostly works already via /proc/irq/N/smp_affinity;
the problem is dynamically allocated IRQs, because the /proc/irq/N directory
does not exist until those IRQs are allocated/enabled.
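
To illustrate the existing interface, here is a minimal userspace sketch (the
file name and arguments are my own illustrative choices, not from any real
tool) that writes a CPU mask to /proc/irq/N/smp_affinity. It fails in exactly
the case described above, when /proc/irq/N does not exist yet:

/* set_irq_affinity.c - route one IRQ to a given CPU mask via procfs. */
#include <stdio.h>

int main(int argc, char **argv)
{
        char path[64];
        FILE *f;

        if (argc != 3) {
                fprintf(stderr, "usage: %s <irq> <hexmask>\n", argv[0]);
                return 1;
        }

        snprintf(path, sizeof(path), "/proc/irq/%s/smp_affinity", argv[1]);
        f = fopen(path, "w");
        if (!f) {
                perror(path);   /* e.g. IRQ not allocated: no /proc/irq/N */
                return 1;
        }
        fprintf(f, "%s\n", argv[2]);
        if (fclose(f)) {        /* write errors surface on flush/close */
                perror(path);
                return 1;
        }
        return 0;
}

For example, "./set_irq_affinity 19 3" would keep IRQ 19 on CPUs 0-1.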

Originally I introduced a global cpu_isolated_map, and the IRQ code used that
map to exclude CPU(s) from IRQ routing. What I've realized now is that all I
need is /proc/irq/default_smp_affinity. In other words, I just need to export
the default mask used by the IRQ layer. I think this makes sense regardless of
what cpuset-based solution we come up with.
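
To make that concrete, a boot-time tool could then write the default mask once,
before drivers allocate their IRQs (a sketch only; the mask value assumes a
hypothetical 4-CPU box with CPU 3 isolated):

/* set_default_irq_affinity.c - exclude isolated CPUs from the default
 * IRQ affinity mask so dynamically allocated IRQs never land on them.
 */
#include <stdio.h>

int main(void)
{
        const char *path = "/proc/irq/default_smp_affinity";
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return 1;
        }
        fprintf(f, "7\n");      /* 0x7 = CPUs 0-2; CPU 3 stays IRQ-free */
        if (fclose(f)) {
                perror(path);
                return 1;
        }
        return 0;
}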

Max

