Subject: Re: [RFC GIT PULL] softirq: Consolidation and stack overrun fix
On Sun, Sep 22, 2013 at 02:41:01PM +1000, Benjamin Herrenschmidt wrote:
> On Sun, 2013-09-22 at 14:39 +1000, Benjamin Herrenschmidt wrote:
> > How do you do your per-cpu on x86 ?

We use a segment offset. Something like:

inc %gs:var;

would be a per-cpu increment. The actual memory location used for the
memop is the variable address + GS offset.

And our GS offset is per cpu and points to the base of the per cpu
segment for that cpu.
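In C it ends up as something like this (just a sketch, not the actual
percpu.h macros -- my_cpu_inc() is a made-up name, and 'var' is assumed
to live in the per-cpu section so its link address is an offset into the
per-cpu area):

/* gs-relative increment on x86-64: the %%gs: prefix makes the hardware
 * add this CPU's per-cpu segment base to the operand address, so the
 * same instruction hits a different copy on every CPU. */
#define my_cpu_inc(var) \
	asm volatile("incq %%gs:%0" : "+m" (var))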

> Also, do you have a half-decent way of getting to per-cpu from asm ?

Yes, see above :-)

Assuming we repurpose r13 as the per-cpu base, you could implement the
whole this_cpu_* stuff -- which is locally atomic, ie. safe against IRQs
and preemption -- as:

loop:
	lwarx	rt, var, r13
	addi	rt, rt, 1
	stwcx.	rt, var, r13
	bne-	loop
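Wrapped in GCC inline asm that would look something like this (again
just a sketch, my_cpu_inc() is made up; it assumes r13 holds this CPU's
per-cpu base and 'off' is the variable's offset within the per-cpu area):

/* Locally-atomic per-cpu increment for ppc64: the stwcx./bne- retry
 * makes the read-modify-write safe against IRQs and preemption without
 * needing a full SMP atomic. */
static inline void my_cpu_inc(unsigned long off)
{
	unsigned int tmp;

	asm volatile(
"1:	lwarx	%0,%1,13\n"	/* load-reserve word at r13 + off */
"	addi	%0,%0,1\n"	/* increment */
"	stwcx.	%0,%1,13\n"	/* store-conditional back */
"	bne-	1b"		/* lost the reservation, retry */
	: "=&r" (tmp)
	: "b" (off)
	: "cr0", "memory");
}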

Except, I think your ll/sc pair is actually slower than doing:

local_irq_save(flags)
var++;
local_irq_restore(flags)

Esp. with the lazy irq disable you have.
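That is roughly what the generic this_cpu_*() fallbacks do (sketch only,
my_cpu_add() is a made-up name; it assumes this_cpu_ptr() resolves the
address of this CPU's copy):

/* Disabling local interrupts is enough to make the read-modify-write
 * atomic with respect to this CPU; no ll/sc needed. */
#define my_cpu_add(pcp, val)				\
do {							\
	unsigned long __flags;				\
	local_irq_save(__flags);			\
	*this_cpu_ptr(&(pcp)) += (val);			\
	local_irq_restore(__flags);			\
} while (0)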

And I'm fairly sure using ll/sc pairs as generic per-cpu accessors isn't
sane, but I'm not sure PPC64 has other memops with an implicit addition
like that.

As to the problem of GCC moving r13 about: some archs carve out
exceptions in the register allocator and leave certain registers alone.
IIRC MIPS does this and uses one of those (ISTR there are two) for the
per-cpu base address.
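Something like a global register variable would do it (sketch only;
__my_cpu_base and my_cpu_ptr() are made-up names -- alternatively
-ffixed-r13 takes the register away from the allocator entirely):

/* Pin the per-cpu base in r13 so GCC never allocates or spills it. */
register unsigned long __my_cpu_base asm("r13");

static inline void *my_cpu_ptr(unsigned long off)
{
	return (void *)(__my_cpu_base + off);
}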





