Subject: Re: NMI handling rework for x86
On Fri, 15 Nov 2002, Dipankar Sarma wrote:

> The RCU part is fairly simple - you want to avoid having to acquire
> a lock for every NMI event to walk the handler list, so you do it
> lockfree. If a process running on a different CPU tries to
> free an nmi handler, it removes it from the list, issues an
> rcu callback (to be invoked after all CPUs have gone through
> a context switch or executed user-level code ensuring that the
> deleted nmi handler can't be running) and waits for completion of

How are you so sure the handler isn't running? You can get an NMI after
any cpu instruction in between all of that happening, and since it can
happen on multiple processors with a shared nmi handler list, you're
almost never going to find the list not being traversed by some
processor. Try synchronising the cpus for a removal when they're all
handling an NMI every millisecond.

> the callback. The rcu callback handler wakes it up.
> It is all hidden under list_add_rcu()/list_del_rcu() and __list_for_each_rcu().
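For reference, here is a rough sketch of the scheme being described. This
is only my reading of the description above, not the actual patch; the
struct layout, the handler prototype, the function names and the
three-argument call_rcu() are guesses.

/*
 * Sketch only -- illustrating the described scheme, not the posted
 * patch.  Struct layout, handler prototype and the call_rcu()
 * signature are guesses.
 */
struct nmi_handler {
	struct list_head	link;
	void			(*handler)(struct pt_regs *regs);
	struct completion	*released;
	struct rcu_head		rcu;
};

static LIST_HEAD(nmi_handler_list);
static spinlock_t nmi_list_lock = SPIN_LOCK_UNLOCKED;	/* writers only */

/* NMI path: walk the list without taking any lock. */
static void call_nmi_handlers(struct pt_regs *regs)
{
	struct list_head *p;

	__list_for_each_rcu(p, &nmi_handler_list)
		list_entry(p, struct nmi_handler, link)->handler(regs);
}

/*
 * Runs only after every CPU has passed through a quiescent state,
 * i.e. after any NMI that could still see the unlinked entry has
 * returned.
 */
static void nmi_handler_freed(void *arg)
{
	struct nmi_handler *h = arg;

	complete(h->released);
}

void unset_nmi_handler(struct nmi_handler *h)
{
	DECLARE_COMPLETION(done);

	h->released = &done;

	spin_lock(&nmi_list_lock);
	list_del_rcu(&h->link);		/* concurrent readers stay safe */
	spin_unlock(&nmi_list_lock);

	call_rcu(&h->rcu, nmi_handler_freed, h);
	wait_for_completion(&done);	/* block until the grace period ends */
}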

I don't think you can rely on completion() to ensure this. It's hardly an
atomic operation in this context, so what's wrong with
spin_trylock(nmi_handler_lock) and an early bailout on failure?
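Something along these lines (again only a sketch, reusing the
hypothetical struct and list from the snippet above):

/*
 * Hypothetical alternative with a plain spinlock: the NMI path bails
 * out early if the lock is held, so an entry can only be unlinked
 * while no CPU is walking the list.
 */
static spinlock_t nmi_handler_lock = SPIN_LOCK_UNLOCKED;

static void call_nmi_handlers(struct pt_regs *regs)
{
	struct list_head *p;

	if (!spin_trylock(&nmi_handler_lock))
		return;			/* list is being changed, drop this NMI */

	list_for_each(p, &nmi_handler_list)
		list_entry(p, struct nmi_handler, link)->handler(regs);

	spin_unlock(&nmi_handler_lock);
}

void unset_nmi_handler(struct nmi_handler *h)
{
	spin_lock(&nmi_handler_lock);
	list_del(&h->link);
	spin_unlock(&nmi_handler_lock);
	/*
	 * Once the lock is dropped no CPU can still be inside the
	 * walk above, so the handler may be freed right away.
	 */
}

The obvious tradeoff is that an NMI which races with a registration or
removal gets silently dropped, whereas the RCU scheme never skips one.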

Zwane
--
function.linuxpower.ca

