Subject: Re: [PATCH] updated low-latency zap_page_range
From: Robert Love
Date: Wed, 24 Jul 2002
On Wed, 2002-07-24 at 17:45, Andrew Morton wrote:

> Robert Love wrote:
> >
> > +static inline void cond_resched_lock(spinlock_t * lock)
> > +{
> > +        if (need_resched() && preempt_count() == 1) {
> > +                _raw_spin_unlock(lock);
> > +                preempt_enable_no_resched();
> > +                __cond_resched();
> > +                spin_lock(lock);
> > +        }
> > +}
>
> Maybe I'm being thick. How come a simple spin_unlock() in here
> won't do the right thing?

It will, but then we would check need_resched() twice, and preempt_count()
again. My original version just did the "unlock; lock" combo, and thus the
checking was automatic... but if we want to check before we unlock, we
might as well be optimal about it.
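
For comparison, a sketch of the simple-spin_unlock() variant (assuming the
2.5 CONFIG_PREEMPT behavior, where spin_unlock() is roughly
_raw_spin_unlock() followed by preempt_enable(), and preempt_enable()
redoes the need_resched()/preempt count test):

        static inline void cond_resched_lock(spinlock_t * lock)
        {
                if (need_resched() && preempt_count() == 1) {
                        /* preempt_enable() inside spin_unlock() rechecks
                         * need_resched() and the preempt count here */
                        spin_unlock(lock);
                        __cond_resched();
                        spin_lock(lock);
                }
        }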

> And this won't _really_ compile to nothing with CONFIG_PREEMPT=n,
> will it? It just does nothing because preempt_count() is zero?

I hope it compiles to nothing! There is a constant false in the if()...
oh, wait: to preserve possible side effects, gcc will keep the
need_resched() call, so I guess we should reorder it as:

        if (preempt_count() == 1 && need_resched())

Then we get "if (0 && ..)" which should hopefully be evaluated away.
Then the inline is empty and nothing need be done.
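
The whole helper then reads (same body as the patch above, just the two
tests swapped; a sketch, assuming preempt_count() really does become the
constant 0 with CONFIG_PREEMPT=n):

        static inline void cond_resched_lock(spinlock_t * lock)
        {
                if (preempt_count() == 1 && need_resched()) {
                        _raw_spin_unlock(lock);
                        preempt_enable_no_resched();
                        __cond_resched();
                        spin_lock(lock);
                }
        }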

Robert Love

