Subject: Re: new IRQ scalability changes in 2.3.48
On Mon, 13 Mar 2000, Ingo Molnar wrote:

>spinlock_depth_need_resched is decremented by 1 if need_resched is changed
>from 0 to 1, and is incremented by 1 if need_resched goes from 1 to 0.
>schedule() in this case has to check:
>
> if (current->spinlock_depth_need_resched + current->need_resched)
> BUG();
>
>but the fast path (setting need_resched is much rarer than taking a
>spinlock) is again only 2 instructions.

The above looks saner.
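
A minimal sketch of the bookkeeping described above, for reference; the
helper names here are made up for illustration, not the actual 2.3.48
code, and it assumes spin_lock()/spin_unlock() adjust the same field
(which is what would keep that fast path at two instructions):

	/* Illustrative: spinlock_depth_need_resched tracks
	 * (spinlock depth - need_resched), so the sum of the two
	 * fields always recovers the true lock depth. */

	static inline void mark_need_resched(struct task_struct *p)
	{
		if (!p->need_resched) {		/* 0 -> 1 transition */
			p->need_resched = 1;
			p->spinlock_depth_need_resched--;
		}
	}

	static inline void clear_need_resched(struct task_struct *p)
	{
		if (p->need_resched) {		/* 1 -> 0 transition */
			p->need_resched = 0;
			p->spinlock_depth_need_resched++;
		}
	}

	/* in schedule(): entering with spinlocks held is a bug */
	if (current->spinlock_depth_need_resched + current->need_resched)
		BUG();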

Anyway, it looks more like your objective is to put automatic conditional
schedules all over the place (whenever a lock gets dropped) than to take
advantage of getting rescheduled in kernel mode by an IRQ.

That approach is more generic, but it's also costly, because it does at
runtime what we could do at compile time.
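
Concretely, that runtime scheme would presumably make every unlock look
something like this (a sketch under the counting assumption above, not
the actual patch; __spin_unlock stands in for the raw unlock):

	static inline void spin_unlock(spinlock_t *lock)
	{
		__spin_unlock(lock);	/* the real unlock */
		/* dec + branch, paid on every unlock, nested or not;
		 * the field only goes negative when the last lock is
		 * dropped while need_resched is set */
		if (--current->spinlock_depth_need_resched < 0)
			schedule();
	}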

For example, if you do:

spin_lock(&lru_list_lock);
spin_lock(&hash_table_lock);
spin_unlock(&hash_table_lock);
spin_unlock(&lru_list_lock);

everybody will know that the spin_lock(&hash_table_lock) doesn't need to
increase the lock_depth, but you increase it anyway, and adding such
bloat is not so nice IMHO.
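
By contrast, with compile-time knowledge the nested acquire could use a
hypothetical no-bookkeeping variant (the _nocheck names are invented here
for illustration), leaving the check only where the depth can actually
reach zero:

	spin_lock(&lru_list_lock);		/* counts toward lock depth */
	spin_lock_nocheck(&hash_table_lock);	/* nesting statically known:
						 * no depth bookkeeping */
	spin_unlock_nocheck(&hash_table_lock);	/* no dec, no need_resched
						 * branch */
	spin_unlock(&lru_list_lock);		/* the only place the depth
						 * can hit zero, so the only
						 * reschedule check needed */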

Andrea


