Subject: Re: [patch] preempt-smp.patch, 2.6.8-rc3-mm2
From: Peter Zijlstra
Date: 11 Aug 2004

Hi Ingo,

From the preempt-smp patch:

@@ -306,6 +306,21 @@ static int invalidate_list(struct list_h
 		struct list_head * tmp = next;
 		struct inode * inode;
 
+		/*
+		 * Preempt if necessary. To make this safe we use a dummy
+		 * inode as a marker - we can continue off that point.
+		 * We rely on this sb's inodes (including the marker) not
+		 * getting reordered within the list during umount. Other
+		 * inodes might get reordered.
+		 */
+		if (need_resched_lock()) {
+			list_add_tail(mark, next);
+			spin_unlock(&inode_lock);
+			cond_resched();
+			spin_lock(&inode_lock);
+			tmp = next = mark->next;
+			list_del(mark);
+		}
 		next = next->next;
 		if (tmp == head)
 			break;

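(For reference, a minimal user-space sketch of the marker trick the hunk relies on:
park a dummy node just before the current position, drop the lock and reschedule,
then resume the walk from the dummy. The list helpers and the fake lock below are
stand-ins, not the kernel's list.h or inode_lock; only the resume-from-marker
pattern is the point.)

#include <stdio.h>

/* Tiny circular doubly-linked list, in the spirit of the kernel's list_head. */
struct node {
	struct node *next, *prev;
	int payload;			/* 0 for head/marker, nonzero for real entries */
};

static void list_add_tail_demo(struct node *new, struct node *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

static void list_del_demo(struct node *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
}

/* Stand-in for spin_unlock(&inode_lock); cond_resched(); spin_lock(&inode_lock); */
static void drop_lock_and_resched(void)
{
	/* pretend the list may have been reshuffled while we slept */
}

int main(void)
{
	struct node head = { &head, &head, 0 };
	struct node entries[5];
	struct node mark = { NULL, NULL, 0 };	/* the dummy marker */
	struct node *next, *tmp;
	int i, rescheduled = 0;

	for (i = 0; i < 5; i++) {
		entries[i].payload = i + 1;
		list_add_tail_demo(&entries[i], &head);
	}

	next = head.next;
	for (;;) {
		tmp = next;

		/* Pretend need_resched_lock() fired while looking at entry 3. */
		if (!rescheduled && tmp != &head && tmp->payload == 3) {
			rescheduled = 1;
			list_add_tail_demo(&mark, next);	/* park the marker just before 'next' */
			drop_lock_and_resched();
			tmp = next = mark.next;			/* resume right after the marker */
			list_del_demo(&mark);
		}

		next = next->next;
		if (tmp == &head)
			break;
		printf("visiting entry %d\n", tmp->payload);
	}
	return 0;
}
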

Why use cond_resched() in the loop body when you already use need_resched_lock()
in the condition? cond_resched() does not do the cpu_relax(). Nor is it quite nice
to use cond_resched_lock() there, since it would increment preempt_check_count
again, making the step 2, which in turn would make one miss the cpu_relax()
condition.
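For comparison, this is roughly what a cond_resched_lock()-style helper folds into
one call: check whether a reschedule is wanted, and if so drop the lock, yield,
and take the lock again. The snippet below is only a user-space analogue built on
a pthread mutex and sched_yield(); need_resched_here() is a made-up stand-in for
need_resched()/need_resched_lock(), and the preempt_check_count bookkeeping from
the preempt-smp patch is not modelled.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;

/* Made-up stand-in for need_resched()/need_resched_lock(); always "yes" here. */
static int need_resched_here(void)
{
	return 1;
}

/*
 * User-space analogue of a cond_resched_lock()-style helper: if a reschedule
 * is wanted, drop the lock, yield the CPU, and take the lock again.  Returns
 * nonzero when the lock was actually dropped, so the caller knows it must
 * revalidate whatever the lock was protecting.
 */
static int cond_resched_lock_demo(pthread_mutex_t *lock)
{
	if (!need_resched_here())
		return 0;
	pthread_mutex_unlock(lock);
	sched_yield();
	pthread_mutex_lock(lock);
	return 1;
}

int main(void)
{
	pthread_mutex_lock(&demo_lock);
	if (cond_resched_lock_demo(&demo_lock))
		printf("dropped and retook the lock, position must be revalidated\n");
	pthread_mutex_unlock(&demo_lock);
	return 0;
}

The open-coded spin_unlock()/cond_resched()/spin_lock() sequence in the hunk is
the same dance done by hand, with the marker taking care of the revalidation.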

Peter Zijlstra



