Date: Fri, 4 Apr 2008
From: Roel Kluin <12o3l@tiscali.nl>
Subject: [PATCH -mm] likely_prof: update to test_and_set_bit_lock / clear_bit_unlock
Switch to the _lock bitops: they are faster, they imply the acquire/release ordering this code needs (so the explicit smp_mb__before_clear_bit() can be dropped), and they set a better example to follow.

Signed-off-by: Roel Kluin <12o3l@tiscali.nl>
---
As suggested by Nick Piggin. To be applied after Daniel's likeliness patches
and my previous likeliness-accounting-change-and-cleanup.patch.
The patch was checkpatch.pl-checked, compile-, sparse- and run-tested (UML).
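
For reference, a minimal sketch of the locking pattern the patch switches
to (the names below are illustrative, not from the patch; the real user is
do_check_likely() in lib/likely_prof.c):

#include <linux/bitops.h>

static unsigned long example_lock;	/* bit 0 serves as the lock bit */

static void example_update(void)
{
	/*
	 * test_and_set_bit_lock() returns the old bit value, so zero
	 * means the lock was acquired; it implies acquire ordering.
	 */
	if (!test_and_set_bit_lock(0, &example_lock)) {
		/* ... update the shared state here ... */

		/*
		 * clear_bit_unlock() implies release ordering, so no
		 * separate smp_mb__before_clear_bit() is needed.
		 */
		clear_bit_unlock(0, &example_lock);
	}
}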

diff --git a/lib/likely_prof.c b/lib/likely_prof.c
index c9a8d1d..0da3181 100644
--- a/lib/likely_prof.c
+++ b/lib/likely_prof.c
@@ -36,7 +36,7 @@ int do_check_likely(struct likeliness *likeliness, unsigned int ret)
 		 * disable and it was a bit cleaner then using internal __raw
 		 * spinlock calls.
 		 */
-		if (!test_and_set_bit(0, &likely_lock)) {
+		if (!test_and_set_bit_lock(0, &likely_lock)) {
 			if (likeliness->label & LP_UNSEEN) {
 				likeliness->label &= (~LP_UNSEEN);
 				likeliness->next = likeliness_head;
@@ -44,8 +44,7 @@ int do_check_likely(struct likeliness *likeliness, unsigned int ret)
 				likeliness->caller = (unsigned long)
 					__builtin_return_address(0);
 			}
-			smp_mb__before_clear_bit();
-			clear_bit(0, &likely_lock);
+			clear_bit_unlock(0, &likely_lock);
 		}
 	}

