Subject: Re: new semaphores
On Sat, 28 Aug 1999, Linus Torvalds wrote:

>Maybe. Be careful that you don't get this wrong, though: you should NOT
>think that semaphores are always mutexes, and there could be multiple
>concurrent "up()" calls with semaphore counts > 1 etc, and I'd rather be
>safe than sorry.

Given that up() doesn't wake anybody up if the resulting count is > 0: yes,
it seems that I can have only one task at a time in the critical section
even when there could be two tasks (anyway, it doesn't seem to be a
deadlock).

IMHO it would be better to have a fast mutex than to be forced to use slow
semaphores (since everybody uses semaphores as mutexes).

Does anybody know of a place that uses semaphores initialized with a
count > 1? I don't.
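
(For reference, this is the pattern I mean; a minimal sketch with made-up
names foo_sem/foo(), using the stock DECLARE_MUTEX macro that initializes
the count to 1:)

	#include <asm/semaphore.h>

	static DECLARE_MUTEX(foo_sem);	/* a semaphore with count == 1: a mutex */

	static void foo(void)
	{
		down(&foo_sem);		/* enter the critical section */
		/* ... touch the shared state ... */
		up(&foo_sem);		/* leave the critical section */
	}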

>down_trylock has already _failed_.

Supposing the semaphores are mutexes, you have two choices in down_trylock():

1) unconditionally undo the count decrement while syncing the count with
the sleepers, and unconditionally wake up the sleepers if somebody may get
the semaphore now.
2) avoid all wakeup calls and try to grab the semaphore. _Only_ if we
can't grab the semaphore, undo the count decrement. No wakeup in any
case in down_trylock (in the mutex case at least). See the sketch below.
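
(To make the difference concrete, here is a rough side-by-side sketch of
the two strategies for the mutex case. This is not the actual patch: the
function names are mine, both reuse semaphore.c's semaphore_lock spinlock,
and the real-semaphore variant of strategy 2 additionally needs the
else/wake_up shown in the patch at the end of this mail.)

	/* Strategy (1): roughly what the stock 2.3.15 __down_trylock()
	   does.  The fast path already failed, so we always return 1. */
	static int trylock_undo_and_wake(struct semaphore *sem)
	{
		int sleepers;

		spin_lock_irq(&semaphore_lock);
		sleepers = sem->sleepers + 1;	/* everybody else + us */
		sem->sleepers = 0;
		/* unconditionally undo our decrement ... */
		if (!atomic_add_negative(sleepers, &sem->count))
			/* ... and run a wakeup even though, in the common
			   case, nobody is sleeping */
			wake_up(&sem->wait);
		spin_unlock_irq(&semaphore_lock);
		return 1;
	}

	/* Strategy (2): try to keep our own decrement, i.e. grab the
	   semaphore; undo only on failure, and never call wake_up(). */
	static int trylock_grab_first(struct semaphore *sem)
	{
		int failed;

		spin_lock_irq(&semaphore_lock);
		/* add only "everybody else": if the result is still >= 0
		   we keep our decrement, i.e. we got the semaphore */
		failed = atomic_add_negative(sem->sleepers, &sem->count);
		sem->sleepers = failed ? 1 : 0;	/* no wakeup (mutex case) */
		spin_unlock_irq(&semaphore_lock);
		return failed;
	}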

But the interesting point is that my (2) is more efficient than your (1),
which always ends up running an unnecessary wakeup in the common case
(which means overscheduling), simply because it's not clever about it.

>That part of your patch is definitely bad. There's no way I'll ever add
>this: you increment the semaphore count without waking people up, which
>is just a sure way to set yourself up for nasty surprises.

Whoops, I've just seen a problem in my implementation: instead of

	failed = atomic_add_negative(sem->sleepers, &sem->count);
	sem->sleepers = 0;
	if (failed)
		/* we didn't get the semaphore, so undo the count
		   decrement. */
		atomic_inc(&sem->count);

it seems to me I should do:

	failed = atomic_add_negative(sem->sleepers, &sem->count);
	sem->sleepers = 0;
	if (failed)
		sem->sleepers = 1;

That will make sure that any later up() will wake up one sleeper.
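
(For reference, the up() slow path in 2.3.15's arch/i386/kernel/semaphore.c
is essentially just a wakeup on the semaphore's wait queue; with exclusive
waiters, that wakes exactly one sleeper:)

	void __up(struct semaphore *sem)
	{
		wake_up(&sem->wait);
	}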

To fix the real-semaphore case as well, I should do:

	failed = atomic_add_negative(sem->sleepers, &sem->count);
	sem->sleepers = 0;
	if (failed)
		sem->sleepers = 1;
	else
		wake_up(&sem->wait);

In general, trying to grab the semaphore in __down_trylock as I do is not
a bad idea, since it saves you a wakeup in the common case. See what
happens with your __down_trylock in 99% of cases:

task1           task2                   count   sleepers
-----           -----                   -----   --------
down()                                    0        0
                down_trylock()           -1        0
                  sleepers = 0           -1        0
                  add_neg(1)              0        0
                  wake_up()   <- not needed in 99.9% of cases

The above example has no sleepers, so it won't cause overscheduling, only
some wasted CPU and some bus locking in wake_up. But the same wakeup
happens even if there were sleepers, and that ends in plain
_overscheduling_; see below:

task1       task2               task3                   count   sleepers
-----       -----               -----                   -----   --------
down()                                                    0        0

            down()                                       -1        0
              sleepers++                                 -1        1
              add_neg(0)                                 -1        1
              sleepers = 1                               -1        1
              go to sleep

                                down_trylock()           -2        1
                                  sleepers = 0           -2        0
                                  add_neg(2)              0        0
                                  wake_up()   <- not needed in 99.9% of cases

            woken up
              add_neg(-1)                                -1        0
              sleepers = 1                               -1        1
              go to sleep

My patch completely avoids the overscheduling that you "always" cause in
__down_trylock.

I agree that it's really annoying to rename every struct semaphore to
struct mutex to do the right thing, and it's probably nicer to keep real
semaphores called `struct semaphore'. So now I propose this new patch
against 2.3.15 instead. It only does the __down_trylock optimization and
the wake-one thing.
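
(For context, the wake-one behaviour relies on the exclusive wait-queue
support that the patch uses; roughly, and from memory of the 2.3-era
semantics, the idea is:)

	/* Sketch of the wake-one idea, not part of the patch: a waiter
	 * queued with add_wait_queue_exclusive() and marked
	 * TASK_EXCLUSIVE is woken one at a time -- wake_up() stops after
	 * the first exclusive waiter instead of waking the whole queue. */
	DECLARE_WAITQUEUE(wait, current);

	current->state = TASK_UNINTERRUPTIBLE | TASK_EXCLUSIVE;
	add_wait_queue_exclusive(&sem->wait, &wait);

	schedule();	/* somebody's up()/wake_up() wakes only us */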

Comments?

--- 2.3.15/arch/i386/kernel/semaphore.c	Thu Aug 26 02:54:55 1999
+++ 2.3.15-sem/arch/i386/kernel/semaphore.c	Sun Aug 29 17:05:32 1999
@@ -49,8 +49,8 @@
 {
 	struct task_struct *tsk = current;
 	DECLARE_WAITQUEUE(wait, tsk);
-	tsk->state = TASK_UNINTERRUPTIBLE;
-	add_wait_queue(&sem->wait, &wait);
+	tsk->state = TASK_UNINTERRUPTIBLE|TASK_EXCLUSIVE;
+	add_wait_queue_exclusive(&sem->wait, &wait);
 
 	spin_lock_irq(&semaphore_lock);
 	sem->sleepers++;
@@ -63,28 +63,28 @@
 		 */
 		if (!atomic_add_negative(sleepers - 1, &sem->count)) {
 			sem->sleepers = 0;
-			wake_up(&sem->wait);
 			break;
 		}
 		sem->sleepers = 1;	/* us - see -1 above */
 		spin_unlock_irq(&semaphore_lock);
 
 		schedule();
-		tsk->state = TASK_UNINTERRUPTIBLE;
+		tsk->state = TASK_UNINTERRUPTIBLE|TASK_EXCLUSIVE;
 		spin_lock_irq(&semaphore_lock);
 	}
 	spin_unlock_irq(&semaphore_lock);
 	remove_wait_queue(&sem->wait, &wait);
 	tsk->state = TASK_RUNNING;
+	wake_up(&sem->wait);
 }
 
 int __down_interruptible(struct semaphore * sem)
 {
-	int retval;
+	int retval = 0;
 	struct task_struct *tsk = current;
 	DECLARE_WAITQUEUE(wait, tsk);
-	tsk->state = TASK_INTERRUPTIBLE;
-	add_wait_queue(&sem->wait, &wait);
+	tsk->state = TASK_INTERRUPTIBLE|TASK_EXCLUSIVE;
+	add_wait_queue_exclusive(&sem->wait, &wait);
 
 	spin_lock_irq(&semaphore_lock);
 	sem->sleepers ++;
@@ -98,12 +98,10 @@
 		 * it has contention. Just correct the count
 		 * and exit.
 		 */
-		retval = -EINTR;
 		if (signal_pending(current)) {
+			retval = -EINTR;
 			sem->sleepers = 0;
-			if (atomic_add_negative(sleepers, &sem->count))
-				break;
-			wake_up(&sem->wait);
+			atomic_add(sleepers, &sem->count);
 			break;
 		}
 
@@ -114,8 +112,6 @@
 		 * the lock.
 		 */
 		if (!atomic_add_negative(sleepers - 1, &sem->count)) {
-			wake_up(&sem->wait);
-			retval = 0;
 			sem->sleepers = 0;
 			break;
 		}
@@ -123,12 +119,13 @@
 		spin_unlock_irq(&semaphore_lock);
 
 		schedule();
-		tsk->state = TASK_INTERRUPTIBLE;
+		tsk->state = TASK_INTERRUPTIBLE|TASK_EXCLUSIVE;
 		spin_lock_irq(&semaphore_lock);
 	}
 	spin_unlock_irq(&semaphore_lock);
 	tsk->state = TASK_RUNNING;
 	remove_wait_queue(&sem->wait, &wait);
+	wake_up(&sem->wait);
 	return retval;
 }
 
@@ -142,17 +139,18 @@
  */
 int __down_trylock(struct semaphore * sem)
 {
-	int retval, sleepers;
+	int failed;
 
 	spin_lock_irq(&semaphore_lock);
-	sleepers = sem->sleepers + 1;
-	sem->sleepers = 0;
-
 	/*
-	 * Add "everybody else" and us into it. They aren't
+	 * Add "everybody else" into it. They aren't
 	 * playing, because we own the spinlock.
 	 */
-	if (!atomic_add_negative(sleepers, &sem->count))
+	failed = atomic_add_negative(sem->sleepers, &sem->count);
+	sem->sleepers = 0;
+	if (failed)
+		sem->sleepers = 1;
+	else
 		wake_up(&sem->wait);
 
 	spin_unlock_irq(&semaphore_lock);
Andrea

