Subject: Re: sys_sched_yield fast path

> This is the LinuxThreads spinlock acquire:
>
>
> static void __pthread_acquire(int * spinlock)
> {
>   int cnt = 0;
>   struct timespec tm;
>
>   while (testandset(spinlock)) {
>     if (cnt < MAX_SPIN_COUNT) {
>       sched_yield();
>       cnt++;
>     } else {
>       tm.tv_sec = 0;
>       tm.tv_nsec = SPIN_SLEEP_DURATION;
>       nanosleep(&tm, NULL);
>       cnt = 0;
>     }
>   }
> }
>
>
> Yes, it calls sched_yield(), but this is not the standard wait for a
> mutex; it is for spinlocks that are held for a very short time. Real
> waits are implemented using signals. Moreover, with the new
> implementation of sys_sched_yield() the task releases all of its time
> quantum, so even if a task repeatedly calls sched_yield() the call rate
> is not that high as long as there is at least one other runnable
> process. And if there isn't a task with goodness() > 0, nobody cares
> about sched_yield() performance.

The problem I found with sched_yield() is that things break down under
high contention. If you have 3 processes and one holds a lock, the other
two can ping-pong doing sched_yield() until their priority drops below
that of the process holding the lock. E.g. in a run I just did, where
process 2 holds the lock:

1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
2

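For reference, here is a rough, self-contained userspace reconstruction of
that scenario (not the exact program used for the run above; gcc's __sync
builtins stand in for the LinuxThreads testandset(), and the hold time is
arbitrary):

/* Three processes share a test-and-set lock: the parent ("2") takes it
 * first and burns CPU while holding it; children "0" and "1" spin with
 * sched_yield() and print their id each time they get the CPU. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile int *lock;                      /* 0 = free, 1 = held */

static void spin_for_lock(int id)
{
        while (__sync_lock_test_and_set(lock, 1)) {
                printf("%d\n", id);             /* who got the CPU this time */
                sched_yield();
        }
        printf("%d got the lock\n", id);
        __sync_lock_release(lock);
}

int main(void)
{
        int i;

        /* shared anonymous mapping so all three processes see the lock word */
        lock = mmap(NULL, sizeof(*lock), PROT_READ | PROT_WRITE,
                    MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        *lock = 1;                              /* the parent starts out holding it */

        for (i = 0; i < 2; i++)
                if (fork() == 0) {              /* children "0" and "1" spin */
                        spin_for_lock(i);
                        _exit(0);
                }

        /* hold the lock for a while, then print our id and release it */
        for (i = 0; i < 100000000; i++)
                asm volatile("" ::: "memory");
        printf("2\n");
        __sync_lock_release(lock);

        for (i = 0; i < 2; i++)
                wait(NULL);
        return 0;
}

On a single CPU the two children produce the kind of 1/0 ping-pong shown
above until the parent gets scheduled again and prints 2.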
Perhaps we need something like sched_yield() that takes away some of
tsk->counter so that the task holding the spinlock gets to run earlier.
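A minimal sketch of that idea (hypothetical, not an existing syscall: the
name and the amount taken off are made up, and it assumes the current
scheduler where goodness() is driven by tsk->counter and sched_yield()
works via the SCHED_YIELD policy flag):

/* Hypothetical variant of sched_yield(): give up part of the remaining
 * quantum as well, so the yielder's goodness() falls below the lock
 * holder's sooner.  The SCHED_YIELD + schedule() pattern is assumed to
 * match the existing sys_sched_yield(); "decay" and ">> 1" are made up. */
asmlinkage long sys_sched_yield_decay(void)
{
	struct task_struct *tsk = current;

	tsk->counter >>= 1;		/* drop half of the remaining quantum */
	tsk->policy |= SCHED_YIELD;	/* let schedule() skip us this round */
	schedule();
	return 0;
}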

Anton
