Date: 2 Feb 1997
Subject: Comments on the i386 implementation of smp_lock.h/locks.S

Some comments on the i386 implementation of smp_lock.h/locks.S which
recently appeared in the kernel:

1. lock_kernel() can use a "call" instruction to call __lock_kernel,
instead of loading the return address into %eax.
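
Roughly the following (untested, and assuming the current code enters
__lock_kernel with a jump and returns via an indirect jump through
%eax):

        /* current scheme: pass the return address by hand */
        movl $0f, %eax
        jmp SYMBOL_NAME(__lock_kernel)
0:

        /* proposed: an ordinary call, so the CPU pushes the return
         * address and __lock_kernel can end with a plain "ret" */
        call SYMBOL_NAME(__lock_kernel)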

2. In unlock_kernel(), it is faster to use "andl" than "btr" to clear
the bit in kernel_flag; the lock prefix works with "andl" too. But
since only one bit of kernel_flag is ever used, it would be even
faster to use a plain "movl", which doesn't require locking at all
(unless there's something about SMP machines that I don't know about,
which is entirely possible).
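
For concreteness, the three variants (assuming bit 0 is the only bit
ever set in kernel_flag):

        lock
        btrl $0, SYMBOL_NAME(kernel_flag)       # locked read-modify-write, slowest
        lock
        andl $~1, SYMBOL_NAME(kernel_flag)      # still locked, but a cheaper RMW
        movl $0, SYMBOL_NAME(kernel_flag)       # plain store, no lock prefix at all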

3. Similarly, in __lock_kernel it is faster to use "testl" than
"btl" to test the bit in kernel_flag.

4. In some other lock code, the initial "unlocked" value is -1 instead
of 0. This allows code like the following to be used in
lock_kernel():

        incl %0
        jnz 0f
        call __lock_kernel
0:

In that case it may be better to retain the %eax return-address
mechanism:

        movl $0f, %eax
        incl %0
        jz __lock_kernel
0:

5. In __lock_kernel, if the first btsl usually finds the bit clear,
the code should be arranged so that this common case falls through
the conditional jump instead of taking it.
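
In other words (label numbering illustrative):

        /* jump taken in the common case: */
1:      lock
        btsl $0, SYMBOL_NAME(kernel_flag)
        jnc 3f                          # almost always taken
2:      # ... spin, then jmp 1b ...
3:      movb %dl, SYMBOL_NAME(active_kernel_processor)
        ret

        /* rearranged so the common case falls straight through: */
1:      lock
        btsl $0, SYMBOL_NAME(kernel_flag)
        jc 2f                           # rarely taken
        movb %dl, SYMBOL_NAME(active_kernel_processor)
        ret
2:      # ... spin, then jmp 1b ...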

6. You might even consider moving the lock/btsl/movb into
lock_kernel() and just branching into __lock_kernel when you have to
spin.
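
A sketch of what that might look like (untested; the "%b1" operand
modifier and label numbering are only illustrative, and __lock_kernel
would keep only the contended path: spin, retry the btsl, set
active_kernel_processor, ret):

extern __inline__ void lock_kernel(void)
{
        int cpu = smp_processor_id();

        __asm__ __volatile__("
        pushfl
        cli
        cmpl $0, %0
        jne 0f                  # we already hold it: just bump the depth
        lock
        btsl $0, kernel_flag    # uncontended fast path, now inline
        jc 1f                   # contended: go spin out of line
        movb %b1, active_kernel_processor
        jmp 0f
1:      call __lock_kernel
0:      incl %0
        popfl
        " :
        : "m" (current_set[cpu]->lock_depth), "d" (cpu)
        : "ax", "memory");
}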

7. I don't understand why you test bits without a lock, then test and
set/clear with a lock when things look promising.


Here's the code modified to do 1, 2, 3, and 5. I don't have an SMP
machine so I can't test it.

/* Locking the kernel */
extern __inline__ void lock_kernel(void)
{
        int cpu = smp_processor_id();

        __asm__ __volatile__("
        pushfl                          # save the interrupt flag
        cli                             # no interrupts while we take the lock
        cmpl $0, %0
        jne 0f                          # we already hold it: just bump the depth
        call __lock_kernel              # depth was zero: acquire the global lock
0:
        incl %0
        popfl                           # restore the interrupt flag
        " :
        : "m" (current_set[cpu]->lock_depth), "d" (cpu)
        : "ax", "memory");
}

extern __inline__ void unlock_kernel(void)
{
        __asm__ __volatile__("
        pushfl                          # save the interrupt flag
        cli
        decl %0
        jnz 1f                          # still held recursively: nothing more to do
        movb %1, active_kernel_processor        # no owner any more...
        movl $0, kernel_flag                    # ...then drop the lock (point 2)
1:
        popfl
        " : /* no outputs */
        : "m" (current->lock_depth), "i" (NO_PROC_ID)
        : "ax", "memory");
}



/* Called by lock_kernel() when current->lock_depth was found to be
 * zero; spins until the global kernel lock is acquired.
 * %edx holds this cpu's ID, %eax may be clobbered.
 */
ENTRY(__lock_kernel)
1:
        lock
        btsl $0, SYMBOL_NAME(kernel_flag)       # try to grab the lock (point 5:
        jc 2f                                   # the common case falls through)

        movb %dl, SYMBOL_NAME(active_kernel_processor) # we own it now
        ret

2:
        btl %dl, SYMBOL_NAME(smp_invalidate_needed)    # TLB flush pending for us?
        jnc 0f
        lock
        btrl %dl, SYMBOL_NAME(smp_invalidate_needed)
        jnc 0f                                  # somebody else cleared it first
        movl %cr3, %eax                         # reload %cr3 to flush the TLB
        movl %eax, %cr3
0:
        testl $(1<<0), SYMBOL_NAME(kernel_flag) # spin without a lock prefix (point 3)
        jnz 2b
        jmp 1b                                  # looks free: retry the locked btsl


Tom.
