Date: 2014-02-28
From: Peter Hurley
Subject: Re: [PATCH 1/2] asm-generic: rwsem: ensure sem->cnt is only accessed via atomic_long_*
On 02/28/2014 07:13 AM, Will Deacon wrote:
> On Thu, Feb 27, 2014 at 05:28:24AM +0000, Davidlohr Bueso wrote:
>> On Fri, 2014-02-21 at 17:22 +0000, Will Deacon wrote:
>>> The asm-generic rwsem implementation directly accesses sem->cnt when
>>> performing a __down_read_trylock operation. Whilst this is probably safe
>>> on all architectures, we should stick to the atomic_long_* API and use
>>> atomic_long_read instead.
>>>
>>> Signed-off-by: Will Deacon <will.deacon@arm.com>
>>> ---
>>> include/asm-generic/rwsem.h | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/include/asm-generic/rwsem.h b/include/asm-generic/rwsem.h
>>> index bb1e2cdeb9bf..75af612f54f8 100644
>>> --- a/include/asm-generic/rwsem.h
>>> +++ b/include/asm-generic/rwsem.h
>>> @@ -41,7 +41,7 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
>>>  {
>>>  	long tmp;
>>>
>>> -	while ((tmp = sem->count) >= 0) {
>>> +	while ((tmp = atomic_long_read((atomic_long_t *)&sem->count)) >= 0) {
>>
>> That's pretty ugly, how about having an infinite loop and doing the tmp
>> assignment separately from the conditional?
>>
>> It also looks like a cpu_relax() could help here between iterations.

This is the read trylock so no cpu_relax().
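
To expand on that: cpu_relax() belongs in loops that spin waiting for the
lock to change hands, whereas a trylock gives up as soon as the count says
the lock can't be taken. A rough sketch of the contrast, illustrative only
and not the actual rwsem code:

	/* spin-wait acquire: poll until the count stops being negative */
	while (atomic_long_read(cnt) < 0)
		cpu_relax();	/* let a sibling hw thread run while we poll */

	/* trylock: a negative count means fail immediately, no spinning */
	if (atomic_long_read(cnt) < 0)
		return 0;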

>> Other than that:
>>
>> Reviewed-by: Davidlohr Bueso <davidlohr@hp.com>
>
> Actually, we should make that cmpxchg an atomic_long_cmpxchg, so the extra
> diff ends up looking like below. It's ugly adding a cpu_relax(), since you
> only want it when the cmpxchg fails (and we don't have such logic in the
> asm-generic __atomic_add_unless, for example).
>
> Will
>
> --->8
>
> diff --git a/include/asm-generic/rwsem.h b/include/asm-generic/rwsem.h
> index 603a0a11e592..2b6401f9e428 100644
> --- a/include/asm-generic/rwsem.h
> +++ b/include/asm-generic/rwsem.h
> @@ -40,14 +40,16 @@ static inline void __down_read(struct rw_semaphore *sem)
>  static inline int __down_read_trylock(struct rw_semaphore *sem)
>  {
>  	long tmp;
> +	atomic_long_t *cnt = (atomic_long_t *)&sem->count;

The shared rwsem failure paths (kernel/locking/rwsem-xadd.c) peek at
sem->count as a plain long, so this isn't really necessary.
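
For reference, the write-lock slow path there checks the counter with an
ordinary long read before attempting its cmpxchg, something along these
lines (paraphrased from memory, not a verbatim quote):

	/* rwsem-xadd.c style: bare long read of sem->count, no atomic_long_* */
	if (sem->count == RWSEM_WAITING_BIAS &&
	    cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
		    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS)
		break;	/* got the write lock */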

>
> -	while ((tmp = atomic_long_read((atomic_long_t *)&sem->count)) >= 0) {
> -		if (tmp == cmpxchg(&sem->count, tmp,
> -				   tmp + RWSEM_ACTIVE_READ_BIAS)) {
> -			return 1;
> -		}
> -	}
> -	return 0;
> +	do {
> +		tmp = atomic_long_read(cnt);
> +		if (tmp < 0)
> +			return 0;
> +	} while (tmp != atomic_long_cmpxchg(cnt, tmp,
> +					    tmp + RWSEM_ACTIVE_READ_BIAS));
> +
> +	return 1;
>  }
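
If one did want the cpu_relax() only after a failed cmpxchg, the loop would
have to be restructured along the lines of the sketch below; shown only to
illustrate why it gets ugly, not as a proposal:

	for (;;) {
		tmp = atomic_long_read(cnt);
		if (tmp < 0)
			return 0;
		if (tmp == atomic_long_cmpxchg(cnt, tmp,
					       tmp + RWSEM_ACTIVE_READ_BIAS))
			return 1;
		cpu_relax();	/* only reached when the cmpxchg lost a race */
	}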

Regards,
Peter Hurley

