Date: 2019-06-11 15:14
Subject: Re: [PATCH v8 16/19] locking/rwsem: Guard against making count negative
On Mon, May 20, 2019 at 04:59:15PM -0400, Waiman Long wrote:
>  static struct rw_semaphore __sched *
> +rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long adjustment)
>  {
> +	long count;
>  	bool wake = false;
>  	struct rwsem_waiter waiter;
>  	DEFINE_WAKE_Q(wake_q);
>
> +	if (unlikely(!adjustment)) {
> +		/*
> +		 * This shouldn't happen. If it does, there is probably
> +		 * something wrong in the system.
> +		 */
> +		WARN_ON_ONCE(1);

if (WARN_ON_ONCE(!adjustment)) {
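
The folded form works because WARN_ON_ONCE() evaluates to the
condition it was handed, so the test and the one-time warning become a
single statement. A userspace sketch of the idiom (the macro below is
a simplified stand-in for the kernel's, which likewise keys the
once-ness off a per-callsite static flag):

#include <stdio.h>

/* Simplified stand-in: warn the first time cond is true at this
 * callsite, and always evaluate to cond so it can drive an if ().
 */
#define WARN_ON_ONCE(cond) ({					\
	static int __warned;					\
	int __ret = !!(cond);					\
	if (__ret && !__warned) {				\
		__warned = 1;					\
		fprintf(stderr, "WARNING: %s:%d\n",		\
			__FILE__, __LINE__);			\
	}							\
	__ret;							\
})

static void slowpath(long adjustment)
{
	if (WARN_ON_ONCE(!adjustment)) {
		/* warned at most once; still take the degenerate
		 * path on every call
		 */
		goto queue;
	}
	printf("fast path, adjustment=%ld\n", adjustment);
	return;
queue:
	printf("queued with no adjustment\n");
}

int main(void)
{
	slowpath(0);	/* warns and queues */
	slowpath(0);	/* queues silently */
	slowpath(8);	/* fast path */
	return 0;
}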

> +
> +		/*
> +		 * An adjustment of 0 means that there are too many readers
> +		 * holding or trying to acquire the lock. So disable
> +		 * optimistic spinning and go directly into the wait list.
> +		 */
> +		if (rwsem_test_oflags(sem, RWSEM_RD_NONSPINNABLE))
> +			rwsem_set_nonspinnable(sem);

ISTR rwsem_set_nonspinnable() already does that test, so no need to do
it again, right?
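
If rwsem_set_nonspinnable() does bail out internally when the bit is
already set, as recalled above, the rwsem_test_oflags() guard before
the call is pure duplication and the call could be made
unconditionally. A userspace sketch of that shape (an illustrative
stand-in, not the series' actual helper):

#include <stdatomic.h>
#include <stdio.h>

#define RWSEM_RD_NONSPINNABLE	(1UL << 1)

/* Toy stand-in for the rwsem owner word. */
static atomic_ulong owner_flags;

/* A setter that tests internally: callers never need their own
 * "is it already set?" check before calling it.
 */
static void set_nonspinnable(void)
{
	unsigned long old = atomic_load(&owner_flags);

	do {
		if (old & RWSEM_RD_NONSPINNABLE)
			return;	/* already set, nothing to do */
	} while (!atomic_compare_exchange_weak(&owner_flags, &old,
					old | RWSEM_RD_NONSPINNABLE));
}

int main(void)
{
	set_nonspinnable();	/* sets the bit */
	set_nonspinnable();	/* internal test short-circuits */
	printf("owner flags = %#lx\n",
	       (unsigned long)atomic_load(&owner_flags));
	return 0;
}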

> +		goto queue;
> +	}
> +
>  	/*
>  	 * Save the current read-owner of rwsem, if available, and the
>  	 * reader nonspinnable bit.
