Subject: Re: [PATCH] arch/tile: fix rwlock so would-be write lockers don't block new readers
From: Cyberman Wu
2010/11/24 Chris Metcalf <cmetcalf@tilera.com>:
> On 11/22/2010 8:36 PM, Cypher Wu wrote:
>> Say core A calls write_lock() on the rwlock while current_ticket_ is 0,
>> so it sets next_ticket_ to 1. While A still holds the lock, core B calls
>> write_lock() and sets next_ticket_ to 2. When A calls write_unlock() it
>> sees that (current_ticket_ + 1) is not equal to next_ticket_, so it
>> increments current_ticket_ and core B gets the lock. If core A calls
>> write_lock() again before core B calls write_unlock(), it increments
>> next_ticket_ to 3, and so on.
>> This should happen only rarely; I tested it for several hours yesterday
>> under pressure and it behaved well.
>
> This should be OK when it happens (other than starving out the readers, but
> that was the decision made by doing a ticket lock in the first place).
> Even if the tickets wrap around from 255 back to zero, the ticket queue
> will work correctly. The key is that we never need more than 256
> concurrent write-lock waiters, and we don't.
>
> --
> Chris Metcalf, Tilera Corp.
> http://www.tilera.com
>
>

If we count on that wrap-around, should we then extract the ticket as
'my_ticket_ = (val >> WR_NEXT_SHIFT) & WR_MASK;'?
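
For concreteness, here is a minimal user-space sketch of the write-side
ticket hand-off being discussed, assuming two 8-bit ticket fields packed
into a single 32-bit lock word. The field positions, the set_field()
helper and the CAS loops are only illustrative (the real arch/tile code
keeps reader state in the same word and serializes updates with the tns
instruction rather than compare-and-swap); WR_NEXT_SHIFT and WR_MASK are
just the names from the expression above:

#include <stdatomic.h>
#include <stdint.h>

/*
 * Illustrative layout only -- not the real arch/tile field positions:
 *
 *   bits  0..7    current_ticket_  (whose turn it is)
 *   bits  8..15   next_ticket_     (next ticket to hand out)
 *   bits 16..31   reader count, ignored in this write-side-only sketch
 */
#define WR_CURR_SHIFT  0
#define WR_NEXT_SHIFT  8
#define WR_MASK        0xffu

static _Atomic uint32_t lock_word;

/* Replace one 8-bit field inside the word; the & WR_MASK here is what
 * keeps a 255 -> 0 wrap from spilling into the neighbouring field. */
static uint32_t set_field(uint32_t word, int shift, uint32_t ticket)
{
        return (word & ~(WR_MASK << shift)) | ((ticket & WR_MASK) << shift);
}

static void write_lock(void)
{
        uint32_t val, my_ticket;

        /* Take a ticket: bump next_ticket_ mod 256. */
        do {
                val = atomic_load(&lock_word);
                /* The & WR_MASK strips the bits above the field
                 * (the reader count in this layout). */
                my_ticket = (val >> WR_NEXT_SHIFT) & WR_MASK;
        } while (!atomic_compare_exchange_weak(&lock_word, &val,
                        set_field(val, WR_NEXT_SHIFT, my_ticket + 1)));

        /* Wait until current_ticket_ reaches our ticket. */
        while (((atomic_load(&lock_word) >> WR_CURR_SHIFT) & WR_MASK) != my_ticket)
                ;       /* the kernel would relax/back off here */
}

static void write_unlock(void)
{
        uint32_t val, curr;

        /* Hand the lock to the next waiter by advancing current_ticket_,
         * again mod 256, so the 255 -> 0 wrap stays coherent as long as
         * there are never 256 or more concurrent write waiters. */
        do {
                val = atomic_load(&lock_word);
                curr = (val >> WR_CURR_SHIFT) & WR_MASK;
        } while (!atomic_compare_exchange_weak(&lock_word, &val,
                        set_field(val, WR_CURR_SHIFT, curr + 1)));
}

In this sketch the mask is load-bearing in both places: set_field() needs
it so an incremented ticket wraps inside its own byte, and the my_ticket
extraction needs it so bits from the other fields in the word don't leak
into the comparison, which is what the '& WR_MASK' in the question above
would guarantee.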

--
Cyberman Wu
http://www.meganovo.com

