Date: 2013-09-26
From: Peter Hurley
Subject: Re: increased vmap_area_lock contentions on "n_tty: Move buffers into n_tty_data"
On 09/26/2013 11:32 AM, Andi Kleen wrote:
> On Thu, Sep 26, 2013 at 07:52:23AM -0400, Peter Hurley wrote:
>> On 09/25/2013 11:20 PM, Andi Kleen wrote:
>>> Lin Ming <minggr@gmail.com> writes:
>>>>
>>>> Would you like below patch?
>>>
>>> The loop body keeps rather complex state. It could easily
>>> get confused by parallel RCU changes.
>>>
>>> So if the list changes in parallel you may suddenly
>>> report very bogus values, as the va_start - prev_end
>>> computation may be bogus.
>>>
>>> Perhaps it's ok (may report bogus gaps), but it seems a bit risky.
>>
>> I don't understand how the computed gap would be bogus; there
>> _was_ a list state in which that particular gap existed. The fact
>
> It could change any time as you don't have an atomic view
> of vm_end / vm_start. It is valid to change the fields
> with the lock held.

va_start and va_end are constant for the lifetime of their vmap_area
(if it's accessible by traversing the vmap_area_list), so it is
not possible for an rcu-based list traversal to see different
values of these individual fields than the spin-locked version.
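To illustrate, here is a rough paraphrase of the insertion path (not a
verbatim quote of mm/vmalloc.c), showing that the range is fixed before
the node is published to RCU readers:

	spin_lock(&vmap_area_lock);
	va->va_start = addr;
	va->va_end   = addr + size;
	/*
	 * rb-tree insert plus address-sorted list_add_rcu(); once the
	 * node is visible to readers, its range never changes.
	 */
	__insert_vmap_area(va);
	spin_unlock(&vmap_area_lock);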

In addition, for the rcu-based traversal to have arrived at any given
vmap_area, the previous vmap_area must have been its adjacent lower
range at the instant the list cursor was advanced; again, this is no
different than if the spin-locked version had happened to begin at
that same instant.
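
So a gap walk along the lines of the sketch below (hypothetical, not
Lin Ming's actual patch; the function name is made up) can only report
gaps that really existed at the instant the cursor moved past them:

	/* Hypothetical sketch, not the actual patch under discussion. */
	static unsigned long largest_vmalloc_gap(void)
	{
		struct vmap_area *va;
		unsigned long prev_end = VMALLOC_START;
		unsigned long largest = 0;

		rcu_read_lock();
		list_for_each_entry_rcu(va, &vmap_area_list, list) {
			/*
			 * va_start/va_end are constant while the node is
			 * on the list, and we only reached this node while
			 * it was the adjacent higher range, so any gap
			 * computed here existed at that instant.
			 */
			if (va->va_start > prev_end &&
			    va->va_start - prev_end > largest)
				largest = va->va_start - prev_end;
			prev_end = va->va_end;
		}
		rcu_read_unlock();

		return largest;
	}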

Regards,
Peter Hurley
