Subject: Re: [PATCH 2/3] mm/page_ref: Ensure page_ref_unfreeze is ordered against prior accesses
On 06/06/2017 07:58 PM, Will Deacon wrote:
> page_ref_freeze and page_ref_unfreeze are designed to be used as a pair,
> wrapping a critical section where struct pages can be modified without
> having to worry about consistency for a concurrent fast-GUP.
>
> Whilst page_ref_freeze has full barrier semantics due to its use of
> atomic_cmpxchg, page_ref_unfreeze is implemented using atomic_set, which
> doesn't provide any barrier semantics and allows the operation to be
> reordered with respect to page modifications in the critical section.
>
> This patch ensures that page_ref_unfreeze is ordered after any critical
> section updates, by invoking smp_mb__before_atomic() prior to the
> atomic_set.
>
> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
> Acked-by: Steve Capper <steve.capper@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
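
For reference, a minimal sketch of the pattern the commit message describes,
assuming a hypothetical modify_struct_page_fields() as a stand-in for the real
critical-section work (e.g. a THP split or page migration); this is illustrative
only, not taken from the patch:

#include <linux/page_ref.h>

/* Sketch: modify_struct_page_fields() is hypothetical. */
static void frozen_update_example(struct page *page, int expected_refs)
{
	/* cmpxchg: succeeds only if nobody holds a pin; full barrier semantics. */
	if (!page_ref_freeze(page, expected_refs))
		return;

	/*
	 * Critical section: struct page can be modified without a concurrent
	 * fast-GUP taking a reference underneath us.
	 */
	modify_struct_page_fields(page);

	/*
	 * page_ref_unfreeze() ends with atomic_set(&page->_refcount, count).
	 * Without the smp_mb__before_atomic() added by this patch, the stores
	 * above could be reordered past the point where the refcount becomes
	 * non-zero again, i.e. past the point where a fast-GUP walker may pin
	 * the page.
	 */
	page_ref_unfreeze(page, expected_refs);
}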

I'm undecided on whether this is really needed. IMHO this isn't the classical
case from Documentation/core-api/atomic_ops.rst, where we have to make our
modifications visible before letting others see them. Here the one who is
freezing does so precisely so that others can't obtain their page pin and
interfere with the freezer's work. But maybe there are some (documented or not)
consistency guarantees one can expect once the pin is obtained, which could be
violated without the barrier, or such guarantees might be added later, so it
would be safer to add it?
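
To make the question concrete, here is a rough sketch of the reader side,
assuming the pin is taken via page_cache_get_speculative() (the exact fast-GUP
call chain is simplified away):

#include <linux/pagemap.h>

/*
 * Reader side (simplified): what ordering, if any, does a successful pin
 * entitle the caller to with respect to the freezer's critical-section stores?
 */
static struct page *pin_example(struct page *page)
{
	/* Fails while _refcount is frozen at zero, so the freezer is undisturbed. */
	if (!page_cache_get_speculative(page))
		return NULL;

	/*
	 * If callers are (or later become) entitled to see a fully consistent
	 * struct page once the pin succeeds, the writer-side half of that
	 * ordering would have to come from the smp_mb__before_atomic()
	 * proposed in this patch.
	 */
	return page;
}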

> ---
> include/linux/page_ref.h | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
> index 610e13271918..74d32d7905cb 100644
> --- a/include/linux/page_ref.h
> +++ b/include/linux/page_ref.h
> @@ -174,6 +174,7 @@ static inline void page_ref_unfreeze(struct page *page, int count)
>  	VM_BUG_ON_PAGE(page_count(page) != 0, page);
>  	VM_BUG_ON(count == 0);
>  
> +	smp_mb__before_atomic();
>  	atomic_set(&page->_refcount, count);
>  	if (page_ref_tracepoint_active(__tracepoint_page_ref_unfreeze))
>  		__page_ref_unfreeze(page, count);
>
