lkml.org 
[lkml]   [2021]   [Nov]   [23]   [last100]   RSS Feed
Views: [wrap][no wrap]   [headers]  [forward] 
 
Messages in this thread
/
From
SubjectRe: [PATCH] mm/rmap: fix potential batched TLB flush race
Date
Marco Elver <elver@google.com> writes:

> On Tue, 23 Nov 2021 at 08:44, Huang Ying <ying.huang@intel.com> wrote:
>>
>> In theory, the following race is possible for batched TLB flushing.
>>
>> CPU0 CPU1
>> ---- ----
>> shrink_page_list()
>> unmap
>> zap_pte_range()
>> flush_tlb_batched_pending()
>> flush_tlb_mm()
>> try_to_unmap()
>> set_tlb_ubc_flush_pending()
>> mm->tlb_flush_batched = true
>> mm->tlb_flush_batched = false
>>
>> After the TLB is flushed on CPU1 via flush_tlb_mm() and before
>> mm->tlb_flush_batched is set to false, some PTE is unmapped on CPU0
>> and the TLB flushing is pended. Then the pended TLB flushing will be
>> lost. Although both set_tlb_ubc_flush_pending() and
>> flush_tlb_batched_pending() are called with PTL locked, different PTL
>> instances may be used.
>>
>> Because the race window is really small, and the lost TLB flushing
>> will cause problem only if a TLB entry is inserted before the
>> unmapping in the race window, the race is only theoretical. But the
>> fix is simple and cheap too.
>
> Thanks for fixing this!
>
>> Syzbot has reported this too as follows,
>>
>> ==================================================================
>> BUG: KCSAN: data-race in flush_tlb_batched_pending / try_to_unmap_one
> [...]
>> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
>> index c3a6e6209600..789778067db9 100644
>> --- a/include/linux/mm_types.h
>> +++ b/include/linux/mm_types.h
>> @@ -632,7 +632,7 @@ struct mm_struct {
>> atomic_t tlb_flush_pending;
>> #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
>> /* See flush_tlb_batched_pending() */
>> - bool tlb_flush_batched;
>> + atomic_t tlb_flush_batched;
>> #endif
>> struct uprobes_state uprobes_state;
>> #ifdef CONFIG_PREEMPT_RT
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 163ac4e6bcee..60902c3cfb4a 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -633,7 +633,7 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
>> * before the PTE is cleared.
>> */
>> barrier();
>> - mm->tlb_flush_batched = true;
>> + atomic_inc(&mm->tlb_flush_batched);
>
> The use of barrier() and atomic needs some clarification.

There are some comments above barrier() to describe why it is needed.
For atomic, because the type of mm->tlb_flush_batched is atomic_t, do we
need extra clarification?

> Is there a
> requirement that the CPU also doesn't reorder anything after this
> atomic_inc() (which is unordered)? I.e. should this be
> atomic_inc_return_release() and remove barrier()?

We don't have an atomic_xx_acquire() to pair with this. So I guess we
don't need atomic_inc_return_release()?

Best Regards,
Huang, Ying

>> /*
>> * If the PTE was dirty then it's best to assume it's writable. The
>> @@ -680,15 +680,16 @@ static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
>> */
>> void flush_tlb_batched_pending(struct mm_struct *mm)
>> {
>> - if (data_race(mm->tlb_flush_batched)) {
>> - flush_tlb_mm(mm);
>> + int batched = atomic_read(&mm->tlb_flush_batched);
>>
>> + if (batched) {
>> + flush_tlb_mm(mm);
>> /*
>> - * Do not allow the compiler to re-order the clearing of
>> - * tlb_flush_batched before the tlb is flushed.
>> + * If the new TLB flushing is pended during flushing,
>> + * leave mm->tlb_flush_batched as is, to avoid to lose
>> + * flushing.
>> */
>> - barrier();
>> - mm->tlb_flush_batched = false;
>> + atomic_cmpxchg(&mm->tlb_flush_batched, batched, 0);
>> }
>> }
>> #else
>> --
>> 2.30.2
>>

\
 
 \ /
  Last update: 2021-11-24 02:45    [W:1.355 / U:0.012 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site