Subject: Re: [PATCH v3 03/14] mm: use pmd lock instead of racy checks in zap_pmd_range()
From: Kirill A. Shutemov
Date: 2017-02-07
On Sun, Feb 05, 2017 at 11:12:41AM -0500, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
>
> Originally, zap_pmd_range() checks the pmd value without taking the pmd
> lock. This can cause a pmd_protnone entry to escape being freed.
>
> Changing a pmd entry into a pmd_protnone entry takes two steps: first the
> pmd entry is cleared to a pmd_none entry, then the pmd_none entry is
> changed into a pmd_protnone entry. The racy check, even with a barrier,
> might only see the intermediate pmd_none entry in zap_pmd_range(), so the
> mapping is neither split nor zapped.

Okay, this can only happen with MADV_DONTNEED, as we hold
down_write(mmap_sem) for the rest of the zap_pmd_range() callers, and
whoever modifies page tables has to hold at least down_read(mmap_sem) or
exclude parallel modification in some other way.
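
To make the interleaving concrete, here is a minimal user-space model of the
race described in the quoted text (pthread-based; the state names mirror the
kernel's, but nothing below is actual kernel code):

/* pmd_race_model.c -- user-space model only, not kernel code.
 * The "writer" mimics the two-step transition to a protnone entry
 * (clear to none, then set protnone); the "reader" does the lockless
 * check zap_pmd_range() used to do and can see the intermediate none
 * state, concluding there is nothing to split or zap. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

enum pmd_state { PMD_TRANS_HUGE, PMD_NONE, PMD_PROTNONE };

static _Atomic int pmd = PMD_TRANS_HUGE;

static void *protnone_writer(void *arg)
{
	atomic_store(&pmd, PMD_NONE);      /* step 1: entry cleared */
	atomic_store(&pmd, PMD_PROTNONE);  /* step 2: protnone installed */
	return NULL;
}

static void *racy_zap_reader(void *arg)
{
	/* no pmd lock: a PMD_NONE snapshot here means the entry is
	 * skipped, and the protnone entry from step 2 is left behind */
	printf("racy check saw state %d (0=huge 1=none 2=protnone)\n",
	       atomic_load(&pmd));
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, protnone_writer, NULL);
	pthread_create(&r, NULL, racy_zap_reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}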

See 1a5a9906d4e8 ("mm: thp: fix pmd_bad() triggering in code paths holding
mmap_sem read mode") for more details.

+Andrea.

> Later, in free_pmd_range(), pmd_none_or_clear_bad() will see the
> pmd_protnone entry, treat it as a pmd_bad entry and clear it. Furthermore,
> since the pmd_protnone entry is not properly freed, the corresponding
> deposited pte page table is not freed either.

free_pmd_range() should be fine: we only free page tables after the vmas are
gone (under down_write(mmap_sem) in exit_mmap() and unmap_region()) or after
the page tables have been moved (under down_write(mmap_sem) in
shift_arg_pages()).
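
For reference, the quoted scenario hinges on the bad-pmd handling in
pmd_none_or_clear_bad(). Below is a self-contained model of that control
flow (the flag bits and predicates are invented for illustration, and the
code is paraphrased, not the kernel source): the leftover protnone huge
entry is neither none nor a well-formed table pointer, so it trips the
bad-pmd path, gets cleared, and the pte page table deposited under it is
never reached.

/* clear_bad_model.c -- stand-alone illustration, not kernel code; the
 * bit layout is made up, only the control flow mirrors the generic
 * pmd_none_or_clear_bad() described above. */
#include <stdbool.h>
#include <stdio.h>

#define F_HUGE   0x1UL   /* made-up flags */
#define F_TABLE  0x2UL   /* "points to a pte page table" */

typedef struct { unsigned long flags; } pmd_t;

static bool pmd_none(pmd_t p) { return p.flags == 0; }
/* "bad": populated, but not a well-formed page-table pointer */
static bool pmd_bad(pmd_t p)  { return !pmd_none(p) && !(p.flags & F_TABLE); }

static int pmd_none_or_clear_bad(pmd_t *p)
{
	if (pmd_none(*p))
		return 1;
	if (pmd_bad(*p)) {
		fprintf(stderr, "bad pmd: clearing entry\n");
		p->flags = 0;	/* entry dropped; the deposited pte page
				 * table hanging off it is never freed */
		return 1;
	}
	return 0;
}

int main(void)
{
	pmd_t leftover_protnone = { F_HUGE };	/* the entry MADV_DONTNEED missed */

	pmd_none_or_clear_bad(&leftover_protnone);
	return 0;
}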

> This causes a memory leak, or a kernel crash if VM_BUG_ON() is enabled.

The problem is that numabalancing calls change_huge_pmd() under
down_read(mmap_sem), not down_write(mmap_sem) as the rest of the users do.
That makes numabalancing the only code path, besides the page fault path,
that can turn pmd_none() into pmd_trans_huge() under down_read(mmap_sem).

This can lead to a race in which MADV_DONTNEED misses a THP. That's not
critical for the page fault vs. MADV_DONTNEED race, as we end up with a
clear page in that case. Not so for change_huge_pmd().

Looks like we need pmdp_modify() or something to modify the protection bits
in place, without clearing the pmd.
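
Purely to illustrate the "modify in place" idea (pmdp_modify() does not
exist at this point; the name, signature and bit layout below are
hypothetical, and this is a user-space sketch, not a kernel patch): a single
atomic read-modify-write of the entry means a lockless reader sees either
the old or the new pmd, never an intermediate none.

/* pmdp_modify_sketch.c -- hypothetical helper, user-space model only. */
#include <stdatomic.h>
#include <stdio.h>

#define P_PRESENT   0x1UL	/* made-up protection bits */
#define P_PROTNONE  0x2UL
#define P_PROT_MASK 0xfUL

typedef _Atomic unsigned long pmd_t;

/* Replace the protection bits while keeping the pfn part, in one atomic
 * step, and hand back the old entry (e.g. to carry over dirty/accessed). */
static unsigned long pmdp_modify(pmd_t *pmdp, unsigned long newprot)
{
	unsigned long old = atomic_load(pmdp), new;

	do {
		new = (old & ~P_PROT_MASK) | newprot;
	} while (!atomic_compare_exchange_weak(pmdp, &old, new));

	return old;
}

int main(void)
{
	pmd_t pmd = 0x1000UL | P_PRESENT;

	unsigned long old = pmdp_modify(&pmd, P_PROTNONE);
	printf("old=%#lx new=%#lx\n", old, (unsigned long)atomic_load(&pmd));
	return 0;
}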

Not sure how to get a crash scenario out of this.

BTW, Zi, have you observed the crash? Or is it based on code inspection?
Any backtraces?

Ouch! madvise_free_huge_pmd() is broken too. We shouldn't clear the pmd in
the middle of it, as we only hold down_read(mmap_sem). I guess we need a
helper to clear both the access and dirty bits in place.
Minchan, could you look into it?
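
Again only as a sketch of that suggestion (the helper name and bit layout
are made up; user-space model, not a patch): one atomic AND drops the
accessed and dirty bits without the entry ever passing through a cleared
state.

/* clear_young_dirty_sketch.c -- made-up helper, user-space model only. */
#include <stdatomic.h>
#include <stdio.h>

#define P_ACCESSED 0x20UL	/* made-up bit layout */
#define P_DIRTY    0x40UL

typedef _Atomic unsigned long pmd_t;

/* One atomic read-modify-write: lockless readers always see a populated
 * entry, with or without the two bits. */
static unsigned long pmdp_clear_young_dirty(pmd_t *pmdp)
{
	return atomic_fetch_and(pmdp, ~(P_ACCESSED | P_DIRTY));
}

int main(void)
{
	pmd_t pmd = 0x1000UL | P_ACCESSED | P_DIRTY;

	unsigned long old = pmdp_clear_young_dirty(&pmd);
	printf("old=%#lx new=%#lx\n", old, (unsigned long)atomic_load(&pmd));
	return 0;
}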

--
Kirill A. Shutemov
