Subject: Re: [PATCH] mm, numa: Fix bad pmd by atomically check for pmd_trans_huge when marking page tables prot_numa
On 10 Apr 2017, at 12:20, Mel Gorman wrote:

> On Mon, Apr 10, 2017 at 11:45:08AM -0500, Zi Yan wrote:
>>> While this could be fixed with heavy locking, it's only necessary to
>>> make a copy of the PMD on the stack during change_pmd_range and avoid
>>> races. A new helper is created for this as the check is quite subtle and the
>>> existing similar helper is not suitable. This passed 154 hours of testing
>>> (usually triggers between 20 minutes and 24 hours) without detecting bad
>>> PMDs or corruption. A basic test of an autonuma-intensive workload showed
>>> no significant change in behaviour.
>>>
>>> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
>>> Cc: stable@vger.kernel.org
>>
>> Does this patch fix the same problem fixed by Kirill's patch here?
>> https://lkml.org/lkml/2017/3/2/347
>>
>
> I don't think so. The race I'm concerned with is due to locks not being
> held and is in a different path.
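
If I understand your patch correctly, the new helper reads the PMD once
into a stack variable and performs all of its checks against that
snapshot, so a concurrent writer cannot make the checks disagree with
each other. Roughly like this (my sketch based on the changelog, not on
the patch itself, so the name and details may differ):

	static inline int pmd_none_or_clear_bad_unless_trans_huge(pmd_t *pmd)
	{
		pmd_t pmdval = pmd_read_atomic(pmd);

		/* See pmd_none_or_trans_huge_or_clear_bad() for why a
		 * compiler barrier is needed after the atomic read. */
		barrier();
		if (pmd_none(pmdval))
			return 1;
		if (pmd_trans_huge(pmdval))
			return 0;
		if (unlikely(pmd_bad(pmdval))) {
			pmd_clear_bad(pmd);
			return 1;
		}
		return 0;
	}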

I do not agree. Kirill's patch fixes the same race problem, only in
zap_pmd_range().

The original autoNUMA code first clears the PMD and then sets it to a
protnone entry. pmd_trans_huge() does not return true because it sees
the cleared PMD, but pmd_none_or_clear_bad() later sees the protnone
entry and reports it as a bad PMD.
Is this the problem you are trying to solve?
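
To spell out the interleaving I have in mind (simplified):

	CPU A (change_huge_pmd, prot_numa)    CPU B (change_pmd_range)
	----------------------------------    ------------------------
	clear the PMD
	                                      pmd_trans_huge() returns false
	                                      (the entry is momentarily none)
	write back the protnone entry
	                                      pmd_none_or_clear_bad() sees the
	                                      protnone entry -> "bad pmd"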

Kirill's patch uses pmdp_invalidate() on the PMD entry, which keeps the
_PAGE_PSE bit, so pmd_trans_huge() still returns true for concurrent
readers. As a result, it also fixes your race in change_pmd_range().
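
In change_huge_pmd() that looks roughly like this (quoting from memory
of his patch, so details may differ):

	pmd_t entry = *pmd;

	/* Clears the present bit but keeps _PAGE_PSE, so concurrent
	 * pmd_trans_huge() checks never see a cleared PMD. */
	pmdp_invalidate(vma, addr, pmd);

	entry = pmd_modify(entry, newprot);
	set_pmd_at(mm, addr, pmd, entry);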

Let me know if I missed anything.

Thanks.

--
Best Regards
Yan Zi