Subject: Re: [RFC PATCH v2] mm: support multi-size THP numa balancing
On 18.03.24 10:42, Baolin Wang wrote:
>
>
> On 2024/3/18 14:16, Huang, Ying wrote:
>> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
>>
>>> Anonymous page allocation already supports multi-size THP (mTHP),
>>> but NUMA balancing still prohibits mTHP migration even when it is an
>>> exclusive mapping, which is unreasonable. Thus, as a first step,
>>> let's support NUMA balancing for exclusively mapped mTHP.
>>>
>>> Allow scanning mTHP:
>>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
>>> pages") skips NUMA migration for shared CoW pages to avoid migrating
>>> shared data segments. In addition, commit 80d47f5de5e3 ("mm: don't try to
>>> NUMA-migrate COW pages that have other uses") changed to using page_count()
>>> to avoid migrating GUP pages, which also skips mTHP during NUMA scanning.
>>> Theoretically, we could use folio_maybe_dma_pinned() to detect the GUP
>>> issue; although there is still a GUP race, that issue seems to have been
>>> resolved by commit 80d47f5de5e3. Meanwhile, use folio_estimated_sharers()
>>> to skip shared CoW pages, even though it is not a precise sharer count.
>>> To check whether the folio is shared, ideally we would want to make sure
>>> every page is mapped by the same process, but doing that seems expensive,
>>> and using the estimated mapcount seems to work when running the autonuma
>>> benchmark.
>>>
>>> Allow migrating mTHP:
>>> As mentioned in the previous thread[1], large folios are more susceptible
>>> to false-sharing issues, leading to pages ping-ponging back and forth
>>> during NUMA balancing, which is currently hard to resolve. Therefore, as
>>> a start for mTHP NUMA balancing, only allow exclusive mappings to perform
>>> NUMA migration, to avoid the false-sharing issues with large folios.
>>> Similarly, use the estimated mapcount to skip shared mappings, which
>>> seems to work in most cases (?), and we have already been using
>>> folio_estimated_sharers() to skip shared mappings in
>>> migrate_misplaced_folio() for NUMA balancing, with no real complaints so
>>> far.
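
IIUC, the exclusivity check being proposed boils down to roughly the
following (just a sketch pieced together from the commit message, not
the literal hunk from the patch):

	/* Skip GUP-pinned folios instead of relying on page_count(). */
	if (folio_maybe_dma_pinned(folio))
		return false;

	/*
	 * Skip (likely) shared CoW mappings; folio_estimated_sharers()
	 * is only an estimate, not a precise sharer count.
	 */
	if (folio_estimated_sharers(folio) != 1)
		return false;

	/* Otherwise treat the mTHP as exclusively mapped. */
	return true;
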
>>
>> IIUC, folio_estimated_sharers() cannot identify multi-threaded
>> applications. If some mTHP is shared by multiple threads in one
>
> Right.
>

Wasn't this "false sharing" previously raised/described by Mel in this
context?

>> process, how do we deal with that?
>
> IMHO, it seems should_numa_migrate_memory() already does something to help?
>
> ......
> 	if (!cpupid_pid_unset(last_cpupid) &&
> 	    cpupid_to_nid(last_cpupid) != dst_nid)
> 		return false;
>
> 	/* Always allow migrate on private faults */
> 	if (cpupid_match_pid(p, last_cpupid))
> 		return true;
> ......
>
> If the node of the CPU that last accessed the mTHP is different from
> the node accessing it now, that means there is some contention for
> that mTHP among threads, so migration will not be allowed.
>
> If the contention for the mTHP among threads is light, or the access
> pattern is relatively stable, then we can allow migration?
>
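
For context, this check is reached from the hint-fault path roughly
like this (heavily simplified from do_numa_page()/mpol_misplaced();
helper names as of ~v6.8):

	/* Packed cpu+pid of the task that last touched this folio. */
	int last_cpupid = folio_last_cpupid(folio);
	int this_cpu = raw_smp_processor_id();

	/*
	 * cpupid_to_nid(last_cpupid) recovers the node of the last
	 * accessor; a node mismatch (ping-pong) denies migration,
	 * while a private fault (same pid as last time) allows it.
	 */
	if (!should_numa_migrate_memory(current, folio,
					folio_nid(folio), this_cpu))
		return NUMA_NO_NODE;	/* leave the folio where it is */
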
>> For example, I think that we should avoid migrating on the first fault
>> for mTHP in should_numa_migrate_memory().
>>
>> More thoughts? Can we add a field in struct folio for mTHP to count
>> hint page faults from the same node?
>
> IIUC, we do not need to add a new field to the folio; it seems we can
> reuse the ->_flags_2a field. But how should we use it? If there are
> multiple consecutive NUMA faults from the same node, then allow
> migration?

_flags_2a cannot be used. You could place something after _deferred_list
IIRC. But only for folios with order>1.

But I also wonder how one could achieve that using a new field.
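
Something like the following, maybe; purely a sketch, the field names
and threshold are made up, and as said it would only work for order > 1
folios where there is space after _deferred_list:

	/*
	 * Hypothetical: only allow migration after a couple of
	 * consecutive NUMA hint faults from the same node.
	 * ->numa_fault_nid / ->numa_fault_count do not exist today.
	 */
	static bool mthp_numa_faults_stable(struct folio *folio, int nid)
	{
		if (folio->numa_fault_nid != nid) {
			/* Node changed: restart the streak. */
			folio->numa_fault_nid = nid;
			folio->numa_fault_count = 1;
			return false;
		}
		/* Arbitrary threshold: two consecutive faults. */
		return ++folio->numa_fault_count >= 2;
	}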

--
Cheers,

David / dhildenb

