Subject: Re: [PATCH -mm] mm, pagemap: Fix soft dirty marking for PMD migration entry
On 10/17/2017 01:48 PM, Huang, Ying wrote:
> From: Huang Ying <ying.huang@intel.com>
>
> Now, when the page table is walked in the implementation of
> /proc/<pid>/pagemap, pmd_soft_dirty() is used for both the huge PMD
> mapping and the PMD migration entry. That is wrong:
> pmd_swp_soft_dirty() should be used for PMD migration entries
> instead, because a different page table entry flag is used there.

Yeah, different flags can be used on various archs to represent
a mapped PMD and a PMD migration entry. Sounds good.
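
For context, what this ends up controlling is the soft-dirty flag (bit 55)
in the 64-bit entries user space reads from /proc/<pid>/pagemap. Just to
illustrate what consumers of the interface see, here is a minimal,
hypothetical userspace sketch (not part of the patch) that checks the bit
for one address; unprivileged readers may get the PFN field zeroed, but
the flag bits are still reported:

/* Hypothetical example: read the pagemap entry covering vaddr in process
 * pid and test the soft-dirty bit (bit 55), which PM_SOFT_DIRTY sets. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

static int is_soft_dirty(pid_t pid, unsigned long vaddr)
{
	char path[64];
	uint64_t entry;
	long pagesize = sysconf(_SC_PAGESIZE);
	int fd;

	snprintf(path, sizeof(path), "/proc/%d/pagemap", (int)pid);
	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;

	/* One 64-bit entry per virtual page, indexed by virtual page number. */
	if (pread(fd, &entry, sizeof(entry),
		  (off_t)(vaddr / pagesize) * sizeof(entry)) != (ssize_t)sizeof(entry)) {
		close(fd);
		return -1;
	}
	close(fd);

	return (int)((entry >> 55) & 1);	/* bit 55: soft-dirty */
}

int main(void)
{
	int x = 0;	/* any page in our own address space */

	printf("soft-dirty: %d\n", is_soft_dirty(getpid(), (unsigned long)&x));
	return 0;
}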

>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: "Jérôme Glisse" <jglisse@redhat.com>
> Cc: Daniel Colascione <dancol@google.com>
> Cc: Zi Yan <zi.yan@cs.rutgers.edu>
> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> ---
> fs/proc/task_mmu.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 2593a0c609d7..01aad772f8db 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1311,13 +1311,15 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
>  		pmd_t pmd = *pmdp;
>  		struct page *page = NULL;
>
> -		if ((vma->vm_flags & VM_SOFTDIRTY) || pmd_soft_dirty(pmd))
> +		if (vma->vm_flags & VM_SOFTDIRTY)
>  			flags |= PM_SOFT_DIRTY;
>
>  		if (pmd_present(pmd)) {
>  			page = pmd_page(pmd);
>
>  			flags |= PM_PRESENT;
> +			if (pmd_soft_dirty(pmd))
> +				flags |= PM_SOFT_DIRTY;
>  			if (pm->show_pfn)
>  				frame = pmd_pfn(pmd) +
>  					((addr & ~PMD_MASK) >> PAGE_SHIFT);
> @@ -1329,6 +1331,8 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
>  			frame = swp_type(entry) |
>  				(swp_offset(entry) << MAX_SWAPFILES_SHIFT);
>  			flags |= PM_SWAP;
> +			if (pmd_swp_soft_dirty(pmd))
> +				flags |= PM_SOFT_DIRTY;

I was initially skeptical about whether this would compile on
POWER, given the lack of a pmd_swp_soft_dirty() definition there,
but it turns out we have a generic fallback to rely on, as we
don't define ARCH_ENABLE_THP_MIGRATION yet.

#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
#ifndef CONFIG_ARCH_ENABLE_THP_MIGRATION
static inline pmd_t pmd_swp_mksoft_dirty(pmd_t pmd)
{
	return pmd;
}

static inline int pmd_swp_soft_dirty(pmd_t pmd)
{
	return 0;
}

static inline pmd_t pmd_swp_clear_soft_dirty(pmd_t pmd)
{
	return pmd;
}
#endif
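
For comparison, an arch that does select ARCH_ENABLE_THP_MIGRATION has to
provide real definitions. IIRC x86 does it roughly like below (a sketch
from memory of arch/x86/include/asm/pgtable.h, not a verbatim quote):

#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
static inline pmd_t pmd_swp_mksoft_dirty(pmd_t pmd)
{
	/* Mark the swap/migration encoding of this PMD as soft-dirty. */
	return pmd_set_flags(pmd, _PAGE_SWP_SOFT_DIRTY);
}

static inline int pmd_swp_soft_dirty(pmd_t pmd)
{
	return pmd_flags(pmd) & _PAGE_SWP_SOFT_DIRTY;
}

static inline pmd_t pmd_swp_clear_soft_dirty(pmd_t pmd)
{
	return pmd_clear_flags(pmd, _PAGE_SWP_SOFT_DIRTY);
}
#endif

_PAGE_SWP_SOFT_DIRTY is a separate bit from _PAGE_SOFT_DIRTY because the
swap entry format does not keep the regular soft-dirty bit free, which is
exactly why the pagemap walker has to pick the accessor that matches the
entry type, as this patch does.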
