Subject: Re: [PATCH v18 06/14] mm/damon: Implement callbacks for the virtual memory address spaces
On Mon, Jul 13, 2020 at 1:44 AM SeongJae Park <sjpark@amazon.com> wrote:
>
> From: SeongJae Park <sjpark@amazon.de>
>
> This commit introduces a reference implementation of the
> address-space-specific low-level primitives for the virtual address
> space, so that users of DAMON can easily monitor data accesses in the
> virtual address spaces of specific processes, simply by configuring
> DAMON to use this implementation.
>
> The low-level primitives for the fundamental access monitoring are
> defined in two parts:
> 1. Identification of the monitoring target address range for the
> address space.
> 2. Access check of a specific address range in the target space.
>
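
For readers skimming: the two parts above boil down to two hooks that an
address-space implementation has to provide.  A rough illustration with
made-up names (the real callback names come from the DAMON core patches,
which are not quoted here):

#include <linux/types.h>
#include <linux/mm_types.h>

/* illustration only, not the patch's actual interface */
struct damon_primitives_sketch {
	/* part 1: construct the monitoring target address ranges */
	void (*construct_target_regions)(struct mm_struct *mm);
	/* part 2: check whether a given range was accessed */
	bool (*check_access)(struct mm_struct *mm,
			     unsigned long start, unsigned long end);
};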
> The reference implementation for the virtual address space provided by
> this commit is designed as below.
>
> PTE Accessed-bit Based Access Check
> -----------------------------------
>
> The implementation uses the PTE Accessed bit for basic access checks.
> That is, it clears the bit for the next sampling target page and checks
> whether it is set again after one sampling period. To avoid disturbing
> other Accessed-bit users such as the reclamation logic, the
> implementation adjusts the ``PG_idle`` and ``PG_young`` page flags
> appropriately, in the same way as 'Idle Page Tracking' does.
>
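
Just to spell out how the check half pairs with the damon_mkold() quoted
further below, here is a minimal untested sketch of my own, assuming the
caller holds the mmap read lock; the function name is made up, and the
real patch additionally handles the page-idle flags and the huge-page
cases:

#include <linux/mm.h>

static bool damon_was_accessed_sketch(struct mm_struct *mm,
				      unsigned long addr)
{
	pte_t *pte = NULL;
	pmd_t *pmd = NULL;
	spinlock_t *ptl;
	bool young = false;

	if (follow_pte_pmd(mm, addr, NULL, &pte, &pmd, &ptl))
		return false;

	if (pte) {
		young = pte_young(*pte);	/* set again since mkold? */
		pte_unmap_unlock(pte, ptl);
	} else if (pmd) {
		young = pmd_young(*pmd);
		spin_unlock(ptl);
	}
	return young;
}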
> VMA-based Target Address Range Construction
> -------------------------------------------
>
> Only small parts of the super-huge virtual address space of a process
> are mapped to physical memory and accessed. Thus, tracking the unmapped
> address regions is just wasteful. However, because DAMON can deal with
> some level of noise using the adaptive regions adjustment mechanism,
> tracking every mapping is not strictly required; it could even incur
> high overhead in some cases. That said, excessively huge unmapped areas
> inside the monitoring target should be removed so that the adaptive
> mechanism does not waste time on them.
>
> For this reason, this implementation converts the complex mappings into
> three distinct regions that together cover every mapped area of the
> address space. The two gaps between the three regions are the two
> biggest unmapped areas in the given address space. In most cases, the
> two biggest unmapped areas are the gap between the heap and the
> uppermost mmap()-ed region, and the gap between the lowermost mmap()-ed
> region and the stack. Because these gaps are exceptionally huge in
> usual address spaces, excluding them is sufficient to make a reasonable
> trade-off. Below shows this in detail::
>
> <heap>
> <BIG UNMAPPED REGION 1>
> <uppermost mmap()-ed region>
> (small mmap()-ed regions and munmap()-ed regions)
> <lowermost mmap()-ed region>
> <BIG UNMAPPED REGION 2>
> <stack>
>
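
To make the three-region construction above concrete, here is a minimal
untested sketch of my own (struct damon_addr_range and the function name
are made up for the illustration; it assumes the caller holds the mmap
read lock and that the address space has at least three mapped areas, so
that two real gaps exist):

#include <linux/kernel.h>
#include <linux/mm.h>

struct damon_addr_range {
	unsigned long start;
	unsigned long end;
};

static void sketch_three_regions(struct mm_struct *mm,
				 struct damon_addr_range regions[3])
{
	struct vm_area_struct *vma;
	struct damon_addr_range gap1 = {0, 0}, gap2 = {0, 0};
	unsigned long start, end = 0;

	if (!mm->mmap)	/* no mappings at all */
		return;
	start = mm->mmap->vm_start;

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		end = vma->vm_end;
		if (!vma->vm_next)
			break;
		/* track the two biggest gaps to the next VMA */
		if (vma->vm_next->vm_start - vma->vm_end >
				gap1.end - gap1.start) {
			gap2 = gap1;
			gap1.start = vma->vm_end;
			gap1.end = vma->vm_next->vm_start;
		} else if (vma->vm_next->vm_start - vma->vm_end >
				gap2.end - gap2.start) {
			gap2.start = vma->vm_end;
			gap2.end = vma->vm_next->vm_start;
		}
	}

	/* order the two biggest gaps by address, then carve three regions */
	if (gap1.start > gap2.start)
		swap(gap1, gap2);
	regions[0] = (struct damon_addr_range){ start, gap1.start };
	regions[1] = (struct damon_addr_range){ gap1.end, gap2.start };
	regions[2] = (struct damon_addr_range){ gap2.end, end };
}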
> Signed-off-by: SeongJae Park <sjpark@amazon.de>
> Reviewed-by: Leonard Foerster <foersleo@amazon.de>
[snip]
> +
> +static void damon_mkold(struct mm_struct *mm, unsigned long addr)
> +{
> +	pte_t *pte = NULL;
> +	pmd_t *pmd = NULL;
> +	spinlock_t *ptl;
> +
> +	if (follow_pte_pmd(mm, addr, NULL, &pte, &pmd, &ptl))
> +		return;
> +
> +	if (pte) {
> +		if (pte_young(*pte)) {

Any reason for skipping mmu_notifier_clear_young()? Without it, accesses
made through secondary MMUs (e.g. KVM guests) are not caught, so why
exclude VMs as DAMON's target applications?
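
In case it helps, an untested sketch of what I have in mind (PTE case
only): fold the secondary-MMU notification into the same Accessed-bit
clearing, so that accesses from e.g. KVM guests are reflected as well:

#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/page_idle.h>

static void damon_mkold_with_notifiers(struct mm_struct *mm,
				       unsigned long addr)
{
	pte_t *pte = NULL;
	pmd_t *pmd = NULL;
	spinlock_t *ptl;

	if (follow_pte_pmd(mm, addr, NULL, &pte, &pmd, &ptl))
		return;

	if (pte) {
		/* consume the PTE Accessed bit and the secondary MMU state */
		if (pte_young(*pte) ||
		    mmu_notifier_clear_young(mm, addr, addr + PAGE_SIZE)) {
			clear_page_idle(pte_page(*pte));
			set_page_young(pte_page(*pte));
		}
		*pte = pte_mkold(*pte);
		pte_unmap_unlock(pte, ptl);
	}
}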

> +			clear_page_idle(pte_page(*pte));
> +			set_page_young(pte_page(*pte));
> +		}
> +		*pte = pte_mkold(*pte);
> +		pte_unmap_unlock(pte, ptl);
> +		return;
> +	}
> +
