Subject: Re: [PATCH 1/1] mm: count time in drain_all_pages during direct reclaim as memory pressure
On Sat, Feb 19, 2022 at 09:49:40AM -0800, Suren Baghdasaryan wrote:
> When page allocation in direct reclaim path fails, the system will
> make one attempt to shrink per-cpu page lists and free pages from
> high alloc reserves. Draining per-cpu pages into buddy allocator can
> be a very slow operation because it's done using workqueues and the
> task in direct reclaim waits for all of them to finish before

Yes, drain_all_pages is seriously slow (100ms - 150ms on Android),
especially when CPUs are fully packed. It was also spotted in CMA
allocation even when there was no memory pressure.

> proceeding. Currently this time is not accounted as psi memory stall.

Good spot.

>
> While testing mobile devices under extreme memory pressure, when
> allocations are failing during direct reclaim, we noticed that psi
> events which would be expected in such conditions were not triggered.
> After profiling these cases it was determined that the reason for
> missing psi events was that a big chunk of time spent in direct
> reclaim is not accounted as memory stall, therefore psi would not
> reach the levels at which an event is generated. Further investigation
> revealed that the bulk of that unaccounted time was spent inside
> drain_all_pages call.
>
> Annotate drain_all_pages and unreserve_highatomic_pageblock during
> page allocation failure in the direct reclaim path so that delays
> caused by these calls are accounted as memory stall.
>
> Reported-by: Tim Murray <timmurray@google.com>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> ---
> mm/page_alloc.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3589febc6d31..7fd0d392b39b 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4639,8 +4639,12 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>  	 * Shrink them and try again
>  	 */
>  	if (!page && !drained) {
> +		unsigned long pflags;
> +
> +		psi_memstall_enter(&pflags);
>  		unreserve_highatomic_pageblock(ac, false);
>  		drain_all_pages(NULL);
> +		psi_memstall_leave(&pflags);

Instead of annotating the specific drain_all_pages call, how about
moving the annotation from __perform_reclaim to
__alloc_pages_direct_reclaim?
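
Just to illustrate the idea, a rough sketch of what that move could look
like, assuming the current layout of __alloc_pages_direct_reclaim in
mm/page_alloc.c (the existing psi_memstall_enter/leave pair in
__perform_reclaim would be dropped at the same time). This is only a
sketch, not a tested patch:

static inline struct page *
__alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
		unsigned int alloc_flags, const struct alloc_context *ac,
		unsigned long *did_some_progress)
{
	struct page *page = NULL;
	unsigned long pflags;
	bool drained = false;

	/* Account the whole direct reclaim attempt, including the drain */
	psi_memstall_enter(&pflags);

	*did_some_progress = __perform_reclaim(gfp_mask, order, ac);
	if (unlikely(!(*did_some_progress)))
		goto out;

retry:
	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);

	/*
	 * If an allocation failed after direct reclaim, it could be because
	 * pages are pinned on the per-cpu lists or in high alloc reserves.
	 * Shrink them and try again.
	 */
	if (!page && !drained) {
		unreserve_highatomic_pageblock(ac, false);
		drain_all_pages(NULL);
		drained = true;
		goto retry;
	}
out:
	psi_memstall_leave(&pflags);

	return page;
}

That way unreserve_highatomic_pageblock and drain_all_pages are covered
as memory stall without annotating each call individually.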
