Subject: Re: [PATCH v2 8/8] mm: Centralize & improve oom reporting in show_mem.c
On Fri, Apr 22, 2022 at 02:58:19PM +0200, Michal Hocko wrote:
> On Thu 21-04-22 19:48:37, Kent Overstreet wrote:
> > This patch:
> > - Changes show_mem() to always report on slab usage
> > - Instead of reporting on all slabs, we now report only on the top 10
> >   slabs, in sorted order
>
> As I've already pointed out in the email thread for the previous
> version, this would be better in its own patch explaining why we want to
> make this unconditional and why to limit the number of caches to print.
> Why shouldn't the threshold be based on absolute size?
>
> > - Also reports on shrinkers, with the new shrinkers_to_text().
> > Shrinkers need to be included in OOM/allocation failure reporting
> > because they're responsible for memory reclaim - if a shrinker isn't
> > giving up its memory, we need to know which one and why.
>
> Again, I do agree that information about shrinkers can be useful, but
> there are two main things to consider. Do we want to dump that
> information unconditionally? E.g. does it make sense to print it for all
> allocation requests (even high-order, GFP_NOWAIT...)? Should there be
> an explicit trigger for when to dump this data (like too many shrinkers
> failing, etc.)?

To add a concern: the largest shrinkers are usually memcg-aware. Scanning
over the whole cgroup tree (with potentially hundreds or thousands of cgroups)
and over every shrinker from the oom context sounds like a bad idea to me.
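
Just to make the cost concrete, a dump covering memcg-aware shrinkers implies
a walk roughly like the sketch below (illustrative only -- shrinker_list and
shrinker_rwsem are private to mm/vmscan.c, so this is not something you could
drop in as-is):

	/*
	 * Illustrative sketch, not code to paste in: with a couple of
	 * thousand cgroups and a few dozen registered shrinkers this is
	 * cgroups * shrinkers ->count_objects() calls, all issued from
	 * the oom context while the stuck allocation waits.
	 */
	struct mem_cgroup *memcg;
	struct shrinker *shrinker;

	down_read(&shrinker_rwsem);
	for (memcg = mem_cgroup_iter(NULL, NULL, NULL); memcg;
	     memcg = mem_cgroup_iter(NULL, memcg, NULL)) {
		list_for_each_entry(shrinker, &shrinker_list, list) {
			struct shrink_control sc = {
				.gfp_mask = GFP_KERNEL,
				.memcg    = memcg,
			};

			shrinker->count_objects(shrinker, &sc);
		}
	}
	up_read(&shrinker_rwsem);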

IMO it's more appropriate to do this from userspace, via oomd or a similar
daemon, well before the in-kernel OOM kicks in.

>
> Last but not least, let me echo the concern from the other reply. Memory
> allocations are not really something that should be done from the oom
> context, so pr_buf doesn't sound like a good tool here.

+1
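
FWIW, the usual way around that is to not build the report in dynamically
allocated memory at all: either print each line directly with printk(), or
format into a buffer that was set aside long before the oom path runs. A
minimal sketch of the preallocated-buffer variant (names are made up for
illustration, this is not the pr_buf API, and a real version would also need
locking against concurrent reports):

	/* Hypothetical sketch: the buffer is reserved at compile time, so
	 * reporting never allocates while memory is exhausted. */
	static char oom_report_buf[PAGE_SIZE];

	static void oom_report_line(const char *fmt, ...)
	{
		va_list args;

		va_start(args, fmt);
		vscnprintf(oom_report_buf, sizeof(oom_report_buf), fmt, args);
		va_end(args);
		printk(KERN_WARNING "%s", oom_report_buf);
	}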
