Subject: Re: [PATCH v3] mm/compaction: let proactive compaction order configurable

> -----Original Message-----
> From: Khalid Aziz <khalid.aziz@oracle.com>
> Sent: May 7, 2021 5:27
> To: David Rientjes <rientjes@google.com>; Chu,Kaiping
> <chukaiping@baidu.com>
> Cc: mcgrof@kernel.org; keescook@chromium.org; yzaikin@google.com;
> akpm@linux-foundation.org; vbabka@suse.cz; nigupta@nvidia.com;
> bhe@redhat.com; iamjoonsoo.kim@lge.com; mateusznosek0@gmail.com;
> sh_def@163.com; linux-kernel@vger.kernel.org; linux-fsdevel@vger.kernel.org;
> linux-mm@kvack.org
> Subject: Re: [PATCH v3] mm/compaction: let proactive compaction order
> configurable
>
> On 4/25/21 9:15 PM, David Rientjes wrote:
> > On Sun, 25 Apr 2021, chukaiping wrote:
> >
> >> Currently the proactive compaction order is fixed to
> >> COMPACTION_HPAGE_ORDER (9). That is fine on most machines with plenty
> >> of normal 4KB memory, but it is too high for machines with little
> >> normal memory, for example machines where most memory is configured
> >> as 1GB hugetlbfs huge pages. On these machines the max order of free
> >> pages is often below 9, and it stays below 9 even after forced
> >> compaction, which causes proactive compaction to be triggered very
> >> frequently. On such machines we only care about order 3 or 4.
> >> This patch exports the order to proc and makes it configurable by the
> >> user; the default value is still COMPACTION_HPAGE_ORDER.
> >>
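To illustrate the situation described above: on machines where most memory is
pinned in 1GB huge pages, the highest populated order in /proc/buddyinfo rarely
reaches 9. A small check like this (plain illustration, not part of the patch)
reports it per zone:

#!/usr/bin/env python3
# Report the highest order that still has free pages in each zone,
# according to /proc/buddyinfo. On a machine where most memory is held
# in 1GB hugetlbfs pages this usually stays well below order 9.
with open("/proc/buddyinfo") as f:
    for line in f:
        parts = line.split()
        node, zone = parts[1].rstrip(","), parts[3]
        counts = [int(c) for c in parts[4:]]          # orders 0..MAX_ORDER-1
        top = max((o for o, n in enumerate(counts) if n > 0), default=-1)
        print(f"node {node} zone {zone}: highest populated order = {top}")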
> >
> > As asked in the review of the v1 of the patch, why is this not a
> > userspace policy decision? If you are interested in order-3 or
> > order-4 fragmentation, for whatever reason, you could periodically
> > check /proc/buddyinfo and manually invoke compaction on the system.
> >
> > In other words, why does this need to live in the kernel?
> >
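To make the userspace policy David describes concrete, it can be as small as a
loop that checks /proc/buddyinfo for the order we care about and pokes
/proc/sys/vm/compact_memory when it runs dry (order 4 below is only an example;
the write requires root):

#!/usr/bin/env python3
# Minimal sketch of the userspace approach: poll /proc/buddyinfo and
# trigger global compaction when no free blocks of the wanted order remain.
import time

TARGET_ORDER = 4          # example value, not a recommendation
CHECK_INTERVAL = 60       # seconds

def free_blocks_at_or_above(order):
    total = 0
    with open("/proc/buddyinfo") as f:
        for line in f:
            counts = [int(c) for c in line.split()[4:]]
            total += sum(counts[order:])
    return total

while True:
    if free_blocks_at_or_above(TARGET_ORDER) == 0:
        with open("/proc/sys/vm/compact_memory", "w") as f:
            f.write("1")  # ask the kernel to compact all zones
    time.sleep(CHECK_INTERVAL)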
>
> I have struggled with this question. Fragmentation and allocation stalls are
> significant issues on large database systems which also happen to use memory
> in similar ways (90+% of memory is allocated as hugepages) leaving just
> enough memory to run the rest of the userspace processes. I had originally
> proposed a kernel patch to monitor, do a trend analysis of memory usage and
> take proactive action -
> <https://lore.kernel.org/lkml/20190813014012.30232-1-khalid.aziz@oracle.com/>.
> Based upon feedback, I moved the implementation to userspace -
> <https://github.com/oracle/memoptimizer>. Test results across multiple
> workloads have been very good. Results from one of the workloads are in this
> blog - <https://blogs.oracle.com/linux/anticipating-your-memory-needs>. It
> works well from userspace but it has limited ways to influence reclamation and
> compaction. It uses watermark_scale_factor to boost watermarks and cause
> reclamation to kick in earlier and run longer. It uses
> /sys/devices/system/node/node%d/compact to force compaction on the node
> expected to reach a high level of fragmentation soon. Neither of these is very
> efficient from userspace even though they get the job done. Scaling the
> watermark has a longer-lasting impact than temporarily raising the scanning
> priority in balance_pgdat(). I plan to experiment with watermark_boost_factor
> to see if I can
> use it in place of /sys/devices/system/node/node%d/compact and get the
> same results. Doing all of this in the kernel can be more efficient and lessen
> potential negative impact on the system. On the other hand, it is easier to fix
> and update such policies in userspace, although at the cost of having a
> performance-critical component live outside the kernel and thus not be active
> on the system by default.
>
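For my own understanding, the two knobs mentioned above boil down to writes
like the following (values are placeholders; this is a sketch, not
memoptimizer's actual logic):

#!/usr/bin/env python3
# Sketch of the knobs described above: raise watermark_scale_factor so
# reclaim kicks in earlier and runs longer, and force compaction on a
# node expected to fragment soon. Both writes require root.

def boost_watermarks(scale):
    # default is 10 (0.1% of memory); larger values wake kswapd earlier
    with open("/proc/sys/vm/watermark_scale_factor", "w") as f:
        f.write(str(scale))

def compact_node(node):
    # per-node counterpart of /proc/sys/vm/compact_memory
    with open(f"/sys/devices/system/node/node{node}/compact", "w") as f:
        f.write("1")

boost_watermarks(150)   # placeholder value, not a recommendation
compact_node(0)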
I have studied your memoptimizer over the past few days, and I agree that moving the implementation into the kernel, so that it works together with the current proactive compaction mechanism, would be more efficient.
By the way, I am interested in memoptimizer and would like to test it, but how should I evaluate its effectiveness?
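One metric I am considering (my own assumption, not something documented for
memoptimizer) is the share of free memory that is unusable at the order we care
about, computed from /proc/buddyinfo before and after running the tool:

#!/usr/bin/env python3
# Possible evaluation metric: "unusable free space index" at a target order,
# i.e. the fraction of free pages sitting in blocks smaller than that order.
# Closer to 0 means most free memory is already usable at that order.

def unusable_index(target_order):
    free_pages = usable_pages = 0
    with open("/proc/buddyinfo") as f:
        for line in f:
            counts = [int(c) for c in line.split()[4:]]
            for order, n in enumerate(counts):
                free_pages += n << order
                if order >= target_order:
                    usable_pages += n << order
    return 1.0 - usable_pages / free_pages if free_pages else 0.0

print(f"unusable index at order 4: {unusable_index(4):.3f}")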

