Date: Tue, 18 Aug 2020
From: Paul E. McKenney
Subject: Re: [RFC-PATCH 1/2] mm: Add __GFP_NO_LOCKS flag

On Tue, Aug 18, 2020 at 06:55:11PM +0200, Thomas Gleixner wrote:
> On Tue, Aug 18 2020 at 09:13, Paul E. McKenney wrote:
> > On Tue, Aug 18, 2020 at 04:43:14PM +0200, Thomas Gleixner wrote:
> >> On Tue, Aug 18 2020 at 06:53, Paul E. McKenney wrote:
> >> > On Tue, Aug 18, 2020 at 09:43:44AM +0200, Michal Hocko wrote:
> >> >> Thomas had a good point that it doesn't really make much sense to
> >> >> optimize for flooders because that just makes them more effective.
> >> >
> >> > The point is not to make the flooders go faster, but rather for the
> >> > system to be robust in the face of flooders. Robust as in harder for
> >> > a flooder to OOM the system.
> >> >
> >> > And reducing the number of post-grace-period cache misses makes it
> >> > easier for the callback-invocation-time memory freeing to keep up with
> >> > the flooder, thus avoiding (or at least delaying) the OOM.
> >>
> >> Throttling the flooder increases robustness far more than reducing
> >> cache misses.
> >
> > True, but it takes time to identify a flooding event that needs to be
> > throttled (as in milliseconds). This time cannot be made up.
>
> Not really. A flooding event will deplete your preallocated pages very
> fast, so you have to go into the allocator and get new ones, which
> naturally throttles the offender.

Should it turn out that we can in fact go into the allocator, completely
agreed.

> So if your open/close thing uses the new single-argument free, which has
> to be called from sleepable context, then the allocation either gives you
> a page or that thing has to wait. No fancy extras.

In the single-argument kvfree_rcu() case, completely agreed.
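
For illustration, a minimal sketch of the two call patterns under
discussion; struct foo and the helper names are hypothetical, and only
kvfree_rcu() and struct rcu_head are the real interfaces:

#include <linux/rcupdate.h>
#include <linux/slab.h>

/*
 * Minimal sketch, not merged kernel code.  The double-argument form
 * queues the object through its embedded rcu_head and is safe from
 * non-sleepable context.  The single-argument form discussed in this
 * thread must be called from sleepable context, so once preallocated
 * pages are gone it can simply wait in the page allocator (or fall
 * back to a grace-period wait), which is what throttles a flooder.
 */
struct foo {
        int data;
        struct rcu_head rh;     /* needed only by the two-argument form */
};

static void foo_free_from_atomic(struct foo *fp)
{
        kvfree_rcu(fp, rh);     /* never sleeps */
}

static void foo_free_from_sleepable(struct foo *fp)
{
        kvfree_rcu(fp);         /* may block on allocation or a grace period */
}

Under flooding, the sleepable path is the one that ends up waiting for
the allocator, which is the natural throttle described above.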

> You can still have a page reserved for your other regular things and
> once that is gone, you have to fall back to the linked list for
> those. But when that happens, the extra cache misses are not your main
> problem anymore.

The extra cache misses are a problem in that case because they throttle
the reclamation, which anti-throttles the producer, especially in the
case where callback invocation is offloaded.
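
To make the reserved-page-versus-linked-list fallback concrete, a
simplified sketch with made-up names (bulk_page, queue_for_free);
locking and the grace-period machinery of the real kernel/rcu/tree.c
code are left out:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/types.h>

/*
 * Simplified sketch, not the kernel/rcu/tree.c implementation.
 * Pointers batched into the reserved page are later freed by walking
 * a contiguous array (few cache misses).  When no page can be had
 * with GFP_NOWAIT, objects are chained through their rcu_head
 * instead, and post-grace-period freeing then takes a cache miss per
 * object, which is what slows reclamation under a flood.
 */
struct bulk_page {
        unsigned long nr;
        struct bulk_page *next;
        void *ptrs[PAGE_SIZE / sizeof(void *) - 2];
};

static struct bulk_page *reserve;       /* preallocated page, if any */
static struct rcu_head *backlog;        /* fallback: per-object list */

static void queue_for_free(void *obj, struct rcu_head *rh)
{
        if (!reserve) {
                reserve = (void *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
                if (reserve)
                        reserve->nr = 0;
        }
        if (reserve && reserve->nr < ARRAY_SIZE(reserve->ptrs)) {
                reserve->ptrs[reserve->nr++] = obj;     /* bulk-array path */
        } else {
                rh->next = backlog;                     /* linked-list path */
                backlog = rh;
        }
}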

Thanx, Paul
