Subject: Re: [PATCH RFC] rcu/tree: Use GFP_MEMALLOC for alloc memory to free memory pattern
> > > 
> > > Right. Per discussion with Paul, we discussed that it is better if we
> > > pre-allocate N number of array blocks per-CPU and use it for the cache.
> > > Default for N being 1 and tunable with a boot parameter. I agree with this.
> > >
> > As discussed before, we can make use of the memory pool API for that
> > purpose. But I am not sure if it should be one pool per CPU or one
> > pool shared by all NR_CPUS CPUs, which would contain NR_CPUS * N
> > pre-allocated blocks.
>
> There are advantages and disadvantages either way. The advantage of the
> per-CPU pool is that you don't have to worry about something like lock
> contention causing even more pain during an OOM event. One potential
> problem with the per-CPU pool can happen when callbacks are offloaded,
> in which case the CPUs needing the memory might never be getting it,
> because in the offloaded case (RCU_NOCB_CPU=y) the CPU posting callbacks
> might never be invoking them.
>
> But from what I know now, systems built with CONFIG_RCU_NOCB_CPU=y
> either don't have heavy callback loads (HPC systems) or are carefully
> configured (real-time systems). Plus large systems would probably end
> up needing something pretty close to a slab allocator to keep from dying
> from lock contention, and it is hard to justify that level of complexity
> at this point.
>
> Or is there some way to mark a specific slab allocator instance as being
> able to keep some amount of memory no matter what the OOM conditions are?
> If not, the current per-CPU pre-allocated cache is a better choice in the
> near term.
>
As for the mempool API:

mempool_alloc() first tries a regular allocation, taking the passed
gfp_t bitmask into account. If that fails due to memory pressure, it
falls back to the reserved pool, which consists of the desired number
of elements pre-allocated when the pool is created.

mempool_free() returns an element to the pool: if it detects that the
number of reserved elements is lower than the minimum allowed, it adds
the element back to the reserved pool, i.e. refills it. Otherwise it
just calls kfree() or whatever we define as the "element-freeing
function".
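
Just to illustrate, a minimal sketch of how that could look for this
use case (the names, the reserve count and the block size below are
made up for the example, they are not taken from the actual patch):

#include <linux/mempool.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* Illustrative values only; "N" would become the boot parameter. */
#define KRC_RESERVED_NR		2
#define KRC_BLOCK_SIZE		PAGE_SIZE

static mempool_t *krc_pool;

static int __init krc_pool_init(void)
{
	/* Pre-allocates KRC_RESERVED_NR blocks of KRC_BLOCK_SIZE bytes. */
	krc_pool = mempool_create_kmalloc_pool(KRC_RESERVED_NR,
					       KRC_BLOCK_SIZE);
	return krc_pool ? 0 : -ENOMEM;
}

static void *krc_get_block(void)
{
	/*
	 * Regular kmalloc() attempt first; the reserve is only used
	 * when that fails under memory pressure. With a non-sleeping
	 * mask this returns NULL once the reserve is exhausted.
	 */
	return mempool_alloc(krc_pool, GFP_NOWAIT | __GFP_NOWARN);
}

static void krc_put_block(void *block)
{
	/*
	 * Refills the reserve if it has dropped below KRC_RESERVED_NR,
	 * otherwise the block is simply kfree()d.
	 */
	mempool_free(block, krc_pool);
}

A per-CPU variant would just keep one such pool in the per-CPU
structure instead of a single global one.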

>
> If not, the current per-CPU pre-allocated cache is a better choice in the
> near term.
>
OK. I see your point.

Thank you for your comments and views :)

--
Vlad Rezki
