    Date: Fri, 9 Aug 2019
    From: Paul E. McKenney
    Subject: Re: [PATCH RFC v1 1/2] rcu/tree: Add basic support for kfree_rcu batching
    On Fri, Aug 09, 2019 at 05:25:12PM -0400, Joel Fernandes wrote:
    > On Fri, Aug 09, 2019 at 04:26:45PM -0400, Joel Fernandes wrote:
    > > On Fri, Aug 09, 2019 at 04:22:26PM -0400, Joel Fernandes wrote:
    > > > On Fri, Aug 09, 2019 at 09:33:46AM -0700, Paul E. McKenney wrote:
    > > > > On Fri, Aug 09, 2019 at 11:39:24AM -0400, Joel Fernandes wrote:
    > > > > > On Fri, Aug 09, 2019 at 08:16:19AM -0700, Paul E. McKenney wrote:
    > > > > > > On Thu, Aug 08, 2019 at 07:30:14PM -0400, Joel Fernandes wrote:
    > > > > > [snip]
    > > > > > > > > But I could make it something like:
    > > > > > > > > 1. Let ->head grow if ->head_free is busy.
    > > > > > > > > 2. If ->head_free is busy, just queue/requeue the monitor to try again.
    > > > > > > > >
    > > > > > > > > This would even improve performance, but would still risk running out of memory.
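
    A minimal sketch of that two-rule drain, with names patterned on this patch
    (struct kfree_rcu_cpu, ->head, ->head_free, monitor_work, KFREE_DRAIN_JIFFIES);
    the locking is simplified, and the rcu_work handler that kfree()s the detached
    batch and clears ->head_free is assumed but not shown:

        #include <linux/rcupdate.h>
        #include <linux/spinlock.h>
        #include <linux/workqueue.h>

        #define KFREE_DRAIN_JIFFIES (HZ / 50)     /* Monitor re-arm interval. */

        struct kfree_rcu_cpu {
                struct rcu_head *head;            /* Requests queued since last drain. */
                struct rcu_head *head_free;       /* Batch waiting for a grace period. */
                struct rcu_work rcu_work;         /* Frees ->head_free after the GP. */
                struct delayed_work monitor_work; /* Periodic drain attempt. */
                spinlock_t lock;
        };

        static void kfree_rcu_drain(struct kfree_rcu_cpu *krcp)
        {
                unsigned long flags;

                spin_lock_irqsave(&krcp->lock, flags);
                if (krcp->head_free) {
                        /* Previous batch still waiting on its grace period:
                         * let ->head keep growing and try again later. */
                        schedule_delayed_work(&krcp->monitor_work,
                                              KFREE_DRAIN_JIFFIES);
                } else {
                        /* Detach ->head as a new batch; queue_rcu_work() runs
                         * the handler only after a full grace period. */
                        krcp->head_free = krcp->head;
                        krcp->head = NULL;
                        queue_rcu_work(system_wq, &krcp->rcu_work);
                }
                spin_unlock_irqrestore(&krcp->lock, flags);
        }
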
    > > > > > > >
    > > > > > > > It seems I can indeed hit an out-of-memory condition once I changed it to
    > > > > > > > "letting list grow" (diff is below, which applies on top of this patch) while
    > > > > > > > at the same time removing the schedule_timeout(2) and replacing it with
    > > > > > > > cond_resched() in the rcuperf test. I think the reason is that the rcuperf
    > > > > > > > test starves the worker threads that execute in workqueue context after a
    > > > > > > > grace period, so they are unable to get enough CPU time to kfree things fast
    > > > > > > > enough. But I am not fully sure about it and need to test/trace more to
    > > > > > > > figure out why this is happening.
    > > > > > > >
    > > > > > > > If I add back the schedule_timeout_uninterruptible(2) call, the out-of-memory
    > > > > > > > situation goes away.
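
    For reference, a sketch of the kind of per-CPU loop the rcuperf test runs
    here (kfree_obj and kfree_perf_thread are hypothetical names; the real test
    differs in detail):

        #include <linux/kthread.h>
        #include <linux/rcupdate.h>
        #include <linux/slab.h>

        struct kfree_obj {
                unsigned char data[64];
                struct rcu_head rh;
        };

        static int kfree_perf_thread(void *unused)
        {
                struct kfree_obj *p;

                while (!kthread_should_stop()) {
                        p = kmalloc(sizeof(*p), GFP_KERNEL);
                        if (p)
                                kfree_rcu(p, rh);

                        /* This variant can OOM: it only yields the CPU. */
                        cond_resched();
                        /* Adding this back throttles the loop enough for
                         * the freeing work to keep up:
                         * schedule_timeout_uninterruptible(2);
                         */
                }
                return 0;
        }
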
    > > > > > > >
    > > > > > > > Clearly we need to do more work on this patch.
    > > > > > > >
    > > > > > > > In the regular kfree_rcu_no_batch() case, I don't hit this issue. I believe
    > > > > > > > that since the kfree is happening in softirq context in the _no_batch() case,
    > > > > > > > it fares better. The question, then, is how we run the rcu_work in a
    > > > > > > > higher-priority context so it is not starved and runs often enough. I'll
    > > > > > > > trace more.
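
    One hypothetical way to do that: queue the batches on a dedicated
    high-priority workqueue rather than system_wq (kfree_rcu_wq is an invented
    name, not something in this patch):

        #include <linux/workqueue.h>

        static struct workqueue_struct *kfree_rcu_wq;

        static int __init kfree_rcu_wq_init(void)
        {
                /* WQ_HIGHPRI workers run at elevated nice level, and
                 * WQ_MEM_RECLAIM guarantees forward progress under
                 * memory pressure. */
                kfree_rcu_wq = alloc_workqueue("kfree_rcu",
                                               WQ_HIGHPRI | WQ_MEM_RECLAIM, 0);
                return kfree_rcu_wq ? 0 : -ENOMEM;
        }
        early_initcall(kfree_rcu_wq_init);

        /* Batches would then be queued with:
         *      queue_rcu_work(kfree_rcu_wq, &krcp->rcu_work);
         */
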
    > > > > > > >
    > > > > > > > Perhaps I can also lower the priority of the rcuperf threads to give the
    > > > > > > > worker thread some more room to run and see if anything changes. But I am
    > > > > > > > not sure such modifications would prepare the code for the real world.
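
    If it comes to that, a sketch of lowering a torture thread's own priority,
    called from the thread itself at startup (the helper name is hypothetical;
    set_user_nice() and MAX_NICE are standard kernel APIs):

        #include <linux/sched.h>

        static void kfree_perf_thread_lower_prio(void)
        {
                /* Run the calling kthread at the lowest priority so the
                 * workqueue workers doing the kfree()s can get CPU time. */
                set_user_nice(current, MAX_NICE);
        }
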
    > > > > > > >
    > > > > > > > Any thoughts?
    > > > > > >
    > > > > > > Several! With luck, perhaps some are useful. ;-)
    > > > > > >
    > > > > > > o Increase the memory via kvm.sh "--memory 1G" or more. The
    > > > > > > default is "--memory 500M".
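
    For reference, that flag goes straight on the rcutorture kvm.sh command
    line, along the lines of (path as in the kernel tree, other flags elided):

        tools/testing/selftests/rcutorture/bin/kvm.sh --torture rcuperf --memory 1G
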
    > > > > >
    > > > > > Thanks, this definitely helped.
    > > >
    > > > Also, I can go back to 500M if I just keep KFREE_DRAIN_JIFFIES at HZ/50. So I
    > > > am quite happy about that. I think I can declare that the "let list grow
    > > > indefinitely" design works quite well even in the insanely heavily loaded
    > > > case of every CPU in a 16-CPU system with 500M of memory indefinitely doing
    > > > kfree_rcu() in a tight loop with appropriate cond_resched(). And I am
    > > > thinking - wow, how does this stuff even work at such insane scales :-D
    > >
    > > Oh, and I should probably also count whether there is any reduction in the
    > > total number of grace periods, due to the batching!
    >
    > And, the number of grace periods did dramatically drop (by 5X) with the
    > batching!! I have modified the rcuperf test to show the number of grace
    > periods that elapsed during the test.
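
    A sketch of how that measurement can be made in rcuperf-style code:
    snapshot the grace-period sequence counter around the test and print the
    difference (cur_ops->get_gp_seq() and rcu_seq_diff() are patterned on the
    in-tree torture code; their use here is simplified):

        static unsigned long gps_start, gps_end;

        static void kfree_perf_report(void)
        {
                gps_start = cur_ops->get_gp_seq();
                /* ... run the kfree_rcu() stress loops ... */
                gps_end = cur_ops->get_gp_seq();
                pr_info("Total grace periods elapsed: %lu\n",
                        rcu_seq_diff(gps_end, gps_start));
        }
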

    Very good! Batching for the win! ;-)

    Thanx, Paul
