    From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
    Subject: [PATCH 11/18] rcu/tree: Introduce expedited_drain flag

    The expedited_drain flag is set to true when the bulk array
    cannot be maintained, which happens under low-memory conditions
    and memory pressure.

    In that case the drain work is scheduled right away rather than
    after KFREE_DRAIN_JIFFIES. This is intended to speed up the
    reclaim path, although there is no data showing the difference
    yet.
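
    For illustration only (this sketch is not part of the patch), the
    scheduling decision introduced here boils down to the following
    standalone C program. The helper names can_queue_to_bulk_array() and
    schedule_drain() are hypothetical stand-ins for the kvfree_call_rcu()
    internals, and KFREE_DRAIN_JIFFIES is given a placeholder value:

        #include <stdbool.h>
        #include <stdio.h>

        #define KFREE_DRAIN_JIFFIES 20  /* placeholder value for this sketch */

        /* Stand-in: fails when no bulk-array slot is available (memory pressure). */
        static bool can_queue_to_bulk_array(void *ptr)
        {
                (void)ptr;
                return false;   /* assume the bulk array could not be maintained */
        }

        /* Stand-in for schedule_delayed_work(&krcp->monitor_work, delay). */
        static void schedule_drain(unsigned long delay)
        {
                printf("drain scheduled after %lu jiffies\n", delay);
        }

        static void queue_kvfree_request(void *ptr)
        {
                bool expedited_drain = false;

                if (!can_queue_to_bulk_array(ptr)) {
                        /*
                         * Fall back to the linked-list path and request an
                         * immediate drain instead of waiting KFREE_DRAIN_JIFFIES.
                         */
                        expedited_drain = true;
                }

                schedule_drain(expedited_drain ? 0 : KFREE_DRAIN_JIFFIES);
        }

        int main(void)
        {
                int dummy;

                queue_kvfree_request(&dummy);   /* prints: drain scheduled after 0 jiffies */
                return 0;
        }

    When the bulk array can be maintained, schedule_drain() would instead be
    called with KFREE_DRAIN_JIFFIES, preserving the existing batching behavior.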

    Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
    Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
    Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
    ---
    kernel/rcu/tree.c | 20 ++++++++++++++++----
    1 file changed, 16 insertions(+), 4 deletions(-)

    diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
    index 8fbc8450284db..3b94526f490cb 100644
    --- a/kernel/rcu/tree.c
    +++ b/kernel/rcu/tree.c
    @@ -3128,14 +3128,16 @@ kvfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
    * due to memory pressure.
    *
    * Each kvfree_call_rcu() request is added to a batch. The batch will be drained
    - * every KFREE_DRAIN_JIFFIES number of jiffies. All the objects in the batch will
    - * be free'd in workqueue context. This allows us to: batch requests together to
    - * reduce the number of grace periods during heavy kfree_rcu()/kvfree_rcu() load.
    + * every KFREE_DRAIN_JIFFIES number of jiffies or can be scheduled right away if
    + * a low memory condition is detected. All the objects in the batch will be free'd in
    + * workqueue context. This allows us to: batch requests together to reduce the
    + * number of grace periods during heavy kfree_rcu()/kvfree_rcu() load.
    */
    void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
    {
    unsigned long flags;
    struct kfree_rcu_cpu *krcp;
    + bool expedited_drain = false;
    void *ptr;

    local_irq_save(flags); // For safely calling this_cpu_ptr().
    @@ -3161,6 +3163,14 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
    head->func = func;
    head->next = krcp->head;
    krcp->head = head;
    +
    + /*
    + * The pointer could not be placed directly into the
    + * array due to memory pressure. Initiate an expedited
    + * drain to accelerate the otherwise lazy invocation
    + * of the appropriate free calls.
    + */
    + expedited_drain = true;
    }

    WRITE_ONCE(krcp->count, krcp->count + 1);
    @@ -3169,7 +3179,9 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
    if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
    !krcp->monitor_todo) {
    krcp->monitor_todo = true;
    - schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
    +
    + schedule_delayed_work(&krcp->monitor_work,
    + expedited_drain ? 0 : KFREE_DRAIN_JIFFIES);
    }

    unlock_return:
    --
    2.26.0.rc2.310.g2932bb562d-goog