Subject: [PATCH 4.20 008/111] netfilter: nf_conncount: split gc in two phases

    4.20-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Florian Westphal <fw@strlen.de>

    commit f7fcc98dfc2d136722007fec0debbed761679b94 upstream.

The lockless workqueue garbage collector can race with the packet path
garbage collector to delete list nodes, as it calls tree_nodes_free()
with the addresses of nodes that might already have been freed from
another CPU.

    To fix this, split gc into two phases.

One phase performs gc on the connections: from a locking perspective,
this is the same as count_tree(): we hold the rcu read lock, but we do
not change the tree, we only change the nodes' contents.

    The second phase acquires the tree lock and reaps empty nodes.
This avoids a race condition between the garbage collector and the packet
path: if a node has already been freed, the second phase won't find it anymore.

This second phase is, from a locking perspective, the same as insert_tree().

The former only modifies nodes (list content, count); the latter modifies
the tree itself (rb_erase or rb_insert).
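
As a rough user-space sketch of the same two-phase shape (all names
invented for illustration, not the kernel's): a fixed-size table of
struct node stands in for the rbtree of nf_conncount_rb nodes, and a
pthread rwlock stands in for RCU (read side, phase one) plus the
per-tree spinlock (write side, phase two):

	/*
	 * Hedged sketch, not kernel code: a tiny user-space model of
	 * the two-phase gc.  Compile with: cc -pthread model.c
	 */
	#include <pthread.h>
	#include <stdio.h>

	#define NODES 8

	struct node {
		int live;	/* node still linked in the "tree" */
		int count;	/* like rbconn->list.count */
	};

	static struct node tree[NODES];
	static pthread_rwlock_t tree_lock = PTHREAD_RWLOCK_INITIALIZER;

	/* Phase one: read side only; mutate node contents, never the
	 * tree shape, and do not record node addresses. */
	static int gc_phase_one(void)
	{
		int empties = 0;

		pthread_rwlock_rdlock(&tree_lock);
		for (int i = 0; i < NODES; i++) {
			if (!tree[i].live)
				continue;
			if (tree[i].count > 0)
				tree[i].count--;	/* model expiring one conn */
			if (tree[i].count == 0)
				empties++;		/* count, don't keep pointers */
		}
		pthread_rwlock_unlock(&tree_lock);
		return empties;
	}

	/* Phase two: write side; re-walk and reap nodes that are still
	 * empty.  A node unlinked meanwhile by the packet path is
	 * simply not found again, so no stale pointer is ever used. */
	static void gc_phase_two(void)
	{
		pthread_rwlock_wrlock(&tree_lock);
		for (int i = 0; i < NODES; i++)
			if (tree[i].live && tree[i].count == 0)
				tree[i].live = 0;	/* rb_erase() + free upstream */
		pthread_rwlock_unlock(&tree_lock);
	}

	int main(void)
	{
		for (int i = 0; i < NODES; i++) {
			tree[i].live = 1;
			tree[i].count = i % 3;	/* some lists start empty */
		}
		if (gc_phase_one() > 0)	/* real code wants a full batch first */
			gc_phase_two();
		for (int i = 0; i < NODES; i++)
			printf("node %d: live=%d count=%d\n",
			       i, tree[i].live, tree[i].count);
		return 0;
	}

The actual patch below adds two refinements this sketch omits: phase two
is skipped entirely unless phase one saw at least ARRAY_SIZE(gc_nodes)
empty lists ("do not bother"), and reaping proceeds in batches of that
same size so the on-stack pointer array is reused rather than overrun.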

    Fixes: 5c789e131cbb9 ("netfilter: nf_conncount: Add list lock and gc worker, and RCU for init tree search")
    Reviewed-by: Shawn Bohrer <sbohrer@cloudflare.com>
    Signed-off-by: Florian Westphal <fw@strlen.de>
    Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    net/netfilter/nf_conncount.c | 22 +++++++++++++++++++---
    1 file changed, 19 insertions(+), 3 deletions(-)

--- a/net/netfilter/nf_conncount.c
+++ b/net/netfilter/nf_conncount.c
@@ -500,16 +500,32 @@ static void tree_gc_worker(struct work_s
 	for (node = rb_first(root); node != NULL; node = rb_next(node)) {
 		rbconn = rb_entry(node, struct nf_conncount_rb, node);
 		if (nf_conncount_gc_list(data->net, &rbconn->list))
-			gc_nodes[gc_count++] = rbconn;
+			gc_count++;
 	}
 	rcu_read_unlock();
 
 	spin_lock_bh(&nf_conncount_locks[tree]);
+	if (gc_count < ARRAY_SIZE(gc_nodes))
+		goto next; /* do not bother */
 
-	if (gc_count) {
-		tree_nodes_free(root, gc_nodes, gc_count);
+	gc_count = 0;
+	node = rb_first(root);
+	while (node != NULL) {
+		rbconn = rb_entry(node, struct nf_conncount_rb, node);
+		node = rb_next(node);
+
+		if (rbconn->list.count > 0)
+			continue;
+
+		gc_nodes[gc_count++] = rbconn;
+		if (gc_count >= ARRAY_SIZE(gc_nodes)) {
+			tree_nodes_free(root, gc_nodes, gc_count);
+			gc_count = 0;
+		}
 	}
 
+	tree_nodes_free(root, gc_nodes, gc_count);
+next:
 	clear_bit(tree, data->pending_trees);
 
 	next_tree = (tree + 1) % CONNCOUNT_SLOTS;
