Date: Wed, 28 Dec 2011
Subject: [PATCH RFC] IPv6: Avoid taking write lock for /proc/net/ipv6_route
From: Josh Hunt <joshhunt00@gmail.com>

During some debugging I needed to look into how /proc/net/ipv6_route
operated, and in my digging I found it calls fib6_clean_all(), which
takes "write_lock_bh(&table->tb6_lock)" before walking the table. I
saw this on 2.6.32, but reading the code I believe the same basic
approach exists in the current code. Looking at the rtnetlink code,
it only takes "read_lock_bh(&table->tb6_lock)" via fib6_dump_table().
While I realize reading from proc probably isn't the recommended way
of fetching the ipv6 route table, taking a write lock seems
unnecessary and would probably cause network performance issues.
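
For context, the walk in fib6_clean_all() has roughly this shape on
the kernels I looked at (paraphrased, not a verbatim copy; only the
lock type differs from the read-only variant in the attached patch):

        rcu_read_lock();
        for (h = 0; h < FIB6_TABLE_HASHSZ; h++) {
                head = &net->ipv6.fib_table_hash[h];
                hlist_for_each_entry_rcu(table, node, head, tb6_hlist) {
                        /* exclusive lock held across the whole per-table walk */
                        write_lock_bh(&table->tb6_lock);
                        fib6_clean_tree(net, &table->tb6_root,
                                        func, prune, arg);
                        write_unlock_bh(&table->tb6_lock);
                }
        }
        rcu_read_unlock();

As far as I can tell the per-packet route lookup path takes tb6_lock
as a reader, so holding it for writing across each dump would stall
lookups for the duration of the walk.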

To verify this I loaded up the ipv6 route table and then ran iperf in
3 cases:
* doing nothing
* reading ipv6 route table via proc
  (while :; do cat /proc/net/ipv6_route > /dev/null; done)
* reading ipv6 route table via rtnetlink
  (while :; do ip -6 route show table all > /dev/null; done)

* Load the ipv6 route table up with:
* for ((i = 0;i < 4000;i++)); do ip route add unreachable 2000::$i; done

* iperf commands:
* client: iperf -i 1 -V -c <ipv6 addr>
* server: iperf -V -s

* iperf results - 3 runs as client, 3 runs as server (in Mbits/sec)
* nothing: client: 927,927,927 server: 927,927,927
* proc: client: 179,97,96,113 server: 142,112,133
* iproute: client: 928,927,928 server: 927,927,927

lock_stat shows that taking the write lock is causing the slowdown.
Using this info I wrote a version of fib6_clean_all() which replaces
write_lock_bh(&table->tb6_lock) with read_lock_bh(&table->tb6_lock).
With this new function I see the same results as with my rtnetlink
iperf test. I guess my question is: what am I missing? Is there a
reason the write lock needs to be taken when reading the route table
to display it in proc?

I've attached a patch implementing the crude approach described above.

Thanks
Josh
commit 606e34e9301aab5e73f47f24b21a61f813398367
Author: Josh Hunt <joshhunt00@gmail.com>
Date: Wed Dec 28 14:54:44 2011 -0800

IPv6: Avoid taking write lock for /proc/net/ipv6_route

During some debugging I needed to look into how /proc/net/ipv6_route
operated, and in my digging I found it calls fib6_clean_all(), which takes
"write_lock_bh(&table->tb6_lock)" before walking the table. I found this
on 2.6.32, but reading the code I believe the same basic approach exists
currently. Looking at the rtnetlink code, it only takes
"read_lock_bh(&table->tb6_lock)" via fib6_dump_table(). While I realize
reading from proc isn't the recommended way of fetching the ipv6 route
table, taking a write lock seems unnecessary and would probably cause
network performance issues.

To verify this I loaded up the ipv6 route table and then ran iperf in 3
cases:
* doing nothing
* reading ipv6 route table via proc
(while :; do cat /proc/net/ipv6_route > /dev/null; done)
* reading ipv6 route table via rtnetlink
(while :; do ip -6 route show table all > /dev/null; done)

* Load the ipv6 route table up with:
* for ((i = 0;i < 4000;i++)); do ip route add unreachable 2000::$i; done

* iperf commands:
* client: iperf -i 1 -V -c <ipv6 addr>
* server: iperf -V -s

* iperf results - 3 runs each (in Mbits/sec)
* nothing: client: 927,927,927 server: 927,927,927
* proc: client: 179,97,96,113 server: 142,112,133
* iproute: client: 928,927,928 server: 927,927,927

lock_stat shows taking the write lock is causing the slowdown. Using this
info I decided to write a version of fib6_clean_all() which replaces
write_lock_bh(&table->tb6_lock) with read_lock_bh(&table->tb6_lock). With
this new function I see the same results as with my rtnetlink iperf test.

Signed-off-by: Josh Hunt <joshhunt00@gmail.com>
---

diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h
index 5735a0f..1d242bb 100644
--- a/include/net/ip6_fib.h
+++ b/include/net/ip6_fib.h
@@ -202,6 +202,10 @@ struct fib6_node *fib6_locate(struct fib6_node *root,
                const struct in6_addr *daddr, int dst_len,
                const struct in6_addr *saddr, int src_len);
 
+extern void fib6_clean_all_ro(struct net *net,
+                              int (*func)(struct rt6_info *, void *arg),
+                              int prune, void *arg);
+
 extern void fib6_clean_all(struct net *net,
                            int (*func)(struct rt6_info *, void *arg),
                            int prune, void *arg);
diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
index 93718f3..3924b67 100644
--- a/net/ipv6/ip6_fib.c
+++ b/net/ipv6/ip6_fib.c
@@ -1380,6 +1380,26 @@ static void fib6_clean_tree(struct net *net, struct fib6_node *root,
         fib6_walk(&c.w);
 }
 
+void fib6_clean_all_ro(struct net *net, int (*func)(struct rt6_info *, void *arg),
+                       int prune, void *arg)
+{
+        struct fib6_table *table;
+        struct hlist_node *node;
+        struct hlist_head *head;
+        unsigned int h;
+
+        rcu_read_lock();
+        for (h = 0; h < FIB6_TABLE_HASHSZ; h++) {
+                head = &net->ipv6.fib_table_hash[h];
+                hlist_for_each_entry_rcu(table, node, head, tb6_hlist) {
+                        read_lock_bh(&table->tb6_lock);
+                        fib6_clean_tree(net, &table->tb6_root,
+                                        func, prune, arg);
+                        read_unlock_bh(&table->tb6_lock);
+                }
+        }
+        rcu_read_unlock();
+}
 void fib6_clean_all(struct net *net, int (*func)(struct rt6_info *, void *arg),
                     int prune, void *arg)
 {
diff --git a/net/ipv6/route.c b/net/ipv6/route.c
index 3399dd3..20f8f50 100644
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -2659,7 +2659,7 @@ static int rt6_info_route(struct rt6_info *rt, void *p_arg)
 static int ipv6_route_show(struct seq_file *m, void *v)
 {
         struct net *net = (struct net *)m->private;
-        fib6_clean_all(net, rt6_info_route, 0, m);
+        fib6_clean_all_ro(net, rt6_info_route, 0, m);
         return 0;
 }