Date: 2006-01-06
From: Eric Dumazet <dada1@cosmosbay.com>
Subject: Re: [PATCH, RFC] RCU : OOM avoidance and lower latency (Version 2), HOTPLUG_CPU fix
First patch was buggy, sorry :(

This 2nd version no longer makes unsafe assumptions about RCU internals: only the
'donelist' queue is searched for an item to delete. Items on the donelist have
already completed their grace period, so they are ready to be freed.
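
For reference, here is the per-CPU state involved, abridged from
include/linux/rcupdate.h in 2.6.15 (the comments are mine): callbacks queued by
call_rcu() flow from nxtlist to curlist to donelist as grace periods complete,
which is why only donelist entries are safe to invoke early.

struct rcu_data {
	...
	struct rcu_head	*nxtlist;	/* new callbacks, not yet batched */
	struct rcu_head	**nxttail;
	long		count;		/* # of queued items */
	struct rcu_head	*curlist;	/* waiting for the current grace period */
	struct rcu_head	**curtail;
	struct rcu_head	*donelist;	/* grace period complete: safe to free */
	struct rcu_head	**donetail;
	int		cpu;
	...
};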

This V2 also corrects a problem in the CPU hotplug path: we forgot to update
the ->count variable when transferring one CPU's queue to another.

-------------------------------------------------------------------------
In order to avoid OOMs triggered by a flood of call_rcu() calls, Linux 2.6.14
increased maxbatch from 10 to 10000 and made call_rcu() conditionally call
set_need_resched().

This solution doesn't solve all the problems, and it has drawbacks:

1) Using a big maxbatch has a bad impact on latency.
2) A flood of call_rcu_bh() calls can still trigger an OOM.

I have some servers that crash once in a while when the IP route cache is
flushed. After raising /proc/sys/net/ipv4/route/secret_interval (so that *no*
flush is done), I got better uptime on these servers. But in some cases I
think the network stack can flood call_rcu_bh(), and a fatal OOM occurs.

In this patch I suggest:

1) Lowering maxbatch to a value that is more reasonable as far as latency is
concerned (100 instead of 10000).

2) Guarding each per-CPU RCU queue against a maximal count (10,000 for
example). If this limit is reached, free the oldest entry, provided one is
available on the donelist queue.

3) Fixing a bug in __rcu_offline_cpu(), where we forgot to adjust the ->count
field when transferring one CPU's queue to another.

In my stress tests, I could no longer reproduce the OOM after applying this patch.
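
For the curious, a flood of this kind can be approximated with a toy module
along these lines (a hypothetical sketch, not part of the patch; every name in
it is made up). Queued from a single task that never sleeps, the callbacks
cannot be invoked until a quiescent state occurs, so the donelist/count logic
above gets exercised:

/*
 * Hypothetical stress module: floods call_rcu() with dummy callbacks
 * to exercise the per-CPU queue limit. Load-only toy (no module_exit),
 * since unloading with callbacks still pending would be unsafe.
 */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/rcupdate.h>

struct rcu_flood {
	struct rcu_head head;
};

static void rcu_flood_free(struct rcu_head *head)
{
	kfree(container_of(head, struct rcu_flood, head));
}

static int __init rcu_flood_init(void)
{
	int i;

	for (i = 0; i < (1 << 20); i++) {	/* queue ~1M callbacks */
		struct rcu_flood *p = kmalloc(sizeof(*p), GFP_ATOMIC);

		if (!p)
			break;
		call_rcu(&p->head, rcu_flood_free);
	}
	return 0;
}
module_init(rcu_flood_init);
MODULE_LICENSE("GPL");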

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
--- linux-2.6.15/kernel/rcupdate.c	2006-01-03 04:21:10.000000000 +0100
+++ linux-2.6.15-edum/kernel/rcupdate.c	2006-01-06 13:32:02.000000000 +0100
@@ -71,14 +71,14 @@
 
 /* Fake initialization required by compiler */
 static DEFINE_PER_CPU(struct tasklet_struct, rcu_tasklet) = {NULL};
-static int maxbatch = 10000;
+static int maxbatch = 100;
 
 #ifndef __HAVE_ARCH_CMPXCHG
 /*
  * We use an array of spinlocks for the rcurefs -- similar to ones in sparc
  * 32 bit atomic_t implementations, and a hash function similar to that
  * for our refcounting needs.
- * Can't help multiprocessors which donot have cmpxchg :(
+ * Can't help multiprocessors which don't have cmpxchg :(
  */
 
 spinlock_t __rcuref_hash[RCUREF_HASH_SIZE] = {
@@ -110,9 +110,19 @@
 	*rdp->nxttail = head;
 	rdp->nxttail = &head->next;
 
-	if (unlikely(++rdp->count > 10000))
-		set_need_resched();
-
+/*
+ * OOM avoidance : If we queued too many items in this queue,
+ * free the oldest entry (from the donelist only to respect
+ * RCU constraints)
+ */
+	if (unlikely(++rdp->count > 10000 && (head = rdp->donelist))) {
+		rdp->count--;
+		rdp->donelist = head->next;
+		if (!rdp->donelist)
+			rdp->donetail = &rdp->donelist;
+		local_irq_restore(flags);
+		return head->func(head);
+	}
 	local_irq_restore(flags);
 }

@@ -148,12 +158,19 @@
rdp = &__get_cpu_var(rcu_bh_data);
*rdp->nxttail = head;
rdp->nxttail = &head->next;
- rdp->count++;
/*
- * Should we directly call rcu_do_batch() here ?
- * if (unlikely(rdp->count > 10000))
- * rcu_do_batch(rdp);
+ * OOM avoidance : If we queued too many items in this queue,
+ * free the oldest entry (from the donelist only to respect
+ * RCU constraints)
*/
+ if (unlikely(++rdp->count > 10000 && (head = rdp->donelist))) {
+ rdp->count--;
+ rdp->donelist = head->next;
+ if (!rdp->donelist)
+ rdp->donetail = &rdp->donelist;
+ local_irq_restore(flags);
+ return head->func(head);
+ }
local_irq_restore(flags);
}

@@ -208,19 +225,20 @@
  */
 static void rcu_do_batch(struct rcu_data *rdp)
 {
-	struct rcu_head *next, *list;
-	int count = 0;
+	struct rcu_head *next = NULL, *list;
+	int count = maxbatch;
 
 	list = rdp->donelist;
 	while (list) {
-		next = rdp->donelist = list->next;
+		next = list->next;
 		list->func(list);
 		list = next;
 		rdp->count--;
-		if (++count >= maxbatch)
+		if (--count <= 0)
 			break;
 	}
-	if (!rdp->donelist)
+	rdp->donelist = next;
+	if (!next)
 		rdp->donetail = &rdp->donelist;
 	else
 		tasklet_schedule(&per_cpu(rcu_tasklet, rdp->cpu));
@@ -344,11 +362,9 @@
 static void rcu_move_batch(struct rcu_data *this_rdp, struct rcu_head *list,
 				struct rcu_head **tail)
 {
-	local_irq_disable();
 	*this_rdp->nxttail = list;
 	if (list)
 		this_rdp->nxttail = tail;
-	local_irq_enable();
 }
 
 static void __rcu_offline_cpu(struct rcu_data *this_rdp,
@@ -362,9 +378,12 @@
 	if (rcp->cur != rcp->completed)
 		cpu_quiet(rdp->cpu, rcp, rsp);
 	spin_unlock_bh(&rsp->lock);
+	local_irq_disable();
 	rcu_move_batch(this_rdp, rdp->curlist, rdp->curtail);
 	rcu_move_batch(this_rdp, rdp->nxtlist, rdp->nxttail);
-
+	this_rdp->count += rdp->count;
+	rdp->count = 0;
+	local_irq_enable();
 }
 static void rcu_offline_cpu(int cpu)
 {