Subject: [PATCH 5/5] irqtime: drop local_irq_save/restore from irqtime_account_irq
From: Rik van Riel <riel@redhat.com>

Drop local_irq_save/restore from irqtime_account_irq.
Instead, have softirq and hardirq track their time spent
independently, with the softirq code subtracting any hardirq
time that occurred during the softirq run.
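
For example (with illustrative numbers): if a softirq run lasts
from t=0us to t=100us, and a hardirq interrupts it from t=30us
to t=40us, the hardirq path accounts those 10us and the softirq
path accounts 100us - 10us = 90us, so that interval is not
counted twice.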

The softirq code can be interrupted by hardirq code at
any point in time, but it can check whether it got a
consistent snapshot of the timekeeping variables it wants,
and loop around in the unlikely case that it did not.

Signed-off-by: Rik van Riel <riel@redhat.com>
---
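[Not part of the patch: a stand-alone userspace sketch of the
snapshot-and-retry pattern the softirq path below relies on. The
names read_clock_ns and hardirq_ns are hypothetical stand-ins for
sched_clock_cpu() and cpu_hardirq_time, not kernel APIs.]

#include <stdatomic.h>
#include <stdint.h>

extern uint64_t read_clock_ns(void);	/* stand-in for sched_clock_cpu() */

_Atomic uint64_t hardirq_ns;		/* stand-in for cpu_hardirq_time */

/*
 * Time spent in the current "softirq" run, minus any "hardirq" time
 * that interrupted it. If hardirq_ns changed while we were computing,
 * the snapshot may be inconsistent: loop around and redo the math.
 */
uint64_t softirq_delta(uint64_t softirq_start, uint64_t prev_hardirq)
{
	uint64_t snap, delta;

	do {
		snap = atomic_load(&hardirq_ns);
		delta = read_clock_ns() - softirq_start;
		delta -= snap - prev_hardirq;
	} while (snap != atomic_load(&hardirq_ns));

	return delta;
}
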
kernel/sched/cputime.c | 55 +++++++++++++++++++++++++++++++++++++++++++++++------------
1 file changed, 43 insertions(+), 12 deletions(-)

diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index e009077aeab6..466aff107f73 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -26,7 +26,9 @@
DEFINE_PER_CPU(u64, cpu_hardirq_time);
DEFINE_PER_CPU(u64, cpu_softirq_time);

-static DEFINE_PER_CPU(u64, irq_start_time);
+static DEFINE_PER_CPU(u64, hardirq_start_time);
+static DEFINE_PER_CPU(u64, softirq_start_time);
+static DEFINE_PER_CPU(u64, prev_hardirq_time);
static int sched_clock_irqtime;

void enable_sched_clock_irqtime(void)
@@ -53,36 +55,65 @@ DEFINE_PER_CPU(seqcount_t, irq_time_seq);
* softirq -> hardirq, hardirq -> softirq
*
* When exiting hardirq or softirq time, account the elapsed time.
+ *
+ * When exiting softirq time, subtract the amount of hardirq time that
+ * interrupted this softirq run, to avoid double accounting of that time.
*/
void irqtime_account_irq(struct task_struct *curr, int irqtype)
{
- unsigned long flags;
- s64 delta;
+ u64 prev_softirq_start;
+ u64 prev_hardirq;
+ u64 hardirq_time, now;
+ s64 delta = 0;
int cpu;

if (!sched_clock_irqtime)
return;

- local_irq_save(flags);
-
cpu = smp_processor_id();
- delta = sched_clock_cpu(cpu) - __this_cpu_read(irq_start_time);
- __this_cpu_add(irq_start_time, delta);
+ prev_hardirq = __this_cpu_read(prev_hardirq_time);
+ prev_softirq_start = __this_cpu_read(softirq_start_time);
+ /*
+ * Softirq may be interrupted by hardirq on the same CPU: note hardirq
+ * time at softirq entry, subtract it from the interval at softirq exit.
+ */
+ if (irqtype == HARDIRQ_OFFSET) {
+ delta = sched_clock_cpu(cpu) - __this_cpu_read(hardirq_start_time);
+ __this_cpu_add(hardirq_start_time, delta);
+ } else do {
+ hardirq_time = READ_ONCE(per_cpu(cpu_hardirq_time, cpu));
+ now = sched_clock_cpu(cpu);
+
+ delta = now - prev_softirq_start;
+ if (in_serving_softirq()) {
+ /*
+ * Leaving softirq context. Avoid double counting by
+ * subtracting hardirq time from this interval.
+ */
+ delta -= hardirq_time - prev_hardirq;
+ } else {
+ /* Entering softirq context. Note start times. */
+ __this_cpu_write(softirq_start_time, now);
+ __this_cpu_write(prev_hardirq_time, hardirq_time);
+ }
+ /*
+ * If a hardirq happened during this calculation, it may not
+ * have gotten a consistent snapshot. Try again.
+ */
+ } while (hardirq_time != READ_ONCE(per_cpu(cpu_hardirq_time, cpu)));

- irq_time_write_begin();
/*
* We do not account for softirq time from ksoftirqd here.
* We want to continue accounting softirq time to the ksoftirqd
* thread in that case, so as not to confuse the scheduler with a
* special task that does not consume any time but still wants to run.
*/
- if (hardirq_count())
+ if (irqtype == HARDIRQ_OFFSET && hardirq_count())
__this_cpu_add(cpu_hardirq_time, delta);
- else if (in_serving_softirq() && curr != this_cpu_ksoftirqd())
+ else if (irqtype == SOFTIRQ_OFFSET && in_serving_softirq() &&
+ curr != this_cpu_ksoftirqd())
__this_cpu_add(cpu_softirq_time, delta);

- irq_time_write_end();
- local_irq_restore(flags);
}
EXPORT_SYMBOL_GPL(irqtime_account_irq);

--
2.5.5