Subject: Re: [PATCH v4] sched/core: Preempt current task in favour of bound kthread
On Thu, Dec 12, 2019 at 09:46:17AM +1100, Dave Chinner wrote:
> On Wed, Dec 11, 2019 at 11:08:29PM +0530, Srikar Dronamraju wrote:
> > A running task can wake up a per-CPU bound kthread on the same CPU.
> > If the currently running task doesn't yield the CPU before the next
> > load balance operation, the scheduler would detect a load imbalance
> > and try to balance the load. However, this load balance would fail as
> > the waiting task is CPU bound, while the running task cannot be moved
> > by the regular load balancer. Finally, the active load balancer would
> > kick in and move the task to a different CPU/core. Moving the task to
> > a different CPU/core can lead to loss of cache affinity and hence
> > poor performance.
> >
> > This is more prone to happen if the currently running task is CPU
> > intensive and sched_wakeup_granularity is set to a larger value.
> > When sched_wakeup_granularity was relatively small, it was observed
> > that the bound kthread would complete before the load balancer would
> > have chosen to move the cache-hot task to a different CPU.
> >
> > To deal with this situation, the currently running task would yield to
> > a per-CPU bound kthread, provided the kthread is not CPU intensive.
>
> So a question for you here: when does the workqueue worker pre-empt
> the currently running task? Is it immediately? Or when a time-slice
> of the currently running task runs out?
>
> We don't want queued work immediately pre-empting the task that
> queued the work - the queued work is *deferred* work that should be
> run _soon_ but we want the currently running task to finish what it
> is doing first if possible.
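For context, the wakeup in question is typically a task queueing
deferred work on its own CPU, which wakes the kworker bound to that
CPU. A minimal, hypothetical sketch of the pattern; only the
workqueue/smp APIs are real, the my_* names are made up:

	#include <linux/workqueue.h>
	#include <linux/smp.h>

	static void my_deferred_fn(struct work_struct *work)
	{
		/* runs later, in the kworker bound to the queueing CPU */
	}

	static DECLARE_WORK(my_work, my_deferred_fn);

	static void my_kick_local_work(void)
	{
		int cpu = get_cpu();	/* current CPU, preemption disabled */

		/* wakes the per-CPU kworker on this CPU */
		queue_work_on(cpu, system_wq, &my_work);
		put_cpu();
	}

The kworker can only run on that CPU, so it either preempts the
queueing task or waits for it to yield.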

Good point, something to maybe try (Srikar?) is making tick preemption
more aggressive for such tasks.

The below extends the previous patch to retain the set_next_buddy() on
wakeup, but does not make the actual preemption more aggressive.

Then it 'fixes' the tick preemption to better align with the actual
scheduler pick (i.e., consider the buddy hints).
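To spell out the wakeup side: wakeup_preempt_entity() returns -1, 0 or
1, so the new test 'wpe >= !(wake_flags & WF_KTHREAD)' behaves roughly
like the helper below (illustration only, not part of the patch):

	static bool should_set_buddy(int wpe, int wake_flags)
	{
		if (wake_flags & WF_KTHREAD)
			return wpe >= 0;	/* kthread's vruntime is behind curr's */
		return wpe == 1;		/* everyone else: only when actually preempting */
	}

The immediate preemption (goto preempt) still requires wpe == 1 in both
cases; a per-CPU kthread that wakes up behind curr only gets marked as
the next buddy. The tick side then measures curr against that buddy
instead of only the leftmost entity, so curr gets resched'ed at a later
tick rather than immediately, which is roughly the 'soon, but not
immediately' behaviour Dave describes.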

---
 kernel/sched/core.c  |  3 +++
 kernel/sched/fair.c  | 26 +++++++++++++++++++-------
 kernel/sched/sched.h |  3 ++-
 3 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 90e4b00ace89..75738b136ea7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2560,6 +2560,9 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	success = 1;
 	cpu = task_cpu(p);
 
+	if (is_per_cpu_kthread(p))
+		wake_flags |= WF_KTHREAD;
+
 	/*
 	 * Ensure we load p->on_rq _after_ p->state, otherwise it would
 	 * be possible to, falsely, observe p->on_rq == 0 and get stuck
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 08a233e97a01..78e681c8c19a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4126,6 +4126,9 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 		update_min_vruntime(cfs_rq);
 }
 
+static struct sched_entity *
+__pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr);
+
 /*
  * Preempt the current task with a newly woken task if needed:
  */
@@ -4156,13 +4159,13 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 	if (delta_exec < sysctl_sched_min_granularity)
 		return;
 
-	se = __pick_first_entity(cfs_rq);
+	se = __pick_next_entity(cfs_rq, NULL);
 	delta = curr->vruntime - se->vruntime;
 
 	if (delta < 0)
 		return;
 
-	if (delta > ideal_runtime)
+	if (delta > ideal_runtime) // maybe frob this too ?
 		resched_curr(rq_of(cfs_rq));
 }

@@ -4210,7 +4213,7 @@ wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se);
  * 4) do not run the "skip" process, if something else is available
  */
 static struct sched_entity *
-pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
+__pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 {
 	struct sched_entity *left = __pick_first_entity(cfs_rq);
 	struct sched_entity *se;
@@ -4255,8 +4258,14 @@ pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 	if (cfs_rq->next && wakeup_preempt_entity(cfs_rq->next, left) < 1)
 		se = cfs_rq->next;
 
-	clear_buddies(cfs_rq, se);
+	return se;
+}
 
+static struct sched_entity *
+pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
+{
+	struct sched_entity *se = __pick_next_entity(cfs_rq, curr);
+	clear_buddies(cfs_rq, se);
 	return se;
 }

@@ -6565,7 +6574,7 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_
 	struct sched_entity *se = &curr->se, *pse = &p->se;
 	struct cfs_rq *cfs_rq = task_cfs_rq(curr);
 	int scale = cfs_rq->nr_running >= sched_nr_latency;
-	int next_buddy_marked = 0;
+	int wpe, next_buddy_marked = 0;
 
 	if (unlikely(se == pse))
 		return;
@@ -6612,14 +6621,17 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_
 	find_matching_se(&se, &pse);
 	update_curr(cfs_rq_of(se));
 	BUG_ON(!pse);
-	if (wakeup_preempt_entity(se, pse) == 1) {
+	wpe = wakeup_preempt_entity(se, pse);
+	if (wpe >= !(wake_flags & WF_KTHREAD)) {
 		/*
 		 * Bias pick_next to pick the sched entity that is
 		 * triggering this preemption.
 		 */
 		if (!next_buddy_marked)
 			set_next_buddy(pse);
-		goto preempt;
+
+		if (wpe == 1)
+			goto preempt;
 	}
 
 	return;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 280a3c735935..2ee86ef51001 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1643,7 +1643,8 @@ static inline int task_on_rq_migrating(struct task_struct *p)
  */
 #define WF_SYNC		0x01	/* Waker goes to sleep after wakeup */
 #define WF_FORK		0x02	/* Child wakeup after fork */
-#define WF_MIGRATED	0x4	/* Internal use, task got migrated */
+#define WF_MIGRATED	0x04	/* Internal use, task got migrated */
+#define WF_KTHREAD	0x08
 
 /*
  * To aid in avoiding the subversion of "niceness" due to uneven distribution