Date:    Sat, 25 Feb 2012 12:24:03 +0530
From:    Srivatsa Vaddagiri <>
Subject: Re: sched: Avoid SMT siblings in select_idle_sibling() if possible
* Mike Galbraith <efault@gmx.de> [2012-02-23 12:21:04]:
> Unpinned netperf TCP_RR and/or tbench pairs?  Anything that's wakeup
> heavy should tell the tale.
Here are some tbench numbers:
Machine : 2 Intel Xeon X5650 (Westmere) CPUs (6 cores/package)
Kernel  : tip (HEAD at ebe97fa)
dbench  : v4.0
One tbench server/client pair was run on the same host 5 times (with the
fs-cache purged before each run), and the average throughput over the 5
runs is noted for each case below:
Case A : HT enabled (24 logical CPUs)
Thr'put : 168.166 MB/s (SD_SHARE_PKG_RESOURCES + !SD_BALANCE_WAKE)
Thr'put : 169.564 MB/s (SD_SHARE_PKG_RESOURCES + SD_BALANCE_WAKE at mc/smt)
Thr'put : 173.151 MB/s (!SD_SHARE_PKG_RESOURCES + !SD_BALANCE_WAKE)
Case B : HT disabled (12 logical CPUs)
Thr'put : 167.977 MB/s (SD_SHARE_PKG_RESOURCES + !SD_BALANCE_WAKE)
Thr'put : 167.891 MB/s (SD_SHARE_PKG_RESOURCES + SD_BALANCE_WAKE at mc)
Thr'put : 173.801 MB/s (!SD_SHARE_PKG_RESOURCES + !SD_BALANCE_WAKE)
Observations:
a. ~3% improvement seen with SD_SHARE_PKG_RESOURCES disabled, which I guess reflects the cost of waking to a cold L2 cache.
b. No degradation is seen with SD_BALANCE_WAKE enabled at the mc/smt domains.
IMO we need to detect tbench-type paired wakeups as the synchronous case,
and in that case simply wake the task on cur_cpu (as the cost of an L2
cache miss could outweigh whatever is gained from reduced scheduling
latency).
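A minimal sketch of that short-circuit, against the scheduler internals of
this era (WF_SYNC, cpu_rq() and select_idle_sibling() are real; the helper
itself and its exact conditions are invented for illustration):

/*
 * Hypothetical sketch, not actual kernel code: short-circuit
 * synchronous wakeups onto the waking CPU.
 */
static int sync_wakeup_target(struct task_struct *p, int wake_flags)
{
	int this_cpu = smp_processor_id();

	/*
	 * A sync wakeup means the waker expects to block soon
	 * (e.g. a tbench server that has just queued its reply).
	 * If the waker is the only runnable task here, the L2 it
	 * warmed is the cheapest place for the wakee to run.
	 */
	if ((wake_flags & WF_SYNC) && cpu_rq(this_cpu)->nr_running == 1)
		return this_cpu;

	/* Not a sync pair: fall back to the idle-sibling search. */
	return select_idle_sibling(p, this_cpu);
}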
IOW, select_task_rq_fair() needs to be given a better hint as to whether
the L2 cache has been made warm by someone (an interrupt handler or a
producer task), in which case the (consumer) task should be woken in the
same L2 cache domain (i.e. on cur_cpu itself).
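One way such a hint could look, purely as a sketch: an invented
per-runqueue timestamp (l2_warm_stamp is not a real struct rq field;
rq->clock and NSEC_PER_MSEC are real):

/*
 * Hypothetical "L2 warmth" hint, for illustration only.
 */

/* Producer (or interrupt handler) side: stamp the runqueue right
 * after filling the data the consumer will read. */
static inline void note_l2_warm(struct rq *rq)
{
	rq->l2_warm_stamp = rq->clock;
}

/* Wakeup side: treat the local L2 as warm for ~1ms afterwards and
 * prefer cur_cpu for the consumer while it is. */
static inline bool l2_is_warm(struct rq *rq)
{
	return rq->clock - rq->l2_warm_stamp < NSEC_PER_MSEC;
}

The ~1ms window is arbitrary; the real question is who sets the stamp and
how it ages.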
- vatsa