Subject: Re: [RFC PATCH v3 00/16] Core scheduling v3
On Tue, 2019-10-29 at 16:34 -0400, Julien Desfossez wrote:
> On 29-Oct-2019 10:20:57 AM, Dario Faggioli wrote:
> > Hello,
> >
> > As anticipated, I've been trying to follow the development of this
> > feature and, in the meantime, I have done some benchmarks.
> Hi Dario,
>
> Thank you for this comprehensive set of tests and analyses!
Sure. And sorry for replying so late. I was travelling (for speaking
about core scheduling and virtualization at KVM Forum :-P) and, after
that, I had some catching up to do.

> It confirms the trend we are seeing for the VM cases. Basically when
> the CPUs are overcommitted, core scheduling helps compared to noHT.
> But when we have I/O in the mix (sysbench-oltp), then it becomes a bit
> less clear, it depends if the CPU is still overcommitted or not. About
> the 2nd VM that is doing the background noise, is it enough to fill up
> the disk queues or is its disk throughput somewhat limited? Have you
> compared the results if you disable the disk noise?
There was some I/O, but it was mostly CPU noise. Anyway, sure, I can
repeat the experiments with different kinds of noise. TBH, I also have
other ideas for different setups. And of course, I'll work on v4 now.
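
For reference, this is more or less what I have in mind for generating
the two kinds of noise inside the "noise" VM; just a sketch with
stress-ng, where the worker counts and durations are placeholders:

  # mostly-CPU noise (roughly what this round of runs used):
  stress-ng --cpu 4 --timeout 600s

  # disk-heavy noise instead, to try and fill up the disk queues:
  stress-ng --hdd 2 --hdd-bytes 1G --timeout 600s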

> Our approach for bare-metal tests is a bit different, we are
> constraining a set of processes only on a limited set of cpus, but I
> like your approach because it pushes the number of processes up
> against the whole system.
Yes, and this time I deliberately chose a small system, to avoid
having NUMA effects, etc. But I'm working toward running the evaluation
on a bigger box.

> I am curious, for the tagging in KVM, do you move all the vcpus into
> the same cgroup before tagging? Did you leave the emulator threads
> untagged at all times?
So, for this round, yes, all the vcpus of the VM were put in the same
cgroup, and then I set the tag for it.

All the other threads that libvirt creates were left out of such a
group (and were, hence, untagged). I did a few manual runs with _all_
the tasks related to a VM in a tagged cgroup, but I did not see much
of a difference (that's why the numbers for these runs are not
reported).
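
Concretely, the setup looked roughly like this; a sketch assuming the
cpu.tag file that the series adds to the cpu cgroup controller (v1
paths, and the VM/group name is made up):

  # create a cpu cgroup for the VM's vcpus and tag it:
  mkdir /sys/fs/cgroup/cpu/vm1
  echo 1 > /sys/fs/cgroup/cpu/vm1/cpu.tag

  # move only the vcpu threads of the QEMU process into the group;
  # QEMU names them "CPU <n>/KVM", so the emulator and IO threads
  # stay out (and hence untagged):
  QEMU_PID=$(pgrep -f 'qemu.*vm1' | head -1)
  for tid in $(ls /proc/$QEMU_PID/task); do
      grep -q '^CPU ' /proc/$QEMU_PID/task/$tid/comm && \
          echo $tid > /sys/fs/cgroup/cpu/vm1/tasks
  done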

The VM did not have any virtual topology defined.

And in fact, one thing that I want to try is to put pairs of vcpus in
the same cgroup, tag it, and define a virtual HT topology for the VM
(i.e., mark the two vcpus that will be in the same cgroup with the same
tag as threads of the same core).
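
For a 4-vcpu guest, that would look something like this; sketched with
QEMU's -smp option (with libvirt, the equivalent is the <topology>
element in the guest XML), and the cgroup names are, again, made up:

  # expose 2 cores with 2 threads each, so that vcpus 0/1 and 2/3
  # look like SMT siblings from inside the guest:
  qemu-system-x86_64 ... -smp 4,sockets=1,cores=2,threads=2 ...

  # each sibling pair then goes in its own tagged cgroup:
  mkdir /sys/fs/cgroup/cpu/vm1-core0 /sys/fs/cgroup/cpu/vm1-core1
  echo 1 > /sys/fs/cgroup/cpu/vm1-core0/cpu.tag
  echo 1 > /sys/fs/cgroup/cpu/vm1-core1/cpu.tag
  # (and the vcpu thread pairs get moved into the respective 'tasks'
  # files, as above)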

> For the overhead (without tagging), have you tried bisecting the
> patchset to see which patch introduces the overhead? It is more than
> I had in mind.
Yes, there is definitely something weird. Well, in the meantime, I
improved my automated procedure for running the benchmarks. I'll rerun
on v4. And I'll do a bisect if the overhead is still that big.
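
If it comes to that, the bisection itself is the standard procedure;
a sketch, assuming v4 really is based on v5.3.5 and with a made-up
name for the benchmark wrapper script:

  git bisect start
  git bisect bad coresched/v4-tip    # tip of the series (made-up ref)
  git bisect good v5.3.5             # the base it is rebased on
  # the script runs the benchmark and exits 0 if the overhead is
  # acceptable, non-zero otherwise:
  git bisect run ./bench-overhead-check.sh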

> And for the cases when core scheduling improves the performance
> compared to the baseline numbers, could it be related to frequency
> scaling (more work to do means a higher chance of running at a higher
> frequency)?
The governor was 'performance' during all the experiments. But yes,
since it's intel_pstate that is in charge, the frequency can still
vary, and something like what you suggest may indeed be happening, I
think.
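
One thing I can do for the next round is to take this variable out by
pinning the frequency; a sketch using the intel_pstate sysfs knobs:

  # double check that 'performance' is in use everywhere:
  cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

  # disable turbo and pin min == max, so the selected P-state
  # cannot vary with load any longer:
  echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo
  echo 100 > /sys/devices/system/cpu/intel_pstate/min_perf_pct
  echo 100 > /sys/devices/system/cpu/intel_pstate/max_perf_pct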

> We are almost ready to send the v4 patchset (most likely tomorrow),
> it has been rebased on v5.3.5, so stay tuned and ready for another
> set of tests ;-)
Already on it. :-)

Thanks and Regards
Dario Faggioli, Ph.D
Virtualization Software Engineer
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)
