Date: Thu, 18 May 2023 17:03:35 -0700
From: Ricardo Neri <>
Subject: Re: [PATCH v4 00/12] sched: Avoid unnecessary migrations within SMT domains
On Fri, May 12, 2023 at 11:53:48PM +0530, Shrikanth Hegde wrote:
>
>
> On 4/29/23 9:02 PM, Peter Zijlstra wrote:
> > On Thu, Apr 06, 2023 at 01:31:36PM -0700, Ricardo Neri wrote:
> >> Hi,
> >>
> >> This is v4 of this series. Previous versions can be found here [1], [2],
> >> and here [3]. To avoid duplication, I do not include the cover letter of
> >> the original submission. You can read it in [1].
> >>
> >> This patchset applies cleanly on today's master branch of the tip tree.
> >>
> >> Changes since v3:
> >>
> >> Nobody liked the proposed changes to the setting of prefer_sibling.
> >> Instead, I tweaked the solution that Dietmar proposed. Now the busiest
> >> group, not the local group, determines the setting of prefer_sibling.
> >>
> >> Vincent suggested improvements to the logic used to decide whether to
> >> follow asym_packing priorities. Peter suggested wrapping that logic in a
> >> helper function. I added sched_use_asym_prio().
> >>
> >> Ionela found that removing SD_ASYM_PACKING from the SMT domain in x86
> >> rendered sd_asym_packing NULL in SMT cores. Now highest_flag_domain()
> >> does not assume that all child domains have the requested flag.
> >>
> >> Tim found that asym_active_balance() needs to also check the idle
> >> states of the SMT siblings of lb_env::dst_cpu. I added such a check.
> >>
> >> I wrongly assumed that asym_packing could only be used when the busiest
> >> group had exactly one busy CPU. This broke asym_packing balancing at the
> >> DIE domain. I limited this check to balances between cores at the MC
> >> level.
> >>
> >> As per a suggestion from Dietmar, I removed sched_asym_smt_can_pull_tasks()
> >> and placed its logic in sched_asym(). Also, sched_asym() now uses
> >> sched_smt_active() to skip checks when they are not needed.
> >>
> >> I also added a patch from Chen Yu to enable asym_packing balancing in
> >> Meteor Lake, which has CPUs of different maximum frequency in more than
> >> one die.
> >
> > Is the actual topology of Meteor Lake already public? This patch made me
> > wonder if we need SCHED_CLUSTER topology in the hybrid_topology thing,
> > but I can't remember (one of the reasons why the endless calls are such
> > a frigging waste of time) and I can't seem to find the answer using
> > Google either.
> >
> >> Hopefully, these patches are in sufficiently good shape to be merged?
> >
> > Changelogs are very sparse towards the end and I had to reverse engineer
> > some of it, which is a shame. But yeah, on a first reading the code looks
> > mostly OK. Specifically, patches 8-10 had me WTF a bit and only at patch
> > 11 did it start to make a little sense. Mostly they utterly fail to answer
> > the very fundamental "why did you do this" question.
> >
> > Also, you seem to have forgotten to Cc our friends from IBM such that
> > they might verify you didn't break their Power7 stuff -- or do you have
> > a Power7 yourself to verify and forgot to mention that?
>
> Very good patch series addressing asym packing, with interesting
> discussions as well. It took me quite some time to get through it,
> understand it, and do a little bit of testing.
>
> I tested this patch series a bit on Power7 with qemu, with SMT=4. The
> sched domains show ASYM_PACKING present only for the SMT domain.
>
> We don't see any regressions or gains due to the patch. SMT priorities
> are honored when tasks are scheduled and load-balanced.
Thank you very much for your review and testing! Would you mind sharing the qemu command you use? I would like to test my future patches on Power7 as well; my guess at an equivalent invocation is below.
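To be clear about what I have in mind, this is only a sketch: the machine type, SMP layout, and the kernel/disk image paths are my assumptions, not your actual setup.

  # Guessed layout: 1 socket x 2 cores x 4 threads (SMT=4).
  # vmlinux and rootfs.img are placeholder paths.
  qemu-system-ppc64 -M pseries -cpu POWER7 \
      -smp 8,sockets=1,cores=2,threads=4 \
      -m 4G -nographic \
      -kernel vmlinux \
      -append "console=hvc0 root=/dev/sda" \
      -drive file=rootfs.img,format=raw

Once booted, I would confirm where ASYM_PACKING ends up by reading the sched domain flags from debugfs (assuming CONFIG_SCHED_DEBUG is enabled and debugfs is mounted):

  # Print the flags of every sched domain of cpu0; on this topology only
  # the SMT domain should list SD_ASYM_PACKING.
  grep -H . /sys/kernel/debug/sched/domains/cpu0/domain*/flags

Does that look close to what you used?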
BR,
Ricardo