From: Francisco Jerez <>
Subject: Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
Date: Wed, 12 Feb 2020 15:32:35 -0800
"Rafael J. Wysocki" <rjw@rjwysocki.net> writes:
> Hi All,
>
> This series of patches is based on the observation that after commit
> c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
> in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
> code dedicated to the handling of global PM QoS classes in general. That code
> takes up space and adds overhead in vain, so it is better to get rid of it.
>
> Moreover, with that unuseful code removed, the interface for adding QoS
> requests for CPU latency becomes inelegant and confusing, so it is better to
> clean it up.
>
> Patches [01/28-12/28] do the first part described above, which also includes
> some assorted cleanups of the core PM QoS code that doesn't go away.
>
> Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
> "define stubs, migrate users, change the API proper" manner), patches
> [26-27/28] update the general comments and documentation to match the code
> after the previous changes and the last one makes the CPU latency QoS depend
> on CPU_IDLE (because cpuidle is the only user of its target value today).
>
> The majority of the patches in this series don't change the functionality of
> the code at all (at least not intentionally).
>
> Please refer to the changelogs of individual patches for details.
>
> Thanks!
Hi Rafael,
I believe some of the interfaces removed here could be useful in the near future. This goes back to the series I submitted a while ago [1] to improve energy efficiency (and IGP graphics performance). That series relies on a mechanism for the graphics driver to report an I/O bottleneck to CPUFREQ, allowing it to make a more conservative trade-off between energy efficiency and latency. This can greatly reduce the CPU package energy usage of I/O-bound applications (in some graphics benchmarks I've seen it reduced by over 40% on my ICL laptop), and therefore also allows TDP-bound applications to obtain a reciprocal improvement in throughput.
I'm not particularly fond of the global PM QoS interfaces TBH; they seem like an excessively blunt hammer to me, so I can very much relate to the purpose of this series. However, the finer-grained solution I've implemented has seen some push-back from i915 and CPUFREQ developers due to its complexity, since it relies on task scheduler changes in order to track I/O bottlenecks per-process (roughly as suggested by Peter Zijlstra during our previous discussions), pretty much in the spirit of PELT but applied to I/O utilization.
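For reference, and to make the comparison below concrete: the CPU latency QoS interface your series converges on boils down to usage roughly like the following. This is a minimal sketch based on the cover letter and the cpu_latency_qos_* naming; the gfx_* helper names are made up for illustration, and values are in microseconds:

    #include <linux/pm_qos.h>

    static struct pm_qos_request gfx_req;  /* driver-owned request */

    static void gfx_begin_low_latency(void)
    {
            /* Bound worst-case CPU wakeup latency to 20 usec. */
            cpu_latency_qos_add_request(&gfx_req, 20);
    }

    static void gfx_end_low_latency(void)
    {
            /* Drop the constraint once latency-sensitive work is done. */
            cpu_latency_qos_remove_request(&gfx_req);
    }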
With that in mind, I was hoping we could take advantage of PM QoS as a temporary solution [2] by introducing a global PM QoS class similar to PM_QOS_CPU_DMA_LATENCY but with roughly converse semantics: it would allow device drivers to report a *lower* bound on CPU latency, below which PM shall not bother to reduce latency if doing so would have negative consequences for the energy efficiency and/or parallelism of the system. Of course, one would expect the current PM_QOS_CPU_DMA_LATENCY upper bound to take precedence over the new lower bound wherever the two are in conflict.
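To make the proposed semantics concrete, here is a hypothetical sketch of the converse interface, mirroring the calls above. None of the *_floor_* names below exist today; they are purely illustrative:

    static struct pm_qos_request gfx_floor_req;  /* hypothetical */

    /*
     * Report that reducing CPU wakeup latency below 100 usec buys us
     * nothing and hurts energy efficiency while we're I/O-bound.  A
     * conflicting PM_QOS_CPU_DMA_LATENCY upper bound below 100 usec
     * would still take precedence over this floor.
     */
    cpu_latency_qos_add_floor_request(&gfx_floor_req, 100);

    /* ... I/O-bound rendering workload ... */

    cpu_latency_qos_remove_floor_request(&gfx_floor_req);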
I can think of several alternatives that don't involve temporarily holding off your clean-up, but none of them sounds particularly exciting:
1/ Use an interface specific to CPUFREQ, pretty much like the one introduced in my original submission [1].
2/ Use per-CPU PM QoS, which AFAICT would require the graphics driver either to place a request on every CPU of the system (which would give a frequent operation O(N) complexity in the number of CPUs; see the sketch after this list), or to play a cat-and-mouse game with the task scheduler.
3/ Add a new global PM QoS mechanism that roughly duplicates the cpu_latency_qos_* interfaces introduced in this series, and drop your change making them available only when CPU_IDLE is enabled.
4/ Go straight to a scheduling-based approach, which is likely to greatly increase the review effort required to upstream this feature. (Peter might disagree, though?)
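For completeness, a rough sketch of what alternative 2/ could look like if built on the existing per-device resume latency QoS placed on each CPU device. gfx_qos_init() is a made-up helper, and the point of the sketch is the O(N) loop that would have to run on every update:

    #include <linux/cpu.h>
    #include <linux/pm_qos.h>
    #include <linux/slab.h>

    static struct dev_pm_qos_request *cpu_reqs;  /* one request per possible CPU */

    static int gfx_qos_init(s32 usec)
    {
            int cpu;

            cpu_reqs = kcalloc(nr_cpu_ids, sizeof(*cpu_reqs), GFP_KERNEL);
            if (!cpu_reqs)
                    return -ENOMEM;

            /* O(N) in the number of CPUs, repeated on every update. */
            for_each_possible_cpu(cpu)
                    dev_pm_qos_add_request(get_cpu_device(cpu), &cpu_reqs[cpu],
                                           DEV_PM_QOS_RESUME_LATENCY, usec);
            return 0;
    }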
Regards,
Francisco.
[1] https://lore.kernel.org/linux-pm/20180328063845.4884-1-currojerez@riseup.net/
[2] I've written the code to do this already, but I wasn't able to test and debug it extensively until this week, due to instability of i915 on recent v5.5 kernels that prevented any benchmark run from surviving more than a few hours on my ICL system. Hopefully the required i915 fixes will flow back to the stable branches soon enough.