Subject: Re: [PATCH] pm_qos: Add system bus performance parameter
mark gross wrote:
> On Tue, Aug 31, 2010 at 03:38:04PM -0700, Saravana Kannan wrote:
>> mark gross wrote:
>>> On Mon, Aug 30, 2010 at 11:56:54AM -0700, Kevin Hilman wrote:
>>>>>> Any specific reason PM QoS doesn't support a "summation" "comparator"?
>>>>> PM_QoS could do a summation, but keep in mind it's pm_qos, not qos. pm_qos
>>>>> is a best effort thing to constrain power management throttling, not
>>>>> provide a true quality of service or deadline scheduling support.
>>>> For me (and I think Saravana too), this is still all about power, but
>>>> it's closely tied to QoS.
>> Kevin, thanks for explaining exactly what I had in mind. I was
>> caught up with other work and was glad to see this discussion move
>> forward.
>>
>> I pretty much agree with all of Kevin's statements, so here is a
>> preemptive "I agree" to all those paragraphs.
>>
>>> Now I get it! For throughput we need to do a sum. Ok, we need sum
>>> comparator/performance aggregators too!
>> Yay! Finally one of my pet peeves with PM QoS is being resolved(?).
>
> yes, we need to add a summation aggregator to the pm_qos logic and
> likely apply it to all the throughput pm_qos parameters. You were
> right about that point. (but I'm not budging on the unitless
> parameters)

Yeah, I gave up on the unitless parameter.
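
To make the max-vs-sum distinction concrete, here is a toy user-space
model of the aggregation choices (not the kernel code; PM_QOS_SUM is the
hypothetical addition, and aggregate() exists only for illustration),
using Kevin's two-clients-at-10Mb/s example:

#include <stdio.h>

/* Toy model of pm_qos aggregation; PM_QOS_SUM is the proposed addition. */
enum pm_qos_type { PM_QOS_MIN, PM_QOS_MAX, PM_QOS_SUM };

static int aggregate(enum pm_qos_type type, const int *req, int n)
{
	int i, v;

	if (n == 0)
		return 0;	/* default value when no requests are active */

	v = (type == PM_QOS_SUM) ? 0 : req[0];
	for (i = (type == PM_QOS_SUM) ? 0 : 1; i < n; i++) {
		switch (type) {
		case PM_QOS_MIN:
			if (req[i] < v)
				v = req[i];
			break;
		case PM_QOS_MAX:
			if (req[i] > v)
				v = req[i];
			break;
		case PM_QOS_SUM:
			v += req[i];
			break;
		}
	}
	return v;
}

int main(void)
{
	int reqs[] = { 10, 10 };	/* two clients asking for 10 Mb/s each */

	/* max-aggregation sees only 10 and keeps the lower power state;
	 * sum-aggregation sees 20, which is what throughput needs. */
	printf("max=%d sum=%d\n",
	       aggregate(PM_QOS_MAX, reqs, 2),
	       aggregate(PM_QOS_SUM, reqs, 2));
	return 0;
}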

>>> Do we also need to figure out the max throughput and warn if the pm_qos
>>> requests are going over? I suppose the network stack could register
>>> each device with a max bus bandwidth and pm_qos could warn on exceeding
>>> the hardware throughput.
>> In my opinion, here is where the "best effort" part, if any, comes
>> in. PM QoS could do its best to meet the QoS while keeping power
>> low, but if the h/w can't support it, we let it run at the highest
>> performance and call it "best effort".
>
> so we don't need to warn if the aggregate qos request exceeds the
> capability of the hardware then.

That should work for now. If we see a strong reason for notifying QoS
failures, we could add it in the future.
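
As a minimal sketch of that best-effort clamping (hw_max here is a
hypothetical per-parameter ceiling that a platform driver would have to
register; nothing like it exists in pm_qos today):

/* Best effort: if the summed requests exceed the hardware ceiling,
 * just run flat out instead of warning. */
static int best_effort_target(int summed_requests, int hw_max)
{
	return (summed_requests > hw_max) ? hw_max : summed_requests;
}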

>>>> This decision is both QoS and PM related. Without summation, the 'max'
>>>> request is still 10Mb/s so you would keep the lower power state. But
>>>> you also know that none of the clients will get their requested rate.
>>>>
>>>> There's some gray area here since there is a choice. Was the point
>>>> of the request to keep the NIC at the *power-state* needed for 10Mb/s (a
>>>> PM request), or was the request saying the app wanted at least 10Mb/s (a
>>>> QoS request)?
>>> I need to think on this a bit. You are correct, and it looks like we
>>> could use both types of interfaces.
>> I'm not sure having both interfaces would work. Should a single
>> client be allowed to keep the *power state* to what's needed for
>> 10Mb/s? What happens if another client votes with "I need at least
>> 20Mb/s"?
>
> I need to think some more on this, but it's looking like for throughput
> we may only want one type of interface because, as you say, it will be
> hard to reconcile one against the other.
>
>> I think the "limit max power-state to X" should be specific to
>> each PM QoS parameter (not its clients), similar to how
>> scaling_max_freq works for CPU freq and is not set by each client
>> (each process that uses the CPU).
>
> yes. However, it follows the units of the pm_qos parameter abstraction
> more than anything else.

Not sure I understand this line.

>> So, will we be adding a system bus throughput parameter? Is it going to
>> have a min comparator for now?
>
> a summation aggregator, with units of KB/s.

Ok. Who is going to add the summation "comparator"? I can write a patch
for the system bus throughput parameter.
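
For what it's worth, a rough shape for that parameter, modeled on the
existing definitions in kernel/pm_qos_params.c (hypothetical sketch only:
the PM_QOS_SUM type doesn't exist yet, and the exact struct layout differs
across the recent plist rework):

/* Hypothetical sketch, not a real patch. */
static BLOCKING_NOTIFIER_HEAD(system_bus_throughput_notifier);
static struct pm_qos_object system_bus_throughput_pm_qos = {
	.notifiers = &system_bus_throughput_notifier,
	.name = "system_bus_throughput",	/* units: KB/s */
	.default_value = 0,
	.type = PM_QOS_SUM,			/* the proposed sum aggregator */
};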

-Saravana

--
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.

