Date: 2018-03-06
From: Lina Iyer <ilina@codeaurora.org>
Subject: Re: [PATCH v3 07/10] drivers: qcom: rpmh: cache sleep/wake state requests

On Mon, Mar 05 2018 at 13:45 -0700, Evan Green wrote:
>Hi Lina,
>
>On Fri, Mar 2, 2018 at 8:43 AM, Lina Iyer <ilina@codeaurora.org> wrote:
>> Active state requests are sent immediately to the mailbox controller,
>> while sleep and wake state requests are cached in this driver to avoid
>> taxing the mailbox controller repeatedly. The cached values will be sent
>> to the controller when rpmh_flush() is called.
>>
>> Generally, flushing is a system PM activity and may be called from the
>> system PM drivers when the system is entering suspend or deeper sleep
>> modes during cpuidle.
>>
>> Also allow invalidating the cached requests, so they may be re-populated
>> again.
>>
>> Signed-off-by: Lina Iyer <ilina@codeaurora.org>
>> ---
>>
>> Changes in v3:
>> - Remove locking for flush function
>> - Improve comments
>> ---
>> drivers/soc/qcom/rpmh.c | 208 +++++++++++++++++++++++++++++++++++++++++++++++-
>> include/soc/qcom/rpmh.h | 10 +++
>> 2 files changed, 217 insertions(+), 1 deletion(-)
>>
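To put the above in context, the intended flow looks roughly like the
sketch below. The two hooks are made up for illustration; rpmh_flush()
and rpmh_invalidate() are the calls this series adds, and both are
assumed to take the same rpmh_client handle as the rest of the API:

/* Illustrative sketch only, not part of the patch. */

/*
 * System PM / cpuidle path: called once when entering suspend or a
 * deep idle state. This is the point where the sleep/wake requests
 * cached by this driver are actually written out to the TCS blocks.
 */
static int example_pm_enter(struct rpmh_client *rc)	/* hypothetical hook */
{
	return rpmh_flush(rc);
}

/*
 * Client path: a driver that wants to replace its cached sleep/wake
 * votes drops the old set first, then re-populates it before the next
 * flush.
 */
static void example_rebuild_votes(struct rpmh_client *rc)	/* hypothetical hook */
{
	rpmh_invalidate(rc);
	/* ... send fresh sleep/wake requests here ... */
}
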
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index d95ea3fa8b67..8a04009075b8 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>[...]
>> +
>> +/**
>> + * rpmh_invalidate: Invalidate all sleep and active sets.
>> + *
>> + * @rc: The RPMh handle got from rpmh_get_dev_channel
>> + *
>> + * Invalidate the sleep and active values in the TCS blocks.
>> + */
>> +int rpmh_invalidate(struct rpmh_client *rc)
>> +{
>> + struct rpmh_ctrlr *rpm = rc->ctrlr;
>> + int ret;
>> + unsigned long flags;
>> +
>> + if (IS_ERR_OR_NULL(rc))
>> + return -EINVAL;
>> +
>> + spin_lock_irqsave(&rpm->lock, flags);
>> + rpm->dirty = true;
>> + spin_unlock_irqrestore(&rpm->lock, flags);
>
>Thanks for removing the locking from the flush path. I was hoping to
>see the locking removed around this statement as well. The way I
>understand it, all of the racy bits are attempting to set dirty to
>true, so you don't need a lock to protect multiple threads from
>setting the same value. The only time dirty is read or cleared is in
>the single-threaded PM path, so there are no potentially dangerous
>interactions.
>
Fair point. Will take care of it.
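Something along these lines, then (sketch only, on top of this patch;
everything after the dirty store stays as posted):

/*
 * Suggested change: every concurrent caller only ever stores 'true',
 * and ->dirty is read and cleared solely from the single-threaded
 * system PM path, so the store no longer needs rpm->lock.
 */
int rpmh_invalidate(struct rpmh_client *rc)
{
	struct rpmh_ctrlr *rpm;
	int ret = 0;

	if (IS_ERR_OR_NULL(rc))
		return -EINVAL;

	rpm = rc->ctrlr;
	rpm->dirty = true;

	/* ... remainder of the function as in the posted patch ... */
	return ret;
}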

>If no one has any other comments on the series, then I don't need to
>hold everything up based on this one tweak alone. But if you end up
>spinning it again for other reasons, consider making this change as
>well.
>
Thanks,
Lina
