Subject: Re: Creating cyclecounter and lock member in timecounter structure [Was Re: [RFC 1/4] drm/i915/perf: Add support to correlate GPU timestamp with system time]


On 11/24/2017 12:29 AM, Thomas Gleixner wrote:
> On Thu, 23 Nov 2017, Sagar Arun Kamble wrote:
>> We need input on a possible optimization of the timecounter/cyclecounter
>> structures and their usage.
>> This mail is in response to the review of patch
>> https://patchwork.freedesktop.org/patch/188448/.
>>
>> Per Chris's observation below, about a dozen timecounter users in the
>> kernel define the following structures individually:
>>
>> spinlock_t lock;
>> struct cyclecounter cc;
>> struct timecounter tc;
>>
>> Can we move the lock and cc into tc? That would be more convenient, and
>> it would allow unifying the locking and overflow-watchdog handling across
>> all drivers.
> Looks like none of the timecounter usage sites has a real need to separate
> timecounter and cyclecounter.

Yes. Will share a patch for this change.
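
For reference, a rough sketch of what the merged structure could look like
(field layout taken from include/linux/timecounter.h; this is only a sketch,
not the actual patch):

struct timecounter {
	struct cyclecounter cc;	/* embedded, was: const struct cyclecounter *cc */
	u64 cycle_last;		/* most recently read cycle value */
	u64 nsec;		/* nanoseconds elapsed since initialization */
	u64 mask;		/* bit mask for maintaining the fractional ns */
	u64 frac;		/* accumulated fractional nanoseconds */
};

timecounter_init() would then take the cyclecounter parameters (or copy a
caller-provided cyclecounter) instead of storing a pointer.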

> The lock is a different question. The locking of the various drivers
> differs and I have no idea how you want to handle that. Just sticking the
> lock into the data structure, then not making use of it in the
> timecounter code and leaving it to the call sites, does not make sense.

Most of the locks are held around timecounter_read(). In some instances the
lock is held when the cyclecounter is updated standalone, or when it is
updated together with timecounter calls.
I was thinking that if we move the lock into the timecounter functions,
drivers would only have to do their own locking around operations on the
cyclecounter. But another problem I see is that the locking calls vary
across drivers: spin_lock_irqsave, spin_lock_bh, write_lock_irqsave (some
use rwlock_t). Should all of this locking then be left to the drivers?
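
To illustrate the common pattern (foo_clock and foo_clock_read_ns are
made-up names, not from any real driver), most users today look roughly
like this:

struct foo_clock {
	spinlock_t lock;	/* protects cc and tc */
	struct cyclecounter cc;
	struct timecounter tc;
};

static u64 foo_clock_read_ns(struct foo_clock *clk)
{
	unsigned long flags;
	u64 ns;

	spin_lock_irqsave(&clk->lock, flags);
	ns = timecounter_read(&clk->tc);
	spin_unlock_irqrestore(&clk->lock, flags);

	return ns;
}

while others use spin_lock_bh() or a rwlock_t in the same spot, which is
why picking a single lock type inside struct timecounter is awkward.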

> Thanks,
>
> tglx

Thanks
Sagar
