From: David Dai
Subject: Re: [RFC PATCH 1/6] sched/fair: Add util_guest for tasks
On Wed, Apr 5, 2023 at 1:14 AM Peter Zijlstra <peterz@infradead.org> wrote:
>

Hi Peter,

I appreciate your time.

> On Thu, Mar 30, 2023 at 03:43:36PM -0700, David Dai wrote:
> > @@ -499,6 +509,7 @@ struct sched_avg {
> > unsigned long load_avg;
> > unsigned long runnable_avg;
> > unsigned long util_avg;
> > + unsigned long util_guest;
> > struct util_est util_est;
> > } ____cacheline_aligned;
> >
>
> Yeah, no... you'll have to make room first.
>

I’m not sure what you mean. Do you mean making room by shrinking
another member of the same struct? If so, which member would be a good
candidate to shrink? Or do you mean something else entirely?

Thanks,
David

> struct sched_avg {
>         /* typedef u64 -> __u64 */ long long unsigned int last_update_time;  /*     0     8 */
>         /* typedef u64 -> __u64 */ long long unsigned int load_sum;          /*     8     8 */
>         /* typedef u64 -> __u64 */ long long unsigned int runnable_sum;      /*    16     8 */
>         /* typedef u32 -> __u32 */ unsigned int           util_sum;          /*    24     4 */
>         /* typedef u32 -> __u32 */ unsigned int           period_contrib;    /*    28     4 */
>         long unsigned int          load_avg;                                 /*    32     8 */
>         long unsigned int          runnable_avg;                             /*    40     8 */
>         long unsigned int          util_avg;                                 /*    48     8 */
>         struct util_est {
>                 unsigned int       enqueued;                                 /*    56     4 */
>                 unsigned int       ewma;                                     /*    60     4 */
>         } __attribute__((__aligned__(8))) util_est __attribute__((__aligned__(8))); /* 56 8 */
>
>         /* size: 64, cachelines: 1, members: 9 */
>         /* forced alignments: 1 */
> } __attribute__((__aligned__(64)));
>
>
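A note on the dump above: it is pahole output, and with a vmlinux built with
debug info it can presumably be regenerated with something like:

    pahole -C sched_avg vmlinux

Peter's point is that struct sched_avg currently fills exactly one 64-byte
cacheline and is declared ____cacheline_aligned, so appending an 8-byte
util_guest does not merely add 8 bytes: the alignment attribute pads the
struct out to two full cachelines. A minimal standalone sketch of that
arithmetic (not from the patch; the member list is copied from the dump
above, and an LP64 target with 64-byte cachelines is assumed):

    /* cacheline_demo.c: appending util_guest doubles struct sched_avg's
     * footprint. Assumes unsigned long is 8 bytes and 64-byte cachelines. */
    #include <stdio.h>

    /* Mirrors the pahole dump above; aligned(64) stands in for
     * ____cacheline_aligned. */
    struct sched_avg_current {
            unsigned long long last_update_time;
            unsigned long long load_sum;
            unsigned long long runnable_sum;
            unsigned int util_sum;
            unsigned int period_contrib;
            unsigned long load_avg;
            unsigned long runnable_avg;
            unsigned long util_avg;
            struct { unsigned int enqueued, ewma; } __attribute__((aligned(8))) util_est;
    } __attribute__((aligned(64)));

    /* Same layout with the proposed field appended before util_est, as in
     * the patch hunk quoted above. */
    struct sched_avg_with_guest {
            unsigned long long last_update_time;
            unsigned long long load_sum;
            unsigned long long runnable_sum;
            unsigned int util_sum;
            unsigned int period_contrib;
            unsigned long load_avg;
            unsigned long runnable_avg;
            unsigned long util_avg;
            unsigned long util_guest;
            struct { unsigned int enqueued, ewma; } __attribute__((aligned(8))) util_est;
    } __attribute__((aligned(64)));

    int main(void)
    {
            /* 64 bytes: one cacheline, fully packed, no holes. */
            printf("current:    %zu bytes\n", sizeof(struct sched_avg_current));
            /* 128 bytes: 72 bytes of members padded up to two cachelines. */
            printf("with guest: %zu bytes\n", sizeof(struct sched_avg_with_guest));
            return 0;
    }

Since there are no holes to reclaim, "making room" means shrinking or
repurposing an existing member rather than simply inserting a new one.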
