Date:	Thu, 3 Dec 2015 19:17:14 +0100
From:	Peter Zijlstra <peterz@infradead.org>
Subject:	Re: [PATCH v2 2/3] sched/fair: Move hot load_avg into its own cacheline
On Thu, Dec 03, 2015 at 09:56:02AM -0800, bsegall@google.com wrote:
> Peter Zijlstra <peterz@infradead.org> writes:
> > @@ -7402,11 +7405,12 @@ void __init sched_init(void)
> >  #endif /* CONFIG_RT_GROUP_SCHED */
> >
> >  #ifdef CONFIG_CGROUP_SCHED
> > +	task_group_cache = KMEM_CACHE(task_group, 0);
> > +
> >  	list_add(&root_task_group.list, &task_groups);
> >  	INIT_LIST_HEAD(&root_task_group.children);
> >  	INIT_LIST_HEAD(&root_task_group.siblings);
> >  	autogroup_init(&init_task);
> > -
> >  #endif /* CONFIG_CGROUP_SCHED */
> >
> >  	for_each_possible_cpu(i) {
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -248,7 +248,12 @@ struct task_group {
> >  	unsigned long shares;
> >
> >  #ifdef CONFIG_SMP
> > -	atomic_long_t load_avg;
> > +	/*
> > +	 * load_avg can be heavily contended at clock tick time, so put
> > +	 * it in its own cacheline separated from the fields above which
> > +	 * will also be accessed at each tick.
> > +	 */
> > +	atomic_long_t load_avg ____cacheline_aligned;
> >  #endif
> >  #endif
>
> This loses the cacheline-alignment for task_group, is that ok?
I'm a bit dense (it's late); can you spell that out? Did you mean my killing SLAB_HWCACHE_ALIGN? That should not matter, because:
#define KMEM_CACHE(__struct, __flags) kmem_cache_create(#__struct,\
		sizeof(struct __struct), __alignof__(struct __struct),\
		(__flags), NULL)
picks up the alignment explicitly.
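Concretely, KMEM_CACHE(task_group, 0) in the hunk above should expand to roughly this (hand-expanded here for illustration):

	task_group_cache = kmem_cache_create("task_group",
					     sizeof(struct task_group),
					     __alignof__(struct task_group),
					     0, NULL);

And with load_avg marked ____cacheline_aligned, __alignof__(struct task_group) is already the cacheline size, so every object the cache hands out starts on a cacheline boundary.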
And struct task_group having one cacheline-aligned member means that the alignment of the composite object (the struct proper) must be an integer multiple of that member's alignment (the multiple typically being 1).
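A minimal userspace sketch of the same propagation, assuming a 64-byte line size in place of the kernel's ____cacheline_aligned:

	/* 64 bytes is an assumed L1 line size for this sketch. */
	#define CACHELINE 64

	struct demo {
		unsigned long shares;
		/* same effect as ____cacheline_aligned on load_avg */
		long load_avg __attribute__((aligned(CACHELINE)));
	};

	/*
	 * The struct inherits its strictest member alignment, so a slab
	 * created with __alignof__() of this struct hands out objects
	 * starting on a cacheline boundary.
	 */
	_Static_assert(_Alignof(struct demo) >= CACHELINE,
		       "struct alignment follows the aligned member");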