Date: Wed, 02 Dec 2015 13:44:50 -0500
From: Waiman Long <>
Subject: Re: [RFC PATCH 3/3] sched/fair: Use different cachelines for readers and writers of load_avg
On 12/01/2015 03:47 AM, Peter Zijlstra wrote:
> On Mon, Nov 30, 2015 at 11:00:35PM -0500, Waiman Long wrote:
>
>> I think the current kernel use power-of-2 kmemcaches to satisfy kalloc()
>> requests except when the size is less than or equal to 192 where there are
>> some non-power-of-2 kmemcaches available. Given that the task_group
>> structure is large enough with FAIR_GROUP_SCHED enabled, we shouldn't hit
>> the case that the allocated buffer is not cacheline aligned.
>
> Using out-of-object storage is allowed (none of the existing sl*b
> allocators do so iirc).
>
> That is, its perfectly valid for a sl*b allocator for 64 byte objects to
> allocate 72 bytes for each object and use the 'spare' 8 bytes for object
> tracking or whatnot.
>
> That would respect the minimum alignment guarantee of 8 bytes but not
> provide the 'expected' object size alignment you're assuming.
>
> Also, we have the proper interfaces to request the explicit alignment
> for a reason. So if you need the alignment for correctness, use those.
Thanks for the tip. I have just sent out an updated patch set which creates a cacheline-aligned kmem cache for the task group structure. That should work under all kernel config settings.
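For reference, a minimal sketch of what requesting explicit alignment from the slab allocator looks like; the cache name, init function, and use of L1_CACHE_BYTES here are illustrative assumptions, not lifted from the actual patch set:

	#include <linux/slab.h>
	#include <linux/cache.h>

	static struct kmem_cache *task_group_cache __read_mostly;

	static void __init task_group_cache_init(void)
	{
		/*
		 * Ask the slab allocator for cacheline alignment explicitly
		 * rather than relying on power-of-2 size classes happening
		 * to be cacheline aligned.
		 */
		task_group_cache = kmem_cache_create("task_group",
						     sizeof(struct task_group),
						     L1_CACHE_BYTES,
						     0, NULL);
		BUG_ON(!task_group_cache);
	}

Allocation would then go through kmem_cache_zalloc(task_group_cache, GFP_KERNEL) instead of kzalloc(), so the alignment guarantee no longer depends on allocator internals.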
Cheers, Longman