Subject: Re: [PATCH v2 2/3] sched/fair: Move hot load_avg into its own cacheline
    On Wed, Dec 02, 2015 at 01:41:49PM -0500, Waiman Long wrote:
    > +/*
    > + * Make sure that the task_group structure is cacheline aligned when
    > + * fair group scheduling is enabled.
    > + */
    > +#ifdef CONFIG_FAIR_GROUP_SCHED
    > +static inline struct task_group *alloc_task_group(void)
    > +{
    > +	return kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
    > +}
    > +
    > +static inline void free_task_group(struct task_group *tg)
    > +{
    > +	kmem_cache_free(task_group_cache, tg);
    > +}
    > +#else /* CONFIG_FAIR_GROUP_SCHED */
    > +static inline struct task_group *alloc_task_group(void)
    > +{
    > +	return kzalloc(sizeof(struct task_group), GFP_KERNEL);
    > +}
    > +
    > +static inline void free_task_group(struct task_group *tg)
    > +{
    > +	kfree(tg);
    > +}
    > +#endif /* CONFIG_FAIR_GROUP_SCHED */

    I think we can simply always use the kmem_cache; both SLAB and SLUB
    merge slab caches with compatible size and flags, so the
    !CONFIG_FAIR_GROUP_SCHED case wouldn't pay anything extra for it.
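
    For illustration, a minimal sketch of that simplification: create the
    kmem_cache unconditionally and drop the #ifdef. The
    task_group_cache_init() helper and its placement are assumptions for
    the sketch, not something quoted from the patch; the cache creation
    would really live wherever task_group_cache is set up (e.g. early
    scheduler init, before the first allocation).

    static struct kmem_cache *task_group_cache __read_mostly;

    /* Hypothetical init hook, assumed to run once before the first
     * alloc_task_group() call. KMEM_CACHE() derives the cache's name,
     * size, and alignment from struct task_group itself, so any
     * ____cacheline_aligned attribute on the struct is honoured. */
    void __init task_group_cache_init(void)
    {
    	task_group_cache = KMEM_CACHE(task_group, 0);
    }

    static inline struct task_group *alloc_task_group(void)
    {
    	/* __GFP_ZERO keeps the kzalloc() semantics of the old path */
    	return kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
    }

    static inline void free_task_group(struct task_group *tg)
    {
    	kmem_cache_free(task_group_cache, tg);
    }

    Because the allocators merge compatible caches into the general
    kmalloc caches anyway, the dedicated cache costs nothing when fair
    group scheduling is disabled, and the two #ifdef branches collapse
    into one.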

