    Subject: Re: [PATCH] sched: missing locking in sched_domains code
    From: Heiko Carstens <heiko.carstens@de.ibm.com>
    Date: Mon, 28 Apr 2008
    On Mon, Apr 28, 2008 at 10:32:22AM +0200, Ingo Molnar wrote:
    >
    > * Heiko Carstens <heiko.carstens@de.ibm.com> wrote:
    >
    > > /* doms_cur_mutex serializes access to doms_cur[] array */
    > > static DEFINE_MUTEX(doms_cur_mutex);
    > >
    > > +static inline void lock_doms_cur(void)
    > > +{
    > > +	mutex_lock(&doms_cur_mutex);
    > > +}
    >
    > > @@ -7813,8 +7811,10 @@ int arch_reinit_sched_domains(void)
    > >  	int err;
    > >
    > >  	get_online_cpus();
    > > +	lock_doms_cur();
    >
    > thanks, that looks a lot cleaner already. May I ask for another
    > thing, if you are hacking on this anyway? Please get rid of the
    > lock_doms_cur() complication now that it's not conditional - an
    > open-coded mutex_lock(&doms_cur_mutex) looks more readable - it gives
    > a clear idea about what's happening. Also, please rename
    > doms_cur_mutex to something less tongue-twisting - such as
    > sched_domains_mutex. Hm?
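
    Purely as an illustration of that review point (not code from this
    thread; pthreads stand in for kernel mutexes, and every name below is
    invented): once the wrappers are unconditional they only add
    indirection, while the open-coded call names the lock right at the
    call site.

    #include <pthread.h>

    static pthread_mutex_t sched_domains_mutex = PTHREAD_MUTEX_INITIALIZER;

    /* Wrapper style: the reader must look up the helper to learn which
       lock protects the section. */
    static inline void lock_doms_cur(void)
    {
    	pthread_mutex_lock(&sched_domains_mutex);
    }

    static inline void unlock_doms_cur(void)
    {
    	pthread_mutex_unlock(&sched_domains_mutex);
    }

    static void rebuild_wrapped(void)
    {
    	lock_doms_cur();
    	/* ... rebuild the scheduler domains ... */
    	unlock_doms_cur();
    }

    /* Open-coded style: the lock is visible right where the critical
       section begins and ends. */
    static void rebuild_open_coded(void)
    {
    	pthread_mutex_lock(&sched_domains_mutex);
    	/* ... rebuild the scheduler domains ... */
    	pthread_mutex_unlock(&sched_domains_mutex);
    }

    int main(void)
    {
    	rebuild_wrapped();
    	rebuild_open_coded();
    	return 0;
    }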

    Your wish is my command:

    Subject: [PATCH] sched: fix locking in arch_reinit_sched_domains

    From: Heiko Carstens <heiko.carstens@de.ibm.com>

    Concurrent calls to detach_destroy_domains and arch_init_sched_domains
    were prevented by the old scheduler subsystem cpu hotplug mutex. When
    this was converted to get_online_cpus(), the locking was broken: since
    get_online_cpus() only takes the read side of the hotplug lock, several
    processes can now concurrently enter the critical sections that were
    protected by the old lock.

    So use the already present doms_cur_mutex (renamed to
    sched_domains_mutex below) to protect these sections again. (A
    userspace sketch of the race follows the patch.)

    Cc: Gautham R Shenoy <ego@in.ibm.com>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Paul Jackson <pj@sgi.com>
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    ---
    kernel/sched.c |   22 ++++++----------------
    1 file changed, 6 insertions(+), 16 deletions(-)

    Index: linux-2.6/kernel/sched.c
    ===================================================================
    --- linux-2.6.orig/kernel/sched.c
    +++ linux-2.6/kernel/sched.c
    @@ -309,7 +309,7 @@ static DEFINE_PER_CPU(struct rt_rq, init
     static DEFINE_SPINLOCK(task_group_lock);
     
    -/* doms_cur_mutex serializes access to doms_cur[] array */
    -static DEFINE_MUTEX(doms_cur_mutex);
    +/* sched_domains_mutex serializes access to doms_cur[] array */
    +static DEFINE_MUTEX(sched_domains_mutex);
     
     #ifdef CONFIG_FAIR_GROUP_SCHED
     #ifdef CONFIG_USER_SCHED
    @@ -358,21 +358,9 @@ static inline void set_task_rq(struct ta
     #endif
     }
     
    -static inline void lock_doms_cur(void)
    -{
    -	mutex_lock(&doms_cur_mutex);
    -}
    -
    -static inline void unlock_doms_cur(void)
    -{
    -	mutex_unlock(&doms_cur_mutex);
    -}
    -
     #else
     
     static inline void set_task_rq(struct task_struct *p, unsigned int cpu) { }
    -static inline void lock_doms_cur(void) { }
    -static inline void unlock_doms_cur(void) { }
     
     #endif /* CONFIG_GROUP_SCHED */
     
    @@ -7755,7 +7743,7 @@ void partition_sched_domains(int ndoms_n
     {
     	int i, j;
     
    -	lock_doms_cur();
    +	mutex_lock(&sched_domains_mutex);
     
     	/* always unregister in case we don't destroy any domains */
     	unregister_sched_domain_sysctl();
    @@ -7804,7 +7792,7 @@ match2:
     
     	register_sched_domain_sysctl();
     
    -	unlock_doms_cur();
    +	mutex_unlock(&sched_domains_mutex);
     }
     
     #if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
    @@ -7813,8 +7801,10 @@ int arch_reinit_sched_domains(void)
     	int err;
     
     	get_online_cpus();
    +	mutex_lock(&sched_domains_mutex);
     	detach_destroy_domains(&cpu_online_map);
     	err = arch_init_sched_domains(&cpu_online_map);
    +	mutex_unlock(&sched_domains_mutex);
     	put_online_cpus();
     
     	return err;
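
    As a minimal userspace sketch of the race described in the changelog
    (pthreads instead of kernel primitives; the rwlock is only a model of
    the cpu hotplug refcount, and every name below is invented for the
    illustration): get_online_cpus() acts like the read side of a
    reader/writer lock, so it never excludes another caller - the
    dedicated mutex does.

    #include <pthread.h>
    #include <stdio.h>

    /* Model of the cpu hotplug lock: the "get" side is shared, so any
       number of threads may hold it at the same time. */
    static pthread_rwlock_t cpu_hotplug_lock = PTHREAD_RWLOCK_INITIALIZER;
    static pthread_mutex_t sched_domains_mutex = PTHREAD_MUTEX_INITIALIZER;
    static int domains_attached;	/* stands in for the doms_cur[] state */

    static void get_online_cpus_model(void)
    {
    	pthread_rwlock_rdlock(&cpu_hotplug_lock);
    }

    static void put_online_cpus_model(void)
    {
    	pthread_rwlock_unlock(&cpu_hotplug_lock);
    }

    /* Models arch_reinit_sched_domains() after the fix: without
       sched_domains_mutex, two threads could interleave the destroy and
       init steps, because both may hold the hotplug lock concurrently. */
    static void *reinit_domains(void *unused)
    {
    	(void)unused;
    	get_online_cpus_model();	/* shared: no mutual exclusion */
    	pthread_mutex_lock(&sched_domains_mutex);
    	domains_attached = 0;		/* detach_destroy_domains() */
    	domains_attached = 1;		/* arch_init_sched_domains() */
    	pthread_mutex_unlock(&sched_domains_mutex);
    	put_online_cpus_model();
    	return NULL;
    }

    int main(void)
    {
    	pthread_t a, b;

    	pthread_create(&a, NULL, reinit_domains, NULL);
    	pthread_create(&b, NULL, reinit_domains, NULL);
    	pthread_join(a, NULL);
    	pthread_join(b, NULL);
    	printf("domains_attached=%d\n", domains_attached);
    	return 0;
    }

    Note the lock order the patch establishes: get_online_cpus() is taken
    before sched_domains_mutex and released after it, so the two locks
    always nest the same way.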
