    Subject: [PATCH 43/53] sched/headers: Move the task_lock()/unlock() APIs to <linux/sched/task.h>
    The task_lock()/task_unlock() APIs are not related to core scheduling;
    they are task lifetime APIs, i.e. they belong in <linux/sched/task.h>.

    Move them.
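    For illustration only (not part of the patch): a minimal, hypothetical caller
    showing how code would pick up the lock helpers from the split-out header once
    this lands. The helper name is made up; the pattern follows the comment block
    being moved here (->comm is one of the fields task_lock() protects):

    #include <linux/sched.h>
    #include <linux/sched/task.h>
    #include <linux/string.h>

    /* Hypothetical helper: snapshot a task's comm under task_lock(). */
    static void sample_copy_comm(struct task_struct *p, char buf[TASK_COMM_LEN])
    {
    	task_lock(p);
    	strncpy(buf, p->comm, TASK_COMM_LEN);
    	task_unlock(p);
    }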

    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    ---
    include/linux/sched.h | 20 --------------------
    include/linux/sched/task.h | 20 ++++++++++++++++++++
    2 files changed, 20 insertions(+), 20 deletions(-)

    diff --git a/include/linux/sched.h b/include/linux/sched.h
    index 5478c419b2d9..3e149b590e96 100644
    --- a/include/linux/sched.h
    +++ b/include/linux/sched.h
    @@ -1522,26 +1522,6 @@ static inline unsigned long wait_task_inactive(struct task_struct *p,
    }
    #endif

    -/*
    - * Protects ->fs, ->files, ->mm, ->group_info, ->comm, keyring
    - * subscriptions and synchronises with wait4(). Also used in procfs. Also
    - * pins the final release of task.io_context. Also protects ->cpuset and
    - * ->cgroup.subsys[]. And ->vfork_done.
    - *
    - * Nests both inside and outside of read_lock(&tasklist_lock).
    - * It must not be nested with write_lock_irq(&tasklist_lock),
    - * neither inside nor outside.
    - */
    -static inline void task_lock(struct task_struct *p)
    -{
    -	spin_lock(&p->alloc_lock);
    -}
    -
    -static inline void task_unlock(struct task_struct *p)
    -{
    -	spin_unlock(&p->alloc_lock);
    -}
    -
    /* set thread flags in other task's structures
    * - see asm/thread_info.h for TIF_xxxx flags available
    */
    diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
    index 1be049a18d1b..2be9fde588a7 100644
    --- a/include/linux/sched/task.h
    +++ b/include/linux/sched/task.h
    @@ -91,4 +91,24 @@ static inline struct vm_struct *task_stack_vm_area(const struct task_struct *t)
    }
    #endif

    +/*
    + * Protects ->fs, ->files, ->mm, ->group_info, ->comm, keyring
    + * subscriptions and synchronises with wait4(). Also used in procfs. Also
    + * pins the final release of task.io_context. Also protects ->cpuset and
    + * ->cgroup.subsys[]. And ->vfork_done.
    + *
    + * Nests both inside and outside of read_lock(&tasklist_lock).
    + * It must not be nested with write_lock_irq(&tasklist_lock),
    + * neither inside nor outside.
    + */
    +static inline void task_lock(struct task_struct *p)
    +{
    +	spin_lock(&p->alloc_lock);
    +}
    +
    +static inline void task_unlock(struct task_struct *p)
    +{
    +	spin_unlock(&p->alloc_lock);
    +}
    +
    #endif /* _LINUX_SCHED_TASK_H */
    --
    2.7.4