Subject: [patch 2/4] sched: Use rcu in sched_get/set_affinity()
tasklist_lock is held read-locked to protect the find_task_by_vpid()
call and to prevent the task from going away. sched_setaffinity()
acquires a task_struct reference and drops tasklist_lock right away
anyway. The access to the cpus_allowed mask is protected by rq->lock.

rcu_read_lock() provides the same protection here.
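
For illustration only (not part of the patch), the lookup pattern this
conversion relies on boils down to the following sketch; the helper name
get_task_ref_by_pid() is made up here, but rcu_read_lock(),
find_task_by_vpid() and get_task_struct() are the real interfaces:

	#include <linux/rcupdate.h>
	#include <linux/sched.h>
	#include <linux/pid.h>

	/*
	 * RCU read-side critical section keeps the task_struct from
	 * being freed while we take a reference; once get_task_struct()
	 * has run, the reference pins the task on its own and the RCU
	 * section can end.
	 */
	static struct task_struct *get_task_ref_by_pid(pid_t pid)
	{
		struct task_struct *p;

		rcu_read_lock();
		p = pid ? find_task_by_vpid(pid) : current;
		if (p)
			get_task_struct(p);	/* prevent p going away */
		rcu_read_unlock();

		return p;	/* caller must put_task_struct(p) when done */
	}
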

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
kernel/sched.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)

Index: linux-2.6-tip/kernel/sched.c
===================================================================
--- linux-2.6-tip.orig/kernel/sched.c
+++ linux-2.6-tip/kernel/sched.c
@@ -6535,22 +6535,18 @@ long sched_setaffinity(pid_t pid, const
int retval;

get_online_cpus();
- read_lock(&tasklist_lock);
+ rcu_read_lock();

p = find_process_by_pid(pid);
if (!p) {
- read_unlock(&tasklist_lock);
+ rcu_read_unlock();
put_online_cpus();
return -ESRCH;
}

- /*
- * It is not safe to call set_cpus_allowed with the
- * tasklist_lock held. We will bump the task_struct's
- * usage count and then drop tasklist_lock.
- */
+ /* Prevent p going away */
get_task_struct(p);
- read_unlock(&tasklist_lock);
+ rcu_read_unlock();

if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL)) {
retval = -ENOMEM;
@@ -6636,7 +6632,7 @@ long sched_getaffinity(pid_t pid, struct
int retval;

get_online_cpus();
- read_lock(&tasklist_lock);
+ rcu_read_lock();

retval = -ESRCH;
p = find_process_by_pid(pid);
@@ -6652,7 +6648,7 @@ long sched_getaffinity(pid_t pid, struct
task_rq_unlock(rq, &flags);

out_unlock:
- read_unlock(&tasklist_lock);
+ rcu_read_unlock();
put_online_cpus();

return retval;