Subject: Re: Considerations on sched APIs under RT patch
On Tue, 2010-04-20 at 23:56 +0200, Primiano Tucci wrote:
> Hi Peter,

> long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
> {
>         cpumask_var_t cpus_allowed, new_mask;
>         struct task_struct *p;
>         int retval;
>
>         get_online_cpus();
> -->     read_lock(&tasklist_lock);
>
>
> My question is: suppose that tasklist_lock is held by a writer.
> What happens to the calling thread? It cannot take the lock, so it
> yields to the next ready task (which in my scenario has a lower
> priority).
> In my view, this is not a priority inversion problem. The problem is
> that sched_setaffinity unexpectedly blocks ("suspensive") and yields
> to the lower-priority thread.

read_locks are converted into "special" rt_mutexes. The only thing
special about them is that the owner may grab the same read lock more
than once (recursively).
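
Roughly speaking, a nested acquisition by the same owner looks like
this (an illustrative fragment only, not a quote from the -rt patch):

	read_lock(&tasklist_lock);
	/* ... call into code that also takes the lock for read ... */
	read_lock(&tasklist_lock);   /* same owner: the nested acquire is
	                                allowed instead of deadlocking on
	                                the underlying rt_mutex */
	read_unlock(&tasklist_lock);
	read_unlock(&tasklist_lock);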

If a lower priority process currently holds the tasklist_lock for write
and a high priority process tries to take it for read (or write, for
that matter), the high priority process will block on the lower
priority one. But that lower priority process will inherit the priority
of the higher priority process (priority inheritance) and will run at
that priority until it releases the lock. Then it drops back to its low
priority, and the higher priority process preempts it and acquires the
lock for read.

The above is what is expected.
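
A rough userspace analogue of that boosting behaviour, using a POSIX
mutex with the PTHREAD_PRIO_INHERIT protocol (just a sketch of the
concept, not the -rt kernel code; the priorities, timing and file name
below are made up, and SCHED_FIFO needs root or CAP_SYS_NICE):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static pthread_mutex_t lock;

static void *low_prio_worker(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	/*
	 * While a higher priority SCHED_FIFO thread is blocked on "lock",
	 * this thread is boosted to that priority (priority inheritance),
	 * so it can finish its work and release the lock.
	 */
	sleep(1);			/* pretend to do work under the lock */
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_mutexattr_t attr;
	pthread_t low;
	struct sched_param sp = { .sched_priority = 50 };
	int err;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
	pthread_mutex_init(&lock, &attr);

	pthread_create(&low, NULL, low_prio_worker, NULL);
	usleep(100 * 1000);		/* crude: let the worker grab the lock first */

	/* Become a high priority RT thread and contend for the lock. */
	err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
	if (err)
		fprintf(stderr, "pthread_setschedparam: %s\n", strerror(err));

	pthread_mutex_lock(&lock);	/* worker is boosted until it unlocks */
	pthread_mutex_unlock(&lock);

	pthread_join(low, NULL);
	pthread_mutex_destroy(&lock);
	return 0;
}

Build with something like "gcc -pthread pi_demo.c". While the main
thread is blocked in pthread_mutex_lock(), the kernel boosts the worker
to priority 50 so it can release the lock, which is the same kind of
boosting the -rt rwlock conversion gives a writer holding
tasklist_lock.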

-- Steve



