 
    Subject: Re: [RFC/PATCH 0/4] CPUSET driven CPU isolation

    On Thu, 2008-02-28 at 02:12 -0800, David Rientjes wrote:
    > On Thu, 28 Feb 2008, David Rientjes wrote:
    >
    > > Should the kernel refuse to move some threads, such as the migration
    > > or watchdog kthreads, out of the root cpuset where the cpus can be
    > > adjusted to disallow access to the cpu to which they are bound? This is
    > > a quick way to cause a crash or soft lockup.

    Indeed, there is a hole in my cpus_match_system() logic: when the
    system set is reduced to a single cpu, tasks bound to that cpu also
    match it.
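
    To make the hole concrete, here is a small stand-alone model (plain
    userspace C, not the kernel code; it assumes cpus_match_system() simply
    compares a task's cpus_allowed with the non-isolated "system" set, and
    uses an unsigned long in place of cpumask_t):

    #include <stdio.h>
    #include <stdbool.h>

    /* Assumed semantics: "matches the system set" means the task may be
     * migrated along with it. */
    static bool cpus_match_system(unsigned long cpus_allowed,
                                  unsigned long system_set)
    {
            return cpus_allowed == system_set;
    }

    int main(void)
    {
            unsigned long migration_0 = 1UL << 0; /* kthread bound to CPU 0 */
            unsigned long system_set = 0xfUL;     /* CPUs 0-3 non-isolated */

            /* Four system cpus: the bound kthread does not match -> fine. */
            printf("match=%d\n", cpus_match_system(migration_0, system_set));

            /* Isolate cpus 1-3: the system set collapses to CPU 0 and the
             * bound kthread now matches it -- the hole described above. */
            system_set = 1UL << 0;
            printf("match=%d\n", cpus_match_system(migration_0, system_set));

            return 0;
    }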

    I had wanted to avoid adding PF_ flags (as I remember we're running
    short on them), but I think you're right.

    Thanks!

    > Something like this?
    > ---
    >  include/linux/sched.h |    1 +
    >  kernel/cpuset.c       |    5 ++++-
    >  kernel/kthread.c      |    1 +
    >  kernel/sched.c        |    6 ++++++
    >  4 files changed, 12 insertions(+), 1 deletions(-)
    >
    > diff --git a/include/linux/sched.h b/include/linux/sched.h
    > --- a/include/linux/sched.h
    > +++ b/include/linux/sched.h
    > @@ -1464,6 +1464,7 @@ static inline void put_task_struct(struct task_struct *t)
    >  #define PF_SWAPWRITE	0x00800000	/* Allowed to write to swap */
    >  #define PF_SPREAD_PAGE	0x01000000	/* Spread page cache over cpuset */
    >  #define PF_SPREAD_SLAB	0x02000000	/* Spread some slab caches over cpuset */
    > +#define PF_CPU_BOUND	0x04000000	/* Kthread bound to specific cpu */
    >  #define PF_MEMPOLICY	0x10000000	/* Non-default NUMA mempolicy */
    >  #define PF_MUTEX_TESTER	0x20000000	/* Thread belongs to the rt mutex tester */
    >  #define PF_FREEZER_SKIP	0x40000000	/* Freezer should not count it as freezeable */
    > diff --git a/kernel/cpuset.c b/kernel/cpuset.c
    > --- a/kernel/cpuset.c
    > +++ b/kernel/cpuset.c
    > @@ -1175,11 +1175,14 @@ static void cpuset_attach(struct cgroup_subsys *ss,
    >  	struct mm_struct *mm;
    >  	struct cpuset *cs = cgroup_cs(cont);
    >  	struct cpuset *oldcs = cgroup_cs(oldcont);
    > +	int ret;
    >
    >  	mutex_lock(&callback_mutex);
    >  	guarantee_online_cpus(cs, &cpus);
    > -	set_cpus_allowed(tsk, cpus);
    > +	ret = set_cpus_allowed(tsk, cpus);
    >  	mutex_unlock(&callback_mutex);
    > +	if (ret < 0)
    > +		return;
    >
    >  	from = oldcs->mems_allowed;
    >  	to = cs->mems_allowed;
    > diff --git a/kernel/kthread.c b/kernel/kthread.c
    > --- a/kernel/kthread.c
    > +++ b/kernel/kthread.c
    > @@ -180,6 +180,7 @@ void kthread_bind(struct task_struct *k, unsigned int cpu)
    >  	wait_task_inactive(k);
    >  	set_task_cpu(k, cpu);
    >  	k->cpus_allowed = cpumask_of_cpu(cpu);
    > +	k->flags |= PF_CPU_BOUND;
    >  }
    >  EXPORT_SYMBOL(kthread_bind);
    >
    > diff --git a/kernel/sched.c b/kernel/sched.c
    > --- a/kernel/sched.c
    > +++ b/kernel/sched.c
    > @@ -5345,6 +5345,12 @@ int set_cpus_allowed(struct task_struct *p, cpumask_t new_mask)
    >  		goto out;
    >  	}
    >
    > +	if (unlikely((p->flags & PF_CPU_BOUND) && p != current &&
    > +		     !cpus_equal(p->cpus_allowed, new_mask))) {
    > +		ret = -EINVAL;
    > +		goto out;
    > +	}
    > +
    >  	if (p->sched_class->set_cpus_allowed)
    >  		p->sched_class->set_cpus_allowed(p, &new_mask);
    >  	else {
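
    For what it's worth, the intended effect of the PF_CPU_BOUND check can
    be modelled in plain userspace C as well (a sketch only: an unsigned
    long stands in for cpumask_t, set_cpus_allowed() is a simplified
    stand-in for the kernel function, and the "p != current" exception from
    the patch is omitted):

    #include <stdio.h>
    #include <errno.h>

    #define PF_CPU_BOUND 0x04000000 /* as in the patch above */

    struct task {
            unsigned int flags;
            unsigned long cpus_allowed;
    };

    /* Refuse to change the mask of a bound kthread unless the new mask is
     * identical to the existing one. */
    static int set_cpus_allowed(struct task *p, unsigned long new_mask)
    {
            if ((p->flags & PF_CPU_BOUND) && p->cpus_allowed != new_mask)
                    return -EINVAL;
            p->cpus_allowed = new_mask;
            return 0;
    }

    int main(void)
    {
            struct task migration_0 = { PF_CPU_BOUND, 1UL << 0 };

            /* cpuset_attach() trying to widen the mask now fails ... */
            printf("widen: %d\n", set_cpus_allowed(&migration_0, 0xfUL));

            /* ... while re-applying the identical mask still succeeds. */
            printf("same:  %d\n", set_cpus_allowed(&migration_0, 1UL << 0));

            return 0;
    }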


