    Subject: Re: [PATCH 3/3] vhost: apply cpumask and cgroup to vhost workers

    On Tue, Jun 01, 2010 at 11:35:15AM +0200, Tejun Heo wrote:
    > Apply the cpumask and cgroup of the initializing task to the created
    > vhost worker.
    >
    > Based on Sridhar Samudrala's patch. Li Zefan spotted a bug in error
    > path (twice), fixed (twice).
    >
    > Signed-off-by: Tejun Heo <tj@kernel.org>
    > Cc: Michael S. Tsirkin <mst@redhat.com>
    > Cc: Sridhar Samudrala <samudrala.sridhar@gmail.com>
    > Cc: Li Zefan <lizf@cn.fujitsu.com>

    Something I wanted to figure out: what happens if the CPU mask limits
    us to a certain CPU that subsequently goes offline? Will e.g. flush
    block forever, or only until that CPU comes back? Also, does a
    singlethreaded workqueue behave in the same way?
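
    A hedged illustration of the first half of that concern (this is not
    part of the patch, and the helper name is made up): the inherited mask
    could be intersected with cpu_online_mask under get_online_cpus()
    before it is applied, so the worker at least starts out on CPUs that
    are online at creation time. It does not say anything about what
    happens when those CPUs go offline later.

    #include <linux/cpu.h>
    #include <linux/cpumask.h>
    #include <linux/gfp.h>
    #include <linux/sched.h>

    /*
     * Illustrative sketch only, not vhost code: copy the creator's
     * allowed CPUs, drop any that are currently offline, and fall
     * back to all online CPUs if nothing is left.
     */
    static int apply_online_affinity(struct task_struct *worker)
    {
    	cpumask_var_t mask;
    	int ret;

    	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
    		return -ENOMEM;

    	get_online_cpus();	/* hold off CPU hotplug while we look */
    	ret = sched_getaffinity(current->pid, mask);
    	if (!ret) {
    		cpumask_and(mask, mask, cpu_online_mask);
    		if (cpumask_empty(mask))
    			cpumask_copy(mask, cpu_online_mask);
    		ret = sched_setaffinity(worker->pid, mask);
    	}
    	put_online_cpus();

    	free_cpumask_var(mask);
    	return ret;
    }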

    > ---
    > drivers/vhost/vhost.c | 34 ++++++++++++++++++++++++++++++----
    > 1 file changed, 30 insertions(+), 4 deletions(-)
    >
    > Index: work/drivers/vhost/vhost.c
    > ===================================================================
    > --- work.orig/drivers/vhost/vhost.c
    > +++ work/drivers/vhost/vhost.c
    > @@ -23,6 +23,7 @@
    >  #include <linux/highmem.h>
    >  #include <linux/slab.h>
    >  #include <linux/kthread.h>
    > +#include <linux/cgroup.h>
    >
    >  #include <linux/net.h>
    >  #include <linux/if_packet.h>
    > @@ -187,11 +188,29 @@ long vhost_dev_init(struct vhost_dev *de
    >  		    struct vhost_virtqueue *vqs, int nvqs)
    >  {
    >  	struct task_struct *worker;
    > -	int i;
    > +	cpumask_var_t mask;
    > +	int i, ret = -ENOMEM;
    > +
    > +	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
    > +		goto out_free_mask;
    >
    >  	worker = kthread_create(vhost_worker, dev, "vhost-%d", current->pid);
    > -	if (IS_ERR(worker))
    > -		return PTR_ERR(worker);
    > +	if (IS_ERR(worker)) {
    > +		ret = PTR_ERR(worker);
    > +		goto out_free_mask;
    > +	}
    > +
    > +	ret = sched_getaffinity(current->pid, mask);
    > +	if (ret)
    > +		goto out_stop_worker;
    > +
    > +	ret = sched_setaffinity(worker->pid, mask);
    > +	if (ret)
    > +		goto out_stop_worker;
    > +
    > +	ret = cgroup_attach_task_current_cg(worker);
    > +	if (ret)
    > +		goto out_stop_worker;
    >
    >  	dev->vqs = vqs;
    >  	dev->nvqs = nvqs;
    > @@ -214,7 +233,14 @@ long vhost_dev_init(struct vhost_dev *de
    >  	}
    >
    >  	wake_up_process(worker); /* avoid contributing to loadavg */
    > -	return 0;
    > +	ret = 0;
    > +	goto out_free_mask;
    > +
    > +out_stop_worker:
    > +	kthread_stop(worker);
    > +out_free_mask:
    > +	free_cpumask_var(mask);
    > +	return ret;
    >  }
    >
    >  /* Caller should have device mutex */

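    For reference, a condensed, self-contained sketch of the pattern in the
    hunk above (function name and signature are hypothetical; it assumes
    the cgroup_attach_task_current_cg() helper introduced earlier in this
    series): create the kthread, copy the creator's affinity and cgroup
    onto it, and unwind with kthread_stop() if any step fails.

    #include <linux/cgroup.h>
    #include <linux/cpumask.h>
    #include <linux/err.h>
    #include <linux/kthread.h>
    #include <linux/sched.h>

    /* Sketch only: spawn a worker that inherits the caller's CPUs and cgroup. */
    static int spawn_inheriting_worker(int (*fn)(void *), void *data,
    				   struct task_struct **workerp)
    {
    	struct task_struct *worker;
    	cpumask_var_t mask;
    	int ret = -ENOMEM;

    	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
    		return ret;

    	worker = kthread_create(fn, data, "worker-%d", current->pid);
    	if (IS_ERR(worker)) {
    		ret = PTR_ERR(worker);
    		goto out_free_mask;
    	}

    	/* inherit the creator's allowed CPUs ... */
    	ret = sched_getaffinity(current->pid, mask);
    	if (!ret)
    		ret = sched_setaffinity(worker->pid, mask);
    	/* ... and its cgroup */
    	if (!ret)
    		ret = cgroup_attach_task_current_cg(worker);
    	if (ret) {
    		/* worker was never woken, so stopping it here is safe */
    		kthread_stop(worker);
    		goto out_free_mask;
    	}

    	wake_up_process(worker);
    	*workerp = worker;
    	ret = 0;
    out_free_mask:
    	free_cpumask_var(mask);
    	return ret;
    }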