Date: Sat, 30 Mar 2013
From: Lai Jiangshan <>
Subject: Re: [PATCH v4 13/14] workqueue: implement NUMA affinity for unbound workqueues
    On 31/03/13 00:32, Tejun Heo wrote:
    > Hello, Lai.
    >
    >
> On Sat, Mar 30, 2013 at 9:13 AM, Lai Jiangshan <eag0628@gmail.com> wrote:
>
>>> +	/* all pwqs have been created successfully, let's install'em */
>>> 	mutex_lock(&wq->mutex);
>>>
>>> 	copy_workqueue_attrs(wq->unbound_attrs, new_attrs);
>>> +
>>> +	/* save the previous pwq and install the new one */
>>> 	for_each_node(node)
>>> -		last_pwq = numa_pwq_tbl_install(wq, node, pwq);
>>> +		pwq_tbl[node] = numa_pwq_tbl_install(wq, node, pwq_tbl[node]);
>>> +
>>> +	/* @dfl_pwq might not have been used, ensure it's linked */
>>> +	link_pwq(dfl_pwq);
>>> +	swap(wq->dfl_pwq, dfl_pwq);
>>>
>>> 	mutex_unlock(&wq->mutex);
>>>
>>> -	put_pwq_unlocked(last_pwq);
>>> +	/* put the old pwqs */
>>> +	for_each_node(node)
>>> +		put_pwq_unlocked(pwq_tbl[node]);
>>> +	put_pwq_unlocked(dfl_pwq);
>>> +
>>> +	put_online_cpus();
>>> 	return 0;
>
>> Forgot to free new_attrs in the previous patch
>> (workqueue: fix unbound workqueue attrs hashing / comparison).
>>
>> Forgot to free tmp_attrs and pwq_tbl in this patch.
>
    > Right, will fix.
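
(For reference, I'd expect the missing cleanup to look roughly like the sketch below; the out_free label, the ret variable, and the exact ordering are my assumptions, not the actual fix:)

	/* sketch only -- label name, ret, and ordering are assumed */
out_free:
	free_workqueue_attrs(tmp_attrs);
	free_workqueue_attrs(new_attrs);
	kfree(pwq_tbl);
	return ret;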
    >
>>> +retry:
>>> +	mutex_lock(&wq->mutex);
>>> +
>>> +	copy_workqueue_attrs(target_attrs, wq->unbound_attrs);
>>> +	pwq = unbound_pwq_by_node(wq, node);
>>> +
>>> +	/*
>>> +	 * Let's determine what needs to be done. If the target cpumask is
>>> +	 * different from wq's, we need to compare it to @pwq's and create
>>> +	 * a new one if they don't match. If the target cpumask equals
>>> +	 * wq's, the default pwq should be used. If @pwq is already the
>>> +	 * default one, nothing to do; otherwise, install the default one.
>>> +	 */
>>> +	if (wq_calc_node_cpumask(wq->unbound_attrs, node, cpu_off, cpumask)) {
>>> +		if (cpumask_equal(cpumask, pwq->pool->attrs->cpumask))
>>> +			goto out_unlock;
>>> +	} else if (pwq != wq->dfl_pwq) {
>>> +		goto use_dfl_pwq;
>>> +	} else {
>>> +		goto out_unlock;
>>> +	}
>>> +
>>> +	/*
>>> +	 * Have we already created a new pwq? As we could have raced with
>>> +	 * apply_workqueue_attrs(), verify that its attrs match the desired
>>> +	 * one before installing.
>>> +	 */
>
>> I don't see any race, since there is get/put_online_cpus() in apply_workqueue_attrs().
>
    > I don't know. I kinda want wq exclusion to be self-contained, but yeah the hotplug exclusion here is *almost* explicit so maybe it would be better to depend on it. Will think about it.
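
(Concretely, I read the hotplug exclusion as the bracketing below; a simplified sketch of the apply_workqueue_attrs() shape, error paths omitted:)

	get_online_cpus();	/* blocks CPU up/down, so no race with hotplug */

	mutex_lock(&wq->mutex);
	/* ... create and install the per-node pwqs ... */
	mutex_unlock(&wq->mutex);

	put_online_cpus();	/* the put_online_cpus() visible in the first hunk */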
    >
>>> +	mutex_unlock(&wq->mutex);
>>> +	put_pwq_unlocked(old_pwq);
>>> +	free_unbound_pwq(new_pwq);
>>> +}
>
>> OK, your solution is what I suggested: swapping dfl_pwq <-> node pwq.
>> But when the last CPU of the node (for this wq) is going offline,
>> you need to handle the work items of the node pwq (old_pwq in the code).
>>
>> You may handle the works which are still queued by migrating them,
>> OR by flushing them; and you may handle busy works by temporarily
>> changing the workers' cpumask, OR by flushing the busy works.
>
    > I don't think that's necessary.

    Please document it.
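
(Maybe something along these lines, above wq_update_unbound_numa() -- my wording, only a suggestion:)

	/*
	 * Note: when the last CPU of a node goes down, the node's pwq is
	 * replaced with the default pwq, but work items already queued on
	 * or running from the old pwq keep their old affinity until they
	 * finish; affinity is only guaranteed for newly queued work items.
	 */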

    > It's not like we have a hard guarantee on attr changes anyway.
    > Self-requeueing work items can get stuck with old attributes for quite a while,

    It is OK as long as it is documented.

    > and even per-cpu work items get migrated to other CPUs on CPU DOWN.

    It is expected.

    But for an unbound wq under CPU hotplug:
    w/o NUMA affinity, work items always run on the wq's CPUs as long as there is an online CPU in the wq's cpumask;
    w/ NUMA affinity, work items do NOT always run on the wq's CPUs, even when there is an online CPU in the wq's cpumask.

    > Workqueue's affinity guarantee is very specific - the work item owner is
    > responsible for flushing the work item during CPU DOWN if it wants
    > to guarantee affinity over full execution.
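
(So an owner that wants strict affinity would flush from a hotplug callback; hypothetical driver code using the notifier API of that time, with @my_work and my_work_fn being assumed examples, not part of the patchset:)

static DECLARE_WORK(my_work, my_work_fn);	/* my_work_fn assumed elsewhere */

static int my_cpu_callback(struct notifier_block *nb,
			   unsigned long action, void *hcpu)
{
	/* finish @my_work before the CPU disappears so that it never
	 * executes on a CPU outside its intended affinity */
	if ((action & ~CPU_TASKS_FROZEN) == CPU_DOWN_PREPARE)
		flush_work(&my_work);
	return NOTIFY_OK;
}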

    Could you add comments documenting this, and add
    Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    to the patchset?

    Thanks,
    Lai
