    Subject: Re: [PATCH 2/5] workqueue: merge the similar code
    Hello, Lai.

    On Mon, May 11, 2015 at 05:35:49PM +0800, Lai Jiangshan wrote:
    > @@ -3440,48 +3440,86 @@ static struct pool_workqueue *alloc_unbound_pwq(struct workqueue_struct *wq,
    > }
    >
    > /**
    > - * wq_calc_node_mask - calculate a wq_attrs' cpumask for the specified node
    > - * @attrs: the wq_attrs of the default pwq of the target workqueue
    > + * alloc_node_unbound_pwq - allocate a pwq for specified node

    for the specified node

    > + * @wq: the target workqueue
    > + * @dfl_pwq: the allocated default pwq
    > + * @numa: NUMA affinity
    > * @node: the target NUMA node
    > - * @cpu_going_down: if >= 0, the CPU to consider as offline
    > - * @cpumask: outarg, the resulting cpumask
    > + * @cpu_off: if >= 0, the CPU to consider as offline

    @cpu_off sounds like an offset into a cpu array or something. Is
    there a reason to change the name?
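
    Keeping the original name would read unambiguously, e.g. (just a
    sketch of the signature, not tested):

	static struct pool_workqueue *
	alloc_node_unbound_pwq(struct workqueue_struct *wq,
			       struct pool_workqueue *dfl_pwq, bool numa,
			       int node, int cpu_going_down,
			       bool use_dfl_when_fail);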

    > + * @use_dfl_when_fail: use the @dfl_pwq in case the normal allocation failed

    @use_dfl_on_fail

    > + *
    > + * Allocate or reuse a pwq with the cpumask that @wq should use on @node.

    I wonder whether a better name for the function would be something
    like get_alloc_node_unbound_pwq().

    > *
    > - * Calculate the cpumask a workqueue with @attrs should use on @node. If
    > - * @cpu_going_down is >= 0, that cpu is considered offline during
    > - * calculation. The result is stored in @cpumask.
    > + * If NUMA affinity is not enabled, @dfl_pwq is always used. @dfl_pwq
    > + * was allocated with the effective attrs saved in @dfl_pwq->pool->attrs.

    I'm not sure we need the second sentence.

    ...
    > - * Return: %true if the resulting @cpumask is different from @attrs->cpumask,
    > - * %false if equal.
    > + * Return: valid pwq, it might be @dfl_pwq under some conditions
    > + * or might be the old pwq of the @node.
    > + * NULL, when the normal allocation failed.

    Maybe explain how @use_dfl_on_fail affects the result?
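
    Something like the following, maybe (a rough sketch, assuming the
    renamed @use_dfl_on_fail):

     * Return: the pwq to use on @node - @dfl_pwq when NUMA affinity
     * doesn't apply, @node's current pwq when the attrs already match,
     * or a newly allocated pwq.  %NULL if the allocation fails and
     * @use_dfl_on_fail is %false; if it is %true, @dfl_pwq is returned
     * instead.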

    > */
    > -static bool wq_calc_node_cpumask(const struct workqueue_attrs *attrs, int node,
    > - int cpu_going_down, cpumask_t *cpumask)
    > +static struct pool_workqueue *
    > +alloc_node_unbound_pwq(struct workqueue_struct *wq,
    > + struct pool_workqueue *dfl_pwq, bool numa,
    > + int node, int cpu_off, bool use_dfl_when_fail)
    > +
    > {
    > - if (!wq_numa_enabled || attrs->no_numa)
    > + struct pool_workqueue *pwq = unbound_pwq_by_node(wq, node);
    > + struct workqueue_attrs *attrs = dfl_pwq->pool->attrs, *tmp_attrs;
    > + cpumask_t *cpumask;
    > +
    > + lockdep_assert_held(&wq_pool_mutex);
    > +
    > + if (!wq_numa_enabled || !numa)
    > goto use_dfl;
    >
    > + /*
    > + * We don't wanna alloc/free wq_attrs for each call. Let's use a
    > + * preallocated one. It is protected by wq_pool_mutex.
    > + * tmp_attrs->cpumask will be updated in next and tmp_attrs->no_numa

    will be updated below

    > + * is not used, so we just need to initialize tmp_attrs->nice;
    > + */
    > + tmp_attrs = wq_update_unbound_numa_attrs_buf;
    > + tmp_attrs->nice = attrs->nice;
    > + cpumask = tmp_attrs->cpumask;
    > +
    > /* does @node have any online CPUs @attrs wants? */
    > cpumask_and(cpumask, cpumask_of_node(node), attrs->cpumask);
    > - if (cpu_going_down >= 0)
    > - cpumask_clear_cpu(cpu_going_down, cpumask);
    > -
    > + if (cpu_off >= 0)
    > + cpumask_clear_cpu(cpu_off, cpumask);
    > if (cpumask_empty(cpumask))
    > goto use_dfl;
    >
    > - /* yeap, return possible CPUs in @node that @attrs wants */
    > + /* yeap, use possible CPUs in @node that @attrs wants */
    > cpumask_and(cpumask, attrs->cpumask, wq_numa_possible_cpumask[node]);
    > - return !cpumask_equal(cpumask, attrs->cpumask);
    > + if (cpumask_equal(cpumask, attrs->cpumask))
    > + goto use_dfl;
    > + if (pwq && wqattrs_equal(tmp_attrs, pwq->pool->attrs))
    > + goto use_existed;

    goto use_current;

    Also, would it be difficult to put this in a separate patch? This is
    mixing code refactoring with behavior change. Make both code paths
    behave the same way first and then refactor?

    > +
    > + /* create a new pwq */
    > + pwq = alloc_unbound_pwq(wq, tmp_attrs);
    > + if (!pwq && use_dfl_when_fail) {
    > + pr_warn("workqueue: allocation failed while updating NUMA affinity of \"%s\"\n",
    > + wq->name);
    > + goto use_dfl;

    Does this need to be in this function? Can't we let the caller handle
    the fallback instead?
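
    Something like the following in the caller would keep this function
    simpler (untested sketch, with @use_dfl_when_fail dropped):

	pwq = alloc_node_unbound_pwq(wq, dfl_pwq, numa, node, cpu_off);
	if (!pwq) {
		pr_warn("workqueue: allocation failed while updating NUMA affinity of \"%s\"\n",
			wq->name);
		pwq = dfl_pwq;	/* fall back to the default pwq */
	}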

    Thanks.

    --
    tejun

