Date: 2015-03-24
From: Tejun Heo
Subject: Re: [PATCH 4/4 V5] workqueue: Allow modifying low level unbound workqueue cpumask
On Wed, Mar 18, 2015 at 12:40:17PM +0800, Lai Jiangshan wrote:
> The ordered workqueue is ignored by the low level unbound workqueue
> cpumask; it will be handled in the near future.

Ugh, right, ordered workqueues are tricky. Maybe we should change how
ordered workqueues are implemented. Just gate work items at the
workqueue layer instead of fiddling with max_active and the number of
pwqs.
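
Something like the below - a completely untested sketch, and the
gate_lock/gate_active/gated_works fields are all made up:

	/*
	 * Park work items queued on an ordered wq while another item
	 * is in flight, so ordering is enforced at the workqueue layer
	 * rather than via max_active and a single pwq.
	 */
	static bool wq_gate_work(struct workqueue_struct *wq,
				 struct work_struct *work)
	{
		bool gated = false;

		spin_lock_irq(&wq->gate_lock);
		if (wq->gate_active) {
			/* one item already in flight, park this one */
			list_add_tail(&work->entry, &wq->gated_works);
			gated = true;
		} else {
			wq->gate_active = true;
		}
		spin_unlock_irq(&wq->gate_lock);

		return gated;	/* if true, caller skips pwq insertion */
	}

The completion path would then pull the next item off ->gated_works
and queue it for real.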

> static struct wq_unbound_install_ctx *
> wq_unbound_install_ctx_prepare(struct workqueue_struct *wq,
> - const struct workqueue_attrs *attrs)
> + const struct workqueue_attrs *attrs,
> + cpumask_var_t unbound_cpumask)
> {
...
> /* make a copy of @attrs and sanitize it */
> copy_workqueue_attrs(new_attrs, attrs);
> - cpumask_and(new_attrs->cpumask, new_attrs->cpumask, wq_unbound_cpumask);
> + copy_workqueue_attrs(pwq_attrs, attrs);
> + cpumask_and(new_attrs->cpumask, new_attrs->cpumask, cpu_possible_mask);
> + cpumask_and(pwq_attrs->cpumask, pwq_attrs->cpumask, unbound_cpumask);

Hmmm... we weren't checking whether the intersection becomes null
before. Why are we doing it now? Note that this doesn't really make
things water-tight as cpu on/offlining can still leave the mask w/o
any online cpus. Shouldn't we just let the scheduler handle it as
before?
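
IOW, keep what the old code did - a single cpumask_and() and no
emptiness check, e.g. (untested):

	copy_workqueue_attrs(new_attrs, attrs);
	cpumask_and(new_attrs->cpumask, new_attrs->cpumask, unbound_cpumask);
	/*
	 * No emptiness check: if hotplug later leaves the mask without
	 * any online CPU, the scheduler already falls back to an
	 * online CPU for the affected workers.
	 */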

> @@ -3712,6 +3726,9 @@ static void wq_update_unbound_numa(struct workqueue_struct *wq, int cpu,
> * wq's, the default pwq should be used.
> */
> if (wq_calc_node_cpumask(wq->unbound_attrs, node, cpu_off, cpumask)) {
> + cpumask_and(cpumask, cpumask, wq_unbound_cpumask);
> + if (cpumask_empty(cpumask))
> + goto use_dfl_pwq;

So, this special handling is necessary only because of the
special-casing above for the dfl_pwq. Why do we need either?

> +static int unbounds_cpumask_apply(cpumask_var_t cpumask)
> +{
...
> + list_for_each_entry_safe(ctx, n, &ctxs, list) {
> + if (ret >= 0)

Let's do !ret.

> + wq_unbound_install_ctx_commit(ctx);
> + wq_unbound_install_ctx_free(ctx);
> + }
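
IOW, with !ret the loop would read:

	list_for_each_entry_safe(ctx, n, &ctxs, list) {
		if (!ret)
			wq_unbound_install_ctx_commit(ctx);
		wq_unbound_install_ctx_free(ctx);
	}
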
...
> +/**
> + * workqueue_unbounds_cpumask_set - Set the low-level unbound cpumask
> + * @cpumask: the cpumask to set
> + *
> + * The low-level workqueue cpumask is a global cpumask that limits
> + * the affinity of all unbound workqueues. This function checks
> + * @cpumask, applies it to all unbound workqueues and updates all
> + * their pwqs. When all succeed, it saves @cpumask to the global
> + * low-level unbound cpumask.
> + *
> + * Return: 0 - Success
> + * -EINVAL - No online cpu in the @cpumask
> + * -ENOMEM - Failed to allocate memory for attrs or pwqs.
> + */
> +int workqueue_unbounds_cpumask_set(cpumask_var_t cpumask)
> +{
> + int ret = -EINVAL;
> +
> + get_online_cpus();
> + cpumask_and(cpumask, cpumask, cpu_possible_mask);
> + if (cpumask_intersects(cpumask, cpu_online_mask)) {

Does this make sense? We can't prevent cpus going down right after
the mask is set. What's the point of preventing empty config if we
can't prevent transitions into it and have to handle it anyway?
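
For example, nothing prevents the following sequence:

	workqueue_unbounds_cpumask_set(mask);	/* intersects cpu_online_mask */
	/* ... the last online CPU in @mask goes offline ... */

and we're back to an effective mask without any online CPU, which the
code has to handle anyway.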

> +static ssize_t unbounds_cpumask_store(struct device *dev,
> + struct device_attribute *attr,
> + const char *buf, size_t count)

Naming is too confusing. Please pick a name which clearly
distinguishes per-wq and global masking - say, something like
wq_unbound_cpumask_store() for the global knob as opposed to the
per-wq wq_cpumask_store().

Thanks.

--
tejun

