From:	Sven Schnelle
Subject:	Re: [PATCH] workqueue: fix selection of wake_cpu in kick_pool()
Tejun Heo <tj@kernel.org> writes:

> On Mon, Apr 15, 2024 at 07:35:49AM +0200, Sven Schnelle wrote:
>> @@ -1277,7 +1277,8 @@ static bool kick_pool(struct worker_pool *pool)
>>  	    !cpumask_test_cpu(p->wake_cpu, pool->attrs->__pod_cpumask)) {
>>  		struct work_struct *work = list_first_entry(&pool->worklist,
>>  						struct work_struct, entry);
>> -		p->wake_cpu = cpumask_any_distribute(pool->attrs->__pod_cpumask);
>> +		p->wake_cpu = cpumask_any_and_distribute(pool->attrs->__pod_cpumask,
>> +							 cpu_online_mask);
>
> I think this can still race with the last CPU in the pod going down and
> return nr_cpu_ids. Maybe something like the following would be better?
>
> 	int wake_cpu;
>
> 	wake_cpu = cpumask_any_and_distribute(...);
> 	if (wake_cpu < nr_cpu_ids) {
> 		p->wake_cpu = wake_cpu;
> 		// update stat;
> 	}
>
> This generally seems like a good idea but isn't this still racy? The CPU may
> go down between setting p->wake_cpu and wake_up_process().
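
If I understand your suggestion correctly, the end result would look
roughly like this (untested sketch on top of my patch; the surrounding
lines and the stats update are copied from my reading of kick_pool(),
so they may not match the tree exactly):

	if (!pool->attrs->affn_strict &&
	    !cpumask_test_cpu(p->wake_cpu, pool->attrs->__pod_cpumask)) {
		struct work_struct *work = list_first_entry(&pool->worklist,
						struct work_struct, entry);
		/* pick a CPU that is both in the pod and still online */
		int wake_cpu = cpumask_any_and_distribute(pool->attrs->__pod_cpumask,
							  cpu_online_mask);

		/*
		 * Only update wake_cpu when an online CPU was found in the
		 * pod; otherwise keep the previous value instead of storing
		 * nr_cpu_ids.
		 */
		if (wake_cpu < nr_cpu_ids) {
			p->wake_cpu = wake_cpu;
			get_work_pwq(work)->stats[PWQ_STAT_REPATRIATED]++;
		}
	}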

I don't know without reading the source, but how does this code normally
protect against that race?

Thanks
Sven
