    Subject: Re: [PATCH 1/2] cgroup/cpuset: Make cpuset hotplug processing synchronous
    On 03/04/24 09:38, Waiman Long wrote:
    > On 4/3/24 08:02, Michal Koutný wrote:
    >> On Tue, Apr 02, 2024 at 11:30:11AM -0400, Waiman Long <longman@redhat.com> wrote:
    >>> Yes, there is a potential that a cpus_read_lock() may be called leading to
    >>> deadlock. So unless we reverse the current cgroup_mutex --> cpu_hotplug_lock
    >>> ordering, it is not safe to call cgroup_transfer_tasks() directly.
    >> I see that cgroup_transfer_tasks() has only one user -- cpuset. What
    >> about bending it to that specific use, like:
    >>
    >> diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
    >> index 34aaf0e87def..64deb7212c5c 100644
    >> --- a/include/linux/cgroup.h
    >> +++ b/include/linux/cgroup.h
    >> @@ -109,7 +109,7 @@ struct cgroup *cgroup_get_from_fd(int fd);
    >>  struct cgroup *cgroup_v1v2_get_from_fd(int fd);
    >>
    >>  int cgroup_attach_task_all(struct task_struct *from, struct task_struct *);
    >> -int cgroup_transfer_tasks(struct cgroup *to, struct cgroup *from);
    >> +int cgroup_transfer_tasks_locked(struct cgroup *to, struct cgroup *from);
    >>
    >>  int cgroup_add_dfl_cftypes(struct cgroup_subsys *ss, struct cftype *cfts);
    >>  int cgroup_add_legacy_cftypes(struct cgroup_subsys *ss, struct cftype *cfts);
    >> diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
    >> index 520a11cb12f4..f97025858c7a 100644
    >> --- a/kernel/cgroup/cgroup-v1.c
    >> +++ b/kernel/cgroup/cgroup-v1.c
    >> @@ -91,7 +91,8 @@ EXPORT_SYMBOL_GPL(cgroup_attach_task_all);
    >>  *
    >>  * Return: %0 on success or a negative errno code on failure
    >>  */
    >> -int cgroup_transfer_tasks(struct cgroup *to, struct cgroup *from)
    >> +int cgroup_transfer_tasks_locked(struct cgroup *to, struct cgroup *from)
    >>  {
    >>          DEFINE_CGROUP_MGCTX(mgctx);
    >>          struct cgrp_cset_link *link;
    >> @@ -106,9 +106,11 @@ int cgroup_transfer_tasks(struct cgroup *to, struct cgroup *from)
    >>          if (ret)
    >>                  return ret;
    >>
    >> -        cgroup_lock();
    >> -
    >> -        cgroup_attach_lock(true);
    >> +        /* The locking rules serve specific purpose of v1 cpuset hotplug
    >> +         * migration, see hotplug_update_tasks_legacy() and
    >> +         * cgroup_attach_lock() */
    >> +        lockdep_assert_held(&cgroup_mutex);
    >> +        lockdep_assert_cpus_held();
    >> +        percpu_down_write(&cgroup_threadgroup_rwsem);
    >>
    >>          /* all tasks in @from are being moved, all csets are source */
    >>          spin_lock_irq(&css_set_lock);
    >> @@ -144,8 +146,7 @@ int cgroup_transfer_tasks(struct cgroup *to, struct cgroup *from)
    >>          } while (task && !ret);
    >>  out_err:
    >>          cgroup_migrate_finish(&mgctx);
    >> -        cgroup_attach_unlock(true);
    >> -        cgroup_unlock();
    >> +        percpu_up_write(&cgroup_threadgroup_rwsem);
    >>          return ret;
    >>  }
    >>
    >> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
    >> index 13d27b17c889..94fb8b26f038 100644
    >> --- a/kernel/cgroup/cpuset.c
    >> +++ b/kernel/cgroup/cpuset.c
    >> @@ -4331,7 +4331,7 @@ static void remove_tasks_in_empty_cpuset(struct cpuset *cs)
    >>                 nodes_empty(parent->mems_allowed))
    >>                  parent = parent_cs(parent);
    >>
    >> -        if (cgroup_transfer_tasks(parent->css.cgroup, cs->css.cgroup)) {
    >> +        if (cgroup_transfer_tasks_locked(parent->css.cgroup, cs->css.cgroup)) {
    >>                  pr_err("cpuset: failed to transfer tasks out of empty cpuset ");
    >>                  pr_cont_cgroup_name(cs->css.cgroup);
    >>                  pr_cont("\n");
    >> @@ -4376,21 +4376,9 @@ hotplug_update_tasks_legacy(struct cpuset *cs,
    >>
    >>          /*
    >>           * Move tasks to the nearest ancestor with execution resources,
    >> -         * This is full cgroup operation which will also call back into
    >> -         * cpuset. Execute it asynchronously using workqueue.
    >>           */
    >> -        if (is_empty && css_tryget_online(&cs->css)) {
    >> -                struct cpuset_remove_tasks_struct *s;
    >> -
    >> -                s = kzalloc(sizeof(*s), GFP_KERNEL);
    >> -                if (WARN_ON_ONCE(!s)) {
    >> -                        css_put(&cs->css);
    >> -                        return;
    >> -                }
    >> -
    >> -                s->cs = cs;
    >> -                INIT_WORK(&s->work, cpuset_migrate_tasks_workfn);
    >> -                schedule_work(&s->work);
    >> +        if (is_empty)
    >> +                remove_tasks_in_empty_cpuset(cs);
    >>          }
    >>  }
    >>
    >
    > It still won't work because of the possibility of multiple tasks
    > being involved in a circular locking dependency. The hotplug thread
    > always acquires cpu_hotplug_lock before acquiring cpuset_mutex or
    > cgroup_mutex in this case (cpu_hotplug_lock --> cgroup_mutex). Other
    > tasks calling into cgroup code will acquire the pair in the order
    > cgroup_mutex --> cpu_hotplug_lock. This may lead to a deadlock if these
    > 2 locking sequences happen at the same time. Lockdep will certainly
    > spit out a splat because of this.

    > So unless we change all the relevant
    > cgroup code to the new cpu_hotplug_lock --> cgroup_mutex locking order,
    > the hotplug code can't call cgroup_transfer_tasks() directly.
    >

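    To make that concrete for myself, here's a tiny userspace model of the two
    orderings described above -- two pthread mutexes standing in for
    cpu_hotplug_lock and cgroup_mutex. Nothing below is kernel code and all the
    names are made up, it's only meant to show the AB-BA shape:

    /*
     * Illustration only: hotplug_lock stands in for cpu_hotplug_lock and
     * cgroup_lock for cgroup_mutex. Build with gcc -pthread.
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t hotplug_lock = PTHREAD_MUTEX_INITIALIZER; /* ~cpu_hotplug_lock */
    static pthread_mutex_t cgroup_lock  = PTHREAD_MUTEX_INITIALIZER; /* ~cgroup_mutex */

    /* hotplug thread: cpu_hotplug_lock --> cgroup_mutex */
    static void *hotplug_path(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&hotplug_lock);
            usleep(100 * 1000);                     /* widen the race window */
            if (pthread_mutex_trylock(&cgroup_lock)) {
                    /* in the kernel this is a plain mutex_lock() and blocks forever */
                    puts("hotplug path: would block on cgroup_mutex");
            } else {
                    pthread_mutex_unlock(&cgroup_lock);
            }
            pthread_mutex_unlock(&hotplug_lock);
            return NULL;
    }

    /* any other task writing to cgroupfs: cgroup_mutex --> cpu_hotplug_lock */
    static void *cgroup_write_path(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&cgroup_lock);
            usleep(100 * 1000);
            if (pthread_mutex_trylock(&hotplug_lock)) {
                    puts("cgroup path: would block on cpu_hotplug_lock");
            } else {
                    pthread_mutex_unlock(&hotplug_lock);
            }
            pthread_mutex_unlock(&cgroup_lock);
            return NULL;
    }

    int main(void)
    {
            pthread_t a, b;

            pthread_create(&a, NULL, hotplug_path, NULL);
            pthread_create(&b, NULL, cgroup_write_path, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            return 0;
    }

    Once both sides hold their first lock, each one's second acquisition is the
    other's held lock, which is exactly the AB-BA pattern lockdep flags.
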
    IIUC that was Thomas' suggestion [1], but I can't tell yet how bad it would
    be to change cgroup_lock() to also do a cpus_read_lock().
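
    For reference, the shape of that change might be something like the sketch
    below (completely untested, and it glosses over any cgroup_lock() caller
    that can't take cpus_read_lock(); today cgroup_lock() in
    include/linux/cgroup.h only takes cgroup_mutex):

    /* untested sketch: take cpu_hotplug_lock (read) before cgroup_mutex */
    static inline void cgroup_lock(void)
    {
            cpus_read_lock();
            mutex_lock(&cgroup_mutex);
    }

    static inline void cgroup_unlock(void)
    {
            mutex_unlock(&cgroup_mutex);
            cpus_read_unlock();
    }

    so that every cgroup_mutex holder already follows the
    cpu_hotplug_lock --> cgroup_mutex order the hotplug path wants. Whether all
    existing callers can live with that is the part I can't gauge yet.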

    Also, I gave Michal's patch a try and it looks like it's introducing a
    cgroup_threadgroup_rwsem -> cpuset_mutex ordering from:

    cgroup_transfer_tasks_locked()
    `\
      percpu_down_write(&cgroup_threadgroup_rwsem);
      cgroup_migrate()
      `\
        cgroup_migrate_execute()
        `\
          ss->can_attach() // cpuset_can_attach()
          `\
            mutex_lock(&cpuset_mutex);

    which is invalid, see below.

    [1]: https://lore.kernel.org/lkml/87cyrfe7a3.ffs@tglx/

    [ 77.627915] WARNING: possible circular locking dependency detected
    [ 77.628419] 6.9.0-rc1-00042-g844178b616c7-dirty #23 Tainted: G W
    [ 77.629035] ------------------------------------------------------
    [ 77.629548] cpuhp/2/24 is trying to acquire lock:
    [ 77.629946] ffffffff82d680b0 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: cgroup_transfer_tasks_locked+0x123/0x450
    [ 77.630851]
    [ 77.630851] but task is already holding lock:
    [ 77.631397] ffffffff82d6c088 (cpuset_mutex){+.+.}-{3:3}, at: cpuset_update_active_cpus+0x352/0xca0
    [ 77.632169]
    [ 77.632169] which lock already depends on the new lock.
    [ 77.632169]
    [ 77.632891]
    [ 77.632891] the existing dependency chain (in reverse order) is:
    [ 77.633521]
    [ 77.633521] -> #1 (cpuset_mutex){+.+.}-{3:3}:
    [ 77.634028] lock_acquire+0xc0/0x2d0
    [ 77.634393] __mutex_lock+0xaa/0x710
    [ 77.634751] cpuset_can_attach+0x6d/0x2c0
    [ 77.635146] cgroup_migrate_execute+0x6f/0x520
    [ 77.635565] cgroup_attach_task+0x2e2/0x450
    [ 77.635989] __cgroup1_procs_write.isra.0+0xfd/0x150
    [ 77.636440] kernfs_fop_write_iter+0x127/0x1c0
    [ 77.636917] vfs_write+0x2b0/0x540
    [ 77.637330] ksys_write+0x70/0xf0
    [ 77.637707] int80_emulation+0xf8/0x1b0
    [ 77.638183] asm_int80_emulation+0x1a/0x20
    [ 77.638636]
    [ 77.638636] -> #0 (cgroup_threadgroup_rwsem){++++}-{0:0}:
    [ 77.639321] check_prev_add+0xeb/0xb20
    [ 77.639751] __lock_acquire+0x12ac/0x16d0
    [ 77.640345] lock_acquire+0xc0/0x2d0
    [ 77.640903] percpu_down_write+0x33/0x260
    [ 77.641347] cgroup_transfer_tasks_locked+0x123/0x450
    [ 77.641826] cpuset_update_active_cpus+0x782/0xca0
    [ 77.642268] sched_cpu_deactivate+0x1ad/0x1d0
    [ 77.642677] cpuhp_invoke_callback+0x16b/0x6b0
    [ 77.643098] cpuhp_thread_fun+0x1ba/0x240
    [ 77.643488] smpboot_thread_fn+0xd8/0x1d0
    [ 77.643873] kthread+0xce/0x100
    [ 77.644209] ret_from_fork+0x2f/0x50
    [ 77.644626] ret_from_fork_asm+0x1a/0x30
    [ 77.645084]
    [ 77.645084] other info that might help us debug this:
    [ 77.645084]
    [ 77.645829] Possible unsafe locking scenario:
    [ 77.645829]
    [ 77.646356]        CPU0                    CPU1
    [ 77.646748]        ----                    ----
    [ 77.647143]   lock(cpuset_mutex);
    [ 77.647529]                               lock(cgroup_threadgroup_rwsem);
    [ 77.648193]                               lock(cpuset_mutex);
    [ 77.648767]   lock(cgroup_threadgroup_rwsem);
    [ 77.649216]
    [ 77.649216] *** DEADLOCK ***

