Subject: Re: [PATCH v3] writeback: safer lock nesting

On Tue, Apr 10, 2018 at 1:15 AM Wang Long <wanglong19@meituan.com> wrote:

> > lock_page_memcg()/unlock_page_memcg() use spin_lock_irqsave/restore() if
> > the page's memcg is undergoing move accounting, which occurs when a
> > process leaves its memcg for a new one that has
> > memory.move_charge_at_immigrate set.
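
(For reference, a simplified sketch of the memcg side, based on my reading of
mm/memcontrol.c from that era; field names such as moving_account, move_lock
and move_lock_flags are approximate and version-dependent:)

  /* Sketch only: lock_page_memcg() reduced to the behavior relevant here. */
  void lock_page_memcg(struct page *page)
  {
          struct mem_cgroup *memcg = page->mem_cgroup;
          unsigned long flags;

          /* Fast path: nobody is moving charges, no locking needed. */
          if (!memcg || atomic_read(&memcg->moving_account) <= 0)
                  return;

          /* Move accounting in flight: take the irq-saving spinlock. */
          spin_lock_irqsave(&memcg->move_lock, flags);
          memcg->move_lock_flags = flags; /* unlock_page_memcg() restores these */
  }
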
> >
> > unlocked_inode_to_wb_begin,end() use spin_lock_irq/spin_unlock_irq() if the
> > given inode is switching writeback domains. Switches occur when enough
> > writes are issued from a new domain.
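
(And, roughly, what these two helpers look like before this patch, simplified
from include/linux/backing-dev.h; the unconditional spin_unlock_irq() in the
end helper is the suspicious part:)

  static inline struct bdi_writeback *
  unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
  {
          rcu_read_lock();
          /* Is a writeback-domain switch for this inode in progress? */
          *lockedp = smp_load_acquire(&inode->i_state) & I_WB_SWITCH;
          if (unlikely(*lockedp))
                  spin_lock_irq(&inode->i_mapping->tree_lock); /* disables irqs */
          return inode_to_wb(inode);
  }

  static inline void unlocked_inode_to_wb_end(struct inode *inode, bool locked)
  {
          if (unlikely(locked))
                  /* re-enables irqs even if the caller had them disabled */
                  spin_unlock_irq(&inode->i_mapping->tree_lock);
          rcu_read_unlock();
  }
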
> >
> > This existing pattern is thus suspicious:
> >   lock_page_memcg(page);
> >   unlocked_inode_to_wb_begin(inode, &locked);
> >   ...
> >   unlocked_inode_to_wb_end(inode, locked);
> >   unlock_page_memcg(page);
> >
> > If both inode switch and process memcg migration are in-flight then
> > unlocked_inode_to_wb_end() will unconditionally enable interrupts while
> > still holding the lock_page_memcg() irq spinlock. This suggests the
> > possibility of deadlock if an interrupt occurs before
> > unlock_page_memcg().
> >
> > truncate
> > __cancel_dirty_page
> > lock_page_memcg
> >   unlocked_inode_to_wb_begin
> >   unlocked_inode_to_wb_end
> >     <interrupts mistakenly enabled>
> >                   <interrupt>
> >                   end_page_writeback
> >                   test_clear_page_writeback
> >                   lock_page_memcg
> >                     <deadlock>
> > unlock_page_memcg
> >
> > Due to configuration limitations this deadlock is not currently possible
> > because we don't mix cgroup writeback (a cgroupv2 feature) and
> > memory.move_charge_at_immigrate (a cgroupv1 feature).
> >
> > If the kernel is hacked to always claim inode switching and memcg
> > moving_account, then this script triggers a lockup in less than a minute:
> >   cd /mnt/cgroup/memory
> >   mkdir a b
> >   echo 1 > a/memory.move_charge_at_immigrate
> >   echo 1 > b/memory.move_charge_at_immigrate
> >   (
> >     echo $BASHPID > a/cgroup.procs
> >     while true; do
> >       dd if=/dev/zero of=/mnt/big bs=1M count=256
> >     done
> >   ) &
> >   while true; do
> >     sync
> >   done &
> >   sleep 1h &
> >   SLEEP=$!
> >   while true; do
> >     echo $SLEEP > a/cgroup.procs
> >     echo $SLEEP > b/cgroup.procs
> >   done
> >
> > Given the deadlock is not currently possible, it's debatable whether there's
> > any reason to modify the kernel. I suggest we should, to prevent future
> > surprises.
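
For illustration, a safer nesting consistent with the description above has the
begin/end helpers save and restore the caller's interrupt state through a
cookie instead of a bool (a sketch, not necessarily the exact patch):

  struct wb_lock_cookie {
          bool locked;
          unsigned long flags;
  };

  static inline struct bdi_writeback *
  unlocked_inode_to_wb_begin(struct inode *inode, struct wb_lock_cookie *cookie)
  {
          rcu_read_lock();
          cookie->locked = smp_load_acquire(&inode->i_state) & I_WB_SWITCH;
          if (unlikely(cookie->locked))
                  spin_lock_irqsave(&inode->i_mapping->tree_lock, cookie->flags);
          return inode_to_wb(inode);
  }

  static inline void unlocked_inode_to_wb_end(struct inode *inode,
                                              struct wb_lock_cookie *cookie)
  {
          if (unlikely(cookie->locked))
                  /* restore the caller's irq state rather than enabling irqs */
                  spin_unlock_irqrestore(&inode->i_mapping->tree_lock,
                                         cookie->flags);
          rcu_read_unlock();
  }
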
> This deadlock has occurred three times in our environment. It would be better
> to cc the stable kernel list and backport the fix.

That's interesting. Are you using cgroup v1 or v2? Do you enable
memory.move_charge_at_immigrate? I assume you've been using 4.4 stable; I'll
take a closer look at a 4.4 stable backport.
