Subject: Re: [BUG] fatal hang untarring 90GB file, possibly writeback related.
From: James Bottomley
Date: 2011-04-28
On Thu, 2011-04-28 at 18:18 +0100, Mel Gorman wrote:
> On Thu, Apr 28, 2011 at 11:56:17AM -0500, James Bottomley wrote:
> > # Events: 6K cycles
> > #
> > # Overhead Command Shared Object Symbol
> > # ........ ........... ................... .......................................
> > #
> > 20.41% kswapd0 [kernel.kallsyms] [k] shrink_slab
> > |
> > --- shrink_slab
> > |
> > |--99.91%-- kswapd
> > | kthread
> > | kernel_thread_helper
> > --0.09%-- [...]
> >
>
> Ok. I can't see how the patch "mm: vmscan: reclaim order-0 and use
> compaction instead of lumpy reclaim" is related unless we are seeing
> two problems that happen to manifest in a similar manner.
>
> However, there were a number of changes made to dcache in particular
> for 2.6.38. Specifically, things like dentry_kill use trylocks and are
> happy to loop around if they fail to acquire anything. See things like
> this for example;

OK, so for this, I tried a 2.6.37 kernel. It doesn't work very well;
networking is hosed for no reason I can see (probably systemd / cgroups
problems).

However, it runs enough for me to say that the tar proceeds to
completion in a non-PREEMPT kernel (I tried several times for good
measure). That makes this definitely a regression of some sort, but it
doesn't definitively identify the dcache code ... it could equally be
an ext4 bug introduced in 2.6.38.

> static void try_prune_one_dentry(struct dentry *dentry)
> 	__releases(dentry->d_lock)
> {
> 	struct dentry *parent;
>
> 	parent = dentry_kill(dentry, 0);
> 	/*
> 	 * If dentry_kill returns NULL, we have nothing more to do.
> 	 * if it returns the same dentry, trylocks failed. In either
> 	 * case, just loop again.
>
>
> If this, in combination with many inodes being locked for whatever
> reason (writeback locking them, maybe?), is causing the shrinker to
> return after zero progress, it could in turn cause kswapd to enter
> a loop for longish periods of time in shrink_slab here;
>
> 	while (total_scan >= SHRINK_BATCH) {
> 		long this_scan = SHRINK_BATCH;
> 		int shrink_ret;
> 		int nr_before;
>
> 		nr_before = (*shrinker->shrink)(shrinker, 0, gfp_mask);
> 		shrink_ret = (*shrinker->shrink)(shrinker, this_scan,
> 							gfp_mask);
> 		if (shrink_ret == -1)
> 			break;
> 		if (shrink_ret < nr_before)
> 			ret += nr_before - shrink_ret;
> 		count_vm_events(SLABS_SCANNED, this_scan);
> 		total_scan -= this_scan;
>
> 		cond_resched();
> 	}
>
> That would explain this trace.
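
To make that failure mode concrete, here is a minimal, hypothetical
shrinker of the shape being described (the name, lock and counter are
made up; this is not the actual dcache shrinker, just the 2.6.38-era
callback signature that the shrink_slab() loop above calls). When its
lock is contended it bails out without freeing anything and reports the
same object count, so shrink_slab() records no progress and kswapd just
keeps coming back to it:

#include <linux/mm.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);
static int example_nr_objects = 100000;

static int example_shrink(struct shrinker *s, int nr_to_scan, gfp_t gfp_mask)
{
	if (!nr_to_scan)
		return example_nr_objects;	/* query pass: just report the count */

	if (!spin_trylock(&example_lock))
		return example_nr_objects;	/* contended: free nothing, count unchanged */

	/* ... scan up to nr_to_scan objects here, freeing what we can ... */
	spin_unlock(&example_lock);
	return example_nr_objects;
}

static struct shrinker example_shrinker = {
	.shrink	= example_shrink,
	.seeks	= DEFAULT_SEEKS,
};

Every pass through that while loop then burns a SHRINK_BATCH worth of
"scanning" that frees nothing, and reclaim gets no closer to the
watermarks.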
>
> > 9.98% kswapd0 [kernel.kallsyms] [k] shrink_zone
> > |
> > --- shrink_zone
> > |
> > |--99.46%-- kswapd
> > | kthread
> > | kernel_thread_helper
> > |
> > --0.54%-- kthread
> > kernel_thread_helper
> >
> > 7.70% kswapd0 [kernel.kallsyms] [k] kswapd
> > |
> > --- kswapd
> > kthread
> > kernel_thread_helper
> >
> > 5.40% kswapd0 [kernel.kallsyms] [k] zone_watermark_ok_safe
> > |
> > --- zone_watermark_ok_safe
> > |
> > |--72.66%-- kswapd
> > | kthread
> > | kernel_thread_helper
> > |
> > |--20.88%-- sleeping_prematurely.part.12
> > | kswapd
> > | kthread
> > | kernel_thread_helper
> > |
> > --6.46%-- kthread
> > kernel_thread_helper
> >
>
> We are also spending an astonishing amount of time in
> sleeping_prematurely, leading me to believe we are failing to balance the
> zones and are continually under the min watermark for one of the zones.
> We are never going to sleep because of this check;
>
> 	if (total_scanned && (priority < DEF_PRIORITY - 2)) {
> 		if (has_under_min_watermark_zone)
> 			count_vm_event(KSWAPD_SKIP_CONGESTION_WAIT);
> 		else
> 			congestion_wait(BLK_RW_ASYNC, HZ/10);
> 	}
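
This also ties in with the sleeping_prematurely() and
zone_watermark_ok_safe() hits in the profile. Roughly (a paraphrase,
not the exact 2.6.38 code), before kswapd lets itself go back to sleep
it walks the node's zones and checks each against its watermark,
something like:

	/* paraphrase of the "is it premature for kswapd to sleep?" check */
	for (i = 0; i <= classzone_idx; i++) {
		struct zone *zone = pgdat->node_zones + i;

		if (!populated_zone(zone))
			continue;

		if (!zone_watermark_ok_safe(zone, order,
					high_wmark_pages(zone), 0, 0))
			return true;	/* still unbalanced: stay awake */
	}
	return false;			/* balanced: safe to sleep */

If one zone can never be brought up to its watermark because the
shrinkers are making no headway, that check keeps failing and kswapd
never rests, which is what the trace shows.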
>
> However, I think this is a secondary effect to the failure of shrinkers
> to do their work. If slabs were being shrunk, one would expect us to
> be getting over the min watermark.
>
> > 4.25% kswapd0 [kernel.kallsyms] [k] do_raw_spin_lock
> > |
> > --- do_raw_spin_lock
> > |
> > |--77.49%-- _raw_spin_lock
> > | |
> > | |--51.85%-- mb_cache_shrink_fn
> > | | shrink_slab
> > | | kswapd
> > | | kthread
> > | | kernel_thread_helper
> > | |
> > | --48.15%-- mem_cgroup_soft_limit_reclaim
> > | kswapd
> > | kthread
> > | kernel_thread_helper
> > |
>
> Way hey, cgroups are also in the mix. How jolly.
>
> Is systemd a common element of the machines hitting this bug by any
> chance?

Well, yes, the bug report is against FC15, which needs cgroups for
systemd.

> The remaining traces seem to be follow-on damage related to the three
> issues of "shrinkers are bust in some manner" causing "we are not
> getting over the min watermark" and as a side-show "we are spending lots
> of time doing something unspecified but unhelpful in cgroups".

Heh, well, find a way for me to verify this: I can't turn off cgroups
because systemd then won't work and the machine won't boot ...

James



