    Subject: Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
    On Sun, Sep 05, 2010 at 09:45:54PM +0800, Wu Fengguang wrote:
    > [restoring CC list]
    >
    > On Sun, Sep 05, 2010 at 09:14:47PM +0800, Dave Chinner wrote:
    > > On Sun, Sep 05, 2010 at 02:05:39PM +0800, Wu Fengguang wrote:
    > > > On Sun, Sep 05, 2010 at 10:15:55AM +0800, Dave Chinner wrote:
    > > > > On Sun, Sep 05, 2010 at 09:54:00AM +0800, Wu Fengguang wrote:
    > > > > > Dave, could you post (publicly) the kconfig and /proc/vmstat?
    > > > > >
    > > > > > I'd like to check if you have swap or memory compaction enabled..
    > > > >
    > > > > Swap is enabled - it has 512MB of swap space:
    > > > >
    > > > > $ free
    > > > >              total       used       free     shared    buffers     cached
    > > > > Mem:       4054304     100928    3953376          0       4096      43108
    > > > > -/+ buffers/cache:      53724    4000580
    > > > > Swap:       497976          0     497976
    > > >
    > > > It looks like swap is not used at all.
    > >
    > > It isn't 30s after boot, but I haven't checked after a livelock.
    >
    > That's fine. I see in your fs_mark-wedge-1.png that there is no
    > read/write IO at all while the CPUs are 100% busy. So there should
    > be no swap IO at "livelock" time.
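    >
    > One way to double-check once it wedges again (a sketch; pswpin and
    > pswpout in /proc/vmstat are the cumulative counts of pages swapped
    > in/out, so they should stay flat across the livelock):
    >
    > $ grep -E '^pswp(in|out)' /proc/vmstat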
    >
    > > > > And memory compaction is not enabled:
    > > > >
    > > > > $ grep COMPACT .config
    > > > > # CONFIG_COMPACTION is not set
    >
    > Memory compaction is not likely the cause either; it only kicks in
    > for order > 3 allocations.
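    >
    > (If it were enabled, compaction activity would show up as compact_*
    > counters in /proc/vmstat, e.g.:
    >
    > $ grep ^compact_ /proc/vmstat
    >
    > With CONFIG_COMPACTION unset those counters don't exist at all.)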
    >
    > > > >
    > > > > The .config is pretty much a 'make defconfig' and then enabling XFS and
    > > > > whatever debug I need (e.g. locking, memleak, etc).
    > > >
    > > > Thanks! The problem seems hard to debug -- you cannot log in at
    > > > all while the lock contention is happening, so you cannot get
    > > > sysrq call traces.
    > >
    > > Well, I don't know whether it is lock contention at all. The sets of
    > > traces I have got previously have shown backtraces on all CPUs in
    > > direct reclaim with several in draining queues, but no apparent lock
    > > contention.
    >
    > That's interesting. Do you still have the full backtraces?
    >
    > Maybe your system eats too much slab cache (icache/dcache) by creating
    > so many zero-sized files. The system may run into problems reclaiming
    > so many (dirty) slab pages.

    Yes, that's where most of the memory pressure is coming from.
    However, it's not stuck reclaiming slab - it's pretty clear from
    another chart that I run that the slab cache contents are not
    changing across the livelock. IOWs, it appears to get stuck before it
    gets to shrink_slab().
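
    (A minimal way to watch the same thing without a chart, assuming the
    standard counters - nr_slab_reclaimable/nr_slab_unreclaimable in
    /proc/vmstat are page counts, and /proc/slabinfo has the per-cache
    breakdown:

    $ grep ^nr_slab /proc/vmstat
    $ grep -E '^(dentry|xfs_inode)' /proc/slabinfo

    If those stay flat across the livelock, reclaim really isn't getting
    as far as shrink_slab().)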

    Worth noting, though, is that XFS metadata workloads do create page
    cache pressure as well - all the metadata pages are cached in a
    separate address space - so perhaps it is getting stuck there...
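
    (If so, it would be visible as page cache rather than slab - e.g. by
    watching nr_file_pages across the livelock, assuming those buffer
    pages are accounted as file pages:

    $ grep ^nr_file_pages /proc/vmstat )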

    > > > How about enabling CONFIG_LOCK_STAT? Then you can check
    > > > /proc/lock_stat when the contentions are over.
    > >
    > > Enabling the locking debug/stats gathering slows the workload
    > > by a factor of 3 and doesn't produce the livelock....
    >
    > Oh sorry.. but it would still be interesting to check the top
    > contended locks for this workload without any livelocks :)

    I'll see what I can do.
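
    (FWIW, the stats can be switched on just around the workload to keep
    the overhead down - a sketch, assuming CONFIG_LOCK_STAT=y and the
    knobs described in Documentation/lockstat.txt:

    $ echo 0 > /proc/lock_stat              # clear accumulated stats
    $ echo 1 > /proc/sys/kernel/lock_stat   # start collecting
      ... run fs_mark ...
    $ echo 0 > /proc/sys/kernel/lock_stat   # stop collecting
    $ less /proc/lock_stat                  # per-lock contention counts)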

    Cheers,

    Dave.
    --
    Dave Chinner
    david@fromorbit.com

