    Subject: Re: What's the NFS OOM problem?
    On Fri, 2006-08-11 at 10:33 +1000, Neil Brown wrote:
    > On Thursday August 10, wrote:
    > >
    > > > Can someone help me and give me a brief description on OOM issue?
    > >
    > > I don't know about any OOM issue related to NFS. At most it might happen
    > > on the client (e.g. starting firefox from an NFS root), which might not
    > > have enough memory for new network buffers, but I don't even know if
    > > that is possible at all.
    > We've had reports of OOM problems with NFS at SuSE.
    > The common factors seem to be lots of memory (6G+) and very large
    > files.
    > Tuning down /proc/sys/vm/dirty_*ratio seems to avoid the problem,
    > but I'm not very close to understanding what the real problem is.

    Would it not be related to mmap'ed files, where the client will not
    track the dirty pages? This makes the reclaim code go crap itself:
    suddenly not a single page is easily freeable anymore, all pages are
    found to be dirty and require writeback, which itself takes more
    memory (network packets) and has to wait for a proper answer.
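
    For illustration, something along these lines (a sketch of mine, not
    taken from any of the reports; the path and size are made up): a
    process that dirties a large file purely through a shared mapping, so
    today none of those pages are accounted as dirty.

        #include <fcntl.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            size_t len = 1UL << 30;             /* 1G of an existing big file */
            int fd = open("/mnt/nfs/bigfile", O_RDWR);   /* made-up path */
            char *p;

            if (fd < 0)
                return 1;
            p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED)
                return 1;

            /* plain stores: no write(2), hence no dirty accounting today */
            memset(p, 0xaa, len);

            /* no msync(); writeback is left entirely to the VM */
            munmap(p, len);
            close(fd);
            return 0;
        }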

    Andrew is currently carrying some patches that will avoid this problem
    by virtue of tracking the dirtying of mmap'ed pages. With these patches
    nr_dirty is properly incremented and the pdflush logic should kick in
    and do its thing.
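
    An easy way to watch the difference (again just a quick sketch, not
    part of the patches themselves): read the Dirty: counter from
    /proc/meminfo while something stores through a shared mapping. Without
    the accounting it barely moves; with the patches it should climb much
    like it does for write(2).

        #include <stdio.h>

        int main(void)
        {
            char line[128];
            long kb;
            FILE *f = fopen("/proc/meminfo", "r");

            if (!f)
                return 1;
            while (fgets(line, sizeof(line), f)) {
                if (sscanf(line, "Dirty: %ld kB", &kb) == 1) {
                    printf("Dirty: %ld kB\n", kb);
                    break;
                }
            }
            fclose(f);
            return 0;
        }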

    This would explain why lowering dirty_*ratio would sometimes help: that
    kicks off the pdflush threads earlier, which would then run into the
    dirty pages they did not know about.
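
    As a stopgap on such boxes, that tuning is just lowering the vm knobs
    Neil mentioned, e.g. something like the below (the exact values are
    plucked out of thin air):

        #include <stdio.h>

        static int write_vm_knob(const char *path, int val)
        {
            FILE *f = fopen(path, "w");

            if (!f)
                return -1;
            fprintf(f, "%d\n", val);
            return fclose(f);
        }

        int main(void)
        {
            /* start (background) writeback much earlier than the defaults */
            write_vm_knob("/proc/sys/vm/dirty_background_ratio", 2);
            write_vm_knob("/proc/sys/vm/dirty_ratio", 5);
            return 0;
        }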
