Subject: Re: [RFC] memory defragmentation to satisfy high order allocations
Hello,

> > Pages for NFS might also be pinned by network problems.
> > One idea is to restrict NFS to allocating pages from a
> > specific memory region, so that all memory outside that region
> > can be hot-removed. It would also be possible to implement a
> > full migrate_page method, which could handle stuck pages.
>
> Why do you want to special-case this?
>
> The above is a generic condition: any filesystem can suffer from the
> equivalent problem of a failure or slow response in the underlying
> device. Making an NFS-specific hack is just counter-productive to
> solving the generic problem.

However, while the network is down, network/cluster filesystems might
never release their pages, unlike block devices, which time out or
return an error on failure.

Each filesystem can control what the migration code does.
If a filesystem has nothing that helps memory migration, it is possible
to wait for the network to come back up before starting memory migration,
or to give up if the network happens to be down. That's no problem.
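
As a rough illustration of such a per-filesystem hook, here is a minimal
sketch. It assumes a ->migratepage operation in address_space_operations
of the kind discussed above (mainline only grew an interface along these
lines later); the examplefs_* names and the server-reachability helper
are hypothetical.

/*
 * Minimal sketch only: assumes a per-filesystem migration hook of the
 * kind discussed above.  The examplefs_* names and the reachability
 * helper are hypothetical, not real NFS code.
 */
#include <linux/fs.h>
#include <linux/migrate.h>

/* Hypothetical helper: true while the server still answers. */
static bool examplefs_server_is_reachable(struct inode *inode)
{
	return true;	/* placeholder; a real fs would probe its transport */
}

static int examplefs_migrate_page(struct address_space *mapping,
				  struct page *newpage,
				  struct page *page)
{
	/*
	 * While the network is down the page may stay pinned
	 * indefinitely; ask the migration core to retry later
	 * (or give up) instead of blocking the whole pass.
	 */
	if (!examplefs_server_is_reachable(mapping->host))
		return -EAGAIN;

	/* Otherwise use the generic copy-and-replace helper. */
	return migrate_page(mapping, newpage, page);
}

static const struct address_space_operations examplefs_aops = {
	/* regular readpage/writepage operations omitted */
	.migratepage	= examplefs_migrate_page,
};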

Thank you,
Hirokazu Takahashi.

