Subject: Re: [PATCH 2/2] Wait for page writeback when directly reclaiming contiguous areas
On Sat, 28 Jul 2007 23:52:30 +0100
Andy Whitcroft <apw@shadowen.org> wrote:

>
> From: Mel Gorman <mel@csn.ul.ie>
>
> Lumpy reclaim works by selecting a lead page from the LRU list and then
> selecting pages for reclaim from the order-aligned area of pages. In the
> situation where all pages in that region are inactive and not referenced by
> any process over time, it works well.
>
> In the situation where there is even light load on the system, the pages may
> not free quickly. Out of an area of 1024 pages, maybe only 950 of them are
> freed when the allocation attempt occurs because lumpy reclaim returned early.
> This patch alters the behaviour of direct reclaim for large contiguous blocks.
>
> The first attempt to call shrink_page_list() is asynchronous but if it
> fails, the pages are submitted a second time and the calling process waits
> for the IO to complete. It will retry up to 5 times waiting for the pages
> to be fully freed. This may stall allocators waiting for contiguous memory,
> but that should be expected behaviour for high-order users. It is preferable
> to potentially queueing unnecessary areas for IO. Note that kswapd
> will not stall in this fashion.

I agree with the intent.
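
For the record, my reading of the changelog is roughly this at the
shrink_inactive_list() call site (a sketch only, untested; the gating
test, the throttle and the retry constant are illustrative, not lifted
from the patch):

	/* First pass: ordinary asynchronous pageout. */
	nr_freed = shrink_page_list(&page_list, sc, PAGEOUT_IO_ASYNC);

	/*
	 * A high-order direct reclaimer which did not get the whole
	 * order-aligned block back resubmits the remaining pages and
	 * waits for their writeback.  kswapd never stalls here.
	 */
	if (nr_freed < nr_taken && !current_is_kswapd() && sc->order > 0) {
		int retries = 5;		/* "up to 5 times" above */

		do {
			/* throttle between attempts (illustrative) */
			congestion_wait(WRITE, HZ/10);
			nr_freed += shrink_page_list(&page_list, sc,
							PAGEOUT_IO_SYNC);
		} while (--retries && nr_freed < nr_taken);
	}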

> +/* Request for sync pageout. */
> +typedef enum {
> + PAGEOUT_IO_ASYNC,
> + PAGEOUT_IO_SYNC,
> +} pageout_io_t;

no typedefs.

(checkpatch.pl knew that ;))
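
IOW this wants to be a bare enum, something like:

	/* Request for sync pageout. */
	enum pageout_io {
		PAGEOUT_IO_ASYNC,
		PAGEOUT_IO_SYNC,
	};

with the users spelling out `enum pageout_io sync_writeback'.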

> /* possible outcome of pageout() */
> typedef enum {
> /* failed to write page out, page is locked */
> @@ -287,7 +293,8 @@ typedef enum {
> * pageout is called by shrink_page_list() for each dirty page.
> * Calls ->writepage().
> */
> -static pageout_t pageout(struct page *page, struct address_space *mapping)
> +static pageout_t pageout(struct page *page, struct address_space *mapping,
> + pageout_io_t sync_writeback)
> {
> /*
> * If the page is dirty, only perform writeback if that write
> @@ -346,6 +353,15 @@ static pageout_t pageout(struct page *page, struct address_space *mapping)
> ClearPageReclaim(page);
> return PAGE_ACTIVATE;
> }
> +
> + /*
> + * Wait on writeback if requested to. This happens when
> + * direct reclaiming a large contiguous area and the
> + * first attempt to free a ranage of pages fails

cnat tpye.

> + */
> + if (PageWriteback(page) && sync_writeback == PAGEOUT_IO_SYNC)
> + wait_on_page_writeback(page);
> +
>
> if (!PageWriteback(page)) {
> /* synchronous write or broken a_ops? */
> ClearPageReclaim(page);
> @@ -423,7 +439,8 @@ cannot_free:
> * shrink_page_list() returns the number of reclaimed pages
> */
> static unsigned long shrink_page_list(struct list_head *page_list,
> - struct scan_control *sc)
> + struct scan_control *sc,
> + pageout_io_t sync_writeback)
> {
> LIST_HEAD(ret_pages);
> struct pagevec freed_pvec;
> @@ -458,8 +475,12 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> if (page_mapped(page) || PageSwapCache(page))
> sc->nr_scanned++;
>
> - if (PageWriteback(page))
> - goto keep_locked;
> + if (PageWriteback(page)) {
> + if (sync_writeback == PAGEOUT_IO_SYNC)
> + wait_on_page_writeback(page);
> + else
> + goto keep_locked;
> + }

This is unneeded and conceivably deadlocky for !__GFP_FS allocations.
We should probably avoid doing all this when the test which computes
may_enter_fs evaluates false.

It's unlikely that any very-high-order allocators are using GFP_NOIO or
whatever, but still...
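
IOW something along these lines at the PageWriteback() test, with the
may_enter_fs calculation hoisted above it (illustrative only):

	if (PageWriteback(page)) {
		/*
		 * Only wait synchronously if the caller may enter the
		 * filesystem; otherwise keep the old behaviour and
		 * skip the page.
		 */
		if (sync_writeback == PAGEOUT_IO_SYNC && may_enter_fs)
			wait_on_page_writeback(page);
		else
			goto keep_locked;
	}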


