Subject: Re: [PATCH v4 -mm] make swapin readahead skip over holes
On 01/25/2012 08:23 PM, Andrew Morton wrote:

> Just to show that I'm paying attention...
>
>> --- a/mm/swap_state.c
>> +++ b/mm/swap_state.c
>> @@ -382,25 +382,23 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>>  struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
>>  			struct vm_area_struct *vma, unsigned long addr)
>>  {
>> -	int nr_pages;
>>  	struct page *page;
>> -	unsigned long offset;
>> -	unsigned long end_offset;
>> +	unsigned long offset = swp_offset(entry);
>> +	unsigned long start_offset, end_offset;
>> +	unsigned long mask = (1 << page_cluster) - 1;
>
> This is broken for page_cluster > 31.  Fix:

I don't know who would want to do their swapins in chunks
of 8GB or larger at a time, but still a good catch.
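
For reference, a minimal userspace sketch of the overflow (not the actual
fix Andrew posted, which is trimmed from the quote above): with a plain int
literal the shift happens in 32 bits, so page_cluster values above 31 are
undefined, while an unsigned long literal keeps the shift in 64 bits on an
LP64 kernel.

	#include <stdio.h>

	int main(void)
	{
		unsigned int page_cluster = 33;	/* > 31, pathological but possible */

		/* "(1 << page_cluster)" would be a 32-bit shift: undefined here */
		unsigned long mask = (1UL << page_cluster) - 1;

		printf("mask = %#lx\n", mask);	/* 0x1ffffffff on a 64-bit box */
		return 0;
	}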

Want me to send in a v5, or do you prefer to merge a -fix
patch in your tree?

--
All rights reversed

