Subject: Re: [v5][PATCH 5/6] mm: vmscan: batch shrink_page_list() locking operations
On 06/03/2013 10:01 PM, Minchan Kim wrote:
>> > +static int __remove_mapping_batch(struct list_head *remove_list,
>> > +                                  struct list_head *ret_pages,
>> > +                                  struct list_head *free_pages)
>> > +{
>> > +        int nr_reclaimed = 0;
>> > +        struct address_space *mapping;
>> > +        struct page *page;
>> > +        LIST_HEAD(need_free_mapping);
>> > +
>> > +        while (!list_empty(remove_list)) {
...
>> > +                if (!__remove_mapping(mapping, page)) {
>> > +                        unlock_page(page);
>> > +                        list_add(&page->lru, ret_pages);
>> > +                        continue;
>> > +                }
>> > +                list_add(&page->lru, &need_free_mapping);
...
> +        spin_unlock_irq(&mapping->tree_lock);
> +        while (!list_empty(&need_free_mapping)) {
...
> +                list_move(&page->list, free_pages);
> +                mapping_release_page(mapping, page);
> +        }
> Why do we need a new lru list instead of using @free_pages?

I actually tried using @free_pages at first. The problem is that we
need to call mapping_release_page() without the radix tree lock
(mapping->tree_lock) held, so we cannot do it in the first while()
loop.

'free_pages' is a list created up in shrink_page_list(). There can be
several calls to __remove_mapping_batch() for each call to
shrink_page_list().

'need_free_mapping' lets us temporarily distinguish the pages that
still need mapping_release_page()/unlock_page() from the ones on
'free_pages', which have already had that done.
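
To make the two phases concrete, the whole function has roughly the
shape below. This is just a sketch, not the literal patch: it assumes
the caller only batches pages that share a single mapping (which is
what lets us take tree_lock once per batch), it uses page->lru
throughout, and mapping_release_page() is the helper this series adds
for the deferred release/unlock work:

static int __remove_mapping_batch(struct list_head *remove_list,
                                  struct list_head *ret_pages,
                                  struct list_head *free_pages)
{
        int nr_reclaimed = 0;
        struct address_space *mapping;
        struct page *page;
        LIST_HEAD(need_free_mapping);

        if (list_empty(remove_list))
                return 0;

        /* Phase 1: take tree_lock once and detach every page from
         * the radix tree.  Pages that lose the race inside
         * __remove_mapping() go back to the caller via 'ret_pages'. */
        mapping = page_mapping(lru_to_page(remove_list));
        spin_lock_irq(&mapping->tree_lock);
        while (!list_empty(remove_list)) {
                page = lru_to_page(remove_list);
                list_del(&page->lru);
                if (!__remove_mapping(mapping, page)) {
                        unlock_page(page);
                        list_add(&page->lru, ret_pages);
                        continue;
                }
                list_add(&page->lru, &need_free_mapping);
        }
        spin_unlock_irq(&mapping->tree_lock);

        /* Phase 2: the lock is dropped, so it is now safe to do the
         * release/unlock work and hand the pages to 'free_pages'. */
        while (!list_empty(&need_free_mapping)) {
                page = lru_to_page(&need_free_mapping);
                list_move(&page->lru, free_pages);
                mapping_release_page(mapping, page);
                nr_reclaimed++;
        }
        return nr_reclaimed;
}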

We could theoretically delay _all_ of the
mapping_release_page()/unlock_page() operations until the _entire_
shrink_page_list() operation is done, but unlocking at the end of each
batch keeps pages locked for a much shorter time, which really helps
with lock_page() latency.
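
Caller-side, the pattern is something like the sketch below (the
function name and flush points are mine, purely for illustration):
shrink_page_list() accumulates same-mapping pages and flushes the
batch whenever the mapping changes, so each tree_lock hold and each
burst of deferred unlock_page()s stays small instead of growing with
the whole page list:

/* Illustrative only -- not the patch's actual code. */
static unsigned long drain_removal_batches(struct list_head *page_list,
                                           struct list_head *ret_pages,
                                           struct list_head *free_pages)
{
        unsigned long nr_reclaimed = 0;
        struct address_space *batch_mapping = NULL;
        LIST_HEAD(batch);

        while (!list_empty(page_list)) {
                struct page *page = lru_to_page(page_list);

                if (page_mapping(page) != batch_mapping) {
                        /* mapping changed: release the current batch
                         * now rather than at the very end, keeping
                         * lock_page() hold times short */
                        nr_reclaimed += __remove_mapping_batch(&batch,
                                                ret_pages, free_pages);
                        batch_mapping = page_mapping(page);
                }
                list_move(&page->lru, &batch);
        }
        /* flush whatever is left over */
        nr_reclaimed += __remove_mapping_batch(&batch, ret_pages,
                                               free_pages);
        return nr_reclaimed;
}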

Does that make sense?

