Subject: Re: [patch 13/21] deferred and batched addition of faulted-in pages to the LRU
On Sun, Aug 11, 2002 at 12:39:30AM -0700, Andrew Morton wrote:
>  	pagevec_init(&pvec);
> 
> +	lru_add_drain();
>  	spin_lock_irq(&_pagemap_lru_lock);
>  	while (max_scan > 0 && nr_pages > 0) {
>  		struct page *page;
> @@ -380,6 +381,7 @@ static /* inline */ void refill_inactive
>  	struct page *page;
>  	struct pagevec pvec;
> 
> +	lru_add_drain();
>  	spin_lock_irq(&_pagemap_lru_lock);
>  	while (nr_pages && !list_empty(&active_list)) {
>  		page = list_entry(active_list.prev, struct page, lru);
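
For anyone reading this cold, here is a rough user-space model of the batching
being discussed (hypothetical names and a pthread mutex standing in for
pagemap_lru_lock; this is not the kernel's pagevec API). Faulted-in pages are
queued on a per-thread batch and only spliced onto the global LRU in bulk, so
the LRU lock is taken once per batch rather than once per page; the quoted
hunks call the drain right before taking the lock so reclaim sees any pages
still sitting in the local batch.

	#include <pthread.h>

	#define BATCH_SIZE 16

	struct page_stub { struct page_stub *next; };

	static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;
	static struct page_stub *lru_head;		/* global "LRU" list */

	static __thread struct page_stub *batch[BATCH_SIZE];
	static __thread int batch_count;

	/* Splice the local batch onto the LRU, taking the lock ourselves. */
	static void drain_batch(void)
	{
		int i;

		pthread_mutex_lock(&lru_lock);
		for (i = 0; i < batch_count; i++) {
			batch[i]->next = lru_head;
			lru_head = batch[i];
		}
		batch_count = 0;
		pthread_mutex_unlock(&lru_lock);
	}

	/* Fault path: defer the page; only every BATCH_SIZE'th add locks. */
	static void lru_cache_add_stub(struct page_stub *page)
	{
		batch[batch_count++] = page;
		if (batch_count == BATCH_SIZE)
			drain_batch();
	}

	/* Reclaim path, mirroring the quoted hunks: drain, then scan under
	 * the lock. */
	static void shrink_cache_stub(void)
	{
		drain_batch();
		pthread_mutex_lock(&lru_lock);
		/* ... walk lru_head, scanning/reclaiming pages ... */
		pthread_mutex_unlock(&lru_lock);
	}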


It looks to me like it could be advantageous to perform the draining while
already under the lock, rather than taking it separately for the drain. But
the gains from this and the other changes are already so large that this
could have at most a minor effect.
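
In terms of the sketch above, that suggestion (my rendering, not Bill's
actual code) would look roughly like this: take the lock first and drain
with it already held, saving one lock/unlock pair per reclaim pass.

	/* Same splice loop as drain_batch() minus the locking;
	 * caller must hold lru_lock. */
	static void drain_batch_locked(void)
	{
		int i;

		for (i = 0; i < batch_count; i++) {
			batch[i]->next = lru_head;
			lru_head = batch[i];
		}
		batch_count = 0;
	}

	static void shrink_cache_stub_locked_drain(void)
	{
		pthread_mutex_lock(&lru_lock);
		drain_batch_locked();
		/* ... walk lru_head, scanning/reclaiming pages ... */
		pthread_mutex_unlock(&lru_lock);
	}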

Cheers,
Bill
