Subject: Re: [PATCH v3 16/21] mm: handle lruvec relocks in compaction
KAMEZAWA Hiroyuki wrote:
> On Thu, 23 Feb 2012 17:52:56 +0400
> Konstantin Khlebnikov <khlebnikov@openvz.org> wrote:
>
>> Prepare for lru_lock splitting in memory compaction code.
>>
>> * disable irqs in acct_isolated() for __mod_zone_page_state(),
>> lru_lock isn't required there.
>>
>> Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
>> ---
>> mm/compaction.c | 30 ++++++++++++++++--------------
>> 1 files changed, 16 insertions(+), 14 deletions(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index a976b28..54340e4 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -224,8 +224,10 @@ static void acct_isolated(struct zone *zone, struct compact_control *cc)
>> list_for_each_entry(page, &cc->migratepages, lru)
>> count[!!page_is_file_cache(page)]++;
>>
>> + local_irq_disable();
>> __mod_zone_page_state(zone, NR_ISOLATED_ANON, count[0]);
>> __mod_zone_page_state(zone, NR_ISOLATED_FILE, count[1]);
>> + local_irq_enable();
>
> Why do we need to disable IRQs here?

__mod_zone_page_state() wants this to protect its per-cpu counters; maybe preempt_disable() is enough.
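
For reference, a minimal sketch of that preempt_disable() variant (this assumes, as the question above implies, that no interrupt path updates these counters; otherwise disabling IRQs stays necessary):

	static void acct_isolated(struct zone *zone, struct compact_control *cc)
	{
		struct page *page;
		unsigned int count[2] = { 0, };

		list_for_each_entry(page, &cc->migratepages, lru)
			count[!!page_is_file_cache(page)]++;

		/*
		 * __mod_zone_page_state() updates per-cpu counters, so it
		 * only needs this task pinned to one CPU while it runs.
		 */
		preempt_disable();
		__mod_zone_page_state(zone, NR_ISOLATED_ANON, count[0]);
		__mod_zone_page_state(zone, NR_ISOLATED_FILE, count[1]);
		preempt_enable();
	}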

>
>
>
>> }
>>
>> /* Similar to reclaim, but different enough that they don't share logic */
>> @@ -262,7 +264,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
>> unsigned long nr_scanned = 0, nr_isolated = 0;
>> struct list_head *migratelist = &cc->migratepages;
>> isolate_mode_t mode = ISOLATE_ACTIVE|ISOLATE_INACTIVE;
>> - struct lruvec *lruvec;
>> + struct lruvec *lruvec = NULL;
>>
>> /* Do not scan outside zone boundaries */
>> low_pfn = max(cc->migrate_pfn, zone->zone_start_pfn);
>> @@ -294,25 +296,24 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
>>
>> /* Time to isolate some pages for migration */
>> cond_resched();
>> - spin_lock_irq(&zone->lru_lock);
>> for (; low_pfn < end_pfn; low_pfn++) {
>> struct page *page;
>> - bool locked = true;
>>
>> /* give a chance to irqs before checking need_resched() */
>> if (!((low_pfn+1) % SWAP_CLUSTER_MAX)) {
>> - spin_unlock_irq(&zone->lru_lock);
>> - locked = false;
>> + if (lruvec)
>> + unlock_lruvec_irq(lruvec);
>> + lruvec = NULL;
>> }
>> - if (need_resched() || spin_is_contended(&zone->lru_lock)) {
>> - if (locked)
>> - spin_unlock_irq(&zone->lru_lock);
>> + if (need_resched() ||
>> (lruvec && spin_is_contended(&zone->lru_lock))) {
>> + if (lruvec)
>> + unlock_lruvec_irq(lruvec);
>> + lruvec = NULL;
>> cond_resched();
>> - spin_lock_irq(&zone->lru_lock);
>> if (fatal_signal_pending(current))
>> break;
>> - } else if (!locked)
>> - spin_lock_irq(&zone->lru_lock);
>> + }
>>
>> /*
>> * migrate_pfn does not necessarily start aligned to a
>> @@ -359,7 +360,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
>> continue;
>> }
>>
>> - if (!PageLRU(page))
>> + if (!__lock_page_lruvec_irq(&lruvec, page))
>> continue;
>
> Could you add more comments to __lock_page_lruvec_irq()?

Actually there is a very unlikely race with page free-realloc here
(which is fixed in Hugh's patchset, and surprisingly was also fixed in my old memory controller),
so this part will be redesigned.
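
To illustrate the intended pattern (only a sketch, not the actual implementation; page_lruvec() and lock_lruvec_irq() are assumed names here, while unlock_lruvec_irq() is from the patch above):

	static bool __lock_page_lruvec_irq(struct lruvec **lruvec,
					   struct page *page)
	{
		struct lruvec *want = page_lruvec(page);	/* assumed lookup */

		if (*lruvec != want) {
			/* Switch locks if we hold a different lruvec's lock. */
			if (*lruvec)
				unlock_lruvec_irq(*lruvec);
			lock_lruvec_irq(want);			/* assumed name */
			*lruvec = want;
		}
		/*
		 * Re-check under the lock.  This alone does not close the
		 * free-realloc race above: the page may have been freed and
		 * reallocated between page_lruvec() and taking the lock, so
		 * it can now belong to a different lruvec.
		 */
		return PageLRU(page);
	}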

>
> Thanks,
> -Kame
>


