From: Cliff Wickman <cpw@sgi.com>
Subject: Re: [PATCH] makedumpfile: request the kernel do page scans
Date: Mon, 10 Dec 2012 09:36:14 -0600
> On Mon, Dec 10, 2012 at 09:59:29AM +0900, HATAYAMA Daisuke wrote:
>> From: Cliff Wickman <cpw@sgi.com>
>> Subject: Re: [PATCH] makedumpfile: request the kernel do page scans
>> Date: Mon, 19 Nov 2012 12:07:10 -0600
>>
>> > On Fri, Nov 16, 2012 at 03:39:44PM -0500, Vivek Goyal wrote:
>> >> On Thu, Nov 15, 2012 at 04:52:40PM -0600, Cliff Wickman wrote:
>
> Hi Hatayama,
>
> If ioremap/iounmap is the bottleneck then perhaps you could do what
> my patch does: it consolidates all the ranges of physical addresses
> where the boot kernel's page structures reside (see make_kernel_mmap())
> and passes them to the kernel, which then does a handful of ioremaps to
> cover all of them. Then /proc/vmcore could look up the already-mapped
> virtual address.
> (Also note a kludge in get_mm_sparsemem() that verifies that each section
> of the mem_map spans contiguous ranges of page structures. I had
> trouble with some sections when I made that assumption.)
>
> I'm attaching 3 patches that might be useful in your testing:
> - 121210.proc_vmcore2: my current patch, which applies to the
>   released makedumpfile 1.5.1
> - 121207.vmcore_pagescans.sles: applies to a 3.0.13 kernel
> - 121207.vmcore_pagescans.rhel: applies to a 2.6.32 kernel
>

I used the same patch set in my benchmark.
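
For reference, my reading of the consolidate-and-premap approach above,
as a rough sketch only (the struct and function names here are mine,
not from your patch):

    #include <linux/io.h>
    #include <linux/errno.h>

    /* One consolidated physical range of boot-kernel page structures. */
    struct phys_range {
            unsigned long long start;   /* physical start address */
            unsigned long long end;     /* physical end address   */
            void __iomem *vaddr;        /* mapping set up below   */
    };

    /* Map every consolidated range once, up front. */
    static int premap_ranges(struct phys_range *r, int nr)
    {
            int i;

            for (i = 0; i < nr; i++) {
                    r[i].vaddr = ioremap(r[i].start,
                                         r[i].end - r[i].start);
                    if (!r[i].vaddr)
                            return -ENOMEM;
            }
            return 0;
    }

    /* A /proc/vmcore read then resolves against the premapped
     * ranges instead of doing ioremap/iounmap per access. */
    static void __iomem *premap_lookup(struct phys_range *r, int nr,
                                       unsigned long long paddr)
    {
            int i;

            for (i = 0; i < nr; i++)
                    if (paddr >= r[i].start && paddr < r[i].end)
                            return r[i].vaddr + (paddr - r[i].start);
            return NULL;
    }

If that matches what your patch does, the per-access ioremap cost
disappears and only the handful of initial mappings remains.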

BTW, I continuously have a machine reservation issue, so I think I
cannot use a terabyte memory machine at least within this year.

Also, your patch set does ioremap per chunk of the memory map,
i.e. it remaps a number of consecutive pages at the same time. On your
terabyte machines, how large are those chunks? We have a memory
consumption issue in the 2nd kernel, so we must decrease the amount of
memory used. But looking quickly into the ioremap code, it does not
appear to use 2MB or 1GB pages for remapping. This means that remapping
more than a terabyte of memory generates very large page tables. Or
have you perhaps already investigated this?
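
To put rough numbers on that (my own back-of-the-envelope figures,
assuming x86_64 with 8-byte page table entries):

    1TB / 4KB = 256M PTE entries x 8 bytes = 2GB of page tables
    1TB / 2MB = 512K PMD entries x 8 bytes = 4MB
    1TB / 1GB =   1K PUD entries x 8 bytes = 8KB

So with 4KB mappings, every terabyte remapped costs about 2GB just in
last-level page tables, which is exactly the kind of memory we cannot
afford in the 2nd kernel; 2MB or 1GB mappings reduce this to almost
nothing.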

BTW, I have two ideas to solve this issue:

1) Make a linear direct mapping for the old memory, and access the old
memory via that linear direct mapping, not via ioremap.

 - by adding remap code in vmcore, or by passing the regions that need
   to be remapped via the memmap= kernel option to tell the 2nd kernel
   to map them in addition (an illustrative command line follows).
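
For illustration only (made-up addresses; note that the existing
memmap=nn@ss syntax declares the region as usable RAM, so in practice
this would need a new variant or an explicit reservation so that the
2nd kernel does not allocate from the old kernel's memory):

    kexec --append="... memmap=16G@0x100000000 memmap=16G@0x600000000 ..."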

Or,

2) Support 2MB or 1GB pages in ioremap (a rough sketch follows).
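
For 2), a minimal sketch of the PMD-level piece on x86_64 (the function
name is mine; the real change would go into the ioremap_page_range()
path in lib/ioremap.c, together with alignment and CPU feature checks):

    /*
     * Sketch only, not a working patch: install one 2MB PMD-level
     * mapping instead of 512 4KB PTEs. _PAGE_PSE marks the PMD
     * entry as a large page on x86.
     */
    static void ioremap_set_pmd_huge(pmd_t *pmd, phys_addr_t phys,
                                     pgprot_t prot)
    {
            set_pmd(pmd, pfn_pmd(phys >> PAGE_SHIFT,
                                 __pgprot(pgprot_val(prot) | _PAGE_PSE)));
    }

This only helps where both the physical and virtual ranges are
2MB-aligned and at least 2MB long; an unaligned head and tail would
still fall back to 4KB PTEs.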

Thanks.
HATAYAMA, Daisuke


