Date: 2012-08-21
Subject: Re: [Xen-devel] [PATCH 11/11] xen/mmu: Release just the MFN list, not MFN list and part of pagetables.
    On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
    > We call memblock_reserve for [start of mfn list] -> [PMD-aligned end
    > of mfn list] instead of [start of mfn list] -> [page-aligned end of
    > mfn list].
    >
    > This has the disastrous effect that if at bootup the end of mfn_list
    > is not PMD-aligned, we end up returning to memblock parts of the
    > region past the mfn_list array. Those parts are the PTE tables, with
    > the result that we see this at bootup:

    This patch looks wrong to me.

    Aren't you changing the way mfn_list is reserved using memblock in
    patch #3? Moreover, it really seems to me that you are PAGE_ALIGN'ing
    size there rather than PMD_ALIGN'ing it.
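
    For concreteness, here is a minimal standalone sketch of how the two
    alignments diverge when the mfn_list does not end on a PMD boundary.
    PAGE_SIZE, PMD_SIZE and ALIGN_UP below are local stand-ins for the
    kernel macros (4 KiB pages and 2 MiB PMDs assumed), and nr_pages is a
    made-up example value, not taken from the report above:

        /* Standalone sketch: page-aligning vs. PMD-rounding the size of
         * the mfn_list (nr_pages 8-byte entries on x86-64). */
        #include <stdio.h>

        #define PAGE_SIZE 4096UL
        #define PMD_SIZE  (2UL * 1024 * 1024)
        #define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

        int main(void)
        {
            unsigned long nr_pages = 300000;  /* hypothetical domain size */
            unsigned long size = nr_pages * sizeof(unsigned long);
            unsigned long page_aligned = ALIGN_UP(size, PAGE_SIZE);
            unsigned long pmd_aligned  = ALIGN_UP(size, PMD_SIZE);

            printf("mfn_list bytes: %lu\n", size);
            printf("page-aligned:   %lu\n", page_aligned);
            printf("PMD-rounded:    %lu (%lu bytes past the array)\n",
                   pmd_aligned, pmd_aligned - page_aligned);
            return 0;
        }

    With these numbers, almost 1.8 MB beyond the array falls inside the
    PMD-rounded range; if that range is what gets handed back to
    memblock, the PTE pages sitting there are reused, which matches the
    bad-type errors in the log below.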


    > Write protecting the kernel read-only data: 10240k
    > Freeing unused kernel memory: 1860k freed
    > Freeing unused kernel memory: 200k freed
    > (XEN) mm.c:2429:d0 Bad type (saw 1400000000000002 != exp 7000000000000000) for mfn 116a80 (pfn 14e26)
    > ...
    > (XEN) mm.c:908:d0 Error getting mfn 116a83 (pfn 14e2a) from L1 entry 8000000116a83067 for l1e_owner=0, pg_owner=0
    > (XEN) mm.c:908:d0 Error getting mfn 4040 (pfn 5555555555555555) from L1 entry 0000000004040601 for l1e_owner=0, pg_owner=0
    > .. and so on.
    >
    > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    > ---
    > arch/x86/xen/mmu.c | 2 +-
    > 1 files changed, 1 insertions(+), 1 deletions(-)
    >
    > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
    > index 5a880b8..6019c22 100644
    > --- a/arch/x86/xen/mmu.c
    > +++ b/arch/x86/xen/mmu.c
    > @@ -1227,7 +1227,6 @@ static void __init xen_pagetable_setup_done(pgd_t *base)
    >  	/* We should be in __ka space. */
    >  	BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
    >  	addr = xen_start_info->mfn_list;
    > -	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
    >  	/* We roundup to the PMD, which means that if anybody at this stage is
    >  	 * using the __ka address of xen_start_info or xen_start_info->shared_info
    >  	 * they are in going to crash. Fortunatly we have already revectored
    > @@ -1235,6 +1234,7 @@ static void __init xen_pagetable_setup_done(pgd_t *base)
    >  	size = roundup(size, PMD_SIZE);
    >  	xen_cleanhighmap(addr, addr + size);
    >  
    > +	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
    >  	memblock_free(__pa(xen_start_info->mfn_list), size);
    >  	/* And revector! Bye bye old array */
    >  	xen_start_info->mfn_list = new_mfn_list;
    > --
    > 1.7.7.6
    >
    >
    > _______________________________________________
    > Xen-devel mailing list
    > Xen-devel@lists.xen.org
    > http://lists.xen.org/xen-devel
    >
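
    For readers stitching the two hunks back together, the resulting
    sequence would read roughly as below. This is a paraphrase assembled
    from the context lines above (the comment line elided between the
    hunks is assumed unchanged), not a quote of the patched file:

        /* We should be in __ka space. */
        BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
        addr = xen_start_info->mfn_list;
        /* ... roundup-to-PMD comment elided ... */
        size = roundup(size, PMD_SIZE);
        xen_cleanhighmap(addr, addr + size);

        /* Free only the page-aligned extent of the mfn_list itself, so
         * that PTE pages sharing its last PMD stay reserved. */
        size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
        memblock_free(__pa(xen_start_info->mfn_list), size);
        /* And revector! Bye bye old array */
        xen_start_info->mfn_list = new_mfn_list;

    The apparent intent is that xen_cleanhighmap() only tears down the
    __ka alias, so rounding that range up to a whole PMD is harmless,
    whereas memblock_free() actually recycles the frames and so must not
    cover anything beyond the array.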

