    Subject: Re: [PATCH] phys_efi_set_virtual_address_map needs va, no pa.
    On 06/20/2012 05:27 PM, Robin Holt wrote:
    > What do you need from me? If you want me to help with this, I have a
    > _WHOLE_ lot of learning to do. Can you give me any pointers?
    > We are trying to get this finally fixed. We have had work-around code
    > in SLES11 SP1, SLES11 SP2, and RHEL 6.x. I would love to get this fixed
    > for future distro snaps.

    If you want to tackle it, the task is basically this: whenever we
    modify the kernel pgds, we should make the corresponding modifications
    to initial_page_table in 32-bit legacy (non-PAE) mode, and to
    real_mode_header->trampoline_pgd in 64-bit mode. Obviously, it might be
    worthwhile to introduce a common pointer for both.
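
    As a very rough illustration only (not code from this thread: the
    common pointer ident_pgd and the helpers ident_pgd_init() and
    sync_ident_pgd() are names I'm making up here, while
    initial_page_table, real_mode_header->trampoline_pgd and the pgd
    accessors are the existing kernel symbols), the idea would be
    something along these lines:

        #include <linux/init.h>
        #include <asm/pgtable.h>
        #include <asm/realmode.h>

        /* hypothetical common pointer to the 1:1 page table */
        static pgd_t *ident_pgd;

        static void __init ident_pgd_init(void)
        {
        #ifdef CONFIG_X86_64
                /* trampoline_pgd holds a physical address in low memory */
                ident_pgd = (pgd_t *)__va(real_mode_header->trampoline_pgd);
        #else
                ident_pgd = initial_page_table;
        #endif
        }

        /*
         * Called whenever a top-level kernel entry changes: mirror the
         * entry into the 1:1 page table so it never goes stale.
         */
        static void sync_ident_pgd(unsigned long address)
        {
                pgd_t *src = pgd_offset_k(address);

                if (!pgd_none(*src))
                        set_pgd(ident_pgd + pgd_index(address), *src);
        }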

    This is currently handled via something called the pgd_list: when we
    update the top-level kernel address space, we walk pgd_list and update
    all the pgds on it. But there are two issues:

    1. Obviously, in the case of the 1:1 map, we don't just need to
    maintain the kernel area; the "user space" part of the address space
    should contain a copy as well.

    2. To complicate things, there is code in there to grab an mm lock for
    the benefit of Xen. The 1:1 map doesn't have an mm associated with it,
    so I'm not quite sure how that is to be handled. Perhaps Xen just plain
    won't need it and we can just bypass it, but I have no bloody idea.

    It is also a bit "cute" how we seem to make a function call to
    indirect through a pointer (why on Earth is pgd_page_get_mm() not an
    inline?!), and then grab a lock unconditionally, regardless of whether
    we are affected by Xen.
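
    For reference, the walk in question looks roughly like this
    (paraphrased from memory of sync_global_pgds() in
    arch/x86/mm/init_64.c around this time, not quoted from anywhere, so
    take the details with a grain of salt):

        void sync_global_pgds(unsigned long start, unsigned long end)
        {
                unsigned long address;

                for (address = start; address <= end; address += PGDIR_SIZE) {
                        const pgd_t *pgd_ref = pgd_offset_k(address);
                        struct page *page;

                        if (pgd_none(*pgd_ref))
                                continue;

                        spin_lock(&pgd_lock);
                        list_for_each_entry(page, &pgd_list, lru) {
                                pgd_t *pgd = (pgd_t *)page_address(page) +
                                             pgd_index(address);
                                /* the mm lock taken for every pgd, Xen or not */
                                spinlock_t *pgt_lock =
                                        &pgd_page_get_mm(page)->page_table_lock;

                                spin_lock(pgt_lock);
                                if (pgd_none(*pgd))
                                        set_pgd(pgd, *pgd_ref);
                                spin_unlock(pgt_lock);
                        }
                        spin_unlock(&pgd_lock);
                }
        }

    The 1:1 pgd would have to be added to that walk (or updated alongside
    it), and since it has no mm, the pgt_lock dance above is exactly the
    part that doesn't translate.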

