Date: 2015-03-31
From: Boaz Harrosh
Subject: Re: another pmem variant V2
    On 03/31/2015 12:25 PM, Christoph Hellwig wrote:
    > On Thu, Mar 26, 2015 at 06:57:47PM +0200, Boaz Harrosh wrote:
    >> On 03/26/2015 10:32 AM, Christoph Hellwig wrote:
    >>> Here is another version of the same trivial pmem driver, because two
    >>> obviously aren't enough. The first patch is the same pmem driver
    >>> that Ross posted a short time ago, just modified to use platform_devices
    >>> to find the persistent memory region instead of hardcoding it in the
    >>> Kconfig. This allows keeping pmem.c separate from any discovery
    >>> mechanism while still allowing auto-discovery.
    >>>
    >>
    >> Hi Christoph
    >>
    >> So I've been trying to test your version and play around with it.
    >> I currently have some problems, but this is the end of the week for
    >> me, so I will debug and fix them after the weekend, on Sunday.
    >
    > Any news? I'd really like to resend this ASAP to get it into 4.1..

    Yes, sorry, I got stuck on the NUMA thing. I will finally finish today.

    For some reason, with your patch memmap=nn!aa behaves differently than
    memmap=nn\$aa, and also differently from my old e820.c fix plus my old
    pmem. My fix is effectively equivalent to your kernel with memmap=nn\$aa,
    which marks the range as "reserved"-like, rather than the more "RAM"-like
    treatment of memmap=nn!aa in your patch.
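    For context, here is a minimal sketch of how the two options differ in
    the e820 parser (modeled on parse_memmap_one() in arch/x86/kernel/e820.c;
    E820_PMEM is the type name used in this thread, and details may differ
    in the final patches):

        /*
         * Sketch of the relevant branches of parse_memmap_one().
         * memmap=nn$aa hides the range from the kernel ("reserved"-like),
         * while memmap=nn!aa claims it as persistent memory ("RAM"-like).
         */
        static int __init parse_memmap_one(char *p)
        {
                u64 start_at, mem_size;

                if (!p)
                        return -EINVAL;

                mem_size = memparse(p, &p);
                if (*p == '$') {
                        start_at = memparse(p + 1, &p);
                        e820_add_region(start_at, mem_size, E820_RESERVED);
                } else if (*p == '!') {
                        start_at = memparse(p + 1, &p);
                        e820_add_region(start_at, mem_size, E820_PMEM);
                }
                return *p == '\0' ? 0 : -EINVAL;
        }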

    The problem I see is that if I specify a memmap=nn!aa range that crosses
    a NUMA boundary, the machine will not boot. So, BTW, I definitely need
    that "don't merge E820_PMEM ranges" patch, because otherwise I will not
    be able to boot if I have pmem on both NUMA nodes and the two ranges
    happen to be contiguous.
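    (For illustration, a hypothetical sketch of that idea -- not the actual
    patch from this thread: sanitize_e820_map() in arch/x86/kernel/e820.c
    merges adjacent ranges of the same type into one entry, so the exemption
    would live in its merge step.)

        /*
         * Hypothetical sketch only. Exempt E820_PMEM from merging so that
         * physically contiguous pmem ranges on different NUMA nodes stay
         * separate e820 entries (and thus become separate pmem devices).
         */
        static inline bool e820_type_mergeable(u32 type)
        {
                return type != E820_PMEM;
        }

        /*
         * ...and in the sanitize_e820_map() merge loop, merge only when
         * current_type == last_type && e820_type_mergeable(current_type).
         */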

    I do not understand why this happens: a contiguous range of RAM that
    crosses a NUMA boundary is fine, but pmem is not. (A pmem range defined
    with memmap=nn!aa behaves the same.)

    We also have another NUMA problem, for which I'm researching a long-term
    solution: if the second NUMA node has only pmem and no RAM, then the
    kernel will not define a second NUMA node at all, and we are NUMA-screwed
    with pmem. So for now one must spread volatile RAM and NVRAM evenly
    across the nodes.

    Regarding the SQUASHMEs to pmem: originally I had them as 3-4 patches,
    but since you are squashing them into a single submitted patch anyway,
    I thought I could just send the one patch. Tell me which you prefer and
    I'll resend (the one vs. the three).

    And one last issue: I have some configuration difficulty with the
    memmap=nn!aa kernel command line API; the pmem map= module parameter
    worked better for me. Would you be OK if I split pmem_probe() so that
    it calls a pmem_alloc(addr, length) helper, so I can keep an out-of-tree
    patch that adds the map= parameter to pmem?
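    (A rough sketch of that split, under the assumption that pmem_device
    keeps phys_addr/size fields as in the posted driver; the helper name
    pmem_alloc() is the one proposed above, everything else is illustrative.)

        /*
         * Hypothetical sketch, not an actual patch. pmem_alloc() does the
         * work from a (start, size) pair, so an out-of-tree map= module
         * parameter could call it directly.
         */
        static struct pmem_device *pmem_alloc(phys_addr_t phys_addr, size_t size)
        {
                struct pmem_device *pmem;

                pmem = kzalloc(sizeof(*pmem), GFP_KERNEL);
                if (!pmem)
                        return ERR_PTR(-ENOMEM);

                pmem->phys_addr = phys_addr;
                pmem->size = size;
                /* ioremap + gendisk/request queue setup as in the driver */
                return pmem;
        }

        /* The in-tree probe path just extracts the platform resource: */
        static int pmem_probe(struct platform_device *pdev)
        {
                struct resource *res;
                struct pmem_device *pmem;

                res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
                if (!res)
                        return -ENXIO;

                pmem = pmem_alloc(res->start, resource_size(res));
                if (IS_ERR(pmem))
                        return PTR_ERR(pmem);

                platform_set_drvdata(pdev, pmem);
                return 0;
        }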

    I will send the patch within the hour. Just tell me whether you want
    the one or the three.

    Thanks, Christoph.
    Boaz


