Subject: Re: Interacting with coherent memory on external devices
    On Tue, Apr 21, 2015 at 06:49:29PM -0500, Christoph Lameter wrote:
    > On Tue, 21 Apr 2015, Paul E. McKenney wrote:
    >
    > > Thoughts?
    >
    > Use DAX for memory instead of the other approaches? That way it is
    > explicitly clear what information is put on the CAPI device.
    >

Memory on this device should not be considered something special (even
if it is). More below.

    [...]
    >
    > > 3. The device's memory is treated like normal system
    > > memory by the Linux kernel, for example, each page has a
    > > "struct page" associate with it. (In contrast, the
    > > traditional approach has used special-purpose OS mechanisms
    > > to manage the device's memory, and this memory was treated
    > > as MMIO space by the kernel.)
    >
    > Why do we need a struct page? If so then maybe equip DAX with a struct
    > page so that the contents of the device memory can be controlled via a
> filesystem? (maybe one custom to the needs of the device).

So the big use case here: say you have an application that relies on a
scientific library that does matrix computation. Your application simply
uses malloc and passes pointers to this scientific library. Now say the
good folks working on this scientific library want to leverage the GPU.
They could do it by allocating GPU memory through a GPU-specific API and
copying data in and out. For matrices that can be easy enough, but it is
still inefficient. What you really want is the GPU directly accessing
this malloced chunk of memory, eventually migrating it to device memory
while performing the computation and migrating it back to system memory
once done. Which means that you do not want some kind of filesystem or
anything like that.
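
To make this concrete, here is a rough sketch of the two worlds from the
library's point of view. All the gpu_*() functions are made-up stand-ins
for whatever vendor-specific API you have; none of them are real:

#include <stddef.h>

/* Hypothetical vendor API -- stand-ins only, not real functions. */
extern void *gpu_alloc(size_t size);
extern void  gpu_free(void *ptr);
extern void  gpu_copy_to_device(void *dst, const void *src, size_t size);
extern void  gpu_copy_from_device(void *dst, const void *src, size_t size);
extern void  gpu_sgemm(float *a, float *b, float *c, size_t n);

/* Today: the library must shadow the malloced buffers in GPU memory
 * and copy data in and out around every call.
 */
void lib_sgemm_copy(float *a, float *b, float *c, size_t n)
{
	size_t sz = n * n * sizeof(float);
	float *da = gpu_alloc(sz), *db = gpu_alloc(sz), *dc = gpu_alloc(sz);

	gpu_copy_to_device(da, a, sz);
	gpu_copy_to_device(db, b, sz);
	gpu_sgemm(da, db, dc, n);		/* compute on the copies */
	gpu_copy_from_device(c, dc, sz);

	gpu_free(da);
	gpu_free(db);
	gpu_free(dc);
}

/* What we want: the GPU works directly on the malloced memory; the
 * kernel may transparently migrate the pages to device memory for the
 * duration of the computation and back once done.
 */
void lib_sgemm_direct(float *a, float *b, float *c, size_t n)
{
	gpu_sgemm(a, b, c, n);	/* same pointers the application malloced */
}

Either way the application keeps handing the library plain malloced
pointers; only the second variant avoids the copies and the shadow
allocations.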

By allowing transparent migration you allow the library to just start
using the GPU with the application being none the wiser. Moreover, when
you start playing with data sets that use more advanced design patterns
(lists, trees, vectors, a mix of all of the above) you do not want to
have to duplicate the data structure for the GPU address space and for
the regular CPU address space (which you would need to do with a
filesystem solution).
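
As a toy example of that duplication (again purely hypothetical, reusing
the gpu_alloc()/gpu_copy_to_device() stand-ins from above): with two
address spaces the library has to deep-copy the list and patch every
next pointer so it is valid on the device.

#include <stddef.h>

extern void *gpu_alloc(size_t size);		/* hypothetical */
extern void  gpu_copy_to_device(void *dst, const void *src, size_t size);

struct node {
	struct node *next;
	float payload[64];
};

/* Deep-copy the list into the GPU address space. Each node is copied,
 * then the previous device node's next pointer is patched with a
 * second device write. The last node's next copies over as NULL, so
 * no patching is needed there.
 */
struct node *mirror_list_for_gpu(struct node *head)
{
	struct node *dhead = NULL, *dprev = NULL;

	for (struct node *n = head; n; n = n->next) {
		struct node *dn = gpu_alloc(sizeof(*dn));

		gpu_copy_to_device(dn, n, sizeof(*n));
		if (dprev)
			gpu_copy_to_device(&dprev->next, &dn, sizeof(dn));
		else
			dhead = dn;
		dprev = dn;
	}
	return dhead;
}

With a single shared address space none of this code exists: the GPU
simply chases head->next through the very same nodes the CPU built, and
there are no two copies to keep coherent.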

So the cornerstone of HMM and of Paul's requirements is the same: we
want to be able to move normal anonymous memory, as well as regular
file-backed pages, to device memory for some period of time, while at
the same time allowing the usual memory management to keep going as if
nothing were different.
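
From userspace that requirement looks like this (hypothetical
gpu_compute() again; the point is that both allocations below are
perfectly ordinary and the CPU keeps using them afterward):

#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

extern void gpu_compute(float *a, float *b, size_t n);	/* hypothetical */

/* Error handling and file sizing omitted for brevity. */
int main(void)
{
	size_t sz = 1 << 20;

	/* Normal anonymous memory. */
	float *anon = malloc(sz);

	/* Regular file-backed memory. */
	int fd = open("matrix.dat", O_RDWR);
	float *filemem = mmap(NULL, sz, PROT_READ | PROT_WRITE,
			      MAP_SHARED, fd, 0);

	/* The device works on both directly; the pages may live in
	 * device memory for the duration of the call.
	 */
	gpu_compute(anon, filemem, sz / sizeof(float));

	/* CPU access afterward must just work: pages migrate back on
	 * fault, and writeback to matrix.dat proceeds as usual.
	 */
	anon[0] += filemem[0];

	munmap(filemem, sz);
	close(fd);
	free(anon);
	return 0;
}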

Paul is working on a platform that is more advanced than the one HMM
tries to address, and I believe the x86 platform will not have
functionality such as CAPI; at least it is not part of any roadmap I
know about for x86.

    Cheers,
    Jérôme

