Subject: Re: Interacting with coherent memory on external devices

On Wed, Apr 22, 2015 at 10:25:37AM -0500, Christoph Lameter wrote:
> On Wed, 22 Apr 2015, Benjamin Herrenschmidt wrote:
>
> > Right, it doesn't look at all like what we want.
>
> It's definitely a way to map memory that is outside of the kernel-managed
> pool into a user space process. For that matter any device driver could be
> doing this as well. The point is that we already have a plethora of features
> to do this. Putting new requirements on the already
> warped-and-screwed-up-beyond-all-hope zombie of a page allocator that we
> have today is not the way to do this. In particular, what I have heard
> repeatedly is that we do not want kernel structures allocated there, but
> then we still want to use this because we want malloc support in
> libraries. The memory has different performance characteristics (for
> starters there may be lots of other issues depending on the device), so we
> just add a NUMA "node" with extremely high distance.
>
> There are hooks in glibc where you can replace the memory
> management of the apps if you want that.

Glibc hooks will not work. This is about having the same address space on
the CPU and the GPU/accelerator while allowing the backing memory to be
either regular system memory or device memory, all of it transparent to
userspace programs and libraries.

You also have to think about things like mmaped files. Say you have a
big file on disk and you want to crunch numbers from its data: you do
not want to copy it, you want to do the usual mmap and just have the
device driver migrate pages to device memory (how the device driver
makes that decision is a different problem; it can be left entirely to
the userspace application, or there can be a heuristic, or both).
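
To make the mmaped-file case concrete, this is all the application does
today (plain POSIX; treating the file as an array of doubles is just for
illustration). Nothing here gives a glibc hook anything to intercept, so
any migration to device memory has to happen underneath the mapping,
driven by the device driver:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	struct stat st;
	int fd;

	if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0 ||
	    fstat(fd, &st) < 0)
		return 1;

	/* Map the whole file; pages are faulted in on demand. */
	const double *data = mmap(NULL, st.st_size, PROT_READ,
				  MAP_PRIVATE, fd, 0);
	if (data == MAP_FAILED)
		return 1;

	/* Crunch numbers directly on the mapping; with a shared
	 * address space the same pointers are valid on the device
	 * and the driver may migrate the backing pages. */
	double sum = 0;
	for (size_t i = 0; i < st.st_size / sizeof(double); i++)
		sum += data[i];
	printf("sum = %f\n", sum);

	munmap((void *)data, st.st_size);
	close(fd);
	return 0;
}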

Glibc hooks do not work with shared memory either, and again this is a
use case we care about. You really have to think in terms of letting
today's applications start using those accelerators without the
applications even knowing about it.
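
The shared memory case looks like this from the application side; again
there is no allocation call that an interposed allocator would ever see
(the segment name is made up):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* POSIX shared memory: another mapping that never goes
	 * through malloc. Link with -lrt on older glibc. */
	int fd = shm_open("/crunch_buf", O_CREAT | O_RDWR, 0600);
	if (fd < 0 || ftruncate(fd, 1 << 20) < 0)
		return 1;

	char *buf = mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
	if (buf == MAP_FAILED)
		return 1;

	strcpy(buf, "visible to every process that maps /crunch_buf");

	munmap(buf, 1 << 20);
	close(fd);
	shm_unlink("/crunch_buf");
	return 0;
}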

So you would not know beforehand what will end up being used by the
GPU/accelerator and would need to be allocated from special memory.
We do not want today's model of using the GPU; we want to provide
tomorrow's infrastructure for using the GPU in a transparent way.


I understand that the applications you care about want to be clever
and can make better decisions, and we intend to support that, but this
does not need to be at the expense of all the other applications.
Like I said numerous times, the decision to migrate memory is a device
driver decision, and how the device driver makes that decision can
be entirely controlled by userspace through a proper device driver API.
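
Just as an illustration of what such a device driver API could look
like, think of something along the lines of the following ioctl. Every
name and number here is invented; the point is only that the policy knob
sits in the driver ABI, not in the core mm:

#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>

/* Hypothetical driver ABI, for illustration only. */
struct accel_migrate_hint {
	uint64_t addr;		/* start of range in the shared address space */
	uint64_t len;		/* length in bytes */
	uint32_t flags;		/* e.g. prefer device memory vs. system memory */
	uint32_t pad;
};

#define ACCEL_IOC_MIGRATE_HINT	_IOW('A', 0x01, struct accel_migrate_hint)

static int hint_migrate(int accel_fd, void *addr, size_t len, uint32_t flags)
{
	struct accel_migrate_hint hint = {
		.addr  = (uint64_t)(uintptr_t)addr,
		.len   = len,
		.flags = flags,
	};

	/* The driver is free to ignore the hint; the final migration
	 * decision stays with the driver/kernel. */
	return ioctl(accel_fd, ACCEL_IOC_MIGRATE_HINT, &hint);
}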

The NUMA idea is interesting for applications that do not know about
this and do not need to know. It would allow a heuristic inside the
kernel, under the control of the device driver, that could be disabled
by applications that know better.
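
And if the device memory shows up as a very distant node, an application
that knows better can already steer placement with the existing NUMA
API; a rough sketch using libnuma (the node number is made up, link
with -lnuma):

#include <numa.h>

int main(void)
{
	if (numa_available() < 0)
		return 1;

	/* Pin this buffer to ordinary system memory (node 0 here),
	 * overriding whatever heuristic would otherwise apply. */
	void *buf = numa_alloc_onnode(64 << 20, 0);
	if (!buf)
		return 1;

	/* ... use buf ...; an application that wants device placement
	 * could instead ask for the distant "device" node. */

	numa_free(buf, 64 << 20);
	return 0;
}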


Bottom line: we want today's anonymous, shared, or file-mapped memory to
stay the only kinds of memory that exist, and we want to choose the
backing store of each of those kinds for better placement depending on
how the memory is used (which, again, can be under the total control of
the application). We do not want to introduce a third, disjoint kind of
memory to userspace; that is today's situation, and we want to move
forward to tomorrow's solution.


Cheers,
Jérôme


>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: email@kvack.org

