    Subject: Re: [PATCH 0/6] Cache coherent device memory (CDM) with HMM v5
    On Thu, Jul 20, 2017 at 6:41 PM, Jerome Glisse <jglisse@redhat.com> wrote:
    > On Fri, Jul 21, 2017 at 09:15:29AM +0800, Bob Liu wrote:
    >> On 2017/7/20 23:03, Jerome Glisse wrote:
    >> > On Wed, Jul 19, 2017 at 05:09:04PM +0800, Bob Liu wrote:
    >> >> On 2017/7/19 10:25, Jerome Glisse wrote:
    >> >>> On Wed, Jul 19, 2017 at 09:46:10AM +0800, Bob Liu wrote:
    >> >>>> On 2017/7/18 23:38, Jerome Glisse wrote:
    >> >>>>> On Tue, Jul 18, 2017 at 11:26:51AM +0800, Bob Liu wrote:
    >> >>>>>> On 2017/7/14 5:15, Jérôme Glisse wrote:
    >
    > [...]
    >
    >> >> Then it's more like replacing the NUMA node solution (CDM) with ZONE_DEVICE
    >> >> (type MEMORY_DEVICE_PUBLIC). But the problem is the same, e.g. how to make
    >> >> sure the device memory, say HBM, won't be occupied by normal CPU allocations.
    >> >> Things get more complex if there are multiple GPUs connected by NVLink
    >> >> (also cache coherent) in a system, each with its own HBM.
    >> >>
    >> >> How do we decide whether to allocate physical memory from the local HBM/DDR
    >> >> or a remote HBM/DDR?
    >> >>
    >> >> With the NUMA (CDM) approach there are at least the NUMA mempolicy and
    >> >> autonuma mechanisms.
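    For context, the mempolicy mechanism mentioned above is the existing
    userspace NUMA API, which would apply directly if CDM memory showed up as
    a node. A minimal sketch, assuming purely for illustration that the device
    memory is exposed as NUMA node 1 (build with -lnuma):

        #include <numaif.h>     /* mbind(), MPOL_BIND */
        #include <sys/mman.h>
        #include <stdio.h>

        int main(void)
        {
                size_t len = 1 << 20;
                /* Assumption for illustration: CDM/HBM is NUMA node 1 */
                unsigned long nodemask = 1UL << 1;
                void *buf;

                buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (buf == MAP_FAILED)
                        return 1;

                /* Bind the range so faults allocate only from node 1 */
                if (mbind(buf, len, MPOL_BIND, &nodemask,
                          sizeof(nodemask) * 8, 0)) {
                        perror("mbind");
                        return 1;
                }

                ((char *)buf)[0] = 1;   /* first touch allocates on node 1 */
                return 0;
        }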
    >> >
    >> > NUMA is not as easy as you think. First, like I said, we want the device
    >> > memory to be isolated from most existing mm mechanisms, because the memory
    >> > is unreliable and also because the device might need to be able to evict
    >> > memory to make contiguous physical memory allocations for graphics.
    >> >
    >>
    >> Right, but we need isolation anyway.
    >> For HMM-CDM, the isolation is not adding device memory to the LRU lists, plus
    >> many "if (is_device_public_page(page)) ..." checks.
    >>
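    For illustration, the guard being described could be as simple as the
    following; is_device_public_page() is the helper these patches add, while
    the wrapper function here is hypothetical:

        /* Hypothetical helper: may a generic mm path (reclaim, LRU scan,
         * etc.) manage this page?  CDM pages are kept off the LRU lists,
         * so such paths must skip them explicitly. */
        static bool page_is_cpu_managed(struct page *page)
        {
                if (is_device_public_page(page))
                        return false;   /* device memory: the driver owns it */
                return true;
        }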
    >> But how to evict device memory?
    >
    > What do you mean by evict? The device driver can evict whenever it sees the
    > need to do so. A CPU page fault will evict too. Process exit or munmap() will
    > free the device memory.
    >
    > Are you referring to eviction in the sense of memory reclaim under pressure?
    >
    > The way it flows under memory pressure is that if the device driver wants to
    > make room it can evict things to system memory, and if there is not enough
    > system memory then things get reclaimed as usual before the device driver can
    > make progress on device memory reclaim.
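    For concreteness, driver-initiated eviction in this scheme sits on top of
    the migrate_vma() API added earlier in the HMM series. A heavily trimmed
    sketch; the callbacks and the dev_evict_range() wrapper are hypothetical
    driver code:

        #include <linux/migrate.h>
        #include <linux/mm.h>

        /* Hypothetical callback: allocate destination system pages and
         * copy (DMA) the data out of device memory. */
        static void dev_evict_alloc_and_copy(struct vm_area_struct *vma,
                                             const unsigned long *src,
                                             unsigned long *dst,
                                             unsigned long start,
                                             unsigned long end,
                                             void *private)
        {
                /* fill dst[] with system pages, copy device data into them */
        }

        /* Hypothetical callback: migration done, return the freed device
         * pages to the driver's allocator. */
        static void dev_evict_finalize_and_map(struct vm_area_struct *vma,
                                               const unsigned long *src,
                                               const unsigned long *dst,
                                               unsigned long start,
                                               unsigned long end,
                                               void *private)
        {
        }

        static const struct migrate_vma_ops dev_evict_ops = {
                .alloc_and_copy   = dev_evict_alloc_and_copy,
                .finalize_and_map = dev_evict_finalize_and_map,
        };

        /* Hypothetical wrapper: evict one batch of device pages back to
         * system memory; a real driver would loop over the whole range. */
        static int dev_evict_range(struct vm_area_struct *vma,
                                   unsigned long start, unsigned long end)
        {
                unsigned long src[64], dst[64];

                if (end - start > 64 * PAGE_SIZE)
                        end = start + 64 * PAGE_SIZE;
                return migrate_vma(&dev_evict_ops, vma, start, end,
                                   src, dst, NULL);
        }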
    >
    >
    >> > Second, device drivers are not integrated closely enough with the mm and
    >> > scheduler kernel code to efficiently plug device access notifications into
    >> > pages (i.e. to update struct page so that the NUMA worker thread can
    >> > migrate memory based on accurate information).
    >> >
    >> > Third, it can be hard to decide who wins between CPU and device access
    >> > when it comes to updating things like the last CPU id.
    >> >
    >> > Fourth, there is no such thing as a device id, i.e. an equivalent of the
    >> > CPU id. If we were to add one, the CPU id field in the flags of struct page
    >> > would not be big enough, so this could have repercussions on struct page
    >> > size. That is not an easy sell.
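    For reference, the "last CPU id" in question is the cpupid that automatic
    NUMA balancing already packs into the spare bits of page->flags; the
    helpers below are the existing kernel ones, the wrapper is illustrative:

        #include <linux/mm.h>

        /* Illustrative only: NUMA balancing squeezes a last-accessor
         * "cpupid" into page->flags.  There are no spare bits left for a
         * device id, so widening the packing would grow struct page on
         * some configurations. */
        static void show_last_accessor(struct page *page)
        {
                int cpupid = page_cpupid_last(page);

                pr_info("last cpu %d, last pid %d\n",
                        cpupid_to_cpu(cpupid), cpupid_to_pid(cpupid));
        }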
    >> >
    >> > There are other issues I can't think of right now. I think for now it
    >>
    >> My opinion is that most of the issues are the same whether we use CDM or
    >> HMM-CDM. I just care about a more complete solution, whether via CDM,
    >> HMM-CDM, or some other way. HMM or HMM-CDM depends on the device driver, but
    >> we haven't seen a public/full driver that demonstrates the whole solution
    >> works fine.
    >
    > I am working with NVidia's closed source driver team to make sure that it
    > works well for them. I am also working on the nouveau open source driver for
    > the same NVidia hardware, though it will be of less use since what is missing
    > there is a solid open source userspace to leverage this. Nonetheless, open
    > source drivers are in the works.

    Can you point to the nouveau patches? I still find these HMM patches
    un-reviewable without an upstream consumer.
