Subject: Re: [PATCH 1/2] mm: Change generic FALLBACK zonelist creation process
From: John Hubbard
Date: 2017-03-15
On 03/14/2017 06:33 AM, Anshuman Khandual wrote:
> On 03/08/2017 04:37 PM, John Hubbard wrote:
[...]
>> There was a discussion, on an earlier version of this patchset, in which
>> someone pointed out that a slight over-allocation on a device that has
>> much more memory than the CPU has, could use up system memory. Your
>> latest approach here does not address this.
>
> Hmm, I don't remember this. Could you please be more specific and point
> me to the discussion on this?

That idea came from Dave Hansen, who was commenting on your RFC V2 patch:

https://lkml.org/lkml/2017/1/30/894

..."A device who got its memory usage off by 1% could start to starve the rest of the system..."

>
>>
>> I'm thinking that, until oversubscription between NUMA nodes is more
>> fully implemented in a way that can be properly controlled, you'd
>
> I did not get you. What does oversubscription mean in this context?
> The FALLBACK zonelist on each node has memory from every node, including
> its own. Hence an allocation request targeted at any node is
> symmetrical with respect to where the memory will be allocated from.
>

Here, I was referring to the lack of support in the kernel today for allocating X+N bytes on a NUMA
node, when that node only has X bytes associated with it. Currently, the system uses a fallback node
list to try to allocate on other nodes in that case, but that's not ideal. If NUMA allocation
instead supported "oversubscription", it could allow the allocation to succeed, and then fault and
evict (to other nodes) to support a working set that is larger than the physical memory that the
node has.
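
As a rough illustration of what exists today from user space, here is a minimal sketch (my
assumptions: libnuma is available, and node 1 stands in for a device/CDM node). With
numa_set_strict(), an allocation bound to one node simply fails once that node is out of memory;
without it, the kernel falls back to other nodes. There is no in-between mode where the node's
working set can exceed its physical memory and get evicted on demand:

/*
 * Sketch only: the two behaviors available today when binding to one
 * NUMA node. Node 1 is hypothetical. Build with: gcc numa_demo.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	int node = 1;              /* hypothetical device/CDM node */
	size_t sz = 1UL << 30;     /* 1 GB request */
	void *p;

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		return 1;
	}

	numa_set_strict(1);        /* fail rather than fall back */

	p = numa_alloc_onnode(sz, node);
	if (!p) {
		fprintf(stderr, "allocation on node %d failed\n", node);
		return 1;
	}

	/*
	 * Pages are placed on first touch. If node 1 runs out of memory
	 * here, the result is reclaim/OOM confined to that node -- not a
	 * fault-and-evict of the working set to other nodes.
	 */
	memset(p, 0, sz);

	numa_free(p, sz);
	return 0;
}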

This is what GPUs do today, in order to handle workloads that are too large for GPU memory. This
enables a whole other level of applications that the user can run.
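
For reference, here is roughly what that looks like with CUDA managed memory (again just a sketch,
assuming a Pascal-or-later GPU on Linux; the sizes are made up). The allocation is allowed to
exceed the device's physical memory, and the driver pages data in and evicts it back to system
memory as the working set moves:

/*
 * Sketch only: GPU memory oversubscription via managed memory.
 * Build with: nvcc gpu_oversub.cu
 */
#include <cuda_runtime.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	int dev = 0;
	size_t free_b, total_b, sz;
	char *p;

	cudaSetDevice(dev);
	cudaMemGetInfo(&free_b, &total_b);

	sz = total_b + total_b / 2;   /* ask for 1.5x the GPU's memory */

	if (cudaMallocManaged((void **)&p, sz, cudaMemAttachGlobal) != cudaSuccess) {
		fprintf(stderr, "cudaMallocManaged failed\n");
		return 1;
	}

	memset(p, 0, sz);             /* populate from the CPU side */

	/*
	 * Migrate toward the GPU; since sz exceeds GPU memory, the driver
	 * faults pages in and evicts older ones back to system memory.
	 */
	cudaMemPrefetchAsync(p, sz, dev, 0);
	cudaDeviceSynchronize();

	cudaFree(p);
	return 0;
}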

Maybe there are other ways to get the same result, so if others have ideas, please chime in. I'm
assuming for now that this sort of thing will just be required in the coming months.

>> probably do better to just not fall back to system memory. In other words, a
>> CDM node really is *isolated* from other nodes--no automatic use in
>> either direction.
>
> That is debatable. With this proposed solution, the CDM FALLBACK
> zonelist contains system RAM zones as a fallback option, which will
> be used in case CDM memory is depleted. IMHO, I think that's the
> right thing to do, as it still maintains the symmetry to some
> extent.
>

Yes, it's worth discussing. Again, Dave's note applies here.

>>
>> Also, naming and purpose: maybe this is a "Limited NUMA Node", rather
>> than a Coherent Device Memory node. Because: the real point of this
>> thing is to limit the normal operation of NUMA, just enough to work with
>> what I am *told* is memory-that-is-too-fragile-for-kernel-use (I remain
>> somewhat on the fence there, even though you did talk me into it
>> earlier, heh).
>
> :) Naming can be debated later, after we all agree on the proposal
> in principle. We have already discussed kernel memory on CDM
> in detail.

OK.

thanks,
John Hubbard
NVIDIA

>
>>
>> On process: it would probably help if you gathered up previous
>> discussion points and carefully, concisely addressed each one
>> somewhere (maybe in a cover letter). Because otherwise, it's too easy
>> for earlier, important problems to be forgotten. And reviewers don't
>> want to have to repeat themselves, of course.
>
> Will do.
>
