From: Nick Piggin <>
Subject: Re: [patch] SLQB slab allocator
Date: Tue, 3 Feb 2009 12:53:26 +1100
On Tuesday 27 January 2009 04:28:03 Christoph Lameter wrote:
> On Fri, 23 Jan 2009, Nick Piggin wrote:
> > According to memory policies, a task's memory policy is supposed to
> > apply to its slab allocations too.
>
> It does apply to slab allocations. The question is whether it has to apply
> to every object allocation or to every page allocation of the slab
> allocators.
Quite obviously it should apply to every object allocation. The behaviour
of a slab allocation made on behalf of a task constrained to a given node
should not depend on which task previously ran on this CPU and what it
allocated. Surely you can see that behaviour is not nice.
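To make the problem concrete, here is a toy userspace model of the
per-object check being argued for. It is not SLUB or SLQB code; the
freelist, the node ids and the helper names are all made up for
illustration. Objects on a recycled per-CPU queue remember which node
their page came from, and a constrained allocation refuses a queued
object from the wrong node instead of blindly reusing whatever the
previous task left behind:

#include <stdio.h>
#include <stdlib.h>

#define ANY_NODE (-1)

struct object {
	int nid;			/* node this object's page came from */
	struct object *next;
};

static struct object *freelist;		/* models a per-CPU object queue */

/* Models going back to the page allocator on the requested node. */
static struct object *alloc_on_node(int node)
{
	struct object *obj = malloc(sizeof(*obj));

	obj->nid = (node == ANY_NODE) ? 0 : node;
	obj->next = NULL;
	return obj;
}

static struct object *slab_alloc(int node)
{
	struct object *obj = freelist;

	/* The queued object may have been allocated while a task with a
	 * different policy ran on this CPU; check its home node. */
	if (obj && node != ANY_NODE && obj->nid != node)
		return alloc_on_node(node);	/* leave it for unconstrained callers */
	if (obj) {
		freelist = obj->next;
		return obj;
	}
	return alloc_on_node(node);
}

int main(void)
{
	freelist = alloc_on_node(1);		/* freed earlier by a task on node 1 */

	struct object *obj = slab_alloc(0);	/* this caller is bound to node 0 */
	printf("got object on node %d\n", obj->nid);	/* prints 0, not 1 */
	return 0;
}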
> > > Memory policies are applied in a fuzzy way anyways. A context switch
> > > can result in page allocation action that changes the expected
> > > interleave pattern. Page populations in an address space depend on the
> > > task policy. So the exact policy applied to a page depends on the task.
> > > This isn't an exact thing.
> >
> > There are other memory policies than just interleave though.
>
> Which have similar issues, since memory policy application depends on
> a task policy and on memory migration that has been applied to an address
> range.
What similar issues? If a task asks to have its slab allocations constrained
to node 0 and SLUB hands out objects from other nodes, that's bad.
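For reference, this is how such a task would constrain itself, using the
real set_mempolicy(2) syscall (declared in <numaif.h>, link with -lnuma).
A minimal sketch, nothing here beyond the syscall itself:

#include <numaif.h>
#include <stdio.h>

int main(void)
{
	unsigned long nodemask = 1UL;	/* bit 0 set: node 0 only */

	/* maxnode is the number of bits in the nodemask */
	if (set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8) != 0) {
		perror("set_mempolicy");
		return 1;
	}
	/* From here on, this task's allocations, and by the argument above
	 * the slab allocations done on its behalf, should come from node 0. */
	return 0;
}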
> > But that is wrong. The lists obviously have high water marks that
> > get trimmed down. Periodic trimming, as I keep saying, is already
> > so infrequent that it is irrelevant (millions of objects per CPU
> > can be allocated anyway between existing trimming intervals).
>
> Trimming through water marks and allocating memory from the page allocator
> is going to be very frequent if you continually allocate on one processor
> and free on another.
Um, yes, that's the point. But you previously claimed it would just grow
unconstrained, which is obviously wrong. So I don't understand what your
point is.
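The mechanism at issue, as a toy userspace model (the watermark value,
names and flush policy are made up; SLQB's real queues are more involved):
frees beyond a high-water mark are immediately flushed back to the page
allocator (modelled here as free()), so a free-heavy CPU pays frequent
flushes, but the queue cannot grow without bound:

#include <stdlib.h>

#define HIWATER 128			/* made-up high-water mark */

struct object { struct object *next; };

static struct object *queue;		/* per-CPU free queue */
static int queue_len;

/* Hand half of the queued objects back to the page allocator. */
static void flush_queue(void)
{
	while (queue_len > HIWATER / 2) {
		struct object *obj = queue;

		queue = obj->next;
		queue_len--;
		free(obj);
	}
}

static void slab_free(struct object *obj)
{
	obj->next = queue;
	queue = obj;
	if (++queue_len > HIWATER)
		flush_queue();		/* trim as soon as the mark is crossed */
}

int main(void)
{
	/* A free-only workload, as with remote frees: the queue still
	 * stays bounded at the watermark. */
	for (int i = 0; i < 1000000; i++)
		slab_free(malloc(sizeof(struct object)));
	return 0;
}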