    Date: Fri, 18 May 2007
    Subject: Re: [PATCH 0/5] make slab gfp fair
    On Fri, 18 May 2007, Peter Zijlstra wrote:

    > On Thu, 2007-05-17 at 15:27 -0700, Christoph Lameter wrote:

    > Isn't the zone mask the same for all allocations from a specific slab?
    > If so, then the slab-wide ->reserve_slab will still dtrt (barring
    > cpusets).

    All allocations from a single slab have the same set of allowed zone
    types. E.g. a DMA slab can allocate only from ZONE_DMA, whereas a
    regular slab can allocate from ZONE_NORMAL, ZONE_DMA32 and ZONE_DMA.
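
    To make that fallback concrete, here is a minimal user-space sketch of
    the zone selection. It is not the kernel's actual gfp_zone(); the
    SKETCH_GFP_* flags and the zone layout are invented for illustration:

        /*
         * Toy model: a slab's gfp flags pick the highest allowed zone;
         * every lower zone remains a valid fallback.
         */
        #include <stdio.h>

        enum zone_type { ZONE_DMA, ZONE_DMA32, ZONE_NORMAL };

        #define SKETCH_GFP_DMA   0x01u  /* invented stand-in for __GFP_DMA */
        #define SKETCH_GFP_DMA32 0x02u  /* invented stand-in for __GFP_DMA32 */

        /* Simplified stand-in for the kernel's gfp_zone(). */
        static enum zone_type gfp_zone(unsigned int gfp_flags)
        {
                if (gfp_flags & SKETCH_GFP_DMA)
                        return ZONE_DMA;
                if (gfp_flags & SKETCH_GFP_DMA32)
                        return ZONE_DMA32;
                return ZONE_NORMAL;
        }

        int main(void)
        {
                /* A DMA slab may only use ZONE_DMA... */
                printf("DMA slab highest zone: %d\n", gfp_zone(SKETCH_GFP_DMA));
                /* ...a regular slab may fall back through NORMAL, DMA32, DMA. */
                printf("regular slab highest zone: %d\n", gfp_zone(0));
                return 0;
        }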


    > > On x86_64 systems you have the additional complication that there are
    > > even multiple DMA32 or NORMAL zones per node. Some will have DMA32 and
    > > NORMAL, others DMA32 alone or NORMAL alone. Which watermarks are we
    > > talking about?
    >
    > Watermarks as used by the page allocator, given the slab's zone mask.
    > The page allocator will only fall back to ALLOC_NO_WATERMARKS when all
    > target zones are exhausted.

    That works if zones do not vary between slab requests. So on SMP (without
    extra gfp flags) we may be fine. But see other concerns below.
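
    As a toy illustration of the fallback Peter describes (invented types
    and numbers, not the kernel's get_page_from_freelist()): walk the
    allowed zones checking watermarks, and only when every target zone
    fails does the allocator fall back to the reserves:

        #include <stdbool.h>
        #include <stdio.h>

        struct zone {
                const char *name;
                long free_pages;
                long watermark;   /* simplified: one mark, not min/low/high */
        };

        static bool zone_watermark_ok(const struct zone *z, unsigned int order)
        {
                return z->free_pages - (1L << order) >= z->watermark;
        }

        /* Returns the chosen zone, or NULL if the caller must use reserves. */
        static const struct zone *alloc_from_zonelist(const struct zone *zl,
                                                      int n, unsigned int order)
        {
                for (int i = 0; i < n; i++)
                        if (zone_watermark_ok(&zl[i], order))
                                return &zl[i];
                return NULL;  /* all zones exhausted: ALLOC_NO_WATERMARKS path */
        }

        int main(void)
        {
                const struct zone zl[] = {
                        { "Normal", 10, 32 },   /* below watermark */
                        { "DMA32",  40, 32 },   /* enough free pages */
                        { "DMA",     8, 16 },   /* below watermark */
                };
                const struct zone *z = alloc_from_zonelist(zl, 3, 0);
                printf("%s\n", z ? z->name : "all exhausted; use reserves");
                return 0;
        }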

    > > The use of ALLOC_NO_WATERMARKS depends on the constraints of the allocation
    > > in all cases. You can only compare the stresslevel (rank?) of allocations
    > > that have the same allocation constraints. The allocation constraints are
    > > a result of gfp flags,
    >
    > The gfp zone mask is constant per slab, no? It has to be, because the
    > zone mask is only used when the slab is extended; other allocations
    > live off whatever was there before them.

    The gfp zone mask is used to select the zones in an SMP config, but not
    in a NUMA configuration, where the zones can come from multiple nodes.
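
    A user-space sketch of why NUMA changes the picture (the two-node
    topology below is invented): each node builds its own zonelist, local
    zones first, so the same gfp zone mask resolves to a different set of
    zones depending on the node the allocation starts from:

        #include <stdio.h>

        #define MAX_ZONES_PER_LIST 4

        struct zone {
                int node;
                const char *name;
        };

        /* Invented layout: node 0 has Normal and DMA32; node 1 only Normal. */
        static const struct zone zones[] = {
                { 0, "Normal" }, { 0, "DMA32" }, { 1, "Normal" },
        };

        /* Zonelist for an allocation starting on 'node': local zones
         * first, then remote ones, mimicking node-ordered fallback. */
        static int build_zonelist(int node, const struct zone **out)
        {
                int n = 0;
                for (int pass = 0; pass < 2; pass++)
                        for (unsigned i = 0; i < sizeof(zones)/sizeof(zones[0]); i++)
                                if ((pass == 0) == (zones[i].node == node))
                                        out[n++] = &zones[i];
                return n;
        }

        int main(void)
        {
                const struct zone *zl[MAX_ZONES_PER_LIST];
                for (int node = 0; node < 2; node++) {
                        int n = build_zonelist(node, zl);
                        printf("node %d zonelist:", node);
                        for (int i = 0; i < n; i++)
                                printf(" %s(node %d)", zl[i]->name, zl[i]->node);
                        printf("\n");
                }
                return 0;
        }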

    OK, in an SMP configuration the zones are determined by the allocation
    flags. But then there are also the gfp flags that influence reclaim
    behavior, and these too have an influence on the memory pressure.

    These are

    __GFP_IO
    __GFP_FS
    __GFP_NOMEMALLOC
    __GFP_NOFAIL
    __GFP_NORETRY
    __GFP_REPEAT

    An allocation that can call into a filesystem or do I/O will have much
    less memory pressure to contend with. Are the ranks for an allocation
    with __GFP_IO|__GFP_FS really comparable with those for an allocation
    that does not have these flags set?
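
    A toy model of that comparability problem (the page counts are
    invented): how much memory an allocation could reclaim for itself
    depends on its gfp flags, so a single rank number does not mean the
    same thing under different flag sets:

        #include <stdio.h>

        #define SK_GFP_IO 0x1u   /* invented stand-in for __GFP_IO */
        #define SK_GFP_FS 0x2u   /* invented stand-in for __GFP_FS */

        /* Pages this allocation could plausibly free under pressure. */
        static long reclaimable_pages(unsigned int gfp)
        {
                long pages = 100;        /* clean, immediately freeable */
                if (gfp & SK_GFP_IO)
                        pages += 400;    /* dirty pages needing writeback */
                if (gfp & SK_GFP_FS)
                        pages += 200;    /* pages needing fs involvement */
                return pages;
        }

        int main(void)
        {
                printf("GFP_KERNEL-like: %ld reclaimable\n",
                       reclaimable_pages(SK_GFP_IO | SK_GFP_FS));
                printf("GFP_NOIO-like:   %ld reclaimable\n",
                       reclaimable_pages(0));
                return 0;
        }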

    > > cpuset configuration and memory policies in effect.
    >
    > Yes, I see now that these might become an issue, I will have to think on
    > this.

    Note that we have not yet investigated what weird effects memory policy
    constraints can have on this. There are issues with memory policies
    only applying to certain zones...