 
    Subject: Re: [PATCH 1/4] compcache: xvmalloc memory allocator
    Hi Pekka,

    On 08/24/2009 11:03 PM, Pekka Enberg wrote:

    <snip>

    > On Mon, Aug 24, 2009 at 7:37 AM, Nitin Gupta <ngupta@vflare.org> wrote:
    >> +/**
    >> + * xv_malloc - Allocate block of given size from pool.
    >> + * @pool: pool to allocate from
    >> + * @size: size of block to allocate
    >> + * @pagenum: page no. that holds the object
    >> + * @offset: location of object within pagenum
    >> + *
    >> + * On success, <pagenum, offset> identifies block allocated
    >> + * and 0 is returned. On failure, <pagenum, offset> is set to
    >> + * 0 and -ENOMEM is returned.
    >> + *
    >> + * Allocation requests with size > XV_MAX_ALLOC_SIZE will fail.
    >> + */
    >> +int xv_malloc(struct xv_pool *pool, u32 size, u32 *pagenum, u32 *offset,
    >> +              gfp_t flags)

    <snip>

    >
    > What's the purpose of passing PFNs around? There's quite a lot of PFN
    > to struct page conversion going on because of it. Wouldn't it make
    > more sense to return (and pass) a pointer to struct page instead?


    PFNs are 32-bit on all archs, while a 'struct page *' is 32 or 64 bits
    depending on the arch. ramzswap allocates a table entry <pagenum, offset>
    for every swap slot, so storing pointers instead of PFNs would unnecessarily
    increase the table size on 64-bit archs. The same argument applies to the
    xvmalloc free list sizes.
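
    To make the size difference concrete, here is a rough sketch (field names
    are just illustrative, not the actual ramzswap definitions):

    #include <linux/types.h>	/* u32, u16 */

    struct table_entry_pfn {	/* 8 bytes on both 32-bit and 64-bit */
    	u32 pagenum;		/* PFN of the page holding the object */
    	u16 offset;		/* offset of the object within that page */
    	u16 flags;
    };

    struct table_entry_ptr {	/* grows to 16 bytes on 64-bit after padding */
    	struct page *page;	/* 8 bytes on 64-bit archs */
    	u16 offset;
    	u16 flags;
    };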

    Also, xvmalloc and ramzswap themselves do the PFN -> 'struct page *'
    conversion only when freeing a page or when a dereferenceable pointer
    is needed.
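
    When a dereferenceable pointer is needed, it is just the usual pattern,
    roughly (only a sketch; xvmalloc's own helpers may wrap this differently):

    	struct page *page = pfn_to_page(pagenum);
    	void *obj = kmap(page) + offset;	/* pointer to the object */
    	/* ... access the object ... */
    	kunmap(page);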

    Thanks,
    Nitin


