    Subject: Re: [PATCH 3/7] zram: Speed insertion of new pages with cached idx
    On Wed, Jan 26, 2011 at 7:23 PM, Robert Jennings <rcj@linux.vnet.ibm.com> wrote:
    > Calculate the first- and second-level indices for a new page when the pool
    > is initialized rather than calculating them on each insertion.
    >
    > Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
    > ---
    >  drivers/staging/zram/xvmalloc.c     |   13 +++++++++++--
    >  drivers/staging/zram/xvmalloc_int.h |    4 ++++
    >  2 files changed, 15 insertions(+), 2 deletions(-)
    >
    > diff --git a/drivers/staging/zram/xvmalloc.c b/drivers/staging/zram/xvmalloc.c
    > index 3fdbb8a..a507f95 100644
    > --- a/drivers/staging/zram/xvmalloc.c
    > +++ b/drivers/staging/zram/xvmalloc.c
    > @@ -184,8 +184,13 @@ static void insert_block(struct xv_pool *pool, struct page *page, u32 offset,
    >        u32 flindex, slindex;
    >        struct block_header *nextblock;
    >
    > -       slindex = get_index_for_insert(block->size);
    > -       flindex = slindex / BITS_PER_LONG;
    > +       if (block->size >= (PAGE_SIZE - XV_ALIGN)) {
    > +               slindex = pagesize_slindex;
    > +               flindex = pagesize_flindex;
    > +       } else {
    > +               slindex = get_index_for_insert(block->size);
    > +               flindex = slindex / BITS_PER_LONG;
    > +       }
    >
    >        block->link.prev_page = 0;
    >        block->link.prev_offset = 0;
    > @@ -316,6 +321,10 @@ struct xv_pool *xv_create_pool(void)
    >        if (!pool)
    >                return NULL;
    >
    > +       /* cache the first/second-level indices for PAGE_SIZE allocations */
    > +       pagesize_slindex = get_index_for_insert(PAGE_SIZE);
    > +       pagesize_flindex = pagesize_slindex / BITS_PER_LONG;

    Why is this in xv_create_pool()? AFAICT, it can be called multiple
    times if there's more than one zram device, so these would be
    recomputed for every pool. Do we really need variables for these at
    all? They look like something GCC constant propagation should take
    care of if they were defines or static inline functions.
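
    For example, a minimal sketch of that suggestion (untested; the
    helper names here are mine, not from the patch):

        /*
         * Assuming get_index_for_insert() is simple enough for GCC to
         * constant-fold when given the compile-time constant PAGE_SIZE,
         * these helpers would replace the file-scope variables and the
         * initialization in xv_create_pool().
         */
        static inline u32 pagesize_slindex(void)
        {
                return get_index_for_insert(PAGE_SIZE);
        }

        static inline u32 pagesize_flindex(void)
        {
                return pagesize_slindex() / BITS_PER_LONG;
        }

    insert_block() would then call the helpers instead of reading the
    cached variables.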

    > +
    >        spin_lock_init(&pool->lock);
    >
    >        return pool;
    > diff --git a/drivers/staging/zram/xvmalloc_int.h b/drivers/staging/zram/xvmalloc_int.h
    > index 051a49b..68db384 100644
    > --- a/drivers/staging/zram/xvmalloc_int.h
    > +++ b/drivers/staging/zram/xvmalloc_int.h
    > @@ -52,6 +52,10 @@
    >
    >  /* End of user params */
    >
    > +/* Cache of the first/second-level indices for PAGE_SIZE allocations */
    > +static u32 pagesize_slindex;
    > +static u32 pagesize_flindex;

    This doesn't look right. You shouldn't put variable definitions in
    header files; every .c file that includes this header would get its
    own private copy of these statics.
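
    If the cached variables are kept at all, a sketch of the usual fix
    (the placement is my suggestion, not from the patch) is to define
    them once in xvmalloc.c instead:

        /* In drivers/staging/zram/xvmalloc.c, not the header: */

        /* Cache of the first/second-level indices for PAGE_SIZE allocations */
        static u32 pagesize_slindex;
        static u32 pagesize_flindex;

    That way there is exactly one instance, owned by the file that
    actually initializes and reads them.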

    > +
    >  enum blockflags {
    >        BLOCK_FREE,
    >        PREV_FREE,
    > --
    > 1.6.0.2
