    From: Dan Williams <dan.j.williams@intel.com>
    Date: Sun, 17 Dec 2017
    Subject: Re: [PATCH 11/17] mm: move get_dev_pagemap out of line

    On Fri, Dec 15, 2017 at 6:09 AM, Christoph Hellwig <hch@lst.de> wrote:
    > This is a pretty big function, which should be out of line in general,
    > and a no-op stub if CONFIG_ZONE_DEVICE is not set.
    >
    > Signed-off-by: Christoph Hellwig <hch@lst.de>
    > Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
    [..]
    > +/**
    > + * get_dev_pagemap() - take a new live reference on the dev_pagemap for @pfn
    > + * @pfn: page frame number to lookup page_map
    > + * @pgmap: optional known pgmap that already has a reference
    > + *
    > + * @pgmap allows the overhead of a lookup to be bypassed when @pfn lands in the
    > + * same mapping.
    > + */
    > +struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
    > +		struct dev_pagemap *pgmap)
    > +{
    > +	const struct resource *res = pgmap ? pgmap->res : NULL;
    > +	resource_size_t phys = PFN_PHYS(pfn);
    > +
    > +	/*
    > +	 * In the cached case we're already holding a live reference so
    > +	 * we can simply do a blind increment
    > +	 */
    > +	if (res && phys >= res->start && phys <= res->end) {
    > +		percpu_ref_get(pgmap->ref);
    > +		return pgmap;
    > +	}

    I was going to suggest keeping the cached case in the static inline, but
    with the optimization to the calling convention in the following patch I
    think that suggestion is moot.

    So,

    Reviewed-by: Dan Williams <dan.j.williams@intel.com>
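
    A minimal usage sketch, not part of the patch: it shows how a caller might
    exercise the cached path described in the kerneldoc above, assuming the
    reworked calling convention from the following patch in the series, where
    the final reference is dropped with put_dev_pagemap() after the loop. The
    start_pfn/end_pfn range and the loop body are hypothetical.

	/* assumes <linux/memremap.h> for get_dev_pagemap()/put_dev_pagemap() */
	struct dev_pagemap *pgmap = NULL;
	unsigned long pfn;

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {	/* hypothetical range */
		/*
		 * Passing the pgmap from the previous iteration back in
		 * lets the lookup be skipped when the pfn lands in the
		 * same mapping (the cached case quoted above).
		 */
		pgmap = get_dev_pagemap(pfn, pgmap);
		if (!pgmap)
			break;
		/* ... operate on the ZONE_DEVICE page for this pfn ... */
	}
	if (pgmap)
		put_dev_pagemap(pgmap);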
