Subject: Re: [PATCH rdma-next v4 4/4] RDMA/umem: Move to allocate SG table from pages
On Tue, Sep 29, 2020 at 04:59:29PM -0300, Jason Gunthorpe wrote:
> On Sun, Sep 27, 2020 at 09:46:47AM +0300, Leon Romanovsky wrote:
> > @@ -296,11 +223,17 @@ static struct ib_umem *__ib_umem_get(struct ib_device *device,
> > goto umem_release;
> >
> > cur_base += ret * PAGE_SIZE;
> > - npages -= ret;
> > -
> > - sg = ib_umem_add_sg_table(sg, page_list, ret,
> > - dma_get_max_seg_size(device->dma_device),
> > - &umem->sg_nents);
> > + npages -= ret;
> > + sg = __sg_alloc_table_from_pages(
> > + &umem->sg_head, page_list, ret, 0, ret << PAGE_SHIFT,
> > + dma_get_max_seg_size(device->dma_device), sg, npages,
> > + GFP_KERNEL);
> > + umem->sg_nents = umem->sg_head.nents;
> > + if (IS_ERR(sg)) {
> > + unpin_user_pages_dirty_lock(page_list, ret, 0);
> > + ret = PTR_ERR(sg);
> > + goto umem_release;
> > + }
> > }
> >
> > sg_mark_end(sg);
>
> Does it still need the sg_mark_end?

It is preserved here for correctness; the release logic doesn't rely on
this marker, but it is better to leave it in place.
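
For what it's worth, a minimal sketch of why the end marker can still
matter, assuming a hypothetical walker (count_mapped_entries() below is
not from the patch) that follows sg_next() instead of iterating over an
explicit nents count:

#include <linux/scatterlist.h>

/*
 * Hypothetical walker, not part of the patch: it stops only because
 * sg_next() returns NULL at the entry flagged by sg_mark_end().
 */
static unsigned int count_mapped_entries(struct scatterlist *sgl)
{
	struct scatterlist *sg;
	unsigned int n = 0;

	for (sg = sgl; sg; sg = sg_next(sg))
		n++;

	return n;
}

Without the end marker on the last entry, such a walk would step past
the end of the table.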

Thanks

>
> Jason
