Date: 2020-09-29
From: Jason Gunthorpe
Subject: Re: [PATCH rdma-next v4 4/4] RDMA/umem: Move to allocate SG table from pages
On Sun, Sep 27, 2020 at 09:46:47AM +0300, Leon Romanovsky wrote:
> @@ -296,11 +223,17 @@ static struct ib_umem *__ib_umem_get(struct ib_device *device,
> goto umem_release;
>
> cur_base += ret * PAGE_SIZE;
> - npages -= ret;
> -
> - sg = ib_umem_add_sg_table(sg, page_list, ret,
> - dma_get_max_seg_size(device->dma_device),
> - &umem->sg_nents);
> + npages -= ret;
> + sg = __sg_alloc_table_from_pages(
> + &umem->sg_head, page_list, ret, 0, ret << PAGE_SHIFT,
> + dma_get_max_seg_size(device->dma_device), sg, npages,
> + GFP_KERNEL);
> + umem->sg_nents = umem->sg_head.nents;
> + if (IS_ERR(sg)) {
> + unpin_user_pages_dirty_lock(page_list, ret, 0);
> + ret = PTR_ERR(sg);
> + goto umem_release;
> + }
> }
>
> sg_mark_end(sg);

Does it still need the sg_mark_end?
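
FWIW, if the __sg_alloc_table_from_pages() variant added earlier in this
series already terminates the table once left_pages reaches zero (that is
the assumption here, not something shown in this hunk), then the tail of
__ib_umem_get() could presumably end up looking something like the rough,
untested sketch below, with no explicit sg_mark_end() at all:

	while (npages) {
		/* ... pin_user_pages_fast() fills page_list, ret = pages pinned ... */

		cur_base += ret * PAGE_SIZE;
		npages -= ret;
		sg = __sg_alloc_table_from_pages(
			&umem->sg_head, page_list, ret, 0, ret << PAGE_SHIFT,
			dma_get_max_seg_size(device->dma_device), sg, npages,
			GFP_KERNEL);
		umem->sg_nents = umem->sg_head.nents;
		if (IS_ERR(sg)) {
			unpin_user_pages_dirty_lock(page_list, ret, 0);
			ret = PTR_ERR(sg);
			goto umem_release;
		}
	}

	/*
	 * No sg_mark_end() here: npages is 0 on the last loop iteration, so
	 * the helper would already have marked the final entry (assumption
	 * stated above).
	 */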

Jason
