Subject: Re: [PATCH 1/2] mm: reorganize internal_get_user_pages_fast()
From: John Hubbard
Date: 2020-10-28

On 10/27/20 6:15 AM, Jason Gunthorpe wrote:
> On Tue, Oct 27, 2020 at 10:33:01AM +0100, Jan Kara wrote:
>> On Fri 23-10-20 21:44:17, John Hubbard wrote:
>>> On 10/23/20 5:19 PM, Jason Gunthorpe wrote:
>>>> +	start += (unsigned long)nr_pinned << PAGE_SHIFT;
>>>> +	pages += nr_pinned;
>>>> +	ret = __gup_longterm_unlocked(start, nr_pages - nr_pinned, gup_flags,
>>>> +				      pages);
>>>> +	if (ret < 0) {
>>>> 		/* Have to be a bit careful with return values */
>>>
>>> ...and can we move that comment up one level, so that it reads:
>>>
>>> 	/* Have to be a bit careful with return values */
>>> 	if (ret < 0) {
>>> 		if (nr_pinned)
>>> 			return nr_pinned;
>>> 		return ret;
>>> 	}
>>> 	return ret + nr_pinned;
>>>
>>> Thinking about this longer term, it would be nice if the whole gup/pup API
>>> set just stopped pretending that anyone cares about partial success, because
>>> they *don't*. If we had return values of "0 or -ERRNO" throughout, and an
>>> additional set of API wrappers that did some sort of limited retry just like
>>> some of the callers do, that would be a happier story.
>>
>> Actually there are callers that care about partial success. See e.g.
>> iov_iter_get_pages() usage in fs/direct_io.c:dio_refill_pages() or
>> bio_iov_iter_get_pages(). These places handle partial success just fine and
>> not allowing partial success from GUP could regress things...
>
> I looked through a bunch of call sites, and there are a whack of them that

So did I! :)

> actually do only want a complete return and are carrying a bunch of
> code to fix it:
>
> 	pvec = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
> 	if (!pvec)
> 		return -ENOMEM;
>
> 	do {
> 		unsigned num_pages = npages - pinned;
> 		uint64_t ptr = userptr->ptr + pinned * PAGE_SIZE;
> 		struct page **pages = pvec + pinned;
>
> 		ret = pin_user_pages_fast(ptr, num_pages,
> 					  !userptr->ro ? FOLL_WRITE : 0, pages);
> 		if (ret < 0) {
> 			unpin_user_pages(pvec, pinned);
> 			kvfree(pvec);
> 			return ret;
> 		}
>
> 		pinned += ret;
>
> 	} while (pinned < npages);
>
> Is really a lot better if written as:
>
> 	pvec = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
> 	if (!pvec)
> 		return -ENOMEM;
> 	ret = pin_user_pages_fast(userptr->ptr, npages, FOLL_COMPLETE |
> 				  (!userptr->ro ? FOLL_WRITE : 0),
> 				  pvec);
> 	if (ret) {
> 		kvfree(pvec);
> 		return ret;
> 	}
>
> (eg FOLL_COMPLETE says to return exactly npages or fail)


Yes, exactly. And if I reverse the polarity (Christoph's FOLL_PARTIAL, instead of
FOLL_COMPLETE), it comes out slightly smaller still, which is where I am leaning now.
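
For illustration, here is a rough sketch of how that same caller might read if
complete-or-fail became the default and partial success had to be requested
explicitly via FOLL_PARTIAL (hypothetical semantics, not an existing flag; the
variable names are taken from Jason's example above):

	pvec = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
	if (!pvec)
		return -ENOMEM;

	/*
	 * Assumed default semantics: without FOLL_PARTIAL, either all
	 * npages get pinned or a negative errno comes back, so the retry
	 * loop and the partial-unpin cleanup disappear.
	 */
	ret = pin_user_pages_fast(userptr->ptr, npages,
				  !userptr->ro ? FOLL_WRITE : 0, pvec);
	if (ret < 0) {
		kvfree(pvec);
		return ret;
	}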


>
> Some code assumes things work that way already anyhow:
>
> 	/* Pin user pages for DMA Xfer */
> 	err = pin_user_pages_unlocked(user_dma.uaddr, user_dma.page_count,
> 				      dma->map, FOLL_FORCE);
>
> 	if (user_dma.page_count != err) {
> 		IVTV_DEBUG_WARN("failed to map user pages, returned %d instead of %d\n",
> 				err, user_dma.page_count);
> 		if (err >= 0) {
> 			unpin_user_pages(dma->map, err);
> 			return -EINVAL;
> 		}
> 		return err;
> 	}
>
> Actually I'm quite surprised I didn't find too many missing the tricky
> unpin_user_pages() on the error path - eg
> videobuf_dma_init_user_locked() is wrong.
>

Well, that's not accidental. "Some People" (with much thanks to Souptick Joarder, btw) have
been fixing up many of those sites during the pin_user_pages() conversions. Otherwise
you would have found about 10 or 15 more.

I'll fix up that one above (using your Reported-by, I assume), unless someone else is
already taking care of it.
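
For reference, the fix for that kind of call site follows the usual pattern: on a
partial pin, the pages that did get pinned still have to be released before
returning an error. A minimal sketch (the names here are illustrative, not the
actual videobuf code):

	ret = pin_user_pages_fast(start, npages, FOLL_WRITE, pages);
	if (ret < 0)
		return ret;
	if (ret != npages) {
		/* Partial pin: drop the pins we did take before bailing out. */
		unpin_user_pages(pages, ret);
		return -EINVAL;
	}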


thanks,
--
John Hubbard
NVIDIA
