Subject: Re: [GIT PULL] io_uring changes for 5.9-rc1
On Mon, Aug 3, 2020 at 4:18 PM Jens Axboe <axboe@kernel.dk> wrote:
>
>
> I took a look at the rewrite you queued up, and made a matching change
> on the io_uring side:

Oh, no, you made it worse.

Now you're tying your odd wakeup routine to entirely irrelevant things
that can't even happen to you.

That io_async_buf_func() will never be called for any entry that isn't
your own, so testing

wait->flags & WQ_FLAG_EXCLUSIVE

is completely pointless, because you never set that flag. And
similarly, for you to then do

wait->flags |= WQ_FLAG_WOKEN;

is equally pointless, because the only thing that cares and looks at
that wait entry is you, and you don't care about the WOKEN flag.

So that patch shows a fundamental misunderstanding of how the
waitqueues actually work.

Which is kind of my _point_. The io_uring code that hooked into the
page wait queues really looks like complete cut-and-paste voodoo
programming.

It needs comments. It's hard to follow. Even somebody like me, who
actually knows how the page wait queues really work, has a really
hard time following how io_uring initializing a wait-queue entry and
pointing to it from the io ctx then interacts with the (later) generic
file reading path, and how that path then calls back at unlock time
into the io_uring callback _if_ the page was locked.

And that patch you point to makes me 100% sure you don't quite
understand the code either.

So when you do

	/*
	 * Only test the bit if it's an exclusive wait, as we know the
	 * bit is cleared for non-exclusive waits. Also see mm/filemap.c
	 */
	if ((wait->flags & WQ_FLAG_EXCLUSIVE) &&
	    test_and_set_bit(key->bit_nr, &key->page->flags))
		return -1;

the first test guarantees that the second test is never done, which is
good, because if it *had* been done, you'd have taken the lock and
nothing you have actually expects that.

So the fix is to just remove those lines entirely. If somebody
unlocked the page you care about, and did a wakeup on that page and
bit, then you know you should start the async worker. No amount of
testing bits matters at all.
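
In other words, the wakeup function can be about this simple - a rough
sketch, not the actual io_uring code, and queue_async_retry() is just a
stand-in name for whatever kicks your async worker:

	static int async_buf_wake(struct wait_queue_entry *wait, unsigned mode,
				  int sync, void *arg)
	{
		struct wait_page_key *key = arg;
		struct wait_page_queue *wpq
			= container_of(wait, struct wait_page_queue, wait);

		/* Not the page/bit this entry was registered for? Keep walking */
		if (wpq->page != key->page || wpq->bit_nr != key->bit_nr)
			return 0;

		/* Done with the page waitqueue, just kick off the retry */
		list_del_init(&wait->entry);
		queue_async_retry(wait->private);

		return 1;
	}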

And similarly, the

wait->flags |= WQ_FLAG_WOKEN;

is a no-op because nothing tests that WQ_FLAG_WOKEN bit. That wait
entry is _your_ wait entry. It's not the wait entry of some normal
page locker - those use wake_page_function().

Now *if* you had workers that actually expected to be woken up with
the page lock already held, and owning it, then that kind of
WQ_FLAG_EXCLUSIVE and WQ_FLAG_WOKEN logic would be a good idea. But
that's not what you have.
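
That exclusive case is what wake_page_function() in mm/filemap.c is
about: the waker tries to hand the bit straight over to the waiter, and
WQ_FLAG_WOKEN is how the waiter then knows the handoff really happened.
Roughly (a sketch, not the exact code):

	if (wait->flags & WQ_FLAG_EXCLUSIVE) {
		/* Try to transfer the lock bit directly to the waiter */
		if (test_and_set_bit(key->bit_nr, &key->page->flags))
			return -1;	/* still locked: stop the walk, wake nobody */
	}

	/* Tell the waiter it really was woken up (and, if exclusive,
	 * that it now owns the bit) */
	wait->flags |= WQ_FLAG_WOKEN;
	wake_up_state(wait->private, mode);
	list_del_init(&wait->entry);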

> and also queued a documentation patch for the retry logic and the
> callback handler:
>
> https://git.kernel.dk/cgit/linux-block/commit/?h=io_uring-5.9&id=9541a9d4791c2d31ba74b92666edd3f1efd936a8

Better. Although I find the first comment a bit misleading.

You say

/* Invoked from our "page is now unlocked" handler when someone ..

but that's not really the case. The function gets called by whoever
unlocks the page after you've registered that page wait entry through
lock_page_async().

So there's no "our handler" anywhere, which I find misleading and
confusing in the comment.
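
IOW, the sequence is: the read path registers that wait entry with
lock_page_async() - something along these lines, sketched from memory,
so the exact shape of the helpers may differ - and then whoever does
the eventual unlock_page() walks the page waitqueue and calls your
function:

	static inline int lock_page_async(struct page *page,
					  struct wait_page_queue *wait)
	{
		if (trylock_page(page))
			return 0;
		/* Didn't get the lock: queue @wait on the page waitqueue and
		 * return -EIOCBQUEUED so the caller backs out and waits for
		 * the unlock-time callback instead of sleeping here */
		return __lock_page_async(page, wait);
	}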

Linus
