Subject: Re: [PATCH RFC v2 3/3] io_uring: batch get(ctx->ref) across submits
From: Jens Axboe
Date: 2019-12-21
On 12/21/19 9:20 AM, Pavel Begunkov wrote:
> On 21/12/2019 19:15, Pavel Begunkov wrote:
>> Double-account ctx->refs, keeping the number of taken refs in ctx. As
>> io_uring gets per-request ctx->refs during submission while holding
>> ctx->uring_lock, this lets us bypass percpu_ref_get*() and its
>> overhead most of the time.
>
> Jens, could you please benchmark with this one? Especially the offloaded QD1
> case. I haven't seen any difference in the nops test and don't have a decent
> SSD at hand to test it myself. We could drop it if there is no benefit.
>
> This rewrites the @extra_refs handling from the second patch, so I left it in for now.
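
[For reference, a stand-alone sketch of the batching idea described above:
take refs from the shared counter in bulk while holding the lock, then hand
them out per request from a cached count. This is not the actual patch; the
names (ref_cache, REF_BATCH, fake_percpu_ref) are made up for illustration,
a plain C11 atomic stands in for percpu_ref, and a pthread mutex stands in
for ctx->uring_lock.]

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define REF_BATCH 64	/* hypothetical batch size */

/* Stand-in for percpu_ref: the "expensive" shared counter. */
static atomic_long fake_percpu_ref;

/* Cached, lock-protected pool of already-taken refs (the double account). */
static long ref_cache;
static pthread_mutex_t uring_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called per request while holding uring_lock, as submission does. */
static void get_req_ref(void)
{
	if (!ref_cache) {
		/* Slow path: grab a whole batch from the shared counter. */
		atomic_fetch_add(&fake_percpu_ref, REF_BATCH);
		ref_cache = REF_BATCH;
	}
	/* Fast path: hand out one cached ref, no shared-counter update. */
	ref_cache--;
}

int main(void)
{
	pthread_mutex_lock(&uring_lock);
	for (int i = 0; i < 200; i++)
		get_req_ref();
	pthread_mutex_unlock(&uring_lock);

	/* 200 requests, but only 4 batched updates of the shared counter. */
	printf("shared refs taken: %ld\n", atomic_load(&fake_percpu_ref));
	return 0;
}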

Sure, let me run a peak test, qd1 test, and qd1+sqpoll test on
for-5.6/io_uring, the same branch with patches 1-2, and the same branch with
patches 1-3. That should give us a good comparison. One core used for all,
and we'll be core speed bound for performance in all cases on this setup.

--
Jens Axboe
