Subject: Re: [Patch 1/4] ipc/mqueue: improve performance of send/recv

On 2 May 2012 03:50, Doug Ledford <dledford@redhat.com> wrote:

> Avg time to send/recv (in nanoseconds per message)
>                           old implementation         new implementation
>  when queue empty            305/288                    349/318
>  when queue full (65528 messages)
>    constant priority      526589/823                    362/314
>    increasing priority    403105/916                    495/445
>    decreasing priority     73420/594                    482/409
>    random priority        280147/920                    546/436
>
> Time to fill/drain queue (65528 messages, in seconds)
>  constant priority         17.37/.12                    .13/.12
>  increasing priority        4.14/.14                    .21/.18
>  decreasing priority       12.93/.13                    .21/.18
>  random priority            8.88/.16                    .22/.17
>
> So, I think the results speak for themselves.  It's possible this
> implementation could be improved by caching at least one priority
> level in the node tree (that would bring the empty-queue performance
> more in line with the old implementation), but this works and is *so*
> much better than what we had, especially for the common case of a
> single priority in use, that further refinements can come in follow-on
> patches.

Nice work! Yeah, I think if you cache the last unused entry, that
should mostly solve the empty-queue regression.

I would imagine most users won't have huge queues, so the
empty-queue case is important too.
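
To make the idea concrete, here is a minimal userspace sketch of the
caching pattern (illustrative only -- prio_node, mq_sketch, node_cache
and the rest are made-up names, and a sorted list of per-priority
buckets stands in for the real node tree). The point is just that a
send into an empty queue reuses the cached spare bucket instead of
allocating on the hot path:

/*
 * Userspace sketch only -- not kernel code.  A sorted singly linked
 * list of per-priority buckets stands in for the node tree.
 */
#include <stdio.h>
#include <stdlib.h>

struct prio_node {                      /* one bucket per priority level */
        int prio;
        int nmsgs;                      /* messages pending at this priority */
        struct prio_node *next;
};

struct mq_sketch {
        struct prio_node *head;         /* highest priority first */
        struct prio_node *node_cache;   /* at most one spare bucket */
};

static struct prio_node *get_node(struct mq_sketch *q, int prio)
{
        struct prio_node *n = q->node_cache;

        if (n)                          /* hot path: reuse the cached spare */
                q->node_cache = NULL;
        else                            /* cold path: fall back to malloc() */
                n = malloc(sizeof(*n));
        if (n) {
                n->prio = prio;
                n->nmsgs = 0;
                n->next = NULL;
        }
        return n;
}

static void put_node(struct mq_sketch *q, struct prio_node *n)
{
        if (!q->node_cache)             /* keep one spare for the next send */
                q->node_cache = n;
        else
                free(n);
}

static int mq_send(struct mq_sketch *q, int prio)
{
        struct prio_node **pp = &q->head, *n;

        while (*pp && (*pp)->prio > prio)
                pp = &(*pp)->next;
        if (*pp && (*pp)->prio == prio) {       /* bucket already exists */
                (*pp)->nmsgs++;
                return 0;
        }
        n = get_node(q, prio);          /* empty-queue sends land here */
        if (!n)
                return -1;
        n->nmsgs = 1;
        n->next = *pp;
        *pp = n;
        return 0;
}

static int mq_recv(struct mq_sketch *q, int *prio)
{
        struct prio_node *n = q->head;

        if (!n)
                return -1;
        *prio = n->prio;
        if (--n->nmsgs == 0) {
                q->head = n->next;
                put_node(q, n);         /* emptied bucket becomes the spare */
        }
        return 0;
}

int main(void)
{
        struct mq_sketch q = { NULL, NULL };
        int prio;

        mq_send(&q, 5);                 /* allocates the first bucket */
        mq_recv(&q, &prio);             /* bucket is cached, not freed */
        mq_send(&q, 5);                 /* send to empty queue: no malloc() */
        printf("got a message at priority %d\n", prio);
        return 0;
}

Caching a single node is enough for the single-priority case, which is
exactly the one your numbers show regressing when the queue is empty.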