Subject: Re: IPC lock patch performance improvement
On Mon, 5 Aug 2002, Duc Vianney wrote:
> I ran the LMbench Pipe and IPC latency test bucket against the IPC lock
> patch from Mingming Cao and found that the patch improves the performance
> of those functions by 1% to 9%. See the attached data. The kernel under
> test is 2.5.29, an SMP kernel running on a 4-way 500 MHz machine. The
> data for 2.5.29s4-ipc represents the average of three runs.
>
> Test                                  2.5.29s4   2.5.29s4-ipc   Improvement
> Pipe latency                             12.51          11.43            9%
> AF_Unix sock stream latency              21.61          19.82            8%
> UDP latency using localhost              36.28          35.12            3%
> TCP latency using localhost              56.90          54.89            4%
> RPC/tcp latency using localhost         123.30         121.91            1%
> RPC/udp latency using localhost          89.78          88.70            1%
> TCP/IP connection cost to localhost     192.74         187.76            3%
>
> Note: latency is in microseconds.
> Note: 2.5.29s4 is the base 2.5.29 SMP kernel running on a 4-way;
> 2.5.29s4-ipc is the same kernel built with the IPC lock patch.

Please show me I'm wrong, but so far as I can see (from the source and
from breakpoints), LMbench never touches the SysV IPC code, which is the
only code affected by Mingming's proposed IPC locking changes. I believe
LMbench tests interprocess communication via pipes and sockets, not via
the SysV IPC msg, sem and shm primitives.
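
For concreteness, what the "Pipe latency" number measures is roughly a
byte bounced between two processes through a pair of pipes, something
like the sketch below. This is my own illustration, not LMbench's
actual lat_pipe source (iteration count is arbitrary), but note that
nothing in it goes anywhere near ipc/msg.c, ipc/sem.c or ipc/shm.c:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
            int iters = 100000, i;
            int ptoc[2], ctop[2];   /* parent->child, child->parent */
            char c = 'x';
            struct timeval t0, t1;

            if (pipe(ptoc) < 0 || pipe(ctop) < 0) {
                    perror("pipe");
                    exit(1);
            }

            if (fork() == 0) {
                    /* child: bounce each byte straight back */
                    for (i = 0; i < iters; i++) {
                            if (read(ptoc[0], &c, 1) != 1 ||
                                write(ctop[1], &c, 1) != 1)
                                    exit(1);
                    }
                    exit(0);
            }

            /* parent: time iters round trips */
            gettimeofday(&t0, NULL);
            for (i = 0; i < iters; i++) {
                    write(ptoc[1], &c, 1);
                    read(ctop[0], &c, 1);
            }
            gettimeofday(&t1, NULL);

            printf("pipe round-trip: %.2f usec\n",
                   ((t1.tv_sec - t0.tv_sec) * 1e6 +
                    (t1.tv_usec - t0.tv_usec)) / iters);

            wait(NULL);
            return 0;
    }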

If that's right, then your improvement is magical; but we can
hope for even better when the appropriate codepaths are tested.
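
Something along these lines, an untested sketch of mine rather than any
LMbench test, would actually exercise the SysV message queue paths the
patch changes; analogous ping-pongs using semop or shmat/shmdt would
cover sem and shm:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>
    #include <sys/time.h>
    #include <sys/wait.h>
    #include <unistd.h>

    struct mbuf {
            long mtype;
            char mtext[1];
    };

    int main(void)
    {
            int iters = 100000, i;
            int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
            struct mbuf m;
            struct timeval t0, t1;

            if (qid < 0) {
                    perror("msgget");
                    exit(1);
            }

            if (fork() == 0) {
                    /* child: receive a type-1 message, echo as type 2 */
                    for (i = 0; i < iters; i++) {
                            if (msgrcv(qid, &m, sizeof(m.mtext), 1, 0) < 0)
                                    exit(1);
                            m.mtype = 2;
                            if (msgsnd(qid, &m, sizeof(m.mtext), 0) < 0)
                                    exit(1);
                    }
                    exit(0);
            }

            /* parent: time iters msgsnd/msgrcv round trips */
            gettimeofday(&t0, NULL);
            for (i = 0; i < iters; i++) {
                    m.mtype = 1;
                    msgsnd(qid, &m, sizeof(m.mtext), 0);
                    msgrcv(qid, &m, sizeof(m.mtext), 2, 0);
            }
            gettimeofday(&t1, NULL);

            printf("SysV msg round-trip: %.2f usec\n",
                   ((t1.tv_sec - t0.tv_sec) * 1e6 +
                    (t1.tv_usec - t0.tv_usec)) / iters);

            wait(NULL);
            msgctl(qid, IPC_RMID, NULL);
            return 0;
    }

A run of that before and after the patch, on the same 4-way, would show
whether the locking changes help where they are actually taken.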

Hugh

