Date: 7 Sep 2004
Subject: Re: [discuss] f_ops flag to speed up compatible ioctls in linux kernel
Hello!
Quoting Andi Kleen (ak@suse.de) "Re: [discuss] f_ops flag to speed up compatible ioctls in linux kernel":
> On Wed, Sep 01, 2004 at 10:22:45AM +0300, Michael S. Tsirkin wrote:
> > Hello!
> > Currently, on the x86_64 architecture, it's quite tricky to make
> > a char device ioctl work for x86 executables.
> > In particular,
> > 1. there is a requirement that the ioctl number be unique -
> > which is hard to guarantee, especially for out-of-kernel modules
>
> Yes, that is a problem for some people. But you should
> have used a unique number in the first place.

Do you mean the _IOC macro and friends?
But their uniqueness depends on allocating a unique magic number
in the first place.
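
For example, a driver typically defines its numbers along these lines
(the MYDRV_* names are made up for illustration) - the magic character
is picked by hand, and nothing stops another out-of-tree driver from
picking the same one:

/* Typical ioctl number definitions (identifiers made up for illustration).
 * _IO()/_IOR()/_IOW() encode direction, size and command number, but the
 * "magic" character itself is just a manual allocation. */
#include <linux/ioctl.h>

#define MYDRV_IOC_MAGIC  'm'                           /* hand-picked magic */
#define MYDRV_IOC_NOP    _IO(MYDRV_IOC_MAGIC, 0)
#define MYDRV_IOC_GETVAL _IOR(MYDRV_IOC_MAGIC, 1, int)
#define MYDRV_IOC_SETVAL _IOW(MYDRV_IOC_MAGIC, 2, int)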

> There are some hackish ways to work around it for non modules[1], but at some
> point we should probably support it better.
>
> [1] it can be handled, except for module unloading, so you have
> to disable that.

Why use the global hash at all?
Why not, for example, pass a parameter to the ioctl function
to make it possible to figure out that this is a compat call?
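
Just to illustrate what I mean - this is purely hypothetical, not an
existing interface: the driver's own ioctl could be told that the caller
is 32-bit and do any conversion itself, with no global registration and
no hash walk:

/* Hypothetical sketch only - this is NOT an existing kernel interface.
 * The idea: instead of a global hash, tell the driver's own ioctl that
 * the caller is a 32-bit process and let it convert the argument itself. */
static int mydrv_ioctl(struct inode *inode, struct file *filp,
                       unsigned int cmd, unsigned long arg,
                       int is_compat)
{
        switch (cmd) {
        case MYDRV_IOC_SETVAL:
                if (is_compat) {
                        /* 32-bit caller: argument layout may differ,
                         * convert it here - no lock_kernel(), no lookup */
                }
                /* ... normal handling ... */
                return 0;
        default:
                return -ENOTTY;
        }
}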

> > 2. there's a huge performance overhead for each compat call - there's
> > a lookup in a global hash, done under lock_kernel -
> > and I think compat performance *is* important.
>
> Did you actually measure it? I doubt it is a big issue.
>

But that would depend on what the driver actually does inside
the ioctl and on how many ioctls are already registered, would it not?
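
For reference, this is the kind of per-command registration that
populates the hash today - roughly (a sketch using the made-up MYDRV_*
numbers from above; a NULL handler means the command is compatible as-is
and the native ioctl should just be called):

#include <linux/init.h>
#include <linux/ioctl32.h>

static int __init mydrv_register_compat(void)
{
        /* every registered command is one more entry that has to be
         * walked under lock_kernel() on each 32-bit ioctl;
         * return values ignored here for brevity */
        register_ioctl32_conversion(MYDRV_IOC_NOP, NULL);
        register_ioctl32_conversion(MYDRV_IOC_GETVAL, NULL);
        return 0;
}

static void __exit mydrv_unregister_compat(void)
{
        unregister_ioctl32_conversion(MYDRV_IOC_NOP);
        unregister_ioctl32_conversion(MYDRV_IOC_GETVAL);
}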

I built a silly driver example which just used a semaphore and a switch
statement inside the ioctl.
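
Roughly like this (a reconstruction - the identifiers are made up, but
the structure is as described):

/* Rough reconstruction of the test driver's ioctl: take a semaphore,
 * switch on the command, do nothing, release it. */
#include <linux/fs.h>
#include <linux/ioctl.h>
#include <asm/semaphore.h>

#define TESTDRV_IOC_NOP  _IO('t', 0)    /* hand-picked magic, as above */

static DECLARE_MUTEX(testdrv_sem);

static int testdrv_ioctl(struct inode *inode, struct file *filp,
                         unsigned int cmd, unsigned long arg)
{
        int ret = 0;

        if (down_interruptible(&testdrv_sem))
                return -ERESTARTSYS;

        switch (cmd) {
        case TESTDRV_IOC_NOP:
                break;          /* nothing to do - we only measure dispatch cost */
        default:
                ret = -ENOTTY;
        }

        up(&testdrv_sem);
        return ret;
}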

~/<1>tavor/tools/driver_new>time /tmp/ioctltest64 /dev/mst/mt23108_pci_cr0
0.357u 4.760s 0:05.11 100.0% 0+0k 0+0io 0pf+0w
~/<1>tavor/tools/driver_new>time /tmp/ioctltest32 /dev/mst/mt23108_pci_cr0
0.641u 6.486s 0:07.12 100.0% 0+0k 0+0io 0pf+0w

So just looking at system time (4.76s vs 6.49s) there seems to be an
overhead of about 35% for the compat call.
The overhead is bigger if there are collisions in the hash.
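
The ioctltest binaries themselves are trivial - the same source built as
a 32-bit and a 64-bit executable, opening the device and issuing the same
ioctl in a tight loop, roughly (a reconstruction, the loop count is made
up):

/* Rough reconstruction of the user-space test: open the device and issue
 * the same ioctl many times so per-call dispatch overhead dominates. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

#define TESTDRV_IOC_NOP  _IO('t', 0)    /* must match the driver */

int main(int argc, char **argv)
{
        long i;
        int fd;

        if (argc < 2)
                return 1;
        fd = open(argv[1], O_RDWR);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        for (i = 0; i < 1000000; i++)
                ioctl(fd, TESTDRV_IOC_NOP, 0);
        close(fd);
        return 0;
}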

For multi-processor scenarios, the difference is much more pronounced
(note that I have a dual-CPU Opteron system):

~>time /tmp/ioctltest32 /dev/mst/mt23108_pci_cr0 & ;time /tmp/ioctltest32
/dev/mst/mt23108_pci_cr0 &
[2] 10829
[3] 10830
[2] Done /tmp/ioctltest32 /dev/mst/mt23108_pci_cr0
0.435u 21.322s 0:21.76 99.9% 0+0k 0+0io 0pf+0w
[3] Done /tmp/ioctltest32 /dev/mst/mt23108_pci_cr0
0.683u 21.231s 0:21.92 99.9% 0+0k 0+0io 0pf+0w
~>


~>time /tmp/ioctltest64 /dev/mst/mt23108_pci_cr0 & ;time /tmp/ioctltest64
/dev/mst/mt23108_pci_cr0 &
[2] 10831
[3] 10832
[3] Done /tmp/ioctltest64 /dev/mst/mt23108_pci_cr0
0.474u 11.194s 0:11.70 99.6% 0+0k 0+0io 0pf+0w
[2] Done /tmp/ioctltest64 /dev/mst/mt23108_pci_cr0
0.476u 11.277s 0:11.75 99.9% 0+0k 0+0io 0pf+0w
~>

So on two CPUs the 32-bit runs take almost twice as long as the 64-bit
ones - roughly a 50% drop in throughput.
I imagine this is the result of BKL contention during the hash lookup.
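
To illustrate where the contention comes from, here is a purely schematic
version of the compat dispatch path (not the real fs/compat.c code;
global_hash_lookup is a made-up stand-in for the actual table walk):

typedef int (*ioctl32_handler_t)(unsigned int fd, unsigned int cmd,
                                 unsigned long arg);

/* Schematic only: the point is that every 32-bit ioctl takes the BKL and
 * walks one global, kernel-wide table before the driver is ever reached. */
static long compat_ioctl_dispatch(unsigned int fd, unsigned int cmd,
                                  unsigned long arg)
{
        ioctl32_handler_t h;
        long ret = -EINVAL;

        lock_kernel();                  /* both CPUs serialize right here   */
        h = global_hash_lookup(cmd);    /* hypothetical name for the lookup */
        if (h)
                ret = h(fd, cmd, arg);
        unlock_kernel();
        return ret;
}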

MST
