Date: Thu, 17 Oct 96 18:37:33 PDT
From: Matthew Jacob
Subject: "raw" I/O....
I had to do some work a while back along these lines: for performance reasons, reads off of a disk into the buffer cache, out to user space, and thence out a network were killing a poor Pentium 150.
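For concreteness, here's a minimal sketch of that conventional path (the function name and block size are mine, not from the setup above): every block gets copied kernel -> user on the read() and user -> kernel again on the write() out the socket.

#include <sys/types.h>
#include <unistd.h>

#define BLKSZ (64 * 1024)   /* illustrative block size, not from the original setup */

/* Conventional path: each block is copied disk -> buffer cache ->
 * user buffer on read(), then user buffer -> socket buffers on write(). */
int copy_loop(int diskfd, int sockfd)
{
    char buf[BLKSZ];
    ssize_t n;

    while ((n = read(diskfd, buf, sizeof(buf))) > 0) {
        if (write(sockfd, buf, (size_t)n) != n)
            return -1;
    }
    return (n < 0) ? -1 : 0;
}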
The q&d solution was to read into static kernel buffers and use mmap in the user process to get at the data, fiddle with stuff, and ship it out (this would probably have cache coherence problems on anything but an x86...). This solved the problem; the customer was happy.
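Roughly, the user-space side looked like the sketch below; the device name, buffer size, and the fill ioctl are hypothetical stand-ins, since the real driver isn't shown here. The point is that the only copy left is the one out the socket.

#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

#define KBUF_SIZE    (256 * 1024)   /* assumed size of the driver's static kernel buffer */
#define FASTBUF_FILL 0x4601         /* hypothetical "fill buffer from disk" ioctl */

int stream_one_buffer(int devfd, int sockfd)
{
    /* Map the driver's static kernel buffer; no copy into user memory. */
    char *kbuf = mmap(NULL, KBUF_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, devfd, 0);
    if (kbuf == MAP_FAILED)
        return -1;

    /* Hypothetical ioctl: the driver reads from disk straight into kbuf. */
    ssize_t filled = ioctl(devfd, FASTBUF_FILL, 0);
    if (filled > 0) {
        /* ... fiddle with the data in place ... */
        if (write(sockfd, kbuf, (size_t)filled) != filled)
            filled = -1;            /* the one remaining copy: user mapping -> socket */
    }

    munmap(kbuf, KBUF_SIZE);
    return (filled < 0) ? -1 : 0;
}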
I briefly worked on making mmap work for sd.c after this. I certainly got it to work (after some nice encouragement from Linus), but what stopped me from going further is that unless mapped pages are sufficiently big, or the VM subsystem does sufficient clustering (a la SunOS, and Larry McVoy's UFS clustering for same), there isn't that big a win in terms of performance, so I didn't finish it off and package it up (what would have been the point?).
This will certainly come back again, if only for some stuff I've been thinking about (I need to do 60-90 MB/s streaming between RAIDV disk and HIPPI), and I certainly don't want to have to screw around with copying in/out of user space (and I'm not even sure NFSv3 is an option here...).
This isn't an argument for raw I/O: it's just an observation that the model of secondarymem<>primarymem<>userapp is not necessarily the one worth making the most efficient. Perhaps this is why more kernel processes/daemons have crept into the kernel since the 1.2.x days?