Subject: Re: PATCH: Raw device IO for 2.1.131

On Sat, 12 Dec 1998, David S. Miller wrote:

> Linus, if what you want to say is that it's "ok" to have the data go
> in and out of the CPU cache for every I/O, and that twiddling block
> and inode allocation bits in the filesystem code is "ok" for every
> I/O, then you need to have your head seriously examined.

we _do_ have a zero-copy mechanism in there, it's just not really doing
zero-copy currently :)

With mmap()+copyfd() we don't ever copy the data, we just start off DMA
requests straight from kernel space. (Let's assume we have fixed the
page-cache stupidity of copying on writeout, and let's assume we have a
smart copyfd().)
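
Roughly, in user space (copyfd() is hypothetical, nothing like it exists
in 2.1, and the other names below are invented for illustration only):

#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* hypothetical syscall, invented here for illustration: re-link count
 * bytes of src_fd's page-cache pages onto dst_fd and let the block
 * layer DMA them out; the data itself is never memcpy()-ed */
extern ssize_t copyfd(int dst_fd, int src_fd, size_t count);

int produce_and_write(int src_fd, int dst_fd, size_t len)
{
	/* generate the data directly in src_fd's page cache
	 * (assume src_fd has already been sized to len bytes) */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_SHARED, src_fd, 0);
	if (buf == MAP_FAILED)
		return -1;
	buf[0] = 42;			/* ... fill in the payload ... */
	munmap(buf, len);

	/* zero-copy: the very same pages get queued for DMA to dst_fd */
	return copyfd(dst_fd, src_fd, len) == (ssize_t)len ? 0 : -1;
}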

On the read() side, if bmap() were a performance problem, we could cache
(page => block) mappings in struct page. (But really, it isn't a problem,
at least not on the IO subsystems I work with.)
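
Something like this invented sketch (the real struct page and the real
bmap() hook look different, this is only the shape of the idea):

/* illustrative only: cache the on-disk block behind each page-cache
 * page so repeated reads skip the filesystem's bmap() call */
struct page_sketch {
	unsigned long blocknr;		/* 0 == mapping not cached yet */
	/* ... the real struct page fields ... */
};

extern long fs_bmap(long file_block);	/* stand-in for the fs bmap() */

long page_to_block(struct page_sketch *p, long file_block)
{
	if (!p->blocknr)
		p->blocknr = fs_bmap(file_block);  /* pay bmap() once */
	return p->blocknr;			   /* then it's one load */
}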

On the write() side you are right, ext2fs doesn't do too well with big
files, but that's a filesystem problem. (Again, we cannot even see the
real costs on the write() side because we do the copy.)

While raw-IO sounds good, it's simply the wrong solution:

 - it slows down the development of important features, because
   new and 'correct' VFS/filesystem features will make less of a
   performance difference once raw-IO already has the performance.

 - it moves applications to the wrong API.

 - it violates several layers in the kernel, thus creating extra
   complexity we don't really need, and which will be hard or
   impossible to remove in the future.

It's _much_ harder to fix the real issues, but we have to do it :( I think
raw-IO will help only in the short term.

Framegrabbing and zero-copy disk IO are much harder through the
filesystem, but there is no conceptual problem with them, is there? And if
done right, people will suddenly have not only the performance, but all
the features of the VFS too, plus a clean core/API. (Those features are
simply not there with raw-IO.)

(I think it's entirely possible to create a 'blobfs', which is basically a
1:1 mapping of the underlying device with only the default VFS features,
but which already operates through the page cache and inode space
properly. Another solution would be to embed block devices into the
page cache via special inode numbers.)
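
The core of such a blobfs would be almost trivial, something like this
(names invented, only a sketch of the idea):

struct inode;	/* opaque here; stands in for the real VFS inode */

/* a 1:1 device mapping: file block N is device block N, so there are
 * no allocation bitmaps or indirect blocks to touch on any IO path,
 * yet the IO still flows through the page cache like any other file */
static int blobfs_bmap(struct inode *inode, int block)
{
	return block;
}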

Why don't we allow root to access raw chunks of memory without any VM
mappings? Because the VM architecture is correct and is not a performance
problem even for the simplest uses: the speed of DosEMU applications
rivals 'pure' DOS performance. I remember the times when 'real men' wrote
DOS applications only because 'real mode is so much faster than the
too-complex protected mode'. For file IO the same thing is harder to
achieve, but if we give up our framework for short-term gains, who will
develop zero-copy for the VFS, who will develop 2G+ file support and
extent-based allocation for ext2fs?

[There is an important exception: we have to allow in a new API if the old
API has conceptually no way to achieve a given goal. But do we really have
that case now?]

> I want 100% of my memory bandwidth, and that means:
>
> 1) making the data go once over the memory bus
> 2) never having the data hit the cpu cache
> 3) the data path must be user data --> device, no bulky VFS sitting
> in the middle dirtying inode and block tables along the way

Almost all constant per-page operations are basically lost in the noise if
we are doing IO. (We need multi-page IO and a bandwidth-guarantee
mechanism in the future, but that's a different issue.) Block tables are
an ext2fs-specific issue.

> Why should the hardware people build faster memory subsystems if the
> software people are just going to use it half-assedly? Thats what M$
> does, but we shouldn't.

Agreed, but really, we already have the mechanism there. Most of our sucky
mmap()+write() (or rather mmap()+sendfile()) performance is due to the
extra copy we do when we write out the page cache.

The new raw-IO API is equivalent to an mmap()-ed 'temporary' file, plus a
copyfd() done from this temporary file to whatever other (new or
overwritten) file.


The cache semantics of raw-IO are a different issue: no cache-behind of a
target file should be possible. There are two reasons for this feature:

1) SCSI clustering

Here I think it is conceptually more correct to couple systems at the
page-cache level, not at the device level, and to have an appropriate API.
(Thus we could locally cache a page and invalidate it only when some other
host in the cluster uses it too: MESI for the page cache.)
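
Conceptually something like this invented sketch (no such code exists
anywhere, this is only the state-machine shape of the idea):

/* MESI-style states for a page shared across a cluster */
enum pc_state { PC_MODIFIED, PC_EXCLUSIVE, PC_SHARED, PC_INVALID };

struct cluster_page {
	enum pc_state state;
	/* ... page data, owning node, ... */
};

extern void flush_to_disk(struct cluster_page *cp);	/* stand-in */

/* another node announced a write to this page: flush our dirty copy
 * if we have one, then drop the page from the local cache */
void remote_write_notify(struct cluster_page *cp)
{
	if (cp->state == PC_MODIFIED)
		flush_to_disk(cp);
	cp->state = PC_INVALID;
}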

2) persistency control

Some applications need to know when an IO has touched the disk, to
guarantee safety across system interruptions. fsync() is the current
workaround, but we definitely need something better. The conceptually
right thing, I think, is to have this at the filesystem level too, and to
give applications the possibility to order IO explicitly.
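
The fsync() workaround today looks like this; the write only 'counts'
once fsync() has returned:

#include <sys/types.h>
#include <unistd.h>

/* current state of the art: block until the data is on the platter */
int write_durably(int fd, const void *buf, size_t len)
{
	if (write(fd, buf, len) != (ssize_t)len)
		return -1;
	return fsync(fd);	/* returns only after the IO hit the disk */
}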

But a simple flush-behind flag for files will handle both cases for now :)
[Mainly because, as opposed to the zero-copy thing, in this case we do not
really know what API we want in the future. (Or at least I don't know :)]
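
Purely as illustration, such a flag could look like this (O_FLUSHBEHIND
and its value are invented, no such flag exists):

#include <fcntl.h>

#define O_FLUSHBEHIND 0x8000000	/* invented: no such flag exists */

int open_uncached(const char *path)
{
	/* writes go to disk eagerly and the pages are dropped from the
	 * page cache once written, so a big IO never floods the cache */
	return open(path, O_WRONLY | O_CREAT | O_FLUSHBEHIND, 0644);
}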

-- mingo


