Subject: Re: Large tmp files, async flush and still lots of I/O?
>>>>> "DC" == David Chappell <David.Chappell@mail.cc.trincoll.edu> writes:

DC> Your belief that this shouldn't happen if the file has already
DC> been deleted sounds reasonable. I think the reason it doesn't work
DC> is due to Unix unlink semantics. To be precise, unlink() does
DC> _not_ delete a file. Rather, a file is deleted when unlink() has
DC> been used successfully on all its names and all processes have
DC> closed it. Thus, technically, in your example, the file has not
DC> been deleted, so the kernel is right to commit it to disk. Of
DC> course, there is probably no way to establish new links (names)
DC> for the file, so this is sort of silly.
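To illustrate the semantics David describes, here is a minimal sketch of
the classic open-then-unlink scratch-file pattern (the path, sizes, and
buffer are made up for the example): after unlink() the file has no names
left, yet the kernel keeps its data live -- and, as discussed, commits it
to disk -- until the last descriptor is closed.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[16];
            int fd;

            /* Create a scratch file and immediately drop its only name. */
            fd = open("/tmp/scratch.tmp", O_RDWR | O_CREAT | O_EXCL, 0600);
            if (fd < 0) { perror("open"); return 1; }
            unlink("/tmp/scratch.tmp");     /* link count -> 0, but not deleted yet */

            /* The inode and its pages stay valid while fd is open. */
            write(fd, "temporary data\n", 15);
            lseek(fd, 0, SEEK_SET);
            read(fd, buf, 15);              /* reads back what was written */

            close(fd);                      /* last reference gone: NOW it is deleted */
            return 0;
    }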

It seems it would be nice to have a "don't commit" flag. That would
be set when no process could ever get access to the data after a
reboot. Pages that are always "don't commit" should not be written to
disk unless there is a shortage of memory.

This would actually be a nice way to get lots of shared memory -- no
need to reserve swap for it, and the performance would be as good as
with shared anonymous mappings.
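For comparison, this is roughly what the existing mechanism looks like;
a minimal sketch of a shared anonymous mapping inherited across fork()
(the mapping size and message are arbitrary, and MAP_ANONYMOUS is the
Linux spelling of the flag):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
            char *p;

            /* Anonymous shared memory: backed by swap at most, never by a file. */
            p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            if (fork() == 0) {                      /* child writes ... */
                    strcpy(p, "hello from the child");
                    _exit(0);
            }
            wait(NULL);
            printf("parent sees: %s\n", p);         /* ... parent reads the same pages */

            munmap(p, 4096);
            return 0;
    }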


Benny


