Subject: Re: Large tmp files, async flush and still lots of I/O?
>>>>> "DC" == David Chappell <> writes:

DC> Your belief that this shouldn't happen if the file has already
DC> been deleted sounds reasonable. I think the reason it doesn't work
DC> is due to Unix unlink semantics. To be precise, unlink() does
DC> _not_ delete a file. Rather, a file is deleted when unlink() has
DC> been used successfully on all its names and all processes have
DC> closed it. Thus, technically, in your example, the file has not
DC> been deleted, so the kernel is right to commit it to disk. Of
DC> course, there is probably no way to establish new links (names)
DC> for the file, so this is sort of silly.
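
For reference, the idiom the quoted paragraph describes is roughly
this (a minimal C sketch, error handling left out):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
            /* Create a scratch file and immediately drop its only name. */
            int fd = open("/tmp/scratch", O_RDWR | O_CREAT | O_EXCL, 0600);
            unlink("/tmp/scratch");

            /* The inode survives until the last descriptor is closed,
               so the kernel still sees file-backed pages it may flush. */
            write(fd, "temporary data", 14);
            close(fd);      /* only now is the file really gone */
            return 0;
    }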

It seems it would be nice to have a "don't commit" flag. It would be
set when no process could ever get access to the data after a reboot.
Pages that are always "don't commit" should not be sent to disk
unless there is a shortage of memory.
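
Something along these lines is the usage I mean, if the application
requested it explicitly (O_NOCOMMIT is purely made up; no such flag
exists in any kernel):

    /* Hypothetical: tell the kernel these pages never need to reach
       the disk unless memory runs short.  O_NOCOMMIT does not exist. */
    int fd = open("/tmp/scratch", O_RDWR | O_CREAT | O_NOCOMMIT, 0600);
    unlink("/tmp/scratch");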

This would actually be a nice way to get lots of shared memory -- no
need to set aside swap for it, and the performance would be as good as
that of shared anonymous mappings.
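
For comparison, a shared anonymous mapping looks roughly like this
(sketch only; the pages are shared with the child and are never
file-backed, going to swap only under memory pressure):

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            /* One page shared between parent and child, not file-backed. */
            char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED)
                    return 1;
            if (fork() == 0) {
                    p[0] = 'x';     /* visible to the parent */
                    _exit(0);
            }
            return 0;
    }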

