Subject: Re: Large tmp files, async flush and still lots of I/O?


On Fri, 4 Jun 1999, Bernd Eckenfels wrote:

> In article <99060212474000.00732@mouse> you wrote:
> > Your belief that this shouldn't happen if the file has already been deleted
> > sounds reasonable. I think the reason it doesn't work is due to Unix unlink
> > semantics. To be precise, unlink() does _not_ delete a file. Rather, a file is
> deleted when unlink() has been used successfully on all its names and all
> > processes have closed it. Thus, technically, in your example, the file has not
> > been deleted, so the kernel is right to commit it to disk.
>
> But the inode has a link count of 0, which tells the kernel that it won't be
> accessible after it is closed, so the kernel doesn't need to bother writing
> the data to disk.

Yes, it does. You can lseek() and read() the stuff you've written. Yes,
once you close() it the contents will be lost. But once you return from a
function, its local variables are lost too; that doesn't mean assignments
to them can be optimized away.
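
For illustration, here is a minimal sketch (not from the original thread;
the filename and sizes are just placeholders) of why the kernel has to keep
the data around for an unlinked-but-open file:

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[11];
		int fd = open("scratch.tmp", O_RDWR | O_CREAT | O_EXCL, 0600);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		unlink("scratch.tmp");       /* link count is now 0, file still open */

		write(fd, "still here", 10); /* data must stay readable ...          */
		lseek(fd, 0, SEEK_SET);
		read(fd, buf, 10);           /* ... through this descriptor          */
		buf[10] = '\0';
		printf("%s\n", buf);         /* prints "still here"                  */

		close(fd);                   /* only now may the data be dropped     */
		return 0;
	}

The read() can only succeed if the kernel is still holding the written data
somewhere, whether in memory or on disk; only the final close() makes it
safe to throw the data away.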


