Date:    Tue, 17 Oct 2000 21:42:36 -0700 (PDT)
From:    Linus Torvalds <>
Subject: Re: mapping user space buffer to kernel address space
On Wed, 18 Oct 2000, Andrea Arcangeli wrote:
>
> > Are you suggesting something like: if it is reading from a page (ie
> > writing the contents of that page somewhere else), we don't lock it, but
> > if it is writing to a page, we lock it so that the dirty bit won't get
> > lost.
>
> That wasn't what I suggested, but I also like the way you describe above.
> It makes sense.
I don't think it really makes sense - I can see that it works, but I don't like the way it ties in the dirty bit handling with the lock bit handling. I personally think they are (and should be) unrelated.
> > Sure, that works (modulo the fact that it still has the issues with
> > serializing mmap's and accesses to other areas in the same page). But do
> > you really claim that it's the clean solution?
>
> It looks cleaner than swapping out a page while a device is writing
> to it in DMA under the swapout.
Note that _that_ is something I'd much rather handle another way entirely: something I've long long wanted to do is to handle all swap-outs from the "struct page *" rather than based on the VM scan.
Now, the way I've always envisioned this to work is that the VM scanning function basically always does the equivalent of just
 - get PTE entry, clear it out.
 - if PTE was dirty, add the page to the swap cache, and mark it dirty,
   but DON'T ACTUALLY START THE IO!
 - free the page.
Basically, we've removed the page from the virtual mapping, and it's now in the LRU queues, marked dirty there.
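In pseudo-C that step might look something like the sketch below. Only pte_dirty() and ptep_get_and_clear() are meant as real primitives; the other helpers (get_swap_entry_for, swap_cache_add, page_mark_dirty, page_release) are made-up names standing in for whatever the real code would do:

	static void scan_one_pte(pte_t *ptep, struct page *page)
	{
		/* get the PTE entry, clear it out of the page tables */
		pte_t pte = ptep_get_and_clear(ptep);

		if (pte_dirty(pte)) {
			/*
			 * The mapping dirtied the page: put it in the swap
			 * cache and remember the dirtiness on the struct
			 * page, but DON'T actually start the IO here.
			 */
			swp_entry_t entry = get_swap_entry_for(page);	/* hypothetical */
			swap_cache_add(page, entry);			/* hypothetical */
			page_mark_dirty(page);				/* hypothetical */
		}

		/* free the page: drop this mapping's reference */
		page_release(page);					/* hypothetical */
	}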
Then, we'd move the "writeout" part into the LRU queue side, and at that point I agree with you 100% that we probably should just delay it until there are no mappings left - ie we'd only write out a swap cache entry if the count == 1 (ie it only exists in the swap cache), because before that is true there are other people who can still be marking it dirty.
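A rough sketch of that LRU-side decision, using the page flag macros loosely and with lru_try_to_write_out() and swap_writepage_sketch() as made-up names rather than real kernel interfaces:

	static int lru_try_to_write_out(struct page *page)
	{
		/*
		 * Only write out the swap cache entry once nobody else
		 * holds the page: while the count is still > 1, some
		 * mapping may still be dirtying it.
		 */
		if (!PageDirty(page) || page_count(page) != 1)
			return 0;

		ClearPageDirty(page);
		return swap_writepage_sketch(page);	/* hypothetical IO helper */
	}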
What are the advantages of this approach? Never mind the kiobuf issues at this point - I wanted to do this long before kiobufs happened.
Basically, it means that we'd write out shared pages only once, instead of initiating write-back once for each mapping that the page exists in. Right now this isn't much of a problem in practice, simply because it's fairly hard to get shared dirty pages that would get written out twice, but I think you see what I'm talking about on a conceptual level.
It also makes the kiobuf dirty issues a _completely_ separate issue, and makes it very clean to handle: what kiobufs become is just a kind of virtual "pseudo-address-space". A "kiobuf address space" doesn't have a page table, but it ends up being basically equivalent to a virtual address space without the TLB overhead. Like a "real" address space attached to a process, it has dirty pages in it, and like the real address space it informs the VM layer of that through the page dirty bit.
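As a sketch of that idea, with struct kiobuf_sketch as a simplified stand-in for the real kiobuf and kiobuf_mark_dirty() as a made-up helper, the whole "address space" side reduces to setting the per-page dirty bit:

	struct kiobuf_sketch {			/* simplified stand-in, not the real struct kiobuf */
		int		nr_pages;
		struct page	**maplist;	/* the pinned pages behind the IO */
	};

	static void kiobuf_mark_dirty(struct kiobuf_sketch *iobuf)
	{
		int i;

		/*
		 * After the device has written into the pages, the only
		 * thing the VM needs to hear about it is the per-page
		 * dirty bit: the same signal a real address space gives.
		 */
		for (i = 0; i < iobuf->nr_pages; i++)
			SetPageDirty(iobuf->maplist[i]);
	}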
See? THAT, in my opinion, is the clean way to handle this all.
Linus