Date: Mon, 13 Jan 1997 22:45:04 -0500
From: "David S. Miller" <davem@caip.rutgers.edu>
Subject: Re: MMU & cache trick: page flips on COW?
   Date: Mon, 13 Jan 1997 20:10:27 +0100 (MET)
   From: Ingo Molnar <mingo@pc5829.hil.siemens.at>
   but if this all works, we could have hot cache for the process
   continuing execution, for the price of N*(cost of invlpg + cost of
   backmapping)+1, where N is the number of processes mapping the page.
   At least for small 'N' and for Intel caches this looks like a
   definite win ...
A win for the cache architecture in question, but:
1) Very machine-specific; it doesn't work at all in other cache
   configurations (see below).

2) The implementation is messy; there is a better way to do this.
Virtually indexed caches would not gain from this setup even if they
are physically tagged: the kernel and user mappings of a page can alias
in such a cache, which is why the copy already requires a
flush_page_to_ram() on virtually indexed caches for both pages of the
copy.
If you truly want to get around the no-write-allocate problem, do
something to the memcpy code itself which helps the situation. I
believe the memcpy etc. code in GNU libc written by Torbjörn Granlund
does in fact do a quick read of the first word in each cache line being
copied to, in order to work around this problem specifically, and the
speed increase does show up.
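For illustration, a rough sketch of that read-before-write trick
(copy_read_allocate is a hypothetical name, and it assumes a 32-byte
cache line, 4-byte longs, and line-aligned, line-multiple lengths; the
real GNU libc routines are hand-tuned per architecture):

	/*
	 * Touch each destination cache line with a dummy load before
	 * storing to it, so that a no-write-allocate cache pulls the
	 * line in on the read and the subsequent stores hit the cache.
	 */
	void copy_read_allocate(unsigned long *dst, unsigned long *src,
				long bytes)
	{
		long lines = bytes / 32;	/* assumed line size */

		while (lines-- > 0) {
			(void) *(volatile unsigned long *) dst;	/* dummy read */
			dst[0] = src[0]; dst[1] = src[1];
			dst[2] = src[2]; dst[3] = src[3];
			dst[4] = src[4]; dst[5] = src[5];
			dst[6] = src[6]; dst[7] = src[7];
			dst += 8;
			src += 8;
		}
	}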
The idea I had for making COW fault processing more cache friendly,
which makes the situation neither better nor worse on physical caches,
and does tremendously improve performance on all virtually indexed
caches, was the following.
Instead of copying to/from the kernel pages, use the kernel mapping
only as the source of the copy, and use the new user mapping as the
destination. You'd need to set up the PTE beforehand to allow the
writes, or else you'd fault recursively forever. So it looks more like:
do_wp_page()
{
	[ ... ]
	if (mem_map[MAP_NR(old_page)].count != 1) {
		if (new_page) {
			if (PageReserved(mem_map + MAP_NR(old_page)))
				++vma->vm_mm->rss;
			flush_cache_page(vma, address);
			set_pte(page_table, pte_mkwrite(pte_mkdirty(
				mk_pte(new_page, vma->vm_page_prot))));
			flush_tlb_page(vma, address);
			copy_cow_page(old_page, address);
			flush_page_to_ram(old_page);
			free_page(old_page);
			return;
		}
		[ ... ]
	}
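(copy_cow_page() above is the interesting part; a minimal sketch of
what it might look like, assuming memcpy(), PAGE_MASK, and PAGE_SIZE as
in the kernel headers, and relying on the set_pte() above having
already made the user mapping writable:)

	static inline void copy_cow_page(unsigned long old_page,
					 unsigned long address)
	{
		/*
		 * Source: the kernel's mapping of the old page.
		 * Destination: the user's new mapping, so the stores
		 * land where the cache will finally view the data.
		 */
		memcpy((void *) (address & PAGE_MASK),
		       (void *) old_page, PAGE_SIZE);
	}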
A nice side effect is that this eliminates one of the
flush_page_to_ram() invocations, the one for the new page: since we
only write to where the cache will finally "view" the data, consistency
is guaranteed.
The above will not work unmodified; there is still a bug. You need to
perform this optimization only if vma->vm_mm == current->mm; if not,
you have to perform the operation the old way. (This happens during
ptrace() writes to the inferior's address space, so you cannot assume
do_wp_page() is always being run for current, although this is the most
common case.)
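Concretely, the guard might look something like this (a sketch only;
the old-way branch is spelled out as a kernel-page-to-kernel-page
memcpy() plus the flush_page_to_ram() the optimization would otherwise
have saved):

	if (vma->vm_mm == current->mm) {
		/* The fault is for the running process: its user
		 * mapping is live in this context, write through it.
		 */
		copy_cow_page(old_page, address);
	} else {
		/* e.g. ptrace() writing the inferior's address space:
		 * the user mapping belongs to another context, so copy
		 * kernel page to kernel page the old way, and flush.
		 */
		memcpy((void *) new_page, (void *) old_page, PAGE_SIZE);
		flush_page_to_ram(new_page);
	}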
So this optimization plus the memcpy() hacks for no-write-allocate caches is probably what you want to solve the problem at hand.
---------------------------------------------////
Yow! 11.26 MB/s remote host TCP bandwidth & ////
199 usec remote TCP latency over 100Mb/s   ////
ethernet. Beat that!                      ////
-----------------------------------------////__________  o
David S. Miller, davem@caip.rutgers.edu /_____________/ / // /_/ ><