Date: Tue, 9 Mar 2004 08:47:47 +0100
From: Ingo Molnar <>
Subject: Re: objrmap-core-1 (rmap removal for file mappings to avoid 4:4 in <=16G machines)
* Andrea Arcangeli <andrea@suse.de> wrote:
> I agree that works fine for Oracle, but that's because Oracle is an
> extreme special case: most of its shared memory is an I/O cache. That
> is not the case for other apps, and those other apps really depend on
> the kernel VM paging algorithms for more than instantiating a pte (or
> a pmd if it's a largepage). Other apps can't use mlock. Some of these
> apps work closely with Oracle too.
what other apps use gigs of shared memory where that shared memory is not an IO cache?
> dropping pte_chains through mlock was suggested around april 2003
> originally by Wli and I didn't like that idea since we really want to
> allow swapping if we run short of ram. [...]
we implemented dropping pte_chains on mlock() in RHEL3, and it works fine to reduce the pte_chain overhead for those extreme shm users.
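the shape of the idea, as a self-contained userspace model (all the names here are illustrative, this is not the RHEL3 code): the chains exist only so that swap-out can find the ptes mapping a page, and an mlocked page is never swapped out, so its chain can go away immediately.

#include <stdlib.h>
#include <stdio.h>

/* userspace model: each physical page keeps a chain of the ptes
 * mapping it, used only to find those ptes at swap-out time. */
struct pte_chain {
	struct pte_chain *next;
	void *pte;			/* which pte maps this page */
};

struct page {
	struct pte_chain *chain;	/* the per-page rmap overhead */
	int mlocked;
};

/* mlock: the page can never be swapped out, so the chain that
 * exists only to service swap-out can be freed on the spot. */
static void mlock_page(struct page *page)
{
	struct pte_chain *pc = page->chain;

	page->mlocked = 1;
	while (pc) {
		struct pte_chain *next = pc->next;
		free(pc);
		pc = next;
	}
	page->chain = NULL;
}

int main(void)
{
	struct page page = { 0 };
	struct pte_chain *pc = malloc(sizeof(*pc));

	pc->next = NULL;
	pc->pte = NULL;
	page.chain = pc;

	mlock_page(&page);
	printf("chain after mlock: %p\n", (void *)page.chain);
	return 0;
}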
mind you, it still doesn't make high-end DB workloads viable on 32 GB systems. (and no, not due to the pte_chain overhead.) 3:1 is simply not enough at 32 GB and higher [possibly much earlier, for other workloads]. Trying to argue otherwise is sticking your head in the sand.
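a quick back-of-the-envelope shows why (the 40-byte sizeof(struct page) below is an assumed ballpark for a 2.6-era x86 kernel, not a measured figure):

#include <stdio.h>

/* with a 3:1 split the kernel gets ~1 GB of virtual space, of
 * which roughly 896 MB is directly-mapped lowmem, and the
 * struct page array (mem_map) has to live there. */
int main(void)
{
	unsigned long long ram = 32ULL << 30;		/* 32 GB of RAM */
	unsigned long page_size = 4096;			/* 4 KB pages */
	unsigned long sizeof_page = 40;			/* assumed sizeof(struct page) */
	unsigned long long pages = ram / page_size;
	unsigned long long memmap = pages * sizeof_page;

	printf("struct pages: %llu\n", pages);
	printf("mem_map: %llu MB out of ~896 MB lowmem\n", memmap >> 20);
	return 0;
}

that's 8M struct pages, ~320 MB of lowmem gone before a single pte_chain, pagetable, dentry or inode has been allocated.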
most of the anti-rmap sentiment (not this patch - this patch looks OK at first sight, except for the increase in struct page) is really backwards. The right solution is rmap, which is a _per page_ overhead and the clear path to mostly O(1) VM logic. Then we can increase the page size (pgcl) to scale down the rmap overhead (both the per-page and the locking overhead). What's so hard about this concept? It's a simple and flexible data structure.
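the scaling argument is easy to see numerically (again a sketch - the 8 bytes of per-page rmap state is an assumed figure for illustration, roughly one chain head per page): rmap cost is per struct page, so growing the kernel page size divides the number of struct pages, and with it the rmap metadata, proportionally.

#include <stdio.h>

int main(void)
{
	unsigned long long ram = 32ULL << 30;	/* 32 GB of RAM */
	unsigned long rmap_per_page = 8;	/* assumed bytes of rmap state */
	unsigned long sizes[] = { 4096, 16384, 65536 };
	int i;

	for (i = 0; i < 3; i++) {
		unsigned long long pages = ram / sizes[i];
		printf("%3lu KB pages: %8llu pages, %3llu MB rmap overhead\n",
		       sizes[i] >> 10, pages,
		       (pages * rmap_per_page) >> 20);
	}
	return 0;
}

going from 4 KB to 64 KB pages cuts the per-page rmap footprint from 64 MB to 4 MB in this model, without touching the data structure itself.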
the x86 highmem issues are a quickly fading transient in history.
	Ingo