Subject: Re: Linux and physical memory
   Date: Thu, 21 Jan 1999 14:25:51 GMT
From: "Stephen C. Tweedie" <sct@redhat.com>

DANGER DANGER!!!

   The SMP invalidation is probably the most interesting part of the
   whole thing, in that there are lots of ways of doing it and it is
   not particularly obvious up front what The Best way is.  The rest
   of the implementation should not be that complex.

This is where your whole scheme is dead wrong.

There should be no tlb invalidation issues at all; everything should
be local to the processor and to the page tables that processor is
using.  This is the whole crux behind my "make the mapping locally
and temporarily only at the time of the access" scheme.
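
Concretely, that idea might look like the following (a sketch only;
magic_pte(), MAGIC_VADDR(), and mk_pte_phys() are illustrative names,
and each CPU owns its own slot so two processors can never clobber
each other's mapping):

pte_t *pte = magic_pte(smp_processor_id());	/* this CPU's private slot */
unsigned long vaddr = MAGIC_VADDR(smp_processor_id());

__cli();
set_pte(pte, mk_pte_phys(paddr, PAGE_KERNEL));
__flush_tlb_one(vaddr);		/* purely local, no cross call */
/* ... touch the high page through vaddr ... */
__sti();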

If you don't like that approach, ok, but find something else which
eliminates all of the tlb flush cross calls on SMP in all cases.
Ingo had some clever ideas last I talked to him.

When your mind starts to drift towards "clever cross call avoidance
schemes because we're mucking with kernel mappings", your design is
already doomed; go back to the beginning and start over.  You've
destroyed the one major gain Linux has on SMP by mapping all of
physical memory at once.

Really, how much does the following cost:

__cli();
*pte1 = make_pte(PAGE_KERNEL, MAGIC_VADDR1);
*pte2 = make_pte(PAGE_KERNEL, MAGIC_VADDR2);
touch_high_pages_in_some_way()...
*pte1 = 0; flush_tlb_page(MAGIC_VADDR1);	/* local flush only */
*pte2 = 0; flush_tlb_page(MAGIC_VADDR2);
__sti();

compared to a tlb shootdown across 4 cpus on Intel, for example?  (And
yes, your 100MB pool may give some buffer of protection, but it is
unclean to rely on a scheme which "just so happens" not to thrash
through highmem pages.)
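
For reference, the shootdown being avoided looks roughly like this on
x86 (a simplified sketch in the flavor of the SMP code of the day; the
mask and helper names are illustrative):

static void flush_tlb_other_cpus(void)
{
	/* Interrupt every other CPU and spin until all of them have
	   flushed; the whole thing serializes behind the slowest CPU. */
	smp_invalidate_needed = all_cpus_mask & ~(1 << smp_processor_id());
	send_IPI_allbutself(INVALIDATE_TLB_VECTOR);
	while (smp_invalidate_needed)		/* wait for every ack */
		barrier();
	__flush_tlb();				/* plus the local flush on top */
}

With 4 cpus that is 3 IPIs, 3 remote interrupt entries and exits, and
a busy-wait, versus the handful of purely local operations above.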

And you can optimize this further: you know that only these code paths
ever access the magic kernel vaddrs, and you know nobody else will
interrupt the local cpu while the access and mapping setup happen, so
you can do away with the clearing of the ptes and maybe end up with
something like:

__cli();
flush_tlb_page(MAGIC_VADDR1);	/* kill stale entries from the last use */
flush_tlb_page(MAGIC_VADDR2);
*pte1 = make_pte(PAGE_KERNEL, MAGIC_VADDR1);
*pte2 = make_pte(PAGE_KERNEL, MAGIC_VADDR2);
touch_high_pages_in_some_way()...
__sti();

1) Local tlb flushes == 2 * X86_CYCLES_PER_TLB_PAGE_FLUSH
2) 2 local memory references; the highest cost is going out to main
   memory, depending upon cache hit and write allocate strategy
3) Cost of tlb loading the new mappings, depending upon whether the
   ptes reside in the local cache or not
4) Cost of the local 'cli' and the local 'sti'
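
To make that concrete, a high-to-high page copy using the two slots
would look like this (same pseudocode conventions as above, with
memcpy standing in for touch_high_pages_in_some_way):

__cli();
flush_tlb_page(MAGIC_VADDR1);
flush_tlb_page(MAGIC_VADDR2);
*pte1 = make_pte(PAGE_KERNEL, MAGIC_VADDR1);	/* source page */
*pte2 = make_pte(PAGE_KERNEL, MAGIC_VADDR2);	/* destination page */
memcpy((void *) MAGIC_VADDR2, (void *) MAGIC_VADDR1, PAGE_SIZE);
__sti();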

Later,
David S. Miller
davem@dm.cobaltmicro.com

