Date: Thu, 7 Aug 1997 21:21:48 -0400
From: "David S. Miller" <>
Subject: Re: Kernel virtual memory?
   Date: Thu, 7 Aug 1997 23:29:53 +0000
   From: Benjamin C R LaHaise <blah@dot.superaje.com>
   struct task_struct *cpu_task[NR_CPUS];

   void tlb_flush_mm(struct mm_struct *mm)
   {
   	int i;

   	if (current->mm == mm && mm->count == 1)
   		tlb_flush_local();
   	else
   		for (i = 0; i < NR_CPUS; i++) {
   			struct task_struct *t = cpu_task[i];

   			if (t->mm == mm)
   				tlb_flush_cpu(i);
   		}
   }
On a TLB with context pids, the above has races and in general won't work at all: the cpu_task[] scan is not synchronized with the scheduler, so a remote cpu can switch to mm just after the scan passes it by and keep stale translations alive under mm's context pid.
I have a sparc64 SMP kernel which _only_ ever takes non-local flushes in the clone() thread case, and that doesn't even matter nor show up on the radar performance-wise. Here is how it works:
1) Each time a cpu switches to a new task he goes:
	switch_to(prev, next)
	{
		...
		next->mm->cpu_vm_mask |= (1UL << smp_processor_id());
		...
	}
2) TLB flushes in general look something like:
	static void smp_cross_call_avoidance(struct mm_struct *mm)
	{
		spin_lock(&scheduler_lock);
		get_new_mmu_context(mm, &tlb_context_cache);
		mm->cpu_vm_mask = (1UL << smp_processor_id());
		if (current->tss.current_ds) {
			u32 ctx = mm->context & 0x1fff;

			current->tss.ctx = ctx;
			spitfire_set_secondary_context(ctx);
			__asm__ __volatile__("flush %g6");
		}
		spin_unlock(&scheduler_lock);
	}
	void smp_flush_tlb_*(struct mm_struct *mm)
	{
		u32 ctx = mm->context & 0x1fff;

		if (mm == current->mm && mm->count == 1) {
			if (mm->cpu_vm_mask == (1UL << smp_processor_id()))
				goto local_flush_and_out;
			return smp_cross_call_avoidance(mm);
		}
		smp_cross_call(&xcall_flush_tlb_*, ctx, 0, 0);

	local_flush_and_out:
		__flush_tlb_*(ctx);
	}
3) The page-level tlb flush is special-cased for swapping:
	void smp_flush_tlb_page(struct mm_struct *mm, unsigned long page)
	{
		u32 ctx = mm->context & 0x1fff;

		if (mm == current->mm && mm->count == 1) {
			if (mm->cpu_vm_mask == (1UL << smp_processor_id()))
				goto local_flush_and_out;
			return smp_cross_call_avoidance(mm);
		} else if (mm != current->mm && mm->count == 1) {
			/* Try to handle two special cases to avoid cross calls
			 * in common scenarios where we are swapping process
			 * pages out.
			 */
			if ((mm->context ^ tlb_context_cache) & CTX_VERSION_MASK)
				return;	/* It's dead, nothing to do. */
			if (mm->cpu_vm_mask == (1UL << smp_processor_id()))
				goto local_flush_and_out;
		}
		smp_cross_call(&xcall_flush_tlb_page, ctx, page, 0);

	local_flush_and_out:
		__flush_tlb_page(ctx, page);
	}
4) The code in vmscan first tries to swap out only pages from processes whose mm->cpu_vm_mask == (1UL << smp_processor_id()). Failing any success doing it that way, it then (to prevent livelock) allows swapping pages from tasks which may have tlb state on other processors. A rough sketch of that two-pass policy follows.
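Here is a minimal sketch of what I mean; try_to_swap_out_task() is a hypothetical stand-in for the real page stealing logic, this is not the actual vmscan code:

	/* Sketch of the two-pass swap-out policy described above.
	 * try_to_swap_out_task() is hypothetical and stands in for
	 * the real page stealing logic.
	 */
	static int swap_out_sketch(void)
	{
		unsigned long this_cpu = (1UL << smp_processor_id());
		struct task_struct *p;

		/* Pass 1: only touch mm's whose tlb state lives solely
		 * on this cpu, so no cross call can ever be needed.
		 */
		for_each_task(p)
			if (p->mm && p->mm->cpu_vm_mask == this_cpu &&
			    try_to_swap_out_task(p))
				return 1;

		/* Pass 2 (prevents livelock): allow tasks which may
		 * have tlb state on other processors as well.
		 */
		for_each_task(p)
			if (p->mm && try_to_swap_out_task(p))
				return 1;

		return 0;
	}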
With the above, plus a new hack I have which makes need_resched a bitmask (1 bit per processor) and thus tends to stick tasks to a single cpu, I _never_ see a cross tlb flush even under heavy swapping, and this is on a machine with 8192 tlb context pids per mmu. A sketch of the need_resched hack is below.
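To illustrate the idea only (a sketch under assumptions: a global mask and a hypothetical p->last_cpu field recording where the task last ran; neither is the actual patch):

	/* Sketch: need_resched as one bit per processor instead of a
	 * single flag.  Only the cpu a task last ran on gets poked,
	 * so tasks tend to stick to one cpu and keep tlb state local.
	 */
	unsigned long need_resched_mask;

	static void wake_up_process_sketch(struct task_struct *p)
	{
		p->state = TASK_RUNNING;
		/* Ask only p's last cpu to reschedule, not all cpus.
		 * p->last_cpu is hypothetical here.
		 */
		need_resched_mask |= (1UL << p->last_cpu);
	}

	/* Each cpu tests and clears only its own bit on the return
	 * to user mode path.
	 */
	static int need_resched_on(int cpu)
	{
		if (need_resched_mask & (1UL << cpu)) {
			need_resched_mask &= ~(1UL << cpu);
			return 1;
		}
		return 0;
	}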
Only clone()'s make the cross tlb flushes happen. This case would be so complex to work around that I am convinced it is not worth the effort and the verification/testing necessary to pull it off. It would also impose a high cost on "innocent" non-clone() tasks, which makes it even less worthwhile to do.
Later,
David "Sparc" Miller
davem@caip.rutgers.edu