Subject: Re: Inefficient TLB flushing
David Mosberger <davidm@napali.hpl.hp.com> wrote:
>
> Jack> Here is the patch that I am currently testing:
>
> Jack> --- /usr/tmp/TmpDir.19957-0/linux/mm/memory.c_1.79 Wed Nov 12 13:56:25 2003
> Jack> +++ linux/mm/memory.c Wed Nov 12 12:57:25 2003
> Jack> @@ -574,9 +574,10 @@
> Jack>  		if ((long)zap_bytes > 0)
> Jack>  			continue;
> Jack>  		if (need_resched()) {
> Jack> +			int fullmm = (*tlbp)->fullmm;
> Jack>  			tlb_finish_mmu(*tlbp, tlb_start, start);
> Jack>  			cond_resched_lock(&mm->page_table_lock);
> Jack> -			*tlbp = tlb_gather_mmu(mm, 0);
> Jack> +			*tlbp = tlb_gather_mmu(mm, fullmm);
> Jack>  			tlb_start_valid = 0;
> Jack>  		}
> Jack>  		zap_bytes = ZAP_BLOCK_SIZE;
>
> I think the patch will work fine, but it's not very clean, because it
> bypasses the TLB-flush API and directly accesses
> implementation-specific internals. Perhaps it would be better to pass
> a "fullmm" flag to unmap_vmas(). Andrew?

Either that, or add a new interface function

	int mmu_gather_is_full_mm(struct mmu_gather *tlb);

and use it...
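
As a rough, untested sketch (it assumes only that the mmu_gather keeps the
fullmm value it was created with, which is exactly what the patch above pokes
at directly):

	/*
	 * Sketch only: report whether this gather was started for a
	 * full-mm teardown, so callers need not reach into mmu_gather
	 * internals.
	 */
	static inline int mmu_gather_is_full_mm(struct mmu_gather *tlb)
	{
		return tlb->fullmm;
	}

The resched block in unmap_vmas() would then read:

		if (need_resched()) {
			int fullmm = mmu_gather_is_full_mm(*tlbp);

			tlb_finish_mmu(*tlbp, tlb_start, start);
			cond_resched_lock(&mm->page_table_lock);
			*tlbp = tlb_gather_mmu(mm, fullmm);
			tlb_start_valid = 0;
		}

which preserves Jack's fix while keeping the mmu_gather layout private to the
tlb code.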
