Subject: Re: [PATCH 06/17] arm: mmu_gather rework
    On Mon, 2011-02-28 at 12:44 +0100, Peter Zijlstra wrote:
    > unmap_region()
    >   tlb_gather_mmu()
    >   unmap_vmas()
    >     for (; vma; vma = vma->vm_next)
    >       unmap_page_range()
    >         tlb_start_vma() -> flush cache range

    So why is this correct? Can't we race with a concurrent access to the
    memory region (munmap() vs other-thread access race)? While
    unmap_region() callers will have removed the vma from the tree, so
    faults will not be satisfied, TLB entries may still be present and
    allow us to access the memory and thereby reload it into the cache.
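
    To make the window concrete, here is a minimal userspace sketch (my own
    illustration, not part of the patch set) of the scenario: one thread
    keeps storing to a mapping while the main thread munmap()s it. From
    userspace each store either completes or faults; the kernel-side worry
    is the window in which a still-valid TLB entry lets such a store pull
    the line back into the cache after the cache flush but before the TLB
    flush.

#define _GNU_SOURCE
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE (1UL << 20)	/* size is arbitrary for the demo */

static volatile char *region;

/* Writer thread: keeps storing to the region while main() unmaps it. */
static void *writer(void *arg)
{
	(void)arg;
	for (;;)
		region[0] = 1;
	return NULL;
}

/* Expected once the mapping (and its TLB entry) is really gone. */
static void segv_handler(int sig)
{
	(void)sig;
	_exit(0);
}

int main(void)
{
	pthread_t tid;

	signal(SIGSEGV, segv_handler);

	region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
		      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (region == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	if (pthread_create(&tid, NULL, writer, NULL) != 0) {
		fprintf(stderr, "pthread_create failed\n");
		return 1;
	}
	usleep(1000);			/* let the writer get going */

	/* The writer's stores are in flight while the kernel flushes
	 * caches/TLBs and frees the pages behind this call. */
	munmap((void *)region, REGION_SIZE);

	sleep(1);			/* give the writer time to fault */
	return 0;
}

    (Compile with -pthread. On a correct kernel the late store just faults;
    the question above is whether the flush ordering leaves a window where
    it could instead hit a stale translation.)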

    >         zap_*_range()
    >           ptep_get_and_clear_full() -> batch/track external tlbs
    >           tlb_remove_tlb_entry() -> batch/track external tlbs
    >           tlb_remove_page() -> track range/batch page
    >         tlb_end_vma() -> flush tlb range
    >
    > [ for architectures that have hardware page table walkers
    >   concurrent faults can still load the page tables ]
    >
    >   free_pgtables()
    >     while (vma)
    >       unlink_*_vma()
    >       free_*_range()
    >         *_free_tlb()
    >   tlb_finish_mmu()
    >
    > free vmas
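
    For reference, here is a standalone toy model (my own stubs, not kernel
    code) of the call order quoted above; the bodies only mark where the
    cache flush, the TLB flush and the deferred freeing happen in the
    sequence:

#include <stdio.h>

/* Toy mmu_gather: just counts what the real one would batch. */
struct mmu_gather {
	int pages;	/* data pages queued by tlb_remove_page() */
	int pgtables;	/* page-table pages queued by *_free_tlb() */
};

static void tlb_gather_mmu(struct mmu_gather *tlb)
{
	tlb->pages = 0;
	tlb->pgtables = 0;
}

static void tlb_start_vma(const char *vma)
{
	printf("%s: flush cache range\n", vma);
}

static void zap_range(struct mmu_gather *tlb, const char *vma)
{
	/* ptep_get_and_clear_full() / tlb_remove_tlb_entry() would run
	 * here; the page itself is only queued, not yet freed. */
	tlb->pages++;
	printf("%s: clear ptes, batch page\n", vma);
}

static void tlb_end_vma(const char *vma)
{
	printf("%s: flush tlb range\n", vma);
}

static void free_pgtables(struct mmu_gather *tlb)
{
	/* *_free_tlb() queues the page-table pages as well. */
	tlb->pgtables++;
	printf("queue page-table pages\n");
}

static void tlb_finish_mmu(struct mmu_gather *tlb)
{
	printf("free %d data pages and %d page-table pages\n",
	       tlb->pages, tlb->pgtables);
}

int main(void)
{
	const char *vmas[] = { "vma0", "vma1" };
	struct mmu_gather tlb;
	unsigned int i;

	tlb_gather_mmu(&tlb);
	for (i = 0; i < 2; i++) {	/* unmap_vmas() */
		tlb_start_vma(vmas[i]);
		zap_range(&tlb, vmas[i]);
		tlb_end_vma(vmas[i]);
	}
	free_pgtables(&tlb);
	tlb_finish_mmu(&tlb);
	/* the vmas themselves are freed only after this point */
	return 0;
}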

