    Subject: [PATCH 00/20] mm: Preemptibility -v4
    This patch-set makes part of the mm a lot more preemptible. It converts
    i_mmap_lock and anon_vma->lock to mutexes and makes mmu_gather fully
    preemptible.
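
    Roughly, the heart of the conversion is just the lock type and the
    lock/unlock sites; the below is an illustrative sketch, not code from the
    patches (helper names mirror mainline but are only indicative):

    /* sketch only: anon_vma with its lock converted to a mutex */
    #include <linux/mutex.h>

    struct anon_vma {
            struct mutex lock;              /* was: spinlock_t lock */
            /* ... */
    };

    static inline void anon_vma_lock(struct anon_vma *anon_vma)
    {
            mutex_lock(&anon_vma->lock);    /* was: spin_lock() */
    }

    static inline void anon_vma_unlock(struct anon_vma *anon_vma)
    {
            mutex_unlock(&anon_vma->lock);  /* was: spin_unlock() */
    }

    /* address_space->i_mmap_lock gets the same treatment */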

    The main motivation was making mm_take_all_locks() preemptible, since it
    appears people are nesting hundreds of spinlocks there.
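
    For the curious, the problem has roughly the shape below; this is a
    simplified sketch of what mm_take_all_locks() does (it ignores mmap_sem,
    the lockdep nesting annotations and the duplicate-avoidance bits), not the
    real mm/mmap.c code:

    /* sketch: one lock taken per vma, all held at once */
    static void take_all_locks_sketch(struct mm_struct *mm)
    {
            struct vm_area_struct *vma;

            for (vma = mm->mmap; vma; vma = vma->vm_next) {
                    /* real code avoids taking the same lock twice */
                    if (vma->vm_file && vma->vm_file->f_mapping)
                            spin_lock(&vma->vm_file->f_mapping->i_mmap_lock);
                    if (vma->anon_vma)
                            spin_lock(&vma->anon_vma->lock);
            }
            /*
             * With spinlocks, preemption is off from the first lock until
             * mm_drop_all_locks(); with hundreds of vmas that is a long,
             * unbounded non-preemptible section.  With mutexes the task
             * stays preemptible while holding all of them.
             */
    }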

    The side-effect is that we can finally make mmu_gather preemptible,
    something which lots of people have wanted to do for a long time.
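
    In practice that means the gather state moves from a per-cpu buffer to the
    caller's stack; the interface change looks roughly like this (sketch only,
    argument lists trimmed):

    /* before: per-cpu mmu_gather, caller must keep preemption disabled */
    struct mmu_gather *tlb;

    tlb = tlb_gather_mmu(mm, 0);
    /* ... unmap/zap pages into *tlb ... */
    tlb_finish_mmu(tlb, start, end);

    /* after: on-stack mmu_gather, the unmap path may be preempted */
    struct mmu_gather tlb;

    tlb_gather_mmu(&tlb, mm, 0);
    /* ... unmap/zap pages into tlb ... */
    tlb_finish_mmu(&tlb, start, end);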

    It also gets us anon_vma refcounting, which seems to result in a nice
    cleanup of the anon_vma lifetime rules wrt KSM and compaction.
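
    The refcounting itself is the plain atomic get/put pattern sketched below,
    assuming an atomic_t refcount in struct anon_vma (again illustrative, not
    the patch); the point is that KSM and compaction can simply hold a
    reference across a sleep instead of each rolling their own lifetime
    scheme:

    /* sketch: pinning an anon_vma with a reference count */
    static inline void get_anon_vma(struct anon_vma *anon_vma)
    {
            atomic_inc(&anon_vma->refcount);
    }

    static inline void put_anon_vma(struct anon_vma *anon_vma)
    {
            if (atomic_dec_and_test(&anon_vma->refcount))
                    anon_vma_free(anon_vma);        /* last reference frees it */
    }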

    This patch-set is build- and boot-tested on x86_64 (a previous version was
    also tested on Dave's Niagara2 machines, and I suppose s390 was too when
    Martin provided the conversion patch for his arch).

    There are no known architectures left unconverted, although some arch code
    never did see a compiler (superh and ia64 come to mind, I'll try and
    update my toolchains next week).

    Yanmin ran the last posting through the comprehensive Intel test farm
    and didn't find any regressions.

    ( Not included in this posting are the 4 Sparc64 patches that implement
    gup_fast; those can be applied separately after this series gets
    anywhere. )

    The full series (including the Sparc64 gup_fast bits) is also available in
    -git form (against Linus' tree as of about an hour ago) from:

    git:// mmu_preempt

    Do people feel it's ready to get added to -next?
