Date: 2013-01-08
Subject: Re: [PATCHv2 8/9] zswap: add to mm/
On 01/07/2013 12:24 PM, Seth Jennings wrote:
> +struct zswap_tree {
> +	struct rb_root rbroot;
> +	struct list_head lru;
> +	spinlock_t lock;
> +	struct zs_pool *pool;
> +};

BTW, I spent some time trying to get this lock contended. You thought
the anon_vma locks would dominate and this spinlock would not end up
very contended.
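
To make the locking pattern concrete, here is a rough sketch of the kind
of critical section that lock serializes: an rbtree lookup plus an LRU
touch for one swap type. This is illustration only, not code from the
patch; the zswap_entry layout and the lookup helper's name are assumed
from the quoted struct, and struct zswap_tree is the one quoted above.

#include <linux/list.h>
#include <linux/rbtree.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Assumed entry layout, for illustration only. */
struct zswap_entry {
	struct rb_node rbnode;
	struct list_head lru;
	pgoff_t offset;
};

static struct zswap_entry *zswap_lookup(struct zswap_tree *tree,
					pgoff_t offset)
{
	struct rb_node *node;
	struct zswap_entry *entry, *found = NULL;

	spin_lock(&tree->lock);
	node = tree->rbroot.rb_node;
	while (node) {
		entry = rb_entry(node, struct zswap_entry, rbnode);
		if (offset < entry->offset)
			node = node->rb_left;
		else if (offset > entry->offset)
			node = node->rb_right;
		else {
			found = entry;
			break;
		}
	}
	if (found)
		/* rbroot and lru share the one lock, so the LRU bump
		 * happens inside the same critical section */
		list_move(&found->lru, &tree->lru);
	spin_unlock(&tree->lock);
	return found;
}

Presumably every store, load, and invalidate for a swap type goes through
that same per-tree lock, which is why its contention seemed worth probing.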

I figured that if I hit zswap from a bunch of CPUs that _didn't_ use
anonymous memory (and thus the anon_vma locks), some more contention
would pop up. I did that with a bunch of CPUs writing to tmpfs, and
this lock was still well down below anon_vma. The anon_vma contention
was obviously coming from _other_ anonymous memory around.
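
For reference, something along these lines generates that kind of load
(a sketch only; the mount point, process count, and sizes are
placeholders, not the exact test that was run): several processes
streaming writes into tmpfs files until memory pressure pushes those
pages out through the swap path, and therefore through zswap, without
ever touching anon_vma.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define NPROC	16		/* one writer per CPU */
#define CHUNK	(1 << 20)	/* 1 MiB per write() */
#define NCHUNKS	1024		/* ~1 GiB per writer; tune so the total
				 * overwhelms RAM and forces tmpfs pages
				 * out to swap */

static void writer(int id)
{
	char path[64];
	char *buf = malloc(CHUNK);
	int fd, i;

	memset(buf, id, CHUNK);
	snprintf(path, sizeof(path), "/mnt/tmpfs/stress.%d", id);
	fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0600);
	if (fd < 0) {
		perror("open");
		exit(1);
	}
	for (i = 0; i < NCHUNKS; i++)
		if (write(fd, buf, CHUNK) != CHUNK)
			break;	/* ENOSPC etc.: stop quietly */
	close(fd);
	exit(0);
}

int main(void)
{
	int i;

	for (i = 0; i < NPROC; i++)
		if (fork() == 0)
			writer(i);
	for (i = 0; i < NPROC; i++)
		wait(NULL);
	return 0;
}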

IOW, I feel a bit better about this lock. I only tested on 16 cores on
a system with relatively light NUMA characteristics, and it might be the
bottleneck if all the anonymous memory on the system is mlock()'d and
you're pounding on tmpfs, but that's pretty contrived.


