Subject: Re: [PATCH v7 1/4] spinlock: A new lockref structure for lockless update of refcount
On Sun, Sep 01, 2013 at 11:35:21PM +0100, Al Viro wrote:
> > I wonder if there is some false sharing going on. But I don't see that
> > either, this is the percpu offset map afaik:
> >
> > 000000000000f560 d files_lglock_lock
> > 000000000000f564 d nr_dentry
> > 000000000000f568 d last_ino
> > 000000000000f56c d nr_unused
> > 000000000000f570 d nr_inodes
> > 000000000000f574 d vfsmount_lock_lock
> > 000000000000f580 d bh_accounting
> >
> > and I don't see anything there that would get cross-cpu accesses, so
> > there shouldn't be any cacheline bouncing. That's the whole point of
> > percpu variables, after all.
>
> Hell knows... Are you sure you don't see br_write_lock() at all? I don't
> see anything else that would cause cross-cpu traffic with that layout...
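
(For reference, lg_local_lock() only ever touches the calling CPU's
spinlock, while lg_global_lock() - the old br_write_lock() - walks
every CPU's lock, which is exactly the kind of cross-cpu traffic in
question. A simplified sketch of what kernel/lglock.c of that era
does, with the lockdep annotations dropped:

void lg_local_lock(struct lglock *lg)
{
        arch_spinlock_t *lock;

        preempt_disable();
        lock = this_cpu_ptr(lg->lock);          /* our own cacheline */
        arch_spin_lock(lock);
}

void lg_global_lock(struct lglock *lg)          /* was br_write_lock() */
{
        int i;

        preempt_disable();
        for_each_possible_cpu(i) {
                arch_spinlock_t *lock = per_cpu_ptr(lg->lock, i);
                arch_spin_lock(lock);           /* remote cachelines */
        }
}
)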

GRRR... I see something else:
void file_sb_list_del(struct file *file)
{
        if (!list_empty(&file->f_u.fu_list)) {
                lg_local_lock_cpu(&files_lglock, file_list_cpu(file));
                list_del_init(&file->f_u.fu_list);
                lg_local_unlock_cpu(&files_lglock, file_list_cpu(file));
        }
}
will cheerfully cause cross-CPU traffic: file_list_cpu(file) is the CPU
the file was put on the list from, not necessarily the one we are
running on now, so the lock/unlock above can hit another CPU's per-cpu
spinlock. If that's what is going on, the earlier patch I've sent (not
putting non-regular files and files opened r/o on ->s_files) should
reduce the bouncing on that cacheline.
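
Something along these lines is the shape of that earlier patch - a
sketch of the idea only, the condition here is illustrative and not
the actual diff:

void file_sb_list_add(struct file *file, struct super_block *sb)
{
        /* Sketch: only writable regular files go on ->s_files;
         * everything else leaves f_u.fu_list empty, so the
         * list_empty() check in file_sb_list_del() stays true and
         * we never take the lock for them at all. */
        if (!S_ISREG(file_inode(file)->i_mode) ||
            !(file->f_mode & FMODE_WRITE))
                return;

        lg_local_lock(&files_lglock);
        __file_sb_list_add(file, sb);
        lg_local_unlock(&files_lglock);
}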

