Subject: Re: [RFC PATCH 1/2] lib/percpu-list: Per-cpu list with associated per-cpu locks
On Wed, Feb 17, 2016 at 01:45:35PM -0500, Waiman Long wrote:
> The original code has one global lock and one single list that covers all
> the inodes in the filesystem. This patch essentially breaks it up into
> multiple smaller lists with one lock for each. So the lock hold time should
> be greatly reduced, unless we are unfortunate enough that most of the
> inodes end up in one single list.
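
(Aside, for readers following along: the shape being described, i.e. one
lock and one list head per CPU instead of a single global pair, looks
roughly like the userspace sketch below. The names are invented for
illustration and are not the patch's actual API.)

        /*
         * Userspace sketch: one lock plus one list head per "CPU"
         * instead of a single global lock/list pair.
         */
        #include <pthread.h>
        #include <stddef.h>

        #define NR_CPUS 4

        struct pcpu_list_node {
                struct pcpu_list_node *next;
        };

        struct pcpu_list_head {
                pthread_mutex_t lock;
                struct pcpu_list_node *first;
        };

        static struct pcpu_list_head pcpu_list[NR_CPUS];

        static void pcpu_list_init(void)
        {
                for (int i = 0; i < NR_CPUS; i++) {
                        pthread_mutex_init(&pcpu_list[i].lock, NULL);
                        pcpu_list[i].first = NULL;
                }
        }

        /* Insertion takes only the inserting cpu's lock, never a global one. */
        static void pcpu_list_add(struct pcpu_list_node *node, int cpu)
        {
                struct pcpu_list_head *head = &pcpu_list[cpu];

                pthread_mutex_lock(&head->lock);
                node->next = head->first;
                head->first = node;
                pthread_mutex_unlock(&head->lock);
        }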

Most of the inode code has lock breaks in it, but in general you cannot do
that.

The more I look at that inode code, the more I think you want an inode
specific visitor interface and not bother providing something generic.

iterate_bdevs(), drop_pagecache_sb(), wait_sb_inodes(), add_dquot_ref()
all have the same pattern. And maybe fsnotify_unmount_inodes() can be
man-handled into the same form.

Afaict only invalidate_inodes() really doesn't do a lock-break, but it's
very similar in form to evict_inodes(), which does.
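
The shape those functions share looks roughly like the userspace sketch
below. It assumes, as the inode code does, that an element whose reference
count is elevated is never unlinked, so iteration can continue from it
after the list lock has been dropped and retaken; names are invented for
illustration.

        /*
         * Sketch of the "lock break" iteration pattern: pin the current
         * element, drop the list lock, do the (possibly slow) per-element
         * work, then retake the lock and continue from the pinned element.
         */
        #include <pthread.h>
        #include <stdatomic.h>
        #include <stddef.h>

        struct item {
                struct item *next;
                atomic_int refcount;
        };

        static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
        static struct item *list_head;

        static void put_item(struct item *item)
        {
                atomic_fetch_sub(&item->refcount, 1);   /* real code frees at 0 */
        }

        static void visit_all(void (*visit)(struct item *))
        {
                struct item *item, *prev = NULL;

                pthread_mutex_lock(&list_lock);
                for (item = list_head; item; item = item->next) {
                        atomic_fetch_add(&item->refcount, 1);   /* pin it */
                        pthread_mutex_unlock(&list_lock);       /* lock break */

                        visit(item);            /* slow/sleeping work, unlocked */

                        if (prev)
                                put_item(prev); /* drop old pin outside the lock */
                        prev = item;

                        pthread_mutex_lock(&list_lock);         /* and continue */
                }
                pthread_mutex_unlock(&list_lock);
                if (prev)
                        put_item(prev);
        }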


> If lock hold time is a concern, I think in some cases we can set an
> upper limit on how many inodes we want to process, release the lock,
> reacquire it and continue. I am just worried that using RCU and a 16b
> cmpxchg will introduce too much complexity with no performance gain to show.

You don't actually need cmpxchg16b in order to use RCU. But given the
users of this, you don't actually need RCU either.

Just don't try to provide a for_each_list_entry()-like construct.
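
Presumably that means keeping the iteration behind a callback, so the
caller never holds (or even sees) the per-cpu locks and the implementation
stays free to drop and retake them between elements without changing the
interface. A rough userspace sketch, reusing the invented pcpu_list types
from the earlier aside:

        /*
         * Callback-style iteration: all locking stays inside the library.
         * Because callers never own the lock, a per-element lock break
         * (as in the earlier sketch) would be an internal change only.
         */
        static void pcpu_list_iterate(void (*visit)(struct pcpu_list_node *, void *),
                                      void *arg)
        {
                for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                        struct pcpu_list_head *head = &pcpu_list[cpu];
                        struct pcpu_list_node *node;

                        pthread_mutex_lock(&head->lock);
                        for (node = head->first; node; node = node->next)
                                visit(node, arg);   /* called with this cpu's lock held */
                        pthread_mutex_unlock(&head->lock);
                }
        }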
