Date: 27 Feb 2002
Subject: Re: [Lse-tech] lockmeter results comparing 2.4.17, 2.5.3, and 2.5.5

"Martin J. Bligh" wrote:
> > inode_lock hold times are a problem for other reasons. Leaving this
> > unfixed makes the preemptible kernel rather pointless.... An ideal
> > fix would be to release inodes based on VM pressure against their backing
> > page. But I don't think anyone's started looking at inode_lock yet.
> >
> > The big one is lru_list_lock, of course. I'll be releasing code in
> > the next couple of days which should take that off the map. Testing
> > would be appreciated.
> Seeing as people seem to be interested ... there are some big holders
> of BKL around too - do_exit shows up badly (50ms in the data Hanna
> posted, and I've seen that a lot before).

That'll be where exit() takes down the task's address space, in
zap_page_range(). That's a nasty one.
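
Roughly (from memory - this is the shape of the path, not the literal
2.5 code), the whole teardown runs with the BKL held:

	do_exit()
	  __exit_mm(tsk)
	    mmput(mm)
	      exit_mmap(mm)
	        zap_page_range() for each vma
	          -> walks the page tables and frees every page

So the hold time scales with the size of the dying task's address
space, which is why a large process exiting shows up as tens of
milliseconds.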

> I've seen sync_old_buffers
> hold the BKL for 64ms on an 8way Specweb99 run (22Gb of RAM?)
> (though this was on an older 2.4 kernel, and might be fixed by now).

That will still be there - presumably it's where we walk the
per-superblock dirty inode list. hmm.
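
If memory serves, the shape of it is something like this (sketch only,
not the literal code):

	lock_kernel();
	for (each superblock)
		for (each inode on the superblock's dirty list)
			write the inode out;
	unlock_kernel();

The hold time there scales with the number of dirty inodes on the box.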

For lru_list_lock we can do an end-around by not using
buffers at all.

The other big one is truncate_inode_pages(). With ratcache (the
radix-tree pagecache) it's not a contention problem, but it is a
latency problem.
I expect that we can drastically reduce the lock hold time
there by simply snipping the wholly-truncated pages out of
the tree, and thus privatising them so they can be disposed
of outside any locking.
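
Roughly this shape (a sketch only - the unlink helper name is made up):

	LIST_HEAD(private);		/* pages snipped out of the mapping */

	spin_lock(&pagecache_lock);
	for (each page in the mapping beyond the new EOF) {
		__unlink_page_from_mapping(page);	/* illustrative name */
		list_add(&page->list, &private);	/* page is now private */
	}
	spin_unlock(&pagecache_lock);

	/* the slow part - freeing each page - now runs with no locks
	   held, so the hold time no longer scales with the file size */
	while (!list_empty(&private)) {
		struct page *page = list_entry(private.next, struct page, list);
		list_del(&page->list);
		page_cache_release(page);
	}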
