Subject: Re: seq_file: Use larger buffer to reduce time traversing lists
On Fri, 2012-06-01 at 14:10 +0200, Eric Dumazet wrote:
> On Fri, 2012-06-01 at 11:39 +0100, Steven Whitehouse wrote:
> > I've just been taking a look at the seq_read() code, since we've noticed
> > that dumping files with large numbers of records can take some
> > considerable time. This is due to seq_read() using a buffer which is,
> > at most, a single page in size, and to the fact that it has to find
> > its place again on every call to seq_read(). That makes it rather
> > inefficient.
> >
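
(For reference, a simplified sketch of the buffer handling in
fs/seq_file.c from around this time; error handling and locking
elided, and the comment about list walking assumes a
seq_list_start()-style iterator:)

	if (!m->buf) {
		/* the output buffer starts out as a single page */
		m->buf = kmalloc(m->size = PAGE_SIZE, GFP_KERNEL);
	}
	...
	p = m->op->start(m, &pos);	/* for list-backed files this
					 * typically walks from the list
					 * head to reach 'pos' again on
					 * every seq_read() call */
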
> > As an example, I created a GFS2 filesystem with 100k inodes in it, and
> > then ran ls -l to get a decent number of cached inodes. This results in
> > there being approx 400k lines in the debugfs file containing GFS2's
> > glocks. I then timed how long it takes to read this file:
> >
> > [root@chywoon mnt]# time dd if=/sys/kernel/debug/gfs2/unity\:myfs/glocks
> > of=/dev/null bs=1M
> > 0+5769 records in
> > 0+5769 records out
> > 23273958 bytes (23 MB) copied, 63.3681 s, 367 kB/s
>
> What time do you get if you do
>
> time dd if=/sys/kernel/debug/gfs2/unity\:myfs/glocks of=/dev/null bs=4k
>
> This patch seems like the wrong approach to me.
>
> seq_read(size = 1MB) should perform many copy_to_user() calls instead of a single one.
>
> Instead of doing kmalloc(m->size <<= 1, GFP_KERNEL) each time we overflow the buffer,
> we should flush its content to user space.
>
>
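Roughly, the overflow path would change from reallocating to flushing.
A sketch of the idea only, not a tested patch (note that a single
record larger than the buffer would still need the grow path):

	/* in seq_read()'s fill loop, when show() overflows the buffer: */
	if (m->count >= m->size) {
		/* instead of:
		 *	kfree(m->buf);
		 *	m->buf = kmalloc(m->size <<= 1, GFP_KERNEL);
		 * flush what we have and retry the same record:
		 */
		if (copy_to_user(buf, m->buf, m->count))
			return -EFAULT;
		buf += m->count;
		copied += m->count;
		size -= m->count;
		m->count = 0;
		continue;
	}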

By the way, does the following command even work?

time dd if=/sys/kernel/debug/gfs2/unity\:myfs/glocks of=/dev/null bs=16M

I guess not; it probably returns -ENOMEM.
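
If the patch sizes the buffer from the read length (my reading of it),
the 16M case boils down to something like:

	/* a 16MB read request implies a 16MB contiguous allocation */
	m->buf = kmalloc(m->size = size, GFP_KERNEL);	/* size == 16 << 20 */

and kmalloc() cannot satisfy that on common configurations
(KMALLOC_MAX_SIZE is typically 4MB), so seq_read() would return
-ENOMEM.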
