Subject: Re: [Patch] Polling on more than 16000 file descriptors
On Tue, 18 Jan 2000, Alan Cox wrote:

> > I just noticed your patch but why do you use vmalloc() above just 1 page
> > and not the kmalloc()'s limit of 128K? Surely below 128K kmalloc() is
> > faster than vmalloc()? (I know ipc_alloc() does the same but why?).
>
> 128K kmallocs often fail. Probably something like a 32K kmalloc limit
> with vmalloc if the kmalloc fails or >32K ?

I know, which is why I said (to you in private) that 256K is a bad idea
(fragmentation within a few minutes of using the box).
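Your suggestion would, I suppose, look roughly like this (untested, and the
32K cut-off and helper names here are made up, not from any patch):

#define POLL_KMALLOC_MAX (32 * 1024)

/* small tables from the slab allocator, large ones from vmalloc() */
static void *poll_table_alloc(unsigned long size)
{
	if (size <= POLL_KMALLOC_MAX)
		return kmalloc(size, GFP_KERNEL);
	return vmalloc(size);
}

static void poll_table_free(void *p, unsigned long size)
{
	if (size <= POLL_KMALLOC_MAX)
		kfree(p);
	else
		vfree(p);
}

(Retrying a failed kmalloc() with vmalloc() is also possible, but then the
free path has to remember which allocator was actually used.)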

But, even better than your idea (dare I say :) is to have chunked
allocation and use kmalloc() all the time, something like (roughly):

#define POLL_PER_CHUNK 16000
#define POLL_NCHUNKS 8

struct pollfd *fds[POLL_NCHUNKS];
unsigned int nchunks, nleft;

nchunks = 0;
nleft = nfds;
/* allocate one full-size chunk at a time */
while (nleft > POLL_PER_CHUNK) {
	fds[nchunks] = kmalloc(POLL_PER_CHUNK * sizeof(struct pollfd),
			       GFP_KERNEL);
	if (fds[nchunks] == NULL) {
		int k;

		printk(KERN_ERR "poll(): can't kmalloc a chunk\n");
		/* free everything allocated so far, fds[0] included */
		for (k = nchunks - 1; k >= 0; k--)
			kfree(fds[k]);
		goto out;
	}
	nchunks++;
	nleft -= POLL_PER_CHUNK;
}
/* and a final, partial chunk for the remainder */
if (nleft) {
	fds[nchunks] = kmalloc(nleft * sizeof(struct pollfd), GFP_KERNEL);
	if (fds[nchunks] == NULL) {
		int k;

		printk(KERN_ERR "poll(): can't kmalloc the last chunk\n");
		for (k = nchunks - 1; k >= 0; k--)
			kfree(fds[k]);
		goto out;
	}
	nchunks++;
}

But do_poll() will need to be changed to accommodate the new type of
arguments being passed (and the number of full chunks), so I intend to
implement this in the evening at home, if you guys agree with the approach
in principle?
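For illustration, the per-descriptor lookup inside do_poll() could then be
as simple as this (again just a rough sketch, the helper name is made up):

static inline struct pollfd *fds_entry(struct pollfd **fds, unsigned int i)
{
	/* chunk i / POLL_PER_CHUNK, slot i % POLL_PER_CHUNK within it */
	return &fds[i / POLL_PER_CHUNK][i % POLL_PER_CHUNK];
}

with a matching kfree() loop over the nchunks chunks on the way out.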

Regards,
------
Tigran A. Aivazian | http://www.sco.com
Escalations Research Group | tel: +44-(0)1923-813796
Santa Cruz Operation Ltd | http://www.ocston.org/~tigran




