    Hello All, 

    In one of the threads, named "Linux's implementation of poll() not
    scalable?", Linus stated the following:
    Neither poll() nor select() have this problem: they don't get more
    expensive as you have more and more events - their expense is the number
    of file descriptors, not the number of events per se. In fact, both poll()
    and select() tend to perform _better_ when you have pending events, as
    they are both amenable to optimizations when there is no need for waiting,
    and scanning the arrays can use early-out semantics.
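
    If I understand the last point correctly, "early-out" in user space would
    mean something like the sketch below (handle_ready() is just an
    illustrative name, not anything from the thread): select() returns the
    number of ready descriptors, so the caller can stop scanning the fd_set
    once that many have been handled.

        #include <sys/select.h>
        #include <unistd.h>

        void handle_ready(fd_set *rset, int maxfd, int nready)
        {
                int fd;
                char buf[512];

                /* Stop scanning as soon as 'nready' descriptors have been
                 * handled, instead of always walking all maxfd+1 slots. */
                for (fd = 0; fd <= maxfd && nready > 0; fd++) {
                        if (FD_ISSET(fd, rset)) {
                                read(fd, buf, sizeof(buf));
                                nready--;
                        }
                }
        }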

    Please help me understand the above. I'm using select() in a server to
    read on multiple FDs, and the clients are dumping fixed-size messages in a
    loop onto these FDs. The server maintaining those FDs is not able to
    receive all the messages; some of the last messages sent by each client
    are lost. If the number of clients, and hence the number of FDs in the
    server, is increased, the total loss of data grows proportionally.
    For example: 5 clients each send 100 messages to 1 server, and the server
    receives 96 messages from each client; 10 clients each send 100 messages
    to 1 server, and the server again receives 96 from each client.
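
    For concreteness, a minimal select()-based receive loop of the kind
    described above might look like the sketch below (MSG_SIZE, client_fd[]
    and nclients are placeholder names, and error handling is omitted):

        #include <sys/select.h>
        #include <unistd.h>

        #define MSG_SIZE 128   /* fixed message size (placeholder) */

        /* One pass: wait until at least one client FD is readable, then
         * read one fixed-size message from every FD marked ready. */
        void serve_once(int client_fd[], int nclients)
        {
                fd_set rset;
                char buf[MSG_SIZE];
                int i, maxfd = -1;

                FD_ZERO(&rset);
                for (i = 0; i < nclients; i++) {
                        FD_SET(client_fd[i], &rset);
                        if (client_fd[i] > maxfd)
                                maxfd = client_fd[i];
                }

                if (select(maxfd + 1, &rset, NULL, NULL, NULL) <= 0)
                        return;

                for (i = 0; i < nclients; i++)
                        if (FD_ISSET(client_fd[i], &rset))
                                read(client_fd[i], buf, sizeof(buf));
        }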

    If a small sleep is introduced between sending messages, the loss of data
    is reduced. Also, please explain the algorithm select() uses to check for
    messages on the FDs, and how it performs better as the number of FDs
    increases.

    Thanks and Regards,
