Date: Mon, 29 Mar 2010 11:48:22 -0600
From: "Chris Friesen" <>
Subject: Re: behavior of recvmmsg() on blocking sockets
On 03/29/2010 11:24 AM, Brandon Black wrote:
> On Mon, Mar 29, 2010 at 11:18 AM, Chris Friesen <cfriesen@nortel.com> wrote:
>>
>> prev = current time
>> loop forever
>>   cur = current time
>>   timeout = max_latency - (cur - prev)
>>   recvmmsg(timeout)
>>   process all received messages
>>   prev = cur
>>
>> Basically you determine the max latency you're willing to wait for a
>> packet to be handled, then subtract the amount of time you spent
>> processing messages from that and pass it into the recvmmsg() call as
>> the timeout. That way no messages will be delayed for longer than the
>> max latency. (Not considering scheduling delays.)
>
> With a blocking socket, you'd also need to set SO_RCVTIMEO on the
> underlying socket to some value that makes sense and is below your max
> latency, because recvmmsg()'s timeout argument only applies in-between
> underlying recvmsg() calls, not during them.
Hmm...that's a good point. For some reason I had been under the impression that the timeout affected the underlying recvmsg() calls as well. I think it would make more sense for the kernel to abort a blocking recvmsg() call once the timeout expires.
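For what it's worth, here is a minimal C sketch of the loop above with SO_RCVTIMEO capping the underlying blocking receive, per Brandon's point. The port number, batch size, 5 ms latency budget and 1 ms receive timeout are made-up values for illustration, not anything from this thread:

/* sketch: timed recvmmsg() loop on a blocking UDP socket */
#define _GNU_SOURCE
#include <sys/socket.h>
#include <sys/time.h>
#include <netinet/in.h>
#include <string.h>
#include <stdio.h>
#include <time.h>

#define VLEN            64                  /* datagrams per recvmmsg() batch */
#define BUFSIZE         1500
#define MAX_LATENCY_NS  (5 * 1000000L)      /* 5 ms handling budget */

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port   = htons(9000),   /* arbitrary */
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    /* recvmmsg()'s timeout is only checked between datagrams, so cap
     * each underlying blocking wait separately with SO_RCVTIMEO. */
    struct timeval rcvto = { .tv_sec = 0, .tv_usec = 1000 };  /* 1 ms */
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &rcvto, sizeof(rcvto));

    static char bufs[VLEN][BUFSIZE];
    struct iovec iov[VLEN];
    struct mmsghdr msgs[VLEN];
    memset(msgs, 0, sizeof(msgs));
    for (int i = 0; i < VLEN; i++) {
        iov[i].iov_base = bufs[i];
        iov[i].iov_len  = BUFSIZE;
        msgs[i].msg_hdr.msg_iov    = &iov[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    struct timespec prev, cur;
    clock_gettime(CLOCK_MONOTONIC, &prev);
    for (;;) {
        clock_gettime(CLOCK_MONOTONIC, &cur);
        long spent = (cur.tv_sec - prev.tv_sec) * 1000000000L
                   + (cur.tv_nsec - prev.tv_nsec);
        long left  = MAX_LATENCY_NS > spent ? MAX_LATENCY_NS - spent : 0;
        struct timespec timeout = { left / 1000000000L, left % 1000000000L };

        int n = recvmmsg(fd, msgs, VLEN, 0, &timeout);
        for (int i = 0; i < n; i++)           /* "process all received messages" */
            printf("got %u bytes\n", msgs[i].msg_len);

        prev = cur;   /* charge processing time against the next timeout */
    }
}

With this arrangement recvmmsg() returns when VLEN datagrams have arrived, when the remaining budget expires (checked between datagrams), or when an idle 1 ms SO_RCVTIMEO wait runs out, so the worst-case delay beyond the budget is roughly one SO_RCVTIMEO period.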
As for spending a lot of time spinning if there are gaps in the input stream: in the cases where the time-based usage makes sense, the normal situation is that there are a lot of packets coming in. A 10gig ethernet pipe can theoretically receive something like 19 packets per usec. It doesn't take much of a delay before you probably have packets waiting.
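(Back-of-envelope numbers behind that estimate: 10 Gbit/s is 1.25 GB/s, and a minimum-size Ethernet frame is 64 bytes, so 1.25e9 / 64 is roughly 19.5 million packets per second, i.e. about 19 packets per usec. Counting the preamble and inter-frame gap brings that down to about 14.9 Mpps, but the point stands.)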
Chris