 
Date: 26 May 2004
Subject: Re: net_device->queue_lock contention on 32-way box

The net_tx_action() --> qdisc_run() --> qdisc_restart() code path
can hold the lock for a long time especially if lots of packets
have been enqueued before net_tx_action() had a chance to run.

For each enqueued packet, we go all the way into the device driver
to give the packet to the device. Given that PCI PIO accesses are
likely in these paths, along with some memory accesses (to set up
packet descriptors and the like), this could take quite a bit of
time.
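
Roughly, the loop in question looks like this (a simplified sketch
of the 2.6-era code from memory, not the exact source; requeue and
error handling are omitted):

	/* Called from net_tx_action() with dev->queue_lock held.
	 * Keeps pushing packets until the qdisc is empty or the
	 * driver stops the queue, so a long backlog means a long
	 * stretch spent under (mostly) dev->queue_lock.
	 */
	static inline void qdisc_run(struct net_device *dev)
	{
		while (!netif_queue_stopped(dev) &&
		       qdisc_restart(dev) < 0)
			/* nothing */ ;
	}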

We do temporarily release the dev->queue_lock between each
packet while we go into the driver. It could be that what you're
seeing is the latency to get the device's dev->xmit_lock, because
we have to acquire that before we can release the dev->queue_lock.
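
In other words, the per-packet ordering is roughly the following
(again only a sketch; the real qdisc_restart() uses a trylock with
collision handling and requeues the skb on failure):

	/* called with dev->queue_lock held */
	static inline int qdisc_restart(struct net_device *dev)
	{
		struct Qdisc *q = dev->qdisc;
		struct sk_buff *skb;

		if ((skb = q->dequeue(q)) == NULL)
			return 0;

		/* grab the driver before letting go of the queue */
		spin_lock(&dev->xmit_lock);
		spin_unlock(&dev->queue_lock);

		dev->hard_start_xmit(skb, dev);	/* PIO, descriptor setup, ... */

		spin_unlock(&dev->xmit_lock);
		spin_lock(&dev->queue_lock);

		return -1;	/* tell qdisc_run() to keep going */
	}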

If you bind the device interrupts to one cpu, do things change?

