Subject: [PATCH 5/5] net: mvneta: reduce smp_processor_id() calling in mvneta_tx_done_gbe
In the loop of mvneta_tx_done_gbe(), smp_processor_id() is called on
every iteration. Move the call out of the loop to optimize the code a bit.

Before the patch, the loop looks like this (on arm64):

ldr x1, [x29,#120]
...
ldr w24, [x1,#36]
...
bl 0 <_raw_spin_lock>
str w24, [x27,#132]
...

After the patch, the loop looks like this (on arm64):

...
bl 0 <_raw_spin_lock>
str w23, [x28,#132]
...
where w23 is loaded ahead of the loop, so it is already available on
each iteration.

From another angle, mvneta_tx_done_gbe() is called from mvneta_poll(),
which runs in non-preemptible context, so it is safe to call
smp_processor_id() only once.
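
For reference, a minimal sketch (not part of this patch, function name
made up for illustration) of what a hypothetical preemptible caller
would have to do instead: caching the CPU id as above is only valid
because NAPI poll runs with preemption disabled; elsewhere,
get_cpu()/put_cpu() would be needed so the task cannot migrate while
the cached id is in use.

#include <linux/smp.h>
#include <linux/netdevice.h>

/* Hypothetical example: in preemptible context the CPU id must be
 * pinned with get_cpu()/put_cpu(); otherwise the task could migrate
 * and the cached id would be stale by the time __netif_tx_lock()
 * uses it.
 */
static void example_tx_locked_work(struct netdev_queue *nq)
{
	int cpu = get_cpu();	/* disables preemption, returns CPU id */

	__netif_tx_lock(nq, cpu);
	/* ... per-queue TX completion work ... */
	__netif_tx_unlock(nq);

	put_cpu();		/* re-enables preemption */
}

In mvneta_tx_done_gbe() none of this is needed: the NAPI/softirq
context already guarantees the CPU cannot change across the loop.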

Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
---
drivers/net/ethernet/marvell/mvneta.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 7d98f7828a30..62e81e267e13 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2507,12 +2507,13 @@ static void mvneta_tx_done_gbe(struct mvneta_port *pp, u32 cause_tx_done)
{
struct mvneta_tx_queue *txq;
struct netdev_queue *nq;
+ int cpu = smp_processor_id();

while (cause_tx_done) {
txq = mvneta_tx_done_policy(pp, cause_tx_done);

nq = netdev_get_tx_queue(pp->dev, txq->id);
- __netif_tx_lock(nq, smp_processor_id());
+ __netif_tx_lock(nq, cpu);

if (txq->count)
mvneta_txq_done(pp, txq);
--
2.18.0