Date: 2019-10-12
From: Alexander Lobakin <alobakin@dlink.ru>
Subject: Re: [PATCH net-next 2/2] net: core: increase the default size of GRO_NORMAL skb lists to flush
Alexander Lobakin wrote 11.10.2019 10:23:
> Hi Edward,
>
> Edward Cree wrote 10.10.2019 21:16:
>> On 10/10/2019 15:42, Alexander Lobakin wrote:
>>> Commit 323ebb61e32b ("net: use listified RX for handling GRO_NORMAL
>>> skbs") has introduced a sysctl variable gro_normal_batch for defining
>>> a limit for listified Rx of GRO_NORMAL skbs. The initial value of 8 is
>>> purely arbitrary and has been chosen, I believe, as a minimal safe
>>> default.
>> 8 was chosen by performance tests on my setup with v1 of that patch;
>>  see https://www.spinics.net/lists/netdev/msg585001.html .
>> Sorry for not including that info in the final version of the patch.
>> While I didn't re-do tests on varying gro_normal_batch on the final
>>  version, I think changing it needs more evidence than just "we tested
>>  it; it's better".  In particular, increasing the batch size should be
>>  accompanied by demonstration that latency isn't increased in e.g. a
>>  multi-stream ping-pong test.
>>
>>> However, several tests show that it's rather suboptimal and doesn't
>>> allow taking full advantage of listified processing. The best and
>>> most balanced results have been achieved with batches of 16 skbs
>>> per flush.
>>> So double the default value to give yet another boost to the Rx path.
>>
>>> It remains configurable via sysctl anyway, so it may be fine-tuned
>>> for each hardware setup.
>> I see this as a reason to leave the default as it is; the combination
>>  of your tests and mine has established that the optimal size does
>>  vary (I found 16 to be 2% slower than 8 with my setup), so any
>>  tweaking of the default is likely only worthwhile if we have data
>>  over lots of different hardware combinations.
>
> Agreed: if you've got slower results with 16, we must leave the
> default value as is, since the optimum seems to be VERY hardware- and
> driver-dependent.
> So patch 2/2 is no longer relevant (I suspected it would likely be
> dropped before this series was even sent).

I've come up with another solution. Considering that the optimal
gro_normal_batch is very individual for every single case, maybe it
would be better to make it a per-NAPI (or per-netdevice) variable
rather than a global one across the kernel?
I think most network-capable configurations and systems have more than
one network device nowadays, and they might need different values to
achieve their best performance.
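
For reference, the flush logic introduced by commit 323ebb61e32b
("net: use listified RX for handling GRO_NORMAL skbs") currently checks
the batch counter against the global variable; it looks roughly like
this (paraphrased from net/core/dev.c, not an exact quote):

/* Queue one GRO_NORMAL skb up for list processing; once the batch
 * limit is reached, pass the whole list up the stack.
 */
static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb)
{
	list_add_tail(&skb->list, &napi->rx_list);
	if (++napi->rx_count >= gro_normal_batch)
		gro_normal_list(napi);
}

The idea would be to keep this logic intact, but let the limit come
from the NAPI context instead of a single global knob.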

One possible variant is:

#define THIS_DRIVER_GRO_NORMAL_BATCH	16

/* ... */

/* napi->gro_normal_batch gets set to the sysctl value during NAPI
 * context initialization.
 */
netif_napi_add(dev, napi, this_driver_rx_poll, NAPI_POLL_WEIGHT);

/* New static inline helper: napi->gro_normal_batch gets overridden
 * with the driver-specific value of 16.
 */
napi_set_gro_normal_batch(napi, THIS_DRIVER_GRO_NORMAL_BATCH);
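
The helper itself could be as simple as the sketch below. To be clear,
both the helper name and the gro_normal_batch member of struct
napi_struct are hypothetical at this point and would be introduced by
the patch:

/* Hypothetical helper (e.g. in include/linux/netdevice.h): lets a
 * driver override the batch size inherited from the global sysctl at
 * netif_napi_add() time.
 */
static inline void napi_set_gro_normal_batch(struct napi_struct *napi,
					     unsigned int batch)
{
	napi->gro_normal_batch = batch;
}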

The second possible variant is to make the gro_normal_batch sysctl
per-netdevice, so it can be tuned from userspace.
Or we can combine the two, making it available for tweaking from both
the driver and userspace, just like the XPS CPUs setting is today.
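
Whichever variant we pick, the consumer side in net/core/dev.c would
then read the limit from the NAPI context rather than the global
variable; a minimal sketch of the change, assuming the per-NAPI field
from above:

static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb)
{
	list_add_tail(&skb->list, &napi->rx_list);
	/* Compare against the per-NAPI limit instead of the global
	 * gro_normal_batch sysctl variable.
	 */
	if (++napi->rx_count >= napi->gro_normal_batch)
		gro_normal_list(napi);
}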

If you find any of this reasonable and worth implementing, I'll come
back with it in v2 after proper testing.

>
>>> Signed-off-by: Alexander Lobakin <alobakin@dlink.ru>
>>> ---
>>> net/core/dev.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/net/core/dev.c b/net/core/dev.c
>>> index a33f56b439ce..4f60444bb766 100644
>>> --- a/net/core/dev.c
>>> +++ b/net/core/dev.c
>>> @@ -4189,7 +4189,7 @@ int dev_weight_tx_bias __read_mostly = 1; /* bias for output_queue quota */
>>> int dev_rx_weight __read_mostly = 64;
>>> int dev_tx_weight __read_mostly = 64;
>>> /* Maximum number of GRO_NORMAL skbs to batch up for list-RX */
>>> -int gro_normal_batch __read_mostly = 8;
>>> +int gro_normal_batch __read_mostly = 16;
>>>
>>> /* Called with irq disabled */
>>> static inline void ____napi_schedule(struct softnet_data *sd,
>
> Regards,
> ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ

Regards,
ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ
