Subject: Re: [RFC PATCH v1 2/2] virtio_net: Don't disable napi on low memory.
On Wed, Jan 4, 2012 at 6:46 PM, Mike Waychison <> wrote:
> On Wed, Jan 4, 2012 at 4:31 PM, Rusty Russell <> wrote:
>> 4) You use the skb data for the linked list; use the skb head's list.

What did you mean by this? I was under the impression that the ->next
and ->prev fields in sk_buff were the first two members specifically
so that the pointer could be treated as a list_head. If it's the cast
in particular that you object to, I can easily change this to a singly
linked list threaded through ->next if that's preferable.
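
For reference, a quick sketch of the two shapes I mean (helper names
here are made up for illustration, not copied from the patch):

/* Roughly what the patch does now: the skb pointer doubles as a
 * list_head, which only works because ->next/->prev are the first
 * two members of sk_buff. */
static void queue_skb_cast(struct list_head *pending, struct sk_buff *skb)
{
	list_add_tail((struct list_head *)skb, pending);
}

/* The alternative: a singly linked list threaded through ->next only. */
static void queue_skb_singly(struct sk_buff **pending, struct sk_buff *skb)
{
	skb->next = *pending;
	*pending = skb;
}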

>> Instead, here's how I think it should be done:
> This sounds reasonable to me.  I'll see what I can muster together this week.

So I started implementing it the way you were mentioning, and ran into
a problem with the original patchset.

Currently the "mergeable" and "big" receive buffers use a private page
free list (virtnet_info->pages) which has no synchronization of its
own. This means that the batched version can't use get_a_page() and
give_pages() as is, which undercuts the point of re-using the same
alloc halves that I've split out. Alternatives I can think of at this
point:

- pass in a flag to the allocators, say "bool is_serial", that is true
if we are serialized with napi and so determines whether we can muck
with the private free list (sketched after this list), or
- not use the same allocators for the "mergeable" and "big" paths.
The mergeable allocator in the non-serialized case reduces to
alloc_page(), while the big allocator ends up as a copy-and-paste
that uses alloc_page() instead of get_a_page().
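
To make the first alternative concrete, here is roughly what I have in
mind (the helper body below is paraphrased from memory rather than
quoted, and "is_serial" is just a name I'm making up):

/* Free pages are chained through page->private with vi->pages as the
 * head, and nothing protects that list, so only callers serialized
 * with napi may touch it; everyone else goes straight to alloc_page(). */
static struct page *get_a_page(struct virtnet_info *vi, gfp_t gfp_mask,
			       bool is_serial)
{
	struct page *p;

	if (!is_serial || !vi->pages)
		return alloc_page(gfp_mask);

	p = vi->pages;
	vi->pages = (struct page *)p->private;
	p->private = 0;		/* ->private is only used for chaining */
	return p;
}

give_pages() would need the same treatment if the non-serialized path
can ever hand pages back. The second alternative avoids touching these
helpers at all: the non-serialized "mergeable" refill calls alloc_page()
directly and the "big" refill gets a copy that does the same.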

Preferences? I'll code one of the two up and see what it looks like.
