Subject: Re: [PATCH] net: page_pool: fix refcounting issues with fragmented allocation
On 24.01.23 16:57, Alexander H Duyck wrote:
> On Tue, 2023-01-24 at 16:11 +0200, Ilias Apalodimas wrote:
>> Hi Felix,
>>
>> ++cc Alexander and Yunsheng.
>>
>> Thanks for the report
>>
>> On Tue, 24 Jan 2023 at 14:43, Felix Fietkau <nbd@nbd.name> wrote:
>> >
>> > While testing fragmented page_pool allocation in the mt76 driver, I was able
>> > to reliably trigger page refcount underflow issues, which did not occur with
>> > full-page page_pool allocation.
>> > It appears to me that handling refcounting in two separate counters
>> > (page->pp_frag_count and page refcount) is racy when page refcount gets
>> > incremented by code dealing with skb fragments directly, and
>> > page_pool_return_skb_page is called multiple times for the same fragment.
>> >
>> > Dropping page->pp_frag_count and relying entirely on the page refcount makes
>> > these underflow issues and crashes go away.
>> >
>>
>> This has been discussed here [1]. TL;DR: changing this to the page
>> refcount might blow up in other colorful ways. Can we look closer and
>> figure out why the underflow happens?
>>
>> [1] https://lore.kernel.org/netdev/1625903002-31619-4-git-send-email-linyunsheng@huawei.com/
>>
>> Thanks
>> /Ilias
>>
>>
>
> The logic should be safe in terms of the page pool itself as it should
> be holding one reference to the page while the pp_frag_count is non-
> zero. That one reference is what keeps the two halves in sync as the
> page shouldn't be able to be freed until we exhaust the pp_frag_count.
>
> To have an underflow there are two possible scenarios. One is that
> either put_page or free_page is being called somewhere the page_pool
> freeing functions should be used instead. The other possibility is
> that a pp_frag_count reference was taken somewhere a page reference
> should have been.
>
> Do we have a backtrace for the spots that are showing this underflow? If
> nothing else we may want to look at tracking down the spots that are
> freeing the page pool pages via put_page or free_page to determine what
> paths these pages are taking.
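
For reference, the two-counter scheme described above looks roughly like
this (a simplified sketch modeled on the v6.1 page_pool helpers, not the
verbatim kernel source):

    /* The pool holds one page refcount for as long as pp_frag_count is
     * non-zero; fragment users only touch pp_frag_count. Dropping the
     * last fragment reference hands the page, and the single refcount
     * pinning it, back to the pool.
     */
    static inline long page_pool_defrag_page(struct page *page, long nr)
    {
            long ret;

            /* If nr == pp_frag_count this is the last fragment
             * reference; skip the atomic and let the caller recycle
             * the page, re-initialising the counter on reuse.
             */
            if (atomic_long_read(&page->pp_frag_count) == nr)
                    return 0;

            ret = atomic_long_sub_return(nr, &page->pp_frag_count);
            WARN_ON(ret < 0); /* the underflow warning reported here */
            return ret;
    }
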
Here's an example of the kind of traces that I was seeing with v6.1:
https://nbd.name/p/61a6617e
On v5.15 I also occasionally got traces like this:
https://nbd.name/p/0b9e4f0d

From what I can tell, it also triggered the warning that shows up when
page->pp_frag_count underflows. Unfortunately these traces don't
directly point to the place where things go wrong.
I do wonder if the pp_frag_count handling might be racy when we have a
mix of get_page + page_pool_put_page calls on the same page.
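
To make that suspicion concrete, here is a hypothetical interleaving of
that kind (illustrative only; the exact call sites in mt76 are not
identified):

    CPU A                               CPU B
    -----                               -----
    skb_frag_ref(skb, i)
      -> get_page(page)                 (raises the page refcount,
                                         not pp_frag_count)
    ...
    page_pool_return_skb_page(page)     page_pool_return_skb_page(page)
      -> page_pool_defrag_page(...)       -> page_pool_defrag_page(...)

If both calls account for the same fragment, pp_frag_count is decremented
twice for a single reference and underflows, even though the extra
get_page() keeps the struct page itself alive.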

In case you're wondering what I was doing to trigger the crash: I simply
created 4 wireless client mode interfaces on the same card and pushed TCP
traffic from an AP to all 4 simultaneously. I can trigger it pretty much
immediately after the TCP traffic ramps up.

- Felix
