Subject: Re: drm/nouveau: 5.11 cycle regression bisected to 461619f5c324 "drm/nouveau: switch to new allocator"
From: Christian König <christian.koenig@amd.com>
Date: Wed, 10 Feb 2021


On 10.02.21 13:22, Mike Galbraith wrote:
> On Wed, 2021-02-10 at 12:44 +0100, Christian König wrote:
>> Please try to add a "return NULL" at the beginning of ttm_pool_type_take().
>>
>> That should effectively disable using the pool.
> That did away with the yield looping, but it doesn't take long for the
> display to freeze. I ssh'd in from lappy, but there was nada in dmesg.

Yeah, that is expected. Without taking pages from the pool we leak
memory like a sieve.

At least we could narrow down the problem quite a bit with that.
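
For reference, the "return NULL" experiment above boils down to something
like this (a minimal sketch assuming the 5.11 ttm_pool_type_take()
signature, with the original function body elided):

static struct page *ttm_pool_type_take(struct ttm_pool_type *pt)
{
	/* Diagnostic only: pretend the pool is always empty, so no
	 * recycled pages are ever handed out.
	 */
	return NULL;

	/* ... original body, now unreachable ... */
}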

Can you test the attached patch and see if it helps?

Thanks,
Christian.

>
>> Thanks for testing,
> Happy to.
>
> -Mike
>

From 1e623dc5c535de2d0af3c5c6107c08ffffa4fe07 Mon Sep 17 00:00:00 2001
From: Christian König <christian.koenig@amd.com>
Date: Wed, 10 Feb 2021 14:24:27 +0100
Subject: [PATCH] drm/ttm: make sure pool pages are cleared
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The old implementation wasn't consistent about this.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
drivers/gpu/drm/ttm/ttm_pool.c | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index 74bf1c84b637..6e27cb1bf48b 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -33,6 +33,7 @@
 
 #include <linux/module.h>
 #include <linux/dma-mapping.h>
+#include <linux/highmem.h>
 
 #ifdef CONFIG_X86
 #include <asm/set_memory.h>
@@ -218,6 +219,15 @@ static void ttm_pool_unmap(struct ttm_pool *pool, dma_addr_t dma_addr,
 /* Give pages into a specific pool_type */
 static void ttm_pool_type_give(struct ttm_pool_type *pt, struct page *p)
 {
+	unsigned int i, num_pages = 1 << pt->order;
+
+	for (i = 0; i < num_pages; ++i) {
+		if (PageHighMem(p))
+			clear_highpage(p + i);
+		else
+			clear_page(page_address(p + i));
+	}
+
 	spin_lock(&pt->lock);
 	list_add(&p->lru, &pt->pages);
 	spin_unlock(&pt->lock);
--
2.25.1