Subject: [PATCH 1/2] Revert "drm/sched: fix timeout handling v2"

This reverts commit 0efd2d2f68cd5dbddf4ecd974c33133257d16a8e, which causes the
following failure during V3D GPU reset:

[ 1418.227796] Unable to handle kernel NULL pointer dereference at virtual address 00000018
[ 1418.235947] pgd = dc4c55ca
[ 1418.238695] [00000018] *pgd=80000040004003, *pmd=00000000
[ 1418.244132] Internal error: Oops: 206 [#1] SMP ARM
[ 1418.248934] Modules linked in:
[ 1418.252001] CPU: 0 PID: 10253 Comm: kworker/0:0 Not tainted 4.19.0-rc6+ #486
[ 1418.259058] Hardware name: Broadcom STB (Flattened Device Tree)
[ 1418.265002] Workqueue: events drm_sched_job_timedout
[ 1418.269986] PC is at dma_fence_remove_callback+0x8/0x50
[ 1418.275218] LR is at drm_sched_job_timedout+0x4c/0x118
...
[ 1418.415891] [<c086b754>] (dma_fence_remove_callback) from [<c06e7e6c>] (drm_sched_job_timedout+0x4c/0x118)
[ 1418.425571] [<c06e7e6c>] (drm_sched_job_timedout) from [<c0242500>] (process_one_work+0x2c8/0x7bc)
[ 1418.434552] [<c0242500>] (process_one_work) from [<c0242a38>] (worker_thread+0x44/0x590)
[ 1418.442663] [<c0242a38>] (worker_thread) from [<c0249b10>] (kthread+0x160/0x168)
[ 1418.450076] [<c0249b10>] (kthread) from [<c02010ac>] (ret_from_fork+0x14/0x28)
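
For reference, the faulting address 00000018 lines up with the offset of the
lock pointer in struct dma_fence on 32-bit ARM, so the oops is consistent
with fence->parent being NULL for a job still on the mirror list when the
reverted code passes it to dma_fence_remove_callback().  A minimal sketch of
the first loop being reverted, with a hypothetical NULL check added purely
for illustration (not something this revert introduces):

	/*
	 * First loop of the reverted drm_sched_job_timedout(), plus a
	 * hypothetical guard; "already_signaled" is a label inside the
	 * reverted function's second loop.
	 */
	list_for_each_entry_reverse(job, &sched->ring_mirror_list, node) {
		struct drm_sched_fence *fence = job->s_fence;

		/* Hypothetical: skip jobs with no hardware fence attached. */
		if (!fence->parent ||
		    !dma_fence_remove_callback(fence->parent, &fence->cb))
			goto already_signaled;
	}

The guard above is only meant to show where the dereference happens; this
patch simply restores the previous timeout path rather than adding such a
check.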

Cc: Christian König <christian.koenig@amd.com>
Cc: Nayan Deshmukh <nayan26deshmukh@gmail.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
---
drivers/gpu/drm/scheduler/sched_main.c | 30 +-------------------------
1 file changed, 1 insertion(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 44fe587aaef9..bd7d11c47202 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -249,41 +249,13 @@ static void drm_sched_job_timedout(struct work_struct *work)
 {
 	struct drm_gpu_scheduler *sched;
 	struct drm_sched_job *job;
-	int r;
 
 	sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
-
-	spin_lock(&sched->job_list_lock);
-	list_for_each_entry_reverse(job, &sched->ring_mirror_list, node) {
-		struct drm_sched_fence *fence = job->s_fence;
-
-		if (!dma_fence_remove_callback(fence->parent, &fence->cb))
-			goto already_signaled;
-	}
-
 	job = list_first_entry_or_null(&sched->ring_mirror_list,
 				       struct drm_sched_job, node);
-	spin_unlock(&sched->job_list_lock);
 
 	if (job)
-		sched->ops->timedout_job(job);
-
-	spin_lock(&sched->job_list_lock);
-	list_for_each_entry(job, &sched->ring_mirror_list, node) {
-		struct drm_sched_fence *fence = job->s_fence;
-
-		if (!fence->parent || !list_empty(&fence->cb.node))
-			continue;
-
-		r = dma_fence_add_callback(fence->parent, &fence->cb,
-					   drm_sched_process_job);
-		if (r)
-			drm_sched_process_job(fence->parent, &fence->cb);
-
-already_signaled:
-		;
-	}
-	spin_unlock(&sched->job_list_lock);
+		job->sched->ops->timedout_job(job);
 }
 
 /**
--
2.19.1