Subject: Re: [PATCH 2/3] drm/scheduler: Don't call wait_event_killable for signaled process.
On 04/24, Andrey Grodzovsky wrote:
>
> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> @@ -227,9 +227,10 @@ void drm_sched_entity_do_release(struct drm_gpu_scheduler *sched,
> return;
> /**
> * The client will not queue more IBs during this fini, consume existing
> - * queued IBs or discard them on SIGKILL
> + * queued IBs or discard them when in death signal state since
> + * wait_event_killable can't receive signals in that state.
> */
> - if ((current->flags & PF_SIGNALED) && current->exit_code == SIGKILL)
> + if (current->flags & PF_SIGNALED)

Please do not use PF_SIGNALED, it must die. Besides, you can't rely on this flag
in the multi-threaded case. current->exit_code doesn't look right either.

> entity->fini_status = -ERESTARTSYS;
> else
> entity->fini_status = wait_event_killable(sched->job_scheduled,
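
Just as an illustration of a per-thread alternative (untested, not something this
patch proposes): if the intent is only "this task is already exiting, don't block
killably", then PF_EXITING, which do_exit() sets on the thread before its files
are released, does not depend on which thread dequeued the fatal signal:

	if (current->flags & PF_EXITING)
		entity->fini_status = -ERESTARTSYS;
	else
		entity->fini_status = wait_event_killable(sched->job_scheduled, ...);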

So afaics the problem is that fatal_signal_pending() is not necessarily true
after SIGKILL has already been dequeued, and thus wait_event_killable() is never
interrupted for the exiting killed thread, right?
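
For reference, the condition a TASK_KILLABLE sleep is woken on looks roughly like
this (paraphrased from include/linux/sched/signal.h, not verbatim):

	static inline int __fatal_signal_pending(struct task_struct *p)
	{
		/* false again once get_signal() has dequeued SIGKILL */
		return sigismember(&p->pending.signal, SIGKILL);
	}

	static inline int signal_pending_state(unsigned int state, struct task_struct *p)
	{
		if (!(state & (TASK_INTERRUPTIBLE | TASK_WAKEKILL)))
			return 0;
		if (!signal_pending(p))
			return 0;
		/* TASK_KILLABLE: only a pending fatal signal interrupts the wait */
		return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
	}

So once the killed thread has dequeued SIGKILL and is running its exit path,
nothing above fires and wait_event_killable() just sleeps until job_scheduled is
actually woken.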

This was already discussed, but it is not clear what we can/should do. We can
probably change get_signal() to not dequeue SIGKILL or do something else to keep
fatal_signal_pending() == T for the exiting killed thread.
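
Purely as an illustration (untested), something along these lines in the fatal
path of get_signal(), before do_group_exit():

	/*
	 * Put SIGKILL back into the per-thread pending set so that
	 * fatal_signal_pending() stays true while the killed thread runs its
	 * exit path (do_exit() -> exit_files() -> fput() -> ...).
	 */
	sigaddset(&current->pending.signal, SIGKILL);
	set_tsk_thread_flag(current, TIF_SIGPENDING);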

But in this case we probably also want to discriminate the "real" SIGKILLs from
group_exit/exec/coredump.

Oleg.
