| author | Andrey Grodzovsky <andrey.grodzovsky@amd.com> | 2018-10-18 12:32:46 -0400 |
|---|---|---|
| committer | Alex Deucher <alexander.deucher@amd.com> | 2018-11-05 14:21:22 -0500 |
| commit | faf6e1a87e07423a729e04fb2e8188742e89ea4c (patch) | |
| tree | 51c205851d3e99371bbf271b8f7e3e59b32ceb04 /drivers/gpu/drm/scheduler | |
| parent | 2bb42410b1bd324912389c6ac748df1c1befd69f (diff) | |
drm/sched: Add boolean to mark if sched is ready to work v5
Problem:
A particular scheduler may become unusable (underlying HW) after
some event (e.g. GPU reset). If it's later chosen by
the get-free-sched policy, a command will fail to be
submitted.
Fix:
Add a driver-specific callback to report the sched status so
an rq with a bad sched can be avoided in favor of a working one,
or none, in which case job init will fail.
v2: Switch from driver callback to a flag in the scheduler.
v3: rebase
v4: Remove ready parameter from drm_sched_init, set
unconditionally to true once init is done.
v5: fix missed change in v3d in v4 (Alex)
Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
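For context, the boolean named in the subject is a new `ready` member of struct drm_gpu_scheduler; it is declared in include/drm/gpu_scheduler.h and therefore does not show up in the diffstat below, which is limited to drivers/gpu/drm/scheduler. The following is a minimal sketch of how a driver might drive the flag around a reset; the `my_ring` type and `my_ring_test()` helper are illustrative assumptions, not part of this patch.

```c
#include <drm/gpu_scheduler.h>

/*
 * Hypothetical driver-side ring wrapper, for illustration only:
 * real drivers (e.g. amdgpu) embed one struct drm_gpu_scheduler
 * per HW ring.
 */
struct my_ring {
	struct drm_gpu_scheduler sched;
};

/* Hypothetical HW ring test; returns 0 when the ring is functional. */
int my_ring_test(struct my_ring *ring);

/*
 * Sketch, not part of this patch: clear 'ready' when the HW behind a
 * scheduler can no longer take jobs (e.g. the ring test after a GPU
 * reset fails), so drm_sched_entity_get_free_sched() skips it when
 * load balancing.
 */
static void my_driver_post_reset(struct my_ring *ring)
{
	ring->sched.ready = (my_ring_test(ring) == 0);
}
```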
Diffstat (limited to 'drivers/gpu/drm/scheduler')
-rw-r--r-- | drivers/gpu/drm/scheduler/sched_entity.c | 9
-rw-r--r-- | drivers/gpu/drm/scheduler/sched_main.c | 6
2 files changed, 14 insertions, 1 deletions
```diff
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index 3e22a54a99c2..ba54c30a466e 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -130,7 +130,14 @@ drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
 	int i;
 
 	for (i = 0; i < entity->num_rq_list; ++i) {
-		num_jobs = atomic_read(&entity->rq_list[i]->sched->num_jobs);
+		struct drm_gpu_scheduler *sched = entity->rq_list[i]->sched;
+
+		if (!entity->rq_list[i]->sched->ready) {
+			DRM_WARN("sched%s is not ready, skipping", sched->name);
+			continue;
+		}
+
+		num_jobs = atomic_read(&sched->num_jobs);
 		if (num_jobs < min_jobs) {
 			min_jobs = num_jobs;
 			rq = entity->rq_list[i];
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 63b997d9c562..6b2fd49334f7 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -420,6 +420,9 @@ int drm_sched_job_init(struct drm_sched_job *job,
 	struct drm_gpu_scheduler *sched;
 
 	drm_sched_entity_select_rq(entity);
+	if (!entity->rq)
+		return -ENOENT;
+
 	sched = entity->rq->sched;
 
 	job->sched = sched;
@@ -633,6 +636,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
 		return PTR_ERR(sched->thread);
 	}
 
+	sched->ready = true;
 	return 0;
 }
 EXPORT_SYMBOL(drm_sched_init);
@@ -648,5 +652,7 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched)
 {
 	if (sched->thread)
 		kthread_stop(sched->thread);
+
+	sched->ready = false;
 }
 EXPORT_SYMBOL(drm_sched_fini);
```
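With the new guard in drm_sched_job_init(), a submission path has to cope with the case where no ready scheduler is reachable from the entity: per the commit message, job init then fails instead of submitting to a dead scheduler. Below is a hedged sketch of such a caller against the drm_sched_job_init() signature used at the time of this patch (job, entity, owner); `my_job` and `my_submit()` are made-up names for illustration, and the fence plumbing real drivers do between init and push is omitted.

```c
#include <linux/errno.h>
#include <drm/gpu_scheduler.h>

/* Hypothetical driver job embedding the scheduler job, for illustration. */
struct my_job {
	struct drm_sched_job base;
};

/*
 * Sketch of a submission path: if drm_sched_entity_select_rq() left the
 * entity without a runqueue, drm_sched_job_init() now returns -ENOENT
 * and the error is simply propagated up the submit path.
 */
static int my_submit(struct my_job *job, struct drm_sched_entity *entity,
		     void *owner)
{
	int r;

	r = drm_sched_job_init(&job->base, entity, owner);
	if (r)		/* e.g. -ENOENT: no ready scheduler for this entity */
		return r;

	/* Real drivers set up fences here before pushing the job. */
	drm_sched_entity_push_job(&job->base, entity);
	return 0;
}
```

The design point is that the failure surfaces at job init time, before any work is queued, rather than after a job has been handed to a scheduler whose HW can no longer run it.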