author | Rob Clark <robdclark@chromium.org> | 2021-07-27 18:06:12 -0700
---|---|---
committer | Rob Clark <robdclark@chromium.org> | 2021-07-27 18:09:18 -0700
commit | a61acbbe9cf873f869fc634ae6f72f214f5994cc (patch) |
tree | 1c4533b31c383fda8359250d5a967cccdec8d98e /drivers/gpu/drm/msm/msm_submitqueue.c |
parent | be40596bb5cf20cf9eaeddeeb57de7c4f570c886 (diff) |
drm/msm: Track "seqno" fences by idr
Previously the (non-fd) fence returned from the submit ioctl was a raw
seqno, which is scoped to the ring. But from a UABI standpoint, the
ioctls related to seqno fences all specify a submitqueue. We can
take advantage of that to replace the seqno fences with a cyclic idr
handle.
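
The commit message only names the technique, so here is a minimal, hypothetical sketch of how a per-queue cyclic idr can hand out fence handles. `struct example_submitqueue`, `example_fence_alloc()` and `example_fence_lookup()` are invented for illustration and are not the functions this patch adds:

```c
/*
 * Illustrative sketch only: the names below are hypothetical, not the
 * msm code. The idea: ids handed back to userspace are scoped to the
 * submitqueue, and map back to dma_fence pointers via a cyclic idr.
 */
#include <linux/idr.h>
#include <linux/mutex.h>
#include <linux/limits.h>
#include <linux/dma-fence.h>

struct example_submitqueue {
	struct mutex lock;		/* protects fence_idr */
	struct idr fence_idr;		/* queue-scoped fence id -> dma_fence */
};

/* Hand out a queue-scoped handle for a fence; returns the id or -errno. */
static int example_fence_alloc(struct example_submitqueue *queue,
			       struct dma_fence *fence)
{
	int id;

	mutex_lock(&queue->lock);
	/* Cyclic allocation avoids immediately reusing recently freed ids. */
	id = idr_alloc_cyclic(&queue->fence_idr, fence, 1, INT_MAX, GFP_KERNEL);
	mutex_unlock(&queue->lock);

	return id;
}

/* Resolve a handle back to a fence, taking a reference for the caller. */
static struct dma_fence *example_fence_lookup(struct example_submitqueue *queue,
					      u32 id)
{
	struct dma_fence *fence;

	mutex_lock(&queue->lock);
	fence = idr_find(&queue->fence_idr, id);
	if (fence)
		dma_fence_get(fence);
	mutex_unlock(&queue->lock);

	return fence;
}
```

Cyclic allocation matters here because userspace can hang on to a fence id after the fence has long since signalled; cycling through the id space makes a stale handle fail the lookup rather than silently aliasing a newer fence.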
This is in preparation for moving to the drm scheduler, at which point
the submit ioctl will return after queuing the submit job to the
scheduler, but before the submit is written into the ring (and
therefore before a ring seqno has been assigned), which means we need
to replace the dma_fence that userspace may need to wait on with a
scheduler fence.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-8-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
Diffstat (limited to 'drivers/gpu/drm/msm/msm_submitqueue.c')
-rw-r--r-- | drivers/gpu/drm/msm/msm_submitqueue.c | 5 |
1 file changed, 5 insertions, 0 deletions
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 9e9fec61d629..66f8d0fb38b0 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -12,6 +12,8 @@ void msm_submitqueue_destroy(struct kref *kref)
 	struct msm_gpu_submitqueue *queue = container_of(kref,
 			struct msm_gpu_submitqueue, ref);
 
+	idr_destroy(&queue->fence_idr);
+
 	msm_file_private_put(queue->ctx);
 
 	kfree(queue);
@@ -89,6 +91,9 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
 	if (id)
 		*id = queue->id;
 
+	idr_init(&queue->fence_idr);
+	mutex_init(&queue->lock);
+
 	list_add_tail(&queue->node, &ctx->submitqueues);
 
 	write_unlock(&ctx->queuelock);
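
The hunks above only wire up the lifecycle: the idr and its lock are initialized when the submitqueue is created, and the idr is torn down in the final kref release. The fields themselves are presumably added to `struct msm_gpu_submitqueue` in a header outside this diffstat (which is limited to msm_submitqueue.c). A minimal sketch of what those two members look like, with everything else elided:

```c
/*
 * Hedged sketch: the real struct msm_gpu_submitqueue lives elsewhere
 * (msm_gpu.h) and has many more members; only the two that the hunks
 * above initialize and destroy are shown here.
 */
#include <linux/idr.h>
#include <linux/mutex.h>

struct msm_gpu_submitqueue_sketch {
	/* ... other members elided ... */
	struct idr fence_idr;	/* queue-scoped fence id -> dma_fence */
	struct mutex lock;	/* serializes access to fence_idr */
};
```

Worth noting: idr_destroy() only frees the idr's internal bookkeeping and does not drop whatever pointers are still stored in it, so by the time the last reference to the queue is put, any outstanding fence ids are presumably expected to have been removed already.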