author	Matthew Brost <matthew.brost@intel.com>	2023-09-14 13:40:50 -0700
committer	Rodrigo Vivi <rodrigo.vivi@intel.com>	2023-12-21 11:43:17 -0500
commit	f3e9b1f43458746e7e0211dbe4289412e5c0d16a (patch)
tree	940d985a2beae34e33f367c66075fde054ccb25a /drivers/gpu/drm/xe/xe_exec.c
parent	b21ae51dcf41ce12bb8e2a7c989863ee9d04ae4b (diff)
drm/xe: Remove async worker and rework sync binds
The async worker is gone. All jobs and memory allocations are now done in
the IOCTL to align with dma-fencing rules.
Async vs. sync now refers to when bind operations complete relative to the
IOCTL: async binds complete when their out-syncs signal, while sync binds
complete when the IOCTL returns. In-syncs and out-syncs are only allowed in
async mode.
If a memory allocation fails in the job-creation step, the VM is killed.
This is temporary; eventually a proper unwind will be done and the VM will
remain usable.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Diffstat (limited to 'drivers/gpu/drm/xe/xe_exec.c')
-rw-r--r--	drivers/gpu/drm/xe/xe_exec.c	43
1 file changed, 0 insertions(+), 43 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
index 7cf4215b2b2e..85a8a793f527 100644
--- a/drivers/gpu/drm/xe/xe_exec.c
+++ b/drivers/gpu/drm/xe/xe_exec.c
@@ -196,27 +196,6 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 		}
 	}
 
-	/*
-	 * We can't install a job into the VM dma-resv shared slot before an
-	 * async VM bind passed in as a fence without the risk of deadlocking as
-	 * the bind can trigger an eviction which in turn depends on anything in
-	 * the VM dma-resv shared slots. Not an ideal solution, but we wait for
-	 * all dependent async VM binds to start (install correct fences into
-	 * dma-resv slots) before moving forward.
-	 */
-	if (!xe_vm_no_dma_fences(vm) &&
-	    vm->flags & XE_VM_FLAG_ASYNC_BIND_OPS) {
-		for (i = 0; i < args->num_syncs; i++) {
-			struct dma_fence *fence = syncs[i].fence;
-
-			if (fence) {
-				err = xe_vm_async_fence_wait_start(fence);
-				if (err)
-					goto err_syncs;
-			}
-		}
-	}
-
 retry:
 	if (!xe_vm_no_dma_fences(vm) && xe_vm_userptr_check_repin(vm)) {
 		err = down_write_killable(&vm->lock);
@@ -229,28 +208,6 @@ retry:
 	if (err)
 		goto err_syncs;
 
-	/* We don't allow execs while the VM is in error state */
-	if (vm->async_ops.error) {
-		err = vm->async_ops.error;
-		goto err_unlock_list;
-	}
-
-	/*
-	 * Extreme corner where we exit a VM error state with a munmap style VM
-	 * unbind inflight which requires a rebind. In this case the rebind
-	 * needs to install some fences into the dma-resv slots. The worker to
-	 * do this queued, let that worker make progress by dropping vm->lock,
-	 * flushing the worker and retrying the exec.
-	 */
-	if (vm->async_ops.munmap_rebind_inflight) {
-		if (write_locked)
-			up_write(&vm->lock);
-		else
-			up_read(&vm->lock);
-		flush_work(&vm->async_ops.work);
-		goto retry;
-	}
-
 	if (write_locked) {
 		err = xe_vm_userptr_pin(vm);
 		downgrade_write(&vm->lock);