author | Jens Axboe <axboe@kernel.dk> | 2020-06-17 15:00:04 -0600 |
---|---|---|
committer | Jens Axboe <axboe@kernel.dk> | 2020-06-17 15:05:08 -0600 |
commit | 56952e91acc93ed624fe9da840900defb75f1323 (patch) | |
tree | 0e3cbc16f147b9fdd243a296b2d139df000bf2cc /fs/io_uring.c | |
parent | 9d8426a09195e2dcf2aa249de2aaadd792d491c7 (diff) | |
io_uring: reap poll completions while waiting for refs to drop on exit
If we're doing polled IO and requests end up being submitted async
(out-of-line), then completions can come in while we're waiting for
refs to drop. We need to reap these manually, as nobody else will be
looking for them.
Break the wait into timed waits of 1/20th of a second, and check for
completed poll requests whenever the wait times out. Otherwise completed
poll requests can sit in ctx->poll_list waiting to be reaped by us,
while we in turn just sit waiting for their references to drop.
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'fs/io_uring.c')
-rw-r--r-- | fs/io_uring.c | 12 |
1 file changed, 11 insertions, 1 deletion
```diff
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 98c83fbf4f88..2038d52c5450 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7363,7 +7363,17 @@ static void io_ring_exit_work(struct work_struct *work)
 	if (ctx->rings)
 		io_cqring_overflow_flush(ctx, true);
 
-	wait_for_completion(&ctx->ref_comp);
+	/*
+	 * If we're doing polled IO and end up having requests being
+	 * submitted async (out-of-line), then completions can come in while
+	 * we're waiting for refs to drop. We need to reap these manually,
+	 * as nobody else will be looking for them.
+	 */
+	while (!wait_for_completion_timeout(&ctx->ref_comp, HZ/20)) {
+		io_iopoll_reap_events(ctx);
+		if (ctx->rings)
+			io_cqring_overflow_flush(ctx, true);
+	}
 	io_ring_ctx_free(ctx);
 }
```
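The loop condition works because wait_for_completion_timeout() returns 0 when the timeout expires and a non-zero jiffies count once the completion fires, so the reap/flush body runs only on timeouts and the loop exits as soon as the final reference is dropped. HZ/20 is one-twentieth of a second expressed in jiffies. Below is a minimal sketch (not part of the commit) of that timed-wait-and-reap pattern; wait_with_periodic_reap() and the do_reap callback are hypothetical stand-ins for the io_iopoll_reap_events()/io_cqring_overflow_flush() calls in the patch.

```c
#include <linux/completion.h>
#include <linux/jiffies.h>

/*
 * Sketch: wait for 'done' to complete, but every HZ/20 jiffies
 * (1/20th of a second) run a periodic reap callback so completed
 * work doesn't sit unreaped while we block.
 */
static void wait_with_periodic_reap(struct completion *done,
				    void (*do_reap)(void *arg),
				    void *arg)
{
	/*
	 * wait_for_completion_timeout() returns 0 on timeout and a
	 * non-zero remaining-jiffies count once 'done' completes, so
	 * the loop body only runs when the wait times out.
	 */
	while (!wait_for_completion_timeout(done, HZ / 20))
		do_reap(arg);
}
```

The 1/20th-second interval keeps the exit path responsive to stranded poll completions without turning the wait into a busy loop.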