author | Jens Axboe <axboe@kernel.dk> | 2024-01-30 07:04:52 -0700
committer | Jens Axboe <axboe@kernel.dk> | 2024-02-08 13:27:06 -0700
commit | 170539bdf1094e6e43e9aa86bf2dcaff0857df41 (patch)
tree | 8d700ed231a911786b52fa8baed69a2fcb1ea643 /io_uring
parent | 4c98b89175a229a1eb9e6db67b4b7c0d712c86a7 (diff)
io_uring: handle traditional task_work in FIFO order
For local task_work, which is used if IORING_SETUP_DEFER_TASKRUN is set,
we reverse the order of the lockless list before processing the work.
This is done to process items in the order in which they were queued, as
the llist always adds to the head.
Do the same for traditional task_work, so we have the same behavior for
both types.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
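
To illustrate why the reversal gives FIFO processing, here is a minimal standalone userspace sketch, not the kernel's llist API: the node type and the push()/reverse() helpers are invented for illustration, standing in for llist_add(), llist_del_all() and llist_reverse_order(). Producers push to the head of a singly-linked list, so the detached list comes back newest-first, and the consumer reverses it once before walking it so items are handled in the order they were queued.

```c
/*
 * Illustration only (not kernel code): head-push produces LIFO order,
 * so the consumer reverses the detached list before processing it.
 */
#include <stdio.h>
#include <stdlib.h>

struct node {
	int id;
	struct node *next;
};

/* Producer side: push to the head, the way llist_add() stores entries. */
static void push(struct node **head, struct node *n)
{
	n->next = *head;
	*head = n;
}

/* Consumer side: reverse the detached list so the oldest item comes first. */
static struct node *reverse(struct node *head)
{
	struct node *prev = NULL;

	while (head) {
		struct node *next = head->next;

		head->next = prev;
		prev = head;
		head = next;
	}
	return prev;
}

int main(void)
{
	struct node *head = NULL, *n;
	int i;

	/* Queue items 0..3; pushing to the head stores them newest-first. */
	for (i = 0; i < 4; i++) {
		n = malloc(sizeof(*n));
		n->id = i;
		push(&head, n);
	}

	/* Reverse once, then process in the order the items were queued. */
	for (n = reverse(head); n; ) {
		struct node *next = n->next;

		printf("processing item %d\n", n->id);	/* prints 0, 1, 2, 3 */
		free(n);
		n = next;
	}
	return 0;
}
```

The design trade-off is that the producer path stays a cheap, lock-free update of the head pointer (no tail to maintain), while the consumer pays a single O(n) reversal per detached batch to restore queue order.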
Diffstat (limited to 'io_uring')
-rw-r--r-- | io_uring/io_uring.c | 2
1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index b1892cdd25c6..4678d9ec810b 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1266,7 +1266,7 @@ void tctx_task_work(struct callback_head *cb)
 	node = llist_del_all(&tctx->task_list);
 	if (node)
-		count = handle_tw_list(node, &ctx, &ts);
+		count = handle_tw_list(llist_reverse_order(node), &ctx, &ts);
 	ctx_flush_and_put(ctx, &ts);
```