author     Ming Lei <ming.lei@redhat.com>              2024-05-15 09:31:57 +0800
committer  Jens Axboe <axboe@kernel.dk>                2024-05-15 20:14:20 -0600
commit     d0aac2363549e12cc79b8e285f13d5a9f42fd08e (patch)
tree       454260b00912e2dbb2c92e7214affd3d48975d28 /block/blk-cgroup.c
parent     6da6680632792709cecf2b006f2fe3ca7857e791 (diff)
blk-cgroup: fix list corruption from reorder of WRITE ->lqueued
__blkcg_rstat_flush() can run at any time, in particular while
blk_cgroup_bio_start() is executing.

If the WRITE of `bisc->lqueued` is reordered before the READ of
`bisc->lnode.next` in the loop of __blkcg_rstat_flush(), `next_bisc` can be
assigned to a stat instance that is concurrently being added in
blk_cgroup_bio_start(), and the local list built in __blkcg_rstat_flush() can
then be corrupted.

Fix the issue by adding a memory barrier.
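
For context, a simplified sketch of the two racing paths (paraphrased from
block/blk-cgroup.c; declarations, locking and unrelated statements are
trimmed, so this is not the upstream code verbatim):

        /* flusher side: __blkcg_rstat_flush(), may run on any CPU */
        lnode = llist_del_all(lhead);           /* detach the per-cpu llist */

        llist_for_each_entry_safe(bisc, next_bisc, lnode, lnode) {
                /*
                 * next_bisc is loaded from bisc->lnode.next at the top of
                 * each iteration; without a barrier the store below may be
                 * reordered before that load.
                 */
                smp_mb();                       /* barrier added by this patch */
                WRITE_ONCE(bisc->lqueued, false);
                ...
        }

        /* producer side: blk_cgroup_bio_start(), possibly on another CPU */
        if (!READ_ONCE(bis->lqueued)) {
                /* re-queueing rewrites bis->lnode.next via llist_add() */
                llist_add(&bis->lnode, this_cpu_ptr(blkcg->lhead));
                WRITE_ONCE(bis->lqueued, true);
        }

If the flusher's clear of ->lqueued becomes visible before its read of
bisc->lnode.next, the producer can observe lqueued == false and re-add the
same bisc to the live per-cpu llist; the flusher then follows a next pointer
that leads back into the live list, corrupting the detached local list it is
iterating. The smp_mb() keeps the read of lnode.next ahead of the clear; the
pairing barrier on the producer side is implied by llist_add(), whose cmpxchg
is fully ordered.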
Cc: Tejun Heo <tj@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Fixes: 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20240515013157.443672-3-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/blk-cgroup.c')
-rw-r--r--  block/blk-cgroup.c | 10
1 file changed, 10 insertions, 0 deletions
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 8699f193cf31..52367a4501d0 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1035,6 +1035,16 @@ static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu)
 		struct blkg_iostat cur;
 		unsigned int seq;
 
+		/*
+		 * Order assignment of `next_bisc` from `bisc->lnode.next` in
+		 * llist_for_each_entry_safe and clearing `bisc->lqueued` for
+		 * avoiding to assign `next_bisc` with new next pointer added
+		 * in blk_cgroup_bio_start() in case of re-ordering.
+		 *
+		 * The pair barrier is implied in llist_add() in blk_cgroup_bio_start().
+		 */
+		smp_mb();
+
 		WRITE_ONCE(bisc->lqueued, false);
 
 		/* fetch the current per-cpu values */