author     Uday Shankar <ushankar@purestorage.com>    2023-02-28 17:06:55 -0700
committer  Jens Axboe <axboe@kernel.dk>               2023-03-02 21:00:20 -0700
commit     49d24398327e32265eccdeec4baeb5a6a609c0bd (patch)
tree       eac5516659021cb104b90d7b398f18843ff84d8a /block/blk.h
parent     326ac2c5133e5da7ccdd08d4f9c562f2323021aa (diff)
blk-mq: enforce op-specific segment limits in blk_insert_cloned_request
The block layer might merge together discard requests up until the
max_discard_segments limit is hit, but blk_insert_cloned_request checks
the segment count against max_segments regardless of the req op. This
can result in errors like the following when discards are issued through
a DM device and max_discard_segments exceeds max_segments for the queue
of the chosen underlying device.
blk_insert_cloned_request: over max segments limit. (256 > 129)
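For reference, the over-strict check lives in blk_insert_cloned_request()
in block/blk-mq.c, which compares every cloned request against the
queue-wide max_segments limit. A paraphrased sketch of the pre-fix check
(the variable names and format string are approximations, not verbatim
source):

    if (rq->nr_phys_segments > queue_max_segments(rq->q)) {
            /* Trips for a merged discard whose segment count is legal
             * under max_discard_segments but exceeds max_segments. */
            printk(KERN_ERR "%s: over max segments limit. (%u > %u)\n",
                   __func__, rq->nr_phys_segments,
                   queue_max_segments(rq->q));
            return BLK_STS_IOERR;
    }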
Fix this by looking at the req_op and enforcing the appropriate segment
limit: max_discard_segments for REQ_OP_DISCARD requests and max_segments
for everything else.
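With the blk_rq_get_max_segments() helper added to block/blk.h (see the
diff below), the caller-side check can pick the limit by op. A sketch of
the intended use in blk_insert_cloned_request(); the blk-mq.c side of the
change is outside this diffstat, so the exact code here is an assumption:

    unsigned int max_segments = blk_rq_get_max_segments(rq);

    if (rq->nr_phys_segments > max_segments) {
            printk(KERN_ERR "%s: over max segments limit. (%u > %u)\n",
                   __func__, rq->nr_phys_segments, max_segments);
            return BLK_STS_IOERR;
    }

For REQ_OP_DISCARD requests this compares against max_discard_segments,
so a discard merged up to 256 segments (as in the example above) no
longer trips the stricter max_segments limit of 129.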
Signed-off-by: Uday Shankar <ushankar@purestorage.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20230301000655.48112-1-ushankar@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/blk.h')
-rw-r--r--  block/blk.h | 7 +++++++
1 file changed, 7 insertions, 0 deletions
diff --git a/block/blk.h b/block/blk.h
index e835f21d48af..cc4e8873dfde 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -156,6 +156,13 @@ static inline bool blk_discard_mergable(struct request *req)
 	return false;
 }
 
+static inline unsigned int blk_rq_get_max_segments(struct request *rq)
+{
+	if (req_op(rq) == REQ_OP_DISCARD)
+		return queue_max_discard_segments(rq->q);
+	return queue_max_segments(rq->q);
+}
+
 static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
 						     enum req_op op)
 {