author		Jens Axboe <axboe@kernel.dk>	2021-10-05 09:23:59 -0600
committer	Jens Axboe <axboe@kernel.dk>	2021-10-18 06:17:03 -0600
commit		0006707723233cb2a9a23ca19fc3d0864835704c (patch)
tree		6bbd7e9c8b99456d8d993b38aacbc2e07129e6dd /block/blk-mq.c
parent		a7b36ee6ba299ffa5c3b36af187b4d0fb32a557c (diff)
block: inherit request start time from bio for BLK_CGROUP
Doing high IOPS testing with blk-cgroups enabled spends ~15-20% of the
time just doing ktime_get_ns() -> readtsc. We essentially read and
set the start time twice, once for the bio and then again when that bio
is mapped to a request.
Given that the time between the two is very short, inherit the bio
start time instead of reading it again. This cuts the timekeeping
overhead by a third.
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
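The idea generalizes beyond the block layer: if a timestamp was already taken when the work was submitted, the dispatch path can inherit it instead of reading the clock a second time. Below is a minimal user-space sketch of that pattern; it is illustrative only (struct work, submit_work() and start_work() are hypothetical names, not kernel interfaces).

```c
/*
 * Minimal sketch of the "inherit the start time" pattern, in user space.
 * Illustrative only: struct work, submit_work() and start_work() are
 * hypothetical names, not kernel interfaces.
 */
#include <stdint.h>
#include <time.h>

struct work {
	uint64_t issue_time_ns;		/* recorded once at submit time, 0 if unset */
};

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

/* Submit path: the only clock read on the fast path. */
static void submit_work(struct work *w)
{
	w->issue_time_ns = now_ns();
}

/* Dispatch path: inherit the submit-time stamp; read the clock only as a fallback. */
static uint64_t start_work(const struct work *w)
{
	return w->issue_time_ns ? w->issue_time_ns : now_ns();
}

int main(void)
{
	struct work w = { 0 };

	submit_work(&w);
	/* ... queueing delay between submit and dispatch ... */
	return start_work(&w) ? 0 : 1;
}
```

The fallback in start_work() mirrors the #ifdef CONFIG_BLK_CGROUP branch in the patch below: when no submit-time stamp is available, the clock is read as before.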
Diffstat (limited to 'block/blk-mq.c')
-rw-r--r--	block/blk-mq.c	9
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index f7428e11b109..8e41d88fcb8a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -725,7 +725,14 @@ void blk_mq_start_request(struct request *rq)
 	trace_block_rq_issue(rq);
 
 	if (test_bit(QUEUE_FLAG_STATS, &q->queue_flags)) {
-		rq->io_start_time_ns = ktime_get_ns();
+		u64 start_time;
+#ifdef CONFIG_BLK_CGROUP
+		if (rq->bio)
+			start_time = bio_issue_time(&rq->bio->bi_issue);
+		else
+#endif
+			start_time = ktime_get_ns();
+		rq->io_start_time_ns = start_time;
 		rq->stats_sectors = blk_rq_sectors(rq);
 		rq->rq_flags |= RQF_STATS;
 		rq_qos_issue(q, rq);