| author | Peter Zijlstra <peterz@infradead.org> | 2023-09-26 14:29:50 +0200 |
|---|---|---|
| committer | Peter Zijlstra <peterz@infradead.org> | 2023-10-03 12:32:29 +0200 |
| commit | 650cad561cce04b62a8c8e0446b685ef171bc3bb (patch) | |
| tree | 421042af043a1dbb19bf534076439ebe958f86b0 /kernel | |
| parent | 2f2fc17bab0011430ceb6f2dc1959e7d1f981444 (diff) | |
sched/eevdf: Fix avg_vruntime()
The expectation is that placing a task at avg_vruntime() makes it
eligible. It turns out there is a corner case where this does not hold.

Specifically, avg_vruntime() relies on the fact that integer division
is a flooring function (i.e. it discards the remainder). By this
property the value returned is slightly left of the true average.

However, when the average is negative (relative to min_vruntime) the
effect is flipped and the division becomes a ceiling, with the result
that the returned value is just right of the average and thus not
eligible.
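A minimal user-space sketch of the rounding behaviour described above (editorial illustration, not part of the patch; div_floor is a hypothetical helper). C's `/` truncates toward zero, matching div_s64(), so a negative dividend is effectively rounded up; pre-biasing it by -(divisor - 1) restores the floor:

```c
#include <stdio.h>

/* Floor a signed division by pre-biasing negative dividends, the same
 * adjustment the fix applies before calling div_s64(). */
static long long div_floor(long long a, long long b)
{
	if (a < 0)
		a -= (b - 1);
	return a / b;		/* truncating division, as in C and div_s64() */
}

int main(void)
{
	long long avg = -3, load = 2;

	/* true average is -1.5; -2 is left of it, -1 is right of it */
	printf("truncated: %lld\n", avg / load);           /* -1: right of the average */
	printf("floored:   %lld\n", div_floor(avg, load)); /* -2: left of the average  */
	return 0;
}
```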
Fixes: af4cf40470c2 ("sched/fair: Add cfs_rq::avg_vruntime")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Diffstat (limited to 'kernel')
| -rw-r--r-- | kernel/sched/fair.c | 10 |
1 file changed, 9 insertions, 1 deletion
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7d73652acbb2..ef7490c4b8b4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -664,6 +664,10 @@ void avg_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)
 	cfs_rq->avg_vruntime -= cfs_rq->avg_load * delta;
 }
 
+/*
+ * Specifically: avg_runtime() + 0 must result in entity_eligible() := true
+ * For this to be so, the result of this function must have a left bias.
+ */
 u64 avg_vruntime(struct cfs_rq *cfs_rq)
 {
 	struct sched_entity *curr = cfs_rq->curr;
@@ -677,8 +681,12 @@ u64 avg_vruntime(struct cfs_rq *cfs_rq)
 		load += weight;
 	}
 
-	if (load)
+	if (load) {
+		/* sign flips effective floor / ceil */
+		if (avg < 0)
+			avg -= (load - 1);
 		avg = div_s64(avg, load);
+	}
 
 	return cfs_rq->min_vruntime + avg;
 }
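For illustration only, a self-contained user-space sketch of the eligibility argument (an assumed, simplified model: `struct entity` and `eligible()` are hypothetical stand-ins, not the kernel API). An entity with key v is treated as eligible when v * W <= Sum(w_i * v_i), i.e. it sits at or left of the weighted average:

```c
#include <stdio.h>
#include <stdbool.h>

/* key = vruntime - min_vruntime (can be negative) */
struct entity { long long key; long long weight; };

/* Eligible iff the candidate key lies at or left of the weighted average. */
static bool eligible(long long key, long long sum, long long load)
{
	return key * load <= sum;
}

int main(void)
{
	struct entity se[] = { { -5, 1 }, { 2, 1 } };	/* weighted sum = -3, load = 2 */
	long long sum = 0, load = 0;

	for (unsigned i = 0; i < 2; i++) {
		sum  += se[i].key * se[i].weight;
		load += se[i].weight;
	}

	long long avg_trunc = sum / load;			 /* -1: pre-fix rounding */
	long long avg_floor = (sum < 0 ? sum - (load - 1) : sum) / load; /* -2: fixed  */

	printf("truncated avg %lld eligible: %d\n", avg_trunc, eligible(avg_trunc, sum, load));
	printf("floored   avg %lld eligible: %d\n", avg_floor, eligible(avg_floor, sum, load));
	return 0;
}
```

With a weighted sum of -3 over a load of 2, the truncated average -1 fails the check (-1 * 2 > -3) while the floored average -2 passes (-2 * 2 <= -3), which is the corner case the patch closes.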