author | Gregory Haskins <ghaskins@novell.com> | 2008-12-29 09:39:50 -0500 |
---|---|---|
committer | Gregory Haskins <ghaskins@novell.com> | 2008-12-29 09:39:50 -0500 |
commit | 74ab8e4f6412c0b2d730fe5de28dc21de8b92c01 (patch) | |
tree | c1bce6a8e23fa58677de23989fa81bc1fcfc0118 | |
parent | a8728944efe23417e38bf22063f06d9d8ee21d59 (diff) | |
sched: use highest_prio.next to optimize pull operations
We currently take the rq->lock for every cpu in an overload state during
pull_rt_task(). However, we now have enough information via the
highest_prio.[curr|next] fields to determine whether there are any tasks of
interest to warrant the overhead of the rq->lock before we actually take
it. So we use this information to reduce lock contention during the
pull for the case where the source-rq doesn't have tasks that preempt
the current task.
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
-rw-r--r-- | kernel/sched_rt.c | 12 |
1 file changed, 12 insertions, 0 deletions
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index f8fb3edadcaa..d047f288c411 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -1218,6 +1218,18 @@ static int pull_rt_task(struct rq *this_rq)
 			continue;
 
 		src_rq = cpu_rq(cpu);
+
+		/*
+		 * Don't bother taking the src_rq->lock if the next highest
+		 * task is known to be lower-priority than our current task.
+		 * This may look racy, but if this value is about to go
+		 * logically higher, the src_rq will push this task away.
+		 * And if its going logically lower, we do not care
+		 */
+		if (src_rq->rt.highest_prio.next >=
+		    this_rq->rt.highest_prio.curr)
+			continue;
+
 		/*
 		 * We can potentially drop this_rq's lock in
 		 * double_lock_balance, and another CPU could
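
For readers outside the kernel tree, here is a minimal user-space sketch of the same idea: each queue publishes the priority of its next-highest task so a remote CPU can decide, without taking the lock, whether a pull could possibly find anything worth stealing. The `struct fake_rq`, its fields, and `try_pull()` are illustrative stand-ins, not the kernel's `struct rq` or `pull_rt_task()`; as in the rt scheduler, a lower number means higher priority.

```c
#include <stdio.h>
#include <pthread.h>

/* Illustrative stand-in for a runqueue; lower number = higher priority. */
struct fake_rq {
	pthread_mutex_t lock;
	int highest_prio_curr;	/* priority of the currently running task */
	int highest_prio_next;	/* priority of the next-highest queued task */
};

/* Returns 1 if we would pull from src, 0 if the cheap check skips it. */
static int try_pull(struct fake_rq *this_rq, struct fake_rq *src)
{
	/*
	 * Lock-free filter: if src's next-highest task cannot preempt
	 * what this_rq is running, taking src->lock is pure contention.
	 * A stale read is tolerable: if the value is about to get
	 * better, the other side will push the task to us anyway.
	 */
	if (src->highest_prio_next >= this_rq->highest_prio_curr)
		return 0;

	pthread_mutex_lock(&src->lock);
	/* ... re-check under the lock and actually move the task ... */
	pthread_mutex_unlock(&src->lock);
	return 1;
}

int main(void)
{
	struct fake_rq a = { PTHREAD_MUTEX_INITIALIZER, 10, 50 };
	struct fake_rq b = { PTHREAD_MUTEX_INITIALIZER, 20,  5 };

	printf("pull from b into a: %d\n", try_pull(&a, &b)); /* 5 < 10  -> pull */
	printf("pull from a into b: %d\n", try_pull(&b, &a)); /* 50 >= 20 -> skip */
	return 0;
}
```

As in the patch, the unlocked comparison only skips the clearly hopeless case; any real work is still re-validated under the lock, which is why the apparent race noted in the added comment is harmless.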