From 96611c26dc351c33f73b48756a9feacc109e5bab Mon Sep 17 00:00:00 2001
From: Peter Zijlstra
Date: Mon, 18 Oct 2021 16:41:05 +0200
Subject: sched: Improve wake_up_all_idle_cpus() take #2

As reported by syzbot and experienced by Pavel, using cpus_read_lock()
in wake_up_all_idle_cpus() generates lock inversion (against mmap_sem
and possibly others).

Instead, shrink the preempt disable region by iterating all CPUs and
checking the online status for each individual CPU while having
preemption disabled.

Fixes: 8850cb663b5c ("sched: Simplify wake_up_*idle*()")
Reported-by: syzbot+d5b23b18d2f4feae8a67@syzkaller.appspotmail.com
Reported-by: Pavel Machek
Reported-by: Qian Cai
Signed-off-by: Peter Zijlstra (Intel)
Tested-by: Qian Cai
---
 kernel/smp.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index ad0b68a3a3d3..01a7c1706a58 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -1170,14 +1170,12 @@ void wake_up_all_idle_cpus(void)
 {
 	int cpu;
 
-	cpus_read_lock();
-	for_each_online_cpu(cpu) {
-		if (cpu == raw_smp_processor_id())
-			continue;
-
-		wake_up_if_idle(cpu);
+	for_each_possible_cpu(cpu) {
+		preempt_disable();
+		if (cpu != smp_processor_id() && cpu_online(cpu))
+			wake_up_if_idle(cpu);
+		preempt_enable();
 	}
-	cpus_read_unlock();
 }
 EXPORT_SYMBOL_GPL(wake_up_all_idle_cpus);
 
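For reference, a sketch of wake_up_all_idle_cpus() as it reads with the hunk
above applied. This is reconstructed from the diff, not copied verbatim from
the tree; the comments are added here and the surrounding kernel/smp.c context
is assumed:

void wake_up_all_idle_cpus(void)
{
	int cpu;

	/* Walk every possible CPU; the online check is now done per CPU. */
	for_each_possible_cpu(cpu) {
		/*
		 * Preemption is disabled only around the per-CPU check and
		 * wakeup, instead of holding cpus_read_lock() across the
		 * whole loop, which is what produced the reported lock
		 * inversion. With preemption off, smp_processor_id() is
		 * stable, so the current CPU can be skipped safely.
		 */
		preempt_disable();
		if (cpu != smp_processor_id() && cpu_online(cpu))
			wake_up_if_idle(cpu);
		preempt_enable();
	}
}
EXPORT_SYMBOL_GPL(wake_up_all_idle_cpus);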