author | Michael Ellerman <mpe@ellerman.id.au> | 2023-04-07 00:45:33 +1000
---|---|---
committer | Michael Ellerman <mpe@ellerman.id.au> | 2023-04-20 13:21:49 +1000
commit | 6fee130204650515af80c2786176da0fe7e94482 |
tree | 5aab397122c561508d189383ad27a545b821b53d |
parent | 7640854d966449e5befeff02c45c799cfc3d4fcf |
powerpc/64: Don't call trace_hardirqs_on() in prep_irq_for_idle()
Since commit a01353cf1896 ("cpuidle: Fix ct_idle_*() usage"), the
cpuidle entry code calls trace_hardirqs_on() (actually
trace_hardirqs_on_prepare()) in ct_cpuidle_enter() before calling into
the cpuidle driver. That makes the trace_hardirqs_on() call in
prep_irq_for_idle() redundant, so drop it and update the function's
comment, which no longer needs to mention notifying lockdep.
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230406144535.3786008-2-mpe@ellerman.id.au
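To make the ordering concrete, here is a minimal, standalone C sketch of the flow the commit message describes: the generic idle entry path (ct_cpuidle_enter()) now performs the "IRQs on" tracing before control reaches the cpuidle driver and, through it, prep_irq_for_idle(). Only ct_cpuidle_enter(), trace_hardirqs_on_prepare() and prep_irq_for_idle() are names taken from the commit message; cpuidle_enter_path() and powerpc_cpuidle_driver_enter() are hypothetical stand-ins, and every body is a print-statement stub rather than real kernel code.

```c
#include <stdbool.h>
#include <stdio.h>

/* Stub for the tracing hook the generic idle entry code now calls. */
static void trace_hardirqs_on_prepare(void)
{
	puts("tracing: hardirqs about to be enabled (generic idle entry)");
}

/* Stub for the powerpc helper touched by this patch. */
static bool prep_irq_for_idle(void)
{
	/* After this patch there is no trace_hardirqs_on() call here. */
	puts("powerpc: sanitize lazy irq state, decide whether to idle");
	return true;
}

/* Hypothetical stand-in for a powerpc cpuidle driver's enter path. */
static void powerpc_cpuidle_driver_enter(void)
{
	if (!prep_irq_for_idle())
		return;
	puts("powerpc: enter low power state");
}

/* Stub for the generic entry helper introduced by commit a01353cf1896. */
static void ct_cpuidle_enter(void)
{
	trace_hardirqs_on_prepare();
}

/* Hypothetical stand-in for the cpuidle core's enter sequence. */
static void cpuidle_enter_path(void)
{
	ct_cpuidle_enter();             /* tracing is handled here ...               */
	powerpc_cpuidle_driver_enter(); /* ... before the driver (and the helper) run */
}

int main(void)
{
	cpuidle_enter_path();
	return 0;
}
```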
-rw-r--r-- | arch/powerpc/kernel/irq_64.c | 8
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kernel/irq_64.c b/arch/powerpc/kernel/irq_64.c
index 2ab0e8d84c1d..938e66829eae 100644
--- a/arch/powerpc/kernel/irq_64.c
+++ b/arch/powerpc/kernel/irq_64.c
@@ -348,9 +348,8 @@ EXPORT_SYMBOL(arch_local_irq_restore);
  * already the case when ppc_md.power_save is called). The function
  * will return whether to enter power save or just return.
  *
- * In the former case, it will have notified lockdep of interrupts
- * being re-enabled and generally sanitized the lazy irq state,
- * and in the latter case it will leave with interrupts hard
+ * In the former case, it will have generally sanitized the lazy irq
+ * state, and in the latter case it will leave with interrupts hard
  * disabled and marked as such, so the local_irq_enable() call
  * in arch_cpu_idle() will properly re-enable everything.
  */
@@ -370,9 +369,6 @@ __cpuidle bool prep_irq_for_idle(void)
 	if (lazy_irq_pending())
 		return false;
 
-	/* Tell lockdep we are about to re-enable */
-	trace_hardirqs_on();
-
 	/*
 	 * Mark interrupts as soft-enabled and clear the
 	 * PACA_IRQ_HARD_DIS from the pending mask since we