author     Andrea Parri <parri.andrea@gmail.com>    2024-01-31 15:49:33 +0100
committer  Palmer Dabbelt <palmer@rivosinc.com>     2024-02-15 08:04:11 -0800
commit     d6cfd1770f20392d7009ae1fdb04733794514fa9
tree       c2cd9f5a579cd13931cc7e78536e4211aa91eae0 /kernel
parent     6613476e225e090cc9aad49be7fa504e290dd33d
membarrier: riscv: Add full memory barrier in switch_mm()
The membarrier system call requires a full memory barrier after storing
to rq->curr, before going back to user-space. The barrier is only
needed when switching between processes: the barrier is implied by
mmdrop() when switching from kernel to userspace, and it's not needed
when switching from userspace to kernel.
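For context, the pairing that this barrier protects can be pictured roughly as follows (an illustrative, simplified sketch only; the helper names stand in for the real scheduler and kernel/sched/membarrier.c code paths and are not the exact kernel code):

/*
 * CPU0: __schedule(), switching to user task 'next'    CPU1: membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED)
 *
 *   rq->curr = next;                                      smp_mb();
 *   smp_mb();  <-- the barrier this patch provides        p = READ_ONCE(cpu_rq(0)->curr);
 *                  on RISC-V, via switch_mm()             if (p->mm == current->mm)
 *   ...                                                           IPI CPU0 (the IPI implies a full barrier);
 *   return to user-space of 'next';                       smp_mb();
 *
 * If CPU1 reads the old rq->curr, it sends no IPI and relies entirely on
 * CPU0's barrier after the rq->curr store to order 'next''s subsequent
 * user-space accesses; without that barrier, the ordering promised to the
 * membarrier() caller can be violated.
 */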
Rely on the feature/mechanism ARCH_HAS_MEMBARRIER_CALLBACKS and on the
primitive membarrier_arch_switch_mm(), already adopted by the PowerPC
architecture, to insert the required barrier.
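Concretely, adopting ARCH_HAS_MEMBARRIER_CALLBACKS amounts to selecting that option in the architecture Kconfig and providing membarrier_arch_switch_mm(), which the architecture's switch_mm() then calls when switching between user processes. A minimal sketch of such a callback, modeled on the existing PowerPC implementation (the file name and details below are illustrative, not a copy of the applied RISC-V patch):

/*
 * e.g. arch/riscv/include/asm/membarrier.h (illustrative); pulled in via
 * <linux/sched/mm.h> when CONFIG_ARCH_HAS_MEMBARRIER_CALLBACKS is set.
 */
#ifndef _ASM_RISCV_MEMBARRIER_H
#define _ASM_RISCV_MEMBARRIER_H

static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
					     struct mm_struct *next,
					     struct task_struct *tsk)
{
	/*
	 * Only a process -> process switch needs the barrier here:
	 * kernel -> user is already ordered by mmdrop(), and user ->
	 * kernel does not return to user-space.  Also skip it unless
	 * the next mm has registered for expedited membarrier.
	 */
	if (likely(!(atomic_read(&next->membarrier_state) &
		     (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
		      MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev))
		return;

	/*
	 * The membarrier system call requires a full memory barrier
	 * after storing to rq->curr, before going back to user-space.
	 */
	smp_mb();
}

#endif /* _ASM_RISCV_MEMBARRIER_H */

The fast-path cost is a single atomic_read() of the next mm's membarrier_state; smp_mb() is only issued when switching to a process that has actually registered for expedited membarrier.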
Fixes: fab957c11efe2f ("RISC-V: Atomic and Locking Code")
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/r/20240131144936.29190-2-parri.andrea@gmail.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Diffstat (limited to 'kernel')
 kernel/sched/core.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9116bcc90346..c4ca8085885a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6709,8 +6709,9 @@ static void __sched notrace __schedule(unsigned int sched_mode)
	 *
	 * Here are the schemes providing that barrier on the
	 * various architectures:
-	 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC.
-	 *   switch_mm() rely on membarrier_arch_switch_mm() on PowerPC.
+	 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC,
+	 *   RISC-V. switch_mm() relies on membarrier_arch_switch_mm()
+	 *   on PowerPC and on RISC-V.
	 * - finish_lock_switch() for weakly-ordered
	 *   architectures where spin_unlock is a full barrier,
	 * - switch_to() for arm64 (weakly-ordered, spin_unlock