author    | Nicholas Piggin <npiggin@gmail.com>       | 2023-02-03 17:18:33 +1000
committer | Andrew Morton <akpm@linux-foundation.org> | 2023-03-28 16:20:07 -0700
commit    | 6cad87b0d216c6acdc40c5531c7b62db33fef5b1 (patch)
tree      | e3170d69da823325cfea6ac19c7c8be223d17432
parent    | 62bf1258ec90554c3c0925951149cfe4b67a3e98 (diff)
kthread: simplify kthread_use_mm refcounting
Patch series "shoot lazy tlbs (lazy tlb refcount scalability
improvement)", v7.
This series improves scalability of context switching between user and
kernel threads on large systems with a threaded process spread across a
lot of CPUs.
Discussion of v6 here:
https://lore.kernel.org/linux-mm/20230118080011.2258375-1-npiggin@gmail.com/
This patch (of 5):
Remove the special case avoiding refcounting when the mm to be used is the
same as the kernel thread's active (lazy tlb) mm. kthread_use_mm() should
not be such a performance critical path that this matters much. This
simplifies a later change to lazy tlb mm refcounting.
Link: https://lkml.kernel.org/r/20230203071837.1136453-1-npiggin@gmail.com
Link: https://lkml.kernel.org/r/20230203071837.1136453-2-npiggin@gmail.com
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Will Deacon <will@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-rw-r--r-- | kernel/kthread.c | 14
1 file changed, 5 insertions, 9 deletions
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 7e6751b29101..1f1b60f1a746 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -1415,14 +1415,13 @@ void kthread_use_mm(struct mm_struct *mm)
 	WARN_ON_ONCE(!(tsk->flags & PF_KTHREAD));
 	WARN_ON_ONCE(tsk->mm);
 
+	mmgrab(mm);
+
 	task_lock(tsk);
 	/* Hold off tlb flush IPIs while switching mm's */
 	local_irq_disable();
 	active_mm = tsk->active_mm;
-	if (active_mm != mm) {
-		mmgrab(mm);
-		tsk->active_mm = mm;
-	}
+	tsk->active_mm = mm;
 	tsk->mm = mm;
 	membarrier_update_current_mm(mm);
 	switch_mm_irqs_off(active_mm, mm, tsk);
@@ -1439,12 +1438,9 @@ void kthread_use_mm(struct mm_struct *mm)
 	 * memory barrier after storing to tsk->mm, before accessing
 	 * user-space memory. A full memory barrier for membarrier
 	 * {PRIVATE,GLOBAL}_EXPEDITED is implicitly provided by
-	 * mmdrop(), or explicitly with smp_mb().
+	 * mmdrop().
 	 */
-	if (active_mm != mm)
-		mmdrop(active_mm);
-	else
-		smp_mb();
+	mmdrop(active_mm);
 }
 EXPORT_SYMBOL_GPL(kthread_use_mm);
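For context, here is a minimal, hedged sketch (not part of the patch) of the calling pattern the change affects: a kernel thread temporarily adopting a user mm. After this patch, kthread_use_mm() always takes a reference on the new mm with mmgrab() and drops the previous active_mm with mmdrop(), with no special case for the lazy-tlb match. The worker function name and its wiring below are hypothetical; the caller is assumed to already hold a reference on the mm it hands to the kthread.

#include <linux/kthread.h>
#include <linux/sched/mm.h>

/* Hypothetical kthread body illustrating kthread_use_mm()/kthread_unuse_mm() */
static int example_mm_worker(void *data)
{
	struct mm_struct *mm = data;	/* mm pinned by the caller */

	kthread_use_mm(mm);		/* mm becomes tsk->mm and tsk->active_mm;
					 * with this patch, always mmgrab()'d */

	/* ... access user memory here, e.g. via copy_from_user() ... */

	kthread_unuse_mm(mm);		/* return to lazy-tlb operation */
	return 0;
}

Because kthread_use_mm() is not a hot path, doing the mmgrab()/mmdrop() pair unconditionally costs little here while letting the later lazy-tlb refcounting change avoid reasoning about the active_mm == mm special case.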