| author    | Yosry Ahmed <yosryahmed@google.com>                   | 2024-02-22 19:09:10 +0000 |
| ---       | ---                                                   | ---                       |
| committer | Andrew Morton <akpm@linux-foundation.org>             | 2024-03-04 17:01:14 -0800 |
| commit    | cd87d9f58439a114f38cec276f3ec1600729c8bf (patch)      |                           |
| tree      | eae0e93f7270176884af353b1040b62e1def8b36 /arch/x86/mm |                           |
| parent    | f5eec03611d9352c8b19134ecd4742a6ebc45282 (diff)       |                           |
x86/mm: further clarify switch_mm_irqs_off() documentation
Commit accf6b23d1e5a ("x86/mm: clarify "prev" usage in
switch_mm_irqs_off()") attempted to clarify x86's usage of the arguments
passed by generic code, specifically the "prev" argument that is unused by
x86. However, it could have done a better job with the comment above
switch_mm_irqs_off(). Rewrite this comment according to Dave Hansen's
suggestion.
Link: https://lkml.kernel.org/r/20240222190911.1903054-1-yosryahmed@google.com
Fixes: 3cfd6625a6cf ("x86/mm: clarify "prev" usage in switch_mm_irqs_off()")
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Suggested-by: Dave Hansen <dave.hansen@intel.com>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov (AMD) <bp@alien8.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'arch/x86/mm')
-rw-r--r-- | arch/x86/mm/tlb.c | 8 |
1 file changed, 4 insertions, 4 deletions
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index bf9605caf24f..b67545baf697 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -493,10 +493,10 @@ static inline void cr4_update_pce_mm(struct mm_struct *mm) { }
 #endif
 
 /*
- * The "prev" argument passed by the caller does not always match CR3. For
- * example, the scheduler passes in active_mm when switching from lazy TLB mode
- * to normal mode, but switch_mm_irqs_off() can be called from x86 code without
- * updating active_mm. Use cpu_tlbstate.loaded_mm instead.
+ * This optimizes when not actually switching mm's. Some architectures use the
+ * 'unused' argument for this optimization, but x86 must use
+ * 'cpu_tlbstate.loaded_mm' instead because it does not always keep
+ * 'current->active_mm' up to date.
  */
 void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
 			struct task_struct *tsk)
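To illustrate the behaviour the new comment describes, here is a minimal, hypothetical C sketch of the pattern: the caller-supplied "prev"/"unused" argument is ignored and the per-CPU loaded mm is treated as the authoritative previous mm. The names switch_mm_irqs_off_sketch, tlb_state_sketch, and cpu_tlbstate_sketch are illustrative stand-ins, not kernel identifiers, and the body omits the real function's CR3/ASID/TLB-flush work.

```c
/*
 * Hypothetical sketch, not the kernel implementation: shows why x86 consults
 * its per-CPU loaded mm rather than the caller-supplied "prev"/"unused"
 * argument when deciding whether an mm switch is actually happening.
 */
struct mm_struct;

struct tlb_state_sketch {
	struct mm_struct *loaded_mm;	/* mm whose page tables are currently loaded */
};

/* A single instance stands in for the real per-CPU cpu_tlbstate. */
static struct tlb_state_sketch cpu_tlbstate_sketch;

void switch_mm_irqs_off_sketch(struct mm_struct *unused, struct mm_struct *next)
{
	/*
	 * Generic code's "prev" (here "unused") can be a lazy-TLB active_mm
	 * that no longer matches what is loaded, so it cannot be trusted.
	 */
	struct mm_struct *real_prev = cpu_tlbstate_sketch.loaded_mm;

	if (real_prev == next) {
		/* Not actually switching mm's: skip the expensive switch path. */
		return;
	}

	/* ... write CR3 for 'next' and flush the TLB as needed (omitted) ... */
	cpu_tlbstate_sketch.loaded_mm = next;
}
```

In the real arch/x86/mm/tlb.c the same-mm case still needs additional handling (for example, checking for pending flushes), so this only sketches the point the rewritten comment makes about where the authoritative "previous mm" lives.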