path: root/arch/arc/include/asm/mmu_context.h
Age  Commit message  Author
2024-04-01  ARC: Fix typos  (Bjorn Helgaas)

  Fix typos, most reported by "codespell arch/arc". Only touches comments, no code changes.

  Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-24  ARC: mm: move MMU specific bits out of ASID allocator  (Vineet Gupta)

  And while at it, rewrite the commentary on the ASID allocator.

  Signed-off-by: Vineet Gupta <vgupta@kernel.org>

2021-08-24  ARC: mm: use SCRATCH_DATA0 register for caching pgdir in ARCv2 only  (Vineet Gupta)

  The MMU SCRATCH_DATA0 register is intended to cache the task pgd. However, the ARC700 SMP
  port has to repurpose it for re-entrant interrupt handling, while the UP port does not.
  We currently handle these use-cases with a fabricated #define, which has the usual issues
  of dependency nesting and obvious ugliness.

  So clean this up: on ARC700 don't use the register to cache the pgd at all (even in UP),
  and do the opposite for ARCv2. And while here, switch to the canonical pgd_offset().

  Acked-by: Mike Rapoport <rppt@linux.ibm.com>
  Signed-off-by: Vineet Gupta <vgupta@kernel.org>

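  A minimal sketch of the arrangement this describes (the helper name mmu_setup_pgd() and the
  header locations are assumptions for illustration; write_aux_reg() and ARC_REG_SCRATCH_DATA0
  are the real ARC accessors):

      #include <linux/mm_types.h>	/* struct mm_struct */
      #include <asm/arcregs.h>		/* write_aux_reg() */
      #include <asm/mmu.h>		/* ARC_REG_SCRATCH_DATA0 (location assumed) */

      /* Sketch: only ARCv2 leaves SCRATCH_DATA0 free to cache the pgd;
       * on ARC700 the register belongs to the interrupt entry code. */
      static inline void mmu_setup_pgd(struct mm_struct *mm)
      {
      #ifdef CONFIG_ISA_ARCV2
      	/* TLB-miss handler can then read the pgd base from the aux reg
      	 * instead of chasing current->active_mm->pgd in memory */
      	write_aux_reg(ARC_REG_SCRATCH_DATA0, (unsigned int)(unsigned long)mm->pgd);
      #endif
      }
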
2020-10-26  arc: use asm-generic/mmu_context.h for no-op implementations  (Nicholas Piggin)

  Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
  Cc: Vineet Gupta <vgupta@synopsys.com>
  Cc: linux-snps-arc@lists.infradead.org
  Signed-off-by: Arnd Bergmann <arnd@arndb.de>

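  For context, a rough illustration of the asm-generic fallback pattern this commit adopts
  (simplified sketch, not the verbatim generic header): the generic file supplies empty hooks
  unless the architecture has already defined its own.

      /* asm-generic/mmu_context.h style fallback (simplified) */
      #ifndef enter_lazy_tlb
      static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
      {
      	/* no-op unless the architecture needs lazy-TLB bookkeeping */
      }
      #endif

      /* An arch header keeps only its non-trivial hooks and then pulls in
       * the generic no-ops for everything else, typically as its last line: */
      #include <asm-generic/mmu_context.h>
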
2019-10-28  ARCv2: mm: TLB Miss optim: SMP builds can cache pgd pointer in mmu scratch reg  (Vineet Gupta)

  ARC700 exception (and interrupt) handling didn't have automatic stack switching, so it had
  to stash a register temporarily (to free it up) at a known place in memory, allowing the
  low level stack switching to be coded up. This, however, was not re-entrant on SMP, which
  thus had to repurpose the per-cpu MMU SCRATCH_DATA0 register otherwise used to "cache" the
  task pgd pointer (vs. reading it from the mm struct).

  The newer HS cores do have automatic stack switching, so even SMP builds can use the MMU
  SCRATCH reg as originally intended. This patch narrows the restriction to ARC700 SMP
  builds only.

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

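  A sketch of the resulting build-time condition (the symbol name ARC_USE_SCRATCH_REG is an
  assumption for illustration): only ARC700 SMP must leave the scratch register to the
  exception prologue.

      /* Sketch: who gets to cache the pgd in SCRATCH_DATA0 after this change */
      #if defined(CONFIG_ISA_ARCV2) || !defined(CONFIG_SMP)
      #define ARC_USE_SCRATCH_REG	/* hypothetical name: reg is free for pgd caching */
      #endif
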
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500  (Thomas Gleixner)

  Based on 2 normalized pattern(s):

    this program is free software you can redistribute it and or modify it under the terms
    of the gnu general public license version 2 as published by the free software foundation

    this program is free software you can redistribute it and or modify it under the terms
    of the gnu general public license version 2 as published by the free software foundation #

  extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been
  chosen to replace the boilerplate/reference in 4122 file(s).

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Enrico Weigelt <info@metux.net>
  Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
  Reviewed-by: Allison Randal <allison@lohutok.net>
  Cc: linux-spdx@vger.kernel.org
  Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2017-03-02  sched/headers: Prepare for new header dependencies before moving code to <linux/sched/mm.h>  (Ingo Molnar)

  We are going to split <linux/sched/mm.h> out of <linux/sched.h>, which will have to be
  picked up from other headers and a couple of .c files.

  Create a trivial placeholder <linux/sched/mm.h> file that just maps to <linux/sched.h>
  to make this patch obviously correct and bisectable.

  The APIs that are going to be moved first are:

    mm_alloc()
    __mmdrop()
    mmdrop()
    mmdrop_async_fn()
    mmdrop_async()
    mmget_not_zero()
    mmput()
    mmput_async()
    get_task_mm()
    mm_access()
    mm_release()

  Include the new header in the files that are going to need it.

  Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
  Cc: Mike Galbraith <efault@gmx.de>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: linux-kernel@vger.kernel.org
  Signed-off-by: Ingo Molnar <mingo@kernel.org>

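  The placeholder header described above would look roughly like this (a sketch; the include
  guard name is an assumption):

      /* include/linux/sched/mm.h -- trivial placeholder that just maps to sched.h */
      #ifndef _LINUX_SCHED_MM_H
      #define _LINUX_SCHED_MM_H

      #include <linux/sched.h>

      #endif /* _LINUX_SCHED_MM_H */
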
2016-05-30  Fix typos  (Andrea Gelmini)

  Signed-off-by: Andrea Gelmini <andrea.gelmini@gelma.net>
  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

2013-11-06  ARC: [SMP] TLB flush  (Vineet Gupta)

  - Add mm_cpumask setting (aggregating only, unlike some other arches), used to restrict
    the TLB flush cross-calling
  - cross-calling versions of TLB flush routines (thanks to Noam)

  Signed-off-by: Noam Camus <noamc@ezchip.com>
  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

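  A minimal sketch of the two pieces described above (names simplified; cpumask_set_cpu(),
  mm_cpumask(), on_each_cpu_mask() and local_flush_tlb_mm() are the real kernel/ARC APIs,
  the helper names are assumptions):

      #include <linux/cpumask.h>
      #include <linux/mm_types.h>
      #include <linux/smp.h>
      #include <asm/tlbflush.h>

      /* switch_mm() side: remember every CPU this mm has ever run on
       * (aggregating only: bits are set here and never cleared) */
      static inline void track_mm_cpu(struct mm_struct *next)
      {
      	cpumask_set_cpu(smp_processor_id(), mm_cpumask(next));
      }

      /* per-CPU callback: flush this CPU's TLB entries for the mm */
      static void ipi_flush_tlb_mm(void *arg)
      {
      	local_flush_tlb_mm((struct mm_struct *)arg);
      }

      /* flush side: cross-call only the CPUs that may hold stale entries */
      static void smp_flush_tlb_mm(struct mm_struct *mm)
      {
      	on_each_cpu_mask(mm_cpumask(mm), ipi_flush_tlb_mm, mm, 1);
      }
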
2013-11-06  ARC: [SMP] ASID allocation  (Vineet Gupta)

  - Track a per-CPU ASID counter
  - per-mm, per-CPU ASID (an mm can have multiple threads, or migrate across CPUs)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

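  A rough sketch of the data layout this implies (field and variable names are assumptions
  loosely modelled on the ARC code; DECLARE_PER_CPU and NR_CPUS are the real kernel
  facilities):

      #include <linux/percpu.h>
      #include <linux/threads.h>	/* NR_CPUS */

      /* one rolling allocator per CPU: last ASID (+ cycle bits) handed out there */
      DECLARE_PER_CPU(unsigned long, asid_cache);

      /* each mm keeps one ASID slot per CPU it may run on */
      typedef struct {
      	unsigned long asid[NR_CPUS];
      } mm_context_t;
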
2013-08-30  ARC: [ASID] Track ASID allocation cycles/generations  (Vineet Gupta)

  This helps remove the asid-to-mm reverse map.

  While mm->context.id contains the ASID assigned to a process, our ASID allocator also kept
  an asid_mm_map[] reverse map. In a new allocation cycle (mm->ASID >= @asid_cache), the
  round-robin ASID allocator used it to check whether the new @asid_cache already belonged
  to some mm2 (from the previous cycle). If so, it could locate that mm via the reverse map
  and mark its ASID as unallocated, forcing a refresh at the next switch_mm().

  However, for SMP the reverse map would have to be maintained per CPU, i.e. become two
  dimensional, hence it was dropped. With the reverse map gone, it is NOT possible to reach
  the previous assignee. So we instead track the ASID allocation generation/cycle and, on
  every switch_mm(), check whether the mm's ASID is from the CPU's current generation; if
  not, it is refreshed. (Based loosely on the arch/sh implementation.)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

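  A sketch of the generation check described above, treating the low 8 bits of the counter
  as the hardware ASID and the upper bits as the allocation cycle (mask names follow the ARC
  convention):

      #define MM_CTXT_ASID_MASK	0x000000ffUL		/* hardware ASID: low 8 bits */
      #define MM_CTXT_CYCLE_MASK	(~MM_CTXT_ASID_MASK)	/* allocation cycle/generation */

      /* true if mm's ASID was handed out in an earlier cycle on this CPU,
       * i.e. the hardware ASID may since have been recycled to another mm */
      static inline int mm_asid_is_stale(unsigned long mm_asid, unsigned long cpu_asid_cache)
      {
      	return (mm_asid ^ cpu_asid_cache) & MM_CTXT_CYCLE_MASK;
      }
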
2013-08-30  ARC: [ASID] activate_mm() == switch_mm()  (Vineet Gupta)

  ASID allocation changes/2

  Use the fact that switch_mm() and activate_mm() are now exactly the same code, while
  acknowledging the semantic difference in a comment.

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

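  In effect the header can collapse the two entry points into one, roughly:

      /* activate_mm() is called for a brand new address space (fork/execve),
       * switch_mm() for an already live one; with ASID/PID handling now done
       * unconditionally in one place, the same code serves both. */
      #define activate_mm(prev, next)		switch_mm(prev, next, NULL)
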
2013-08-30  ARC: [ASID] get_new_mmu_context() to conditionally allocate new ASID  (Vineet Gupta)

  ASID allocation changes/1

  This patch does 2 things:

  (1) get_new_mmu_context() now moves mm->ASID to a new value ONLY if it was from a previous
      allocation cycle/generation OR if the mm had no ASID allocated (vs. before, where it
      would unconditionally move to a new ASID).

      Callers desiring an unconditional update of the ASID, e.g. local_flush_tlb_mm() (for
      invalidating the parent's address space at fork), need to first force the parent to an
      unallocated ASID.

  (2) get_new_mmu_context() always sets the MMU PID reg with the unchanged/new ASID value.

  The gains are:
  - consolidation of all ASID alloc logic into get_new_mmu_context()
  - avoiding code duplication in switch_mm() for PID reg setting
  - enables the future change to fold activate_mm() into switch_mm()

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

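  A simplified, UP-style sketch of the flow after this change (write_aux_reg(), ARC_REG_PID
  and MMU_ENABLE follow the ARC conventions; asid_cache, the MM_CTXT_* masks and the
  mm->context.asid field are the assumed allocator state sketched in the entries above):

      #include <linux/irqflags.h>
      #include <asm/arcregs.h>		/* write_aux_reg() */
      #include <asm/mmu.h>		/* ARC_REG_PID, MMU_ENABLE (locations assumed) */

      #define MM_CTXT_ASID_MASK	0x000000ffUL
      #define MM_CTXT_CYCLE_MASK	(~MM_CTXT_ASID_MASK)
      #define MM_CTXT_FIRST_CYCLE	(MM_CTXT_ASID_MASK + 1)

      /* last ASID handed out; starts at MM_CTXT_FIRST_CYCLE so that an mm with
       * no ASID yet (context.asid == 0) never matches the live cycle */
      extern unsigned long asid_cache;

      static inline void get_new_mmu_context(struct mm_struct *mm)
      {
      	unsigned long flags;

      	local_irq_save(flags);

      	/* (1) reallocate only if the mm has no ASID yet, or its ASID is
      	 *     from an older allocation cycle */
      	if (!((mm->context.asid ^ asid_cache) & MM_CTXT_CYCLE_MASK))
      		goto set_hw;

      	if (!(++asid_cache & MM_CTXT_ASID_MASK))	/* 8-bit ASID wrapped */
      		local_flush_tlb_all();			/* new cycle: drop stale entries */

      	mm->context.asid = asid_cache;

      set_hw:
      	/* (2) always (re)program the hardware PID register */
      	write_aux_reg(ARC_REG_PID, (mm->context.asid & MM_CTXT_ASID_MASK) | MMU_ENABLE);

      	local_irq_restore(flags);
      }
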
2013-08-30  ARC: [ASID] Remove legacy/unused debug code  (Vineet Gupta)

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

2013-02-15  ARC: SMP support  (Vineet Gupta)

  ARC common code to enable an SMP system, plus ISS-provided SMP extensions.

  ARC700 natively lacks SMP support, hence some of the core features are only enabled if
  SoCs have the necessary h/w pixie-dust. This includes:
  - Inter Processor Interrupts (IPI)
  - Cache coherency
  - load-locked/store-conditional
  ...

  The low level exception handling would be completely broken in SMP because we don't have
  hardware assisted stack switching. Thus a fair bit of this code is repurposing the
  MMU_SCRATCH reg for event handler prologues to keep them re-entrant.

  Many thanks to Rajeshwar Ranga for his initial "major" contributions to the SMP port (back
  in 2008), and to Noam Camus and Gilad Ben-Yossef for help with resurrecting that in the
  3.2 kernel (2012).

  Note that this platform code is again a singleton design pattern, so multiple SMP
  platforms won't build at the moment - this deficiency is addressed in subsequent patches
  within this series.

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
  Cc: Arnd Bergmann <arnd@arndb.de>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Rajeshwar Ranga <rajeshwar.ranga@gmail.com>
  Cc: Noam Camus <noamc@ezchip.com>
  Cc: Gilad Ben-Yossef <gilad@benyossef.com>

2013-02-15  ARC: MMU Context Management  (Vineet Gupta)

  The ARC700 MMU provides for tagging TLB entries with an 8-bit ASID to avoid having to
  flush the TLB on every task switch. It also allows a quick way to invalidate all the TLB
  entries for a task, useful for:
  * COW semantics during fork()
  * task exit()

  Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

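  A minimal sketch of what the ASID tagging buys (write_aux_reg() and ARC_REG_PID are the
  real ARC accessors; the MMU_ENABLE bit and helper name follow the ARC convention and are
  otherwise assumptions): on a task switch the MMU's PID register is loaded with the incoming
  task's ASID, so TLB entries tagged with other ASIDs simply never match and no flush is
  needed.

      /* load the hardware ASID for the incoming task; the enable bit keeps
       * address translation turned on while the PID register is updated */
      static inline void set_hw_asid(unsigned int asid)
      {
      	write_aux_reg(ARC_REG_PID, (asid & 0xff) | MMU_ENABLE);
      }
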