path: root/arch
2021-08-26  Merge branch 'for-next/entry' into for-next/core  (Catalin Marinas)
* for-next/entry:
  : More entry.S clean-ups and conversion to C.
  arm64: entry: call exit_to_user_mode() from C
  arm64: entry: move bulk of ret_to_user to C
  arm64: entry: clarify entry/exit helpers
  arm64: entry: consolidate entry/exit helpers
2021-08-26  Merge branches 'for-next/mte', 'for-next/misc' and 'for-next/kselftest', remote-tracking branch 'arm64/for-next/perf' into for-next/core  (Catalin Marinas)
* arm64/for-next/perf:
  arm64/perf: Replace '0xf' instances with ID_AA64DFR0_PMUVER_IMP_DEF

* for-next/mte:
  : Miscellaneous MTE improvements.
  arm64/cpufeature: Optionally disable MTE via command-line
  arm64: kasan: mte: remove redundant mte_report_once logic
  arm64: kasan: mte: use a constant kernel GCR_EL1 value
  arm64: avoid double ISB on kernel entry
  arm64: mte: optimize GCR_EL1 modification on kernel entry/exit
  Documentation: document the preferred tag checking mode feature
  arm64: mte: introduce a per-CPU tag checking mode preference
  arm64: move preemption disablement to prctl handlers
  arm64: mte: change ASYNC and SYNC TCF settings into bitfields
  arm64: mte: rename gcr_user_excl to mte_ctrl
  arm64: mte: avoid TFSRE0_EL1 related operations unless in async mode

* for-next/misc:
  : Miscellaneous updates.
  arm64: Do not trap PMSNEVFR_EL1
  arm64: mm: fix comment typo of pud_offset_phys()
  arm64: signal32: Drop pointless call to sigdelsetmask()
  arm64/sve: Better handle failure to allocate SVE register storage
  arm64: Document the requirement for SCR_EL3.HCE
  arm64: head: avoid over-mapping in map_memory
  arm64/sve: Add a comment documenting the binutils needed for SVE asm
  arm64/sve: Add some comments for sve_save/load_state()
  arm64: replace in_irq() with in_hardirq()
  arm64: mm: Fix TLBI vs ASID rollover
  arm64: entry: Add SYM_CODE annotation for __bad_stack
  arm64: fix typo in a comment
  arm64: move the (z)install rules to arch/arm64/Makefile
  arm64/sve: Make fpsimd_bind_task_to_cpu() static
  arm64: unnecessary end 'return;' in void functions
  arm64/sme: Document boot requirements for SME
  arm64: use __func__ to get function name in pr_err
  arm64: SSBS/DIT: print SSBS and DIT bit when printing PSTATE
  arm64: cpufeature: Use defined macro instead of magic numbers
  arm64/kexec: Test page size support with new TGRAN range values

* for-next/kselftest:
  : Kselftest additions for arm64.
  kselftest/arm64: signal: Add a TODO list for signal handling tests
  kselftest/arm64: signal: Add test case for SVE register state in signals
  kselftest/arm64: signal: Verify that signals can't change the SVE vector length
  kselftest/arm64: signal: Check SVE signal frame shows expected vector length
  kselftest/arm64: signal: Support signal frames with SVE register data
  kselftest/arm64: signal: Add SVE to the set of features we can check for
  kselftest/arm64: pac: Fix skipping of tests on systems without PAC
  kselftest/arm64: mte: Fix misleading output when skipping tests
  kselftest/arm64: Add a TODO list for floating point tests
  kselftest/arm64: Add tests for SVE vector configuration
  kselftest/arm64: Validate vector lengths are set in sve-probe-vls
  kselftest/arm64: Provide a helper binary and "library" for SVE RDVL
  kselftest/arm64: Ignore check_gcr_el1_cswitch binary
2021-08-26  arm64: Do not trap PMSNEVFR_EL1  (Alexandru Elisei)
Commit 31c00d2aeaa2 ("arm64: Disable fine grained traps on boot") zeroed the fine grained trap registers to prevent unwanted register traps from occurring. However, for the PMSNEVFR_EL1 register, the corresponding HDFG{R,W}TR_EL2.nPMSNEVFR_EL1 fields must be 1 to disable trapping. Set both fields to 1 if FEAT_SPEv1p2 is detected to disable read and write traps. Fixes: 31c00d2aeaa2 ("arm64: Disable fine grained traps on boot") Cc: <stable@vger.kernel.org> # 5.13.x Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210824154523.906270-1-alexandru.elisei@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
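The gist in C, as a rough sketch (the actual change lives in the EL2 setup code, and the helper and register/field macros below are illustrative assumptions, not the exact kernel names):

    u64 hdfgrtr = 0, hdfgwtr = 0;

    /* nPMSNEVFR_EL1 are "negative" trap enables: 1 disables the trap. */
    if (cpu_has_feat_spev1p2()) {                 /* hypothetical helper */
        hdfgrtr |= HDFGRTR_EL2_nPMSNEVFR_EL1;     /* no read traps */
        hdfgwtr |= HDFGWTR_EL2_nPMSNEVFR_EL1;     /* no write traps */
    }
    write_sysreg_s(hdfgrtr, SYS_HDFGRTR_EL2);
    write_sysreg_s(hdfgwtr, SYS_HDFGWTR_EL2);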
2021-08-25  arm64: mm: fix comment typo of pud_offset_phys()  (Xujun Leng)
Fix a typo in the comment of macro pud_offset_phys(). Signed-off-by: Xujun Leng <lengxujun2007@126.com> Link: https://lore.kernel.org/r/20210825150526.12582-1-lengxujun2007@126.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-08-25  arm64: signal32: Drop pointless call to sigdelsetmask()  (Will Deacon)
Commit 77097ae503b1 ("most of set_current_blocked() callers want SIGKILL/SIGSTOP removed from set") extended set_current_blocked() to remove SIGKILL and SIGSTOP from the new signal set and updated all callers accordingly. Unfortunately, this collided with the merge of the arm64 architecture, which duly removes these signals when restoring the compat sigframe, as this was what was previously done by arch/arm/. Remove the redundant call to sigdelsetmask() from compat_restore_sigframe(). Reported-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210825093911.24493-1-will@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-08-24  arm64/sve: Better handle failure to allocate SVE register storage  (Mark Brown)
Currently we "handle" failure to allocate the SVE register storage by doing a BUG_ON() and hoping for the best. This is obviously not great, and the memory allocation failure will already be loud enough without the BUG_ON(). As the comment says, it is a corner case, but let's try to do a bit better: remove the BUG_ON() and add code to handle the failure in the callers. For the ptrace and signal code we can return -ENOMEM gracefully; however, we have no real error reporting path available to us for the SVE access trap, so instead generate a SIGKILL if the allocation fails there. This at least means that we won't try to soldier on and end up accessing the nonexistent state. While SIGKILL is obviously not ideal for userspace, it doesn't allow any handling, so it minimises the ABI impact, making it easier to improve the interface later if we come up with a better idea. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210824153417.18371-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-08-24  arm64: head: avoid over-mapping in map_memory  (Mark Rutland)
The `compute_indices` and `populate_entries` macros operate on inclusive bounds, and thus the `map_memory` macro which uses them also operates on inclusive bounds. We pass `_end` and `_idmap_text_end` to `map_memory`, but these are exclusive bounds, and if one of these is sufficiently aligned (as a result of kernel configuration, physical placement, and KASLR), then:
* In `compute_indices`, the computed `iend` will be in the page/block *after* the final byte of the intended mapping.
* In `populate_entries`, an unnecessary entry will be created at the end of each level of table. At the leaf level, this entry will map up to SWAPPER_BLOCK_SIZE bytes of physical addresses that we did not intend to map.
As we may map up to SWAPPER_BLOCK_SIZE bytes more than intended, we may violate the boot protocol and map physical addresses past the 2MiB-aligned end address we are permitted to map. As we map these with Normal memory attributes, this may result in further problems depending on what these physical addresses correspond to. The final entry at each level may require an additional table at that level. As EARLY_ENTRIES() calculates an inclusive bound, we allocate enough memory for this. Avoid the extraneous mapping by having map_memory convert the exclusive end address to an inclusive end address by subtracting one, and do likewise in EARLY_ENTRIES() when calculating the number of required tables. For clarity, comments are updated to more clearly document which boundaries the macros operate on. For consistency with the other macros, the comments in map_memory are also updated to describe `vstart` and `vend` as virtual addresses. Fixes: 0370b31e4845 ("arm64: Extend early page table code to allow for larger kernels") Cc: <stable@vger.kernel.org> # 4.16.x Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Steve Capper <steve.capper@arm.com> Cc: Will Deacon <will@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210823101253.55567-1-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
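Conceptually, the fix is an off-by-one conversion, sketched here in C (the real code is assembly in head.S):

    /* The macros expect inclusive bounds; convert the exclusive end so no
     * entry is created for the page/block after the intended mapping. */
    vend = vend_exclusive - 1;
    istart = (vstart >> shift) & mask;
    iend   = (vend   >> shift) & mask;   /* index of the last entry */
    count  = iend - istart + 1;          /* entries (and tables) needed */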
2021-08-24  arm64/sve: Add a comment documenting the binutils needed for SVE asm  (Mark Brown)
At some point it would be nice to avoid the need to manually encode SVE instructions; until then, add a note of the binutils version required so it doesn't have to be looked up. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210816125024.8112-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-08-24  arm64/sve: Add some comments for sve_save/load_state()  (Mark Brown)
The use of macros for the actual function bodies means legibility is always going to be a bit of a challenge, especially while we can't rely on SVE support in the toolchain, but this helps a little. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210812201143.35578-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-08-20  arm64: replace in_irq() with in_hardirq()  (Changbin Du)
Replace the obsolete and ambiguous macro in_irq() with the new macro in_hardirq(). Signed-off-by: Changbin Du <changbin.du@gmail.com> Link: https://lore.kernel.org/r/20210814005405.2658-1-changbin.du@gmail.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-08-11  arm64/perf: Replace '0xf' instances with ID_AA64DFR0_PMUVER_IMP_DEF  (Anshuman Khandual)
ID_AA64DFR0_PMUVER_IMP_DEF, indicating an "implementation defined" PMU, never actually gets used although there are '0xf' instances scattered all around. Use the symbolic name instead of the raw hex constant. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/1628652427-24695-2-git-send-email-anshuman.khandual@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-08-06  arm64: mm: Fix TLBI vs ASID rollover  (Will Deacon)
When switching to an 'mm_struct' for the first time following an ASID rollover, a new ASID may be allocated and assigned to 'mm->context.id'. This reassignment can happen concurrently with other operations on the mm, such as unmapping pages and subsequently issuing TLB invalidation. Consequently, we need to ensure that (a) accesses to 'mm->context.id' are atomic and (b) all page-table updates made prior to a TLBI using the old ASID are guaranteed to be visible to CPUs running with the new ASID. This was found by inspection after reviewing the VMID changes from Shameer but it looks like a real (yet hard to hit) bug. Cc: <stable@vger.kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Jade Alglave <jade.alglave@arm.com> Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Signed-off-by: Will Deacon <will@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210806113109.2475-2-will@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
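The atomicity half of the fix, sketched (on arm64, mm->context.id is an atomic64_t; the ordering half relies on barriers in the patch itself):

    /* A plain load could race with a concurrent rollover reassigning
     * mm->context.id; read it atomically instead. */
    u64 asid = atomic64_read(&mm->context.id);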
2021-08-06  arm64: entry: Add SYM_CODE annotation for __bad_stack  (Mark Brown)
When converting arm64 to modern assembler annotations __bad_stack was left as a raw local label without annotations. While this will have little if any practical impact at present it may cause issues in the future if we start using the annotations for things like reliable stack trace. Add SYM_CODE annotations to fix this. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210804181710.19059-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-08-05  arm64: entry: call exit_to_user_mode() from C  (Mark Rutland)
When handling an exception from EL0, we perform the entry work in that exception's C handler, and once the C handler has finished, we return back to the entry assembly. Subsequently in the common `ret_to_user` assembly we perform the exit work that balances with the entry work. This can be somewhat difficult to follow, and makes it hard to rework the return paths (e.g. to pass additional context to the exit code, or to have exception return logic for specific exceptions). This patch reworks the entry code such that each EL0 C exception handler is responsible for both the entry and exit work. This clearly balances the two (and will permit additional variation in future), and avoids an unnecessary bounce between assembly and C in the common case, leaving `ret_from_fork` as the only place assembly has to call the exit code. This means that the exit work is now inlined into the C handler, which is already the case for the entry work, and allows the compiler to generate better code (e.g. by immediately returning when there is no exit work to perform). To align with other exception entry/exit helpers, enter_from_user_mode() is updated to take the EL0 pt_regs as a parameter, though this is currently unused. There should be no functional change as a result of this patch. However, this should lead to slightly better backtraces when an error is encountered within do_notify_resume(), as the C handler should appear in the backtrace, indicating the specific exception that the kernel was entered with. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Link: https://lore.kernel.org/r/20210802140733.52716-5-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
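The resulting shape of an EL0 exception handler, roughly (a simplified sketch, not the verbatim kernel code):

    static void noinstr el0_svc(struct pt_regs *regs)
    {
        enter_from_user_mode(regs);   /* entry work */
        do_el0_svc(regs);             /* the actual exception handling */
        exit_to_user_mode(regs);      /* exit work, now inlined in C */
    }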
2021-08-05  arm64: entry: move bulk of ret_to_user to C  (Mark Rutland)
In `ret_to_user` we perform some conditional work depending on the thread flags, then perform some IRQ/context tracking which is intended to balance with the IRQ/context tracking performed in the entry C code. For simplicity and consistency, it would be preferable to move this all to C. As a step towards that, this patch moves the conditional work and IRQ/context tracking into a C helper function. To aid bisectability, this is called from the `ret_to_user` assembly, and a subsequent patch will move the call to C code. As local_daif_mask() handles all necessary tracing and PMR manipulation, we no longer need to handle this explicitly. As we call exit_to_user_mode() directly, the `user_enter_irqoff` macro is no longer used, and can be removed. As enter_from_user_mode() and exit_to_user_mode() are no longer called from assembly, these can be made static, and as these are typically very small, they are marked __always_inline to avoid the overhead of a function call. For now, enablement of single-step is left in entry.S, and for this we still need to read the flags in ret_to_user(). It is safe to read this separately as TIF_SINGLESTEP is not part of _TIF_WORK_MASK. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Link: https://lore.kernel.org/r/20210802140733.52716-4-mark.rutland@arm.com [catalin.marinas@arm.com: removed unused gic_prio_kentry_setup macro] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-08-05  arm64: entry: clarify entry/exit helpers  (Mark Rutland)
When entering an exception, we must perform irq/context state management before we can use instrumentable C code. Similarly, when exiting an exception we cannot use instrumentable C code after we perform irq/context state management. Originally, we'd intended that the enter_from_*() and exit_to_*() helpers would enforce this by virtue of being the first and last functions called, respectively, in an exception handler. However, as they now call instrumentable code themselves, this is not as clearly true. To make this more robust, this patch splits the irq/context state management into separate helpers, with all the helpers commented to make their intended purpose more obvious. In exit_to_kernel_mode() we'll now check TFSR_EL1 before we assert that IRQs are disabled, but this ordering is not important, and other than this there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Link: https://lore.kernel.org/r/20210802140733.52716-3-mark.rutland@arm.com [catalin.marinas@arm.com: comment typos fix-up] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-08-05  arm64: entry: consolidate entry/exit helpers  (Mark Rutland)
To make the various entry/exit helpers easier to understand and easier to compare, this patch moves all the entry/exit helpers to be adjacent at the top of entry-common.c, rather than being spread out throughout the file. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Joey Gouly <joey.gouly@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Link: https://lore.kernel.org/r/20210802140733.52716-2-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-08-03  arm64: fix typo in a comment  (Jason Wang)
The word 'the' is duplicated in the comment "If the the TLB range ops are supported...", so remove one 'the'. Signed-off-by: Jason Wang <wangborong@cdjrlc.com> Link: https://lore.kernel.org/r/20210803142020.124230-1-wangborong@cdjrlc.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-08-03  arm64: move the (z)install rules to arch/arm64/Makefile  (Masahiro Yamada)
Currently, the (z)install targets in arch/arm64/Makefile descend into arch/arm64/boot/Makefile to invoke the shell script, but there is no good reason to do so. arch/arm64/Makefile can run the shell script directly. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Link: https://lore.kernel.org/r/20210729140527.443116-1-masahiroy@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-08-03  arm64/cpufeature: Optionally disable MTE via command-line  (Yee Lee)
MTE support needs to be optionally disabled at runtime, e.g. to work around hardware issues, for firmware development, and for evaluating the impact on system resources and performance. This patch makes two changes: (1) it moves the initialisation of the tag-allocation bits (ATA/ATA0) to cpu_enable_mte(), as these are not cached in the TLB; (2) it allows the shadow value of ID_AA64PFR1_EL1.MTE to be overridden by passing "arm64.nomte" on the command line. When the feature is off, ATA and TCF are not set and the related functionality is suppressed accordingly. Suggested-by: Catalin Marinas <catalin.marinas@arm.com> Suggested-by: Marc Zyngier <maz@kernel.org> Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Yee Lee <yee.lee@mediatek.com> Link: https://lore.kernel.org/r/20210803070824.7586-2-yee.lee@mediatek.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-08-02  arm64: kasan: mte: remove redundant mte_report_once logic  (Mark Rutland)
We have special logic to suppress MTE tag check fault reporting, based on global `mte_report_once` and `reported` variables. These can be used to suppress calling kasan_report() when taking a tag check fault, but do not prevent taking the fault in the first place, nor do they affect the way we disable tag checks upon taking a fault. The core KASAN code already defaults to reporting a single fault, and has a `multi_shot` control to permit reporting multiple faults. The only place we transiently alter `mte_report_once` is in lib/test_kasan.c, where we also alter the `multi_shot` state at the same time. Thus `mte_report_once` and `reported` are redundant, and can be removed. When a tag check fault is taken, tag checking will be disabled by `do_tag_recovery` and must be explicitly re-enabled if desired. The test code does this by calling kasan_enable_tagging_sync(). This patch removes the redundant mte_report_once() logic and associated variables. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Will Deacon <will@kernel.org> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com> Tested-by: Andrey Konovalov <andreyknvl@gmail.com> Link: https://lore.kernel.org/r/20210714143843.56537-4-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-08-02  arm64: kasan: mte: use a constant kernel GCR_EL1 value  (Mark Rutland)
When KASAN_HW_TAGS is selected, KASAN is enabled at boot time, and the hardware supports MTE, we'll initialize `kernel_gcr_excl` with a value dependent on KASAN_TAG_MAX. While the resulting value is a constant which depends on KASAN_TAG_MAX, we have to perform some runtime work to generate the value, and have to read the value from memory during the exception entry path. It would be better if we could generate this as a constant at compile-time, and use it as such directly. Early in boot within __cpu_setup(), we initialize GCR_EL1 to a safe value, and later override this with the value required by KASAN. If CONFIG_KASAN_HW_TAGS is not selected, or if KASAN is disabled at boot time, the kernel will not use IRG instructions, and so the initial value of GCR_EL1 does not matter to the kernel. Thus, we can instead have __cpu_setup() initialize GCR_EL1 to a value consistent with KASAN_TAG_MAX, and avoid the need to re-initialize it during hotplug and resume from suspend. This patch makes arm64 use a compile-time constant KERNEL_GCR_EL1 value, which is compatible with KASAN_HW_TAGS when this is selected. This removes the need to re-initialize GCR_EL1 dynamically, and acts as an optimization to the entry assembly, which no longer needs to load this value from memory. The redundant initialization hooks are removed. In order to do this, KASAN_TAG_MAX needs to be visible outside of the core KASAN code. To do this, I've moved the KASAN_TAG_* values into <linux/kasan-tags.h>. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Peter Collingbourne <pcc@google.com> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com> Tested-by: Andrey Konovalov <andreyknvl@gmail.com> Link: https://lore.kernel.org/r/20210714143843.56537-3-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
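A sketch of the compile-time derivation (the exclusion-mask construction below is illustrative, not the exact definition from the patch):

    /* GCR_EL1.Exclude is a 16-bit mask of tags IRG must not generate.
     * With KASAN_TAG_MAX fixed at build time, the whole register value
     * can be a constant. */
    #define KERNEL_GCR_EL1_EXCL \
        (SYS_GCR_EL1_EXCL_MASK & ~GENMASK(KASAN_TAG_MAX & 0xf, 0))
    #define KERNEL_GCR_EL1 (SYS_GCR_EL1_RRND | KERNEL_GCR_EL1_EXCL)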
2021-08-02  arm64/sve: Make fpsimd_bind_task_to_cpu() static  (Mark Brown)
This function is not referenced outside fpsimd.c so can be static, making it that little bit easier to follow what is called from where. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210730165846.18558-1-broonie@kernel.org Reviewed-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-08-02  arm64: unnecessary end 'return;' in void functions  (Jason Wang)
The end 'return;' in a void function is useless and verbose. It can be removed safely. Signed-off-by: Jason Wang <wangborong@cdjrlc.com> Link: https://lore.kernel.org/r/20210726123940.63232-1-wangborong@cdjrlc.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-07-30  arm64: use __func__ to get function name in pr_err  (Jason Wang)
Prefer using '"%s...", __func__' to get current function's name in a debug message. Signed-off-by: Jason Wang <wangborong@cdjrlc.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210726122907.51529-1-wangborong@cdjrlc.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-07-30  arm64: SSBS/DIT: print SSBS and DIT bit when printing PSTATE  (Lingyan Huang)
The current code to print PSTATE when generating backtraces does not include the SSBS and DIT bits, so add this information. Cc: Vladimir Murzin <vladimir.murzin@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Lingyan Huang <huanglingyan2@huawei.com> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/1626920436-54816-1-git-send-email-zhangshaokun@hisilicon.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-07-30  arm64: cpufeature: Use defined macro instead of magic numbers  (Shaokun Zhang)
Use the defined macro to simplify the code and make it more readable. Cc: Marc Zyngier <maz@kernel.org> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Link: https://lore.kernel.org/r/1626415089-57584-1-git-send-email-zhangshaokun@hisilicon.com Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-07-30  arm64/kexec: Test page size support with new TGRAN range values  (Anshuman Khandual)
Commit 26f55386f964 ("arm64/mm: Fix __enable_mmu() for new TGRAN range values") already switched to testing ID_AA64MMFR0_TGRAN range values. This change makes the system_supports_[4|16|64]kb_granule() helpers perform similar range tests as well. While here, it standardizes the page-size-specific supported minimum and maximum TGRAN values. Cc: Will Deacon <will@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/1626237975-1909-1-git-send-email-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
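A sketch of the range-style test for 4K granule support (constant names as introduced around commit 26f55386f964; treat the details as illustrative):

    u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
    int tgran4 = cpuid_feature_extract_signed_field(mmfr0,
                        ID_AA64MMFR0_TGRAN4_SHIFT);

    /* "Supported" is a [min, max] range of the signed field, not a
     * comparison against a single magic value. */
    bool ok = tgran4 >= ID_AA64MMFR0_TGRAN4_SUPPORTED_MIN &&
              tgran4 <= ID_AA64MMFR0_TGRAN4_SUPPORTED_MAX;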
2021-07-28  arm64: avoid double ISB on kernel entry  (Peter Collingbourne)
Although an ISB is required in order to make the MTE-related system register update to GCR_EL1 effective, and the same is true for PAC-related updates to SCTLR_EL1 or APIAKey{Hi,Lo}_EL1, we issue two ISBs on machines that support both features while we only need to issue one. To avoid the unnecessary additional ISB, remove the ISBs from the PAC and MTE-specific alternative blocks and add a couple of additional blocks that cause us to only execute one ISB if both features are supported. Signed-off-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/Idee7e8114d5ae5a0b171d06220a0eb4bb015a51c Link: https://lore.kernel.org/r/20210727205439.2557419-1-pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-07-28  arm64: mte: optimize GCR_EL1 modification on kernel entry/exit  (Peter Collingbourne)
Accessing GCR_EL1 and issuing an ISB can be expensive on some microarchitectures. Although we must write to GCR_EL1, we can restructure the code to avoid reading from it because the new value can be derived entirely from the exclusion mask, which is already in a GPR. Do so. Signed-off-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/I560a190a74176ca4cc5191dad08f77f6b1577c75 Link: https://lore.kernel.org/r/20210714013638.3995315-1-pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-07-28  arm64: mte: introduce a per-CPU tag checking mode preference  (Peter Collingbourne)
Add a per-CPU sysfs node, mte_tcf_preferred, that allows the preferred tag checking mode to be configured. The current possible values are async and sync. Link: https://linux-review.googlesource.com/id/I7493dcd533a2785a1437b16c3f6b50919f840854 Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210727205300.2554659-5-pcc@google.com Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
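The core of the mechanism, sketched (simplified from the patch; the sysfs path follows the usual per-CPU convention):

    /* Per-CPU preferred TCF mode, exposed at
     * /sys/devices/system/cpu/cpu<N>/mte_tcf_preferred, and consulted
     * when a task permits more than one tag checking mode. */
    static DEFINE_PER_CPU_READ_MOSTLY(u64, mte_tcf_preferred);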
2021-07-28  arm64: move preemption disablement to prctl handlers  (Peter Collingbourne)
In the next patch, we will start reading sctlr_user from mte_update_sctlr_user and subsequently writing a new value based on the task's TCF setting and potentially the per-CPU TCF preference. This means that we need to be careful to disable preemption around any code sequences that read from sctlr_user and subsequently write to sctlr_user and/or SCTLR_EL1, so that we don't end up writing a stale value (based on the previous CPU's TCF preference) to either of them. We currently have four such sequences, in the prctl handlers for PR_SET_TAGGED_ADDR_CTRL and PR_PAC_SET_ENABLED_KEYS, as well as in the task initialization code that resets the prctl settings. Change the prctl handlers to disable preemption in the handlers themselves rather than the functions that they call, and change the task initialization code to call the respective prctl handlers instead of setting sctlr_user directly. As a result of this change, we no longer need the helper function set_task_sctlr_el1, nor does its behavior make sense any more, so remove it. Signed-off-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/Ic0e8a0c00bb47d786c1e8011df0b7fe99bee4bb5 Link: https://lore.kernel.org/r/20210727205300.2554659-4-pcc@google.com Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-07-28  arm64: mte: change ASYNC and SYNC TCF settings into bitfields  (Peter Collingbourne)
Allow the user program to specify both ASYNC and SYNC TCF modes by repurposing the existing constants as bitfields. This will allow the kernel to select one of the modes on behalf of the user program. With this patch the kernel will always select async mode, but a subsequent patch will make this configurable. Link: https://linux-review.googlesource.com/id/Icc5923c85a8ea284588cc399ae74fd19ec291230 Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210727205300.2554659-3-pcc@google.com Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
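The repurposed constants keep their existing numeric values but now act as independent bits (a sketch; see the uapi header for the authoritative definitions):

    #define PR_MTE_TCF_NONE   0UL
    #define PR_MTE_TCF_SYNC   (1UL << 1)
    #define PR_MTE_TCF_ASYNC  (1UL << 2)

    /* Both modes may now be requested at once; the kernel picks one. */
    unsigned long tcf = PR_MTE_TCF_SYNC | PR_MTE_TCF_ASYNC;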
2021-07-28  arm64: mte: rename gcr_user_excl to mte_ctrl  (Peter Collingbourne)
We are going to use this field to store more data. To prepare for that, rename it and change the users to rely on the bit position of gcr_user_excl in mte_ctrl. Link: https://linux-review.googlesource.com/id/Ie1fd18e480100655f5d22137f5b22f4f3a9f9e2e Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210727205300.2554659-2-pcc@google.com Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-07-27  arm64: mte: avoid TFSRE0_EL1 related operations unless in async mode  (Peter Collingbourne)
There is no reason to touch TFSRE0_EL1 nor issue a DSB unless our task is in asynchronous mode. Since these operations (especially the DSB) may be expensive on certain microarchitectures, only perform them if necessary. Furthermore, stop clearing TFSRE0_EL1 on entry because it will be cleared on exit and it is not necessary to have any particular value in TFSRE0_EL1 between entry and exit. Signed-off-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/Ib353a63e3d0abc2b0b008e96aa2d9692cfc1b815 Link: https://lore.kernel.org/r/20210709023532.2133673-1-pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-07-25  Merge tag 'powerpc-5.14-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux  (Linus Torvalds)
Pull powerpc fixes from Michael Ellerman:
- Fix guest to host memory corruption in H_RTAS due to missing nargs check.
- Fix guest triggerable host crashes due to bad handling of nested guest TM state.
- Fix possible crashes due to incorrect reference counting in kvm_arch_vcpu_ioctl().
- Two commits fixing some regressions in KVM transactional memory handling introduced by the recent rework of the KVM code.
Thanks to Nicholas Piggin, Alexey Kardashevskiy, and Michael Neuling.

* tag 'powerpc-5.14-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  KVM: PPC: Book3S HV Nested: Sanitise H_ENTER_NESTED TM state
  KVM: PPC: Book3S: Fix H_RTAS rets buffer overflow
  KVM: PPC: Fix kvm_arch_vcpu_ioctl vcpu_load leak
  KVM: PPC: Book3S: Fix CONFIG_TRANSACTIONAL_MEM=n crash
  KVM: PPC: Book3S HV P9: Fix guest TM support
2021-07-25  Merge tag 'locking-urgent-2021-07-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull x86 jump label fix from Thomas Gleixner:
"A single fix for jump labels to prevent the compiler from aggressive un-inlining which results in a section mismatch"

* tag 'locking-urgent-2021-07-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  jump_labels: Mark __jump_label_transform() as __always_inlined to work around aggressive compiler un-inlining
2021-07-24  Merge tag 'riscv-for-linus-5.14-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux  (Linus Torvalds)
Pull RISC-V fixes from Palmer Dabbelt:
- properly set the memory size, which fixes 32-bit systems
- allow initrd to load anywhere in memory, rather than restricting it to the first 256MiB
- fix the 'mem=' parameter on 64-bit systems to properly account for the maximum supported memory now that the kernel is outside the linear map
- avoid installing mappings into the last 4KiB of memory, which conflicts with error values
- avoid the stack from being freed while it is being walked
- a handful of fixes to the new copy to/from user routines

* tag 'riscv-for-linus-5.14-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
  riscv: __asm_copy_to-from_user: Fix: Typos in comments
  riscv: __asm_copy_to-from_user: Remove unnecessary size check
  riscv: __asm_copy_to-from_user: Fix: fail on RV32
  riscv: __asm_copy_to-from_user: Fix: overrun copy
  riscv: stacktrace: pin the task's stack in get_wchan
  riscv: Make sure the kernel mapping does not overlap with IS_ERR_VALUE
  riscv: Make sure the linear mapping does not use the kernel mapping
  riscv: Fix memory_limit for 64-bit kernel
  RISC-V: load initrd wherever it fits into memory
  riscv: Fix 32-bit RISC-V boot failure
2021-07-23  riscv: __asm_copy_to-from_user: Fix: Typos in comments  (Akira Tsukamoto)
Fix typos and grammar mistakes in the comments and use a more intuitive label name. Signed-off-by: Akira Tsukamoto <akira.tsukamoto@gmail.com> Fixes: ca6eaaa210de ("riscv: __asm_copy_to-from_user: Optimize unaligned memory access and pipeline stall") Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-07-23  riscv: __asm_copy_to-from_user: Remove unnecessary size check  (Akira Tsukamoto)
Clean up: a size of 0 is evaluated in the next step, so the check is not required here. Signed-off-by: Akira Tsukamoto <akira.tsukamoto@gmail.com> Fixes: ca6eaaa210de ("riscv: __asm_copy_to-from_user: Optimize unaligned memory access and pipeline stall") Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-07-23  riscv: __asm_copy_to-from_user: Fix: fail on RV32  (Akira Tsukamoto)
Fix a bug in the conversion from bytes to bits on RV32. Register a3 holds the number of bytes, which must be multiplied by 8 to obtain the number of bits. LGREG is 2 for RV32 and 3 for RV64, so to multiply by 8 the shift must always be the constant 3; the value 2 was mistakenly used on RV32. Signed-off-by: Akira Tsukamoto <akira.tsukamoto@gmail.com> Fixes: ca6eaaa210de ("riscv: __asm_copy_to-from_user: Optimize unaligned memory access and pipeline stall") Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
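In C terms, the conversion being fixed (the actual code is RISC-V assembly using a shift):

    /* bytes -> bits is always "<< 3" (multiply by 8); it is unrelated to
     * LGREG, which is log2 of the register size: 2 on RV32, 3 on RV64. */
    bits = bytes << 3;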
2021-07-23  riscv: __asm_copy_to-from_user: Fix: overrun copy  (Akira Tsukamoto)
There were two causes of the overrun memory access. First, the threshold size was too small: aligning dst requires one SZREG and the unrolled word copy requires 8*SZREG, so the total must be at least 9*SZREG. Second, inside the unrolled copy, subtracting (8*SZREG-1) caused one extra loop iteration; the proper value is (8*SZREG). Signed-off-by: Akira Tsukamoto <akira.tsukamoto@gmail.com> Fixes: ca6eaaa210de ("riscv: __asm_copy_to-from_user: Optimize unaligned memory access and pipeline stall") Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
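The corrected bounds, sketched in C (SZREG is the register size in bytes; the label and variable names are illustrative):

    /* Aligning dst needs up to 1*SZREG and one unrolled pass copies
     * 8*SZREG, so anything smaller takes the byte-copy path. */
    if (size < 9 * SZREG)
        goto byte_copy;                    /* hypothetical label */

    /* Loop bound: subtract exactly 8*SZREG; using 8*SZREG - 1 ran one
     * iteration too many and overran the buffer. */
    word_copy_end = end - 8 * SZREG;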
2021-07-23  riscv: stacktrace: pin the task's stack in get_wchan  (Jisheng Zhang)
Pin the task's stack before calling walk_stackframe() in get_wchan(). This can fix the panic as reported by Andreas when CONFIG_VMAP_STACK=y:
[ 65.609696] Unable to handle kernel paging request at virtual address ffffffd0003bbde8
[ 65.610460] Oops [#1]
[ 65.610626] Modules linked in: virtio_blk virtio_mmio rtc_goldfish btrfs blake2b_generic libcrc32c xor raid6_pq sg dm_multipath dm_mod scsi_dh_rdac scsi_dh_emc scsi_dh_alua efivarfs
[ 65.611670] CPU: 2 PID: 1 Comm: systemd Not tainted 5.14.0-rc1-1.g34fe32a-default #1 openSUSE Tumbleweed (unreleased) c62f7109153e5a0897ee58ba52393ad99b070fd2
[ 65.612334] Hardware name: riscv-virtio,qemu (DT)
[ 65.613008] epc : get_wchan+0x5c/0x88
[ 65.613334] ra : get_wchan+0x42/0x88
[ 65.613625] epc : ffffffff800048a4 ra : ffffffff8000488a sp : ffffffd00021bb90
[ 65.614008] gp : ffffffff817709f8 tp : ffffffe07fe91b80 t0 : 00000000000001f8
[ 65.614411] t1 : 0000000000020000 t2 : 0000000000000000 s0 : ffffffd00021bbd0
[ 65.614818] s1 : ffffffd0003bbdf0 a0 : 0000000000000001 a1 : 0000000000000002
[ 65.615237] a2 : ffffffff81618008 a3 : 0000000000000000 a4 : 0000000000000000
[ 65.615637] a5 : ffffffd0003bc000 a6 : 0000000000000002 a7 : ffffffe27d370000
[ 65.616022] s2 : ffffffd0003bbd90 s3 : ffffffff8071a81e s4 : 0000000000003fff
[ 65.616407] s5 : ffffffffffffc000 s6 : 0000000000000000 s7 : ffffffff81618008
[ 65.616845] s8 : 0000000000000001 s9 : 0000000180000040 s10: 0000000000000000
[ 65.617248] s11: 000000000000016b t3 : 000000ff00000000 t4 : 0c6aec92de5e3fd7
[ 65.617672] t5 : fff78f60608fcfff t6 : 0000000000000078
[ 65.618088] status: 0000000000000120 badaddr: ffffffd0003bbde8 cause: 000000000000000d
[ 65.618621] [<ffffffff800048a4>] get_wchan+0x5c/0x88
[ 65.619008] [<ffffffff8022da88>] do_task_stat+0x7a2/0xa46
[ 65.619325] [<ffffffff8022e87e>] proc_tgid_stat+0xe/0x16
[ 65.619637] [<ffffffff80227dd6>] proc_single_show+0x46/0x96
[ 65.619979] [<ffffffff801ccb1e>] seq_read_iter+0x190/0x31e
[ 65.620341] [<ffffffff801ccd70>] seq_read+0xc4/0x104
[ 65.620633] [<ffffffff801a6bfe>] vfs_read+0x6a/0x112
[ 65.620922] [<ffffffff801a701c>] ksys_read+0x54/0xbe
[ 65.621206] [<ffffffff801a7094>] sys_read+0xe/0x16
[ 65.621474] [<ffffffff8000303e>] ret_from_syscall+0x0/0x2
[ 65.622169] ---[ end trace f24856ed2b8789c5 ]---
[ 65.622832] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
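The shape of the fix (a simplified sketch; the unwinder callback here is hypothetical):

    /* Pin the (vmalloc'ed, with CONFIG_VMAP_STACK) stack so it cannot be
     * freed while walk_stackframe() is traversing it. */
    if (!try_get_task_stack(task))
        return 0;
    walk_stackframe(task, NULL, find_wchan, &pc);  /* find_wchan: hypothetical */
    put_task_stack(task);
    return pc;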
2021-07-23  Merge tag 'm68k-for-v5.14-tag2' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/linux-m68k  (Linus Torvalds)
Pull m68k fix from Geert Uytterhoeven:
- Fix a Mac defconfig regression due to the IDE -> ATA switch

* tag 'm68k-for-v5.14-tag2' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/linux-m68k:
  m68k: MAC should select HAVE_PATA_PLATFORM
2021-07-23  Merge tag 'char-misc-5.14-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc  (Linus Torvalds)
Pull char/misc fixes from Greg KH:
"Here are some small char/misc driver fixes for 5.14-rc3. Included in here are:
- MAINTAINERS file updates for two changes in different driver subsystems
- mhi bus bugfixes
- nds32 bugfix that resolves a reported problem
All have been in linux-next with no reported problems"

* tag 'char-misc-5.14-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc:
  nds32: fix up stack guard gap
  MAINTAINERS: Change ACRN HSM driver maintainer
  MAINTAINERS: Update for VMCI driver
  bus: mhi: pci_generic: Fix inbound IPCR channel
  bus: mhi: core: Validate channel ID when processing command completions
  bus: mhi: pci_generic: Apply no-op for wake using sideband wake boolean
2021-07-23  Merge tag 'usb-5.14-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb  (Linus Torvalds)
Pull USB fixes from Greg KH:
"Here are some USB fixes for 5.14-rc3 to resolve a bunch of tiny problems reported. Included in here are:
- dtsi revert to resolve a problem which broke android systems that relied on the dts name to find the USB controller device. People are still working out the "real" solution for this, but for now the revert is needed.
- core USB fix for pipe calculation found by syzbot
- typec fixes
- gadget driver fixes
- new usb-serial device ids
- new USB quirks
- xhci fixes
- usb hub fixes for power management issues reported
- other tiny fixes
All have been in linux-next with no reported problems"

* tag 'usb-5.14-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (27 commits)
  USB: serial: cp210x: add ID for CEL EM3588 USB ZigBee stick
  Revert "USB: quirks: ignore remote wake-up on Fibocom L850-GL LTE modem"
  usb: cdc-wdm: fix build error when CONFIG_WWAN_CORE is not set
  Revert "arm64: dts: qcom: Harmonize DWC USB3 DT nodes name"
  usb: dwc2: gadget: Fix sending zero length packet in DDMA mode.
  usb: dwc2: Skip clock gating on Samsung SoCs
  usb: renesas_usbhs: Fix superfluous irqs happen after usb_pkt_pop()
  usb: dwc2: gadget: Fix GOUTNAK flow for Slave mode.
  usb: phy: Fix page fault from usb_phy_uevent
  usb: xhci: avoid renesas_usb_fw.mem when it's unusable
  usb: gadget: u_serial: remove WARN_ON on null port
  usb: dwc3: avoid NULL access of usb_gadget_driver
  usb: max-3421: Prevent corruption of freed memory
  usb: gadget: Fix Unbalanced pm_runtime_enable in tegra_xudc_probe
  MAINTAINERS: repair reference in USB IP DRIVER FOR HISILICON KIRIN 970
  usb: typec: stusb160x: Don't block probing of consumer of "connector" nodes
  usb: typec: stusb160x: register role switch before interrupt registration
  USB: usb-storage: Add LaCie Rugged USB3-FW to IGNORE_UAS
  usb: ehci: Prevent missed ehci interrupts with edge-triggered MSI
  usb: hub: Disable USB 3 device initiated lpm if exit latency is too high
  ...
2021-07-23  KVM: PPC: Book3S HV Nested: Sanitise H_ENTER_NESTED TM state  (Nicholas Piggin)
The H_ENTER_NESTED hypercall is handled by the L0, and it is a request by the L1 to switch the context of the vCPU over to that of its L2 guest, and return with an interrupt indication. The L1 is responsible for switching some registers to guest context, and the L0 switches others (including all the hypervisor privileged state). If the L2 MSR has TM active, then the L1 is responsible for recheckpointing the L2 TM state. Then the L1 exits to L0 via the H_ENTER_NESTED hcall, and the L0 saves the TM state as part of the exit, and then it recheckpoints the TM state as part of the nested entry and finally HRFIDs into the L2 with TM active MSR. Not efficient, but about the simplest approach for something that's horrendously complicated. Problems arise if the L1 exits to the L0 with a TM state which does not match the L2 TM state being requested. For example if the L1 is transactional but the L2 MSR is non-transactional, or vice versa. The L0's HRFID can take a TM Bad Thing interrupt and crash. Fix this by disallowing H_ENTER_NESTED in TM[T] state entirely, and then ensuring that if the L1 is suspended then the L2 must have TM active, and if the L1 is not suspended then the L2 must not have TM active. Fixes: 360cae313702 ("KVM: PPC: Book3S HV: Nested guest entry via hypercall") Cc: stable@vger.kernel.org # v4.20+ Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru> Acked-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
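The sanity rules, sketched in C (the return value and exact checks are illustrative):

    /* Reject H_ENTER_NESTED from a transactional L1 outright, and
     * otherwise require: L1 suspended <=> L2 has TM active. */
    if (MSR_TM_TRANSACTIONAL(l1_msr))
        return H_BAD_MODE;
    if (!!MSR_TM_SUSPENDED(l1_msr) != !!MSR_TM_ACTIVE(l2_msr))
        return H_BAD_MODE;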
2021-07-23  KVM: PPC: Book3S: Fix H_RTAS rets buffer overflow  (Nicholas Piggin)
The kvmppc_rtas_hcall() sets the host rtas_args.rets pointer based on the rtas_args.nargs that was provided by the guest. That guest nargs value is not range checked, so the guest can cause the host rets pointer to be pointed outside the args array. The individual rtas function handlers check the nargs and nrets values to ensure they are correct, but if they are not, the handlers store a -3 (0xfffffffd) failure indication in rets[0] which corrupts host memory. Fix this by testing up front whether the guest supplied nargs and nret would exceed the array size, and fail the hcall directly without storing a failure indication to rets[0]. Also expand on a comment about why we kill the guest and try not to return errors directly if we have a valid rets[0] pointer. Fixes: 8e591cb72047 ("KVM: PPC: Book3S: Add infrastructure to implement kernel-side RTAS calls") Cc: stable@vger.kernel.org # v3.10+ Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
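The up-front bounds check, sketched (simplified; the guest-supplied fields are big-endian in the real code):

    /* Validate nargs/nret before deriving the rets pointer, so a failure
     * indication is never stored through an out-of-bounds pointer. */
    if (be32_to_cpu(args.nargs) >= ARRAY_SIZE(args.args) ||
        be32_to_cpu(args.nret)  >= ARRAY_SIZE(args.args))
        return -EINVAL;   /* fail the hcall without touching rets[0] */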
2021-07-22  riscv: Make sure the kernel mapping does not overlap with IS_ERR_VALUE  (Alexandre Ghiti)
The check done in setup_bootmem currently only works for a 32-bit kernel, since the kernel mapping has been moved outside of the linear mapping for 64-bit kernels. So make sure that, for a 64-bit kernel, the kernel mapping does not overlap with the last 4K of the addressable memory. Signed-off-by: Alexandre Ghiti <alex@ghiti.fr> Fixes: 2bfc6cd81bd1 ("riscv: Move kernel mapping outside of linear mapping") Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-07-22  riscv: Make sure the linear mapping does not use the kernel mapping  (Alexandre Ghiti)
For a 64-bit kernel, the end of the address space is occupied by the kernel mapping. Currently, the functions that populate the kernel page tables (i.e. create_p*d_mapping) do not override existing mappings, so we must make sure the linear mapping does not map memory in the kernel mapping region, by clipping the memory above the memory limit. Signed-off-by: Alexandre Ghiti <alex@ghiti.fr> Fixes: c9811e379b21 ("riscv: Add mem kernel parameter support") Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>