path: root/arch/powerpc/kernel
Age  Commit message  Author
2023-10-23  powerpc/mm: Fix boot crash with FLATMEM  (Michael Ellerman)

Erhard reported that his G5 was crashing with v6.6-rc kernels:

  mpic: Setting up HT PICs workarounds for U3/U4
  BUG: Unable to handle kernel data access at 0xfeffbb62ffec65fe
  Faulting instruction address: 0xc00000000005dc40
  Oops: Kernel access of bad area, sig: 11 [#1]
  BE PAGE_SIZE=4K MMU=Hash SMP NR_CPUS=2 PowerMac
  Modules linked in:
  CPU: 0 PID: 0 Comm: swapper/0 Tainted: G T 6.6.0-rc3-PMacGS #1
  Hardware name: PowerMac11,2 PPC970MP 0x440101 PowerMac
  NIP: c00000000005dc40 LR: c000000000066660 CTR: c000000000007730
  REGS: c0000000022bf510 TRAP: 0380 Tainted: G T (6.6.0-rc3-PMacGS)
  MSR: 9000000000001032 <SF,HV,ME,IR,DR,RI> CR: 44004242 XER: 00000000
  IRQMASK: 3
  GPR00: 0000000000000000 c0000000022bf7b0 c0000000010c0b00 00000000000001ac
  GPR04: 0000000003c80000 0000000000000300 c0000000f20001ae 0000000000000300
  GPR08: 0000000000000006 feffbb62ffec65ff 0000000000000001 0000000000000000
  GPR12: 9000000000001032 c000000002362000 c000000000f76b80 000000000349ecd8
  GPR16: 0000000002367ba8 0000000002367f08 0000000000000006 0000000000000000
  GPR20: 00000000000001ac c000000000f6f920 c0000000022cd985 000000000000000c
  GPR24: 0000000000000300 00000003b0a3691d c0003e008030000e 0000000000000000
  GPR28: c00000000000000c c0000000f20001ee feffbb62ffec65fe 00000000000001ac
  NIP hash_page_do_lazy_icache+0x50/0x100
  LR __hash_page_4K+0x420/0x590
  Call Trace:
    hash_page_mm+0x364/0x6f0
    do_hash_fault+0x114/0x2b0
    data_access_common_virt+0x198/0x1f0
  --- interrupt: 300 at mpic_init+0x4bc/0x10c4
  NIP: c000000002020a5c LR: c000000002020a04 CTR: 0000000000000000
  REGS: c0000000022bf9f0 TRAP: 0300 Tainted: G T (6.6.0-rc3-PMacGS)
  MSR: 9000000000001032 <SF,HV,ME,IR,DR,RI> CR: 24004248 XER: 00000000
  DAR: c0003e008030000e DSISR: 40000000 IRQMASK: 1
  ...
  NIP mpic_init+0x4bc/0x10c4
  LR mpic_init+0x464/0x10c4
  --- interrupt: 300
    pmac_setup_one_mpic+0x258/0x2dc
    pmac_pic_init+0x28c/0x3d8
    init_IRQ+0x90/0x140
    start_kernel+0x57c/0x78c
    start_here_common+0x1c/0x20

A bisect pointed to the breakage beginning with commit 9fee28baa601 ("powerpc: implement the new page table range API").

Analysis of the oops pointed to a struct page with a corrupted compound_head being loaded via page_folio() -> _compound_head() in hash_page_do_lazy_icache().

The access by the mpic code is to an MMIO address, so the expectation is that the struct page for that address would be initialised by init_unavailable_range(), as pointed out by Aneesh.

Instrumentation showed that was not the case, which eventually led to the realisation that pfn_valid() was returning false for that address, causing the struct page to not be initialised.

Because the system is using FLATMEM, the version of pfn_valid() in memory_model.h is used:

  static inline int pfn_valid(unsigned long pfn)
  {
        ...
        return pfn >= pfn_offset && (pfn - pfn_offset) < max_mapnr;
  }

Which relies on max_mapnr being initialised. Early in boot max_mapnr is zero, meaning no PFNs are valid.

max_mapnr is initialised in mem_init(), called via:

  start_kernel()
    mm_core_init()          # init/main.c:928
      mem_init()

But that is too late for the usage in init_unavailable_range(), called via:

  start_kernel()
    setup_arch()            # init/main.c:893
      paging_init()
        free_area_init()
          init_unavailable_range()

Although max_mapnr is currently set in mem_init(), the value is actually already available much earlier, as soon as mem_topology_setup() has completed, which is also before paging_init() is called. So move the initialisation there, which causes paging_init() to correctly initialise the struct page and fixes the bug.

This bug seems to have been lurking for years, but went unnoticed because the pre-folio code was inspecting the uninitialised page->flags but not dereferencing it. Thanks to Erhard and Aneesh for help debugging.

Reported-by: Erhard Furtner <erhard_f@mailbox.org>
Closes: https://lore.kernel.org/all/20230929132750.3cd98452@yea/
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20231023112500.1550208-1-mpe@ellerman.id.au
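The shape of the fix, sketched against mem_topology_setup() in arch/powerpc/mm/mem.c (the existing pfn setup is shown for context, surrounding code elided):

  void __init mem_topology_setup(void)
  {
          max_low_pfn = max_pfn = memblock_end_of_DRAM() >> PAGE_SHIFT;
          ...
          /*
           * Set max_mapnr as soon as the memory topology is known, so
           * that the FLATMEM pfn_valid() gives correct answers by the
           * time paging_init() -> init_unavailable_range() runs.
           */
          max_mapnr = max_pfn;
  }

The corresponding assignment can then be dropped from mem_init().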
2023-10-11  powerpc/47x: Fix 47x syscall return crash  (Michael Ellerman)

Eddie reported that newer kernels were crashing during boot on his 476 FSP2 system:

  kernel tried to execute user page (b7ee2000) - exploit attempt? (uid: 0)
  BUG: Unable to handle kernel instruction fetch
  Faulting instruction address: 0xb7ee2000
  Oops: Kernel access of bad area, sig: 11 [#1]
  BE PAGE_SIZE=4K FSP-2
  Modules linked in:
  CPU: 0 PID: 61 Comm: mount Not tainted 6.1.55-d23900f.ppcnf-fsp2 #1
  Hardware name: ibm,fsp2 476fpe 0x7ff520c0 FSP-2
  NIP: b7ee2000 LR: 8c008000 CTR: 00000000
  REGS: bffebd83 TRAP: 0400   Not tainted (6.1.55-d23900f.ppcnf-fsp2)
  MSR: 00000030 <IR,DR>  CR: 00001000  XER: 20000000
  GPR00: c00110ac bffebe63 bffebe7e bffebe88 8c008000 00001000 00000d12 b7ee2000
  GPR08: 00000033 00000000 00000000 c139df10 48224824 1016c314 10160000 00000000
  GPR16: 10160000 10160000 00000008 00000000 10160000 00000000 10160000 1017f5b0
  GPR24: 1017fa50 1017f4f0 1017fa50 1017f740 1017f630 00000000 00000000 1017f4f0
  NIP [b7ee2000] 0xb7ee2000
  LR [8c008000] 0x8c008000
  Call Trace:
  Instruction dump:
  XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX
  XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX
  ---[ end trace 0000000000000000 ]---

The problem is in ret_from_syscall, where the check for icache_44x_need_flush is done. When the flush is needed the code jumps out-of-line to do the flush, and then intends to jump back to continue the syscall return. However the branch back to label 1b doesn't return to the correct location; instead it branches back just prior to the return to userspace, causing bogus register values to be used by the rfi.

The breakage was introduced by commit 6f76a01173cc ("powerpc/syscall: implement system call entry/exit logic in C for PPC32"), which inadvertently removed the "1" label and reused it elsewhere.

Fix it by adding named local labels in the correct locations. Note that the return label needs to be outside the ifdef so that CONFIG_PPC_47x=n compiles.

Fixes: 6f76a01173cc ("powerpc/syscall: implement system call entry/exit logic in C for PPC32")
Cc: stable@vger.kernel.org # v5.12+
Reported-by: Eddie James <eajames@linux.ibm.com>
Tested-by: Eddie James <eajames@linux.ibm.com>
Link: https://lore.kernel.org/linuxppc-dev/fdaadc46-7476-9237-e104-1d2168526e72@linux.ibm.com/
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Link: https://msgid.link/20231010114750.847794-1-mpe@ellerman.id.au
2023-10-10  powerpc/85xx: Fix math emulation exception  (Christophe Leroy)

Booting an mpc85xx_defconfig kernel on QEMU leads to:

  Bad trap at PC: fe9bab0, SR: 2d000, vector=800
  awk[82]: unhandled trap (5) at 0 nip fe9bab0 lr fe9e01c code 5 in libc-2.27.so[fe5a000+17a000]
  awk[82]: code: 3aa00000 3a800010 4bffe03c 9421fff0 7ca62b78 38a00000 93c10008 83c10008
  awk[82]: code: 38210010 4bffdec8 9421ffc0 7c0802a6 <fc00048e> d8010008 4815190d 93810030
  Trace/breakpoint trap
  WARNING: no useful console

This is because, although CONFIG_MATH_EMULATION is selected, exception 800 calls unknown_exception(). Call emulation_assist_interrupt() instead.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/066caa6d9480365da9b8ed83692d7101e10ac5f8.1695657339.git.christophe.leroy@csgroup.eu
2023-09-22  powerpc/stacktrace: Fix arch_stack_walk_reliable()  (Michael Ellerman)

The changes to copy_thread() made in commit eed7c420aac7 ("powerpc: copy_thread differentiate kthreads and user mode threads") inadvertently broke arch_stack_walk_reliable() because it has knowledge of the stack layout. Fix it by changing the condition to match the new logic in copy_thread().

The changes make the comments about the stack layout incorrect; rather than rephrasing them, just refer the reader to copy_thread(). Also the comment about the stack backchain is no longer true since commit edbd0387f324 ("powerpc: copy_thread add a back chain to the switch stack frame"), so remove that as well.

Fixes: eed7c420aac7 ("powerpc: copy_thread differentiate kthreads and user mode threads")
Reported-by: Joe Lawrence <joe.lawrence@redhat.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230921232441.1181843-1-mpe@ellerman.id.au
2023-09-18  powerpc/dexcr: Move HASHCHK trap handler  (Benjamin Gray)

Syzkaller reported a sleep in atomic context bug relating to the HASHCHK handler logic:

  BUG: sleeping function called from invalid context at arch/powerpc/kernel/traps.c:1518
  in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 25040, name: syz-executor
  preempt_count: 0, expected: 0
  RCU nest depth: 0, expected: 0
  no locks held by syz-executor/25040.
  irq event stamp: 34
  hardirqs last enabled at (33): [<c000000000048b38>] prep_irq_for_enabled_exit arch/powerpc/kernel/interrupt.c:56 [inline]
  hardirqs last enabled at (33): [<c000000000048b38>] interrupt_exit_user_prepare_main+0x148/0x600 arch/powerpc/kernel/interrupt.c:230
  hardirqs last disabled at (34): [<c00000000003e6a4>] interrupt_enter_prepare+0x144/0x4f0 arch/powerpc/include/asm/interrupt.h:176
  softirqs last enabled at (0): [<c000000000281954>] copy_process+0x16e4/0x4750 kernel/fork.c:2436
  softirqs last disabled at (0): [<0000000000000000>] 0x0
  CPU: 15 PID: 25040 Comm: syz-executor Not tainted 6.5.0-rc5-00001-g3ccdff6bb06d #3
  Hardware name: IBM,9105-22A POWER10 (raw) 0x800200 0xf000006 of:IBM,FW1040.00 (NL1040_021) hv:phyp pSeries
  Call Trace:
    [c0000000a8247ce0] [c00000000032b0e4] __might_resched+0x3b4/0x400 kernel/sched/core.c:10189
    [c0000000a8247d80] [c0000000008c7dc8] __might_fault+0xa8/0x170 mm/memory.c:5853
    [c0000000a8247dc0] [c00000000004160c] do_program_check+0x32c/0xb20 arch/powerpc/kernel/traps.c:1518
    [c0000000a8247e50] [c000000000009b2c] program_check_common_virt+0x3bc/0x3c0

To determine if a trap was caused by a HASHCHK instruction, we inspect the user instruction that triggered the trap. However this may sleep if the page needs to be faulted in (get_user_instr() reaches __get_user(), which calls might_fault() and triggers the bug message).

Move the HASHCHK handler logic to after we allow IRQs, which is fine because we are only interested in HASHCHK if it's a user space trap.

Fixes: 5bcba4e6c13f ("powerpc/dexcr: Handle hashchk exception")
Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230915034604.45393-1-bgray@linux.ibm.com
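A simplified sketch of the reordering in do_program_check() (is_hashchk_trap() stands in for the check that inspects the user instruction; surrounding logic elided):

  DEFINE_INTERRUPT_HANDLER(do_program_check)
  {
          ...
          /* IRQs are enabled first, as on the existing path */
          interrupt_cond_local_irq_enable(regs);

          /*
           * Only now inspect the user instruction: get_user_instr() may
           * fault the page in, which can sleep. That is legal here, but
           * was not legal before the IRQ enable above.
           */
          if (user_mode(regs) && is_hashchk_trap(regs)) {
                  ...
          }
          ...
  }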
2023-09-18  powerpc/watchpoints: Annotate atomic context in more places  (Benjamin Gray)
It can be easy to miss that the notifier mechanism invokes the callbacks in an atomic context, so add some comments to that effect on the two handlers we register here. Signed-off-by: Benjamin Gray <bgray@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230829063457.54157-4-bgray@linux.ibm.com
2023-09-18  powerpc/watchpoint: Disable pagefaults when getting user instruction  (Benjamin Gray)

This is called in an atomic context, so is not allowed to sleep if a user page needs to be faulted in, and has nowhere it can be deferred to. The pagefault_disable() function is documented as preventing user access methods from sleeping.

In practice the page will be mapped in nearly always, because we are reading the instruction that just triggered the watchpoint trap.

Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230829063457.54157-3-bgray@linux.ibm.com
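A sketch of the guarded read (__get_user_instr() stands in for the actual powerpc helper usage at the call site):

  ppc_inst_t instr;
  int err;

  /*
   * We are in an atomic (notifier) context: forbid the fault handler
   * from sleeping. If the page is not resident, the read now fails
   * fast instead of sleeping.
   */
  pagefault_disable();
  err = __get_user_instr(instr, (void __user *)regs->nip);
  pagefault_enable();

  if (err)
          return;   /* treat the instruction as unreadable */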
2023-09-18  powerpc/watchpoints: Disable preemption in thread_change_pc()  (Benjamin Gray)
thread_change_pc() uses CPU local data, so must be protected from swapping CPUs while it is reading the breakpoint struct. The error is more noticeable after 1e60f3564bad ("powerpc/watchpoints: Track perf single step directly on the breakpoint"), which added an unconditional __this_cpu_read() call in thread_change_pc(). However the existing __this_cpu_read() that runs if a breakpoint does need to be re-inserted has the same issue. Signed-off-by: Benjamin Gray <bgray@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230829063457.54157-2-bgray@linux.ibm.com
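A sketch of the guard (bp_per_reg and nr_wp_slots() are the existing powerpc hw_breakpoint symbols; loop body elided):

  void thread_change_pc(struct task_struct *tsk, struct pt_regs *regs)
  {
          int i;

          /* __this_cpu_read() is only stable while we cannot migrate */
          preempt_disable();

          for (i = 0; i < nr_wp_slots(); i++) {
                  /* ... __this_cpu_read(bp_per_reg[i]) ... */
          }

          preempt_enable();
  }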
2023-08-31  Merge tag 'powerpc-6.6-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux  (Linus Torvalds)

Pull powerpc updates from Michael Ellerman:

 - Add HOTPLUG_SMT support (/sys/devices/system/cpu/smt) and honour the configured SMT state when hotplugging CPUs into the system

 - Combine final TLB flush and lazy TLB mm shootdown IPIs when using the Radix MMU to avoid a broadcast TLBIE flush on exit

 - Drop the exclusion between ptrace/perf watchpoints, and drop the now unused associated arch hooks

 - Add support for the "nohlt" command line option to disable CPU idle

 - Add support for -fpatchable-function-entry for ftrace, with GCC >= 13.1

 - Rework memory block size determination, and support 256MB size on systems with GPUs that have hotpluggable memory

 - Various other small features and fixes

Thanks to Andrew Donnellan, Aneesh Kumar K.V, Arnd Bergmann, Athira Rajeev, Benjamin Gray, Christophe Leroy, Frederic Barrat, Gautam Menghani, Geoff Levand, Hari Bathini, Immad Mir, Jialin Zhang, Joel Stanley, Jordan Niethe, Justin Stitt, Kajol Jain, Kees Cook, Krzysztof Kozlowski, Laurent Dufour, Liang He, Linus Walleij, Mahesh Salgaonkar, Masahiro Yamada, Michal Suchanek, Nageswara R Sastry, Nathan Chancellor, Nathan Lynch, Naveen N Rao, Nicholas Piggin, Nick Desaulniers, Omar Sandoval, Randy Dunlap, Reza Arbab, Rob Herring, Russell Currey, Sourabh Jain, Thomas Gleixner, Trevor Woerner, Uwe Kleine-König, Vaibhav Jain, Xiongfeng Wang, Yuan Tan, Zhang Rui, and Zheng Zengkai.

* tag 'powerpc-6.6-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (135 commits)
  macintosh/ams: linux/platform_device.h is needed
  powerpc/xmon: Reapply "Relax frame size for clang"
  powerpc/mm/book3s64: Use 256M as the upper limit with coherent device memory attached
  powerpc/mm/book3s64: Fix build error with SPARSEMEM disabled
  powerpc/iommu: Fix notifiers being shared by PCI and VIO buses
  powerpc/mpc5xxx: Add missing fwnode_handle_put()
  powerpc/config: Disable SLAB_DEBUG_ON in skiroot
  powerpc/pseries: Remove unused hcall tracing instruction
  powerpc/pseries: Fix hcall tracepoints with JUMP_LABEL=n
  powerpc: dts: add missing space before {
  powerpc/eeh: Use pci_dev_id() to simplify the code
  powerpc/64s: Move CPU -mtune options into Kconfig
  powerpc/powermac: Fix unused function warning
  powerpc/pseries: Rework lppaca_shared_proc() to avoid DEBUG_PREEMPT
  powerpc: Don't include lppaca.h in paca.h
  powerpc/pseries: Move hcall_vphn() prototype into vphn.h
  powerpc/pseries: Move VPHN constants into vphn.h
  cxl: Drop unused detach_spa()
  powerpc: Drop zalloc_maybe_bootmem()
  powerpc/powernv: Use struct opal_prd_msg in more places
  ...
2023-08-30  Merge tag 'integrity-v6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity  (Linus Torvalds)

Pull integrity subsystem updates from Mimi Zohar:

 - With commit 099f26f22f58 ("integrity: machine keyring CA configuration") certificates may be loaded onto the IMA keyring, directly or indirectly signed by keys on either the "builtin" or the "machine" keyrings. With the ability for the system/machine owner to sign the IMA policy itself without needing to recompile the kernel, update the IMA architecture specific policy rules to require the IMA policy itself be signed.

   [ As commit 099f26f22f58 was upstreamed in linux-6.4, updating the IMA architecture specific policy now to require signed IMA policies may break userspace expectations. ]

 - IMA only checked the file data hash was not on the system blacklist keyring for files with an appended signature (e.g. kernel modules, Power kernel image). Check all file data hashes regardless of how it was signed

 - Code cleanup, and a kernel-doc update

* tag 'integrity-v6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity:
  kexec_lock: Replace kexec_mutex() by kexec_lock() in two comments
  ima: require signed IMA policy when UEFI secure boot is enabled
  integrity: Always reference the blacklist keyring with appraisal
  ima: Remove deprecated IMA_TRUSTED_KEYRING Kconfig
2023-08-29  Merge tag 'dma-mapping-6.6-2023-08-29' of git://git.infradead.org/users/hch/dma-mapping  (Linus Torvalds)

Pull dma-mapping updates from Christoph Hellwig:

 - allow dynamic sizing of the swiotlb buffer, to cater for secure virtualization workloads that require all I/O to be bounce buffered (Petr Tesarik)

 - move a declaration to a header (Arnd Bergmann)

 - check for memory region overlap in dma-contiguous (Binglei Wang)

 - remove the somewhat dangerous runtime swiotlb-xen enablement and unexport is_swiotlb_active (Christoph Hellwig, Juergen Gross)

 - per-node CMA improvements (Yajun Deng)

* tag 'dma-mapping-6.6-2023-08-29' of git://git.infradead.org/users/hch/dma-mapping:
  swiotlb: optimize get_max_slots()
  swiotlb: move slot allocation explanation comment where it belongs
  swiotlb: search the software IO TLB only if the device makes use of it
  swiotlb: allocate a new memory pool when existing pools are full
  swiotlb: determine potential physical address limit
  swiotlb: if swiotlb is full, fall back to a transient memory pool
  swiotlb: add a flag whether SWIOTLB is allowed to grow
  swiotlb: separate memory pool data from other allocator data
  swiotlb: add documentation and rename swiotlb_do_find_slots()
  swiotlb: make io_tlb_default_mem local to swiotlb.c
  swiotlb: bail out of swiotlb_init_late() if swiotlb is already allocated
  dma-contiguous: check for memory region overlap
  dma-contiguous: support numa CMA for specified node
  dma-contiguous: support per-numa CMA for all architectures
  dma-mapping: move arch_dma_set_mask() declaration to header
  swiotlb: unexport is_swiotlb_active
  x86: always initialize xen-swiotlb when xen-pcifront is enabling
  xen/pci: add flag for PCI passthrough being possible
2023-08-29  Merge tag 'mm-nonmm-stable-2023-08-28-22-48' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm  (Linus Torvalds)

Pull non-MM updates from Andrew Morton:

 - An extensive rework of kexec and crash Kconfig from Eric DeVolder ("refactor Kconfig to consolidate KEXEC and CRASH options")

 - kernel.h slimming work from Andy Shevchenko ("kernel.h: Split out a couple of macros to args.h")

 - gdb feature work from Kuan-Ying Lee ("Add GDB memory helper commands")

 - vsprintf inclusion rationalization from Andy Shevchenko ("lib/vsprintf: Rework header inclusions")

 - Switch the handling of kdump from a udev scheme to in-kernel handling, by Eric DeVolder ("crash: Kernel handling of CPU and memory hot un/plug")

 - Many singleton patches to various parts of the tree

* tag 'mm-nonmm-stable-2023-08-28-22-48' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (81 commits)
  document while_each_thread(), change first_tid() to use for_each_thread()
  drivers/char/mem.c: shrink character device's devlist[] array
  x86/crash: optimize CPU changes
  crash: change crash_prepare_elf64_headers() to for_each_possible_cpu()
  crash: hotplug support for kexec_load()
  x86/crash: add x86 crash hotplug support
  crash: memory and CPU hotplug sysfs attributes
  kexec: exclude elfcorehdr from the segment digest
  crash: add generic infrastructure for crash hotplug support
  crash: move a few code bits to setup support of crash hotplug
  kstrtox: consistently use _tolower()
  kill do_each_thread()
  nilfs2: fix WARNING in mark_buffer_dirty due to discarded buffer reuse
  scripts/bloat-o-meter: count weak symbol sizes
  treewide: drop CONFIG_EMBEDDED
  lockdep: fix static memory detection even more
  lib/vsprintf: declare no_hash_pointers in sprintf.h
  lib/vsprintf: split out sprintf() and friends
  kernel/fork: stop playing lockless games for exe_file replacement
  adfs: delete unused "union adfs_dirtail" definition
  ...
2023-08-28  Merge tag 'v6.6-vfs.fchmodat2' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs  (Linus Torvalds)

Pull fchmodat2 system call from Christian Brauner:

 "This adds the fchmodat2() system call. It is a revised version of the fchmodat() system call, adding a missing flag argument. Support for both AT_SYMLINK_NOFOLLOW and AT_EMPTY_PATH are included.

  Adding this system call revision has been a longstanding request but so far has always fallen through the cracks. While the kernel implementation of fchmodat() does not have a flag argument the libc provided POSIX-compliant fchmodat(3) version does. Both glibc and musl have to implement a workaround in order to support AT_SYMLINK_NOFOLLOW (see [1] and [2]).

  The workaround is brittle because it relies not just on O_PATH and O_NOFOLLOW semantics and procfs magic links but also on our rather inconsistent symlink semantics.

  This gives userspace a proper fchmodat2() system call that libcs can use to properly implement fchmodat(3) and allows them to get rid of their hacks. In this case it will immediately benefit them as the current workaround is already defunct because of the aforementioned inconsistencies.

  In addition to AT_SYMLINK_NOFOLLOW, give userspace the ability to use AT_EMPTY_PATH with fchmodat2(). This is already possible with fchownat() so there's no reason to not also support it for fchmodat2().

  The implementation is simple and comes with selftests. Implementation of the system call and wiring up the system call are done as separate patches even though they could arguably be one patch. But in case there are merge conflicts from other system call additions it can be beneficial to have separate patches"

Link: https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/fchmodat.c;h=17eca54051ee28ba1ec3f9aed170a62630959143;hb=a492b1e5ef7ab50c6fdd4e4e9879ea5569ab0a6c#l35 [1]
Link: https://git.musl-libc.org/cgit/musl/tree/src/stat/fchmodat.c?id=718f363bc2067b6487900eddc9180c84e7739f80#n28 [2]

* tag 'v6.6-vfs.fchmodat2' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs:
  selftests: fchmodat2: remove duplicate unneeded defines
  fchmodat2: add support for AT_EMPTY_PATH
  selftests: Add fchmodat2 selftest
  arch: Register fchmodat2, usually as syscall 452
  fs: Add fchmodat2()
  Non-functional cleanup of a "__user * filename"
2023-08-25  powerpc/iommu: Fix notifiers being shared by PCI and VIO buses  (Russell Currey)

fail_iommu_setup() registers the fail_iommu_bus_notifier struct to both PCI and VIO buses. struct notifier_block is a linked list node, so this causes any notifiers later registered to either bus type to also be registered to the other since they share the same node.

This causes issues in (at least) the vgaarb code, which registers a notifier for PCI buses. pci_notify() ends up being called on a vio device, converted with to_pci_dev() even though it's not a PCI device, and finally makes a bad access in vga_arbiter_add_pci_device() as discovered with KASAN:

  BUG: KASAN: slab-out-of-bounds in vga_arbiter_add_pci_device+0x60/0xe00
  Read of size 4 at addr c000000264c26fdc by task swapper/0/1
  Call Trace:
    dump_stack_lvl+0x1bc/0x2b8 (unreliable)
    print_report+0x3f4/0xc60
    kasan_report+0x244/0x698
    __asan_load4+0xe8/0x250
    vga_arbiter_add_pci_device+0x60/0xe00
    pci_notify+0x88/0x444
    notifier_call_chain+0x104/0x320
    blocking_notifier_call_chain+0xa0/0x140
    device_add+0xac8/0x1d30
    device_register+0x58/0x80
    vio_register_device_node+0x9ac/0xce0
    vio_bus_scan_register_devices+0xc4/0x13c
    __machine_initcall_pseries_vio_device_init+0x94/0xf0
    do_one_initcall+0x12c/0xaa8
    kernel_init_freeable+0xa48/0xba8
    kernel_init+0x64/0x400
    ret_from_kernel_thread+0x5c/0x64

Fix this by creating separate notifier_block structs for each bus type.

Fixes: d6b9a81b2a45 ("powerpc: IOMMU fault injection")
Reported-by: Nageswara R Sastry <rnsastry@linux.ibm.com>
Signed-off-by: Russell Currey <ruscur@russell.cc>
Tested-by: Nageswara R Sastry <rnsastry@linux.ibm.com>
Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com>
[mpe: Add #ifdef to fix CONFIG_IBMVIO=n build]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230322035322.328709-1-ruscur@russell.cc
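A sketch of the fix (callback body elided, notifier names assumed for illustration):

  static int fail_iommu_bus_notify(struct notifier_block *nb,
                                   unsigned long action, void *data)
  {
          /* ... existing fault-injection setup per device ... */
          return 0;
  }

  /*
   * One notifier_block per bus type: notifier_block is a list node,
   * so a single struct must never be registered on two chains.
   */
  static struct notifier_block fail_iommu_pci_bus_notifier = {
          .notifier_call = fail_iommu_bus_notify,
  };

  #ifdef CONFIG_IBMVIO
  static struct notifier_block fail_iommu_vio_bus_notifier = {
          .notifier_call = fail_iommu_bus_notify,
  };
  #endif

  static int __init fail_iommu_setup(void)
  {
          bus_register_notifier(&pci_bus_type, &fail_iommu_pci_bus_notifier);
  #ifdef CONFIG_IBMVIO
          bus_register_notifier(&vio_bus_type, &fail_iommu_vio_bus_notifier);
  #endif
          return 0;
  }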
2023-08-24  powerpc: Drop zalloc_maybe_bootmem()  (Michael Ellerman)
The only callers of zalloc_maybe_bootmem() are PCI setup routines. These used to be called early during boot before slab setup, and also during runtime due to hotplug. But commit 5537fcb319d0 ("powerpc/pci: Add ppc_md.discover_phbs()") moved the boot-time calls later, after slab setup, meaning there's no longer any need for zalloc_maybe_bootmem(), kzalloc() can be used in all cases. Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230823055430.752550-1-mpe@ellerman.id.au
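The conversion is mechanical; an illustrative call site (variable name assumed) looks like:

  -       phb = zalloc_maybe_bootmem(sizeof(struct pci_controller), GFP_KERNEL);
  +       phb = kzalloc(sizeof(struct pci_controller), GFP_KERNEL);

kzalloc() is safe here precisely because ppc_md.discover_phbs() now runs after slab is up.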
2023-08-23  powerpc/85xx: Mark some functions static and add missing includes to fix no previous prototype error  (Christophe Leroy)

corenet{32/64}_smp_defconfig leads to:

  CC      arch/powerpc/sysdev/ehv_pic.o
  arch/powerpc/sysdev/ehv_pic.c:45:6: error: no previous prototype for 'ehv_pic_unmask_irq' [-Werror=missing-prototypes]
     45 | void ehv_pic_unmask_irq(struct irq_data *d)
        |      ^~~~~~~~~~~~~~~~~~
  arch/powerpc/sysdev/ehv_pic.c:52:6: error: no previous prototype for 'ehv_pic_mask_irq' [-Werror=missing-prototypes]
     52 | void ehv_pic_mask_irq(struct irq_data *d)
        |      ^~~~~~~~~~~~~~~~
  arch/powerpc/sysdev/ehv_pic.c:59:6: error: no previous prototype for 'ehv_pic_end_irq' [-Werror=missing-prototypes]
     59 | void ehv_pic_end_irq(struct irq_data *d)
        |      ^~~~~~~~~~~~~~~
  arch/powerpc/sysdev/ehv_pic.c:66:6: error: no previous prototype for 'ehv_pic_direct_end_irq' [-Werror=missing-prototypes]
     66 | void ehv_pic_direct_end_irq(struct irq_data *d)
        |      ^~~~~~~~~~~~~~~~~~~~~~
  arch/powerpc/sysdev/ehv_pic.c:71:5: error: no previous prototype for 'ehv_pic_set_affinity' [-Werror=missing-prototypes]
     71 | int ehv_pic_set_affinity(struct irq_data *d, const struct cpumask *dest,
        |     ^~~~~~~~~~~~~~~~~~~~
  arch/powerpc/sysdev/ehv_pic.c:112:5: error: no previous prototype for 'ehv_pic_set_irq_type' [-Werror=missing-prototypes]
    112 | int ehv_pic_set_irq_type(struct irq_data *d, unsigned int flow_type)
        |     ^~~~~~~~~~~~~~~~~~~~
  CC      arch/powerpc/sysdev/fsl_rio.o
  arch/powerpc/sysdev/fsl_rio.c:102:5: error: no previous prototype for 'fsl_rio_mcheck_exception' [-Werror=missing-prototypes]
    102 | int fsl_rio_mcheck_exception(struct pt_regs *regs)
        |     ^~~~~~~~~~~~~~~~~~~~~~~~
  arch/powerpc/sysdev/fsl_rio.c:306:5: error: no previous prototype for 'fsl_map_inb_mem' [-Werror=missing-prototypes]
    306 | int fsl_map_inb_mem(struct rio_mport *mport, dma_addr_t lstart,
        |     ^~~~~~~~~~~~~~~
  arch/powerpc/sysdev/fsl_rio.c:357:6: error: no previous prototype for 'fsl_unmap_inb_mem' [-Werror=missing-prototypes]
    357 | void fsl_unmap_inb_mem(struct rio_mport *mport, dma_addr_t lstart)
        |      ^~~~~~~~~~~~~~~~~
  arch/powerpc/sysdev/fsl_rio.c:445:5: error: no previous prototype for 'fsl_rio_setup' [-Werror=missing-prototypes]
    445 | int fsl_rio_setup(struct platform_device *dev)
        |     ^~~~~~~~~~~~~
  CC      arch/powerpc/sysdev/fsl_rmu.o
  arch/powerpc/sysdev/fsl_rmu.c:362:6: error: no previous prototype for 'msg_unit_error_handler' [-Werror=missing-prototypes]
    362 | void msg_unit_error_handler(void)
        |      ^~~~~~~~~~~~~~~~~~~~~~
  CC      arch/powerpc/platforms/85xx/corenet_generic.o
  arch/powerpc/platforms/85xx/corenet_generic.c:33:13: error: no previous prototype for 'corenet_gen_pic_init' [-Werror=missing-prototypes]
     33 | void __init corenet_gen_pic_init(void)
        |             ^~~~~~~~~~~~~~~~~~~~
  arch/powerpc/platforms/85xx/corenet_generic.c:51:13: error: no previous prototype for 'corenet_gen_setup_arch' [-Werror=missing-prototypes]
     51 | void __init corenet_gen_setup_arch(void)
        |             ^~~~~~~~~~~~~~~~~~~~~~
  arch/powerpc/platforms/85xx/corenet_generic.c:104:12: error: no previous prototype for 'corenet_gen_publish_devices' [-Werror=missing-prototypes]
    104 | int __init corenet_gen_publish_devices(void)
        |            ^~~~~~~~~~~~~~~~~~~~~~~~~~~
  CC      arch/powerpc/platforms/85xx/qemu_e500.o
  arch/powerpc/platforms/85xx/qemu_e500.c:28:13: error: no previous prototype for 'qemu_e500_pic_init' [-Werror=missing-prototypes]
     28 | void __init qemu_e500_pic_init(void)
        |             ^~~~~~~~~~~~~~~~~~
  CC      arch/powerpc/kernel/pmc.o
  arch/powerpc/kernel/pmc.c:78:6: error: no previous prototype for 'power4_enable_pmcs' [-Werror=missing-prototypes]
     78 | void power4_enable_pmcs(void)
        |      ^~~~~~~~~~~~~~~~~~

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/c90780017b624b91771a3e4240dcbadc68137915.1692684784.git.christophe.leroy@csgroup.eu
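The fixes take one of two mechanical forms. For a function only referenced within its own file, e.g. via an irq_chip table in ehv_pic.c, it can simply become static (sketch):

  -void ehv_pic_unmask_irq(struct irq_data *d)
  +static void ehv_pic_unmask_irq(struct irq_data *d)

Functions that are genuinely called from elsewhere instead get the header that declares their prototype included at the definition site, so declaration and definition are checked against each other.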
2023-08-22  powerpc/ftrace: Add support for -fpatchable-function-entry  (Naveen N Rao)
GCC v13.1 updated support for -fpatchable-function-entry on ppc64le to emit nops after the local entry point, rather than before it. This allows us to use this in the kernel for ftrace purposes. A new script is added under arch/powerpc/tools/ to help detect if nops are emitted after the function local entry point, or before the global entry point. With -fpatchable-function-entry, we no longer have the profiling instructions generated at function entry, so we only need to validate the presence of two nops at the ftrace location in ftrace_init_nop(). We patch the preceding instruction with 'mflr r0' to match the -mprofile-kernel ABI for subsequent ftrace use. This changes the profiling instructions used on ppc32. The default -pg option emits an additional 'stw' instruction after 'mflr r0' and before the branch to _mcount 'bl _mcount'. This is very similar to the original -mprofile-kernel implementation on ppc64le, where an additional 'std' instruction was used to save LR to its save location in the caller's stackframe. Subsequently, this additional store was removed in later compiler versions for performance reasons. The same reasons apply for ppc32 so we only patch in a 'mflr r0'. Signed-off-by: Naveen N Rao <naveen@kernel.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/68586d22981a2c3bb45f27a2b621173d10a7d092.1687166935.git.naveen@kernel.org
2023-08-22  powerpc/ftrace: Implement ftrace_replace_code()  (Naveen N Rao)
Implement ftrace_replace_code() to consolidate logic from the different ftrace patching routines: ftrace_make_nop(), ftrace_make_call() and ftrace_modify_call(). Note that ftrace_make_call() is still required primarily to handle patching modules during their load time. The other two routines should no longer be called. This lays the groundwork to enable better control in patching ftrace locations, including the ability to nop-out preceding profiling instructions when ftrace is disabled. Signed-off-by: Naveen N Rao <naveen@kernel.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/c28f852225646b0561bbf3c1d22d03f041ace8e0.1687166935.git.naveen@kernel.org
2023-08-22  powerpc/ftrace: Replace use of ftrace_call_replace() with ftrace_create_branch_inst()  (Naveen N Rao)

ftrace_create_branch_inst() is clearer about its intent than ftrace_call_replace().

Signed-off-by: Naveen N Rao <naveen@kernel.org>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/953513b88fa922ba7a66d772dc1310710efe9177.1687166935.git.naveen@kernel.org
2023-08-22  powerpc/ftrace: Simplify ftrace_modify_call()  (Naveen N Rao)
Now that we validate the ftrace location during initialization in ftrace_init_nop(), we can simplify ftrace_modify_call() to patch-in the updated branch instruction without worrying about the instructions surrounding the ftrace location. Note that we continue to ensure we have the expected branch instruction at the ftrace location before patching it with the updated branch destination. Signed-off-by: Naveen N Rao <naveen@kernel.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/06275720939f8ee4c2f61c9e9a3e89b1fa3c441d.1687166935.git.naveen@kernel.org
2023-08-22  powerpc/ftrace: Simplify ftrace_make_call()  (Naveen N Rao)
Now that we validate the ftrace location during initialization in ftrace_init_nop(), we can simplify ftrace_make_call() to replace the nop without worrying about the instructions surrounding the ftrace location. Note that we continue to ensure that we have a nop at the ftrace location before patching it. Signed-off-by: Naveen N Rao <naveen@kernel.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/2d28866d2f556488a663981abe5621511efb207b.1687166935.git.naveen@kernel.org
2023-08-22  powerpc/ftrace: Simplify ftrace_make_nop()  (Naveen N Rao)
Now that we validate the ftrace location during initialization in ftrace_init_nop(), we can simplify ftrace_make_nop() to patch-in the nop without worrying about the instructions surrounding the ftrace location. Note that we continue to ensure that we have a bl to ftrace_[regs_]caller at the ftrace location before nop-ing it out. Signed-off-by: Naveen N Rao <naveen@kernel.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/e12ccbf28c50c3a07fb614f4d392e55f7098a729.1687166935.git.naveen@kernel.org
2023-08-22  powerpc/ftrace: Add separate ftrace_init_nop() with additional validation  (Naveen N Rao)
Currently, we validate instructions around the ftrace location every time we have to enable/disable ftrace. Introduce ftrace_init_nop() to instead perform all the validation during ftrace initialization. This allows us to simply patch the necessary instructions during enabling/disabling ftrace. Signed-off-by: Naveen N Rao <naveen@kernel.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/f373684081e8e98be09b7f44d2d93069768324dc.1687166935.git.naveen@kernel.org
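A condensed sketch of the idea, assuming the simplest case of a single nop at the ftrace site (the real routine validates the full expected instruction sequence and handles the profiling-sequence variants):

  int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
  {
          ppc_inst_t old;

          /* Read and validate the instruction at the ftrace site once. */
          if (copy_inst_from_kernel_nofault(&old, (u32 *)rec->ip))
                  return -EFAULT;
          if (!ppc_inst_equal(old, ppc_inst(PPC_RAW_NOP())))
                  return -EINVAL;

          /* Enable/disable paths may now patch without re-validating. */
          return 0;
  }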
2023-08-22  powerpc/ftrace: Stop re-purposing linker generated long branches for ftrace  (Naveen N Rao)

Commit 67361cf8071286 ("powerpc/ftrace: Handle large kernel configs") added ftrace support for ppc64 kernel images with a text section larger than 32MB. The patch did two things:
 1. Add stubs at the end of .text to branch into ftrace_[regs_]caller for functions that were out of branch range.
 2. Re-purpose linker-generated long branches to _mcount to instead branch to ftrace_[regs_]caller.

Before that, we only supported kernel .text up to ~32MB. With the above, we now support up to ~96MB:
 - The first 32MB of kernel text can branch directly into ftrace_[regs_]caller since that symbol is usually at the beginning.
 - The modified long_branch from (2) above is used by the next 32MB of kernel text.
 - The next 32MB of kernel text can use the stub at the end of text to branch back to ftrace_[regs_]caller.

While re-purposing the long branch works in practice, it still restricts ftrace to kernel text up to ~96MB. The stub at the end of kernel text from (1) already enables us to extend ftrace support for kernel text up to 64MB, which fulfils the original requirement. Further, once we switch to -fpatchable-function-entry, there will not be a long branch that we can use.

Stop re-purposing the linker-generated long branches for ftrace to simplify the code. If there are good reasons to support ftrace on kernels beyond 64MB, we can consider adding support by using -fpatchable-function-entry.

Signed-off-by: Naveen N Rao <naveen@kernel.org>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/33fa3be97f8e1f2171254ef2e1b0d5c8836c11fd.1687166935.git.naveen@kernel.org
2023-08-22  powerpc/ftrace: Refactor ftrace_modify_code()  (Naveen N Rao)
Split up ftrace_modify_code() into a few helpers for future use. Also update error messages accordingly. Signed-off-by: Naveen N Rao <naveen@kernel.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/a8daa49712b44ff539e6c22a2ea649a540386798.1687166935.git.naveen@kernel.org
2023-08-22  powerpc/ftrace: Consolidate ftrace support into fewer files  (Naveen N Rao)

ftrace_low.S has just the _mcount stub and return_to_handler(). Merge this back into ftrace_mprofile.S and ftrace_64_pg.S to keep all ftrace code together, and to allow those to evolve independently.

ftrace_mprofile.S is also not an entirely accurate name, since it also holds ppc32 code. This will be all the more incorrect once support for -fpatchable-function-entry is added. Rename files here to more accurately describe the code:
 - ftrace_mprofile.S is renamed to ftrace_entry.S
 - ftrace_pg.c is renamed to ftrace_64_pg.c
 - ftrace_64_pg.S is renamed to ftrace_64_pg_entry.S

Signed-off-by: Naveen N Rao <naveen@kernel.org>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/b900c9a8bba9d6c3c295e0f99886acf3e5bf6f7b.1687166935.git.naveen@kernel.org
2023-08-22  powerpc/ftrace: Extend ftrace support for large kernels to ppc32  (Naveen N Rao)
Commit 67361cf8071286 ("powerpc/ftrace: Handle large kernel configs") added ftrace support for ppc64 kernel images with a text section larger than 32MB. The approach itself isn't specific to ppc64, so extend the same to also work on ppc32. While at it, reduce the space reserved for the stub from 64 bytes to 32 bytes since the different stub variants are all less than 8 instructions. To reduce use of #ifdef, a stub implementation is provided for kernel_toc_address() and -SZ_2G is cast to 'long long' to prevent errors on ppc32. Signed-off-by: Naveen N Rao <naveen@kernel.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/9fa3258cbb9105cf8a0a8135214d44ffbc75fe84.1687166935.git.naveen@kernel.org
2023-08-22  powerpc/ftrace: Use FTRACE_REGS_ADDR to identify the correct ftrace trampoline  (Naveen N Rao)
Instead of keying off DYNAMIC_FTRACE_WITH_REGS, use FTRACE_REGS_ADDR to identify the proper ftrace trampoline address to use. Signed-off-by: Naveen N Rao <naveen@kernel.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/6045a280a57a7ea937a5bb13ccac747026dbfb07.1687166935.git.naveen@kernel.org
2023-08-22  powerpc/ftrace: Simplify function_graph support in ftrace.c  (Naveen N Rao)
Since we now support DYNAMIC_FTRACE_WITH_ARGS across ppc32 and ppc64 ELFv2, we can simplify function_graph tracer support code in ftrace.c Signed-off-by: Naveen N Rao <naveen@kernel.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/4dc92c4b1ed444dc62b748ae7327acdb9e096864.1687166935.git.naveen@kernel.org
2023-08-22  powerpc64/ftrace: Move ELFv1 and -pg support code into a separate file  (Naveen N Rao)
ELFv1 support is deprecated and on the way out. Pre -mprofile-kernel ftrace support (-pg only) is very limited and is retained primarily for clang builds. It won't be necessary once clang lands support for -fpatchable-function-entry. Copy the existing ftrace code supporting these into ftrace_pg.c. ftrace.c can then be refactored and enhanced with a focus on ppc32 and ppc64 ELFv2. Signed-off-by: Naveen N Rao <naveen@kernel.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/1eb6cc6c3141ddb77a2a25f8a9e83d83ff312b02.1687166935.git.naveen@kernel.org
2023-08-18  nmi_backtrace: allow excluding an arbitrary CPU  (Douglas Anderson)
The APIs that allow backtracing across CPUs have always had a way to exclude the current CPU. This convenience means callers didn't need to find a place to allocate a CPU mask just to handle the common case. Let's extend the API to take a CPU ID to exclude instead of just a boolean. This isn't any more complex for the API to handle and allows the hardlockup detector to exclude a different CPU (the one it already did a trace for) without needing to find space for a CPU mask. Arguably, this new API also encourages safer behavior. Specifically if the caller wants to avoid tracing the current CPU (maybe because they already traced the current CPU) this makes it more obvious to the caller that they need to make sure that the current CPU ID can't change. [akpm@linux-foundation.org: fix trigger_allbutcpu_cpu_backtrace() stub] Link: https://lkml.kernel.org/r/20230804065935.v4.1.Ia35521b91fc781368945161d7b28538f9996c182@changeid Signed-off-by: Douglas Anderson <dianders@chromium.org> Acked-by: Michal Hocko <mhocko@suse.com> Cc: kernel test robot <lkp@intel.com> Cc: Lecopzer Chen <lecopzer.chen@mediatek.com> Cc: Petr Mladek <pmladek@suse.com> Cc: Pingfan Liu <kernelfans@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-08-18  powerpc/rtas: export rtas_error_rc() for reuse  (Mahesh Salgaonkar)

Also, #define descriptive names for common rtas return codes and use them instead of numeric values.

Signed-off-by: Mahesh Salgaonkar <mahesh@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/169235811556.193557.1023625262204809514.stgit@jupiter
2023-08-18  powerpc/fadump: invoke ibm,os-term with rtas_call_unlocked()  (Hari Bathini)
Invoke ibm,os-term call with rtas_call_unlocked(), without using the RTAS spinlock, to avoid deadlock in the unlikely event of a machine crash while making an RTAS call. Signed-off-by: Hari Bathini <hbathini@linux.ibm.com> Reviewed-by: Mahesh Salgaonkar <mahesh@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230609071404.425529-1-hbathini@linux.ibm.com
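The call shape, sketched with an illustrative token variable (rtas_call_unlocked() and rtas_os_term_buf are the existing kernel symbols):

  struct rtas_args args;

  /*
   * Do not take the RTAS spinlock: in a crash path another CPU may
   * have died while holding it, and rtas_call() would then deadlock.
   */
  rtas_call_unlocked(&args, ibm_os_term_token, 1, 1, NULL,
                     __pa(rtas_os_term_buf));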
2023-08-18  powerpc/fadump: reset dump area size if fadump memory reserve fails  (Sourabh Jain)

In case fadump_reserve_mem() fails to reserve memory, the reserve_dump_area_size variable will retain the reserve area size. This leads to the /sys/kernel/fadump/mem_reserved node displaying an incorrect amount of memory reserved by fadump.

To fix this problem, set the reserve dump area size variable to 0 when fadump fails to reserve memory.

Fixes: 8255da95e545 ("powerpc/fadump: release all the memory above boot memory size")
Signed-off-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Acked-by: Mahesh Salgaonkar <mahesh@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230704050715.203581-1-sourabhjain@linux.ibm.com
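Sketched, the failure path in fadump_reserve_mem() becomes something like (fw_dump is fadump's global state; label name illustrative):

  error_out:
          /* do not let a stale size leak out via sysfs */
          fw_dump.reserve_dump_area_size = 0;
          return 0;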
2023-08-18  powerpc/4xx: Remove WatchdogHandler() to fix no previous prototype error  (Christophe Leroy)

Building ppc40x_defconfig throws the following error:

  CC      arch/powerpc/kernel/traps.o
  arch/powerpc/kernel/traps.c:2232:29: warning: no previous prototype for 'WatchdogHandler' [-Wmissing-prototypes]
   2232 | void __attribute__ ((weak)) WatchdogHandler(struct pt_regs *regs)
        |                             ^~~~~~~~~~~~~~~

This function was imported by commit 14cf11af6cf6 ("powerpc: Merge enough to start building in arch/powerpc.") as a weak function, but was never defined and/or called outside traps.c. As it has only one caller, fold it inside its caller and remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/38fe1078eb403eef74dc8f29387636fd7ecdf43c.1692276041.git.christophe.leroy@csgroup.eu
2023-08-17  powerpc/rtas_flash: allow user copy to flash block cache objects  (Nathan Lynch)

With hardened usercopy enabled (CONFIG_HARDENED_USERCOPY=y), using the /proc/powerpc/rtas/firmware_update interface to prepare a system firmware update yields a BUG():

  kernel BUG at mm/usercopy.c:102!
  Oops: Exception in kernel mode, sig: 5 [#1]
  LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
  Modules linked in:
  CPU: 0 PID: 2232 Comm: dd Not tainted 6.5.0-rc3+ #2
  Hardware name: IBM,8408-E8E POWER8E (raw) 0x4b0201 0xf000004 of:IBM,FW860.50 (SV860_146) hv:phyp pSeries
  NIP: c0000000005991d0 LR: c0000000005991cc CTR: 0000000000000000
  REGS: c0000000148c76a0 TRAP: 0700 Not tainted (6.5.0-rc3+)
  MSR: 8000000000029033 <SF,EE,ME,IR,DR,RI,LE> CR: 24002242 XER: 0000000c
  CFAR: c0000000001fbd34 IRQMASK: 0
  [ ... GPRs omitted ... ]
  NIP usercopy_abort+0xa0/0xb0
  LR usercopy_abort+0x9c/0xb0
  Call Trace:
    usercopy_abort+0x9c/0xb0 (unreliable)
    __check_heap_object+0x1b4/0x1d0
    __check_object_size+0x2d0/0x380
    rtas_flash_write+0xe4/0x250
    proc_reg_write+0xfc/0x160
    vfs_write+0xfc/0x4e0
    ksys_write+0x90/0x160
    system_call_exception+0x178/0x320
    system_call_common+0x160/0x2c4

The blocks of the firmware image are copied directly from user memory to objects allocated from flash_block_cache, so flash_block_cache must be created using kmem_cache_create_usercopy() to mark it safe for user access.

Fixes: 6d07d1cd300f ("usercopy: Restrict non-usercopy caches to size 0")
Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
[mpe: Trim and indent oops]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230810-rtas-flash-vs-hardened-usercopy-v2-1-dcf63793a938@linux.ibm.com
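The fix is a one-call conversion, sketched below; kmem_cache_create_usercopy() takes two extra arguments, useroffset and usersize, which here whitelist the whole object (0, RTAS_BLK_SIZE) for user copies:

  -       flash_block_cache = kmem_cache_create("rtas_flash_cache",
  -                                             RTAS_BLK_SIZE, RTAS_BLK_SIZE,
  -                                             0, NULL);
  +       flash_block_cache = kmem_cache_create_usercopy("rtas_flash_cache",
  +                                             RTAS_BLK_SIZE, RTAS_BLK_SIZE,
  +                                             0, 0, RTAS_BLK_SIZE, NULL);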
2023-08-16  powerpc/ptrace: Split gpr32_set_common  (Christophe Leroy)

objtool reports the following warning:

  arch/powerpc/kernel/ptrace/ptrace-view.o: warning: objtool: gpr32_set_common+0x23c (.text+0x860): redundant UACCESS disable

gpr32_set_common() conditionally opens and closes UACCESS based on whether the kbuf pointer is NULL or not. This is shaky. Split gpr32_set_common() into two functions, one for user and one for kernel.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Fix oops in gpr32_set_common_user() due to NULL kbuf]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/b8d6ae4483fcfd17524e79d803c969694a85cc02.1687428075.git.christophe.leroy@csgroup.eu
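A sketch of the resulting split (signatures abridged, parameter handling elided):

  static int gpr32_set_common_kernel(struct task_struct *target, /* ... */
                                     const void *kbuf)
  {
          /* plain kernel memory: no UACCESS window needed at all */
          /* ... copy registers from kbuf ... */
          return 0;
  }

  static int gpr32_set_common_user(struct task_struct *target, /* ... */
                                   const void __user *ubuf, size_t count)
  {
          /* one unconditional open/close pair keeps objtool happy */
          if (!user_read_access_begin(ubuf, count))
                  return -EFAULT;
          /* ... unsafe_get_user() loop ... */
          user_read_access_end();
          return 0;
  }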
2023-08-16  powerpc/watchpoints: Remove ptrace/perf exclusion tracking  (Benjamin Gray)

ptrace and perf watchpoints were considered incompatible in commit 29da4f91c0c1 ("powerpc/watchpoint: Don't allow concurrent perf and ptrace events"), but the logic in that commit doesn't really apply. Ptrace doesn't automatically single step; the ptracer must request this explicitly. And the ptracer can do so regardless of whether a ptrace/perf watchpoint triggered or not: it could single step every instruction if it wanted to. Whatever stopped the ptracee before executing the instruction that would trigger the perf watchpoint is no longer relevant by this point.

To get correct behaviour when perf and ptrace are watching the same data we must ignore the perf watchpoint. After all, ptrace has before-execute semantics, and perf is after-execute, so perf doesn't actually care about the watchpoint trigger at this point in time. Pausing before execution does not mean we will actually end up executing the instruction.

Importantly though, we don't remove the perf watchpoint yet. This is key. The ptracer is free to do whatever it likes right now. E.g., it can continue the process, single step, or even set the child PC somewhere completely different. If it does try to execute the instruction though, without reinserting the watchpoint (in which case we go back to the start of this example), the perf watchpoint would immediately trigger. This time there is no ptrace watchpoint, so we can safely perform a single step and increment the perf counter.

Upon receiving the single step exception, the existing code already handles propagating or consuming it based on whether another subsystem (e.g. ptrace) requested a single step. Again, this is needed with or without perf/ptrace exclusion, because ptrace could be single stepping this instruction regardless of whether a watchpoint is involved.

Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230801011744.153973-6-bgray@linux.ibm.com
2023-08-16  powerpc/watchpoints: Simplify watchpoint reinsertion  (Benjamin Gray)
We only remove watchpoints when they have the perf_single_step flag set, so we can reinsert them during the first iteration. Signed-off-by: Benjamin Gray <bgray@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230801011744.153973-5-bgray@linux.ibm.com
2023-08-16  powerpc/watchpoints: Track perf single step directly on the breakpoint  (Benjamin Gray)

There is a bug in the current watchpoint tracking logic, where the teardown in arch_unregister_hw_breakpoint() uses bp->ctx->task, which it does not have a reference of, and which parallel threads may be in the process of destroying. This was partially addressed in commit fb822e6076d9 ("powerpc/hw_breakpoint: Fix oops when destroying hw_breakpoint event"), but the underlying issue of accessing a struct member in an unknown state still remained. Syzkaller managed to trigger a null pointer dereference due to the race between the task destructor and checking the pointer and dereferencing it in the loop.

While this null pointer dereference could be fixed by using READ_ONCE to access the task up front, that just changes the error to manipulating possibly freed memory. Instead, the breakpoint logic needs to be reworked to remove any dependency on a context or task struct during breakpoint removal.

The reason we have this currently is to clear thread.last_hit_ubp. This member is used to differentiate the perf DAWR single-step sequence from other causes of single-step, such as userspace just calling ptrace(PTRACE_SINGLESTEP, ...). We need to differentiate them because, when the single step interrupt is received, we need to know whether to re-insert the DAWR breakpoint (perf) or not (ptrace / other).

arch_unregister_hw_breakpoint() needs to clear this information to prevent dangling pointers to possibly freed memory. These pointers are dereferenced in single_step_dabr_instruction() without a way to check their validity.

This patch moves the tracking of this information to the breakpoint itself. This means we no longer have to do anything special to clean up.

Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230801011744.153973-4-bgray@linux.ibm.com
2023-08-16  powerpc/watchpoints: Don't track info persistently  (Benjamin Gray)
info is cheap to retrieve, and is likely optimised by the compiler anyway. On the other hand, propagating it across the functions makes it possible to be inconsistent and adds needless complexity. Remove it, and invoke counter_arch_bp() when we need to work with it. As we don't persist it, we just use the local bp array to track whether we are ignoring a breakpoint. Signed-off-by: Benjamin Gray <bgray@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230801011744.153973-3-bgray@linux.ibm.com
2023-08-16  powerpc/watchpoints: Explain thread_change_pc() more  (Benjamin Gray)
The behaviour of the thread_change_pc() function is a bit cryptic without being more familiar with how the watchpoint logic handles perf's after-execute semantics. Expand the comment to explain why we can re-insert the breakpoint and unset the perf_single_step flag. Signed-off-by: Benjamin Gray <bgray@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230801011744.153973-2-bgray@linux.ibm.com
2023-08-16  powerpc: replace #include <asm/export.h> with #include <linux/export.h>  (Masahiro Yamada)
Commit ddb5cdbafaaa ("kbuild: generate KSYMTAB entries by modpost") deprecated <asm/export.h>, which is now a wrapper of <linux/export.h>. Replace #include <asm/export.h> with #include <linux/export.h>. After all the <asm/export.h> lines are converted, <asm/export.h> and <asm-generic/export.h> will be removed. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> [mpe: Fixup selftests that stub asm/export.h] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230806150954.394189-2-masahiroy@kernel.org
2023-08-16  powerpc: remove unneeded #include <asm/export.h>  (Masahiro Yamada)
There is no EXPORT_SYMBOL line there, hence #include <asm/export.h> is unneeded. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230806150954.394189-1-masahiroy@kernel.org
2023-08-14  Merge branch 'topic/cpu-smt' into next  (Michael Ellerman)
Merge SMT changes we are sharing with the tip tree.
2023-08-02  powerpc: Add HOTPLUG_SMT support  (Michael Ellerman)

Add support for HOTPLUG_SMT, which enables the generic sysfs SMT support files in /sys/devices/system/cpu/smt, as well as the "nosmt" boot parameter.

Implement the recently added hooks to allow partial SMT states, allowing any number of threads per core.

Tie the config symbol to HOTPLUG_CPU, which enables it on the major platforms that support SMT. If there are other platforms that want the SMT support, that can be tweaked in future.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
[ldufour: remove topology_smt_supported]
[ldufour: remove topology_smt_threads_supported]
[ldufour: select CONFIG_SMT_NUM_THREADS_DYNAMIC]
[ldufour: update kernel-parameters.txt]
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Link: https://msgid.link/20230705145143.40545-10-ldufour@linux.ibm.com
2023-08-02  powerpc: address missing-prototypes warnings  (Arnd Bergmann)

There are a few warnings in powerpc64 defconfig builds after -Wmissing-prototypes gets promoted from W=1 to the default warning set:

  arch/powerpc/mm/book3s64/pgtable.c:422:6: error: no previous prototype for 'arch_report_meminfo' [-Werror=missing-prototypes]
  arch/powerpc/platforms/cell/ras.c:275:5: error: no previous prototype for 'cbe_sysreset_hack' [-Werror=missing-prototypes]
  arch/powerpc/platforms/cell/spu_manage.c:29:21: error: no previous prototype for 'spu_devnode' [-Werror=missing-prototypes]
  arch/powerpc/platforms/pasemi/time.c:12:17: error: no previous prototype for 'pas_get_boot_time' [-Werror=missing-prototypes]
  arch/powerpc/platforms/powermac/feature.c:1532:13: error: no previous prototype for 'g5_phy_disable_cpu1' [-Werror=missing-prototypes]
  arch/powerpc/platforms/86xx/pic.c:28:13: error: no previous prototype for 'mpc86xx_init_irq' [-Werror=missing-prototypes]
  drivers/pci/pci-sysfs.c:936:13: error: no previous prototype for 'pci_adjust_legacy_attr' [-Werror=missing-prototypes]

Address these by including the right header files or marking the functions static. The audit.c one is a bit tricky, since compat_audit.h cannot include regular kernel headers that have conflicting types on 32-bit powerpc.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
[mpe: Drop change to __vmemmap_free() which only exists in mm]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230727122720.2558065-1-arnd@kernel.org
2023-08-02  powerpc: Explicitly include correct DT includes  (Rob Herring)

The DT of_device.h and of_platform.h headers date back to the separate of_platform_bus_type, before it was merged into the regular platform bus. As part of that merge prepping Arm DT support 13 years ago, they "temporarily" include each other. They also include platform_device.h and of.h. As a result, there's a pretty much random mix of those include files used throughout the tree.

In order to detangle these headers and replace the implicit includes with struct declarations, users need to explicitly include the correct includes.

Signed-off-by: Rob Herring <robh@kernel.org>
[mpe: Fixup maple/setup.c which needs platform_device]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230724210247.778034-1-robh@kernel.org
2023-08-02  powerpc: Account mm_cpumask and active_cpus in init_mm  (Nicholas Piggin)

init_mm's mm_cpumask and context.active_cpus are not maintained at boot and hotplug. This seems to be harmless because init_mm does not have a userspace and so never gets user TLBs flushed, but it looks odd and it prevents some sanity checks being added.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230524060821.148015-2-npiggin@gmail.com
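The accounting for a CPU coming up amounts to something like the following sketch (inc_mm_active_cpus() is assumed here to be the book3s64 helper that bumps context.active_cpus):

  /* a CPU running on init_mm is accounted like any other mm user */
  cpumask_set_cpu(smp_processor_id(), mm_cpumask(&init_mm));
  inc_mm_active_cpus(&init_mm);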
2023-08-02  powerpc/kuap: Simplify KUAP lock/unlock on BOOK3S/32  (Christophe Leroy)

On book3s/32, KUAP is performed at segment level. At the moment, when enabling userspace access, only the current segment is modified. Then if a write is performed on another user segment, a fault is taken and all other user segments get enabled for userspace access. This then requires special attention when disabling userspace access.

Having a userspace write access crossing a segment boundary is unlikely. Having a userspace write access crossing a segment boundary back and forth is even more unlikely. So, instead of enabling userspace access on all segments when a write fault occurs, just change which segment has userspace access enabled, in order to eliminate the case where more than one segment has userspace access enabled. That simplifies userspace access deactivation.

There is however a corner case which is even more unlikely but has to be handled anyway: an unaligned access which is crossing a segment boundary. That would definitely require at least having userspace access enabled on the two segments. To avoid complicating the likely case for such an unlikely happening, handle such a situation like an alignment exception and emulate the store.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/8de8580513c1a6e880bad1ba9a69d3efad3d4fa5.1689091022.git.christophe.leroy@csgroup.eu