author     Al Viro <viro@zeniv.linux.org.uk>   2023-01-31 00:03:40 -0500
committer  Al Viro <viro@zeniv.linux.org.uk>   2023-03-02 12:32:19 -0500
commit     79c54c97c7735e09f9e2193a713f30e56eec00dc (patch)
tree       926d7e247c867930c355e6e0df114652e8500990 /arch
parent     dce45493aff3fdd57fed2a0da264e585dba88433 (diff)
sparc: fix livelock in uaccess
sparc equivalent of 26178ec11ef3 "x86: mm: consolidate VM_FAULT_RETRY handling"
If e.g. get_user() triggers a page fault and a fatal signal is caught, we might
end up with handle_mm_fault() returning VM_FAULT_RETRY and not doing anything
to the page tables. In such a case we must *not* return to the faulting insn -
that would repeat the entire thing without making any progress; what we need
instead is to treat that as a failed (user) memory access.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
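For illustration only, here is a minimal userspace sketch of the decision the fix makes; vm_fault_t, the flag value and every name below are stand-ins, not the kernel's definitions. When a fatal signal cuts handle_mm_fault() short with VM_FAULT_RETRY, a user-mode fault can simply return (signal delivery kills the task before it re-faults), while a kernel-mode access such as get_user() must take the exception-fixup path instead of re-executing the faulting instruction.

/*
 * Simplified model of the retry-vs-fail decision, not kernel code:
 * vm_fault_t, the flag value and all names here are illustrative.
 */
#include <stdbool.h>
#include <stdio.h>

typedef unsigned int vm_fault_t;
#define VM_FAULT_RETRY	0x1u	/* bailed out early, page tables untouched */

enum fault_outcome {
	RETURN_TO_INSN,		/* return; fatal signal ends the task */
	KERNEL_FIXUP,		/* exception-table fixup: the access fails */
};

/* Called where fault_signal_pending() would be true in the real handler. */
static enum fault_outcome on_fatal_signal(bool from_user, vm_fault_t fault)
{
	if (!(fault & VM_FAULT_RETRY))
		return RETURN_TO_INSN;	/* the fault was actually handled */
	/*
	 * VM_FAULT_RETRY with nothing mapped: a user fault may return
	 * (the fatal signal kills the task before it re-faults), but a
	 * kernel fault, e.g. get_user(), would livelock if it retried
	 * the faulting insn - it must be treated as a failed access.
	 */
	return from_user ? RETURN_TO_INSN : KERNEL_FIXUP;
}

int main(void)
{
	printf("user fault:   %s\n",
	       on_fatal_signal(true, VM_FAULT_RETRY) == KERNEL_FIXUP
	       ? "fixup" : "return");
	printf("kernel fault: %s\n",
	       on_fatal_signal(false, VM_FAULT_RETRY) == KERNEL_FIXUP
	       ? "fixup" : "return");
	return 0;
}

In the diff below, fault_32.c detects the kernel-mode case with !from_user and jumps to no_context, while fault_64.c checks the TSTATE_PRIV bit in regs->tstate, re-fetches the faulting instruction and jumps to handle_kernel_fault.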
Diffstat (limited to 'arch')
-rw-r--r--  arch/sparc/mm/fault_32.c  5
-rw-r--r--  arch/sparc/mm/fault_64.c  7
2 files changed, 10 insertions, 2 deletions
diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c
index 91259f291c54..179295b14664 100644
--- a/arch/sparc/mm/fault_32.c
+++ b/arch/sparc/mm/fault_32.c
@@ -187,8 +187,11 @@ good_area:
 	 */
 	fault = handle_mm_fault(vma, address, flags, regs);
 
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (!from_user)
+			goto no_context;
 		return;
+	}
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED)
diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c
index 4acc12eafbf5..d91305de694c 100644
--- a/arch/sparc/mm/fault_64.c
+++ b/arch/sparc/mm/fault_64.c
@@ -424,8 +424,13 @@ good_area:
 
 	fault = handle_mm_fault(vma, address, flags, regs);
 
-	if (fault_signal_pending(fault, regs))
+	if (fault_signal_pending(fault, regs)) {
+		if (regs->tstate & TSTATE_PRIV) {
+			insn = get_fault_insn(regs, insn);
+			goto handle_kernel_fault;
+		}
 		goto exit_exception;
+	}
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED)