author    Xiao Wang <xiao.w.wang@intel.com>      2024-03-13 17:19:29 +0800
committer Palmer Dabbelt <palmer@rivosinc.com>   2024-05-22 16:12:55 -0700
commit    9850e73e82972f518b75dd0d94d2322f44d9191d (patch)
tree      9a51d20116015461923b0dd1fc8525ebe00655dc /arch/riscv
parent    f1905946bed052522522303f1d144f506ef5d9f9 (diff)
riscv: uaccess: Relax the threshold for fast path
The byte copy for the unaligned head covers at most SZREG-1 bytes, so
it's better to set the threshold to >= (SZREG-1 + word_copy stride size),
which equals 9*SZREG-1.
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/20240313091929.4029960-1-xiao.w.wang@intel.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Diffstat (limited to 'arch/riscv')
-rw-r--r--  arch/riscv/lib/uaccess.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/riscv/lib/uaccess.S b/arch/riscv/lib/uaccess.S
index 64792a7ae72e..1399d797d81b 100644
--- a/arch/riscv/lib/uaccess.S
+++ b/arch/riscv/lib/uaccess.S
@@ -44,7 +44,7 @@ SYM_FUNC_START(fallback_scalar_usercopy)
 	 * Use byte copy only if too small.
 	 * SZREG holds 4 for RV32 and 8 for RV64
 	 */
-	li	a3, 9*SZREG /* size must be larger than size in word_copy */
+	li	a3, 9*SZREG-1 /* size must >= (word_copy stride + SZREG-1) */
 	bltu	a2, a3, .Lbyte_copy_tail

 	/*
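
To see why the relaxed bound is 9*SZREG-1, here is a quick worked check of the
arithmetic (an editor's sketch in C, not part of the patch): the unaligned head
copy consumes at most SZREG-1 bytes, and word_copy then needs one full stride of
8*SZREG bytes, so any size >= (SZREG-1) + 8*SZREG = 9*SZREG-1 bytes can still
take the fast path. For RV64 (SZREG = 8) that is 7 + 64 = 71 bytes, one byte
below the old threshold of 9*SZREG = 72.

#include <stdio.h>

int main(void)
{
	/* SZREG is 4 on RV32 and 8 on RV64 (see the comment in uaccess.S). */
	for (int szreg = 4; szreg <= 8; szreg += 4) {
		int head_max = szreg - 1;   /* worst-case unaligned head, in bytes */
		int stride   = 8 * szreg;   /* bytes moved per word_copy iteration */
		int need     = head_max + stride;

		printf("SZREG=%d: head<=%d + stride=%d -> need >= %d; old threshold %d, new %d\n",
		       szreg, head_max, stride, need, 9 * szreg, 9 * szreg - 1);
	}
	return 0;
}

Since the branch is "bltu a2, a3, .Lbyte_copy_tail", sizes of exactly 9*SZREG-1
bytes now stay on the word-copy fast path instead of falling back to the byte
loop.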