author     Linus Torvalds <torvalds@linux-foundation.org>   2020-08-04 14:47:45 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>   2020-08-04 14:47:45 -0700
commit     9ba27414f2ec2bfb019d9e9170fd2308aebab63a (patch)
tree       1ecbed91b2c68b4c28709fcb36689236eac7d304 /arch/ia64
parent     0a72761b27fe3b10e3f336bf2f2aa22635504cdd (diff)
parent     714acdbd1c94e7e3ab90f6b6938f1ccb27b662f0 (diff)
Merge tag 'fork-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux
Pull fork cleanups from Christian Brauner:
"This is cleanup series from when we reworked a chunk of the process
creation paths in the kernel and switched to struct
{kernel_}clone_args.
High-level this does two main things:
- Remove the double export of both do_fork() and _do_fork() where
do_fork() used the incosistent legacy clone calling convention.
Now we only export _do_fork() which is based on struct
kernel_clone_args.
- Remove the copy_thread_tls()/copy_thread() split, making the
architecture-specific HAVE_COPY_THREAD_TLS config option obsolete.
This switches all remaining architectures to select
HAVE_COPY_THREAD_TLS and thus to the copy_thread_tls() calling
convention. The current split makes the process creation codepaths
more convoluted than they need to be. Each architecture has its own
copy_thread() function unless it selects HAVE_COPY_THREAD_TLS, in
which case it has a copy_thread_tls() function instead.
The split is no longer needed: all architectures support CLONE_SETTLS,
but quite a few of them never bothered to select HAVE_COPY_THREAD_TLS
and instead simply continued to use copy_thread() with the old calling
convention. Removing this split cleans up the process creation
codepaths and paves the way for implementing clone3() on such
architectures, since clone3() requires the copy_thread_tls() calling
convention.
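For reference, the two calling conventions differ only in the extra tls
argument. A paraphrased sketch of the pre-v5.9 declarations (based on
include/linux/sched/task.h; treat the exact prototypes as approximate):

  /* Old convention: no tls argument; the architecture digs the TLS
   * pointer out of the clone() register state on its own. */
  extern int copy_thread(unsigned long clone_flags, unsigned long sp,
                         unsigned long arg, struct task_struct *p);

  /* New convention: tls is passed explicitly, which is what clone3()
   * and struct kernel_clone_args need. */
  extern int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
                             unsigned long arg, struct task_struct *p,
                             unsigned long tls);

  /* Architectures without HAVE_COPY_THREAD_TLS were bridged by a shim
   * that simply dropped the tls argument: */
  #ifndef CONFIG_HAVE_COPY_THREAD_TLS
  static inline int copy_thread_tls(unsigned long clone_flags,
                                    unsigned long sp, unsigned long arg,
                                    struct task_struct *p, unsigned long tls)
  {
          return copy_thread(clone_flags, sp, arg, p);
  }
  #endif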
After making each architecture support copy_thread_tls(), this series
simply renames that function back to copy_thread(). It also switches
all architectures that call do_fork() directly over to _do_fork() and
the struct kernel_clone_args calling convention. This is a corollary
of switching the architectures that did not yet support it over to
copy_thread_tls(), since do_fork() is conditional on not supporting
copy_thread_tls() (mostly because it lacks a separate argument for
tls, which would be trivial to fix, but there's no need for this
function to exist).
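For context, struct kernel_clone_args bundles all clone parameters into
a single struct that _do_fork() takes by pointer, while the legacy
do_fork() passed them positionally. A trimmed sketch of the v5.8-era
definitions (field list abridged; see include/linux/sched/task.h for
the authoritative version):

  struct kernel_clone_args {
          u64 flags;
          int __user *pidfd;
          int __user *child_tid;
          int __user *parent_tid;
          int exit_signal;
          unsigned long stack;
          unsigned long stack_size;
          unsigned long tls;
          /* ... set_tid/cgroup related fields omitted ... */
  };

  /* The one remaining entry point after this series: */
  extern long _do_fork(struct kernel_clone_args *kargs);

  /* The legacy entry point being removed: positional arguments, no tls. */
  extern long do_fork(unsigned long clone_flags, unsigned long stack_start,
                      unsigned long stack_size, int __user *parent_tidptr,
                      int __user *child_tidptr);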
The do_fork() removal is in itself already useful as it allows us to
remove the export of both do_fork() and _do_fork() that we currently
have in favor of only _do_fork(). This was already discussed back when
we added clone3(). The legacy clone() calling convention is - as is
probably well known - somewhat odd:
#
# ABI hall of shame
#
config CLONE_BACKWARDS
config CLONE_BACKWARDS2
config CLONE_BACKWARDS3
which is aggravated by the fact that some architectures such as sparc
follow the CLONE_BACKWARDSx calling convention but don't actually
select the corresponding config option since they call do_fork()
directly. So do_fork() enforces a somewhat arbitrary calling
convention in the first place that doesn't really help the individual
architectures that deviate from it. They can thus simply be switched
to _do_fork(), enforcing a single calling convention. (I really hope
that any new architectures will __not__ try to implement their own
calling conventions...)
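To make the "hall of shame" concrete: the legacy clone() entry points in
kernel/fork.c differ only in argument order. A condensed paraphrase of
the SYSCALL_DEFINE ladder (bodies omitted; exact spelling may differ):

  #ifdef CONFIG_CLONE_BACKWARDS
  /* tls and child_tidptr swapped */
  SYSCALL_DEFINE5(clone, unsigned long, clone_flags, unsigned long, newsp,
                  int __user *, parent_tidptr, unsigned long, tls,
                  int __user *, child_tidptr)
  #elif defined(CONFIG_CLONE_BACKWARDS2)
  /* newsp and clone_flags swapped */
  SYSCALL_DEFINE5(clone, unsigned long, newsp, unsigned long, clone_flags,
                  int __user *, parent_tidptr, int __user *, child_tidptr,
                  unsigned long, tls)
  #elif defined(CONFIG_CLONE_BACKWARDS3)
  /* an extra stack_size argument */
  SYSCALL_DEFINE6(clone, unsigned long, clone_flags, unsigned long, newsp,
                  int, stack_size, int __user *, parent_tidptr,
                  int __user *, child_tidptr, unsigned long, tls)
  #else
  /* the default ordering */
  SYSCALL_DEFINE5(clone, unsigned long, clone_flags, unsigned long, newsp,
                  int __user *, parent_tidptr, int __user *, child_tidptr,
                  unsigned long, tls)
  #endif
  {
          /* all variants build a struct kernel_clone_args and call _do_fork() */
  }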
Most architectures already have made a similar switch (m68k comes to
mind).
Overall this removes more code than it adds, even with a good portion
of added comments. It simplifies a chunk of arch-specific assembly,
either by moving the code into C or by simply rewriting the assembly.
Architectures that have been touched in non-trivial ways have all been
actually boot and stress tested: sparc and ia64 have been tested with
Debian 9 images; they are the two architectures that have been touched
the most. nios2 was tested with a custom-built buildroot image. For
h8300 I couldn't get something bootable to test on, but the changes
have been fairly automatic and I'm sure we'll hear people yell if I
broke something there. All non-trivial changes to architectures have
seen acks from the relevant maintainers.
All other architectures that have been touched in trivial ways have
been compile tested for each single patch of the series via git rebase
-x "make ..." v5.8-rc2. arm{64} and x86{_64} have been boot tested
even though they were only trivially touched (removal of the
HAVE_COPY_THREAD_TLS option from their Kconfig), because they are
basically "core architectures" and it is trivial to get your hands on
a usable image"
* tag 'fork-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux:
arch: rename copy_thread_tls() back to copy_thread()
arch: remove HAVE_COPY_THREAD_TLS
unicore: switch to copy_thread_tls()
sh: switch to copy_thread_tls()
nds32: switch to copy_thread_tls()
microblaze: switch to copy_thread_tls()
hexagon: switch to copy_thread_tls()
c6x: switch to copy_thread_tls()
alpha: switch to copy_thread_tls()
fork: remove do_fork()
h8300: select HAVE_COPY_THREAD_TLS, switch to kernel_clone_args
nios2: enable HAVE_COPY_THREAD_TLS, switch to kernel_clone_args
ia64: enable HAVE_COPY_THREAD_TLS, switch to kernel_clone_args
sparc: unconditionally enable HAVE_COPY_THREAD_TLS
sparc: share process creation helpers between sparc and sparc64
sparc64: enable HAVE_COPY_THREAD_TLS
fork: fold legacy_clone_args_valid() into _do_fork()
Diffstat (limited to 'arch/ia64')
-rw-r--r--   arch/ia64/kernel/entry.S   | 32
-rw-r--r--   arch/ia64/kernel/process.c | 29
2 files changed, 36 insertions, 25 deletions
diff --git a/arch/ia64/kernel/entry.S b/arch/ia64/kernel/entry.S
index c5efac285bc3..e98e3dafffd8 100644
--- a/arch/ia64/kernel/entry.S
+++ b/arch/ia64/kernel/entry.S
@@ -112,19 +112,16 @@ GLOBAL_ENTRY(sys_clone2)
 	.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
 	alloc r16=ar.pfs,8,2,6,0
 	DO_SAVE_SWITCH_STACK
-	adds r2=PT(R16)+IA64_SWITCH_STACK_SIZE+16,sp
 	mov loc0=rp
-	mov loc1=r16	// save ar.pfs across do_fork
+	mov loc1=r16	// save ar.pfs across ia64_clone
 	.body
+	mov out0=in0
 	mov out1=in1
 	mov out2=in2
-	tbit.nz p6,p0=in0,CLONE_SETTLS_BIT
-	mov out3=in3	// parent_tidptr: valid only w/CLONE_PARENT_SETTID
-	;;
-(p6)	st8 [r2]=in5	// store TLS in r16 for copy_thread()
-	mov out4=in4	// child_tidptr: valid only w/CLONE_CHILD_SETTID or CLONE_CHILD_CLEARTID
-	mov out0=in0	// out0 = clone_flags
-	br.call.sptk.many rp=do_fork
+	mov out3=in3
+	mov out4=in4
+	mov out5=in5
+	br.call.sptk.many rp=ia64_clone
 .ret1:	.restore sp
 	adds sp=IA64_SWITCH_STACK_SIZE,sp	// pop the switch stack
 	mov ar.pfs=loc1
@@ -143,19 +140,16 @@ GLOBAL_ENTRY(sys_clone)
 	.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
 	alloc r16=ar.pfs,8,2,6,0
 	DO_SAVE_SWITCH_STACK
-	adds r2=PT(R16)+IA64_SWITCH_STACK_SIZE+16,sp
 	mov loc0=rp
-	mov loc1=r16	// save ar.pfs across do_fork
+	mov loc1=r16	// save ar.pfs across ia64_clone
 	.body
+	mov out0=in0
 	mov out1=in1
 	mov out2=16	// stacksize (compensates for 16-byte scratch area)
-	tbit.nz p6,p0=in0,CLONE_SETTLS_BIT
-	mov out3=in2	// parent_tidptr: valid only w/CLONE_PARENT_SETTID
-	;;
-(p6)	st8 [r2]=in4	// store TLS in r13 (tp)
-	mov out4=in3	// child_tidptr: valid only w/CLONE_CHILD_SETTID or CLONE_CHILD_CLEARTID
-	mov out0=in0	// out0 = clone_flags
-	br.call.sptk.many rp=do_fork
+	mov out3=in3
+	mov out4=in4
+	mov out5=in5
+	br.call.sptk.many rp=ia64_clone
 .ret2:	.restore sp
 	adds sp=IA64_SWITCH_STACK_SIZE,sp	// pop the switch stack
 	mov ar.pfs=loc1
@@ -590,7 +584,7 @@ GLOBAL_ENTRY(ia64_ret_from_clone)
 	nop.i 0
 	/*
 	 * We need to call schedule_tail() to complete the scheduling process.
-	 * Called by ia64_switch_to() after do_fork()->copy_thread(). r8 contains the
+	 * Called by ia64_switch_to() after ia64_clone()->copy_thread(). r8 contains the
 	 * address of the previously executing task.
 	 */
 	br.call.sptk.many rp=ia64_invoke_schedule_tail
diff --git a/arch/ia64/kernel/process.c b/arch/ia64/kernel/process.c
index da55b41ae33e..7a4de9d994c5 100644
--- a/arch/ia64/kernel/process.c
+++ b/arch/ia64/kernel/process.c
@@ -296,7 +296,7 @@ ia64_load_extra (struct task_struct *task)
 		pfm_load_regs(task);
 
 	info = __this_cpu_read(pfm_syst_info);
-	if (info & PFM_CPUINFO_SYST_WIDE) 
+	if (info & PFM_CPUINFO_SYST_WIDE)
 		pfm_syst_wide_update_task(task, info, 1);
 #endif
 }
@@ -310,7 +310,7 @@ ia64_load_extra (struct task_struct *task)
  *
  *	<clone syscall>		<some kernel call frames>
  *	sys_clone		   :
- *	do_fork			do_fork
+ *	_do_fork		_do_fork
  *	copy_thread		copy_thread
  *
  * This means that the stack layout is as follows:
@@ -333,9 +333,8 @@ ia64_load_extra (struct task_struct *task)
  * so there is nothing to worry about.
  */
 int
-copy_thread(unsigned long clone_flags,
-	    unsigned long user_stack_base, unsigned long user_stack_size,
-	    struct task_struct *p)
+copy_thread(unsigned long clone_flags, unsigned long user_stack_base,
+	    unsigned long user_stack_size, struct task_struct *p, unsigned long tls)
 {
 	extern char ia64_ret_from_clone;
 	struct switch_stack *child_stack, *stack;
@@ -416,7 +415,7 @@ copy_thread(unsigned long clone_flags,
 	rbs_size = stack->ar_bspstore - rbs;
 	memcpy((void *) child_rbs, (void *) rbs, rbs_size);
 	if (clone_flags & CLONE_SETTLS)
-		child_ptregs->r13 = regs->r16;	/* see sys_clone2() in entry.S */
+		child_ptregs->r13 = tls;
 	if (user_stack_base) {
 		child_ptregs->r12 = user_stack_base + user_stack_size - 16;
 		child_ptregs->ar_bspstore = user_stack_base;
@@ -441,6 +440,24 @@ copy_thread(unsigned long clone_flags,
 	return retval;
 }
 
+asmlinkage long ia64_clone(unsigned long clone_flags, unsigned long stack_start,
+			   unsigned long stack_size, unsigned long parent_tidptr,
+			   unsigned long child_tidptr, unsigned long tls)
+{
+	struct kernel_clone_args args = {
+		.flags		= (lower_32_bits(clone_flags) & ~CSIGNAL),
+		.pidfd		= (int __user *)parent_tidptr,
+		.child_tid	= (int __user *)child_tidptr,
+		.parent_tid	= (int __user *)parent_tidptr,
+		.exit_signal	= (lower_32_bits(clone_flags) & CSIGNAL),
+		.stack		= stack_start,
+		.stack_size	= stack_size,
+		.tls		= tls,
+	};
+
+	return _do_fork(&args);
+}
+
 static void
 do_copy_task_regs (struct task_struct *task, struct unw_frame_info *info, void *arg)
 {