author		Paul E. McKenney <paulmck@linux.vnet.ibm.com>	2017-10-09 10:20:44 -0700
committer	Paul E. McKenney <paulmck@linux.vnet.ibm.com>	2017-12-04 10:52:53 -0800
commit		b393e8b33efd2ee08576ceddc10c2b4bfb3b5435 (patch)
tree		1e0f9f3aa01484a8c815812f14e0f5a4c6890526 /lib
parent		7088efa9137a15d7d21e3abce73e40c9c8a18d68 (diff)
percpu: READ_ONCE() now implies smp_read_barrier_depends()
Because READ_ONCE() now implies smp_read_barrier_depends(), this commit
removes the now-redundant smp_read_barrier_depends() following the
READ_ONCE() in __ref_is_percpu().
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
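The barrier removal itself does not appear below: __ref_is_percpu() is defined in include/linux/percpu-refcount.h, outside this 'lib'-limited view, so only the comment that referred to the old barrier is updated here. As a rough, abridged sketch of the reader side after the change (not the exact upstream hunk; the flag checks and *percpu_countp assignment are elided):

static inline bool __ref_is_percpu(struct percpu_ref *ref,
				   unsigned long __percpu **percpu_countp)
{
	unsigned long percpu_ptr;

	/*
	 * READ_ONCE() now provides the dependency ordering by itself, so
	 * the smp_read_barrier_depends() that used to follow it is gone.
	 */
	percpu_ptr = READ_ONCE(ref->percpu_count_ptr);

	/* ... flag checks and *percpu_countp assignment elided ... */
}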
Diffstat (limited to 'lib')
-rw-r--r--	lib/percpu-refcount.c	8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index fe03c6d52761..30e7dd88148b 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -197,10 +197,10 @@ static void __percpu_ref_switch_to_percpu(struct percpu_ref *ref)
 	atomic_long_add(PERCPU_COUNT_BIAS, &ref->count);
 
 	/*
-	 * Restore per-cpu operation. smp_store_release() is paired with
-	 * smp_read_barrier_depends() in __ref_is_percpu() and guarantees
-	 * that the zeroing is visible to all percpu accesses which can see
-	 * the following __PERCPU_REF_ATOMIC clearing.
+	 * Restore per-cpu operation. smp_store_release() is paired
+	 * with READ_ONCE() in __ref_is_percpu() and guarantees that the
+	 * zeroing is visible to all percpu accesses which can see the
+	 * following __PERCPU_REF_ATOMIC clearing.
 	 */
 	for_each_possible_cpu(cpu)
 		*per_cpu_ptr(percpu_count, cpu) = 0;
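To illustrate why the explicit barrier became redundant, here is a rough userspace analogue of the pairing the updated comment describes, written with C11 atomics rather than kernel primitives; every name in it (REF_ATOMIC, count_ptr_bits, switch_to_percpu(), ref_is_percpu()) is an illustrative stand-in, not kernel code. The writer zeroes the counters and then clears the flag bit with a release store; the reader's single dependency-ordered load (READ_ONCE() in the kernel, modeled here with memory_order_consume, which compilers implement as at least acquire) is then enough to guarantee it also sees the zeroed counters, with no separate read barrier.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define REF_ATOMIC	1UL	/* stand-in for __PERCPU_REF_ATOMIC */
#define NR_CPUS		4

static unsigned long counters[NR_CPUS];		/* stand-in for the per-CPU counters */
static _Atomic unsigned long count_ptr_bits;	/* stand-in for ref->percpu_count_ptr */

/* Writer side, loosely modeled on __percpu_ref_switch_to_percpu(). */
static void switch_to_percpu(void)
{
	unsigned long bits;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		counters[cpu] = 0;

	bits = atomic_load_explicit(&count_ptr_bits, memory_order_relaxed);
	/*
	 * Release store: the zeroing above is visible to any reader that
	 * sees the REF_ATOMIC bit cleared below.
	 */
	atomic_store_explicit(&count_ptr_bits, bits & ~REF_ATOMIC,
			      memory_order_release);
}

/*
 * Reader side, loosely modeled on __ref_is_percpu(): one ordered load,
 * no explicit read barrier afterwards.
 */
static bool ref_is_percpu(unsigned long **countp)
{
	unsigned long bits;

	bits = atomic_load_explicit(&count_ptr_bits, memory_order_consume);
	if (bits & REF_ATOMIC) {
		*countp = NULL;
		return false;
	}
	/* The counter access depends on the value just loaded. */
	*countp = (unsigned long *)(bits & ~REF_ATOMIC);
	return true;
}

int main(void)
{
	unsigned long *pc;

	/* Start in atomic mode: counters pointer published with the flag set. */
	atomic_store_explicit(&count_ptr_bits,
			      (unsigned long)counters | REF_ATOMIC,
			      memory_order_relaxed);

	switch_to_percpu();

	if (ref_is_percpu(&pc))
		printf("percpu mode, counter[0] = %lu\n", pc[0]);
	return 0;
}

Compiled with, say, cc -std=c11, this runs single-threaded and only demonstrates the ordering contract, not the refcount logic itself.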