author     Manfred Spraul <manfred@colorfullife.com>    2021-06-27 18:19:18 +0200
committer  Pablo Neira Ayuso <pablo@netfilter.org>      2021-07-06 14:15:13 +0200
commit     cf4466ea47db891be785f867ca7f99e0cd9898c6 (patch)
tree       c6fa68cd742bfc9648cad717a0cfd5dbee7f4fee /net
parent     1da4cd82dd180224503e745ccf3220e3490d8897 (diff)
netfilter: conntrack: Mark access for KCSAN
KCSAN detected a data race with ipc/sem.c that is intentional. As nf_conntrack_lock() uses the same algorithm, update nf_conntrack_core as well.

nf_conntrack_lock() contains
  a1) spin_lock()
  a2) smp_load_acquire(nf_conntrack_locks_all).

a1) actually accesses one lock from an array of locks.

nf_conntrack_all_lock() contains
  b1) nf_conntrack_locks_all = true (normal write)
  b2) spin_lock()
  b3) spin_unlock()

b2) and b3) are done for every lock.

This guarantees that nf_conntrack_all_lock() prevents any concurrent nf_conntrack_lock() owners:

If a thread is past a1), then b2) will block until that thread releases the lock.

If the thread is before a1), then b3)+a1) ensure the write b1) is visible, thus a2) is guaranteed to see the updated value.

But: this is only the latest time when b1) becomes visible. It may also happen that b1) is visible an undefined amount of time before b3), and thus KCSAN will notice a data race. In addition, the compiler might be too clever.

Solution: use WRITE_ONCE().

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
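For reference, the reader side that the message labels a1) and a2) is nf_conntrack_lock(); it is not touched by this patch. A minimal sketch of that path, reconstructed from the description above (the slow path shown here is an assumption, not quoted from nf_conntrack_core.c):

	#include <linux/compiler.h>
	#include <linux/spinlock.h>

	extern spinlock_t nf_conntrack_locks_all_lock;
	extern bool nf_conntrack_locks_all;

	void nf_conntrack_lock(spinlock_t *lock) __acquires(lock)
	{
		/* a1) take the per-bucket lock */
		spin_lock(lock);

		/* a2) check for an active "lock everything" owner; the
		 * acquire pairs with the release performed when that owner
		 * cycles this per-bucket lock (b3 above).
		 */
		if (likely(!smp_load_acquire(&nf_conntrack_locks_all)))
			return;

		/* Assumed slow path: wait for the global owner to finish,
		 * then retake the per-bucket lock.
		 */
		spin_unlock(lock);
		spin_lock(&nf_conntrack_locks_all_lock);
		spin_lock(lock);
		spin_unlock(&nf_conntrack_locks_all_lock);
	}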
Diffstat (limited to 'net')
 net/netfilter/nf_conntrack_core.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index 085a11f1eb43..83c52df85870 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -149,7 +149,15 @@ static void nf_conntrack_all_lock(void)
spin_lock(&nf_conntrack_locks_all_lock);
- nf_conntrack_locks_all = true;
+ /* For nf_conntrack_locks_all, only the latest time when another
+ * CPU will see an update is controlled by the "release" of the
+ * spin_lock below.
+ * The earliest time is not controlled, and thus KCSAN could detect
+ * a race when nf_conntrack_lock() reads the variable.
+ * WRITE_ONCE() is used to ensure the compiler will not
+ * optimize the write.
+ */
+ WRITE_ONCE(nf_conntrack_locks_all, true);
for (i = 0; i < CONNTRACK_LOCKS; i++) {
spin_lock(&nf_conntrack_locks[i]);
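The path that clears the flag again is also outside this hunk. A sketch of that counterpart, assuming a helper along the lines of nf_conntrack_all_unlock() (reconstructed, not part of this patch): the clearing store needs release semantics so that the acquire load in a2) also orders everything the global owner did while the flag was set.

	static void nf_conntrack_all_unlock(void)
		__releases(&nf_conntrack_locks_all_lock)
	{
		/* All stores made while holding every per-bucket lock must
		 * be visible before the flag reads as false again; the
		 * release store pairs with the smp_load_acquire() in
		 * nf_conntrack_lock().
		 */
		smp_store_release(&nf_conntrack_locks_all, false);
		spin_unlock(&nf_conntrack_locks_all_lock);
	}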