| author | Eric Dumazet <edumazet@google.com> | 2019-10-23 22:44:51 -0700 |
|---|---|---|
| committer | David S. Miller <davem@davemloft.net> | 2019-10-28 13:33:41 -0700 |
| commit | 3f926af3f4d688e2e11e7f8ed04e277a14d4d4a4 (patch) | |
| tree | 3cdc7bbab6fc9caa1282afe98eddd91add1aabd7 /net/ipv4 | |
| parent | 3ef7cf57c72f32f61e97f8fa401bc39ea1f1a5d4 (diff) | |
net: use skb_queue_empty_lockless() in busy poll contexts
Busy polling usually runs without locks.
Let's use skb_queue_empty_lockless() instead of skb_queue_empty().
Also use READ_ONCE() in __skb_try_recv_datagram() to address
a similar potential problem.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
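
For readers unfamiliar with the helper: the only difference between the two predicates is that skb_queue_empty_lockless() reads the queue head's ->next pointer through READ_ONCE(), which tells the compiler the value may change concurrently and must be loaded exactly once, without tearing or refetching. The sketch below is a simplified userspace approximation of that distinction, not the kernel source; the struct layout, the READ_ONCE() stand-in, and the names queue_empty()/queue_empty_lockless() are illustrative only.

```c
#include <stdbool.h>

struct sk_buff;				/* opaque in this sketch */

/* Circular queue head: the list is empty when ->next points back at the head. */
struct sk_buff_head {
	struct sk_buff *next;
	struct sk_buff *prev;
};

/* Userspace stand-in for the kernel's READ_ONCE(): a single volatile load. */
#define READ_ONCE(x) (*(volatile typeof(x) *)&(x))

/* Fine when the caller holds the queue lock: a plain load suffices. */
static inline bool queue_empty(struct sk_buff_head *list)
{
	return list->next == (struct sk_buff *)list;
}

/*
 * For lockless readers, such as the busy-poll check in tcp_recvmsg():
 * annotate the racy load so the compiler emits exactly one read.
 */
static inline bool queue_empty_lockless(struct sk_buff_head *list)
{
	return READ_ONCE(list->next) == (struct sk_buff *)list;
}
```

With that distinction in mind, the one-line change to net/ipv4/tcp.c below simply switches the busy-poll precondition to the lockless variant.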
Diffstat (limited to 'net/ipv4')
-rw-r--r-- | net/ipv4/tcp.c | 2 |
1 file changed, 1 insertion, 1 deletion
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index ffef502f5292..d8876f0e9672 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1964,7 +1964,7 @@ int tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int nonblock,
 	if (unlikely(flags & MSG_ERRQUEUE))
 		return inet_recv_error(sk, msg, len, addr_len);
 
-	if (sk_can_busy_loop(sk) && skb_queue_empty(&sk->sk_receive_queue) &&
+	if (sk_can_busy_loop(sk) && skb_queue_empty_lockless(&sk->sk_receive_queue) &&
 	    (sk->sk_state == TCP_ESTABLISHED))
 		sk_busy_loop(sk, nonblock);