author:    Eric Dumazet <edumazet@google.com>     2016-12-08 11:41:57 -0800
committer: David S. Miller <davem@davemloft.net>  2016-12-09 22:12:21 -0500
commit:    02ab0d139cff1efc5aa1fb4378c727668334fe97
tree:      7aea980802490b448bedac089074929b9c47d13b
parent:    6b229cf77d683f634f0edd876c6d1015402303ad
udp: udp_rmem_release() should touch sk_rmem_alloc later
In flood situations, keeping sk_rmem_alloc at a high value
prevents producers from touching the socket.
It makes sense to lower sk_rmem_alloc only at the end
of udp_rmem_release(), after the thread draining the receive
queue in udp_recvmsg() has finished its writes to sk_forward_alloc.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net')
 net/ipv4/udp.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 5a38faa12cde..9ca279b130d5 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1191,13 +1191,14 @@ static void udp_rmem_release(struct sock *sk, int size, int partial)
 	}
 	up->forward_deficit = 0;
 
-	atomic_sub(size, &sk->sk_rmem_alloc);
 	sk->sk_forward_alloc += size;
 	amt = (sk->sk_forward_alloc - partial) & ~(SK_MEM_QUANTUM - 1);
 	sk->sk_forward_alloc -= amt;
 
 	if (amt)
 		__sk_mem_reduce_allocated(sk, amt >> SK_MEM_QUANTUM_SHIFT);
+
+	atomic_sub(size, &sk->sk_rmem_alloc);
 }
 
 /* Note: called with sk_receive_queue.lock held.