author     David S. Miller <davem@davemloft.net>  2022-10-24 10:52:50 +0100
committer  David S. Miller <davem@davemloft.net>  2022-10-24 10:52:50 +0100
commit     b29e0dece45174f8c791853cbf40a24212148b47 (patch)
tree       cd84d6b901d6d1e14185481909982e4f267cd512 /net/rds/tcp.c
parent     ea5ed0f00b07d5aba09189729d54751c98494f25 (diff)
parent     8a3854c7b8e4532063b14bed34115079b7d0cb36 (diff)
Merge branch 'udp-false-sharing'
Paolo Abeni says:
====================
udp: avoid false sharing on receive
Under high UDP load, the BH processing and the user-space receiver can
run on different cores.
The UDP implementation goes to great lengths to avoid false sharing in
the receive path, but recent changes to the struct sock layout moved
the sk_forward_alloc and sk_rcvbuf fields onto the same cacheline:
        /* --- cacheline 4 boundary (256 bytes) --- */
                struct sk_buff *tail;
        } sk_backlog;
        int sk_forward_alloc;
        unsigned int sk_reserved_mem;
        unsigned int sk_ll_usec;
        unsigned int sk_napi_id;
        int sk_rcvbuf;
sk_forward_alloc is updated by the BH, while sk_rcvbuf is accessed by
udp_recvmsg(), causing false sharing.
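The effect is generic: any two fields that share a cacheline and are
touched by different cores will bounce that line between caches. As a
minimal user-space sketch of the pattern (plain C with GCC atomics;
the struct and field names are illustrative, not kernel code):

/* False-sharing sketch: both fields live in one 64-byte cacheline.
 * Thread A keeps writing one field (like the BH updating
 * sk_forward_alloc); thread B keeps reading the other (like
 * udp_recvmsg() reading sk_rcvbuf). Every write by A invalidates
 * the line in B's cache. Build: gcc -O2 -pthread falseshare.c */
#include <pthread.h>
#include <stdio.h>

static struct {
        long forward_alloc;     /* written by thread A */
        int  rcvbuf;            /* read by thread B, same cacheline */
} s = { 0, 212992 };

static void *writer(void *arg)
{
        for (long i = 0; i < (1L << 27); i++)
                __atomic_fetch_add(&s.forward_alloc, 1, __ATOMIC_RELAXED);
        return NULL;
}

static void *reader(void *arg)
{
        long sum = 0;
        for (long i = 0; i < (1L << 27); i++)
                sum += __atomic_load_n(&s.rcvbuf, __ATOMIC_RELAXED);
        return (void *)sum;
}

int main(void)
{
        pthread_t a, b;
        pthread_create(&a, NULL, writer, NULL);
        pthread_create(&b, NULL, reader, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("forward_alloc=%ld\n", s.forward_alloc);
        return 0;
}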
A possible solution would be to reorder the struct sock fields to
avoid the false sharing. Such a change would be vulnerable to
invalidation by future layout changes and could have negative side
effects on other workloads.
Instead, this series takes a different approach, touching only the UDP
socket layout.
The first patch generalizes the custom setsockopt infrastructure so
that UDP can track the buffer size, and the second patch addresses the
issue by copying the relevant buffer information into an already-hot
cacheline, as sketched below.
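A rough sketch of that idea (the field and helper names here,
rcvbuf_cache and udp_set_rcvbuf_cache(), are hypothetical, not the
series' actual API):

/* Sketch only: the real series' field/helper names may differ.
 * The UDP socket keeps a private copy of the receive buffer limit
 * on a cacheline the receive fast path already touches, refreshed
 * from the (generalized) setsockopt path, so udp_recvmsg() never
 * reads the struct sock cacheline the BH keeps dirtying. */
struct sock_sketch {
        int sk_forward_alloc;   /* dirtied by the BH */
        int sk_rcvbuf;          /* same cacheline as above */
};

struct udp_sock_sketch {
        struct sock_sketch sk;
        /* ... fields already read on every udp_recvmsg() ... */
        int rcvbuf_cache;       /* hypothetical copy of sk.sk_rcvbuf */
};

/* Hypothetical hook, run whenever SO_RCVBUF changes. */
static void udp_set_rcvbuf_cache(struct udp_sock_sketch *up)
{
        up->rcvbuf_cache = up->sk.sk_rcvbuf;
}

The fast path would then test rcvbuf_cache instead of sk.sk_rcvbuf,
keeping the contended struct sock line out of the receiver's working set.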
Overall, the above gives a 10% peak throughput increase under UDP flood.
v1 -> v2:
- introduce and use a common helper to initialize the UDP v4/v6 sockets
(Kuniyuki)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/rds/tcp.c')
0 files changed, 0 insertions, 0 deletions