author    Liang Chen <liangchen.linux@gmail.com>    2023-12-15 11:30:11 +0800
committer David S. Miller <davem@davemloft.net>     2023-12-17 10:56:33 +0000
commit    f7dc3248dcfbdd81b5be64272f38b87a8e8085e7 (patch)
tree      d3b4e29600c5578796116a57600e473f954f85d5 /include
parent    8cfa2dee325f72f286f8f3210f867cbb981f2302 (diff)
skbuff: Optimization of SKB coalescing for page pool
In order to address the issues encountered with commit 1effe8ca4e34
("skbuff: fix coalescing for page_pool fragment recycling"), the
following combination of conditions was excluded from skb coalescing:

  from->pp_recycle = 1
  from->cloned = 1
  to->pp_recycle = 1

However, with page pool environments, the aforementioned combination
can be quite common (e.g. NetworkManager may lead to an additional
packet_type being registered, thus the cloning). In scenarios with a
higher number of small packets, it can significantly affect the
success rate of coalescing. For example, considering packets of 256
bytes in size, our comparison of the coalescing success rate is as
follows:

Without page pool: 70%
With page pool: 13%

Consequently, this has an impact on performance:

Without page pool: 2.57 Gbits/sec
With page pool: 2.26 Gbits/sec

Therefore, it seems worthwhile to optimize this scenario and enable
coalescing of this particular combination. To achieve this, we need
to ensure the correct increment of the "from" SKB page's page pool
reference count (pp_ref_count).

Following this optimization, the success rate of coalescing measured
in our environment has improved as follows:

With page pool: 60%

This success rate is approaching the rate achieved without using page
pool, and the performance has also been improved:

With page pool: 2.52 Gbits/sec

Below is the performance comparison for small packets before and
after this optimization. We observe no impact on packets larger than
4K.

packet size    before         after          improved
(bytes)        (Gbits/sec)    (Gbits/sec)

128             1.19           1.27           7.13%
256             2.26           2.52          11.75%
512             4.13           4.81          16.50%
1024            6.17           6.73           9.05%
2048           14.54          15.47           6.45%
4096           25.44          27.87           9.52%

Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Suggested-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Mina Almasry <almasrymina@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
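To make the mechanism concrete, below is a minimal sketch of how the
coalescing path could take the extra pp_ref_count on each fragment
page of the "from" SKB before its frags are handed over to "to". The
helper name skb_pp_ref_frags() and its placement are illustrative
assumptions, not the exact code added by this series:

/* Illustrative sketch only: take one extra pool reference per fragment
 * page of @skb so the pages stay valid while both the clone and the
 * coalesced skb reference them. A real caller would also handle pages
 * that do not belong to a page pool.
 */
#include <linux/skbuff.h>
#include <net/page_pool/helpers.h>

static void skb_pp_ref_frags(struct sk_buff *skb)
{
	struct skb_shared_info *shinfo = skb_shinfo(skb);
	int i;

	for (i = 0; i < shinfo->nr_frags; i++) {
		/* pp_ref_count lives on the head page of a compound page */
		struct page *page =
			compound_head(skb_frag_page(&shinfo->frags[i]));

		page_pool_ref_page(page);
	}
}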
Diffstat (limited to 'include')
-rw-r--r--  include/net/page_pool/helpers.h  5
1 file changed, 5 insertions, 0 deletions
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index ead2c0d24b2c..1d397c1a0043 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -277,6 +277,11 @@ static inline long page_pool_unref_page(struct page *page, long nr)
 	return ret;
 }
 
+static inline void page_pool_ref_page(struct page *page)
+{
+	atomic_long_inc(&page->pp_ref_count);
+}
+
 static inline bool page_pool_is_last_ref(struct page *page)
 {
 	/* If page_pool_unref_page() returns 0, we were the last user */
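For context beyond the hunk shown, the new helper is the counterpart
of page_pool_unref_page() above: every reference taken must eventually
be dropped so that pp_ref_count can reach zero and the page can be
recycled. A hedged usage sketch follows (the function and flow are
illustrative, not part of this patch):

#include <net/page_pool/helpers.h>

/* Illustrative only: hold an extra pool reference across a window in
 * which the page is shared with a second user, then release it.
 */
static void pp_page_share_example(struct page *page)
{
	page_pool_ref_page(page);		/* pp_ref_count++ */

	/* ... the page is visible to another user here ... */

	if (page_pool_unref_page(page, 1) == 0) {
		/* we were the last user; a real caller would hand the
		 * page back to its pool at this point
		 */
	}
}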