author      Yunsheng Lin <linyunsheng@huawei.com>	2022-01-07 17:00:42 +0800
committer   Jakub Kicinski <kuba@kernel.org>	2022-01-09 16:45:27 -0800
commit      07b17f0f7485bcbc7902cf6f56a89f5b716344bd (patch)
tree        69c4c833f9654cc50f80a7bd382cc33ba672a626 /net
parent      6738fc77ffa2575f38b88d85583450184d59ed94 (diff)
page_pool: remove spinlock in page_pool_refill_alloc_cache()
page_pool_refill_alloc_cache() is only called by
__page_pool_get_cached(), which, as its comment notes, assumes
non-concurrent access, and ptr_ring already allows concurrent access
between its consumer and producer. The spinlock in
page_pool_refill_alloc_cache() is therefore unnecessary, so remove it.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/r/20220107090042.13605-1-linyunsheng@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
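For context, a minimal sketch of the two ptr_ring consume flavours this change relies on (modeled on include/linux/ptr_ring.h, simplified rather than quoted verbatim): ptr_ring_consume() takes consumer_lock only to serialize multiple consumers, while __ptr_ring_consume() leaves that serialization to the caller, which page_pool already provides through its non-concurrent (softirq/NAPI) allocation path.

/* Sketch, modeled on include/linux/ptr_ring.h (simplified, not verbatim).
 * consumer_lock only serializes multiple consumers; the producer side has
 * its own producer_lock and may run concurrently with a single consumer.
 */
static inline void *ptr_ring_consume(struct ptr_ring *r)
{
	void *ptr;

	spin_lock(&r->consumer_lock);	/* multi-consumer serialization */
	ptr = __ptr_ring_consume(r);	/* single-consumer step, lockless */
	spin_unlock(&r->consumer_lock);

	return ptr;
}

Since the page_pool refill path is the ring's only consumer and is itself serialized, calling __ptr_ring_consume() directly is sufficient, which is what the hunks below keep while dropping the lock.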
Diffstat (limited to 'net')
-rw-r--r--	net/core/page_pool.c	4
1 file changed, 0 insertions, 4 deletions
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 7347d5c7dbe0..bd62c01a2ec3 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -130,9 +130,6 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
 	pref_nid = numa_mem_id(); /* will be zero like page_to_nid() */
 #endif
 
-	/* Slower-path: Get pages from locked ring queue */
-	spin_lock(&r->consumer_lock);
-
 	/* Refill alloc array, but only if NUMA match */
 	do {
 		page = __ptr_ring_consume(r);
@@ -157,7 +154,6 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
 	if (likely(pool->alloc.count > 0))
 		page = pool->alloc.cache[--pool->alloc.count];
 
-	spin_unlock(&r->consumer_lock);
 	return page;
 }
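For reference, a paraphrased sketch of the only caller, __page_pool_get_cached() in net/core/page_pool.c around the time of this patch (not verbatim), which establishes the non-concurrent access assumption the commit message cites:

/* Paraphrased sketch of __page_pool_get_cached(), not the verbatim source.
 * The page_pool API requires this path to be serialized (e.g. run from the
 * pool's NAPI/softirq context), so the alloc.cache array and the ring's
 * consumer index are only ever touched by one context at a time.
 */
static struct page *__page_pool_get_cached(struct page_pool *pool)
{
	struct page *page;

	/* Caller MUST guarantee safe non-concurrent access, e.g. softirq */
	if (likely(pool->alloc.count)) {
		/* Fast-path: pop from the per-pool alloc array */
		page = pool->alloc.cache[--pool->alloc.count];
	} else {
		/* Slow-path: refill the array from the ptr_ring consumer side */
		page = page_pool_refill_alloc_cache(pool);
	}

	return page;
}

With only this single, serialized consumer, r->consumer_lock never has a second contender, so removing it preserves the existing guarantees and saves a lock/unlock pair on every refill.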