author     Xiongwei Song <xiongwei.song@windriver.com>    2024-04-04 13:58:25 +0800
committer  Vlastimil Babka <vbabka@suse.cz>               2024-04-04 11:10:35 +0200
commit     721a2f8be134f9bb61f4358cbb7ae394eaf74573 (patch)
tree       0262409ca90ffdd8e4db9fe837c9e1cb0b309208 /mm/slub.c
parent     acc8f4dbf1b3293dc0b5a51eadde086123806c0f (diff)
mm/slub: add slub_get_cpu_partial() helper
Add slub_get_cpu_partial() and a dummy counterpart to help improve
get_partial_node(). This makes it possible to remove the #ifdef
CONFIG_SLUB_CPU_PARTIAL block there and to simplify the logic for filling
the cpu partial list.
Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Diffstat (limited to 'mm/slub.c')
-rw-r--r--    mm/slub.c    10
1 files changed, 10 insertions, 0 deletions
diff --git a/mm/slub.c b/mm/slub.c
index 2e0351066c5d..936f2b13a78e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -604,11 +604,21 @@ static void slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects)
 	nr_slabs = DIV_ROUND_UP(nr_objects * 2, oo_objects(s->oo));
 	s->cpu_partial_slabs = nr_slabs;
 }
+
+static inline unsigned int slub_get_cpu_partial(struct kmem_cache *s)
+{
+	return s->cpu_partial_slabs;
+}
 #else
 static inline void
 slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects)
 {
 }
+
+static inline unsigned int slub_get_cpu_partial(struct kmem_cache *s)
+{
+	return 0;
+}
 #endif	/* CONFIG_SLUB_CPU_PARTIAL */
 
 /*
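A getter with a dummy compiled-out variant, as added above, is a common way to keep #ifdefs out of callers: the real variant returns s->cpu_partial_slabs, the dummy returns 0, and a single comparison in a caller such as get_partial_node() then behaves correctly under either configuration. The standalone sketch below only illustrates that pattern; the userspace framing, the simplified struct kmem_cache, and the fill loop with its values in main() are assumptions for illustration, not code from this patch or the kernel. Only the two slub_get_cpu_partial() bodies mirror the diff.

/*
 * Illustrative userspace sketch of the pattern used by this patch.
 * Build with:    cc -DCONFIG_SLUB_CPU_PARTIAL demo.c
 * or without -DCONFIG_SLUB_CPU_PARTIAL to see the compiled-out behaviour.
 */
#include <stdio.h>

struct kmem_cache {			/* simplified stand-in, assumption */
	unsigned int cpu_partial_slabs;
};

#ifdef CONFIG_SLUB_CPU_PARTIAL
static inline unsigned int slub_get_cpu_partial(struct kmem_cache *s)
{
	return s->cpu_partial_slabs;
}
#else
/* Dummy: returning 0 lets callers use one expression for both configs. */
static inline unsigned int slub_get_cpu_partial(struct kmem_cache *s)
{
	return 0;
}
#endif

int main(void)
{
	struct kmem_cache s = { .cpu_partial_slabs = 8 };
	unsigned int partial_slabs = 0;

	/*
	 * Stand-in for a fill loop like the one in get_partial_node():
	 * stop once more than half of the allowed cpu partial slabs have
	 * been taken.  With the dummy helper (0), the loop stops after the
	 * first iteration, which previously required an explicit
	 * #ifdef/#else in the caller.
	 */
	for (;;) {
		partial_slabs++;
		if (partial_slabs > slub_get_cpu_partial(&s) / 2)
			break;
	}
	printf("stopped after %u partial slab(s)\n", partial_slabs);
	return 0;
}

The point of the split is that callers never spell out CONFIG_SLUB_CPU_PARTIAL themselves; the configuration choice is encoded once, in which helper body gets compiled.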