author		Oscar Salvador <osalvador@suse.de>	2024-04-04 09:07:01 +0200
committer	Andrew Morton <akpm@linux-foundation.org>	2024-04-16 15:39:49 -0700
commit		718b1f3373a7999f77e617c17abdcb98a3c001ea
tree		255876ea6390aa48ce05da82f8938452a6e7ab5a /mm
parent		f5c12105c15f0ddf0ff37646290568dd986fa2f3
mm,page_owner: fix accounting of pages when migrating
Upon migration, newly allocated pages are given the handle of the old
pages. This is problematic because it means that for the stack which
allocated the old page, we will be subtracting both the old page and the
new one when those pages are freed, creating an accounting imbalance.
There is an interest in keeping it that way, as otherwise the output would
be biased towards migration stacks should those operations occur often, but
that is not really helpful.
The link from the new page to the old stack is performed by calling
__update_page_owner_handle() in __folio_copy_owner(). The only thing that
is left is to link the migrate stack to the old page, so the old page will
be subtracted from the migrate stack, thereby avoiding any possible
imbalance.
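
To make the accounting concrete, here is a minimal userspace sketch of the
bookkeeping described above (illustrative only: the two-stack model and the
names stack_count, page_alloc() and page_free() are hypothetical, not kernel
APIs):

#include <stdio.h>

/* Hypothetical model: one live-page counter per allocation stack. */
enum { ALLOC_STACK, MIGRATE_STACK, NR_STACKS };
static int stack_count[NR_STACKS];

struct page { int handle; };	/* stack the page is currently attributed to */

static struct page page_alloc(int stack)
{
	stack_count[stack]++;		/* allocation adds one page to its stack */
	return (struct page){ .handle = stack };
}

static void page_free(struct page *p)
{
	stack_count[p->handle]--;	/* free subtracts it from the recorded stack */
}

int main(void)
{
	struct page old = page_alloc(ALLOC_STACK);	/* original allocation */
	struct page new = page_alloc(MIGRATE_STACK);	/* page allocated by migration */

	/* Migration keeps attributing the new page to the original stack ... */
	new.handle = old.handle;
	/* ... and, with this fix, attributes the old page to the migrate stack. */
	old.handle = MIGRATE_STACK;

	page_free(&old);	/* old page is freed right after migration */
	page_free(&new);	/* new page is freed eventually */

	/*
	 * Both counters return to 0. Without the old.handle reassignment,
	 * ALLOC_STACK would end at -1 and MIGRATE_STACK at +1.
	 */
	printf("alloc: %d, migrate: %d\n",
	       stack_count[ALLOC_STACK], stack_count[MIGRATE_STACK]);
	return 0;
}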
Link: https://lkml.kernel.org/r/20240404070702.2744-4-osalvador@suse.de
Fixes: 217b2119b9e2 ("mm,page_owner: implement the tracking of the stacks count")
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Alexander Potapenko <glider@google.com>
Cc: Alexandre Ghiti <alexghiti@rivosinc.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Marco Elver <elver@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--	mm/page_owner.c | 15
1 file changed, 15 insertions, 0 deletions
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 5df0d6892bdc..b4476f45b376 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -366,9 +366,12 @@ void __split_page_owner(struct page *page, int old_order, int new_order)
 
 void __folio_copy_owner(struct folio *newfolio, struct folio *old)
 {
+	int i;
 	struct page_ext *old_ext;
 	struct page_ext *new_ext;
 	struct page_owner *old_page_owner;
+	struct page_owner *new_page_owner;
+	depot_stack_handle_t migrate_handle;
 
 	old_ext = page_ext_get(&old->page);
 	if (unlikely(!old_ext))
@@ -381,6 +384,8 @@ void __folio_copy_owner(struct folio *newfolio, struct folio *old)
 	}
 
 	old_page_owner = get_page_owner(old_ext);
+	new_page_owner = get_page_owner(new_ext);
+	migrate_handle = new_page_owner->handle;
 	__update_page_owner_handle(new_ext, old_page_owner->handle,
 				   old_page_owner->order, old_page_owner->gfp_mask,
 				   old_page_owner->last_migrate_reason,
@@ -395,6 +400,16 @@ void __folio_copy_owner(struct folio *newfolio, struct folio *old)
 					  old_page_owner->free_pid,
 					  old_page_owner->free_tgid,
 					  old_page_owner->free_ts_nsec);
+	/*
+	 * We linked the original stack to the new folio, we need to do the same
+	 * for the new one and the old folio otherwise there will be an imbalance
+	 * when subtracting those pages from the stack.
+	 */
+	for (i = 0; i < (1 << new_page_owner->order); i++) {
+		old_page_owner->handle = migrate_handle;
+		old_ext = page_ext_next(old_ext);
+		old_page_owner = get_page_owner(old_ext);
+	}
 
 	page_ext_put(new_ext);
 	page_ext_put(old_ext);