| author | Matthew Wilcox (Oracle) <willy@infradead.org> | 2022-02-13 16:38:07 -0500 |
|---|---|---|
| committer | Matthew Wilcox (Oracle) <willy@infradead.org> | 2022-03-21 12:59:02 -0400 |
| commit | b4545f46533b7e69cb20e05c9fe987be76e1a3da (patch) | |
| tree | 5a100229dd72f8bcbfffac9efa470edced0fa094 /mm | |
| parent | d6c75dc22c755c567838f12f12a16f2a323ebd4e (diff) | |
mm/truncate: Convert __invalidate_mapping_pages() to use a folio
Now we can call mapping_evict_folio() instead of invalidate_inode_page()
and save a few calls to compound_head().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
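The compound_head() saving is structural: after the folio conversion, the legacy page entry points are thin compatibility shims that call page_folio() (which resolves compound_head()) before handing off to the folio function, so every page-based call in a loop repeats that lookup. Below is a minimal sketch of the shim pattern, modelled on the mm/folio-compat.c wrappers of this era rather than quoted from them:

```c
/*
 * Sketch of a page->folio compatibility shim. Each call through the
 * page API pays for a page_folio() (i.e. compound_head()) resolution;
 * code that already holds a struct folio can call folio_unlock()
 * directly and skip it.
 */
void unlock_page(struct page *page)
{
	return folio_unlock(page_folio(page));
}
```

Since find_lock_entries() already fills fbatch with folios, working on fbatch.folios[i] directly avoids repeating that resolution for each helper called on the entry.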
Diffstat (limited to 'mm')
| -rw-r--r-- | mm/truncate.c | 18 |
|---|---|---|

1 file changed, 9 insertions(+), 9 deletions(-)
```diff
diff --git a/mm/truncate.c b/mm/truncate.c
index 2fb10735aab4..a8b0243eadf6 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -507,27 +507,27 @@ static unsigned long __invalidate_mapping_pages(struct address_space *mapping,
 	folio_batch_init(&fbatch);
 	while (find_lock_entries(mapping, index, end, &fbatch, indices)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
-			struct page *page = &fbatch.folios[i]->page;
+			struct folio *folio = fbatch.folios[i];
 
-			/* We rely upon deletion not changing page->index */
+			/* We rely upon deletion not changing folio->index */
 			index = indices[i];
 
-			if (xa_is_value(page)) {
+			if (xa_is_value(folio)) {
 				count += invalidate_exceptional_entry(mapping,
 								      index,
-								      page);
+								      folio);
 				continue;
 			}
-			index += thp_nr_pages(page) - 1;
+			index += folio_nr_pages(folio) - 1;
 
-			ret = invalidate_inode_page(page);
-			unlock_page(page);
+			ret = mapping_evict_folio(mapping, folio);
+			folio_unlock(folio);
 			/*
-			 * Invalidation is a hint that the page is no longer
+			 * Invalidation is a hint that the folio is no longer
 			 * of interest and try to speed up its reclaim.
 			 */
 			if (!ret) {
-				deactivate_file_page(page);
+				deactivate_file_page(&folio->page);
 				/* It is likely on the pagevec of a remote CPU */
 				if (nr_pagevec)
 					(*nr_pagevec)++;
```
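For context, the parent patch in this series split invalidate_inode_page() so that mapping_evict_folio() holds the real logic, leaving the page function as roughly the following wrapper (a sketch of the shape, not the verbatim source). That is why the loop above can call mapping_evict_folio() directly: it already has both the folio and the mapping in hand.

```c
/*
 * Rough shape of invalidate_inode_page() after the parent patch
 * (d6c75dc22c75) split out mapping_evict_folio(); a sketch, not a
 * verbatim copy of mm/truncate.c.
 */
int invalidate_inode_page(struct page *page)
{
	struct folio *folio = page_folio(page);
	struct address_space *mapping = folio_mapping(folio);

	/* The folio may have been truncated before it was locked */
	if (!mapping)
		return 0;
	return mapping_evict_folio(mapping, folio);
}
```

One page-based call remains: deactivate_file_page() had not been converted at this point, so the folio is handed back as &folio->page. That direction is free, since a folio always refers to a head page and needs no compound_head() walk.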