author | Miaohe Lin <linmiaohe@huawei.com> | 2022-04-28 23:16:06 -0700
committer | akpm <akpm@linux-foundation.org> | 2022-04-28 23:16:06 -0700
commit | bc78b5ed9ff2359f8af1454ee98670b1dc79f06e
tree | 2e7f796b1e20398a62d41be8c268b9a64c4856f9 /mm
parent | da63dc84befaa9e6079a0bc363ff0eaa975f9073
mm/mempolicy: clean up the code logic in queue_pages_pte_range
Since commit e5947d23edd8 ("mm: mempolicy: don't have to split pmd for
huge zero page"), THP is never split in queue_pages_pmd(), so the return
value 2 can no longer occur. Remove the unnecessary ret != 2 check and
clean up the relevant comment. This is a minor readability improvement.
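For illustration, here is a minimal userspace sketch of the simplified flow
(the *_model helpers below are hypothetical stand-ins, not the kernel
functions): since queue_pages_pmd() can only return 0, 1 or -EIO, the caller
can forward its result directly instead of testing for 2 and falling through
to the pte walk.

```c
#include <stdio.h>
#include <errno.h>

/*
 * Userspace model of the return-value contract after commit e5947d23edd8:
 * the pmd path can only return 0, 1, or -EIO -- never 2 ("THP was split").
 * The *_model names are hypothetical, for illustration only.
 */
static int queue_pages_pmd_model(int on_right_node, int unmovable)
{
	if (!on_right_node)
		return -EIO;	/* policy violated and MPOL_MF_STRICT set */
	if (unmovable)
		return 1;	/* unmovable page with MPOL_MF_MOVE* set */
	return 0;		/* queued successfully or special page */
}

static int queue_pages_pte_range_model(int is_huge, int on_right_node,
				       int unmovable)
{
	/* No "ret != 2" case left: just return what the pmd path returns. */
	if (is_huge)
		return queue_pages_pmd_model(on_right_node, unmovable);

	/* ... normal pte walk for the non-huge case ... */
	return 0;
}

int main(void)
{
	printf("%d\n", queue_pages_pte_range_model(1, 1, 0));	/* 0 */
	printf("%d\n", queue_pages_pte_range_model(1, 1, 1));	/* 1 */
	printf("%d\n", queue_pages_pte_range_model(1, 0, 0));	/* -EIO */
	return 0;
}
```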
Link: https://lkml.kernel.org/r/20220419122234.45083-1-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/mempolicy.c | 12
1 file changed, 3 insertions, 9 deletions
```diff
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 8c74107a2b15..0288ffaea064 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -440,12 +440,11 @@ static inline bool queue_pages_required(struct page *page,
 }
 
 /*
- * queue_pages_pmd() has four possible return values:
+ * queue_pages_pmd() has three possible return values:
  * 0 - pages are placed on the right node or queued successfully, or
  *     special page is met, i.e. huge zero page.
  * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
  *     specified.
- * 2 - THP was split.
  * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an
  *        existing page was already on a node that does not follow the
  *        policy.
@@ -507,18 +506,13 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	struct page *page;
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = qp->flags;
-	int ret;
 	bool has_unmovable = false;
 	pte_t *pte, *mapped_pte;
 	spinlock_t *ptl;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
-	if (ptl) {
-		ret = queue_pages_pmd(pmd, ptl, addr, end, walk);
-		if (ret != 2)
-			return ret;
-	}
-	/* THP was split, fall through to pte walk */
+	if (ptl)
+		return queue_pages_pmd(pmd, ptl, addr, end, walk);
 
 	if (pmd_trans_unstable(pmd))
 		return 0;
```