author		David Hildenbrand <david@redhat.com>	2022-05-09 18:20:42 -0700
committer	akpm <akpm@linux-foundation.org>	2022-05-09 18:20:42 -0700
commit		b51ad4f8679e50284ce35ff671767f8f0309b64a
tree		245c63fc1dc35116029a91ba2a824d17d069745d	/mm/memory.c
parent		623a1ddfeb232526275ddd0c8378771e6712aad4
mm/memory: slightly simplify copy_present_pte()
Let's move the pinning check into the caller, to simplify return code
logic and prepare for further changes: relocating the
page_needs_cow_for_dma() check into rmap handling code.
While at it, remove the unused pte parameter and simplify the comments a
bit.
No functional change intended.
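
For illustration only (not part of the commit): a minimal, self-contained C sketch of the return-code simplification described above. The struct and helpers (needs_copy(), copy_for_child(), reuse_page()) are hypothetical stand-ins, not kernel code. Before the change, the copy helper did the pinning check itself and signalled "reuse the page" with a positive return value, so the caller had to interpret three cases; after the change, the caller does the check up front and simply propagates the helper's 0 / -errno result.

#include <stdio.h>

/*
 * Illustrative sketch only -- these types and helpers are hypothetical
 * stand-ins, not the kernel's. They model the calling convention, not
 * the real copy-on-fork logic.
 */
struct page { int maybe_pinned; };

static int needs_copy(const struct page *page)
{
	return page->maybe_pinned;	/* stands in for page_needs_cow_for_dma() */
}

static int copy_for_child(struct page *page)
{
	(void)page;
	return 0;			/* 0 = copied; a real helper could return -ENOMEM etc. */
}

static int reuse_page(struct page *page)
{
	(void)page;
	return 0;			/* share the existing page with the child */
}

/* Before: the helper did the check and returned 1 / 0 / -errno. */
static int copy_helper_old(struct page *page)
{
	if (!needs_copy(page))
		return 1;		/* tell the caller to reuse the page */
	return copy_for_child(page);
}

static int caller_old(struct page *page)
{
	int ret = copy_helper_old(page);

	if (ret <= 0)
		return ret;		/* copied, or failed */
	return reuse_page(page);	/* positive: fall through and reuse */
}

/* After: the caller does the check and just propagates the result. */
static int caller_new(struct page *page)
{
	if (needs_copy(page))
		return copy_for_child(page);
	return reuse_page(page);
}

int main(void)
{
	struct page p = { .maybe_pinned = 1 };

	printf("old caller: %d, new caller: %d\n", caller_old(&p), caller_new(&p));
	return 0;
}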
Link: https://lkml.kernel.org/r/20220428083441.37290-4-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Don Dutile <ddutile@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jann Horn <jannh@google.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Khalid Aziz <khalid.aziz@oracle.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Liang Zhang <zhangliang5@huawei.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Oded Gabbay <oded.gabbay@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Pedro Demarchi Gomes <pedrodemargomes@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/memory.c')
-rw-r--r--	mm/memory.c	53
1 file changed, 16 insertions(+), 37 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index e873a6143181..9bdfce15e98f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -862,19 +862,11 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 }
 
 /*
- * Copy a present and normal page if necessary.
+ * Copy a present and normal page.
  *
- * NOTE! The usual case is that this doesn't need to do
- * anything, and can just return a positive value. That
- * will let the caller know that it can just increase
- * the page refcount and re-use the pte the traditional
- * way.
- *
- * But _if_ we need to copy it because it needs to be
- * pinned in the parent (and the child should get its own
- * copy rather than just a reference to the same page),
- * we'll do that here and return zero to let the caller
- * know we're done.
+ * NOTE! The usual case is that this isn't required;
+ * instead, the caller can just increase the page refcount
+ * and re-use the pte the traditional way.
  *
  * And if we need a pre-allocated page but don't yet have
  * one, return a negative error to let the preallocation
@@ -884,25 +876,10 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 static inline int
 copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 		  pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
-		  struct page **prealloc, pte_t pte, struct page *page)
+		  struct page **prealloc, struct page *page)
 {
 	struct page *new_page;
-
-	/*
-	 * What we want to do is to check whether this page may
-	 * have been pinned by the parent process. If so,
-	 * instead of wrprotect the pte on both sides, we copy
-	 * the page immediately so that we'll always guarantee
-	 * the pinned page won't be randomly replaced in the
-	 * future.
-	 *
-	 * The page pinning checks are just "has this mm ever
-	 * seen pinning", along with the (inexact) check of
-	 * the page count. That might give false positives for
-	 * for pinning, but it will work correctly.
-	 */
-	if (likely(!page_needs_cow_for_dma(src_vma, page)))
-		return 1;
+	pte_t pte;
 
 	new_page = *prealloc;
 	if (!new_page)
@@ -944,14 +921,16 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	struct page *page;
 
 	page = vm_normal_page(src_vma, addr, pte);
-	if (page) {
-		int retval;
-
-		retval = copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
-					   addr, rss, prealloc, pte, page);
-		if (retval <= 0)
-			return retval;
-
+	if (page && unlikely(page_needs_cow_for_dma(src_vma, page))) {
+		/*
+		 * If this page may have been pinned by the parent process,
+		 * copy the page immediately for the child so that we'll always
+		 * guarantee the pinned page won't be randomly replaced in the
+		 * future.
+		 */
+		return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
+					 addr, rss, prealloc, page);
+	} else if (page) {
 		get_page(page);
 		page_dup_rmap(page, false);
 		rss[mm_counter(page)]++;