author		Kirill A. Shutemov <kirill.shutemov@linux.intel.com>	2017-02-24 14:57:45 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>		2017-02-24 17:46:55 -0800
commit		ace71a19cec5eb430207c3269d8a2683f0574306 (patch)
tree		a4008d66fc253ba7a0b15c80f9df6306aa409ec3 /include/linux/rmap.h
parent		c8394812e56fbc334d815226268cea69b447d461 (diff)
mm: introduce page_vma_mapped_walk()
Introduce a new interface to check if a page is mapped into a vma. It
aims to address shortcomings of page_check_address{,_transhuge}.
The existing interface cannot handle PTE-mapped THPs: it finds only the
first PTE and leaves the remaining mappings unnoticed.
page_vma_mapped_walk() iterates over all possible mappings of the page in
the vma.
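As a minimal sketch of how a caller might use the new interface (inferred
only from the declarations added by this patch; the surrounding variables
page, vma and address are assumed to be in scope):

	struct page_vma_mapped_walk pvmw = {
		.page = page,
		.vma = vma,
		.address = address,
	};

	while (page_vma_mapped_walk(&pvmw)) {
		/*
		 * Each iteration reports one mapping of the page:
		 * pvmw.pte and/or pvmw.pmd point at it and pvmw.ptl
		 * is held while the entry is inspected.
		 */
	}

A caller that stops iterating early would presumably release the pte map
and lock via page_vma_mapped_walk_done(), declared below.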
Link: http://lkml.kernel.org/r/20170129173858.45174-3-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include/linux/rmap.h')
-rw-r--r--	include/linux/rmap.h	26
1 file changed, 26 insertions, 0 deletions
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 15321fb1df6b..b76343610653 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -9,6 +9,7 @@
 #include <linux/mm.h>
 #include <linux/rwsem.h>
 #include <linux/memcontrol.h>
+#include <linux/highmem.h>
 
 /*
  * The anon_vma heads a list of private "related" vmas, to scan if
@@ -232,6 +233,31 @@ static inline bool page_check_address_transhuge(struct page *page,
 }
 #endif
 
+/* Avoid racy checks */
+#define PVMW_SYNC		(1 << 0)
+/* Look for migration entries rather than present PTEs */
+#define PVMW_MIGRATION		(1 << 1)
+
+struct page_vma_mapped_walk {
+	struct page *page;
+	struct vm_area_struct *vma;
+	unsigned long address;
+	pmd_t *pmd;
+	pte_t *pte;
+	spinlock_t *ptl;
+	unsigned int flags;
+};
+
+static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
+{
+	if (pvmw->pte)
+		pte_unmap(pvmw->pte);
+	if (pvmw->ptl)
+		spin_unlock(pvmw->ptl);
+}
+
+bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
+
 /*
  * Used by swapoff to help locate where page is expected in vma.
  */
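For illustration, a hypothetical helper built on the interface above; the
function name and its exact semantics are an assumption for this sketch,
not part of the patch:

	/*
	 * Hypothetical example, not part of this patch: report whether
	 * the page is mapped at @address in @vma using the new walker.
	 */
	static bool page_mapped_in_vma_example(struct page *page,
					       struct vm_area_struct *vma,
					       unsigned long address)
	{
		struct page_vma_mapped_walk pvmw = {
			.page = page,
			.vma = vma,
			.address = address,
			.flags = PVMW_SYNC,	/* avoid racy checks */
		};

		if (!page_vma_mapped_walk(&pvmw))
			return false;
		/* A mapping was found; drop the pte map and lock. */
		page_vma_mapped_walk_done(&pvmw);
		return true;
	}

Per the flag comments in the diff, passing PVMW_MIGRATION instead would
make the walk look for migration entries rather than present PTEs.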