author     Junaid Shahid <junaids@google.com>        2019-11-04 20:26:00 +0100
committer  Thomas Gleixner <tglx@linutronix.de>      2019-11-04 20:26:00 +0100
commit     1aa9b9572b10529c2e64e2b8f44025d86e124308 (patch)
tree       8986b81fe6ccb070a657d3f37b1963a790890c21 /arch/x86/kvm/mmu.h
parent     c57c80467f90e5504c8df9ad3555d2c78800bf94 (diff)
kvm: x86: mmu: Recovery of shattered NX large pages
The page table pages corresponding to broken-down large pages are zapped in
FIFO order, so that the large page can potentially be recovered, if it is
no longer being used for execution. This removes the performance penalty
for walking deeper EPT page tables.
By default, one large page will last about one hour once the guest
reaches a steady state.
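
As an illustrative model only (plain C with hypothetical names such as
track_shattered() and recovery_pass(); the real worker lives outside this
diffstat-limited view, in arch/x86/kvm/mmu.c), the FIFO policy amounts to
appending each broken-down large page to a list when it is shattered and
letting a periodic pass zap the oldest fraction of that list:

/*
 * Minimal userspace sketch of FIFO recovery of shattered NX large pages.
 * Not kernel code: the structures and function names are hypothetical and
 * only mirror the idea in the commit message (oldest entries are zapped
 * first, so a large page that is no longer executed from can be rebuilt
 * on the next fault).
 */
#include <stdio.h>
#include <stdlib.h>

struct shattered_page {
	unsigned long gfn;		/* guest frame the large page covered */
	struct shattered_page *next;	/* FIFO: new entries appended at the tail */
};

static struct shattered_page *fifo_head, **fifo_tail = &fifo_head;
static unsigned long nr_shattered;

/* Track a large page that was broken down because of the NX workaround. */
static void track_shattered(unsigned long gfn)
{
	struct shattered_page *sp = malloc(sizeof(*sp));

	if (!sp)
		abort();
	sp->gfn = gfn;
	sp->next = NULL;
	*fifo_tail = sp;
	fifo_tail = &sp->next;
	nr_shattered++;
}

/*
 * One periodic recovery pass: zap the oldest 1/ratio of the tracked pages.
 * If the guest no longer executes from them, a later fault can map the
 * region with a large page again.
 */
static void recovery_pass(unsigned long ratio)
{
	unsigned long to_zap = ratio ? nr_shattered / ratio : 0;

	while (to_zap-- && fifo_head) {
		struct shattered_page *sp = fifo_head;

		fifo_head = sp->next;
		if (!fifo_head)
			fifo_tail = &fifo_head;
		nr_shattered--;
		printf("zapping shadow pages for gfn %#lx\n", sp->gfn);
		free(sp);
	}
}

int main(void)
{
	for (unsigned long gfn = 0; gfn < 8; gfn++)
		track_shattered(gfn << 9);

	/* With ratio 4, each pass reclaims a quarter of the list, oldest first. */
	recovery_pass(4);
	recovery_pass(4);
	return 0;
}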
Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Diffstat (limited to 'arch/x86/kvm/mmu.h')
-rw-r--r--   arch/x86/kvm/mmu.h | 4 ++++
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 11f8ec89433b..d55674f44a18 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -210,4 +210,8 @@ void kvm_mmu_gfn_allow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 				    struct kvm_memory_slot *slot, u64 gfn);
 int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu);
+
+int kvm_mmu_post_init_vm(struct kvm *kvm);
+void kvm_mmu_pre_destroy_vm(struct kvm *kvm);
+
 #endif
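
The two new declarations are the only change in this header. As a hedged
sketch (the actual call sites are not part of this header-only diff and are
assumed to live in arch/x86/kvm/x86.c), they would be invoked from the
per-VM lifecycle hooks so that the recovery worker starts only after the VM
is fully initialized and stops before MMU state is torn down:

/* Hypothetical wiring of the new hooks; call sites are an assumption. */
int kvm_arch_post_init_vm(struct kvm *kvm)
{
	/* Start background recovery of shattered NX large pages. */
	return kvm_mmu_post_init_vm(kvm);
}

void kvm_arch_pre_destroy_vm(struct kvm *kvm)
{
	/* Stop the recovery worker before MMU state is destroyed. */
	kvm_mmu_pre_destroy_vm(kvm);
}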