author     Wanpeng Li <wanpengli@tencent.com>     2019-05-17 16:49:50 +0800
committer  Paolo Bonzini <pbonzini@redhat.com>    2019-05-24 21:27:08 +0200
commit     541e886f7972cc647804dbb4909189e67987a945
tree       12643c41feec9bdf6e0fc33130bbd349eb4c8f9d /arch
parent     d30b214d1d0addb7b2c9c78178d1501cd39a01fb
KVM: nVMX: Fix using __this_cpu_read() in preemptible context
BUG: using __this_cpu_read() in preemptible [00000000] code: qemu-system-x86/4590
caller is nested_vmx_enter_non_root_mode+0xebd/0x1790 [kvm_intel]
CPU: 4 PID: 4590 Comm: qemu-system-x86 Tainted: G OE 5.1.0-rc4+ #1
Call Trace:
dump_stack+0x67/0x95
__this_cpu_preempt_check+0xd2/0xe0
nested_vmx_enter_non_root_mode+0xebd/0x1790 [kvm_intel]
nested_vmx_run+0xda/0x2b0 [kvm_intel]
handle_vmlaunch+0x13/0x20 [kvm_intel]
vmx_handle_exit+0xbd/0x660 [kvm_intel]
kvm_arch_vcpu_ioctl_run+0xa2c/0x1e50 [kvm]
kvm_vcpu_ioctl+0x3ad/0x6d0 [kvm]
do_vfs_ioctl+0xa5/0x6e0
ksys_ioctl+0x6d/0x80
__x64_sys_ioctl+0x1a/0x20
do_syscall_64+0x6f/0x6c0
entry_SYSCALL_64_after_hwframe+0x49/0xbe
Preemption must be disabled while accessing a per-cpu variable; this patch extends the
preemption-disabled region so that it covers the __this_cpu_read() call. A minimal sketch of
the general pattern follows the tags below.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Fixes: 52017608da33 ("KVM: nVMX: add option to perform early consistency checks via H/W")
Cc: stable@vger.kernel.org
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
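For context, CONFIG_DEBUG_PREEMPT raises this warning whenever a raw per-cpu accessor such as
__this_cpu_read() runs with preemption enabled, because the task could migrate to another CPU
between resolving the per-cpu address and using the value. The sketch below is illustrative
only and is not taken from the patch; the variable example_counter and the helper
example_read_counter() are made-up names.

/*
 * Illustrative only -- not part of the patch. Shows the generic rule the
 * debug check enforces: bracket raw per-cpu reads with preempt_disable()/
 * preempt_enable() so the task cannot migrate mid-access.
 */
#include <linux/percpu.h>
#include <linux/preempt.h>

static DEFINE_PER_CPU(unsigned long, example_counter);

static unsigned long example_read_counter(void)
{
	unsigned long val;

	preempt_disable();                              /* pin the task to this CPU */
	val = __this_cpu_read(example_counter);         /* safe: no migration possible */
	preempt_enable();                               /* leave the critical region */

	return val;
}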
Diffstat (limited to 'arch')
-rw-r--r--  arch/x86/kvm/vmx/nested.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 6b450839c766..1032f068f0b9 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2784,14 +2784,13 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 		: "cc", "memory"
 	);
 
-	preempt_enable();
-
 	if (vmx->msr_autoload.host.nr)
 		vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
 	if (vmx->msr_autoload.guest.nr)
 		vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
 
 	if (vm_fail) {
+		preempt_enable();
 		WARN_ON_ONCE(vmcs_read32(VM_INSTRUCTION_ERROR) !=
 			     VMXERR_ENTRY_INVALID_CONTROL_FIELD);
 		return 1;
@@ -2803,6 +2802,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 	local_irq_enable();
 	if (hw_breakpoint_active())
 		set_debugreg(__this_cpu_read(cpu_dr7), 7);
+	preempt_enable();
 
 	/*
 	 * A non-failing VMEntry means we somehow entered guest mode with
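Design-wise, the change moves preempt_enable() rather than adding a new preempt_disable():
on the vm_fail error path preemption is re-enabled early, and on the success path it is
re-enabled only after set_debugreg(__this_cpu_read(cpu_dr7), 7), so every per-cpu access in
nested_vmx_check_vmentry_hw(), including the cpu_dr7 read that triggered the splat, now runs
with preemption disabled.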