source: vbox/trunk/src/VBox/VMM/VMMR3/CPUM.cpp @ 84044

Last change on this file since 84044 was 83197, checked in by vboxsync, 4 years ago

VMM/CPUM: Fix the Timer description string outliving the stack, allocate it on the hyper-heap.

1/* $Id: CPUM.cpp 83197 2020-03-04 09:18:18Z vboxsync $ */
2/** @file
3 * CPUM - CPU Monitor / Manager.
4 */
5
6/*
7 * Copyright (C) 2006-2020 Oracle Corporation
8 *
9 * This file is part of VirtualBox Open Source Edition (OSE), as
10 * available from http://www.virtualbox.org. This file is free software;
11 * you can redistribute it and/or modify it under the terms of the GNU
12 * General Public License (GPL) as published by the Free Software
13 * Foundation, in version 2 as it comes in the "COPYING" file of the
14 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
15 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
16 */
17
18/** @page pg_cpum CPUM - CPU Monitor / Manager
19 *
20 * The CPU Monitor / Manager keeps track of all the CPU registers. It is
21 * also responsible for lazy FPU handling and some of the context loading
22 * in raw mode.
23 *
24 * There are three CPU contexts; the most important one is the guest context (GC).
25 * When running in raw-mode (RC) there is a special hyper context for the VMM
26 * part that floats around inside the guest address space. When running in
27 * raw-mode, CPUM also maintains a host context for saving and restoring
28 * registers across world switches. This latter is done in cooperation with the
29 * world switcher (@see pg_vmm).
30 *
31 * @see grp_cpum
32 *
33 * @section sec_cpum_fpu FPU / SSE / AVX / ++ state.
34 *
35 * TODO: proper write-up; currently just some notes.
36 *
37 * The ring-0 FPU handling per OS:
38 *
39 * - 64-bit Windows uses XMM registers in the kernel as part of the calling
40 * convention (Visual C++ doesn't seem to have a way to disable
41 * generating such code either), so CR0.TS/EM are always zero from what I
42 * can tell. We are also forced to always load/save the guest XMM0-XMM15
43 * registers when entering/leaving guest context. Interrupt handlers
44 * using FPU/SSE will officially have to call save and restore functions
45 * exported by the kernel, if they really, really have to use the state.
46 *
47 * - 32-bit Windows does lazy FPU handling, I think, probably including
48 * lazy saving. The Windows Internals book states that it's a bad
49 * idea to use the FPU in kernel space. However, it looks like it will
50 * restore the FPU state of the current thread in case of a kernel \#NM.
51 * Interrupt handlers should be the same as for 64-bit.
52 *
53 * - Darwin allows taking \#NM in kernel space, restoring current thread's
54 * state if I read the code correctly. It saves the FPU state of the
55 * outgoing thread, and uses CR0.TS to lazily load the state of the
56 * incoming one. No idea yet how the FPU is treated by interrupt
57 * handlers, i.e. whether they are allowed to disable the state or
58 * something.
59 *
60 * - Linux also allows \#NM in kernel space (don't know since when), and
61 * uses CR0.TS for lazy loading. Saves outgoing thread's state, lazy
62 * loads the incoming one unless configured to aggressively load it. Interrupt
63 * handlers can ask whether they're allowed to use the FPU, and may
64 * freely trash the state if Linux thinks it has saved the thread's state
65 * already. This is a problem.
66 *
67 * - Solaris will, from what I can tell, panic if it gets an \#NM in kernel
68 * context. When switching threads, the kernel will save the state of
69 * the outgoing thread and lazy load the incoming one using CR0.TS.
70 * There are a few routines in sseblk.s which use the SSE unit in ring-0
71 * to do stuff; HAT is among the users. The routines there will
72 * manually clear CR0.TS and save the XMM registers they use only if
73 * CR0.TS was zero upon entry. They will skip it when not, because as
74 * mentioned above, the FPU state is saved when switching away from a
75 * thread and CR0.TS set to 1, so when CR0.TS is 1 there is nothing to
76 * preserve. This is a problem if we restore CR0.TS to 1 after loading
77 * the guest state.
78 *
79 * - FreeBSD - no idea yet.
80 *
81 * - OS/2 does not allow \#NMs in kernel space IIRC. Does lazy loading,
82 * possibly also lazy saving. Interrupts must preserve the CR0.TS+EM &
83 * FPU states.
84 *
85 * Up to r107425 (2016-05-24) we would only temporarily modify CR0.TS/EM while
86 * saving and restoring the host and guest states. The motivation for this
87 * change is that we want to be able to emulate SSE instructions in ring-0 (IEM).
88 *
89 * Starting with that change, we will leave CR0.TS=EM=0 after saving the host
90 * state and only restore it once we've restored the host FPU state. This has the
91 * accidental side effect of triggering Solaris to preserve XMM registers in
92 * sseblk.s. Since CR0 is now changed when saving the FPU state, CPUM must inform
93 * the VT-x (HMVMX) code about it as it caches the CR0 value in the VMCS (see the sketch right after this comment block).
94 *
95 *
96 * @section sec_cpum_logging Logging Level Assignments.
97 *
98 * The following log level assignments are used:
99 * - Log6 is used for FPU state management.
100 * - Log7 is used for FPU state actualization.
101 *
102 */
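/*
 * A minimal sketch of the CR0.TS/EM handling described above, assuming only
 * the IPRT helpers ASMGetCR0()/ASMSetCR0() (iprt/asm-amd64-x86.h, included
 * below) and the X86_CR0_TS / X86_CR0_EM bit definitions from iprt/x86.h.
 * The function name is made up for illustration; the real CPUM/HM code also
 * has to tell HMVMX about any CR0 change, which is omitted here.
 */
#if 0 /* illustrative only */
static void exampleUseFpuSseInRing0(void)
{
    /* Save CR0 and clear TS/EM so FPU/SSE instructions won't raise #NM/#UD. */
    RTCCUINTREG const uCr0Saved = ASMGetCR0();
    if (uCr0Saved & (X86_CR0_TS | X86_CR0_EM))
        ASMSetCR0(uCr0Saved & ~(RTCCUINTREG)(X86_CR0_TS | X86_CR0_EM));

    /* ... save/restore or otherwise use the FPU/SSE state here ... */

    /* Put CR0 back so lazy #NM handling (where the host OS relies on it) still works. */
    if (uCr0Saved & (X86_CR0_TS | X86_CR0_EM))
        ASMSetCR0(uCr0Saved);
}
#endif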
103
104
105/*********************************************************************************************************************************
106* Header Files *
107*********************************************************************************************************************************/
108#define LOG_GROUP LOG_GROUP_CPUM
109#include <VBox/vmm/cpum.h>
110#include <VBox/vmm/cpumdis.h>
111#include <VBox/vmm/cpumctx-v1_6.h>
112#include <VBox/vmm/pgm.h>
113#include <VBox/vmm/apic.h>
114#include <VBox/vmm/mm.h>
115#include <VBox/vmm/em.h>
116#include <VBox/vmm/iem.h>
117#include <VBox/vmm/selm.h>
118#include <VBox/vmm/dbgf.h>
119#include <VBox/vmm/hm.h>
120#include <VBox/vmm/hmvmxinline.h>
121#include <VBox/vmm/ssm.h>
122#include "CPUMInternal.h"
123#include <VBox/vmm/vm.h>
124
125#include <VBox/param.h>
126#include <VBox/dis.h>
127#include <VBox/err.h>
128#include <VBox/log.h>
129#include <iprt/asm-amd64-x86.h>
130#include <iprt/assert.h>
131#include <iprt/cpuset.h>
132#include <iprt/mem.h>
133#include <iprt/mp.h>
134#include <iprt/string.h>
135
136
137/*********************************************************************************************************************************
138* Defined Constants And Macros *
139*********************************************************************************************************************************/
140/**
141 * This was used in the saved state up to the early life of version 14.
142 *
143 * It indicates that we may have some out-of-sync hidden segment registers.
144 * It is only relevant for raw-mode.
145 */
146#define CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID RT_BIT(12)
147
148
149/*********************************************************************************************************************************
150* Structures and Typedefs *
151*********************************************************************************************************************************/
152
153/**
154 * What kind of cpu info dump to perform.
155 */
156typedef enum CPUMDUMPTYPE
157{
158 CPUMDUMPTYPE_TERSE,
159 CPUMDUMPTYPE_DEFAULT,
160 CPUMDUMPTYPE_VERBOSE
161} CPUMDUMPTYPE;
162/** Pointer to a cpu info dump type. */
163typedef CPUMDUMPTYPE *PCPUMDUMPTYPE;
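/*
 * A small sketch of how the "terse"/"default"/"verbose" arguments accepted by
 * the DBGF info handlers below can be mapped onto CPUMDUMPTYPE. The helper
 * name is made up for illustration and relies only on RTStrICmp() from
 * iprt/string.h (included above); the actual argument parsing in this file
 * may differ.
 */
#if 0 /* illustrative only */
static void exampleParseDumpType(const char *pszArgs, PCPUMDUMPTYPE penmType)
{
    if (pszArgs && !RTStrICmp(pszArgs, "verbose"))
        *penmType = CPUMDUMPTYPE_VERBOSE;
    else if (pszArgs && !RTStrICmp(pszArgs, "terse"))
        *penmType = CPUMDUMPTYPE_TERSE;
    else
        *penmType = CPUMDUMPTYPE_DEFAULT;   /* missing or unrecognized argument */
}
#endif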
164
165
166/*********************************************************************************************************************************
167* Internal Functions *
168*********************************************************************************************************************************/
169static DECLCALLBACK(int) cpumR3LiveExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uPass);
170static DECLCALLBACK(int) cpumR3SaveExec(PVM pVM, PSSMHANDLE pSSM);
171static DECLCALLBACK(int) cpumR3LoadPrep(PVM pVM, PSSMHANDLE pSSM);
172static DECLCALLBACK(int) cpumR3LoadExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass);
173static DECLCALLBACK(int) cpumR3LoadDone(PVM pVM, PSSMHANDLE pSSM);
174static DECLCALLBACK(void) cpumR3InfoAll(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
175static DECLCALLBACK(void) cpumR3InfoGuest(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
176static DECLCALLBACK(void) cpumR3InfoGuestHwvirt(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
177static DECLCALLBACK(void) cpumR3InfoGuestInstr(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
178static DECLCALLBACK(void) cpumR3InfoHyper(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
179static DECLCALLBACK(void) cpumR3InfoHost(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
180
181
182/*********************************************************************************************************************************
183* Global Variables *
184*********************************************************************************************************************************/
185/** Saved state field descriptors for CPUMCTX. */
186static const SSMFIELD g_aCpumCtxFields[] =
187{
188 SSMFIELD_ENTRY( CPUMCTX, rdi),
189 SSMFIELD_ENTRY( CPUMCTX, rsi),
190 SSMFIELD_ENTRY( CPUMCTX, rbp),
191 SSMFIELD_ENTRY( CPUMCTX, rax),
192 SSMFIELD_ENTRY( CPUMCTX, rbx),
193 SSMFIELD_ENTRY( CPUMCTX, rdx),
194 SSMFIELD_ENTRY( CPUMCTX, rcx),
195 SSMFIELD_ENTRY( CPUMCTX, rsp),
196 SSMFIELD_ENTRY( CPUMCTX, rflags),
197 SSMFIELD_ENTRY( CPUMCTX, rip),
198 SSMFIELD_ENTRY( CPUMCTX, r8),
199 SSMFIELD_ENTRY( CPUMCTX, r9),
200 SSMFIELD_ENTRY( CPUMCTX, r10),
201 SSMFIELD_ENTRY( CPUMCTX, r11),
202 SSMFIELD_ENTRY( CPUMCTX, r12),
203 SSMFIELD_ENTRY( CPUMCTX, r13),
204 SSMFIELD_ENTRY( CPUMCTX, r14),
205 SSMFIELD_ENTRY( CPUMCTX, r15),
206 SSMFIELD_ENTRY( CPUMCTX, es.Sel),
207 SSMFIELD_ENTRY( CPUMCTX, es.ValidSel),
208 SSMFIELD_ENTRY( CPUMCTX, es.fFlags),
209 SSMFIELD_ENTRY( CPUMCTX, es.u64Base),
210 SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
211 SSMFIELD_ENTRY( CPUMCTX, es.Attr),
212 SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
213 SSMFIELD_ENTRY( CPUMCTX, cs.ValidSel),
214 SSMFIELD_ENTRY( CPUMCTX, cs.fFlags),
215 SSMFIELD_ENTRY( CPUMCTX, cs.u64Base),
216 SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
217 SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
218 SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
219 SSMFIELD_ENTRY( CPUMCTX, ss.ValidSel),
220 SSMFIELD_ENTRY( CPUMCTX, ss.fFlags),
221 SSMFIELD_ENTRY( CPUMCTX, ss.u64Base),
222 SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
223 SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
224 SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
225 SSMFIELD_ENTRY( CPUMCTX, ds.ValidSel),
226 SSMFIELD_ENTRY( CPUMCTX, ds.fFlags),
227 SSMFIELD_ENTRY( CPUMCTX, ds.u64Base),
228 SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
229 SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
230 SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
231 SSMFIELD_ENTRY( CPUMCTX, fs.ValidSel),
232 SSMFIELD_ENTRY( CPUMCTX, fs.fFlags),
233 SSMFIELD_ENTRY( CPUMCTX, fs.u64Base),
234 SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
235 SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
236 SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
237 SSMFIELD_ENTRY( CPUMCTX, gs.ValidSel),
238 SSMFIELD_ENTRY( CPUMCTX, gs.fFlags),
239 SSMFIELD_ENTRY( CPUMCTX, gs.u64Base),
240 SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
241 SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
242 SSMFIELD_ENTRY( CPUMCTX, cr0),
243 SSMFIELD_ENTRY( CPUMCTX, cr2),
244 SSMFIELD_ENTRY( CPUMCTX, cr3),
245 SSMFIELD_ENTRY( CPUMCTX, cr4),
246 SSMFIELD_ENTRY( CPUMCTX, dr[0]),
247 SSMFIELD_ENTRY( CPUMCTX, dr[1]),
248 SSMFIELD_ENTRY( CPUMCTX, dr[2]),
249 SSMFIELD_ENTRY( CPUMCTX, dr[3]),
250 SSMFIELD_ENTRY( CPUMCTX, dr[6]),
251 SSMFIELD_ENTRY( CPUMCTX, dr[7]),
252 SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
253 SSMFIELD_ENTRY( CPUMCTX, gdtr.pGdt),
254 SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
255 SSMFIELD_ENTRY( CPUMCTX, idtr.pIdt),
256 SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
257 SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
258 SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
259 SSMFIELD_ENTRY( CPUMCTX, msrEFER),
260 SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
261 SSMFIELD_ENTRY( CPUMCTX, msrPAT),
262 SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
263 SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
264 SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
265 SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
266 SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
267 SSMFIELD_ENTRY( CPUMCTX, ldtr.ValidSel),
268 SSMFIELD_ENTRY( CPUMCTX, ldtr.fFlags),
269 SSMFIELD_ENTRY( CPUMCTX, ldtr.u64Base),
270 SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
271 SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
272 SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
273 SSMFIELD_ENTRY( CPUMCTX, tr.ValidSel),
274 SSMFIELD_ENTRY( CPUMCTX, tr.fFlags),
275 SSMFIELD_ENTRY( CPUMCTX, tr.u64Base),
276 SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
277 SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
278 SSMFIELD_ENTRY_VER( CPUMCTX, aXcr[0], CPUM_SAVED_STATE_VERSION_XSAVE),
279 SSMFIELD_ENTRY_VER( CPUMCTX, aXcr[1], CPUM_SAVED_STATE_VERSION_XSAVE),
280 SSMFIELD_ENTRY_VER( CPUMCTX, fXStateMask, CPUM_SAVED_STATE_VERSION_XSAVE),
281 SSMFIELD_ENTRY_TERM()
282};
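/*
 * A brief sketch of how a field descriptor table like g_aCpumCtxFields is
 * typically consumed, assuming the SSMR3PutStructEx() prototype from
 * VBox/vmm/ssm.h (SSM handle, structure pointer and size, flags, field table,
 * user argument). The helper name is made up; the real saving/loading code is
 * in cpumR3SaveExec/cpumR3LoadExec declared above.
 */
#if 0 /* illustrative only */
static int exampleSaveGuestCtx(PSSMHANDLE pSSM, PCCPUMCTX pGstCtx)
{
    /* Writes the members named by g_aCpumCtxFields in order, honouring the
       version (SSMFIELD_ENTRY_VER) and terminator (SSMFIELD_ENTRY_TERM) entries. */
    return SSMR3PutStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), 0 /*fFlags*/, g_aCpumCtxFields, NULL /*pvUser*/);
}
#endif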
283
284/** Saved state field descriptors for SVM nested hardware-virtualization
285 * Host State. */
286static const SSMFIELD g_aSvmHwvirtHostState[] =
287{
288 SSMFIELD_ENTRY( SVMHOSTSTATE, uEferMsr),
289 SSMFIELD_ENTRY( SVMHOSTSTATE, uCr0),
290 SSMFIELD_ENTRY( SVMHOSTSTATE, uCr4),
291 SSMFIELD_ENTRY( SVMHOSTSTATE, uCr3),
292 SSMFIELD_ENTRY( SVMHOSTSTATE, uRip),
293 SSMFIELD_ENTRY( SVMHOSTSTATE, uRsp),
294 SSMFIELD_ENTRY( SVMHOSTSTATE, uRax),
295 SSMFIELD_ENTRY( SVMHOSTSTATE, rflags),
296 SSMFIELD_ENTRY( SVMHOSTSTATE, es.Sel),
297 SSMFIELD_ENTRY( SVMHOSTSTATE, es.ValidSel),
298 SSMFIELD_ENTRY( SVMHOSTSTATE, es.fFlags),
299 SSMFIELD_ENTRY( SVMHOSTSTATE, es.u64Base),
300 SSMFIELD_ENTRY( SVMHOSTSTATE, es.u32Limit),
301 SSMFIELD_ENTRY( SVMHOSTSTATE, es.Attr),
302 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.Sel),
303 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.ValidSel),
304 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.fFlags),
305 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.u64Base),
306 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.u32Limit),
307 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.Attr),
308 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.Sel),
309 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.ValidSel),
310 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.fFlags),
311 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.u64Base),
312 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.u32Limit),
313 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.Attr),
314 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.Sel),
315 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.ValidSel),
316 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.fFlags),
317 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.u64Base),
318 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.u32Limit),
319 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.Attr),
320 SSMFIELD_ENTRY( SVMHOSTSTATE, gdtr.cbGdt),
321 SSMFIELD_ENTRY( SVMHOSTSTATE, gdtr.pGdt),
322 SSMFIELD_ENTRY( SVMHOSTSTATE, idtr.cbIdt),
323 SSMFIELD_ENTRY( SVMHOSTSTATE, idtr.pIdt),
324 SSMFIELD_ENTRY_IGNORE(SVMHOSTSTATE, abPadding),
325 SSMFIELD_ENTRY_TERM()
326};
327
328/** Saved state field descriptors for VMX nested hardware-virtualization
329 * VMCS. */
330static const SSMFIELD g_aVmxHwvirtVmcs[] =
331{
332 SSMFIELD_ENTRY( VMXVVMCS, u32VmcsRevId),
333 SSMFIELD_ENTRY( VMXVVMCS, enmVmxAbort),
334 SSMFIELD_ENTRY( VMXVVMCS, fVmcsState),
335 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au8Padding0),
336 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au32Reserved0),
337
338 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, u16Reserved0),
339
340 SSMFIELD_ENTRY( VMXVVMCS, u32RoVmInstrError),
341 SSMFIELD_ENTRY( VMXVVMCS, u32RoExitReason),
342 SSMFIELD_ENTRY( VMXVVMCS, u32RoExitIntInfo),
343 SSMFIELD_ENTRY( VMXVVMCS, u32RoExitIntErrCode),
344 SSMFIELD_ENTRY( VMXVVMCS, u32RoIdtVectoringInfo),
345 SSMFIELD_ENTRY( VMXVVMCS, u32RoIdtVectoringErrCode),
346 SSMFIELD_ENTRY( VMXVVMCS, u32RoExitInstrLen),
347 SSMFIELD_ENTRY( VMXVVMCS, u32RoExitInstrInfo),
348 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au32RoReserved2),
349
350 SSMFIELD_ENTRY( VMXVVMCS, u64RoGuestPhysAddr),
351 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved1),
352
353 SSMFIELD_ENTRY( VMXVVMCS, u64RoExitQual),
354 SSMFIELD_ENTRY( VMXVVMCS, u64RoIoRcx),
355 SSMFIELD_ENTRY( VMXVVMCS, u64RoIoRsi),
356 SSMFIELD_ENTRY( VMXVVMCS, u64RoIoRdi),
357 SSMFIELD_ENTRY( VMXVVMCS, u64RoIoRip),
358 SSMFIELD_ENTRY( VMXVVMCS, u64RoGuestLinearAddr),
359 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved5),
360
361 SSMFIELD_ENTRY( VMXVVMCS, u16Vpid),
362 SSMFIELD_ENTRY( VMXVVMCS, u16PostIntNotifyVector),
363 SSMFIELD_ENTRY( VMXVVMCS, u16EptpIndex),
364 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au16Reserved0),
365
366 SSMFIELD_ENTRY( VMXVVMCS, u32PinCtls),
367 SSMFIELD_ENTRY( VMXVVMCS, u32ProcCtls),
368 SSMFIELD_ENTRY( VMXVVMCS, u32XcptBitmap),
369 SSMFIELD_ENTRY( VMXVVMCS, u32XcptPFMask),
370 SSMFIELD_ENTRY( VMXVVMCS, u32XcptPFMatch),
371 SSMFIELD_ENTRY( VMXVVMCS, u32Cr3TargetCount),
372 SSMFIELD_ENTRY( VMXVVMCS, u32ExitCtls),
373 SSMFIELD_ENTRY( VMXVVMCS, u32ExitMsrStoreCount),
374 SSMFIELD_ENTRY( VMXVVMCS, u32ExitMsrLoadCount),
375 SSMFIELD_ENTRY( VMXVVMCS, u32EntryCtls),
376 SSMFIELD_ENTRY( VMXVVMCS, u32EntryMsrLoadCount),
377 SSMFIELD_ENTRY( VMXVVMCS, u32EntryIntInfo),
378 SSMFIELD_ENTRY( VMXVVMCS, u32EntryXcptErrCode),
379 SSMFIELD_ENTRY( VMXVVMCS, u32EntryInstrLen),
380 SSMFIELD_ENTRY( VMXVVMCS, u32TprThreshold),
381 SSMFIELD_ENTRY( VMXVVMCS, u32ProcCtls2),
382 SSMFIELD_ENTRY( VMXVVMCS, u32PleGap),
383 SSMFIELD_ENTRY( VMXVVMCS, u32PleWindow),
384 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au32Reserved1),
385
386 SSMFIELD_ENTRY( VMXVVMCS, u64AddrIoBitmapA),
387 SSMFIELD_ENTRY( VMXVVMCS, u64AddrIoBitmapB),
388 SSMFIELD_ENTRY( VMXVVMCS, u64AddrMsrBitmap),
389 SSMFIELD_ENTRY( VMXVVMCS, u64AddrExitMsrStore),
390 SSMFIELD_ENTRY( VMXVVMCS, u64AddrExitMsrLoad),
391 SSMFIELD_ENTRY( VMXVVMCS, u64AddrEntryMsrLoad),
392 SSMFIELD_ENTRY( VMXVVMCS, u64ExecVmcsPtr),
393 SSMFIELD_ENTRY( VMXVVMCS, u64AddrPml),
394 SSMFIELD_ENTRY( VMXVVMCS, u64TscOffset),
395 SSMFIELD_ENTRY( VMXVVMCS, u64AddrVirtApic),
396 SSMFIELD_ENTRY( VMXVVMCS, u64AddrApicAccess),
397 SSMFIELD_ENTRY( VMXVVMCS, u64AddrPostedIntDesc),
398 SSMFIELD_ENTRY( VMXVVMCS, u64VmFuncCtls),
399 SSMFIELD_ENTRY( VMXVVMCS, u64EptpPtr),
400 SSMFIELD_ENTRY( VMXVVMCS, u64EoiExitBitmap0),
401 SSMFIELD_ENTRY( VMXVVMCS, u64EoiExitBitmap1),
402 SSMFIELD_ENTRY( VMXVVMCS, u64EoiExitBitmap2),
403 SSMFIELD_ENTRY( VMXVVMCS, u64EoiExitBitmap3),
404 SSMFIELD_ENTRY( VMXVVMCS, u64AddrEptpList),
405 SSMFIELD_ENTRY( VMXVVMCS, u64AddrVmreadBitmap),
406 SSMFIELD_ENTRY( VMXVVMCS, u64AddrVmwriteBitmap),
407 SSMFIELD_ENTRY( VMXVVMCS, u64AddrXcptVeInfo),
408 SSMFIELD_ENTRY( VMXVVMCS, u64XssBitmap),
409 SSMFIELD_ENTRY( VMXVVMCS, u64EnclsBitmap),
410 SSMFIELD_ENTRY( VMXVVMCS, u64SpptPtr),
411 SSMFIELD_ENTRY( VMXVVMCS, u64TscMultiplier),
412 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved0),
413
414 SSMFIELD_ENTRY( VMXVVMCS, u64Cr0Mask),
415 SSMFIELD_ENTRY( VMXVVMCS, u64Cr4Mask),
416 SSMFIELD_ENTRY( VMXVVMCS, u64Cr0ReadShadow),
417 SSMFIELD_ENTRY( VMXVVMCS, u64Cr4ReadShadow),
418 SSMFIELD_ENTRY( VMXVVMCS, u64Cr3Target0),
419 SSMFIELD_ENTRY( VMXVVMCS, u64Cr3Target1),
420 SSMFIELD_ENTRY( VMXVVMCS, u64Cr3Target2),
421 SSMFIELD_ENTRY( VMXVVMCS, u64Cr3Target3),
422 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved4),
423
424 SSMFIELD_ENTRY( VMXVVMCS, HostEs),
425 SSMFIELD_ENTRY( VMXVVMCS, HostCs),
426 SSMFIELD_ENTRY( VMXVVMCS, HostSs),
427 SSMFIELD_ENTRY( VMXVVMCS, HostDs),
428 SSMFIELD_ENTRY( VMXVVMCS, HostFs),
429 SSMFIELD_ENTRY( VMXVVMCS, HostGs),
430 SSMFIELD_ENTRY( VMXVVMCS, HostTr),
431 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au16Reserved2),
432
433 SSMFIELD_ENTRY( VMXVVMCS, u32HostSysenterCs),
434 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au32Reserved4),
435
436 SSMFIELD_ENTRY( VMXVVMCS, u64HostPatMsr),
437 SSMFIELD_ENTRY( VMXVVMCS, u64HostEferMsr),
438 SSMFIELD_ENTRY( VMXVVMCS, u64HostPerfGlobalCtlMsr),
439 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved3),
440
441 SSMFIELD_ENTRY( VMXVVMCS, u64HostCr0),
442 SSMFIELD_ENTRY( VMXVVMCS, u64HostCr3),
443 SSMFIELD_ENTRY( VMXVVMCS, u64HostCr4),
444 SSMFIELD_ENTRY( VMXVVMCS, u64HostFsBase),
445 SSMFIELD_ENTRY( VMXVVMCS, u64HostGsBase),
446 SSMFIELD_ENTRY( VMXVVMCS, u64HostTrBase),
447 SSMFIELD_ENTRY( VMXVVMCS, u64HostGdtrBase),
448 SSMFIELD_ENTRY( VMXVVMCS, u64HostIdtrBase),
449 SSMFIELD_ENTRY( VMXVVMCS, u64HostSysenterEsp),
450 SSMFIELD_ENTRY( VMXVVMCS, u64HostSysenterEip),
451 SSMFIELD_ENTRY( VMXVVMCS, u64HostRsp),
452 SSMFIELD_ENTRY( VMXVVMCS, u64HostRip),
453 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved7),
454
455 SSMFIELD_ENTRY( VMXVVMCS, GuestEs),
456 SSMFIELD_ENTRY( VMXVVMCS, GuestCs),
457 SSMFIELD_ENTRY( VMXVVMCS, GuestSs),
458 SSMFIELD_ENTRY( VMXVVMCS, GuestDs),
459 SSMFIELD_ENTRY( VMXVVMCS, GuestFs),
460 SSMFIELD_ENTRY( VMXVVMCS, GuestGs),
461 SSMFIELD_ENTRY( VMXVVMCS, GuestLdtr),
462 SSMFIELD_ENTRY( VMXVVMCS, GuestTr),
463 SSMFIELD_ENTRY( VMXVVMCS, u16GuestIntStatus),
464 SSMFIELD_ENTRY( VMXVVMCS, u16PmlIndex),
465 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au16Reserved1),
466
467 SSMFIELD_ENTRY( VMXVVMCS, u32GuestEsLimit),
468 SSMFIELD_ENTRY( VMXVVMCS, u32GuestCsLimit),
469 SSMFIELD_ENTRY( VMXVVMCS, u32GuestSsLimit),
470 SSMFIELD_ENTRY( VMXVVMCS, u32GuestDsLimit),
471 SSMFIELD_ENTRY( VMXVVMCS, u32GuestFsLimit),
472 SSMFIELD_ENTRY( VMXVVMCS, u32GuestGsLimit),
473 SSMFIELD_ENTRY( VMXVVMCS, u32GuestLdtrLimit),
474 SSMFIELD_ENTRY( VMXVVMCS, u32GuestTrLimit),
475 SSMFIELD_ENTRY( VMXVVMCS, u32GuestGdtrLimit),
476 SSMFIELD_ENTRY( VMXVVMCS, u32GuestIdtrLimit),
477 SSMFIELD_ENTRY( VMXVVMCS, u32GuestEsAttr),
478 SSMFIELD_ENTRY( VMXVVMCS, u32GuestCsAttr),
479 SSMFIELD_ENTRY( VMXVVMCS, u32GuestSsAttr),
480 SSMFIELD_ENTRY( VMXVVMCS, u32GuestDsAttr),
481 SSMFIELD_ENTRY( VMXVVMCS, u32GuestFsAttr),
482 SSMFIELD_ENTRY( VMXVVMCS, u32GuestGsAttr),
483 SSMFIELD_ENTRY( VMXVVMCS, u32GuestLdtrAttr),
484 SSMFIELD_ENTRY( VMXVVMCS, u32GuestTrAttr),
485 SSMFIELD_ENTRY( VMXVVMCS, u32GuestIntrState),
486 SSMFIELD_ENTRY( VMXVVMCS, u32GuestActivityState),
487 SSMFIELD_ENTRY( VMXVVMCS, u32GuestSmBase),
488 SSMFIELD_ENTRY( VMXVVMCS, u32GuestSysenterCS),
489 SSMFIELD_ENTRY( VMXVVMCS, u32PreemptTimer),
490 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au32Reserved3),
491
492 SSMFIELD_ENTRY( VMXVVMCS, u64VmcsLinkPtr),
493 SSMFIELD_ENTRY( VMXVVMCS, u64GuestDebugCtlMsr),
494 SSMFIELD_ENTRY( VMXVVMCS, u64GuestPatMsr),
495 SSMFIELD_ENTRY( VMXVVMCS, u64GuestEferMsr),
496 SSMFIELD_ENTRY( VMXVVMCS, u64GuestPerfGlobalCtlMsr),
497 SSMFIELD_ENTRY( VMXVVMCS, u64GuestPdpte0),
498 SSMFIELD_ENTRY( VMXVVMCS, u64GuestPdpte1),
499 SSMFIELD_ENTRY( VMXVVMCS, u64GuestPdpte2),
500 SSMFIELD_ENTRY( VMXVVMCS, u64GuestPdpte3),
501 SSMFIELD_ENTRY( VMXVVMCS, u64GuestBndcfgsMsr),
502 SSMFIELD_ENTRY( VMXVVMCS, u64GuestRtitCtlMsr),
503 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved2),
504
505 SSMFIELD_ENTRY( VMXVVMCS, u64GuestCr0),
506 SSMFIELD_ENTRY( VMXVVMCS, u64GuestCr3),
507 SSMFIELD_ENTRY( VMXVVMCS, u64GuestCr4),
508 SSMFIELD_ENTRY( VMXVVMCS, u64GuestEsBase),
509 SSMFIELD_ENTRY( VMXVVMCS, u64GuestCsBase),
510 SSMFIELD_ENTRY( VMXVVMCS, u64GuestSsBase),
511 SSMFIELD_ENTRY( VMXVVMCS, u64GuestDsBase),
512 SSMFIELD_ENTRY( VMXVVMCS, u64GuestFsBase),
513 SSMFIELD_ENTRY( VMXVVMCS, u64GuestGsBase),
514 SSMFIELD_ENTRY( VMXVVMCS, u64GuestLdtrBase),
515 SSMFIELD_ENTRY( VMXVVMCS, u64GuestTrBase),
516 SSMFIELD_ENTRY( VMXVVMCS, u64GuestGdtrBase),
517 SSMFIELD_ENTRY( VMXVVMCS, u64GuestIdtrBase),
518 SSMFIELD_ENTRY( VMXVVMCS, u64GuestDr7),
519 SSMFIELD_ENTRY( VMXVVMCS, u64GuestRsp),
520 SSMFIELD_ENTRY( VMXVVMCS, u64GuestRip),
521 SSMFIELD_ENTRY( VMXVVMCS, u64GuestRFlags),
522 SSMFIELD_ENTRY( VMXVVMCS, u64GuestPendingDbgXcpts),
523 SSMFIELD_ENTRY( VMXVVMCS, u64GuestSysenterEsp),
524 SSMFIELD_ENTRY( VMXVVMCS, u64GuestSysenterEip),
525 SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved6),
526
527 SSMFIELD_ENTRY_TERM()
528};
529
530/** Saved state field descriptors for the X86FXSTATE (x87/SSE) portion of CPUMCTX. */
531static const SSMFIELD g_aCpumX87Fields[] =
532{
533 SSMFIELD_ENTRY( X86FXSTATE, FCW),
534 SSMFIELD_ENTRY( X86FXSTATE, FSW),
535 SSMFIELD_ENTRY( X86FXSTATE, FTW),
536 SSMFIELD_ENTRY( X86FXSTATE, FOP),
537 SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
538 SSMFIELD_ENTRY( X86FXSTATE, CS),
539 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
540 SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
541 SSMFIELD_ENTRY( X86FXSTATE, DS),
542 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
543 SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
544 SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
545 SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
546 SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
547 SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
548 SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
549 SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
550 SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
551 SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
552 SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
553 SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
554 SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
555 SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
556 SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
557 SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
558 SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
559 SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
560 SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
561 SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
562 SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
563 SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
564 SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
565 SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
566 SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
567 SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
568 SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
569 SSMFIELD_ENTRY_VER( X86FXSTATE, au32RsrvdForSoftware[0], CPUM_SAVED_STATE_VERSION_XSAVE), /* 32-bit/64-bit hack */
570 SSMFIELD_ENTRY_TERM()
571};
572
573/** Saved state field descriptors for X86XSAVEHDR. */
574static const SSMFIELD g_aCpumXSaveHdrFields[] =
575{
576 SSMFIELD_ENTRY( X86XSAVEHDR, bmXState),
577 SSMFIELD_ENTRY_TERM()
578};
579
580/** Saved state field descriptors for X86XSAVEYMMHI. */
581static const SSMFIELD g_aCpumYmmHiFields[] =
582{
583 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[0]),
584 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[1]),
585 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[2]),
586 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[3]),
587 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[4]),
588 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[5]),
589 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[6]),
590 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[7]),
591 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[8]),
592 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[9]),
593 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[10]),
594 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[11]),
595 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[12]),
596 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[13]),
597 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[14]),
598 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[15]),
599 SSMFIELD_ENTRY_TERM()
600};
601
602/** Saved state field descriptors for X86XSAVEBNDREGS. */
603static const SSMFIELD g_aCpumBndRegsFields[] =
604{
605 SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[0]),
606 SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[1]),
607 SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[2]),
608 SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[3]),
609 SSMFIELD_ENTRY_TERM()
610};
611
612/** Saved state field descriptors for X86XSAVEBNDCFG. */
613static const SSMFIELD g_aCpumBndCfgFields[] =
614{
615 SSMFIELD_ENTRY( X86XSAVEBNDCFG, fConfig),
616 SSMFIELD_ENTRY( X86XSAVEBNDCFG, fStatus),
617 SSMFIELD_ENTRY_TERM()
618};
619
620#if 0 /** @todo */
621/** Saved state field descriptors for X86XSAVEOPMASK. */
622static const SSMFIELD g_aCpumOpmaskFields[] =
623{
624 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[0]),
625 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[1]),
626 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[2]),
627 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[3]),
628 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[4]),
629 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[5]),
630 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[6]),
631 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[7]),
632 SSMFIELD_ENTRY_TERM()
633};
634#endif
635
636/** Saved state field descriptors for X86XSAVEZMMHI256. */
637static const SSMFIELD g_aCpumZmmHi256Fields[] =
638{
639 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[0]),
640 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[1]),
641 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[2]),
642 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[3]),
643 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[4]),
644 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[5]),
645 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[6]),
646 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[7]),
647 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[8]),
648 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[9]),
649 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[10]),
650 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[11]),
651 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[12]),
652 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[13]),
653 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[14]),
654 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[15]),
655 SSMFIELD_ENTRY_TERM()
656};
657
658/** Saved state field descriptors for X86XSAVEZMM16HI. */
659static const SSMFIELD g_aCpumZmm16HiFields[] =
660{
661 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[0]),
662 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[1]),
663 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[2]),
664 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[3]),
665 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[4]),
666 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[5]),
667 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[6]),
668 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[7]),
669 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[8]),
670 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[9]),
671 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[10]),
672 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[11]),
673 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[12]),
674 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[13]),
675 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[14]),
676 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[15]),
677 SSMFIELD_ENTRY_TERM()
678};
679
680
681
682/** Saved state field descriptors for CPUMCTX in V4.1 before the hidden selector
683 * registers changed. */
684static const SSMFIELD g_aCpumX87FieldsMem[] =
685{
686 SSMFIELD_ENTRY( X86FXSTATE, FCW),
687 SSMFIELD_ENTRY( X86FXSTATE, FSW),
688 SSMFIELD_ENTRY( X86FXSTATE, FTW),
689 SSMFIELD_ENTRY( X86FXSTATE, FOP),
690 SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
691 SSMFIELD_ENTRY( X86FXSTATE, CS),
692 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
693 SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
694 SSMFIELD_ENTRY( X86FXSTATE, DS),
695 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
696 SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
697 SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
698 SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
699 SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
700 SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
701 SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
702 SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
703 SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
704 SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
705 SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
706 SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
707 SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
708 SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
709 SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
710 SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
711 SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
712 SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
713 SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
714 SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
715 SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
716 SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
717 SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
718 SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
719 SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
720 SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
721 SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
722 SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdRest),
723 SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdForSoftware),
724};
725
726/** Saved state field descriptors for CPUMCTX in V4.1 before the hidden selector
727 * registers changed. */
728static const SSMFIELD g_aCpumCtxFieldsMem[] =
729{
730 SSMFIELD_ENTRY( CPUMCTX, rdi),
731 SSMFIELD_ENTRY( CPUMCTX, rsi),
732 SSMFIELD_ENTRY( CPUMCTX, rbp),
733 SSMFIELD_ENTRY( CPUMCTX, rax),
734 SSMFIELD_ENTRY( CPUMCTX, rbx),
735 SSMFIELD_ENTRY( CPUMCTX, rdx),
736 SSMFIELD_ENTRY( CPUMCTX, rcx),
737 SSMFIELD_ENTRY( CPUMCTX, rsp),
738 SSMFIELD_ENTRY_OLD( lss_esp, sizeof(uint32_t)),
739 SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
740 SSMFIELD_ENTRY_OLD( ssPadding, sizeof(uint16_t)),
741 SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
742 SSMFIELD_ENTRY_OLD( gsPadding, sizeof(uint16_t)),
743 SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
744 SSMFIELD_ENTRY_OLD( fsPadding, sizeof(uint16_t)),
745 SSMFIELD_ENTRY( CPUMCTX, es.Sel),
746 SSMFIELD_ENTRY_OLD( esPadding, sizeof(uint16_t)),
747 SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
748 SSMFIELD_ENTRY_OLD( dsPadding, sizeof(uint16_t)),
749 SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
750 SSMFIELD_ENTRY_OLD( csPadding, sizeof(uint16_t)*3),
751 SSMFIELD_ENTRY( CPUMCTX, rflags),
752 SSMFIELD_ENTRY( CPUMCTX, rip),
753 SSMFIELD_ENTRY( CPUMCTX, r8),
754 SSMFIELD_ENTRY( CPUMCTX, r9),
755 SSMFIELD_ENTRY( CPUMCTX, r10),
756 SSMFIELD_ENTRY( CPUMCTX, r11),
757 SSMFIELD_ENTRY( CPUMCTX, r12),
758 SSMFIELD_ENTRY( CPUMCTX, r13),
759 SSMFIELD_ENTRY( CPUMCTX, r14),
760 SSMFIELD_ENTRY( CPUMCTX, r15),
761 SSMFIELD_ENTRY( CPUMCTX, es.u64Base),
762 SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
763 SSMFIELD_ENTRY( CPUMCTX, es.Attr),
764 SSMFIELD_ENTRY( CPUMCTX, cs.u64Base),
765 SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
766 SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
767 SSMFIELD_ENTRY( CPUMCTX, ss.u64Base),
768 SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
769 SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
770 SSMFIELD_ENTRY( CPUMCTX, ds.u64Base),
771 SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
772 SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
773 SSMFIELD_ENTRY( CPUMCTX, fs.u64Base),
774 SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
775 SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
776 SSMFIELD_ENTRY( CPUMCTX, gs.u64Base),
777 SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
778 SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
779 SSMFIELD_ENTRY( CPUMCTX, cr0),
780 SSMFIELD_ENTRY( CPUMCTX, cr2),
781 SSMFIELD_ENTRY( CPUMCTX, cr3),
782 SSMFIELD_ENTRY( CPUMCTX, cr4),
783 SSMFIELD_ENTRY( CPUMCTX, dr[0]),
784 SSMFIELD_ENTRY( CPUMCTX, dr[1]),
785 SSMFIELD_ENTRY( CPUMCTX, dr[2]),
786 SSMFIELD_ENTRY( CPUMCTX, dr[3]),
787 SSMFIELD_ENTRY_OLD( dr[4], sizeof(uint64_t)),
788 SSMFIELD_ENTRY_OLD( dr[5], sizeof(uint64_t)),
789 SSMFIELD_ENTRY( CPUMCTX, dr[6]),
790 SSMFIELD_ENTRY( CPUMCTX, dr[7]),
791 SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
792 SSMFIELD_ENTRY( CPUMCTX, gdtr.pGdt),
793 SSMFIELD_ENTRY_OLD( gdtrPadding, sizeof(uint16_t)),
794 SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
795 SSMFIELD_ENTRY( CPUMCTX, idtr.pIdt),
796 SSMFIELD_ENTRY_OLD( idtrPadding, sizeof(uint16_t)),
797 SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
798 SSMFIELD_ENTRY_OLD( ldtrPadding, sizeof(uint16_t)),
799 SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
800 SSMFIELD_ENTRY_OLD( trPadding, sizeof(uint16_t)),
801 SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
802 SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
803 SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
804 SSMFIELD_ENTRY( CPUMCTX, msrEFER),
805 SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
806 SSMFIELD_ENTRY( CPUMCTX, msrPAT),
807 SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
808 SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
809 SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
810 SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
811 SSMFIELD_ENTRY( CPUMCTX, ldtr.u64Base),
812 SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
813 SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
814 SSMFIELD_ENTRY( CPUMCTX, tr.u64Base),
815 SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
816 SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
817 SSMFIELD_ENTRY_TERM()
818};
819
820/** Saved state field descriptors for CPUMCTX_VER1_6. */
821static const SSMFIELD g_aCpumX87FieldsV16[] =
822{
823 SSMFIELD_ENTRY( X86FXSTATE, FCW),
824 SSMFIELD_ENTRY( X86FXSTATE, FSW),
825 SSMFIELD_ENTRY( X86FXSTATE, FTW),
826 SSMFIELD_ENTRY( X86FXSTATE, FOP),
827 SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
828 SSMFIELD_ENTRY( X86FXSTATE, CS),
829 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
830 SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
831 SSMFIELD_ENTRY( X86FXSTATE, DS),
832 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
833 SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
834 SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
835 SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
836 SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
837 SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
838 SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
839 SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
840 SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
841 SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
842 SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
843 SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
844 SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
845 SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
846 SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
847 SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
848 SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
849 SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
850 SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
851 SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
852 SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
853 SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
854 SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
855 SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
856 SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
857 SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
858 SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
859 SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdRest),
860 SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdForSoftware),
861 SSMFIELD_ENTRY_TERM()
862};
863
864/** Saved state field descriptors for CPUMCTX_VER1_6. */
865static const SSMFIELD g_aCpumCtxFieldsV16[] =
866{
867 SSMFIELD_ENTRY( CPUMCTX, rdi),
868 SSMFIELD_ENTRY( CPUMCTX, rsi),
869 SSMFIELD_ENTRY( CPUMCTX, rbp),
870 SSMFIELD_ENTRY( CPUMCTX, rax),
871 SSMFIELD_ENTRY( CPUMCTX, rbx),
872 SSMFIELD_ENTRY( CPUMCTX, rdx),
873 SSMFIELD_ENTRY( CPUMCTX, rcx),
874 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, rsp),
875 SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
876 SSMFIELD_ENTRY_OLD( ssPadding, sizeof(uint16_t)),
877 SSMFIELD_ENTRY_OLD( CPUMCTX, sizeof(uint64_t) /*rsp_notused*/),
878 SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
879 SSMFIELD_ENTRY_OLD( gsPadding, sizeof(uint16_t)),
880 SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
881 SSMFIELD_ENTRY_OLD( fsPadding, sizeof(uint16_t)),
882 SSMFIELD_ENTRY( CPUMCTX, es.Sel),
883 SSMFIELD_ENTRY_OLD( esPadding, sizeof(uint16_t)),
884 SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
885 SSMFIELD_ENTRY_OLD( dsPadding, sizeof(uint16_t)),
886 SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
887 SSMFIELD_ENTRY_OLD( csPadding, sizeof(uint16_t)*3),
888 SSMFIELD_ENTRY( CPUMCTX, rflags),
889 SSMFIELD_ENTRY( CPUMCTX, rip),
890 SSMFIELD_ENTRY( CPUMCTX, r8),
891 SSMFIELD_ENTRY( CPUMCTX, r9),
892 SSMFIELD_ENTRY( CPUMCTX, r10),
893 SSMFIELD_ENTRY( CPUMCTX, r11),
894 SSMFIELD_ENTRY( CPUMCTX, r12),
895 SSMFIELD_ENTRY( CPUMCTX, r13),
896 SSMFIELD_ENTRY( CPUMCTX, r14),
897 SSMFIELD_ENTRY( CPUMCTX, r15),
898 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, es.u64Base),
899 SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
900 SSMFIELD_ENTRY( CPUMCTX, es.Attr),
901 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, cs.u64Base),
902 SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
903 SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
904 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ss.u64Base),
905 SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
906 SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
907 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ds.u64Base),
908 SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
909 SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
910 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, fs.u64Base),
911 SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
912 SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
913 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, gs.u64Base),
914 SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
915 SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
916 SSMFIELD_ENTRY( CPUMCTX, cr0),
917 SSMFIELD_ENTRY( CPUMCTX, cr2),
918 SSMFIELD_ENTRY( CPUMCTX, cr3),
919 SSMFIELD_ENTRY( CPUMCTX, cr4),
920 SSMFIELD_ENTRY_OLD( cr8, sizeof(uint64_t)),
921 SSMFIELD_ENTRY( CPUMCTX, dr[0]),
922 SSMFIELD_ENTRY( CPUMCTX, dr[1]),
923 SSMFIELD_ENTRY( CPUMCTX, dr[2]),
924 SSMFIELD_ENTRY( CPUMCTX, dr[3]),
925 SSMFIELD_ENTRY_OLD( dr[4], sizeof(uint64_t)),
926 SSMFIELD_ENTRY_OLD( dr[5], sizeof(uint64_t)),
927 SSMFIELD_ENTRY( CPUMCTX, dr[6]),
928 SSMFIELD_ENTRY( CPUMCTX, dr[7]),
929 SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
930 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, gdtr.pGdt),
931 SSMFIELD_ENTRY_OLD( gdtrPadding, sizeof(uint16_t)),
932 SSMFIELD_ENTRY_OLD( gdtrPadding64, sizeof(uint64_t)),
933 SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
934 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, idtr.pIdt),
935 SSMFIELD_ENTRY_OLD( idtrPadding, sizeof(uint16_t)),
936 SSMFIELD_ENTRY_OLD( idtrPadding64, sizeof(uint64_t)),
937 SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
938 SSMFIELD_ENTRY_OLD( ldtrPadding, sizeof(uint16_t)),
939 SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
940 SSMFIELD_ENTRY_OLD( trPadding, sizeof(uint16_t)),
941 SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
942 SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
943 SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
944 SSMFIELD_ENTRY( CPUMCTX, msrEFER),
945 SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
946 SSMFIELD_ENTRY( CPUMCTX, msrPAT),
947 SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
948 SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
949 SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
950 SSMFIELD_ENTRY_OLD( msrFSBASE, sizeof(uint64_t)),
951 SSMFIELD_ENTRY_OLD( msrGSBASE, sizeof(uint64_t)),
952 SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
953 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ldtr.u64Base),
954 SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
955 SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
956 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, tr.u64Base),
957 SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
958 SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
959 SSMFIELD_ENTRY_OLD( padding, sizeof(uint32_t)*2),
960 SSMFIELD_ENTRY_TERM()
961};
962
963
964/**
965 * Checks for partial/leaky FXSAVE/FXRSTOR handling on AMD CPUs.
966 *
967 * AMD K7, K8 and newer AMD CPUs do not save/restore the x87 error pointers
968 * (last instruction pointer, last data pointer, last opcode) except when the ES
969 * bit (Exception Summary) in x87 FSW (FPU Status Word) is set. Thus if we don't
970 * clear these registers there is a potential local FPU leak from one process
971 * using the FPU to another (a sketch of the scrubbing idea follows the function below).
972 *
973 * See AMD Instruction Reference for FXSAVE, FXRSTOR.
974 *
975 * @param pVM The cross context VM structure.
976 */
977static void cpumR3CheckLeakyFpu(PVM pVM)
978{
979 uint32_t u32CpuVersion = ASMCpuId_EAX(1);
980 uint32_t const u32Family = u32CpuVersion >> 8;
981 if ( u32Family >= 6 /* K7 and higher */
982 && (ASMIsAmdCpu() || ASMIsHygonCpu()) )
983 {
984 uint32_t cExt = ASMCpuId_EAX(0x80000000);
985 if (ASMIsValidExtRange(cExt))
986 {
987 uint32_t fExtFeaturesEDX = ASMCpuId_EDX(0x80000001);
988 if (fExtFeaturesEDX & X86_CPUID_AMD_FEATURE_EDX_FFXSR)
989 {
990 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
991 {
992 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
993 pVCpu->cpum.s.fUseFlags |= CPUM_USE_FFXSR_LEAKY;
994 }
995 Log(("CPUM: Host CPU has leaky fxsave/fxrstor behaviour\n"));
996 }
997 }
998 }
999}
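/*
 * A hedged illustration of the scrubbing that CPUM_USE_FFXSR_LEAKY calls for:
 * after an FXSAVE on an affected CPU the x87 error-pointer fields in the saved
 * image may still hold values from an unrelated context, so they can be
 * cleared explicitly when no unmasked x87 exception is pending.  This is just
 * one possible mitigation sketched from the comment above; the helper name is
 * made up and the actual ring-0 handling lives elsewhere and may differ.
 */
#if 0 /* illustrative only */
static void exampleScrubLeakyFxSaveImage(PX86FXSTATE pFxState)
{
    if (!(pFxState->FSW & RT_BIT_32(7))) /* FSW.ES (Exception Summary) clear? */
    {
        pFxState->FPUIP  = 0;   /* last instruction pointer */
        pFxState->CS     = 0;
        pFxState->Rsrvd1 = 0;
        pFxState->FPUDP  = 0;   /* last data pointer */
        pFxState->DS     = 0;
        pFxState->Rsrvd2 = 0;
    }
}
#endif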
1000
1001
1002/**
1003 * Frees memory allocated for the SVM hardware virtualization state.
1004 *
1005 * @param pVM The cross context VM structure.
1006 */
1007static void cpumR3FreeSvmHwVirtState(PVM pVM)
1008{
1009 Assert(pVM->cpum.s.GuestFeatures.fSvm);
1010 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1011 {
1012 PVMCPU pVCpu = pVM->apCpusR3[i];
1013 if (pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3)
1014 {
1015 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3, SVM_VMCB_PAGES);
1016 pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3 = NULL;
1017 }
1018 pVCpu->cpum.s.Guest.hwvirt.svm.HCPhysVmcb = NIL_RTHCPHYS;
1019
1020 if (pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3)
1021 {
1022 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3, SVM_MSRPM_PAGES);
1023 pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3 = NULL;
1024 }
1025
1026 if (pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3)
1027 {
1028 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3, SVM_IOPM_PAGES);
1029 pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3 = NULL;
1030 }
1031 }
1032}
1033
1034
1035/**
1036 * Allocates memory for the SVM hardware virtualization state.
1037 *
1038 * @returns VBox status code.
1039 * @param pVM The cross context VM structure.
1040 */
1041static int cpumR3AllocSvmHwVirtState(PVM pVM)
1042{
1043 Assert(pVM->cpum.s.GuestFeatures.fSvm);
1044
1045 int rc = VINF_SUCCESS;
1046 LogRel(("CPUM: Allocating %u pages for the nested-guest SVM MSR and IO permission bitmaps\n",
1047 pVM->cCpus * (SVM_MSRPM_PAGES + SVM_IOPM_PAGES)));
1048 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1049 {
1050 PVMCPU pVCpu = pVM->apCpusR3[i];
1051 pVCpu->cpum.s.Guest.hwvirt.enmHwvirt = CPUMHWVIRT_SVM;
1052
1053 /*
1054 * Allocate the nested-guest VMCB.
1055 */
1056 SUPPAGE SupNstGstVmcbPage;
1057 RT_ZERO(SupNstGstVmcbPage);
1058 SupNstGstVmcbPage.Phys = NIL_RTHCPHYS;
1059 Assert(SVM_VMCB_PAGES == 1);
1060 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3);
1061 rc = SUPR3PageAllocEx(SVM_VMCB_PAGES, 0 /* fFlags */, (void **)&pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3,
1062 &pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR0, &SupNstGstVmcbPage);
1063 if (RT_FAILURE(rc))
1064 {
1065 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3);
1066 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's VMCB\n", pVCpu->idCpu, SVM_VMCB_PAGES));
1067 break;
1068 }
1069 pVCpu->cpum.s.Guest.hwvirt.svm.HCPhysVmcb = SupNstGstVmcbPage.Phys;
1070
1071 /*
1072 * Allocate the MSRPM (MSR Permission bitmap).
1073 *
1074 * These pages need not be physically contiguous because we use the one from
1075 * HMPHYSCPU while executing the nested-guest using hardware-assisted SVM.
1076 * This one is just used for caching the bitmap from guest physical memory.
1077 */
1078 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3);
1079 rc = SUPR3PageAllocEx(SVM_MSRPM_PAGES, 0 /* fFlags */, &pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3,
1080 &pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR0, NULL /* paPages */);
1081 if (RT_FAILURE(rc))
1082 {
1083 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3);
1084 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's MSR permission bitmap\n", pVCpu->idCpu,
1085 SVM_MSRPM_PAGES));
1086 break;
1087 }
1088
1089 /*
1090 * Allocate the IOPM (IO Permission bitmap).
1091 *
1092 * These pages need not be physically contiguous because we re-use the ring-0
1093 * allocated IOPM while executing the nested-guest using hardware-assisted SVM,
1094 * as it is identical (we trap all IO accesses).
1095 *
1096 * This one is just used for caching the IOPM from guest physical memory in
1097 * case the guest hypervisor allows direct access to some IO ports.
1098 */
1099 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3);
1100 rc = SUPR3PageAllocEx(SVM_IOPM_PAGES, 0 /* fFlags */, &pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3,
1101 &pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR0, NULL /* paPages */);
1102 if (RT_FAILURE(rc))
1103 {
1104 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3);
1105 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's IO permission bitmap\n", pVCpu->idCpu,
1106 SVM_IOPM_PAGES));
1107 break;
1108 }
1109 }
1110
1111 /* On any failure, cleanup. */
1112 if (RT_FAILURE(rc))
1113 cpumR3FreeSvmHwVirtState(pVM);
1114
1115 return rc;
1116}
1117
1118
1119/**
1120 * Resets per-VCPU SVM hardware virtualization state.
1121 *
1122 * @param pVCpu The cross context virtual CPU structure.
1123 */
1124DECLINLINE(void) cpumR3ResetSvmHwVirtState(PVMCPU pVCpu)
1125{
1126 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
1127 Assert(pCtx->hwvirt.enmHwvirt == CPUMHWVIRT_SVM);
1128 Assert(pCtx->hwvirt.svm.CTX_SUFF(pVmcb));
1129
1130 memset(pCtx->hwvirt.svm.CTX_SUFF(pVmcb), 0, SVM_VMCB_PAGES << PAGE_SHIFT);
1131 pCtx->hwvirt.svm.uMsrHSavePa = 0;
1132 pCtx->hwvirt.svm.uPrevPauseTick = 0;
1133}
1134
1135
1136/**
1137 * Frees memory allocated for the VMX hardware virtualization state.
1138 *
1139 * @param pVM The cross context VM structure.
1140 */
1141static void cpumR3FreeVmxHwVirtState(PVM pVM)
1142{
1143 Assert(pVM->cpum.s.GuestFeatures.fVmx);
1144 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1145 {
1146 PVMCPU pVCpu = pVM->apCpusR3[i];
1147 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
1148
1149 if (pCtx->hwvirt.vmx.pVmcsR3)
1150 {
1151 SUPR3ContFree(pCtx->hwvirt.vmx.pVmcsR3, VMX_V_VMCS_PAGES);
1152 pCtx->hwvirt.vmx.pVmcsR3 = NULL;
1153 }
1154 if (pCtx->hwvirt.vmx.pShadowVmcsR3)
1155 {
1156 SUPR3ContFree(pCtx->hwvirt.vmx.pShadowVmcsR3, VMX_V_VMCS_PAGES);
1157 pCtx->hwvirt.vmx.pShadowVmcsR3 = NULL;
1158 }
1159 if (pCtx->hwvirt.vmx.pvVirtApicPageR3)
1160 {
1161 SUPR3ContFree(pCtx->hwvirt.vmx.pvVirtApicPageR3, VMX_V_VIRT_APIC_PAGES);
1162 pCtx->hwvirt.vmx.pvVirtApicPageR3 = NULL;
1163 }
1164 if (pCtx->hwvirt.vmx.pvVmreadBitmapR3)
1165 {
1166 SUPR3ContFree(pCtx->hwvirt.vmx.pvVmreadBitmapR3, VMX_V_VMREAD_VMWRITE_BITMAP_PAGES);
1167 pCtx->hwvirt.vmx.pvVmreadBitmapR3 = NULL;
1168 }
1169 if (pCtx->hwvirt.vmx.pvVmwriteBitmapR3)
1170 {
1171 SUPR3ContFree(pCtx->hwvirt.vmx.pvVmwriteBitmapR3, VMX_V_VMREAD_VMWRITE_BITMAP_PAGES);
1172 pCtx->hwvirt.vmx.pvVmwriteBitmapR3 = NULL;
1173 }
1174 if (pCtx->hwvirt.vmx.pEntryMsrLoadAreaR3)
1175 {
1176 SUPR3ContFree(pCtx->hwvirt.vmx.pEntryMsrLoadAreaR3, VMX_V_AUTOMSR_AREA_PAGES);
1177 pCtx->hwvirt.vmx.pEntryMsrLoadAreaR3 = NULL;
1178 }
1179 if (pCtx->hwvirt.vmx.pExitMsrStoreAreaR3)
1180 {
1181 SUPR3ContFree(pCtx->hwvirt.vmx.pExitMsrStoreAreaR3, VMX_V_AUTOMSR_AREA_PAGES);
1182 pCtx->hwvirt.vmx.pExitMsrStoreAreaR3 = NULL;
1183 }
1184 if (pCtx->hwvirt.vmx.pExitMsrLoadAreaR3)
1185 {
1186 SUPR3ContFree(pCtx->hwvirt.vmx.pExitMsrLoadAreaR3, VMX_V_AUTOMSR_AREA_PAGES);
1187 pCtx->hwvirt.vmx.pExitMsrLoadAreaR3 = NULL;
1188 }
1189 if (pCtx->hwvirt.vmx.pvMsrBitmapR3)
1190 {
1191 SUPR3ContFree(pCtx->hwvirt.vmx.pvMsrBitmapR3, VMX_V_MSR_BITMAP_PAGES);
1192 pCtx->hwvirt.vmx.pvMsrBitmapR3 = NULL;
1193 }
1194 if (pCtx->hwvirt.vmx.pvIoBitmapR3)
1195 {
1196 SUPR3ContFree(pCtx->hwvirt.vmx.pvIoBitmapR3, VMX_V_IO_BITMAP_A_PAGES + VMX_V_IO_BITMAP_B_PAGES);
1197 pCtx->hwvirt.vmx.pvIoBitmapR3 = NULL;
1198 }
1199 }
1200}
1201
1202
1203/**
1204 * Allocates memory for the VMX hardware virtualization state.
1205 *
1206 * @returns VBox status code.
1207 * @param pVM The cross context VM structure.
1208 */
1209static int cpumR3AllocVmxHwVirtState(PVM pVM)
1210{
1211 int rc = VINF_SUCCESS;
1212 uint32_t const cPages = VMX_V_VMCS_PAGES
1213 + VMX_V_SHADOW_VMCS_PAGES
1214 + VMX_V_VIRT_APIC_PAGES
1215 + (2 * VMX_V_VMREAD_VMWRITE_BITMAP_PAGES)
1216 + (3 * VMX_V_AUTOMSR_AREA_PAGES)
1217 + VMX_V_MSR_BITMAP_PAGES
1218 + (VMX_V_IO_BITMAP_A_PAGES + VMX_V_IO_BITMAP_B_PAGES);
1219 LogRel(("CPUM: Allocating %u pages for the nested-guest VMCS and related structures\n", pVM->cCpus * cPages));
1220 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1221 {
1222 PVMCPU pVCpu = pVM->apCpusR3[i];
1223 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
1224 pCtx->hwvirt.enmHwvirt = CPUMHWVIRT_VMX;
1225
1226 /*
1227 * Allocate the nested-guest current VMCS.
1228 */
1229 pCtx->hwvirt.vmx.pVmcsR3 = (PVMXVVMCS)SUPR3ContAlloc(VMX_V_VMCS_PAGES,
1230 &pCtx->hwvirt.vmx.pVmcsR0,
1231 &pCtx->hwvirt.vmx.HCPhysVmcs);
1232 if (pCtx->hwvirt.vmx.pVmcsR3)
1233 { /* likely */ }
1234 else
1235 {
1236 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's VMCS\n", pVCpu->idCpu, VMX_V_VMCS_PAGES));
1237 break;
1238 }
1239
1240 /*
1241 * Allocate the nested-guest shadow VMCS.
1242 */
1243 pCtx->hwvirt.vmx.pShadowVmcsR3 = (PVMXVVMCS)SUPR3ContAlloc(VMX_V_VMCS_PAGES,
1244 &pCtx->hwvirt.vmx.pShadowVmcsR0,
1245 &pCtx->hwvirt.vmx.HCPhysShadowVmcs);
1246 if (pCtx->hwvirt.vmx.pShadowVmcsR3)
1247 { /* likely */ }
1248 else
1249 {
1250 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's shadow VMCS\n", pVCpu->idCpu, VMX_V_VMCS_PAGES));
1251 break;
1252 }
1253
1254 /*
1255 * Allocate the virtual-APIC page.
1256 */
1257 pCtx->hwvirt.vmx.pvVirtApicPageR3 = SUPR3ContAlloc(VMX_V_VIRT_APIC_PAGES,
1258 &pCtx->hwvirt.vmx.pvVirtApicPageR0,
1259 &pCtx->hwvirt.vmx.HCPhysVirtApicPage);
1260 if (pCtx->hwvirt.vmx.pvVirtApicPageR3)
1261 { /* likely */ }
1262 else
1263 {
1264 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's virtual-APIC page\n", pVCpu->idCpu,
1265 VMX_V_VIRT_APIC_PAGES));
1266 break;
1267 }
1268
1269 /*
1270 * Allocate the VMREAD-bitmap.
1271 */
1272 pCtx->hwvirt.vmx.pvVmreadBitmapR3 = SUPR3ContAlloc(VMX_V_VMREAD_VMWRITE_BITMAP_PAGES,
1273 &pCtx->hwvirt.vmx.pvVmreadBitmapR0,
1274 &pCtx->hwvirt.vmx.HCPhysVmreadBitmap);
1275 if (pCtx->hwvirt.vmx.pvVmreadBitmapR3)
1276 { /* likely */ }
1277 else
1278 {
1279 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's VMREAD-bitmap\n", pVCpu->idCpu,
1280 VMX_V_VMREAD_VMWRITE_BITMAP_PAGES));
1281 break;
1282 }
1283
1284 /*
1285 * Allocate the VMWRITE-bitmap.
1286 */
1287 pCtx->hwvirt.vmx.pvVmwriteBitmapR3 = SUPR3ContAlloc(VMX_V_VMREAD_VMWRITE_BITMAP_PAGES,
1288 &pCtx->hwvirt.vmx.pvVmwriteBitmapR0,
1289 &pCtx->hwvirt.vmx.HCPhysVmwriteBitmap);
1290 if (pCtx->hwvirt.vmx.pvVmwriteBitmapR3)
1291 { /* likely */ }
1292 else
1293 {
1294 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's VMWRITE-bitmap\n", pVCpu->idCpu,
1295 VMX_V_VMREAD_VMWRITE_BITMAP_PAGES));
1296 break;
1297 }
1298
1299 /*
1300 * Allocate the VM-entry MSR-load area.
1301 */
1302 pCtx->hwvirt.vmx.pEntryMsrLoadAreaR3 = (PVMXAUTOMSR)SUPR3ContAlloc(VMX_V_AUTOMSR_AREA_PAGES,
1303 &pCtx->hwvirt.vmx.pEntryMsrLoadAreaR0,
1304 &pCtx->hwvirt.vmx.HCPhysEntryMsrLoadArea);
1305 if (pCtx->hwvirt.vmx.pEntryMsrLoadAreaR3)
1306 { /* likely */ }
1307 else
1308 {
1309 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's VM-entry MSR-load area\n", pVCpu->idCpu,
1310 VMX_V_AUTOMSR_AREA_PAGES));
1311 break;
1312 }
1313
1314 /*
1315 * Allocate the VM-exit MSR-store area.
1316 */
1317 pCtx->hwvirt.vmx.pExitMsrStoreAreaR3 = (PVMXAUTOMSR)SUPR3ContAlloc(VMX_V_AUTOMSR_AREA_PAGES,
1318 &pCtx->hwvirt.vmx.pExitMsrStoreAreaR0,
1319 &pCtx->hwvirt.vmx.HCPhysExitMsrStoreArea);
1320 if (pCtx->hwvirt.vmx.pExitMsrStoreAreaR3)
1321 { /* likely */ }
1322 else
1323 {
1324 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's VM-exit MSR-store area\n", pVCpu->idCpu,
1325 VMX_V_AUTOMSR_AREA_PAGES));
1326 break;
1327 }
1328
1329 /*
1330 * Allocate the VM-exit MSR-load area.
1331 */
1332 pCtx->hwvirt.vmx.pExitMsrLoadAreaR3 = (PVMXAUTOMSR)SUPR3ContAlloc(VMX_V_AUTOMSR_AREA_PAGES,
1333 &pCtx->hwvirt.vmx.pExitMsrLoadAreaR0,
1334 &pCtx->hwvirt.vmx.HCPhysExitMsrLoadArea);
1335 if (pCtx->hwvirt.vmx.pExitMsrLoadAreaR3)
1336 { /* likely */ }
1337 else
1338 {
1339 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's VM-exit MSR-load area\n", pVCpu->idCpu,
1340 VMX_V_AUTOMSR_AREA_PAGES));
1341 break;
1342 }
1343
1344 /*
1345 * Allocate the MSR bitmap.
1346 */
1347 pCtx->hwvirt.vmx.pvMsrBitmapR3 = SUPR3ContAlloc(VMX_V_MSR_BITMAP_PAGES,
1348 &pCtx->hwvirt.vmx.pvMsrBitmapR0,
1349 &pCtx->hwvirt.vmx.HCPhysMsrBitmap);
1350 if (pCtx->hwvirt.vmx.pvMsrBitmapR3)
1351 { /* likely */ }
1352 else
1353 {
1354 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's MSR bitmap\n", pVCpu->idCpu,
1355 VMX_V_MSR_BITMAP_PAGES));
1356 break;
1357 }
1358
1359 /*
1360 * Allocate the I/O bitmaps (A and B).
1361 */
1362 pCtx->hwvirt.vmx.pvIoBitmapR3 = SUPR3ContAlloc(VMX_V_IO_BITMAP_A_PAGES + VMX_V_IO_BITMAP_B_PAGES,
1363 &pCtx->hwvirt.vmx.pvIoBitmapR0,
1364 &pCtx->hwvirt.vmx.HCPhysIoBitmap);
1365 if (pCtx->hwvirt.vmx.pvIoBitmapR3)
1366 { /* likely */ }
1367 else
1368 {
1369 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's I/O bitmaps\n", pVCpu->idCpu,
1370 VMX_V_IO_BITMAP_A_PAGES + VMX_V_IO_BITMAP_B_PAGES));
1371 break;
1372 }
1373
1374 /*
1375 * Zero out all allocated pages (should compress well for saved-state).
1376 */
1377 memset(pCtx->hwvirt.vmx.CTX_SUFF(pVmcs), 0, VMX_V_VMCS_SIZE);
1378 memset(pCtx->hwvirt.vmx.CTX_SUFF(pShadowVmcs), 0, VMX_V_SHADOW_VMCS_SIZE);
1379 memset(pCtx->hwvirt.vmx.CTX_SUFF(pvVirtApicPage), 0, VMX_V_VIRT_APIC_SIZE);
1380 memset(pCtx->hwvirt.vmx.CTX_SUFF(pvVmreadBitmap), 0, VMX_V_VMREAD_VMWRITE_BITMAP_SIZE);
1381 memset(pCtx->hwvirt.vmx.CTX_SUFF(pvVmwriteBitmap), 0, VMX_V_VMREAD_VMWRITE_BITMAP_SIZE);
1382 memset(pCtx->hwvirt.vmx.CTX_SUFF(pEntryMsrLoadArea), 0, VMX_V_AUTOMSR_AREA_SIZE);
1383 memset(pCtx->hwvirt.vmx.CTX_SUFF(pExitMsrStoreArea), 0, VMX_V_AUTOMSR_AREA_SIZE);
1384 memset(pCtx->hwvirt.vmx.CTX_SUFF(pExitMsrLoadArea), 0, VMX_V_AUTOMSR_AREA_SIZE);
1385 memset(pCtx->hwvirt.vmx.CTX_SUFF(pvMsrBitmap), 0, VMX_V_MSR_BITMAP_SIZE);
1386 memset(pCtx->hwvirt.vmx.CTX_SUFF(pvIoBitmap), 0, VMX_V_IO_BITMAP_A_SIZE + VMX_V_IO_BITMAP_B_SIZE);
1387 }
1388
1389 /* On any failure, cleanup. */
1390 if (RT_FAILURE(rc))
1391 cpumR3FreeVmxHwVirtState(pVM);
1392
1393 return rc;
1394}
1395
1396
1397/**
1398 * Resets per-VCPU VMX hardware virtualization state.
1399 *
1400 * @param pVCpu The cross context virtual CPU structure.
1401 */
1402DECLINLINE(void) cpumR3ResetVmxHwVirtState(PVMCPU pVCpu)
1403{
1404 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
1405 Assert(pCtx->hwvirt.enmHwvirt == CPUMHWVIRT_VMX);
1406 Assert(pCtx->hwvirt.vmx.CTX_SUFF(pVmcs));
1407 Assert(pCtx->hwvirt.vmx.CTX_SUFF(pShadowVmcs));
1408
1409 memset(pCtx->hwvirt.vmx.CTX_SUFF(pVmcs), 0, VMX_V_VMCS_SIZE);
1410 memset(pCtx->hwvirt.vmx.CTX_SUFF(pShadowVmcs), 0, VMX_V_SHADOW_VMCS_SIZE);
1411 pCtx->hwvirt.vmx.GCPhysVmxon = NIL_RTGCPHYS;
1412 pCtx->hwvirt.vmx.GCPhysShadowVmcs = NIL_RTGCPHYS;
1413 pCtx->hwvirt.vmx.GCPhysVmcs = NIL_RTGCPHYS;
1414 pCtx->hwvirt.vmx.fInVmxRootMode = false;
1415 pCtx->hwvirt.vmx.fInVmxNonRootMode = false;
1416 /* Don't reset diagnostics here. */
1417
1418 /* Stop any VMX-preemption timer. */
1419 CPUMStopGuestVmxPremptTimer(pVCpu);
1420
1421 /* Clear all nested-guest FFs. */
1422 VMCPU_FF_CLEAR_MASK(pVCpu, VMCPU_FF_VMX_ALL_MASK);
1423}
1424
1425
1426/**
1427 * Displays the host and guest VMX features.
1428 *
1429 * @param pVM The cross context VM structure.
1430 * @param pHlp The info helper functions.
1431 * @param pszArgs "terse", "default" or "verbose".
1432 */
1433DECLCALLBACK(void) cpumR3InfoVmxFeatures(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
1434{
1435 RT_NOREF(pszArgs);
1436 PCCPUMFEATURES pHostFeatures = &pVM->cpum.s.HostFeatures;
1437 PCCPUMFEATURES pGuestFeatures = &pVM->cpum.s.GuestFeatures;
1438 if ( pHostFeatures->enmCpuVendor == CPUMCPUVENDOR_INTEL
1439 || pHostFeatures->enmCpuVendor == CPUMCPUVENDOR_VIA
1440 || pHostFeatures->enmCpuVendor == CPUMCPUVENDOR_SHANGHAI)
1441 {
1442#define VMXFEATDUMP(a_szDesc, a_Var) \
1443 pHlp->pfnPrintf(pHlp, " %s = %u (%u)\n", a_szDesc, pGuestFeatures->a_Var, pHostFeatures->a_Var)
1444
1445 pHlp->pfnPrintf(pHlp, "Nested hardware virtualization - VMX features\n");
1446 pHlp->pfnPrintf(pHlp, " Mnemonic - Description = guest (host)\n");
1447 VMXFEATDUMP("VMX - Virtual-Machine Extensions ", fVmx);
1448 /* Basic. */
1449 VMXFEATDUMP("InsOutInfo - INS/OUTS instruction info. ", fVmxInsOutInfo);
1450 /* Pin-based controls. */
1451 VMXFEATDUMP("ExtIntExit - External interrupt exiting ", fVmxExtIntExit);
1452 VMXFEATDUMP("NmiExit - NMI exiting ", fVmxNmiExit);
1453 VMXFEATDUMP("VirtNmi - Virtual NMIs ", fVmxVirtNmi);
1454 VMXFEATDUMP("PreemptTimer - VMX preemption timer ", fVmxPreemptTimer);
1455 VMXFEATDUMP("PostedInt - Posted interrupts ", fVmxPostedInt);
1456 /* Processor-based controls. */
1457 VMXFEATDUMP("IntWindowExit - Interrupt-window exiting ", fVmxIntWindowExit);
1458 VMXFEATDUMP("TscOffsetting - TSC offsetting ", fVmxTscOffsetting);
1459 VMXFEATDUMP("HltExit - HLT exiting ", fVmxHltExit);
1460 VMXFEATDUMP("InvlpgExit - INVLPG exiting ", fVmxInvlpgExit);
1461 VMXFEATDUMP("MwaitExit - MWAIT exiting ", fVmxMwaitExit);
1462 VMXFEATDUMP("RdpmcExit - RDPMC exiting ", fVmxRdpmcExit);
1463 VMXFEATDUMP("RdtscExit - RDTSC exiting ", fVmxRdtscExit);
1464 VMXFEATDUMP("Cr3LoadExit - CR3-load exiting ", fVmxCr3LoadExit);
1465 VMXFEATDUMP("Cr3StoreExit - CR3-store exiting ", fVmxCr3StoreExit);
1466 VMXFEATDUMP("Cr8LoadExit - CR8-load exiting ", fVmxCr8LoadExit);
1467 VMXFEATDUMP("Cr8StoreExit - CR8-store exiting ", fVmxCr8StoreExit);
1468 VMXFEATDUMP("UseTprShadow - Use TPR shadow ", fVmxUseTprShadow);
1469 VMXFEATDUMP("NmiWindowExit - NMI-window exiting ", fVmxNmiWindowExit);
1470 VMXFEATDUMP("MovDRxExit - Mov-DR exiting ", fVmxMovDRxExit);
1471 VMXFEATDUMP("UncondIoExit - Unconditional I/O exiting ", fVmxUncondIoExit);
1472 VMXFEATDUMP("UseIoBitmaps - Use I/O bitmaps ", fVmxUseIoBitmaps);
1473 VMXFEATDUMP("MonitorTrapFlag - Monitor Trap Flag ", fVmxMonitorTrapFlag);
1474 VMXFEATDUMP("UseMsrBitmaps - MSR bitmaps ", fVmxUseMsrBitmaps);
1475 VMXFEATDUMP("MonitorExit - MONITOR exiting ", fVmxMonitorExit);
1476 VMXFEATDUMP("PauseExit - PAUSE exiting ", fVmxPauseExit);
1477 VMXFEATDUMP("SecondaryExecCtl - Activate secondary controls ", fVmxSecondaryExecCtls);
1478 /* Secondary processor-based controls. */
1479 VMXFEATDUMP("VirtApic - Virtualize-APIC accesses ", fVmxVirtApicAccess);
1480 VMXFEATDUMP("Ept - Extended Page Tables ", fVmxEpt);
1481 VMXFEATDUMP("DescTableExit - Descriptor-table exiting ", fVmxDescTableExit);
1482 VMXFEATDUMP("Rdtscp - Enable RDTSCP ", fVmxRdtscp);
1483 VMXFEATDUMP("VirtX2ApicMode - Virtualize-x2APIC mode ", fVmxVirtX2ApicMode);
1484 VMXFEATDUMP("Vpid - Enable VPID ", fVmxVpid);
1485 VMXFEATDUMP("WbinvdExit - WBINVD exiting ", fVmxWbinvdExit);
1486 VMXFEATDUMP("UnrestrictedGuest - Unrestricted guest ", fVmxUnrestrictedGuest);
1487 VMXFEATDUMP("ApicRegVirt - APIC-register virtualization ", fVmxApicRegVirt);
1488 VMXFEATDUMP("VirtIntDelivery - Virtual-interrupt delivery ", fVmxVirtIntDelivery);
1489 VMXFEATDUMP("PauseLoopExit - PAUSE-loop exiting ", fVmxPauseLoopExit);
1490 VMXFEATDUMP("RdrandExit - RDRAND exiting ", fVmxRdrandExit);
1491 VMXFEATDUMP("Invpcid - Enable INVPCID ", fVmxInvpcid);
1492 VMXFEATDUMP("VmFuncs - Enable VM Functions ", fVmxVmFunc);
1493 VMXFEATDUMP("VmcsShadowing - VMCS shadowing ", fVmxVmcsShadowing);
1494 VMXFEATDUMP("RdseedExiting - RDSEED exiting ", fVmxRdseedExit);
1495 VMXFEATDUMP("PML - Page-Modification Log (PML) ", fVmxPml);
1496 VMXFEATDUMP("EptVe - EPT violations can cause #VE ", fVmxEptXcptVe);
1497 VMXFEATDUMP("XsavesXRstors - Enable XSAVES/XRSTORS ", fVmxXsavesXrstors);
1498 /* VM-entry controls. */
1499 VMXFEATDUMP("EntryLoadDebugCtls - Load debug controls on VM-entry ", fVmxEntryLoadDebugCtls);
1500 VMXFEATDUMP("Ia32eModeGuest - IA-32e mode guest ", fVmxIa32eModeGuest);
1501 VMXFEATDUMP("EntryLoadEferMsr - Load IA32_EFER MSR on VM-entry ", fVmxEntryLoadEferMsr);
1502 VMXFEATDUMP("EntryLoadPatMsr - Load IA32_PAT MSR on VM-entry ", fVmxEntryLoadPatMsr);
1503 /* VM-exit controls. */
1504 VMXFEATDUMP("ExitSaveDebugCtls - Save debug controls on VM-exit ", fVmxExitSaveDebugCtls);
1505 VMXFEATDUMP("HostAddrSpaceSize - Host address-space size ", fVmxHostAddrSpaceSize);
1506 VMXFEATDUMP("ExitAckExtInt - Acknowledge interrupt on VM-exit ", fVmxExitAckExtInt);
1507 VMXFEATDUMP("ExitSavePatMsr - Save IA32_PAT MSR on VM-exit ", fVmxExitSavePatMsr);
1508 VMXFEATDUMP("ExitLoadPatMsr - Load IA32_PAT MSR on VM-exit ", fVmxExitLoadPatMsr);
1509 VMXFEATDUMP("ExitSaveEferMsr - Save IA32_EFER MSR on VM-exit ", fVmxExitSaveEferMsr);
1510 VMXFEATDUMP("ExitLoadEferMsr - Load IA32_EFER MSR on VM-exit ", fVmxExitLoadEferMsr);
1511 VMXFEATDUMP("SavePreemptTimer - Save VMX-preemption timer ", fVmxSavePreemptTimer);
1512 /* Miscellaneous data. */
1513 VMXFEATDUMP("ExitSaveEferLma - Save IA32_EFER.LMA on VM-exit ", fVmxExitSaveEferLma);
1514 VMXFEATDUMP("IntelPt - Intel PT (Processor Trace) in VMX operation ", fVmxIntelPt);
1515 VMXFEATDUMP("VmwriteAll - VMWRITE to any supported VMCS field ", fVmxVmwriteAll);
1516 VMXFEATDUMP("EntryInjectSoftInt - Inject softint. with 0-len instr. ", fVmxEntryInjectSoftInt);
1517#undef VMXFEATDUMP
1518 }
1519 else
1520 pHlp->pfnPrintf(pHlp, "No VMX features present - requires an Intel or compatible CPU.\n");
1521}
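
/*
 * A minimal usage sketch: once CPUMR3Init has registered this handler under the name
 * "cpumvmxfeat", the dump can be triggered through DBGF. The call below assumes the
 * common DBGFR3Info() signature (pUVM, info name, arguments, output helper) and that
 * passing NULL for the helper routes the output to the log.
 */
#if 0 /* illustration only */
static void cpumR3DemoDumpVmxFeatures(PVM pVM)
{
    int rc = DBGFR3Info(pVM->pUVM, "cpumvmxfeat", NULL /* pszArgs */, NULL /* pHlp */);
    AssertRC(rc);
}
#endif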
1522
1523
1524/**
1525 * Checks whether nested-guest execution using hardware-assisted VMX (e.g., using HM
1526 * or NEM) is allowed.
1527 *
1528 * @returns @c true if hardware-assisted nested-guest execution is allowed, @c false
1529 * otherwise.
1530 * @param pVM The cross context VM structure.
1531 */
1532static bool cpumR3IsHwAssistNstGstExecAllowed(PVM pVM)
1533{
1534 AssertMsg(pVM->bMainExecutionEngine != VM_EXEC_ENGINE_NOT_SET, ("Calling this function too early!\n"));
1535#ifndef VBOX_WITH_NESTED_HWVIRT_ONLY_IN_IEM
1536 if ( pVM->bMainExecutionEngine == VM_EXEC_ENGINE_HW_VIRT
1537 || pVM->bMainExecutionEngine == VM_EXEC_ENGINE_NATIVE_API)
1538 return true;
1539#else
1540 NOREF(pVM);
1541#endif
1542 return false;
1543}
1544
1545
1546/**
1547 * Initializes the VMX guest MSRs from guest CPU features based on the host MSRs.
1548 *
1549 * @param pVM The cross context VM structure.
1550 * @param pHostVmxMsrs The host VMX MSRs. Pass NULL when fully emulating VMX
1551 * and no hardware-assisted nested-guest execution is
1552 * possible for this VM.
1553 * @param pGuestFeatures The guest features to use (only VMX features are
1554 * accessed).
1555 * @param pGuestVmxMsrs Where to store the initialized guest VMX MSRs.
1556 *
1557 * @remarks This function ASSUMES the VMX guest-features are already exploded!
1558 */
1559static void cpumR3InitVmxGuestMsrs(PVM pVM, PCVMXMSRS pHostVmxMsrs, PCCPUMFEATURES pGuestFeatures, PVMXMSRS pGuestVmxMsrs)
1560{
1561 bool const fIsNstGstHwExecAllowed = cpumR3IsHwAssistNstGstExecAllowed(pVM);
1562
1563 Assert(!fIsNstGstHwExecAllowed || pHostVmxMsrs);
1564 Assert(pGuestFeatures->fVmx);
1565
1566 /*
1567 * We don't support the following MSRs yet:
1568 * - True Pin-based VM-execution controls.
1569 * - True Processor-based VM-execution controls.
1570 * - True VM-entry controls.
1571 * - True VM-exit controls.
1572 */
1573
1574 /* Feature control. */
1575 pGuestVmxMsrs->u64FeatCtrl = MSR_IA32_FEATURE_CONTROL_LOCK | MSR_IA32_FEATURE_CONTROL_VMXON;
1576
1577 /* Basic information. */
1578 {
1579 uint64_t const u64Basic = RT_BF_MAKE(VMX_BF_BASIC_VMCS_ID, VMX_V_VMCS_REVISION_ID )
1580 | RT_BF_MAKE(VMX_BF_BASIC_VMCS_SIZE, VMX_V_VMCS_SIZE )
1581 | RT_BF_MAKE(VMX_BF_BASIC_PHYSADDR_WIDTH, !pGuestFeatures->fLongMode )
1582 | RT_BF_MAKE(VMX_BF_BASIC_DUAL_MON, 0 )
1583 | RT_BF_MAKE(VMX_BF_BASIC_VMCS_MEM_TYPE, VMX_BASIC_MEM_TYPE_WB )
1584 | RT_BF_MAKE(VMX_BF_BASIC_VMCS_INS_OUTS, pGuestFeatures->fVmxInsOutInfo)
1585 | RT_BF_MAKE(VMX_BF_BASIC_TRUE_CTLS, 0 );
1586 pGuestVmxMsrs->u64Basic = u64Basic;
1587 }
1588
1589 /* Pin-based VM-execution controls. */
1590 {
1591 uint32_t const fFeatures = (pGuestFeatures->fVmxExtIntExit << VMX_BF_PIN_CTLS_EXT_INT_EXIT_SHIFT )
1592 | (pGuestFeatures->fVmxNmiExit << VMX_BF_PIN_CTLS_NMI_EXIT_SHIFT )
1593 | (pGuestFeatures->fVmxVirtNmi << VMX_BF_PIN_CTLS_VIRT_NMI_SHIFT )
1594 | (pGuestFeatures->fVmxPreemptTimer << VMX_BF_PIN_CTLS_PREEMPT_TIMER_SHIFT)
1595 | (pGuestFeatures->fVmxPostedInt << VMX_BF_PIN_CTLS_POSTED_INT_SHIFT );
1596 uint32_t const fAllowed0 = VMX_PIN_CTLS_DEFAULT1;
1597 uint32_t const fAllowed1 = fFeatures | VMX_PIN_CTLS_DEFAULT1;
1598 AssertMsg((fAllowed0 & fAllowed1) == fAllowed0, ("fAllowed0=%#RX32 fAllowed1=%#RX32 fFeatures=%#RX32\n",
1599 fAllowed0, fAllowed1, fFeatures));
1600 pGuestVmxMsrs->PinCtls.u = RT_MAKE_U64(fAllowed0, fAllowed1);
1601 }
1602
1603 /* Processor-based VM-execution controls. */
1604 {
1605 uint32_t const fFeatures = (pGuestFeatures->fVmxIntWindowExit << VMX_BF_PROC_CTLS_INT_WINDOW_EXIT_SHIFT )
1606 | (pGuestFeatures->fVmxTscOffsetting << VMX_BF_PROC_CTLS_USE_TSC_OFFSETTING_SHIFT)
1607 | (pGuestFeatures->fVmxHltExit << VMX_BF_PROC_CTLS_HLT_EXIT_SHIFT )
1608 | (pGuestFeatures->fVmxInvlpgExit << VMX_BF_PROC_CTLS_INVLPG_EXIT_SHIFT )
1609 | (pGuestFeatures->fVmxMwaitExit << VMX_BF_PROC_CTLS_MWAIT_EXIT_SHIFT )
1610 | (pGuestFeatures->fVmxRdpmcExit << VMX_BF_PROC_CTLS_RDPMC_EXIT_SHIFT )
1611 | (pGuestFeatures->fVmxRdtscExit << VMX_BF_PROC_CTLS_RDTSC_EXIT_SHIFT )
1612 | (pGuestFeatures->fVmxCr3LoadExit << VMX_BF_PROC_CTLS_CR3_LOAD_EXIT_SHIFT )
1613 | (pGuestFeatures->fVmxCr3StoreExit << VMX_BF_PROC_CTLS_CR3_STORE_EXIT_SHIFT )
1614 | (pGuestFeatures->fVmxCr8LoadExit << VMX_BF_PROC_CTLS_CR8_LOAD_EXIT_SHIFT )
1615 | (pGuestFeatures->fVmxCr8StoreExit << VMX_BF_PROC_CTLS_CR8_STORE_EXIT_SHIFT )
1616 | (pGuestFeatures->fVmxUseTprShadow << VMX_BF_PROC_CTLS_USE_TPR_SHADOW_SHIFT )
1617 | (pGuestFeatures->fVmxNmiWindowExit << VMX_BF_PROC_CTLS_NMI_WINDOW_EXIT_SHIFT )
1618 | (pGuestFeatures->fVmxMovDRxExit << VMX_BF_PROC_CTLS_MOV_DR_EXIT_SHIFT )
1619 | (pGuestFeatures->fVmxUncondIoExit << VMX_BF_PROC_CTLS_UNCOND_IO_EXIT_SHIFT )
1620 | (pGuestFeatures->fVmxUseIoBitmaps << VMX_BF_PROC_CTLS_USE_IO_BITMAPS_SHIFT )
1621 | (pGuestFeatures->fVmxMonitorTrapFlag << VMX_BF_PROC_CTLS_MONITOR_TRAP_FLAG_SHIFT )
1622 | (pGuestFeatures->fVmxUseMsrBitmaps << VMX_BF_PROC_CTLS_USE_MSR_BITMAPS_SHIFT )
1623 | (pGuestFeatures->fVmxMonitorExit << VMX_BF_PROC_CTLS_MONITOR_EXIT_SHIFT )
1624 | (pGuestFeatures->fVmxPauseExit << VMX_BF_PROC_CTLS_PAUSE_EXIT_SHIFT )
1625 | (pGuestFeatures->fVmxSecondaryExecCtls << VMX_BF_PROC_CTLS_USE_SECONDARY_CTLS_SHIFT);
1626 uint32_t const fAllowed0 = VMX_PROC_CTLS_DEFAULT1;
1627 uint32_t const fAllowed1 = fFeatures | VMX_PROC_CTLS_DEFAULT1;
1628 AssertMsg((fAllowed0 & fAllowed1) == fAllowed0, ("fAllowed0=%#RX32 fAllowed1=%#RX32 fFeatures=%#RX32\n", fAllowed0,
1629 fAllowed1, fFeatures));
1630 pGuestVmxMsrs->ProcCtls.u = RT_MAKE_U64(fAllowed0, fAllowed1);
1631 }
1632
1633 /* Secondary processor-based VM-execution controls. */
1634 if (pGuestFeatures->fVmxSecondaryExecCtls)
1635 {
1636 uint32_t const fFeatures = (pGuestFeatures->fVmxVirtApicAccess << VMX_BF_PROC_CTLS2_VIRT_APIC_ACCESS_SHIFT )
1637 | (pGuestFeatures->fVmxEpt << VMX_BF_PROC_CTLS2_EPT_SHIFT )
1638 | (pGuestFeatures->fVmxDescTableExit << VMX_BF_PROC_CTLS2_DESC_TABLE_EXIT_SHIFT )
1639 | (pGuestFeatures->fVmxRdtscp << VMX_BF_PROC_CTLS2_RDTSCP_SHIFT )
1640 | (pGuestFeatures->fVmxVirtX2ApicMode << VMX_BF_PROC_CTLS2_VIRT_X2APIC_MODE_SHIFT )
1641 | (pGuestFeatures->fVmxVpid << VMX_BF_PROC_CTLS2_VPID_SHIFT )
1642 | (pGuestFeatures->fVmxWbinvdExit << VMX_BF_PROC_CTLS2_WBINVD_EXIT_SHIFT )
1643 | (pGuestFeatures->fVmxUnrestrictedGuest << VMX_BF_PROC_CTLS2_UNRESTRICTED_GUEST_SHIFT)
1644 | (pGuestFeatures->fVmxApicRegVirt << VMX_BF_PROC_CTLS2_APIC_REG_VIRT_SHIFT )
1645 | (pGuestFeatures->fVmxVirtIntDelivery << VMX_BF_PROC_CTLS2_VIRT_INT_DELIVERY_SHIFT )
1646 | (pGuestFeatures->fVmxPauseLoopExit << VMX_BF_PROC_CTLS2_PAUSE_LOOP_EXIT_SHIFT )
1647 | (pGuestFeatures->fVmxRdrandExit << VMX_BF_PROC_CTLS2_RDRAND_EXIT_SHIFT )
1648 | (pGuestFeatures->fVmxInvpcid << VMX_BF_PROC_CTLS2_INVPCID_SHIFT )
1649 | (pGuestFeatures->fVmxVmFunc << VMX_BF_PROC_CTLS2_VMFUNC_SHIFT )
1650 | (pGuestFeatures->fVmxVmcsShadowing << VMX_BF_PROC_CTLS2_VMCS_SHADOWING_SHIFT )
1651 | (pGuestFeatures->fVmxRdseedExit << VMX_BF_PROC_CTLS2_RDSEED_EXIT_SHIFT )
1652 | (pGuestFeatures->fVmxPml << VMX_BF_PROC_CTLS2_PML_SHIFT )
1653 | (pGuestFeatures->fVmxEptXcptVe << VMX_BF_PROC_CTLS2_EPT_VE_SHIFT )
1654 | (pGuestFeatures->fVmxXsavesXrstors << VMX_BF_PROC_CTLS2_XSAVES_XRSTORS_SHIFT )
1655 | (pGuestFeatures->fVmxUseTscScaling << VMX_BF_PROC_CTLS2_TSC_SCALING_SHIFT );
1656 uint32_t const fAllowed0 = 0;
1657 uint32_t const fAllowed1 = fFeatures;
1658 pGuestVmxMsrs->ProcCtls2.u = RT_MAKE_U64(fAllowed0, fAllowed1);
1659 }
1660
1661 /* VM-exit controls. */
1662 {
1663 uint32_t const fFeatures = (pGuestFeatures->fVmxExitSaveDebugCtls << VMX_BF_EXIT_CTLS_SAVE_DEBUG_SHIFT )
1664 | (pGuestFeatures->fVmxHostAddrSpaceSize << VMX_BF_EXIT_CTLS_HOST_ADDR_SPACE_SIZE_SHIFT)
1665 | (pGuestFeatures->fVmxExitAckExtInt << VMX_BF_EXIT_CTLS_ACK_EXT_INT_SHIFT )
1666 | (pGuestFeatures->fVmxExitSavePatMsr << VMX_BF_EXIT_CTLS_SAVE_PAT_MSR_SHIFT )
1667 | (pGuestFeatures->fVmxExitLoadPatMsr << VMX_BF_EXIT_CTLS_LOAD_PAT_MSR_SHIFT )
1668 | (pGuestFeatures->fVmxExitSaveEferMsr << VMX_BF_EXIT_CTLS_SAVE_EFER_MSR_SHIFT )
1669 | (pGuestFeatures->fVmxExitLoadEferMsr << VMX_BF_EXIT_CTLS_LOAD_EFER_MSR_SHIFT )
1670 | (pGuestFeatures->fVmxSavePreemptTimer << VMX_BF_EXIT_CTLS_SAVE_PREEMPT_TIMER_SHIFT );
1671 /* Set the default1 class bits. See Intel spec. A.4 "VM-exit Controls". */
1672 uint32_t const fAllowed0 = VMX_EXIT_CTLS_DEFAULT1;
1673 uint32_t const fAllowed1 = fFeatures | VMX_EXIT_CTLS_DEFAULT1;
1674 AssertMsg((fAllowed0 & fAllowed1) == fAllowed0, ("fAllowed0=%#RX32 fAllowed1=%#RX32 fFeatures=%#RX32\n", fAllowed0,
1675 fAllowed1, fFeatures));
1676 pGuestVmxMsrs->ExitCtls.u = RT_MAKE_U64(fAllowed0, fAllowed1);
1677 }
1678
1679 /* VM-entry controls. */
1680 {
1681 uint32_t const fFeatures = (pGuestFeatures->fVmxEntryLoadDebugCtls << VMX_BF_ENTRY_CTLS_LOAD_DEBUG_SHIFT )
1682 | (pGuestFeatures->fVmxIa32eModeGuest << VMX_BF_ENTRY_CTLS_IA32E_MODE_GUEST_SHIFT)
1683 | (pGuestFeatures->fVmxEntryLoadEferMsr << VMX_BF_ENTRY_CTLS_LOAD_EFER_MSR_SHIFT )
1684 | (pGuestFeatures->fVmxEntryLoadPatMsr << VMX_BF_ENTRY_CTLS_LOAD_PAT_MSR_SHIFT );
1685 uint32_t const fAllowed0 = VMX_ENTRY_CTLS_DEFAULT1;
1686 uint32_t const fAllowed1 = fFeatures | VMX_ENTRY_CTLS_DEFAULT1;
1687 AssertMsg((fAllowed0 & fAllowed1) == fAllowed0, ("fAllowed0=%#RX32 fAllowed1=%#RX32 fFeatures=%#RX32\n", fAllowed0,
1688 fAllowed1, fFeatures));
1689 pGuestVmxMsrs->EntryCtls.u = RT_MAKE_U64(fAllowed0, fAllowed1);
1690 }
1691
1692 /* Miscellaneous data. */
1693 {
1694 uint64_t const uHostMsr = fIsNstGstHwExecAllowed ? pHostVmxMsrs->u64Misc : 0;
1695
1696 uint8_t const cMaxMsrs = RT_MIN(RT_BF_GET(uHostMsr, VMX_BF_MISC_MAX_MSRS), VMX_V_AUTOMSR_COUNT_MAX);
1697 uint8_t const fActivityState = RT_BF_GET(uHostMsr, VMX_BF_MISC_ACTIVITY_STATES) & VMX_V_GUEST_ACTIVITY_STATE_MASK;
1698 pGuestVmxMsrs->u64Misc = RT_BF_MAKE(VMX_BF_MISC_PREEMPT_TIMER_TSC, VMX_V_PREEMPT_TIMER_SHIFT )
1699 | RT_BF_MAKE(VMX_BF_MISC_EXIT_SAVE_EFER_LMA, pGuestFeatures->fVmxExitSaveEferLma )
1700 | RT_BF_MAKE(VMX_BF_MISC_ACTIVITY_STATES, fActivityState )
1701 | RT_BF_MAKE(VMX_BF_MISC_INTEL_PT, pGuestFeatures->fVmxIntelPt )
1702 | RT_BF_MAKE(VMX_BF_MISC_SMM_READ_SMBASE_MSR, 0 )
1703 | RT_BF_MAKE(VMX_BF_MISC_CR3_TARGET, VMX_V_CR3_TARGET_COUNT )
1704 | RT_BF_MAKE(VMX_BF_MISC_MAX_MSRS, cMaxMsrs )
1705 | RT_BF_MAKE(VMX_BF_MISC_VMXOFF_BLOCK_SMI, 0 )
1706 | RT_BF_MAKE(VMX_BF_MISC_VMWRITE_ALL, pGuestFeatures->fVmxVmwriteAll )
1707 | RT_BF_MAKE(VMX_BF_MISC_ENTRY_INJECT_SOFT_INT, pGuestFeatures->fVmxEntryInjectSoftInt)
1708 | RT_BF_MAKE(VMX_BF_MISC_MSEG_ID, VMX_V_MSEG_REV_ID );
1709 }
1710
1711 /* CR0 Fixed-0. */
1712 pGuestVmxMsrs->u64Cr0Fixed0 = pGuestFeatures->fVmxUnrestrictedGuest ? VMX_V_CR0_FIXED0_UX : VMX_V_CR0_FIXED0;
1713
1714 /* CR0 Fixed-1. */
1715 {
1716 /*
1717 * All CPUs I've looked at so far report CR0 fixed-1 bits as 0xffffffff.
1718 * This is different from CR4 fixed-1 bits which are reported as per the
1719 * CPU features and/or micro-architecture/generation. Why? Ask Intel.
1720 */
1721 uint64_t const uHostMsr = fIsNstGstHwExecAllowed ? pHostVmxMsrs->u64Cr0Fixed1 : 0xffffffff;
1722 pGuestVmxMsrs->u64Cr0Fixed1 = uHostMsr | pGuestVmxMsrs->u64Cr0Fixed0; /* Make sure the CR0 MB1 bits are not clear. */
1723 }
1724
1725 /* CR4 Fixed-0. */
1726 pGuestVmxMsrs->u64Cr4Fixed0 = VMX_V_CR4_FIXED0;
1727
1728 /* CR4 Fixed-1. */
1729 {
1730 uint64_t const uHostMsr = fIsNstGstHwExecAllowed ? pHostVmxMsrs->u64Cr4Fixed1 : CPUMGetGuestCR4ValidMask(pVM);
1731 pGuestVmxMsrs->u64Cr4Fixed1 = uHostMsr | pGuestVmxMsrs->u64Cr4Fixed0; /* Make sure the CR4 MB1 bits are not clear. */
1732 }
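    /* Note on the fixed-0/fixed-1 semantics (per the Intel SDM): while in VMX operation a
       CR0/CR4 value uCr is acceptable iff (uCr & uFixed0) == uFixed0 and (uCr & ~uFixed1) == 0,
       i.e. the fixed-0 MSR lists the must-be-one bits and the fixed-1 MSR lists the may-be-one
       bits, which is why the fixed-0 value is OR'ed into the fixed-1 value above. */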
1733
1734 /* VMCS Enumeration. */
1735 pGuestVmxMsrs->u64VmcsEnum = VMX_V_VMCS_MAX_INDEX << VMX_BF_VMCS_ENUM_HIGHEST_IDX_SHIFT;
1736
1737 /* VPID and EPT Capabilities. */
1738 {
1739 /*
1740 * The INVVPID instruction unconditionally causes a VM-exit, so we are free to fake
1741 * and emulate any INVVPID flush type. However, it only makes sense to expose the types
1742 * when the INVVPID instruction is supported, just to be more compatible with guest
1743 * hypervisors that may make assumptions by only looking at this MSR even though they
1744 * are technically supposed to refer to bit 37 of MSR_IA32_VMX_PROC_CTLS2 first.
1745 *
1746 * See Intel spec. 25.1.2 "Instructions That Cause VM Exits Unconditionally".
1747 * See Intel spec. 30.3 "VMX Instructions".
1748 */
1749 uint8_t const fVpid = pGuestFeatures->fVmxVpid;
1750 pGuestVmxMsrs->u64EptVpidCaps = RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVVPID, fVpid)
1751 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVVPID_SINGLE_CTX, fVpid & 1)
1752 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVVPID_ALL_CTX, fVpid & 1)
1753 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVVPID_SINGLE_CTX_RETAIN_GLOBALS, fVpid & 1);
1754 }
1755
1756 /* VM Functions. */
1757 if (pGuestFeatures->fVmxVmFunc)
1758 pGuestVmxMsrs->u64VmFunc = RT_BF_MAKE(VMX_BF_VMFUNC_EPTP_SWITCHING, 1);
1759}
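
/*
 * A minimal sketch, assuming the usual IPRT RT_LO_U32/RT_HI_U32 helpers, of how a consumer
 * would interpret the control MSRs built above: the low dword holds the allowed-0 settings
 * (bits that must be 1) and the high dword holds the allowed-1 settings (bits that may be 1),
 * matching the RT_MAKE_U64(fAllowed0, fAllowed1) packing used in cpumR3InitVmxGuestMsrs.
 */
#if 0 /* illustration only */
static bool cpumR3DemoIsVmxCtlsValueValid(uint64_t uCtlsMsr, uint32_t fCtls)
{
    uint32_t const fAllowed0 = RT_LO_U32(uCtlsMsr);  /* must-be-one bits */
    uint32_t const fAllowed1 = RT_HI_U32(uCtlsMsr);  /* may-be-one bits */
    return (fCtls & fAllowed0) == fAllowed0
        && (fCtls & ~fAllowed1) == 0;
}
#endif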
1760
1761
1762/**
1763 * Checks whether the given guest CPU VMX features are compatible with the provided
1764 * base features.
1765 *
1766 * @returns @c true if compatible, @c false otherwise.
1767 * @param pVM The cross context VM structure.
1768 * @param pBase The base VMX CPU features.
1769 * @param pGst The guest VMX CPU features.
1770 *
1771 * @remarks Only VMX feature bits are examined.
1772 */
1773static bool cpumR3AreVmxCpuFeaturesCompatible(PVM pVM, PCCPUMFEATURES pBase, PCCPUMFEATURES pGst)
1774{
1775 if (cpumR3IsHwAssistNstGstExecAllowed(pVM))
1776 {
1777 uint64_t const fBase = ((uint64_t)pBase->fVmxInsOutInfo << 0) | ((uint64_t)pBase->fVmxExtIntExit << 1)
1778 | ((uint64_t)pBase->fVmxNmiExit << 2) | ((uint64_t)pBase->fVmxVirtNmi << 3)
1779 | ((uint64_t)pBase->fVmxPreemptTimer << 4) | ((uint64_t)pBase->fVmxPostedInt << 5)
1780 | ((uint64_t)pBase->fVmxIntWindowExit << 6) | ((uint64_t)pBase->fVmxTscOffsetting << 7)
1781 | ((uint64_t)pBase->fVmxHltExit << 8) | ((uint64_t)pBase->fVmxInvlpgExit << 9)
1782 | ((uint64_t)pBase->fVmxMwaitExit << 10) | ((uint64_t)pBase->fVmxRdpmcExit << 11)
1783 | ((uint64_t)pBase->fVmxRdtscExit << 12) | ((uint64_t)pBase->fVmxCr3LoadExit << 13)
1784 | ((uint64_t)pBase->fVmxCr3StoreExit << 14) | ((uint64_t)pBase->fVmxCr8LoadExit << 15)
1785 | ((uint64_t)pBase->fVmxCr8StoreExit << 16) | ((uint64_t)pBase->fVmxUseTprShadow << 17)
1786 | ((uint64_t)pBase->fVmxNmiWindowExit << 18) | ((uint64_t)pBase->fVmxMovDRxExit << 19)
1787 | ((uint64_t)pBase->fVmxUncondIoExit << 20) | ((uint64_t)pBase->fVmxUseIoBitmaps << 21)
1788 | ((uint64_t)pBase->fVmxMonitorTrapFlag << 22) | ((uint64_t)pBase->fVmxUseMsrBitmaps << 23)
1789 | ((uint64_t)pBase->fVmxMonitorExit << 24) | ((uint64_t)pBase->fVmxPauseExit << 25)
1790 | ((uint64_t)pBase->fVmxSecondaryExecCtls << 26) | ((uint64_t)pBase->fVmxVirtApicAccess << 27)
1791 | ((uint64_t)pBase->fVmxEpt << 28) | ((uint64_t)pBase->fVmxDescTableExit << 29)
1792 | ((uint64_t)pBase->fVmxRdtscp << 30) | ((uint64_t)pBase->fVmxVirtX2ApicMode << 31)
1793 | ((uint64_t)pBase->fVmxVpid << 32) | ((uint64_t)pBase->fVmxWbinvdExit << 33)
1794 | ((uint64_t)pBase->fVmxUnrestrictedGuest << 34) | ((uint64_t)pBase->fVmxApicRegVirt << 35)
1795 | ((uint64_t)pBase->fVmxVirtIntDelivery << 36) | ((uint64_t)pBase->fVmxPauseLoopExit << 37)
1796 | ((uint64_t)pBase->fVmxRdrandExit << 38) | ((uint64_t)pBase->fVmxInvpcid << 39)
1797 | ((uint64_t)pBase->fVmxVmFunc << 40) | ((uint64_t)pBase->fVmxVmcsShadowing << 41)
1798 | ((uint64_t)pBase->fVmxRdseedExit << 42) | ((uint64_t)pBase->fVmxPml << 43)
1799 | ((uint64_t)pBase->fVmxEptXcptVe << 44) | ((uint64_t)pBase->fVmxXsavesXrstors << 45)
1800 | ((uint64_t)pBase->fVmxUseTscScaling << 46) | ((uint64_t)pBase->fVmxEntryLoadDebugCtls << 47)
1801 | ((uint64_t)pBase->fVmxIa32eModeGuest << 48) | ((uint64_t)pBase->fVmxEntryLoadEferMsr << 49)
1802 | ((uint64_t)pBase->fVmxEntryLoadPatMsr << 50) | ((uint64_t)pBase->fVmxExitSaveDebugCtls << 51)
1803 | ((uint64_t)pBase->fVmxHostAddrSpaceSize << 52) | ((uint64_t)pBase->fVmxExitAckExtInt << 53)
1804 | ((uint64_t)pBase->fVmxExitSavePatMsr << 54) | ((uint64_t)pBase->fVmxExitLoadPatMsr << 55)
1805 | ((uint64_t)pBase->fVmxExitSaveEferMsr << 56) | ((uint64_t)pBase->fVmxExitLoadEferMsr << 57)
1806 | ((uint64_t)pBase->fVmxSavePreemptTimer << 58) | ((uint64_t)pBase->fVmxExitSaveEferLma << 59)
1807 | ((uint64_t)pBase->fVmxIntelPt << 60) | ((uint64_t)pBase->fVmxVmwriteAll << 61)
1808 | ((uint64_t)pBase->fVmxEntryInjectSoftInt << 62);
1809
1810 uint64_t const fGst = ((uint64_t)pGst->fVmxInsOutInfo << 0) | ((uint64_t)pGst->fVmxExtIntExit << 1)
1811 | ((uint64_t)pGst->fVmxNmiExit << 2) | ((uint64_t)pGst->fVmxVirtNmi << 3)
1812 | ((uint64_t)pGst->fVmxPreemptTimer << 4) | ((uint64_t)pGst->fVmxPostedInt << 5)
1813 | ((uint64_t)pGst->fVmxIntWindowExit << 6) | ((uint64_t)pGst->fVmxTscOffsetting << 7)
1814 | ((uint64_t)pGst->fVmxHltExit << 8) | ((uint64_t)pGst->fVmxInvlpgExit << 9)
1815 | ((uint64_t)pGst->fVmxMwaitExit << 10) | ((uint64_t)pGst->fVmxRdpmcExit << 11)
1816 | ((uint64_t)pGst->fVmxRdtscExit << 12) | ((uint64_t)pGst->fVmxCr3LoadExit << 13)
1817 | ((uint64_t)pGst->fVmxCr3StoreExit << 14) | ((uint64_t)pGst->fVmxCr8LoadExit << 15)
1818 | ((uint64_t)pGst->fVmxCr8StoreExit << 16) | ((uint64_t)pGst->fVmxUseTprShadow << 17)
1819 | ((uint64_t)pGst->fVmxNmiWindowExit << 18) | ((uint64_t)pGst->fVmxMovDRxExit << 19)
1820 | ((uint64_t)pGst->fVmxUncondIoExit << 20) | ((uint64_t)pGst->fVmxUseIoBitmaps << 21)
1821 | ((uint64_t)pGst->fVmxMonitorTrapFlag << 22) | ((uint64_t)pGst->fVmxUseMsrBitmaps << 23)
1822 | ((uint64_t)pGst->fVmxMonitorExit << 24) | ((uint64_t)pGst->fVmxPauseExit << 25)
1823 | ((uint64_t)pGst->fVmxSecondaryExecCtls << 26) | ((uint64_t)pGst->fVmxVirtApicAccess << 27)
1824 | ((uint64_t)pGst->fVmxEpt << 28) | ((uint64_t)pGst->fVmxDescTableExit << 29)
1825 | ((uint64_t)pGst->fVmxRdtscp << 30) | ((uint64_t)pGst->fVmxVirtX2ApicMode << 31)
1826 | ((uint64_t)pGst->fVmxVpid << 32) | ((uint64_t)pGst->fVmxWbinvdExit << 33)
1827 | ((uint64_t)pGst->fVmxUnrestrictedGuest << 34) | ((uint64_t)pGst->fVmxApicRegVirt << 35)
1828 | ((uint64_t)pGst->fVmxVirtIntDelivery << 36) | ((uint64_t)pGst->fVmxPauseLoopExit << 37)
1829 | ((uint64_t)pGst->fVmxRdrandExit << 38) | ((uint64_t)pGst->fVmxInvpcid << 39)
1830 | ((uint64_t)pGst->fVmxVmFunc << 40) | ((uint64_t)pGst->fVmxVmcsShadowing << 41)
1831 | ((uint64_t)pGst->fVmxRdseedExit << 42) | ((uint64_t)pGst->fVmxPml << 43)
1832 | ((uint64_t)pGst->fVmxEptXcptVe << 44) | ((uint64_t)pGst->fVmxXsavesXrstors << 45)
1833 | ((uint64_t)pGst->fVmxUseTscScaling << 46) | ((uint64_t)pGst->fVmxEntryLoadDebugCtls << 47)
1834 | ((uint64_t)pGst->fVmxIa32eModeGuest << 48) | ((uint64_t)pGst->fVmxEntryLoadEferMsr << 49)
1835 | ((uint64_t)pGst->fVmxEntryLoadPatMsr << 50) | ((uint64_t)pGst->fVmxExitSaveDebugCtls << 51)
1836 | ((uint64_t)pGst->fVmxHostAddrSpaceSize << 52) | ((uint64_t)pGst->fVmxExitAckExtInt << 53)
1837 | ((uint64_t)pGst->fVmxExitSavePatMsr << 54) | ((uint64_t)pGst->fVmxExitLoadPatMsr << 55)
1838 | ((uint64_t)pGst->fVmxExitSaveEferMsr << 56) | ((uint64_t)pGst->fVmxExitLoadEferMsr << 57)
1839 | ((uint64_t)pGst->fVmxSavePreemptTimer << 58) | ((uint64_t)pGst->fVmxExitSaveEferLma << 59)
1840 | ((uint64_t)pGst->fVmxIntelPt << 60) | ((uint64_t)pGst->fVmxVmwriteAll << 61)
1841 | ((uint64_t)pGst->fVmxEntryInjectSoftInt << 62);
1842
1843 if ((fBase | fGst) != fBase)
1844 {
1845 uint64_t const fDiff = fBase ^ fGst;
1846 LogRel(("CPUM: VMX features now exposed to the guest are incompatible with those from the saved state. fBase=%#RX64 fGst=%#RX64 fDiff=%#RX64\n",
1847 fBase, fGst, fDiff));
1848 return false;
1849 }
1850 return true;
1851 }
1852 return true;
1853}
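
/*
 * The 64-bit masks above make the compatibility rule a plain subset test: OR-ing the guest
 * bits into the base bits must not add anything new. For example, with fBase=0x6 a guest
 * mask of 0x4 satisfies (fBase | fGst) == fBase, whereas 0x8 would flip the test and the
 * saved-state load is refused.
 */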
1854
1855
1856/**
1857 * Initializes VMX guest features and MSRs.
1858 *
1859 * @param pVM The cross context VM structure.
1860 * @param pHostVmxMsrs The host VMX MSRs. Pass NULL when fully emulating VMX
1861 * and no hardware-assisted nested-guest execution is
1862 * possible for this VM.
1863 * @param pGuestVmxMsrs Where to store the initialized guest VMX MSRs.
1864 */
1865void cpumR3InitVmxGuestFeaturesAndMsrs(PVM pVM, PCVMXMSRS pHostVmxMsrs, PVMXMSRS pGuestVmxMsrs)
1866{
1867 Assert(pVM);
1868 Assert(pGuestVmxMsrs);
1869
1870 /*
1871 * Initialize the set of VMX features we emulate.
1872 *
1873 * Note! Some bits might be reported as 1 always if they fall under the
1874 * default1 class bits (e.g. fVmxEntryLoadDebugCtls), see @bugref{9180#c5}.
1875 */
1876 CPUMFEATURES EmuFeat;
1877 RT_ZERO(EmuFeat);
1878 EmuFeat.fVmx = 1;
1879 EmuFeat.fVmxInsOutInfo = 1;
1880 EmuFeat.fVmxExtIntExit = 1;
1881 EmuFeat.fVmxNmiExit = 1;
1882 EmuFeat.fVmxVirtNmi = 1;
1883 EmuFeat.fVmxPreemptTimer = 0; /* Currently disabled on purpose, see @bugref{9180#c108}. */
1884 EmuFeat.fVmxPostedInt = 0;
1885 EmuFeat.fVmxIntWindowExit = 1;
1886 EmuFeat.fVmxTscOffsetting = 1;
1887 EmuFeat.fVmxHltExit = 1;
1888 EmuFeat.fVmxInvlpgExit = 1;
1889 EmuFeat.fVmxMwaitExit = 1;
1890 EmuFeat.fVmxRdpmcExit = 1;
1891 EmuFeat.fVmxRdtscExit = 1;
1892 EmuFeat.fVmxCr3LoadExit = 1;
1893 EmuFeat.fVmxCr3StoreExit = 1;
1894 EmuFeat.fVmxCr8LoadExit = 1;
1895 EmuFeat.fVmxCr8StoreExit = 1;
1896 EmuFeat.fVmxUseTprShadow = 1;
1897 EmuFeat.fVmxNmiWindowExit = 0;
1898 EmuFeat.fVmxMovDRxExit = 1;
1899 EmuFeat.fVmxUncondIoExit = 1;
1900 EmuFeat.fVmxUseIoBitmaps = 1;
1901 EmuFeat.fVmxMonitorTrapFlag = 0;
1902 EmuFeat.fVmxUseMsrBitmaps = 1;
1903 EmuFeat.fVmxMonitorExit = 1;
1904 EmuFeat.fVmxPauseExit = 1;
1905 EmuFeat.fVmxSecondaryExecCtls = 1;
1906 EmuFeat.fVmxVirtApicAccess = 1;
1907 EmuFeat.fVmxEpt = 0; /* Cannot be disabled if unrestricted guest is enabled. */
1908 EmuFeat.fVmxDescTableExit = 1;
1909 EmuFeat.fVmxRdtscp = 1;
1910 EmuFeat.fVmxVirtX2ApicMode = 0;
1911 EmuFeat.fVmxVpid = 0; /** @todo NSTVMX: enable this. */
1912 EmuFeat.fVmxWbinvdExit = 1;
1913 EmuFeat.fVmxUnrestrictedGuest = 0;
1914 EmuFeat.fVmxApicRegVirt = 0;
1915 EmuFeat.fVmxVirtIntDelivery = 0;
1916 EmuFeat.fVmxPauseLoopExit = 0;
1917 EmuFeat.fVmxRdrandExit = 0;
1918 EmuFeat.fVmxInvpcid = 1;
1919 EmuFeat.fVmxVmFunc = 0;
1920 EmuFeat.fVmxVmcsShadowing = 0;
1921 EmuFeat.fVmxRdseedExit = 0;
1922 EmuFeat.fVmxPml = 0;
1923 EmuFeat.fVmxEptXcptVe = 0;
1924 EmuFeat.fVmxXsavesXrstors = 0;
1925 EmuFeat.fVmxUseTscScaling = 0;
1926 EmuFeat.fVmxEntryLoadDebugCtls = 1;
1927 EmuFeat.fVmxIa32eModeGuest = 1;
1928 EmuFeat.fVmxEntryLoadEferMsr = 1;
1929 EmuFeat.fVmxEntryLoadPatMsr = 0;
1930 EmuFeat.fVmxExitSaveDebugCtls = 1;
1931 EmuFeat.fVmxHostAddrSpaceSize = 1;
1932 EmuFeat.fVmxExitAckExtInt = 1;
1933 EmuFeat.fVmxExitSavePatMsr = 0;
1934 EmuFeat.fVmxExitLoadPatMsr = 0;
1935 EmuFeat.fVmxExitSaveEferMsr = 1;
1936 EmuFeat.fVmxExitLoadEferMsr = 1;
1937 EmuFeat.fVmxSavePreemptTimer = 0; /* Cannot be enabled if VMX-preemption timer is disabled. */
1938 EmuFeat.fVmxExitSaveEferLma = 1; /* Cannot be disabled if unrestricted guest is enabled. */
1939 EmuFeat.fVmxIntelPt = 0;
1940 EmuFeat.fVmxVmwriteAll = 0; /** @todo NSTVMX: enable this when nested VMCS shadowing is enabled. */
1941 EmuFeat.fVmxEntryInjectSoftInt = 1;
1942
1943 /*
1944 * Merge guest features.
1945 *
1946 * When hardware-assisted VMX may be used, any feature we emulate must also be supported
1947 * by the hardware, hence we merge our emulated features with the host features below.
1948 */
1949 PCCPUMFEATURES pBaseFeat = cpumR3IsHwAssistNstGstExecAllowed(pVM) ? &pVM->cpum.s.HostFeatures : &EmuFeat;
1950 PCPUMFEATURES pGuestFeat = &pVM->cpum.s.GuestFeatures;
1951 Assert(pBaseFeat->fVmx);
1952 pGuestFeat->fVmxInsOutInfo = (pBaseFeat->fVmxInsOutInfo & EmuFeat.fVmxInsOutInfo );
1953 pGuestFeat->fVmxExtIntExit = (pBaseFeat->fVmxExtIntExit & EmuFeat.fVmxExtIntExit );
1954 pGuestFeat->fVmxNmiExit = (pBaseFeat->fVmxNmiExit & EmuFeat.fVmxNmiExit );
1955 pGuestFeat->fVmxVirtNmi = (pBaseFeat->fVmxVirtNmi & EmuFeat.fVmxVirtNmi );
1956 pGuestFeat->fVmxPreemptTimer = (pBaseFeat->fVmxPreemptTimer & EmuFeat.fVmxPreemptTimer );
1957 pGuestFeat->fVmxPostedInt = (pBaseFeat->fVmxPostedInt & EmuFeat.fVmxPostedInt );
1958 pGuestFeat->fVmxIntWindowExit = (pBaseFeat->fVmxIntWindowExit & EmuFeat.fVmxIntWindowExit );
1959 pGuestFeat->fVmxTscOffsetting = (pBaseFeat->fVmxTscOffsetting & EmuFeat.fVmxTscOffsetting );
1960 pGuestFeat->fVmxHltExit = (pBaseFeat->fVmxHltExit & EmuFeat.fVmxHltExit );
1961 pGuestFeat->fVmxInvlpgExit = (pBaseFeat->fVmxInvlpgExit & EmuFeat.fVmxInvlpgExit );
1962 pGuestFeat->fVmxMwaitExit = (pBaseFeat->fVmxMwaitExit & EmuFeat.fVmxMwaitExit );
1963 pGuestFeat->fVmxRdpmcExit = (pBaseFeat->fVmxRdpmcExit & EmuFeat.fVmxRdpmcExit );
1964 pGuestFeat->fVmxRdtscExit = (pBaseFeat->fVmxRdtscExit & EmuFeat.fVmxRdtscExit );
1965 pGuestFeat->fVmxCr3LoadExit = (pBaseFeat->fVmxCr3LoadExit & EmuFeat.fVmxCr3LoadExit );
1966 pGuestFeat->fVmxCr3StoreExit = (pBaseFeat->fVmxCr3StoreExit & EmuFeat.fVmxCr3StoreExit );
1967 pGuestFeat->fVmxCr8LoadExit = (pBaseFeat->fVmxCr8LoadExit & EmuFeat.fVmxCr8LoadExit );
1968 pGuestFeat->fVmxCr8StoreExit = (pBaseFeat->fVmxCr8StoreExit & EmuFeat.fVmxCr8StoreExit );
1969 pGuestFeat->fVmxUseTprShadow = (pBaseFeat->fVmxUseTprShadow & EmuFeat.fVmxUseTprShadow );
1970 pGuestFeat->fVmxNmiWindowExit = (pBaseFeat->fVmxNmiWindowExit & EmuFeat.fVmxNmiWindowExit );
1971 pGuestFeat->fVmxMovDRxExit = (pBaseFeat->fVmxMovDRxExit & EmuFeat.fVmxMovDRxExit );
1972 pGuestFeat->fVmxUncondIoExit = (pBaseFeat->fVmxUncondIoExit & EmuFeat.fVmxUncondIoExit );
1973 pGuestFeat->fVmxUseIoBitmaps = (pBaseFeat->fVmxUseIoBitmaps & EmuFeat.fVmxUseIoBitmaps );
1974 pGuestFeat->fVmxMonitorTrapFlag = (pBaseFeat->fVmxMonitorTrapFlag & EmuFeat.fVmxMonitorTrapFlag );
1975 pGuestFeat->fVmxUseMsrBitmaps = (pBaseFeat->fVmxUseMsrBitmaps & EmuFeat.fVmxUseMsrBitmaps );
1976 pGuestFeat->fVmxMonitorExit = (pBaseFeat->fVmxMonitorExit & EmuFeat.fVmxMonitorExit );
1977 pGuestFeat->fVmxPauseExit = (pBaseFeat->fVmxPauseExit & EmuFeat.fVmxPauseExit );
1978 pGuestFeat->fVmxSecondaryExecCtls = (pBaseFeat->fVmxSecondaryExecCtls & EmuFeat.fVmxSecondaryExecCtls );
1979 pGuestFeat->fVmxVirtApicAccess = (pBaseFeat->fVmxVirtApicAccess & EmuFeat.fVmxVirtApicAccess );
1980 pGuestFeat->fVmxEpt = (pBaseFeat->fVmxEpt & EmuFeat.fVmxEpt );
1981 pGuestFeat->fVmxDescTableExit = (pBaseFeat->fVmxDescTableExit & EmuFeat.fVmxDescTableExit );
1982 pGuestFeat->fVmxRdtscp = (pBaseFeat->fVmxRdtscp & EmuFeat.fVmxRdtscp );
1983 pGuestFeat->fVmxVirtX2ApicMode = (pBaseFeat->fVmxVirtX2ApicMode & EmuFeat.fVmxVirtX2ApicMode );
1984 pGuestFeat->fVmxVpid = (pBaseFeat->fVmxVpid & EmuFeat.fVmxVpid );
1985 pGuestFeat->fVmxWbinvdExit = (pBaseFeat->fVmxWbinvdExit & EmuFeat.fVmxWbinvdExit );
1986 pGuestFeat->fVmxUnrestrictedGuest = (pBaseFeat->fVmxUnrestrictedGuest & EmuFeat.fVmxUnrestrictedGuest );
1987 pGuestFeat->fVmxApicRegVirt = (pBaseFeat->fVmxApicRegVirt & EmuFeat.fVmxApicRegVirt );
1988 pGuestFeat->fVmxVirtIntDelivery = (pBaseFeat->fVmxVirtIntDelivery & EmuFeat.fVmxVirtIntDelivery );
1989 pGuestFeat->fVmxPauseLoopExit = (pBaseFeat->fVmxPauseLoopExit & EmuFeat.fVmxPauseLoopExit );
1990 pGuestFeat->fVmxRdrandExit = (pBaseFeat->fVmxRdrandExit & EmuFeat.fVmxRdrandExit );
1991 pGuestFeat->fVmxInvpcid = (pBaseFeat->fVmxInvpcid & EmuFeat.fVmxInvpcid );
1992 pGuestFeat->fVmxVmFunc = (pBaseFeat->fVmxVmFunc & EmuFeat.fVmxVmFunc );
1993 pGuestFeat->fVmxVmcsShadowing = (pBaseFeat->fVmxVmcsShadowing & EmuFeat.fVmxVmcsShadowing );
1994 pGuestFeat->fVmxRdseedExit = (pBaseFeat->fVmxRdseedExit & EmuFeat.fVmxRdseedExit );
1995 pGuestFeat->fVmxPml = (pBaseFeat->fVmxPml & EmuFeat.fVmxPml );
1996 pGuestFeat->fVmxEptXcptVe = (pBaseFeat->fVmxEptXcptVe & EmuFeat.fVmxEptXcptVe );
1997 pGuestFeat->fVmxXsavesXrstors = (pBaseFeat->fVmxXsavesXrstors & EmuFeat.fVmxXsavesXrstors );
1998 pGuestFeat->fVmxUseTscScaling = (pBaseFeat->fVmxUseTscScaling & EmuFeat.fVmxUseTscScaling );
1999 pGuestFeat->fVmxEntryLoadDebugCtls = (pBaseFeat->fVmxEntryLoadDebugCtls & EmuFeat.fVmxEntryLoadDebugCtls );
2000 pGuestFeat->fVmxIa32eModeGuest = (pBaseFeat->fVmxIa32eModeGuest & EmuFeat.fVmxIa32eModeGuest );
2001 pGuestFeat->fVmxEntryLoadEferMsr = (pBaseFeat->fVmxEntryLoadEferMsr & EmuFeat.fVmxEntryLoadEferMsr );
2002 pGuestFeat->fVmxEntryLoadPatMsr = (pBaseFeat->fVmxEntryLoadPatMsr & EmuFeat.fVmxEntryLoadPatMsr );
2003 pGuestFeat->fVmxExitSaveDebugCtls = (pBaseFeat->fVmxExitSaveDebugCtls & EmuFeat.fVmxExitSaveDebugCtls );
2004 pGuestFeat->fVmxHostAddrSpaceSize = (pBaseFeat->fVmxHostAddrSpaceSize & EmuFeat.fVmxHostAddrSpaceSize );
2005 pGuestFeat->fVmxExitAckExtInt = (pBaseFeat->fVmxExitAckExtInt & EmuFeat.fVmxExitAckExtInt );
2006 pGuestFeat->fVmxExitSavePatMsr = (pBaseFeat->fVmxExitSavePatMsr & EmuFeat.fVmxExitSavePatMsr );
2007 pGuestFeat->fVmxExitLoadPatMsr = (pBaseFeat->fVmxExitLoadPatMsr & EmuFeat.fVmxExitLoadPatMsr );
2008 pGuestFeat->fVmxExitSaveEferMsr = (pBaseFeat->fVmxExitSaveEferMsr & EmuFeat.fVmxExitSaveEferMsr );
2009 pGuestFeat->fVmxExitLoadEferMsr = (pBaseFeat->fVmxExitLoadEferMsr & EmuFeat.fVmxExitLoadEferMsr );
2010 pGuestFeat->fVmxSavePreemptTimer = (pBaseFeat->fVmxSavePreemptTimer & EmuFeat.fVmxSavePreemptTimer );
2011 pGuestFeat->fVmxExitSaveEferLma = (pBaseFeat->fVmxExitSaveEferLma & EmuFeat.fVmxExitSaveEferLma );
2012 pGuestFeat->fVmxIntelPt = (pBaseFeat->fVmxIntelPt & EmuFeat.fVmxIntelPt );
2013 pGuestFeat->fVmxVmwriteAll = (pBaseFeat->fVmxVmwriteAll & EmuFeat.fVmxVmwriteAll );
2014 pGuestFeat->fVmxEntryInjectSoftInt = (pBaseFeat->fVmxEntryInjectSoftInt & EmuFeat.fVmxEntryInjectSoftInt );
2015
2016 if ( !pVM->cpum.s.fNestedVmxPreemptTimer
2017 || HMIsSubjectToVmxPreemptTimerErratum())
2018 {
2019 LogRel(("CPUM: Warning! VMX-preemption timer not exposed to guest due to forced CFGM setting or CPU erratum.\n"));
2020 pGuestFeat->fVmxPreemptTimer = 0;
2021 pGuestFeat->fVmxSavePreemptTimer = 0;
2022 }
2023
2024 /* Paranoia. */
2025 if (!pGuestFeat->fVmxSecondaryExecCtls)
2026 {
2027 Assert(!pGuestFeat->fVmxVirtApicAccess);
2028 Assert(!pGuestFeat->fVmxEpt);
2029 Assert(!pGuestFeat->fVmxDescTableExit);
2030 Assert(!pGuestFeat->fVmxRdtscp);
2031 Assert(!pGuestFeat->fVmxVirtX2ApicMode);
2032 Assert(!pGuestFeat->fVmxVpid);
2033 Assert(!pGuestFeat->fVmxWbinvdExit);
2034 Assert(!pGuestFeat->fVmxUnrestrictedGuest);
2035 Assert(!pGuestFeat->fVmxApicRegVirt);
2036 Assert(!pGuestFeat->fVmxVirtIntDelivery);
2037 Assert(!pGuestFeat->fVmxPauseLoopExit);
2038 Assert(!pGuestFeat->fVmxRdrandExit);
2039 Assert(!pGuestFeat->fVmxInvpcid);
2040 Assert(!pGuestFeat->fVmxVmFunc);
2041 Assert(!pGuestFeat->fVmxVmcsShadowing);
2042 Assert(!pGuestFeat->fVmxRdseedExit);
2043 Assert(!pGuestFeat->fVmxPml);
2044 Assert(!pGuestFeat->fVmxEptXcptVe);
2045 Assert(!pGuestFeat->fVmxXsavesXrstors);
2046 Assert(!pGuestFeat->fVmxUseTscScaling);
2047 }
2048 if (pGuestFeat->fVmxUnrestrictedGuest)
2049 {
2050 /* See footnote in Intel spec. 27.2 "Recording VM-Exit Information And Updating VM-entry Control Fields". */
2051 Assert(pGuestFeat->fVmxExitSaveEferLma);
2052 }
2053
2054 /*
2055 * Finally initialize the VMX guest MSRs.
2056 */
2057 cpumR3InitVmxGuestMsrs(pVM, pHostVmxMsrs, pGuestFeat, pGuestVmxMsrs);
2058}
2059
2060
2061/**
2062 * Gets the host hardware-virtualization MSRs.
2063 *
2064 * @returns VBox status code.
2065 * @param pMsrs Where to store the MSRs.
2066 */
2067static int cpumR3GetHostHwvirtMsrs(PCPUMMSRS pMsrs)
2068{
2069 Assert(pMsrs);
2070
2071 uint32_t fCaps = 0;
2072 int rc = SUPR3QueryVTCaps(&fCaps);
2073 if (RT_SUCCESS(rc))
2074 {
2075 if (fCaps & (SUPVTCAPS_VT_X | SUPVTCAPS_AMD_V))
2076 {
2077 SUPHWVIRTMSRS HwvirtMsrs;
2078 rc = SUPR3GetHwvirtMsrs(&HwvirtMsrs, false /* fForceRequery */);
2079 if (RT_SUCCESS(rc))
2080 {
2081 if (fCaps & SUPVTCAPS_VT_X)
2082 HMGetVmxMsrsFromHwvirtMsrs(&HwvirtMsrs, &pMsrs->hwvirt.vmx);
2083 else
2084 HMGetSvmMsrsFromHwvirtMsrs(&HwvirtMsrs, &pMsrs->hwvirt.svm);
2085 return VINF_SUCCESS;
2086 }
2087
2088 LogRel(("CPUM: Querying hardware-virtualization MSRs failed. rc=%Rrc\n", rc));
2089 return rc;
2090 }
2091 else
2092 {
2093 LogRel(("CPUM: Querying hardware-virtualization capability succeeded but did not find VT-x or AMD-V\n"));
2094 return VERR_INTERNAL_ERROR_5;
2095 }
2096 }
2097 else
2098 LogRel(("CPUM: No hardware-virtualization capability detected\n"));
2099
2100 return VINF_SUCCESS;
2101}
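
/*
 * Note that the no-capability path deliberately returns VINF_SUCCESS and leaves the caller's
 * zero-initialized MSR block untouched, so a host without VT-x/AMD-V still completes CPUM
 * initialization with an all-zero set of host hardware-virtualization MSRs.
 */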
2102
2103
2104/**
2105 * Callback that fires when the nested VMX-preemption timer expires.
2106 *
2107 * @param pVM The cross context VM structure.
2108 * @param pTimer Pointer to timer.
2109 * @param pvUser Opaque pointer to the virtual-CPU.
2110 */
2111static DECLCALLBACK(void) cpumR3VmxPreemptTimerCallback(PVM pVM, PTMTIMER pTimer, void *pvUser)
2112{
2113 RT_NOREF2(pVM, pTimer);
2114 Assert(pvUser);
2115
2116 PVMCPU pVCpu = (PVMCPUR3)pvUser;
2117 VMCPU_FF_SET(pVCpu, VMCPU_FF_VMX_PREEMPT_TIMER);
2118}
2119
2120
2121/**
2122 * Initializes the CPUM.
2123 *
2124 * @returns VBox status code.
2125 * @param pVM The cross context VM structure.
2126 */
2127VMMR3DECL(int) CPUMR3Init(PVM pVM)
2128{
2129 LogFlow(("CPUMR3Init\n"));
2130
2131 /*
2132 * Assert alignment, sizes and tables.
2133 */
2134 AssertCompileMemberAlignment(VM, cpum.s, 32);
2135 AssertCompile(sizeof(pVM->cpum.s) <= sizeof(pVM->cpum.padding));
2136 AssertCompileSizeAlignment(CPUMCTX, 64);
2137 AssertCompileSizeAlignment(CPUMCTXMSRS, 64);
2138 AssertCompileSizeAlignment(CPUMHOSTCTX, 64);
2139 AssertCompileMemberAlignment(VM, cpum, 64);
2140 AssertCompileMemberAlignment(VMCPU, cpum.s, 64);
2141#ifdef VBOX_STRICT
2142 int rc2 = cpumR3MsrStrictInitChecks();
2143 AssertRCReturn(rc2, rc2);
2144#endif
2145
2146 /*
2147 * Gather info about the host CPU.
2148 */
2149 if (!ASMHasCpuId())
2150 {
2151 LogRel(("The CPU doesn't support CPUID!\n"));
2152 return VERR_UNSUPPORTED_CPU;
2153 }
2154
2155 pVM->cpum.s.fHostMxCsrMask = CPUMR3DeterminHostMxCsrMask();
2156
2157 CPUMMSRS HostMsrs;
2158 RT_ZERO(HostMsrs);
2159 int rc = cpumR3GetHostHwvirtMsrs(&HostMsrs);
2160 AssertLogRelRCReturn(rc, rc);
2161
2162 PCPUMCPUIDLEAF paLeaves;
2163 uint32_t cLeaves;
2164 rc = CPUMR3CpuIdCollectLeaves(&paLeaves, &cLeaves);
2165 AssertLogRelRCReturn(rc, rc);
2166
2167 rc = cpumR3CpuIdExplodeFeatures(paLeaves, cLeaves, &HostMsrs, &pVM->cpum.s.HostFeatures);
2168 RTMemFree(paLeaves);
2169 AssertLogRelRCReturn(rc, rc);
2170 pVM->cpum.s.GuestFeatures.enmCpuVendor = pVM->cpum.s.HostFeatures.enmCpuVendor;
2171
2172 /*
2173 * Check that the CPU supports the minimum features we require.
2174 */
2175 if (!pVM->cpum.s.HostFeatures.fFxSaveRstor)
2176 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support the FXSAVE/FXRSTOR instruction.");
2177 if (!pVM->cpum.s.HostFeatures.fMmx)
2178 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support MMX.");
2179 if (!pVM->cpum.s.HostFeatures.fTsc)
2180 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support RDTSC.");
2181
2182 /*
2183 * Set up the CR4 AND and OR masks used in the raw-mode switcher.
2184 */
2185 pVM->cpum.s.CR4.AndMask = X86_CR4_OSXMMEEXCPT | X86_CR4_PVI | X86_CR4_VME;
2186 pVM->cpum.s.CR4.OrMask = X86_CR4_OSFXSR;
2187
2188 /*
2189 * Figure out which XSAVE/XRSTOR features are available on the host.
2190 */
2191 uint64_t fXcr0Host = 0;
2192 uint64_t fXStateHostMask = 0;
2193 if ( pVM->cpum.s.HostFeatures.fXSaveRstor
2194 && pVM->cpum.s.HostFeatures.fOpSysXSaveRstor)
2195 {
2196 fXStateHostMask = fXcr0Host = ASMGetXcr0();
2197 fXStateHostMask &= XSAVE_C_X87 | XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI;
2198 AssertLogRelMsgStmt((fXStateHostMask & (XSAVE_C_X87 | XSAVE_C_SSE)) == (XSAVE_C_X87 | XSAVE_C_SSE),
2199 ("%#llx\n", fXStateHostMask), fXStateHostMask = 0);
2200 }
2201 pVM->cpum.s.fXStateHostMask = fXStateHostMask;
2202 LogRel(("CPUM: fXStateHostMask=%#llx; initial: %#llx; host XCR0=%#llx\n",
2203 pVM->cpum.s.fXStateHostMask, fXStateHostMask, fXcr0Host));
2204
2205 /*
2206 * Allocate memory for the extended CPU state and initialize the host XSAVE/XRSTOR mask.
2207 */
2208 uint32_t cbMaxXState = pVM->cpum.s.HostFeatures.cbMaxExtendedState;
2209 cbMaxXState = RT_ALIGN(cbMaxXState, 128);
2210 AssertLogRelReturn(cbMaxXState >= sizeof(X86FXSTATE) && cbMaxXState <= _8K, VERR_CPUM_IPE_2);
2211
2212 uint8_t *pbXStates;
2213 rc = MMR3HyperAllocOnceNoRelEx(pVM, cbMaxXState * 2 * pVM->cCpus, PAGE_SIZE, MM_TAG_CPUM_CTX,
2214 MMHYPER_AONR_FLAGS_KERNEL_MAPPING, (void **)&pbXStates);
2215 AssertLogRelRCReturn(rc, rc);
2216
2217 for (VMCPUID i = 0; i < pVM->cCpus; i++)
2218 {
2219 PVMCPU pVCpu = pVM->apCpusR3[i];
2220
2221 pVCpu->cpum.s.Guest.pXStateR3 = (PX86XSAVEAREA)pbXStates;
2222 pVCpu->cpum.s.Guest.pXStateR0 = MMHyperR3ToR0(pVM, pbXStates);
2223 pbXStates += cbMaxXState;
2224
2225 pVCpu->cpum.s.Host.pXStateR3 = (PX86XSAVEAREA)pbXStates;
2226 pVCpu->cpum.s.Host.pXStateR0 = MMHyperR3ToR0(pVM, pbXStates);
2227 pbXStates += cbMaxXState;
2228
2229 pVCpu->cpum.s.Host.fXStateMask = fXStateHostMask;
2230 }
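
    /* Layout note: the single contiguous allocation above is carved into two cbMaxXState
       blocks per vCPU -- the guest extended state followed by the host extended state --
       with both the ring-3 and ring-0 addresses recorded for each block. */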
2231
2232 /*
2233 * Register saved state data item.
2234 */
2235 rc = SSMR3RegisterInternal(pVM, "cpum", 1, CPUM_SAVED_STATE_VERSION, sizeof(CPUM),
2236 NULL, cpumR3LiveExec, NULL,
2237 NULL, cpumR3SaveExec, NULL,
2238 cpumR3LoadPrep, cpumR3LoadExec, cpumR3LoadDone);
2239 if (RT_FAILURE(rc))
2240 return rc;
2241
2242 /*
2243 * Register info handlers and registers with the debugger facility.
2244 */
2245 DBGFR3InfoRegisterInternalEx(pVM, "cpum", "Displays all the cpu states.",
2246 &cpumR3InfoAll, DBGFINFO_FLAGS_ALL_EMTS);
2247 DBGFR3InfoRegisterInternalEx(pVM, "cpumguest", "Displays the guest cpu state.",
2248 &cpumR3InfoGuest, DBGFINFO_FLAGS_ALL_EMTS);
2249 DBGFR3InfoRegisterInternalEx(pVM, "cpumguesthwvirt", "Displays the guest hwvirt. cpu state.",
2250 &cpumR3InfoGuestHwvirt, DBGFINFO_FLAGS_ALL_EMTS);
2251 DBGFR3InfoRegisterInternalEx(pVM, "cpumhyper", "Displays the hypervisor cpu state.",
2252 &cpumR3InfoHyper, DBGFINFO_FLAGS_ALL_EMTS);
2253 DBGFR3InfoRegisterInternalEx(pVM, "cpumhost", "Displays the host cpu state.",
2254 &cpumR3InfoHost, DBGFINFO_FLAGS_ALL_EMTS);
2255 DBGFR3InfoRegisterInternalEx(pVM, "cpumguestinstr", "Displays the current guest instruction.",
2256 &cpumR3InfoGuestInstr, DBGFINFO_FLAGS_ALL_EMTS);
2257 DBGFR3InfoRegisterInternal( pVM, "cpuid", "Displays the guest cpuid leaves.", &cpumR3CpuIdInfo);
2258 DBGFR3InfoRegisterInternal( pVM, "cpumvmxfeat", "Displays the host and guest VMX hwvirt. features.",
2259 &cpumR3InfoVmxFeatures);
2260
2261 rc = cpumR3DbgInit(pVM);
2262 if (RT_FAILURE(rc))
2263 return rc;
2264
2265 /*
2266 * Check if we need to work around partial/leaky FPU handling.
2267 */
2268 cpumR3CheckLeakyFpu(pVM);
2269
2270 /*
2271 * Initialize the Guest CPUID and MSR states.
2272 */
2273 rc = cpumR3InitCpuIdAndMsrs(pVM, &HostMsrs);
2274 if (RT_FAILURE(rc))
2275 return rc;
2276
2277 /*
2278 * Allocate memory required by the guest hardware-virtualization structures.
2279 * This must be done after initializing CPUID/MSR features as we access the
2280 * VMX/SVM guest features below.
2281 *
2282 * In the case of nested VT-x, we also need to create the per-VCPU
2283 * VMX preemption timers.
2284 */
2285 if (pVM->cpum.s.GuestFeatures.fVmx)
2286 rc = cpumR3AllocVmxHwVirtState(pVM);
2287 else if (pVM->cpum.s.GuestFeatures.fSvm)
2288 rc = cpumR3AllocSvmHwVirtState(pVM);
2289 else
2290 Assert(pVM->apCpusR3[0]->cpum.s.Guest.hwvirt.enmHwvirt == CPUMHWVIRT_NONE);
2291 if (RT_FAILURE(rc))
2292 return rc;
2293
2294 CPUMR3Reset(pVM);
2295 return VINF_SUCCESS;
2296}
2297
2298
2299/**
2300 * Applies relocations to data and code managed by this
2301 * component. This function will be called at init and
2302 * whenever the VMM needs to relocate itself inside the GC.
2303 *
2304 * The CPUM will update the addresses used by the switcher.
2305 *
2306 * @param pVM The cross context VM structure.
2307 */
2308VMMR3DECL(void) CPUMR3Relocate(PVM pVM)
2309{
2310 RT_NOREF(pVM);
2311}
2312
2313
2314/**
2315 * Terminates the CPUM.
2316 *
2317 * Termination means cleaning up and freeing all resources;
2318 * the VM itself is at this point powered off or suspended.
2319 *
2320 * @returns VBox status code.
2321 * @param pVM The cross context VM structure.
2322 */
2323VMMR3DECL(int) CPUMR3Term(PVM pVM)
2324{
2325#ifdef VBOX_WITH_CRASHDUMP_MAGIC
2326 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2327 {
2328 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2329 memset(pVCpu->cpum.s.aMagic, 0, sizeof(pVCpu->cpum.s.aMagic));
2330 pVCpu->cpum.s.uMagic = 0;
2331 pVCpu->cpum.s.Guest.dr[5] = 0;
2332 }
2333#endif
2334
2335 if (pVM->cpum.s.GuestFeatures.fVmx)
2336 {
2337 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2338 {
2339 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2340 int rc = TMR3TimerDestroy(pVCpu->cpum.s.pNestedVmxPreemptTimerR3); AssertRC(rc);
2341 pVCpu->cpum.s.pNestedVmxPreemptTimerR0 = NIL_RTR0PTR;
2342 }
2343
2344 cpumR3FreeVmxHwVirtState(pVM);
2345 }
2346 else if (pVM->cpum.s.GuestFeatures.fSvm)
2347 cpumR3FreeSvmHwVirtState(pVM);
2348 return VINF_SUCCESS;
2349}
2350
2351
2352/**
2353 * Resets a virtual CPU.
2354 *
2355 * Used by CPUMR3Reset and CPU hot plugging.
2356 *
2357 * @param pVM The cross context VM structure.
2358 * @param pVCpu The cross context virtual CPU structure of the CPU that is
2359 * being reset. This may differ from the current EMT.
2360 */
2361VMMR3DECL(void) CPUMR3ResetCpu(PVM pVM, PVMCPU pVCpu)
2362{
2363 /** @todo anything different for VCPU > 0? */
2364 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
2365
2366 /*
2367 * Initialize everything to ZERO first.
2368 */
2369 uint32_t fUseFlags = pVCpu->cpum.s.fUseFlags & ~CPUM_USED_FPU_SINCE_REM;
2370
2371 AssertCompile(RTASSERT_OFFSET_OF(CPUMCTX, pXStateR0) < RTASSERT_OFFSET_OF(CPUMCTX, pXStateR3));
2372 memset(pCtx, 0, RT_UOFFSETOF(CPUMCTX, pXStateR0));
2373
2374 pVCpu->cpum.s.fUseFlags = fUseFlags;
2375
2376 pCtx->cr0 = X86_CR0_CD | X86_CR0_NW | X86_CR0_ET; //0x60000010
2377 pCtx->eip = 0x0000fff0;
2378 pCtx->edx = 0x00000600; /* P6 processor */
2379 pCtx->eflags.Bits.u1Reserved0 = 1;
2380
2381 pCtx->cs.Sel = 0xf000;
2382 pCtx->cs.ValidSel = 0xf000;
2383 pCtx->cs.fFlags = CPUMSELREG_FLAGS_VALID;
2384 pCtx->cs.u64Base = UINT64_C(0xffff0000);
2385 pCtx->cs.u32Limit = 0x0000ffff;
2386 pCtx->cs.Attr.n.u1DescType = 1; /* code/data segment */
2387 pCtx->cs.Attr.n.u1Present = 1;
2388 pCtx->cs.Attr.n.u4Type = X86_SEL_TYPE_ER_ACC;
2389
2390 pCtx->ds.fFlags = CPUMSELREG_FLAGS_VALID;
2391 pCtx->ds.u32Limit = 0x0000ffff;
2392 pCtx->ds.Attr.n.u1DescType = 1; /* code/data segment */
2393 pCtx->ds.Attr.n.u1Present = 1;
2394 pCtx->ds.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
2395
2396 pCtx->es.fFlags = CPUMSELREG_FLAGS_VALID;
2397 pCtx->es.u32Limit = 0x0000ffff;
2398 pCtx->es.Attr.n.u1DescType = 1; /* code/data segment */
2399 pCtx->es.Attr.n.u1Present = 1;
2400 pCtx->es.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
2401
2402 pCtx->fs.fFlags = CPUMSELREG_FLAGS_VALID;
2403 pCtx->fs.u32Limit = 0x0000ffff;
2404 pCtx->fs.Attr.n.u1DescType = 1; /* code/data segment */
2405 pCtx->fs.Attr.n.u1Present = 1;
2406 pCtx->fs.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
2407
2408 pCtx->gs.fFlags = CPUMSELREG_FLAGS_VALID;
2409 pCtx->gs.u32Limit = 0x0000ffff;
2410 pCtx->gs.Attr.n.u1DescType = 1; /* code/data segment */
2411 pCtx->gs.Attr.n.u1Present = 1;
2412 pCtx->gs.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
2413
2414 pCtx->ss.fFlags = CPUMSELREG_FLAGS_VALID;
2415 pCtx->ss.u32Limit = 0x0000ffff;
2416 pCtx->ss.Attr.n.u1Present = 1;
2417 pCtx->ss.Attr.n.u1DescType = 1; /* code/data segment */
2418 pCtx->ss.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
2419
2420 pCtx->idtr.cbIdt = 0xffff;
2421 pCtx->gdtr.cbGdt = 0xffff;
2422
2423 pCtx->ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
2424 pCtx->ldtr.u32Limit = 0xffff;
2425 pCtx->ldtr.Attr.n.u1Present = 1;
2426 pCtx->ldtr.Attr.n.u4Type = X86_SEL_TYPE_SYS_LDT;
2427
2428 pCtx->tr.fFlags = CPUMSELREG_FLAGS_VALID;
2429 pCtx->tr.u32Limit = 0xffff;
2430 pCtx->tr.Attr.n.u1Present = 1;
2431 pCtx->tr.Attr.n.u4Type = X86_SEL_TYPE_SYS_386_TSS_BUSY; /* Deduction, not properly documented by Intel. */
2432
2433 pCtx->dr[6] = X86_DR6_INIT_VAL;
2434 pCtx->dr[7] = X86_DR7_INIT_VAL;
2435
2436 PX86FXSTATE pFpuCtx = &pCtx->pXStateR3->x87; AssertReleaseMsg(RT_VALID_PTR(pFpuCtx), ("%p\n", pFpuCtx));
2437 pFpuCtx->FTW = 0x00; /* All empty (abridged tag reg edition). */
2438 pFpuCtx->FCW = 0x37f;
2439
2440 /* Intel 64 and IA-32 Architectures Software Developer's Manual Volume 3A, Table 8-1.
2441 IA-32 Processor States Following Power-up, Reset, or INIT */
2442 pFpuCtx->MXCSR = 0x1F80;
2443 pFpuCtx->MXCSR_MASK = pVM->cpum.s.GuestInfo.fMxCsrMask; /** @todo check if REM messes this up... */
2444
2445 pCtx->aXcr[0] = XSAVE_C_X87;
2446 if (pVM->cpum.s.HostFeatures.cbMaxExtendedState >= RT_UOFFSETOF(X86XSAVEAREA, Hdr))
2447 {
2448 /* The entire FXSAVE state needs loading when we switch to XSAVE/XRSTOR
2449 as we don't know what happened before. (Bother to optimize later?) */
2450 pCtx->pXStateR3->Hdr.bmXState = XSAVE_C_X87 | XSAVE_C_SSE;
2451 }
2452
2453 /*
2454 * MSRs.
2455 */
2456 /* Init PAT MSR */
2457 pCtx->msrPAT = MSR_IA32_CR_PAT_INIT_VAL;
2458
2459 /* EFER MBZ; see AMD64 Architecture Programmer's Manual Volume 2: Table 14-1. Initial Processor State.
2460 * The Intel docs don't mention it. */
2461 Assert(!pCtx->msrEFER);
2462
2463 /* IA32_MISC_ENABLE - not entirely sure what the init/reset state really
2464 is supposed to be here, just trying to provide useful/sensible values. */
2465 PCPUMMSRRANGE pRange = cpumLookupMsrRange(pVM, MSR_IA32_MISC_ENABLE);
2466 if (pRange)
2467 {
2468 pVCpu->cpum.s.GuestMsrs.msr.MiscEnable = MSR_IA32_MISC_ENABLE_BTS_UNAVAIL
2469 | MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL
2470 | (pVM->cpum.s.GuestFeatures.fMonitorMWait ? MSR_IA32_MISC_ENABLE_MONITOR : 0)
2471 | MSR_IA32_MISC_ENABLE_FAST_STRINGS;
2472 pRange->fWrIgnMask |= MSR_IA32_MISC_ENABLE_BTS_UNAVAIL
2473 | MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL;
2474 pRange->fWrGpMask &= ~pVCpu->cpum.s.GuestMsrs.msr.MiscEnable;
2475 }
2476
2477 /** @todo Wire IA32_MISC_ENABLE bit 22 to our NT 4 CPUID trick. */
2478
2479 /** @todo r=ramshankar: Currently broken for SMP as TMCpuTickSet() expects to be
2480 * called from each EMT while we're getting called by CPUMR3Reset()
2481 * iteratively on the same thread. Fix later. */
2482#if 0 /** @todo r=bird: This we will do in TM, not here. */
2483 /* TSC must be 0. Intel spec. Table 9-1. "IA-32 Processor States Following Power-up, Reset, or INIT." */
2484 CPUMSetGuestMsr(pVCpu, MSR_IA32_TSC, 0);
2485#endif
2486
2487
2488 /* C-state control. Guesses. */
2489 pVCpu->cpum.s.GuestMsrs.msr.PkgCStateCfgCtrl = 1 /*C1*/ | RT_BIT_32(25) | RT_BIT_32(26) | RT_BIT_32(27) | RT_BIT_32(28);
2490 /* For Nehalem+ and Atoms, the 0xE2 MSR (MSR_PKG_CST_CONFIG_CONTROL) is documented. For Core 2,
2491 * it's undocumented but exists as MSR_PMG_CST_CONFIG_CONTROL and has similar but not identical
2492 * functionality. The default value must be different due to incompatible write mask.
2493 */
2494 if (CPUMMICROARCH_IS_INTEL_CORE2(pVM->cpum.s.GuestFeatures.enmMicroarch))
2495 pVCpu->cpum.s.GuestMsrs.msr.PkgCStateCfgCtrl = 0x202a01; /* From Mac Pro Harpertown, unlocked. */
2496 else if (pVM->cpum.s.GuestFeatures.enmMicroarch == kCpumMicroarch_Intel_Core_Yonah)
2497 pVCpu->cpum.s.GuestMsrs.msr.PkgCStateCfgCtrl = 0x26740c; /* From MacBookPro1,1. */
2498
2499 /*
2500 * Hardware virtualization state.
2501 */
2502 CPUMSetGuestGif(pCtx, true);
2503 Assert(!pVM->cpum.s.GuestFeatures.fVmx || !pVM->cpum.s.GuestFeatures.fSvm); /* Paranoia. */
2504 if (pVM->cpum.s.GuestFeatures.fVmx)
2505 cpumR3ResetVmxHwVirtState(pVCpu);
2506 else if (pVM->cpum.s.GuestFeatures.fSvm)
2507 cpumR3ResetSvmHwVirtState(pVCpu);
2508}
2509
2510
2511/**
2512 * Resets the CPU.
2513 *
2514 * @returns VINF_SUCCESS.
2515 * @param pVM The cross context VM structure.
2516 */
2517VMMR3DECL(void) CPUMR3Reset(PVM pVM)
2518{
2519 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2520 {
2521 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2522 CPUMR3ResetCpu(pVM, pVCpu);
2523
2524#ifdef VBOX_WITH_CRASHDUMP_MAGIC
2525
2526 /* Magic marker for searching in crash dumps. */
2527 strcpy((char *)pVCpu->cpum.s.aMagic, "CPUMCPU Magic");
2528 pVCpu->cpum.s.uMagic = UINT64_C(0xDEADBEEFDEADBEEF);
2529 pVCpu->cpum.s.Guest.dr[5] = UINT64_C(0xDEADBEEFDEADBEEF);
2530#endif
2531 }
2532}
2533
2534
2535
2536
2537/**
2538 * Pass 0 live exec callback.
2539 *
2540 * @returns VINF_SSM_DONT_CALL_AGAIN.
2541 * @param pVM The cross context VM structure.
2542 * @param pSSM The saved state handle.
2543 * @param uPass The pass (0).
2544 */
2545static DECLCALLBACK(int) cpumR3LiveExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uPass)
2546{
2547 AssertReturn(uPass == 0, VERR_SSM_UNEXPECTED_PASS);
2548 cpumR3SaveCpuId(pVM, pSSM);
2549 return VINF_SSM_DONT_CALL_AGAIN;
2550}
2551
2552
2553/**
2554 * Execute state save operation.
2555 *
2556 * @returns VBox status code.
2557 * @param pVM The cross context VM structure.
2558 * @param pSSM SSM operation handle.
2559 */
2560static DECLCALLBACK(int) cpumR3SaveExec(PVM pVM, PSSMHANDLE pSSM)
2561{
2562 /*
2563 * Save.
2564 */
2565 SSMR3PutU32(pSSM, pVM->cCpus);
2566 SSMR3PutU32(pSSM, sizeof(pVM->apCpusR3[0]->cpum.s.GuestMsrs.msr));
2567 CPUMCTX DummyHyperCtx;
2568 RT_ZERO(DummyHyperCtx);
2569 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2570 {
2571 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2572
2573 SSMR3PutStructEx(pSSM, &DummyHyperCtx, sizeof(DummyHyperCtx), 0, g_aCpumCtxFields, NULL);
2574
2575 PCPUMCTX pGstCtx = &pVCpu->cpum.s.Guest;
2576 SSMR3PutStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), 0, g_aCpumCtxFields, NULL);
2577 SSMR3PutStructEx(pSSM, &pGstCtx->pXStateR3->x87, sizeof(pGstCtx->pXStateR3->x87), 0, g_aCpumX87Fields, NULL);
2578 if (pGstCtx->fXStateMask != 0)
2579 SSMR3PutStructEx(pSSM, &pGstCtx->pXStateR3->Hdr, sizeof(pGstCtx->pXStateR3->Hdr), 0, g_aCpumXSaveHdrFields, NULL);
2580 if (pGstCtx->fXStateMask & XSAVE_C_YMM)
2581 {
2582 PCX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_YMM_BIT, PCX86XSAVEYMMHI);
2583 SSMR3PutStructEx(pSSM, pYmmHiCtx, sizeof(*pYmmHiCtx), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumYmmHiFields, NULL);
2584 }
2585 if (pGstCtx->fXStateMask & XSAVE_C_BNDREGS)
2586 {
2587 PCX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDREGS_BIT, PCX86XSAVEBNDREGS);
2588 SSMR3PutStructEx(pSSM, pBndRegs, sizeof(*pBndRegs), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndRegsFields, NULL);
2589 }
2590 if (pGstCtx->fXStateMask & XSAVE_C_BNDCSR)
2591 {
2592 PCX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDCSR_BIT, PCX86XSAVEBNDCFG);
2593 SSMR3PutStructEx(pSSM, pBndCfg, sizeof(*pBndCfg), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndCfgFields, NULL);
2594 }
2595 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_HI256)
2596 {
2597 PCX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_HI256_BIT, PCX86XSAVEZMMHI256);
2598 SSMR3PutStructEx(pSSM, pZmmHi256, sizeof(*pZmmHi256), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmmHi256Fields, NULL);
2599 }
2600 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_16HI)
2601 {
2602 PCX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_16HI_BIT, PCX86XSAVEZMM16HI);
2603 SSMR3PutStructEx(pSSM, pZmm16Hi, sizeof(*pZmm16Hi), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmm16HiFields, NULL);
2604 }
2605 if (pVM->cpum.s.GuestFeatures.fSvm)
2606 {
2607 Assert(pGstCtx->hwvirt.svm.CTX_SUFF(pVmcb));
2608 SSMR3PutU64(pSSM, pGstCtx->hwvirt.svm.uMsrHSavePa);
2609 SSMR3PutGCPhys(pSSM, pGstCtx->hwvirt.svm.GCPhysVmcb);
2610 SSMR3PutU64(pSSM, pGstCtx->hwvirt.svm.uPrevPauseTick);
2611 SSMR3PutU16(pSSM, pGstCtx->hwvirt.svm.cPauseFilter);
2612 SSMR3PutU16(pSSM, pGstCtx->hwvirt.svm.cPauseFilterThreshold);
2613 SSMR3PutBool(pSSM, pGstCtx->hwvirt.svm.fInterceptEvents);
2614 SSMR3PutStructEx(pSSM, &pGstCtx->hwvirt.svm.HostState, sizeof(pGstCtx->hwvirt.svm.HostState), 0 /* fFlags */,
2615 g_aSvmHwvirtHostState, NULL /* pvUser */);
2616 SSMR3PutMem(pSSM, pGstCtx->hwvirt.svm.pVmcbR3, SVM_VMCB_PAGES << X86_PAGE_4K_SHIFT);
2617 SSMR3PutMem(pSSM, pGstCtx->hwvirt.svm.pvMsrBitmapR3, SVM_MSRPM_PAGES << X86_PAGE_4K_SHIFT);
2618 SSMR3PutMem(pSSM, pGstCtx->hwvirt.svm.pvIoBitmapR3, SVM_IOPM_PAGES << X86_PAGE_4K_SHIFT);
2619 SSMR3PutU32(pSSM, pGstCtx->hwvirt.fLocalForcedActions);
2620 SSMR3PutBool(pSSM, pGstCtx->hwvirt.fGif);
2621 }
2622 if (pVM->cpum.s.GuestFeatures.fVmx)
2623 {
2624 Assert(pGstCtx->hwvirt.vmx.CTX_SUFF(pVmcs));
2625 SSMR3PutGCPhys(pSSM, pGstCtx->hwvirt.vmx.GCPhysVmxon);
2626 SSMR3PutGCPhys(pSSM, pGstCtx->hwvirt.vmx.GCPhysVmcs);
2627 SSMR3PutGCPhys(pSSM, pGstCtx->hwvirt.vmx.GCPhysShadowVmcs);
2628 SSMR3PutBool(pSSM, pGstCtx->hwvirt.vmx.fInVmxRootMode);
2629 SSMR3PutBool(pSSM, pGstCtx->hwvirt.vmx.fInVmxNonRootMode);
2630 SSMR3PutBool(pSSM, pGstCtx->hwvirt.vmx.fInterceptEvents);
2631 SSMR3PutBool(pSSM, pGstCtx->hwvirt.vmx.fNmiUnblockingIret);
2632 SSMR3PutStructEx(pSSM, pGstCtx->hwvirt.vmx.pVmcsR3, sizeof(VMXVVMCS), 0, g_aVmxHwvirtVmcs, NULL);
2633 SSMR3PutStructEx(pSSM, pGstCtx->hwvirt.vmx.pShadowVmcsR3, sizeof(VMXVVMCS), 0, g_aVmxHwvirtVmcs, NULL);
2634 SSMR3PutMem(pSSM, pGstCtx->hwvirt.vmx.pvVmreadBitmapR3, VMX_V_VMREAD_VMWRITE_BITMAP_SIZE);
2635 SSMR3PutMem(pSSM, pGstCtx->hwvirt.vmx.pvVmwriteBitmapR3, VMX_V_VMREAD_VMWRITE_BITMAP_SIZE);
2636 SSMR3PutMem(pSSM, pGstCtx->hwvirt.vmx.pEntryMsrLoadAreaR3, VMX_V_AUTOMSR_AREA_SIZE);
2637 SSMR3PutMem(pSSM, pGstCtx->hwvirt.vmx.pExitMsrStoreAreaR3, VMX_V_AUTOMSR_AREA_SIZE);
2638 SSMR3PutMem(pSSM, pGstCtx->hwvirt.vmx.pExitMsrLoadAreaR3, VMX_V_AUTOMSR_AREA_SIZE);
2639 SSMR3PutMem(pSSM, pGstCtx->hwvirt.vmx.pvMsrBitmapR3, VMX_V_MSR_BITMAP_SIZE);
2640 SSMR3PutMem(pSSM, pGstCtx->hwvirt.vmx.pvIoBitmapR3, VMX_V_IO_BITMAP_A_SIZE + VMX_V_IO_BITMAP_B_SIZE);
2641 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.uFirstPauseLoopTick);
2642 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.uPrevPauseTick);
2643 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.uEntryTick);
2644 SSMR3PutU16(pSSM, pGstCtx->hwvirt.vmx.offVirtApicWrite);
2645 SSMR3PutBool(pSSM, pGstCtx->hwvirt.vmx.fVirtNmiBlocking);
2646 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64FeatCtrl);
2647 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Basic);
2648 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.PinCtls.u);
2649 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.ProcCtls.u);
2650 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.ProcCtls2.u);
2651 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.ExitCtls.u);
2652 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.EntryCtls.u);
2653 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.TruePinCtls.u);
2654 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.TrueProcCtls.u);
2655 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.TrueEntryCtls.u);
2656 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.TrueExitCtls.u);
2657 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Misc);
2658 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Cr0Fixed0);
2659 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Cr0Fixed1);
2660 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Cr4Fixed0);
2661 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Cr4Fixed1);
2662 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64VmcsEnum);
2663 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64VmFunc);
2664 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64EptVpidCaps);
2665 }
2666 SSMR3PutU32(pSSM, pVCpu->cpum.s.fUseFlags);
2667 SSMR3PutU32(pSSM, pVCpu->cpum.s.fChanged);
2668 AssertCompileSizeAlignment(pVCpu->cpum.s.GuestMsrs.msr, sizeof(uint64_t));
2669 SSMR3PutMem(pSSM, &pVCpu->cpum.s.GuestMsrs, sizeof(pVCpu->cpum.s.GuestMsrs.msr));
2670 }
2671
2672 cpumR3SaveCpuId(pVM, pSSM);
2673 return VINF_SUCCESS;
2674}
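/*
 * Editor's note: an informal sketch of the unit layout produced above, which
 * cpumR3LoadExec below consumes in exactly the same order:
 *
 *      u32                 cCpus
 *      u32                 cbMsrs (sizeof(GuestMsrs.msr))
 *      per VCPU:           dummy hyper CPUMCTX
 *                          guest CPUMCTX + x87/FXSAVE area
 *                          [XSAVE header + enabled components]
 *                          [SVM hardware-virt state, if exposed to the guest]
 *                          [VMX hardware-virt state, if exposed to the guest]
 *                          fUseFlags, fChanged, guest MSRs
 *      CPUID leaves        (cpumR3SaveCpuId)
 */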
2675
2676
2677/**
2678 * @callback_method_impl{FNSSMINTLOADPREP}
2679 */
2680static DECLCALLBACK(int) cpumR3LoadPrep(PVM pVM, PSSMHANDLE pSSM)
2681{
2682 NOREF(pSSM);
2683 pVM->cpum.s.fPendingRestore = true;
2684 return VINF_SUCCESS;
2685}
2686
2687
2688/**
2689 * @callback_method_impl{FNSSMINTLOADEXEC}
2690 */
2691static DECLCALLBACK(int) cpumR3LoadExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass)
2692{
2693 int rc; /* Only for AssertRCReturn use. */
2694
2695 /*
2696 * Validate version.
2697 */
2698 if ( uVersion != CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_IEM
2699 && uVersion != CPUM_SAVED_STATE_VERSION_HWVIRT_SVM
2700 && uVersion != CPUM_SAVED_STATE_VERSION_XSAVE
2701 && uVersion != CPUM_SAVED_STATE_VERSION_GOOD_CPUID_COUNT
2702 && uVersion != CPUM_SAVED_STATE_VERSION_BAD_CPUID_COUNT
2703 && uVersion != CPUM_SAVED_STATE_VERSION_PUT_STRUCT
2704 && uVersion != CPUM_SAVED_STATE_VERSION_MEM
2705 && uVersion != CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE
2706 && uVersion != CPUM_SAVED_STATE_VERSION_VER3_2
2707 && uVersion != CPUM_SAVED_STATE_VERSION_VER3_0
2708 && uVersion != CPUM_SAVED_STATE_VERSION_VER2_1_NOMSR
2709 && uVersion != CPUM_SAVED_STATE_VERSION_VER2_0
2710 && uVersion != CPUM_SAVED_STATE_VERSION_VER1_6)
2711 {
2712 AssertMsgFailed(("cpumR3LoadExec: Invalid version uVersion=%d!\n", uVersion));
2713 return VERR_SSM_UNSUPPORTED_DATA_UNIT_VERSION;
2714 }
2715
2716 if (uPass == SSM_PASS_FINAL)
2717 {
2718 /*
2719 * Set the size of RTGCPTR for SSMR3GetGCPtr. (Only necessary for
2720 * really old SSM file versions.)
2721 */
2722 if (uVersion == CPUM_SAVED_STATE_VERSION_VER1_6)
2723 SSMR3HandleSetGCPtrSize(pSSM, sizeof(RTGCPTR32));
2724 else if (uVersion <= CPUM_SAVED_STATE_VERSION_VER3_0)
2725 SSMR3HandleSetGCPtrSize(pSSM, sizeof(RTGCPTR));
2726
2727 /*
2728 * Figure out which x87 and ctx field definitions to use for older states.
2729 */
2730 uint32_t const fLoad = uVersion > CPUM_SAVED_STATE_VERSION_MEM ? 0 : SSMSTRUCT_FLAGS_MEM_BAND_AID_RELAXED;
2731 PCSSMFIELD paCpumCtx1Fields = g_aCpumX87Fields;
2732 PCSSMFIELD paCpumCtx2Fields = g_aCpumCtxFields;
2733 if (uVersion == CPUM_SAVED_STATE_VERSION_VER1_6)
2734 {
2735 paCpumCtx1Fields = g_aCpumX87FieldsV16;
2736 paCpumCtx2Fields = g_aCpumCtxFieldsV16;
2737 }
2738 else if (uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
2739 {
2740 paCpumCtx1Fields = g_aCpumX87FieldsMem;
2741 paCpumCtx2Fields = g_aCpumCtxFieldsMem;
2742 }
2743
2744 /*
2745 * The hyper state used to precede the CPU count. Starting with
2746 * the XSAVE format it was moved down until after the count.
2747 */
2748 CPUMCTX HyperCtxIgnored;
2749 if (uVersion < CPUM_SAVED_STATE_VERSION_XSAVE)
2750 {
2751 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2752 {
2753 X86FXSTATE Ign;
2754 SSMR3GetStructEx(pSSM, &Ign, sizeof(Ign), fLoad | SSMSTRUCT_FLAGS_NO_TAIL_MARKER, paCpumCtx1Fields, NULL);
2755 SSMR3GetStructEx(pSSM, &HyperCtxIgnored, sizeof(HyperCtxIgnored),
2756 fLoad | SSMSTRUCT_FLAGS_NO_LEAD_MARKER, paCpumCtx2Fields, NULL);
2757 }
2758 }
2759
2760 if (uVersion >= CPUM_SAVED_STATE_VERSION_VER2_1_NOMSR)
2761 {
2762 uint32_t cCpus;
2763 rc = SSMR3GetU32(pSSM, &cCpus); AssertRCReturn(rc, rc);
2764 AssertLogRelMsgReturn(cCpus == pVM->cCpus, ("Mismatching CPU counts: saved: %u; configured: %u \n", cCpus, pVM->cCpus),
2765 VERR_SSM_UNEXPECTED_DATA);
2766 }
2767 AssertLogRelMsgReturn( uVersion > CPUM_SAVED_STATE_VERSION_VER2_0
2768 || pVM->cCpus == 1,
2769 ("cCpus=%u\n", pVM->cCpus),
2770 VERR_SSM_UNEXPECTED_DATA);
2771
2772 uint32_t cbMsrs = 0;
2773 if (uVersion > CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE)
2774 {
2775 rc = SSMR3GetU32(pSSM, &cbMsrs); AssertRCReturn(rc, rc);
2776 AssertLogRelMsgReturn(RT_ALIGN(cbMsrs, sizeof(uint64_t)) == cbMsrs, ("Size of MSRs is misaligned: %#x\n", cbMsrs),
2777 VERR_SSM_UNEXPECTED_DATA);
2778 AssertLogRelMsgReturn(cbMsrs <= sizeof(CPUMCTXMSRS) && cbMsrs > 0, ("Size of MSRs is out of range: %#x\n", cbMsrs),
2779 VERR_SSM_UNEXPECTED_DATA);
2780 }
2781
2782 /*
2783 * Do the per-CPU restoring.
2784 */
2785 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2786 {
2787 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2788 PCPUMCTX pGstCtx = &pVCpu->cpum.s.Guest;
2789
2790 if (uVersion >= CPUM_SAVED_STATE_VERSION_XSAVE)
2791 {
2792 /*
2793 * The XSAVE saved state layout moved the hyper state down here.
2794 */
2795 rc = SSMR3GetStructEx(pSSM, &HyperCtxIgnored, sizeof(HyperCtxIgnored), 0, g_aCpumCtxFields, NULL);
2796 AssertRCReturn(rc, rc);
2797
2798 /*
2799 * Start by restoring the CPUMCTX structure and the X86FXSAVE bits of the extended state.
2800 */
2801 rc = SSMR3GetStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), 0, g_aCpumCtxFields, NULL);
2802 rc = SSMR3GetStructEx(pSSM, &pGstCtx->pXStateR3->x87, sizeof(pGstCtx->pXStateR3->x87), 0, g_aCpumX87Fields, NULL);
2803 AssertRCReturn(rc, rc);
2804
2805 /* Check that the xsave/xrstor mask is valid (invalid results in #GP). */
2806 if (pGstCtx->fXStateMask != 0)
2807 {
2808 AssertLogRelMsgReturn(!(pGstCtx->fXStateMask & ~pVM->cpum.s.fXStateGuestMask),
2809 ("fXStateMask=%#RX64 fXStateGuestMask=%#RX64\n",
2810 pGstCtx->fXStateMask, pVM->cpum.s.fXStateGuestMask),
2811 VERR_CPUM_INCOMPATIBLE_XSAVE_COMP_MASK);
2812 AssertLogRelMsgReturn(pGstCtx->fXStateMask & XSAVE_C_X87,
2813 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2814 AssertLogRelMsgReturn((pGstCtx->fXStateMask & (XSAVE_C_SSE | XSAVE_C_YMM)) != XSAVE_C_YMM,
2815 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2816 AssertLogRelMsgReturn( (pGstCtx->fXStateMask & (XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI)) == 0
2817 || (pGstCtx->fXStateMask & (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI))
2818 == (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI),
2819 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2820 }
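/*
 * Editor's note, a worked example of the shape checks above (the separate
 * guest-mask containment check still applies): a mask of X87|SSE|YMM (0x7)
 * is accepted, while X87|YMM (0x5) is rejected because YMM requires SSE;
 * the AVX-512 components OPMASK|ZMM_HI256|ZMM_16HI (0xe0) must either all
 * be absent or all be present together with SSE|YMM, so e.g. 0xe7 is valid
 * and 0x27 is not.
 */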
2821
2822 /* Check that the XCR0 mask is valid (invalid results in #GP). */
2823 AssertLogRelMsgReturn(pGstCtx->aXcr[0] & XSAVE_C_X87, ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XCR0);
2824 if (pGstCtx->aXcr[0] != XSAVE_C_X87)
2825 {
2826 AssertLogRelMsgReturn(!(pGstCtx->aXcr[0] & ~(pGstCtx->fXStateMask | XSAVE_C_X87)),
2827 ("xcr0=%#RX64 fXStateMask=%#RX64\n", pGstCtx->aXcr[0], pGstCtx->fXStateMask),
2828 VERR_CPUM_INVALID_XCR0);
2829 AssertLogRelMsgReturn(pGstCtx->aXcr[0] & XSAVE_C_X87,
2830 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2831 AssertLogRelMsgReturn((pGstCtx->aXcr[0] & (XSAVE_C_SSE | XSAVE_C_YMM)) != XSAVE_C_YMM,
2832 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2833 AssertLogRelMsgReturn( (pGstCtx->aXcr[0] & (XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI)) == 0
2834 || (pGstCtx->aXcr[0] & (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI))
2835 == (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI),
2836 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2837 }
2838
2839 /* Check that the XCR1 is zero, as we don't implement it yet. */
2840 AssertLogRelMsgReturn(!pGstCtx->aXcr[1], ("xcr1=%#RX64\n", pGstCtx->aXcr[1]), VERR_SSM_DATA_UNIT_FORMAT_CHANGED);
2841
2842 /*
2843 * Restore the individual extended state components we support.
2844 */
2845 if (pGstCtx->fXStateMask != 0)
2846 {
2847 rc = SSMR3GetStructEx(pSSM, &pGstCtx->pXStateR3->Hdr, sizeof(pGstCtx->pXStateR3->Hdr),
2848 0, g_aCpumXSaveHdrFields, NULL);
2849 AssertRCReturn(rc, rc);
2850 AssertLogRelMsgReturn(!(pGstCtx->pXStateR3->Hdr.bmXState & ~pGstCtx->fXStateMask),
2851 ("bmXState=%#RX64 fXStateMask=%#RX64\n",
2852 pGstCtx->pXStateR3->Hdr.bmXState, pGstCtx->fXStateMask),
2853 VERR_CPUM_INVALID_XSAVE_HDR);
2854 }
2855 if (pGstCtx->fXStateMask & XSAVE_C_YMM)
2856 {
2857 PX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_YMM_BIT, PX86XSAVEYMMHI);
2858 SSMR3GetStructEx(pSSM, pYmmHiCtx, sizeof(*pYmmHiCtx), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumYmmHiFields, NULL);
2859 }
2860 if (pGstCtx->fXStateMask & XSAVE_C_BNDREGS)
2861 {
2862 PX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDREGS_BIT, PX86XSAVEBNDREGS);
2863 SSMR3GetStructEx(pSSM, pBndRegs, sizeof(*pBndRegs), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndRegsFields, NULL);
2864 }
2865 if (pGstCtx->fXStateMask & XSAVE_C_BNDCSR)
2866 {
2867 PX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDCSR_BIT, PX86XSAVEBNDCFG);
2868 SSMR3GetStructEx(pSSM, pBndCfg, sizeof(*pBndCfg), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndCfgFields, NULL);
2869 }
2870 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_HI256)
2871 {
2872 PX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_HI256_BIT, PX86XSAVEZMMHI256);
2873 SSMR3GetStructEx(pSSM, pZmmHi256, sizeof(*pZmmHi256), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmmHi256Fields, NULL);
2874 }
2875 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_16HI)
2876 {
2877 PX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_16HI_BIT, PX86XSAVEZMM16HI);
2878 SSMR3GetStructEx(pSSM, pZmm16Hi, sizeof(*pZmm16Hi), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmm16HiFields, NULL);
2879 }
2880 if (uVersion >= CPUM_SAVED_STATE_VERSION_HWVIRT_SVM)
2881 {
2882 if (pVM->cpum.s.GuestFeatures.fSvm)
2883 {
2884 Assert(pGstCtx->hwvirt.svm.CTX_SUFF(pVmcb));
2885 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.svm.uMsrHSavePa);
2886 SSMR3GetGCPhys(pSSM, &pGstCtx->hwvirt.svm.GCPhysVmcb);
2887 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.svm.uPrevPauseTick);
2888 SSMR3GetU16(pSSM, &pGstCtx->hwvirt.svm.cPauseFilter);
2889 SSMR3GetU16(pSSM, &pGstCtx->hwvirt.svm.cPauseFilterThreshold);
2890 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.svm.fInterceptEvents);
2891 SSMR3GetStructEx(pSSM, &pGstCtx->hwvirt.svm.HostState, sizeof(pGstCtx->hwvirt.svm.HostState),
2892 0 /* fFlags */, g_aSvmHwvirtHostState, NULL /* pvUser */);
2893 SSMR3GetMem(pSSM, pGstCtx->hwvirt.svm.pVmcbR3, SVM_VMCB_PAGES << X86_PAGE_4K_SHIFT);
2894 SSMR3GetMem(pSSM, pGstCtx->hwvirt.svm.pvMsrBitmapR3, SVM_MSRPM_PAGES << X86_PAGE_4K_SHIFT);
2895 SSMR3GetMem(pSSM, pGstCtx->hwvirt.svm.pvIoBitmapR3, SVM_IOPM_PAGES << X86_PAGE_4K_SHIFT);
2896 SSMR3GetU32(pSSM, &pGstCtx->hwvirt.fLocalForcedActions);
2897 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.fGif);
2898 }
2899 }
2900 if (uVersion >= CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_IEM)
2901 {
2902 if (pVM->cpum.s.GuestFeatures.fVmx)
2903 {
2904 Assert(pGstCtx->hwvirt.vmx.CTX_SUFF(pVmcs));
2905 SSMR3GetGCPhys(pSSM, &pGstCtx->hwvirt.vmx.GCPhysVmxon);
2906 SSMR3GetGCPhys(pSSM, &pGstCtx->hwvirt.vmx.GCPhysVmcs);
2907 SSMR3GetGCPhys(pSSM, &pGstCtx->hwvirt.vmx.GCPhysShadowVmcs);
2908 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.vmx.fInVmxRootMode);
2909 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.vmx.fInVmxNonRootMode);
2910 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.vmx.fInterceptEvents);
2911 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.vmx.fNmiUnblockingIret);
2912 SSMR3GetStructEx(pSSM, pGstCtx->hwvirt.vmx.pVmcsR3, sizeof(VMXVVMCS), 0, g_aVmxHwvirtVmcs, NULL);
2913 SSMR3GetStructEx(pSSM, pGstCtx->hwvirt.vmx.pShadowVmcsR3, sizeof(VMXVVMCS), 0, g_aVmxHwvirtVmcs, NULL);
2914 SSMR3GetMem(pSSM, pGstCtx->hwvirt.vmx.pvVmreadBitmapR3, VMX_V_VMREAD_VMWRITE_BITMAP_SIZE);
2915 SSMR3GetMem(pSSM, pGstCtx->hwvirt.vmx.pvVmwriteBitmapR3, VMX_V_VMREAD_VMWRITE_BITMAP_SIZE);
2916 SSMR3GetMem(pSSM, pGstCtx->hwvirt.vmx.pEntryMsrLoadAreaR3, VMX_V_AUTOMSR_AREA_SIZE);
2917 SSMR3GetMem(pSSM, pGstCtx->hwvirt.vmx.pExitMsrStoreAreaR3, VMX_V_AUTOMSR_AREA_SIZE);
2918 SSMR3GetMem(pSSM, pGstCtx->hwvirt.vmx.pExitMsrLoadAreaR3, VMX_V_AUTOMSR_AREA_SIZE);
2919 SSMR3GetMem(pSSM, pGstCtx->hwvirt.vmx.pvMsrBitmapR3, VMX_V_MSR_BITMAP_SIZE);
2920 SSMR3GetMem(pSSM, pGstCtx->hwvirt.vmx.pvIoBitmapR3, VMX_V_IO_BITMAP_A_SIZE + VMX_V_IO_BITMAP_B_SIZE);
2921 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.uFirstPauseLoopTick);
2922 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.uPrevPauseTick);
2923 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.uEntryTick);
2924 SSMR3GetU16(pSSM, &pGstCtx->hwvirt.vmx.offVirtApicWrite);
2925 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.vmx.fVirtNmiBlocking);
2926 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64FeatCtrl);
2927 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Basic);
2928 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.PinCtls.u);
2929 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.ProcCtls.u);
2930 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.ProcCtls2.u);
2931 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.ExitCtls.u);
2932 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.EntryCtls.u);
2933 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.TruePinCtls.u);
2934 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.TrueProcCtls.u);
2935 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.TrueEntryCtls.u);
2936 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.TrueExitCtls.u);
2937 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Misc);
2938 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Cr0Fixed0);
2939 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Cr0Fixed1);
2940 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Cr4Fixed0);
2941 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Cr4Fixed1);
2942 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64VmcsEnum);
2943 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64VmFunc);
2944 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64EptVpidCaps);
2945 }
2946 }
2947 }
2948 else
2949 {
2950 /*
2951 * Pre XSAVE saved state.
2952 */
2953 SSMR3GetStructEx(pSSM, &pGstCtx->pXStateR3->x87, sizeof(pGstCtx->pXStateR3->x87),
2954 fLoad | SSMSTRUCT_FLAGS_NO_TAIL_MARKER, paCpumCtx1Fields, NULL);
2955 SSMR3GetStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), fLoad | SSMSTRUCT_FLAGS_NO_LEAD_MARKER, paCpumCtx2Fields, NULL);
2956 }
2957
2958 /*
2959 * Restore a couple of flags and the MSRs.
2960 */
2961 SSMR3GetU32(pSSM, &pVCpu->cpum.s.fUseFlags);
2962 SSMR3GetU32(pSSM, &pVCpu->cpum.s.fChanged);
2963
2964 rc = VINF_SUCCESS;
2965 if (uVersion > CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE)
2966 rc = SSMR3GetMem(pSSM, &pVCpu->cpum.s.GuestMsrs.au64[0], cbMsrs);
2967 else if (uVersion >= CPUM_SAVED_STATE_VERSION_VER3_0)
2968 {
2969 SSMR3GetMem(pSSM, &pVCpu->cpum.s.GuestMsrs.au64[0], 2 * sizeof(uint64_t)); /* Restore two MSRs. */
2970 rc = SSMR3Skip(pSSM, 62 * sizeof(uint64_t));
2971 }
2972 AssertRCReturn(rc, rc);
2973
2974 /* REM and others may have cleared must-be-one fields in DR6 and
2975 DR7, fix these. */
2976 pGstCtx->dr[6] &= ~(X86_DR6_RAZ_MASK | X86_DR6_MBZ_MASK);
2977 pGstCtx->dr[6] |= X86_DR6_RA1_MASK;
2978 pGstCtx->dr[7] &= ~(X86_DR7_RAZ_MASK | X86_DR7_MBZ_MASK);
2979 pGstCtx->dr[7] |= X86_DR7_RA1_MASK;
2980 }
2981
2982 /* Older states do not have the internal selector register flags
2983 and valid selector values. Supply those. */
2984 if (uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
2985 {
2986 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2987 {
2988 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2989 bool const fValid = true /*!VM_IS_RAW_MODE_ENABLED(pVM)*/
2990 || ( uVersion > CPUM_SAVED_STATE_VERSION_VER3_2
2991 && !(pVCpu->cpum.s.fChanged & CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID));
2992 PCPUMSELREG paSelReg = CPUMCTX_FIRST_SREG(&pVCpu->cpum.s.Guest);
2993 if (fValid)
2994 {
2995 for (uint32_t iSelReg = 0; iSelReg < X86_SREG_COUNT; iSelReg++)
2996 {
2997 paSelReg[iSelReg].fFlags = CPUMSELREG_FLAGS_VALID;
2998 paSelReg[iSelReg].ValidSel = paSelReg[iSelReg].Sel;
2999 }
3000
3001 pVCpu->cpum.s.Guest.ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
3002 pVCpu->cpum.s.Guest.ldtr.ValidSel = pVCpu->cpum.s.Guest.ldtr.Sel;
3003 }
3004 else
3005 {
3006 for (uint32_t iSelReg = 0; iSelReg < X86_SREG_COUNT; iSelReg++)
3007 {
3008 paSelReg[iSelReg].fFlags = 0;
3009 paSelReg[iSelReg].ValidSel = 0;
3010 }
3011
3012 /* This might not be 104% correct, but I think it's close
3013 enough for all practical purposes... (REM always loaded
3014 LDTR registers.) */
3015 pVCpu->cpum.s.Guest.ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
3016 pVCpu->cpum.s.Guest.ldtr.ValidSel = pVCpu->cpum.s.Guest.ldtr.Sel;
3017 }
3018 pVCpu->cpum.s.Guest.tr.fFlags = CPUMSELREG_FLAGS_VALID;
3019 pVCpu->cpum.s.Guest.tr.ValidSel = pVCpu->cpum.s.Guest.tr.Sel;
3020 }
3021 }
3022
3023 /* Clear CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID. */
3024 if ( uVersion > CPUM_SAVED_STATE_VERSION_VER3_2
3025 && uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
3026 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
3027 {
3028 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
3029 pVCpu->cpum.s.fChanged &= ~CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID;
3030 }
3031
3032 /*
3033 * A quick sanity check.
3034 */
3035 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
3036 {
3037 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
3038 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.es.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
3039 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.cs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
3040 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.ss.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
3041 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.ds.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
3042 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.fs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
3043 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.gs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
3044 }
3045 }
3046
3047 pVM->cpum.s.fPendingRestore = false;
3048
3049 /*
3050 * Guest CPUIDs (and VMX MSR features).
3051 */
3052 if (uVersion >= CPUM_SAVED_STATE_VERSION_VER3_2)
3053 {
3054 CPUMMSRS GuestMsrs;
3055 RT_ZERO(GuestMsrs);
3056
3057 CPUMFEATURES BaseFeatures;
3058 bool const fVmxGstFeat = pVM->cpum.s.GuestFeatures.fVmx;
3059 if (fVmxGstFeat)
3060 {
3061 /*
3062 * At this point the MSRs in the guest CPU-context are loaded with the guest VMX MSRs from the saved state.
3063 * However the VMX sub-features have not been exploded yet. So cache the base (host derived) VMX features
3064 * here so we can compare them for compatibility after exploding guest features.
3065 */
3066 BaseFeatures = pVM->cpum.s.GuestFeatures;
3067
3068 /* Use the VMX MSR features from the saved state while exploding guest features. */
3069 GuestMsrs.hwvirt.vmx = pVM->apCpusR3[0]->cpum.s.Guest.hwvirt.vmx.Msrs;
3070 }
3071
3072 /* Load CPUID and explode guest features. */
3073 rc = cpumR3LoadCpuId(pVM, pSSM, uVersion, &GuestMsrs);
3074 if (fVmxGstFeat)
3075 {
3076 /*
3077 * Check if the exploded VMX features from the saved state are compatible with the host-derived features
3078 * we cached earlier (above). This is required if we use hardware-assisted nested-guest execution with
3079 * VMX features presented to the guest.
3080 */
3081 bool const fIsCompat = cpumR3AreVmxCpuFeaturesCompatible(pVM, &BaseFeatures, &pVM->cpum.s.GuestFeatures);
3082 if (!fIsCompat)
3083 return VERR_CPUM_INVALID_HWVIRT_FEAT_COMBO;
3084 }
3085 return rc;
3086 }
3087 return cpumR3LoadCpuIdPre32(pVM, pSSM, uVersion);
3088}
3089
3090
3091/**
3092 * @callback_method_impl{FNSSMINTLOADDONE}
3093 */
3094static DECLCALLBACK(int) cpumR3LoadDone(PVM pVM, PSSMHANDLE pSSM)
3095{
3096 if (RT_FAILURE(SSMR3HandleGetStatus(pSSM)))
3097 return VINF_SUCCESS;
3098
3099 /* just check this since we can. */ /** @todo Add a SSM unit flag for indicating that it's mandatory during a restore. */
3100 if (pVM->cpum.s.fPendingRestore)
3101 {
3102 LogRel(("CPUM: Missing state!\n"));
3103 return VERR_INTERNAL_ERROR_2;
3104 }
3105
3106 bool const fSupportsLongMode = VMR3IsLongModeAllowed(pVM);
3107 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
3108 {
3109 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
3110
3111 /* Notify PGM of the NXE states in case they've changed. */
3112 PGMNotifyNxeChanged(pVCpu, RT_BOOL(pVCpu->cpum.s.Guest.msrEFER & MSR_K6_EFER_NXE));
3113
3114 /* During init. this is done in CPUMR3InitCompleted(). */
3115 if (fSupportsLongMode)
3116 pVCpu->cpum.s.fUseFlags |= CPUM_USE_SUPPORTS_LONGMODE;
3117 }
3118 return VINF_SUCCESS;
3119}
3120
3121
3122/**
3123 * Checks if the CPUM state restore is still pending.
3124 *
3125 * @returns true / false.
3126 * @param pVM The cross context VM structure.
3127 */
3128VMMDECL(bool) CPUMR3IsStateRestorePending(PVM pVM)
3129{
3130 return pVM->cpum.s.fPendingRestore;
3131}
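/*
 * Editor's note: a hypothetical guard a caller could place before touching the
 * guest context during start-up (sketch only, not an actual call site in this
 * file):
 *
 *      AssertLogRelReturn(!CPUMR3IsStateRestorePending(pVM), VERR_WRONG_ORDER);
 */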
3132
3133
3134/**
3135 * Formats the EFLAGS value into mnemonics.
3136 *
3137 * @param pszEFlags Where to write the mnemonics. (Assumes sufficient buffer space.)
3138 * @param efl The EFLAGS value.
3139 */
3140static void cpumR3InfoFormatFlags(char *pszEFlags, uint32_t efl)
3141{
3142 /*
3143 * Format the flags.
3144 */
3145 static const struct
3146 {
3147 const char *pszSet; const char *pszClear; uint32_t fFlag;
3148 } s_aFlags[] =
3149 {
3150 { "vip",NULL, X86_EFL_VIP },
3151 { "vif",NULL, X86_EFL_VIF },
3152 { "ac", NULL, X86_EFL_AC },
3153 { "vm", NULL, X86_EFL_VM },
3154 { "rf", NULL, X86_EFL_RF },
3155 { "nt", NULL, X86_EFL_NT },
3156 { "ov", "nv", X86_EFL_OF },
3157 { "dn", "up", X86_EFL_DF },
3158 { "ei", "di", X86_EFL_IF },
3159 { "tf", NULL, X86_EFL_TF },
3160 { "nt", "pl", X86_EFL_SF },
3161 { "nz", "zr", X86_EFL_ZF },
3162 { "ac", "na", X86_EFL_AF },
3163 { "po", "pe", X86_EFL_PF },
3164 { "cy", "nc", X86_EFL_CF },
3165 };
3166 char *psz = pszEFlags;
3167 for (unsigned i = 0; i < RT_ELEMENTS(s_aFlags); i++)
3168 {
3169 const char *pszAdd = s_aFlags[i].fFlag & efl ? s_aFlags[i].pszSet : s_aFlags[i].pszClear;
3170 if (pszAdd)
3171 {
3172 strcpy(psz, pszAdd);
3173 psz += strlen(pszAdd);
3174 *psz++ = ' ';
3175 }
3176 }
3177 psz[-1] = '\0';
3178}
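/*
 * Editor's note, a worked example of the formatting above: for efl=0x00000246
 * (IF, ZF and PF set) the buffer ends up as "nv up ei pl zr na pe nc"; flags
 * with a NULL clear-mnemonic (vip, vif, ac, vm, rf, nt, tf) are simply omitted
 * when clear.
 */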
3179
3180
3181/**
3182 * Formats a full register dump.
3183 *
3184 * @param pVM The cross context VM structure.
3185 * @param pCtx The context to format.
3186 * @param pCtxCore The context core to format.
3187 * @param pHlp Output functions.
3188 * @param enmType The dump type.
3189 * @param pszPrefix Register name prefix.
3190 */
3191static void cpumR3InfoOne(PVM pVM, PCPUMCTX pCtx, PCCPUMCTXCORE pCtxCore, PCDBGFINFOHLP pHlp, CPUMDUMPTYPE enmType,
3192 const char *pszPrefix)
3193{
3194 NOREF(pVM);
3195
3196 /*
3197 * Format the EFLAGS.
3198 */
3199 uint32_t efl = pCtxCore->eflags.u32;
3200 char szEFlags[80];
3201 cpumR3InfoFormatFlags(&szEFlags[0], efl);
3202
3203 /*
3204 * Format the registers.
3205 */
3206 switch (enmType)
3207 {
3208 case CPUMDUMPTYPE_TERSE:
3209 if (CPUMIsGuestIn64BitCodeEx(pCtx))
3210 pHlp->pfnPrintf(pHlp,
3211 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
3212 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
3213 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
3214 "%sr14=%016RX64 %sr15=%016RX64\n"
3215 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
3216 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %seflags=%08x\n",
3217 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
3218 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
3219 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
3220 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3221 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
3222 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, efl);
3223 else
3224 pHlp->pfnPrintf(pHlp,
3225 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
3226 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
3227 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %seflags=%08x\n",
3228 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
3229 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3230 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
3231 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, efl);
3232 break;
3233
3234 case CPUMDUMPTYPE_DEFAULT:
3235 if (CPUMIsGuestIn64BitCodeEx(pCtx))
3236 pHlp->pfnPrintf(pHlp,
3237 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
3238 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
3239 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
3240 "%sr14=%016RX64 %sr15=%016RX64\n"
3241 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
3242 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %str=%04x %seflags=%08x\n"
3243 "%scr0=%08RX64 %scr2=%08RX64 %scr3=%08RX64 %scr4=%08RX64 %sgdtr=%016RX64:%04x %sldtr=%04x\n"
3244 ,
3245 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
3246 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
3247 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
3248 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3249 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
3250 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, pCtx->tr.Sel, pszPrefix, efl,
3251 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
3252 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->ldtr.Sel);
3253 else
3254 pHlp->pfnPrintf(pHlp,
3255 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
3256 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
3257 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %str=%04x %seflags=%08x\n"
3258 "%scr0=%08RX64 %scr2=%08RX64 %scr3=%08RX64 %scr4=%08RX64 %sgdtr=%08RX64:%04x %sldtr=%04x\n"
3259 ,
3260 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
3261 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3262 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
3263 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, pCtx->tr.Sel, pszPrefix, efl,
3264 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
3265 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->ldtr.Sel);
3266 break;
3267
3268 case CPUMDUMPTYPE_VERBOSE:
3269 if (CPUMIsGuestIn64BitCodeEx(pCtx))
3270 pHlp->pfnPrintf(pHlp,
3271 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
3272 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
3273 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
3274 "%sr14=%016RX64 %sr15=%016RX64\n"
3275 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
3276 "%scs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3277 "%sds={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3278 "%ses={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3279 "%sfs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3280 "%sgs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3281 "%sss={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3282 "%scr0=%016RX64 %scr2=%016RX64 %scr3=%016RX64 %scr4=%016RX64\n"
3283 "%sdr0=%016RX64 %sdr1=%016RX64 %sdr2=%016RX64 %sdr3=%016RX64\n"
3284 "%sdr4=%016RX64 %sdr5=%016RX64 %sdr6=%016RX64 %sdr7=%016RX64\n"
3285 "%sgdtr=%016RX64:%04x %sidtr=%016RX64:%04x %seflags=%08x\n"
3286 "%sldtr={%04x base=%08RX64 limit=%08x flags=%08x}\n"
3287 "%str ={%04x base=%08RX64 limit=%08x flags=%08x}\n"
3288 "%sSysEnter={cs=%04llx eip=%016RX64 esp=%016RX64}\n"
3289 ,
3290 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
3291 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
3292 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
3293 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3294 pszPrefix, pCtxCore->cs.Sel, pCtx->cs.u64Base, pCtx->cs.u32Limit, pCtx->cs.Attr.u,
3295 pszPrefix, pCtxCore->ds.Sel, pCtx->ds.u64Base, pCtx->ds.u32Limit, pCtx->ds.Attr.u,
3296 pszPrefix, pCtxCore->es.Sel, pCtx->es.u64Base, pCtx->es.u32Limit, pCtx->es.Attr.u,
3297 pszPrefix, pCtxCore->fs.Sel, pCtx->fs.u64Base, pCtx->fs.u32Limit, pCtx->fs.Attr.u,
3298 pszPrefix, pCtxCore->gs.Sel, pCtx->gs.u64Base, pCtx->gs.u32Limit, pCtx->gs.Attr.u,
3299 pszPrefix, pCtxCore->ss.Sel, pCtx->ss.u64Base, pCtx->ss.u32Limit, pCtx->ss.Attr.u,
3300 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
3301 pszPrefix, pCtx->dr[0], pszPrefix, pCtx->dr[1], pszPrefix, pCtx->dr[2], pszPrefix, pCtx->dr[3],
3302 pszPrefix, pCtx->dr[4], pszPrefix, pCtx->dr[5], pszPrefix, pCtx->dr[6], pszPrefix, pCtx->dr[7],
3303 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->idtr.pIdt, pCtx->idtr.cbIdt, pszPrefix, efl,
3304 pszPrefix, pCtx->ldtr.Sel, pCtx->ldtr.u64Base, pCtx->ldtr.u32Limit, pCtx->ldtr.Attr.u,
3305 pszPrefix, pCtx->tr.Sel, pCtx->tr.u64Base, pCtx->tr.u32Limit, pCtx->tr.Attr.u,
3306 pszPrefix, pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp);
3307 else
3308 pHlp->pfnPrintf(pHlp,
3309 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
3310 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
3311 "%scs={%04x base=%016RX64 limit=%08x flags=%08x} %sdr0=%08RX64 %sdr1=%08RX64\n"
3312 "%sds={%04x base=%016RX64 limit=%08x flags=%08x} %sdr2=%08RX64 %sdr3=%08RX64\n"
3313 "%ses={%04x base=%016RX64 limit=%08x flags=%08x} %sdr4=%08RX64 %sdr5=%08RX64\n"
3314 "%sfs={%04x base=%016RX64 limit=%08x flags=%08x} %sdr6=%08RX64 %sdr7=%08RX64\n"
3315 "%sgs={%04x base=%016RX64 limit=%08x flags=%08x} %scr0=%08RX64 %scr2=%08RX64\n"
3316 "%sss={%04x base=%016RX64 limit=%08x flags=%08x} %scr3=%08RX64 %scr4=%08RX64\n"
3317 "%sgdtr=%016RX64:%04x %sidtr=%016RX64:%04x %seflags=%08x\n"
3318 "%sldtr={%04x base=%08RX64 limit=%08x flags=%08x}\n"
3319 "%str ={%04x base=%08RX64 limit=%08x flags=%08x}\n"
3320 "%sSysEnter={cs=%04llx eip=%08llx esp=%08llx}\n"
3321 ,
3322 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
3323 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3324 pszPrefix, pCtxCore->cs.Sel, pCtx->cs.u64Base, pCtx->cs.u32Limit, pCtx->cs.Attr.u, pszPrefix, pCtx->dr[0], pszPrefix, pCtx->dr[1],
3325 pszPrefix, pCtxCore->ds.Sel, pCtx->ds.u64Base, pCtx->ds.u32Limit, pCtx->ds.Attr.u, pszPrefix, pCtx->dr[2], pszPrefix, pCtx->dr[3],
3326 pszPrefix, pCtxCore->es.Sel, pCtx->es.u64Base, pCtx->es.u32Limit, pCtx->es.Attr.u, pszPrefix, pCtx->dr[4], pszPrefix, pCtx->dr[5],
3327 pszPrefix, pCtxCore->fs.Sel, pCtx->fs.u64Base, pCtx->fs.u32Limit, pCtx->fs.Attr.u, pszPrefix, pCtx->dr[6], pszPrefix, pCtx->dr[7],
3328 pszPrefix, pCtxCore->gs.Sel, pCtx->gs.u64Base, pCtx->gs.u32Limit, pCtx->gs.Attr.u, pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2,
3329 pszPrefix, pCtxCore->ss.Sel, pCtx->ss.u64Base, pCtx->ss.u32Limit, pCtx->ss.Attr.u, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
3330 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->idtr.pIdt, pCtx->idtr.cbIdt, pszPrefix, efl,
3331 pszPrefix, pCtx->ldtr.Sel, pCtx->ldtr.u64Base, pCtx->ldtr.u32Limit, pCtx->ldtr.Attr.u,
3332 pszPrefix, pCtx->tr.Sel, pCtx->tr.u64Base, pCtx->tr.u32Limit, pCtx->tr.Attr.u,
3333 pszPrefix, pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp);
3334
3335 pHlp->pfnPrintf(pHlp, "%sxcr=%016RX64 %sxcr1=%016RX64 %sxss=%016RX64 (fXStateMask=%016RX64)\n",
3336 pszPrefix, pCtx->aXcr[0], pszPrefix, pCtx->aXcr[1],
3337 pszPrefix, UINT64_C(0) /** @todo XSS */, pCtx->fXStateMask);
3338 if (pCtx->CTX_SUFF(pXState))
3339 {
3340 PX86FXSTATE pFpuCtx = &pCtx->CTX_SUFF(pXState)->x87;
3341 pHlp->pfnPrintf(pHlp,
3342 "%sFCW=%04x %sFSW=%04x %sFTW=%04x %sFOP=%04x %sMXCSR=%08x %sMXCSR_MASK=%08x\n"
3343 "%sFPUIP=%08x %sCS=%04x %sRsrvd1=%04x %sFPUDP=%08x %sDS=%04x %sRsvrd2=%04x\n"
3344 ,
3345 pszPrefix, pFpuCtx->FCW, pszPrefix, pFpuCtx->FSW, pszPrefix, pFpuCtx->FTW, pszPrefix, pFpuCtx->FOP,
3346 pszPrefix, pFpuCtx->MXCSR, pszPrefix, pFpuCtx->MXCSR_MASK,
3347 pszPrefix, pFpuCtx->FPUIP, pszPrefix, pFpuCtx->CS, pszPrefix, pFpuCtx->Rsrvd1,
3348 pszPrefix, pFpuCtx->FPUDP, pszPrefix, pFpuCtx->DS, pszPrefix, pFpuCtx->Rsrvd2
3349 );
3350 /*
3351 * The FSAVE style memory image contains ST(0)-ST(7) at increasing addresses,
3352 * not (FP)R0-7 as Intel SDM suggests.
3353 */
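/*
 * Editor's note, a small worked example of the mapping below: with FSW.TOP = 6
 * the shift is 6, so ST(0) is printed from FPR6, ST(1) from FPR7 and ST(2)
 * wraps around to FPR0 ((iST + iShift) % 8).
 */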
3354 unsigned iShift = (pFpuCtx->FSW >> 11) & 7;
3355 for (unsigned iST = 0; iST < RT_ELEMENTS(pFpuCtx->aRegs); iST++)
3356 {
3357 unsigned iFPR = (iST + iShift) % RT_ELEMENTS(pFpuCtx->aRegs);
3358 unsigned uTag = (pFpuCtx->FTW >> (2 * iFPR)) & 3;
3359 char chSign = pFpuCtx->aRegs[iST].au16[4] & 0x8000 ? '-' : '+';
3360 unsigned iInteger = (unsigned)(pFpuCtx->aRegs[iST].au64[0] >> 63);
3361 uint64_t u64Fraction = pFpuCtx->aRegs[iST].au64[0] & UINT64_C(0x7fffffffffffffff);
3362 int iExponent = pFpuCtx->aRegs[iST].au16[4] & 0x7fff;
3363 iExponent -= 16383; /* subtract bias */
3364 /** @todo This isn't entirely correct and needs more work! */
3365 pHlp->pfnPrintf(pHlp,
3366 "%sST(%u)=%sFPR%u={%04RX16'%08RX32'%08RX32} t%d %c%u.%022llu * 2 ^ %d (*)",
3367 pszPrefix, iST, pszPrefix, iFPR,
3368 pFpuCtx->aRegs[iST].au16[4], pFpuCtx->aRegs[iST].au32[1], pFpuCtx->aRegs[iST].au32[0],
3369 uTag, chSign, iInteger, u64Fraction, iExponent);
3370 if (pFpuCtx->aRegs[iST].au16[5] || pFpuCtx->aRegs[iST].au16[6] || pFpuCtx->aRegs[iST].au16[7])
3371 pHlp->pfnPrintf(pHlp, " res={%04RX16,%04RX16,%04RX16}\n",
3372 pFpuCtx->aRegs[iST].au16[5], pFpuCtx->aRegs[iST].au16[6], pFpuCtx->aRegs[iST].au16[7]);
3373 else
3374 pHlp->pfnPrintf(pHlp, "\n");
3375 }
3376
3377 /* XMM/YMM/ZMM registers. */
3378 if (pCtx->fXStateMask & XSAVE_C_YMM)
3379 {
3380 PCX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_YMM_BIT, PCX86XSAVEYMMHI);
3381 if (!(pCtx->fXStateMask & XSAVE_C_ZMM_HI256))
3382 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
3383 pHlp->pfnPrintf(pHlp, "%sYMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
3384 pszPrefix, i, i < 10 ? " " : "",
3385 pYmmHiCtx->aYmmHi[i].au32[3],
3386 pYmmHiCtx->aYmmHi[i].au32[2],
3387 pYmmHiCtx->aYmmHi[i].au32[1],
3388 pYmmHiCtx->aYmmHi[i].au32[0],
3389 pFpuCtx->aXMM[i].au32[3],
3390 pFpuCtx->aXMM[i].au32[2],
3391 pFpuCtx->aXMM[i].au32[1],
3392 pFpuCtx->aXMM[i].au32[0]);
3393 else
3394 {
3395 PCX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_ZMM_HI256_BIT, PCX86XSAVEZMMHI256);
3396 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
3397 pHlp->pfnPrintf(pHlp,
3398 "%sZMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32''%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
3399 pszPrefix, i, i < 10 ? " " : "",
3400 pZmmHi256->aHi256Regs[i].au32[7],
3401 pZmmHi256->aHi256Regs[i].au32[6],
3402 pZmmHi256->aHi256Regs[i].au32[5],
3403 pZmmHi256->aHi256Regs[i].au32[4],
3404 pZmmHi256->aHi256Regs[i].au32[3],
3405 pZmmHi256->aHi256Regs[i].au32[2],
3406 pZmmHi256->aHi256Regs[i].au32[1],
3407 pZmmHi256->aHi256Regs[i].au32[0],
3408 pYmmHiCtx->aYmmHi[i].au32[3],
3409 pYmmHiCtx->aYmmHi[i].au32[2],
3410 pYmmHiCtx->aYmmHi[i].au32[1],
3411 pYmmHiCtx->aYmmHi[i].au32[0],
3412 pFpuCtx->aXMM[i].au32[3],
3413 pFpuCtx->aXMM[i].au32[2],
3414 pFpuCtx->aXMM[i].au32[1],
3415 pFpuCtx->aXMM[i].au32[0]);
3416
3417 PCX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_ZMM_16HI_BIT, PCX86XSAVEZMM16HI);
3418 for (unsigned i = 0; i < RT_ELEMENTS(pZmm16Hi->aRegs); i++)
3419 pHlp->pfnPrintf(pHlp,
3420 "%sZMM%u=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32''%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
3421 pszPrefix, i + 16,
3422 pZmm16Hi->aRegs[i].au32[15],
3423 pZmm16Hi->aRegs[i].au32[14],
3424 pZmm16Hi->aRegs[i].au32[13],
3425 pZmm16Hi->aRegs[i].au32[12],
3426 pZmm16Hi->aRegs[i].au32[11],
3427 pZmm16Hi->aRegs[i].au32[10],
3428 pZmm16Hi->aRegs[i].au32[9],
3429 pZmm16Hi->aRegs[i].au32[8],
3430 pZmm16Hi->aRegs[i].au32[7],
3431 pZmm16Hi->aRegs[i].au32[6],
3432 pZmm16Hi->aRegs[i].au32[5],
3433 pZmm16Hi->aRegs[i].au32[4],
3434 pZmm16Hi->aRegs[i].au32[3],
3435 pZmm16Hi->aRegs[i].au32[2],
3436 pZmm16Hi->aRegs[i].au32[1],
3437 pZmm16Hi->aRegs[i].au32[0]);
3438 }
3439 }
3440 else
3441 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
3442 pHlp->pfnPrintf(pHlp,
3443 i & 1
3444 ? "%sXMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32\n"
3445 : "%sXMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32 ",
3446 pszPrefix, i, i < 10 ? " " : "",
3447 pFpuCtx->aXMM[i].au32[3],
3448 pFpuCtx->aXMM[i].au32[2],
3449 pFpuCtx->aXMM[i].au32[1],
3450 pFpuCtx->aXMM[i].au32[0]);
3451
3452 if (pCtx->fXStateMask & XSAVE_C_OPMASK)
3453 {
3454 PCX86XSAVEOPMASK pOpMask = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_OPMASK_BIT, PCX86XSAVEOPMASK);
3455 for (unsigned i = 0; i < RT_ELEMENTS(pOpMask->aKRegs); i += 4)
3456 pHlp->pfnPrintf(pHlp, "%sK%u=%016RX64 %sK%u=%016RX64 %sK%u=%016RX64 %sK%u=%016RX64\n",
3457 pszPrefix, i + 0, pOpMask->aKRegs[i + 0],
3458 pszPrefix, i + 1, pOpMask->aKRegs[i + 1],
3459 pszPrefix, i + 2, pOpMask->aKRegs[i + 2],
3460 pszPrefix, i + 3, pOpMask->aKRegs[i + 3]);
3461 }
3462
3463 if (pCtx->fXStateMask & XSAVE_C_BNDREGS)
3464 {
3465 PCX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_BNDREGS_BIT, PCX86XSAVEBNDREGS);
3466 for (unsigned i = 0; i < RT_ELEMENTS(pBndRegs->aRegs); i += 2)
3467 pHlp->pfnPrintf(pHlp, "%sBNDREG%u=%016RX64/%016RX64 %sBNDREG%u=%016RX64/%016RX64\n",
3468 pszPrefix, i, pBndRegs->aRegs[i].uLowerBound, pBndRegs->aRegs[i].uUpperBound,
3469 pszPrefix, i + 1, pBndRegs->aRegs[i + 1].uLowerBound, pBndRegs->aRegs[i + 1].uUpperBound);
3470 }
3471
3472 if (pCtx->fXStateMask & XSAVE_C_BNDCSR)
3473 {
3474 PCX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_BNDCSR_BIT, PCX86XSAVEBNDCFG);
3475 pHlp->pfnPrintf(pHlp, "%sBNDCFG.CONFIG=%016RX64 %sBNDCFG.STATUS=%016RX64\n",
3476 pszPrefix, pBndCfg->fConfig, pszPrefix, pBndCfg->fStatus);
3477 }
3478
3479 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->au32RsrvdRest); i++)
3480 if (pFpuCtx->au32RsrvdRest[i])
3481 pHlp->pfnPrintf(pHlp, "%sRsrvdRest[%u]=%RX32 (offset=%#x)\n",
3482 pszPrefix, i, pFpuCtx->au32RsrvdRest[i], RT_UOFFSETOF_DYN(X86FXSTATE, au32RsrvdRest[i]) );
3483 }
3484
3485 pHlp->pfnPrintf(pHlp,
3486 "%sEFER =%016RX64\n"
3487 "%sPAT =%016RX64\n"
3488 "%sSTAR =%016RX64\n"
3489 "%sCSTAR =%016RX64\n"
3490 "%sLSTAR =%016RX64\n"
3491 "%sSFMASK =%016RX64\n"
3492 "%sKERNELGSBASE =%016RX64\n",
3493 pszPrefix, pCtx->msrEFER,
3494 pszPrefix, pCtx->msrPAT,
3495 pszPrefix, pCtx->msrSTAR,
3496 pszPrefix, pCtx->msrCSTAR,
3497 pszPrefix, pCtx->msrLSTAR,
3498 pszPrefix, pCtx->msrSFMASK,
3499 pszPrefix, pCtx->msrKERNELGSBASE);
3500 break;
3501 }
3502}
3503
3504
3505/**
3506 * Display all cpu states and any other cpum info.
3507 *
3508 * @param pVM The cross context VM structure.
3509 * @param pHlp The info helper functions.
3510 * @param pszArgs Arguments, ignored.
3511 */
3512static DECLCALLBACK(void) cpumR3InfoAll(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3513{
3514 cpumR3InfoGuest(pVM, pHlp, pszArgs);
3515 cpumR3InfoGuestInstr(pVM, pHlp, pszArgs);
3516 cpumR3InfoGuestHwvirt(pVM, pHlp, pszArgs);
3517 cpumR3InfoHyper(pVM, pHlp, pszArgs);
3518 cpumR3InfoHost(pVM, pHlp, pszArgs);
3519}
3520
3521
3522/**
3523 * Parses the info argument.
3524 *
3525 * The argument starts with 'verbose', 'terse' or 'default' and then
3526 * continues with the comment string.
3527 *
3528 * @param pszArgs The pointer to the argument string.
3529 * @param penmType Where to store the dump type request.
3530 * @param ppszComment Where to store the pointer to the comment string.
3531 */
3532static void cpumR3InfoParseArg(const char *pszArgs, CPUMDUMPTYPE *penmType, const char **ppszComment)
3533{
3534 if (!pszArgs)
3535 {
3536 *penmType = CPUMDUMPTYPE_DEFAULT;
3537 *ppszComment = "";
3538 }
3539 else
3540 {
3541 if (!strncmp(pszArgs, RT_STR_TUPLE("verbose")))
3542 {
3543 pszArgs += 7;
3544 *penmType = CPUMDUMPTYPE_VERBOSE;
3545 }
3546 else if (!strncmp(pszArgs, RT_STR_TUPLE("terse")))
3547 {
3548 pszArgs += 5;
3549 *penmType = CPUMDUMPTYPE_TERSE;
3550 }
3551 else if (!strncmp(pszArgs, RT_STR_TUPLE("default")))
3552 {
3553 pszArgs += 7;
3554 *penmType = CPUMDUMPTYPE_DEFAULT;
3555 }
3556 else
3557 *penmType = CPUMDUMPTYPE_DEFAULT;
3558 *ppszComment = RTStrStripL(pszArgs);
3559 }
3560}
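/*
 * Editor's note, example inputs for the parser above: "verbose rip=fault"
 * yields CPUMDUMPTYPE_VERBOSE with the comment "rip=fault", a NULL argument
 * yields CPUMDUMPTYPE_DEFAULT with an empty comment, and an unrecognized
 * prefix falls back to CPUMDUMPTYPE_DEFAULT with the whole (left-stripped)
 * string as the comment.
 */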
3561
3562
3563/**
3564 * Display the guest cpu state.
3565 *
3566 * @param pVM The cross context VM structure.
3567 * @param pHlp The info helper functions.
3568 * @param pszArgs Arguments.
3569 */
3570static DECLCALLBACK(void) cpumR3InfoGuest(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3571{
3572 CPUMDUMPTYPE enmType;
3573 const char *pszComment;
3574 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
3575
3576 PVMCPU pVCpu = VMMGetCpu(pVM);
3577 if (!pVCpu)
3578 pVCpu = pVM->apCpusR3[0];
3579
3580 pHlp->pfnPrintf(pHlp, "Guest CPUM (VCPU %d) state: %s\n", pVCpu->idCpu, pszComment);
3581
3582 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
3583 cpumR3InfoOne(pVM, pCtx, CPUMCTX2CORE(pCtx), pHlp, enmType, "");
3584}
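/*
 * Editor's note: this handler backs the "cpumguest" DBGF info item, so the
 * dump can be requested from the debugger console with e.g.
 * "info cpumguest verbose", or programmatically along the lines of the sketch
 * below (the actual registration is done elsewhere in this file):
 *
 *      DBGFR3Info(pVM->pUVM, "cpumguest", "verbose", NULL);
 */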
3585
3586
3587/**
3588 * Displays an SVM VMCB control area.
3589 *
3590 * @param pHlp The info helper functions.
3591 * @param pVmcbCtrl Pointer to a SVM VMCB controls area.
3592 * @param pszPrefix Caller specified string prefix.
3593 */
3594static void cpumR3InfoSvmVmcbCtrl(PCDBGFINFOHLP pHlp, PCSVMVMCBCTRL pVmcbCtrl, const char *pszPrefix)
3595{
3596 AssertReturnVoid(pHlp);
3597 AssertReturnVoid(pVmcbCtrl);
3598
3599 pHlp->pfnPrintf(pHlp, "%sCRX-read intercepts = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptRdCRx);
3600 pHlp->pfnPrintf(pHlp, "%sCRX-write intercepts = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptWrCRx);
3601 pHlp->pfnPrintf(pHlp, "%sDRX-read intercepts = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptRdDRx);
3602 pHlp->pfnPrintf(pHlp, "%sDRX-write intercepts = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptWrDRx);
3603 pHlp->pfnPrintf(pHlp, "%sException intercepts = %#RX32\n", pszPrefix, pVmcbCtrl->u32InterceptXcpt);
3604 pHlp->pfnPrintf(pHlp, "%sControl intercepts = %#RX64\n", pszPrefix, pVmcbCtrl->u64InterceptCtrl);
3605 pHlp->pfnPrintf(pHlp, "%sPause-filter threshold = %#RX16\n", pszPrefix, pVmcbCtrl->u16PauseFilterThreshold);
3606 pHlp->pfnPrintf(pHlp, "%sPause-filter count = %#RX16\n", pszPrefix, pVmcbCtrl->u16PauseFilterCount);
3607 pHlp->pfnPrintf(pHlp, "%sIOPM bitmap physaddr = %#RX64\n", pszPrefix, pVmcbCtrl->u64IOPMPhysAddr);
3608 pHlp->pfnPrintf(pHlp, "%sMSRPM bitmap physaddr = %#RX64\n", pszPrefix, pVmcbCtrl->u64MSRPMPhysAddr);
3609 pHlp->pfnPrintf(pHlp, "%sTSC offset = %#RX64\n", pszPrefix, pVmcbCtrl->u64TSCOffset);
3610 pHlp->pfnPrintf(pHlp, "%sTLB Control\n", pszPrefix);
3611 pHlp->pfnPrintf(pHlp, " %sASID = %#RX32\n", pszPrefix, pVmcbCtrl->TLBCtrl.n.u32ASID);
3612 pHlp->pfnPrintf(pHlp, " %sTLB-flush type = %u\n", pszPrefix, pVmcbCtrl->TLBCtrl.n.u8TLBFlush);
3613 pHlp->pfnPrintf(pHlp, "%sInterrupt Control\n", pszPrefix);
3614 pHlp->pfnPrintf(pHlp, " %sVTPR = %#RX8 (%u)\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u8VTPR, pVmcbCtrl->IntCtrl.n.u8VTPR);
3615 pHlp->pfnPrintf(pHlp, " %sVIRQ (Pending) = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VIrqPending);
3616 pHlp->pfnPrintf(pHlp, " %sVINTR vector = %#RX8\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u8VIntrVector);
3617 pHlp->pfnPrintf(pHlp, " %sVGIF = %u\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VGif);
3618 pHlp->pfnPrintf(pHlp, " %sVINTR priority = %#RX8\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u4VIntrPrio);
3619 pHlp->pfnPrintf(pHlp, " %sIgnore TPR = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1IgnoreTPR);
3620 pHlp->pfnPrintf(pHlp, " %sVINTR masking = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VIntrMasking);
3621 pHlp->pfnPrintf(pHlp, " %sVGIF enable = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VGifEnable);
3622 pHlp->pfnPrintf(pHlp, " %sAVIC enable = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1AvicEnable);
3623 pHlp->pfnPrintf(pHlp, "%sInterrupt Shadow\n", pszPrefix);
3624 pHlp->pfnPrintf(pHlp, " %sInterrupt shadow = %RTbool\n", pszPrefix, pVmcbCtrl->IntShadow.n.u1IntShadow);
3625 pHlp->pfnPrintf(pHlp, " %sGuest-interrupt Mask = %RTbool\n", pszPrefix, pVmcbCtrl->IntShadow.n.u1GuestIntMask);
3626 pHlp->pfnPrintf(pHlp, "%sExit Code = %#RX64\n", pszPrefix, pVmcbCtrl->u64ExitCode);
3627 pHlp->pfnPrintf(pHlp, "%sEXITINFO1 = %#RX64\n", pszPrefix, pVmcbCtrl->u64ExitInfo1);
3628 pHlp->pfnPrintf(pHlp, "%sEXITINFO2 = %#RX64\n", pszPrefix, pVmcbCtrl->u64ExitInfo2);
3629 pHlp->pfnPrintf(pHlp, "%sExit Interrupt Info\n", pszPrefix);
3630 pHlp->pfnPrintf(pHlp, " %sValid = %RTbool\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u1Valid);
3631 pHlp->pfnPrintf(pHlp, " %sVector = %#RX8 (%u)\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u8Vector, pVmcbCtrl->ExitIntInfo.n.u8Vector);
3632 pHlp->pfnPrintf(pHlp, " %sType = %u\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u3Type);
3633 pHlp->pfnPrintf(pHlp, " %sError-code valid = %RTbool\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u1ErrorCodeValid);
3634 pHlp->pfnPrintf(pHlp, " %sError-code = %#RX32\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u32ErrorCode);
3635 pHlp->pfnPrintf(pHlp, "%sNested paging and SEV\n", pszPrefix);
3636 pHlp->pfnPrintf(pHlp, " %sNested paging = %RTbool\n", pszPrefix, pVmcbCtrl->NestedPagingCtrl.n.u1NestedPaging);
3637 pHlp->pfnPrintf(pHlp, " %sSEV (Secure Encrypted VM) = %RTbool\n", pszPrefix, pVmcbCtrl->NestedPagingCtrl.n.u1Sev);
3638 pHlp->pfnPrintf(pHlp, " %sSEV-ES (Encrypted State) = %RTbool\n", pszPrefix, pVmcbCtrl->NestedPagingCtrl.n.u1SevEs);
3639 pHlp->pfnPrintf(pHlp, "%sEvent Inject\n", pszPrefix);
3640 pHlp->pfnPrintf(pHlp, " %sValid = %RTbool\n", pszPrefix, pVmcbCtrl->EventInject.n.u1Valid);
3641 pHlp->pfnPrintf(pHlp, " %sVector = %#RX32 (%u)\n", pszPrefix, pVmcbCtrl->EventInject.n.u8Vector, pVmcbCtrl->EventInject.n.u8Vector);
3642 pHlp->pfnPrintf(pHlp, " %sType = %u\n", pszPrefix, pVmcbCtrl->EventInject.n.u3Type);
3643 pHlp->pfnPrintf(pHlp, " %sError-code valid = %RTbool\n", pszPrefix, pVmcbCtrl->EventInject.n.u1ErrorCodeValid);
3644 pHlp->pfnPrintf(pHlp, " %sError-code = %#RX32\n", pszPrefix, pVmcbCtrl->EventInject.n.u32ErrorCode);
3645 pHlp->pfnPrintf(pHlp, "%sNested-paging CR3 = %#RX64\n", pszPrefix, pVmcbCtrl->u64NestedPagingCR3);
3646 pHlp->pfnPrintf(pHlp, "%sLBR Virtualization\n", pszPrefix);
3647 pHlp->pfnPrintf(pHlp, " %sLBR virt = %RTbool\n", pszPrefix, pVmcbCtrl->LbrVirt.n.u1LbrVirt);
3648 pHlp->pfnPrintf(pHlp, " %sVirt. VMSAVE/VMLOAD = %RTbool\n", pszPrefix, pVmcbCtrl->LbrVirt.n.u1VirtVmsaveVmload);
3649 pHlp->pfnPrintf(pHlp, "%sVMCB Clean Bits = %#RX32\n", pszPrefix, pVmcbCtrl->u32VmcbCleanBits);
3650 pHlp->pfnPrintf(pHlp, "%sNext-RIP = %#RX64\n", pszPrefix, pVmcbCtrl->u64NextRIP);
3651 pHlp->pfnPrintf(pHlp, "%sInstruction bytes fetched = %u\n", pszPrefix, pVmcbCtrl->cbInstrFetched);
3652 pHlp->pfnPrintf(pHlp, "%sInstruction bytes = %.*Rhxs\n", pszPrefix, sizeof(pVmcbCtrl->abInstr), pVmcbCtrl->abInstr);
3653 pHlp->pfnPrintf(pHlp, "%sAVIC\n", pszPrefix);
3654 pHlp->pfnPrintf(pHlp, " %sBar addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicBar.n.u40Addr);
3655 pHlp->pfnPrintf(pHlp, " %sBacking page addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicBackingPagePtr.n.u40Addr);
3656 pHlp->pfnPrintf(pHlp, " %sLogical table addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicLogicalTablePtr.n.u40Addr);
3657 pHlp->pfnPrintf(pHlp, " %sPhysical table addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicPhysicalTablePtr.n.u40Addr);
3658 pHlp->pfnPrintf(pHlp, " %sLast guest core Id = %u\n", pszPrefix, pVmcbCtrl->AvicPhysicalTablePtr.n.u8LastGuestCoreId);
3659}
3660
3661
3662/**
3663 * Helper for dumping the SVM VMCB selector registers.
3664 *
3665 * @param pHlp The info helper functions.
3666 * @param pSel Pointer to the SVM selector register.
3667 * @param pszName Name of the selector.
3668 * @param pszPrefix Caller specified string prefix.
3669 */
3670DECLINLINE(void) cpumR3InfoSvmVmcbSelReg(PCDBGFINFOHLP pHlp, PCSVMSELREG pSel, const char *pszName, const char *pszPrefix)
3671{
3672 /* The string width of 4 used below is to handle 'LDTR'. Change later if longer register names are used. */
3673 pHlp->pfnPrintf(pHlp, "%s%-4s = {%04x base=%016RX64 limit=%08x flags=%04x}\n", pszPrefix,
3674 pszName, pSel->u16Sel, pSel->u64Base, pSel->u32Limit, pSel->u16Attr);
3675}
3676
3677
3678/**
3679 * Helper for dumping the SVM VMCB GDTR/IDTR registers.
3680 *
3681 * @param pHlp The info helper functions.
3682 * @param pXdtr Pointer to the descriptor table register.
3683 * @param pszName Name of the descriptor table register.
3684 * @param pszPrefix Caller specified string prefix.
3685 */
3686DECLINLINE(void) cpumR3InfoSvmVmcbXdtr(PCDBGFINFOHLP pHlp, PCSVMXDTR pXdtr, const char *pszName, const char *pszPrefix)
3687{
3688 /* The string width of 4 used below is to cover 'GDTR', 'IDTR'. Change later if longer register names are used. */
3689 pHlp->pfnPrintf(pHlp, "%s%-4s = %016RX64:%04x\n", pszPrefix, pszName, pXdtr->u64Base, pXdtr->u32Limit);
3690}
3691
3692
3693/**
3694 * Displays an SVM VMCB state-save area.
3695 *
3696 * @param pHlp The info helper functions.
3697 * @param pVmcbStateSave Pointer to a SVM VMCB controls area.
3698 * @param pszPrefix Caller specified string prefix.
3699 */
3700static void cpumR3InfoSvmVmcbStateSave(PCDBGFINFOHLP pHlp, PCSVMVMCBSTATESAVE pVmcbStateSave, const char *pszPrefix)
3701{
3702 AssertReturnVoid(pHlp);
3703 AssertReturnVoid(pVmcbStateSave);
3704
3705 char szEFlags[80];
3706 cpumR3InfoFormatFlags(&szEFlags[0], pVmcbStateSave->u64RFlags);
3707
3708 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->CS, "CS", pszPrefix);
3709 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->SS, "SS", pszPrefix);
3710 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->ES, "ES", pszPrefix);
3711 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->DS, "DS", pszPrefix);
3712 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->FS, "FS", pszPrefix);
3713 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->GS, "GS", pszPrefix);
3714 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->LDTR, "LDTR", pszPrefix);
3715 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->TR, "TR", pszPrefix);
3716 cpumR3InfoSvmVmcbXdtr(pHlp, &pVmcbStateSave->GDTR, "GDTR", pszPrefix);
3717 cpumR3InfoSvmVmcbXdtr(pHlp, &pVmcbStateSave->IDTR, "IDTR", pszPrefix);
3718 pHlp->pfnPrintf(pHlp, "%sCPL = %u\n", pszPrefix, pVmcbStateSave->u8CPL);
3719 pHlp->pfnPrintf(pHlp, "%sEFER = %#RX64\n", pszPrefix, pVmcbStateSave->u64EFER);
3720 pHlp->pfnPrintf(pHlp, "%sCR4 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR4);
3721 pHlp->pfnPrintf(pHlp, "%sCR3 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR3);
3722 pHlp->pfnPrintf(pHlp, "%sCR0 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR0);
3723 pHlp->pfnPrintf(pHlp, "%sDR7 = %#RX64\n", pszPrefix, pVmcbStateSave->u64DR7);
3724 pHlp->pfnPrintf(pHlp, "%sDR6 = %#RX64\n", pszPrefix, pVmcbStateSave->u64DR6);
3725 pHlp->pfnPrintf(pHlp, "%sRFLAGS = %#RX64 %31s\n", pszPrefix, pVmcbStateSave->u64RFlags, szEFlags);
3726 pHlp->pfnPrintf(pHlp, "%sRIP = %#RX64\n", pszPrefix, pVmcbStateSave->u64RIP);
3727 pHlp->pfnPrintf(pHlp, "%sRSP = %#RX64\n", pszPrefix, pVmcbStateSave->u64RSP);
3728 pHlp->pfnPrintf(pHlp, "%sRAX = %#RX64\n", pszPrefix, pVmcbStateSave->u64RAX);
3729 pHlp->pfnPrintf(pHlp, "%sSTAR = %#RX64\n", pszPrefix, pVmcbStateSave->u64STAR);
3730 pHlp->pfnPrintf(pHlp, "%sLSTAR = %#RX64\n", pszPrefix, pVmcbStateSave->u64LSTAR);
3731 pHlp->pfnPrintf(pHlp, "%sCSTAR = %#RX64\n", pszPrefix, pVmcbStateSave->u64CSTAR);
3732 pHlp->pfnPrintf(pHlp, "%sSFMASK = %#RX64\n", pszPrefix, pVmcbStateSave->u64SFMASK);
3733 pHlp->pfnPrintf(pHlp, "%sKERNELGSBASE = %#RX64\n", pszPrefix, pVmcbStateSave->u64KernelGSBase);
3734 pHlp->pfnPrintf(pHlp, "%sSysEnter CS = %#RX64\n", pszPrefix, pVmcbStateSave->u64SysEnterCS);
3735 pHlp->pfnPrintf(pHlp, "%sSysEnter EIP = %#RX64\n", pszPrefix, pVmcbStateSave->u64SysEnterEIP);
3736 pHlp->pfnPrintf(pHlp, "%sSysEnter ESP = %#RX64\n", pszPrefix, pVmcbStateSave->u64SysEnterESP);
3737 pHlp->pfnPrintf(pHlp, "%sCR2 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR2);
3738 pHlp->pfnPrintf(pHlp, "%sPAT = %#RX64\n", pszPrefix, pVmcbStateSave->u64PAT);
3739 pHlp->pfnPrintf(pHlp, "%sDBGCTL = %#RX64\n", pszPrefix, pVmcbStateSave->u64DBGCTL);
3740 pHlp->pfnPrintf(pHlp, "%sBR_FROM = %#RX64\n", pszPrefix, pVmcbStateSave->u64BR_FROM);
3741 pHlp->pfnPrintf(pHlp, "%sBR_TO = %#RX64\n", pszPrefix, pVmcbStateSave->u64BR_TO);
3742 pHlp->pfnPrintf(pHlp, "%sLASTXCPT_FROM = %#RX64\n", pszPrefix, pVmcbStateSave->u64LASTEXCPFROM);
3743 pHlp->pfnPrintf(pHlp, "%sLASTXCPT_TO = %#RX64\n", pszPrefix, pVmcbStateSave->u64LASTEXCPTO);
3744}
3745
3746
3747/**
3748 * Displays a virtual-VMCS.
3749 *
3750 * @param pVCpu The cross context virtual CPU structure.
3751 * @param pHlp The info helper functions.
3752 * @param pVmcs Pointer to a virtual VMCS.
3753 * @param pszPrefix Caller specified string prefix.
3754 */
3755static void cpumR3InfoVmxVmcs(PVMCPU pVCpu, PCDBGFINFOHLP pHlp, PCVMXVVMCS pVmcs, const char *pszPrefix)
3756{
3757 AssertReturnVoid(pHlp);
3758 AssertReturnVoid(pVmcs);
3759
3760 /* The string width of -4 used in the macros below is to cover 'LDTR', 'GDTR' and 'IDTR'. */
3761#define CPUMVMX_DUMP_HOST_XDTR(a_pHlp, a_pVmcs, a_Seg, a_SegName, a_pszPrefix) \
3762 do { \
3763 (a_pHlp)->pfnPrintf((a_pHlp), " %s%-4s = {base=%016RX64}\n", \
3764 (a_pszPrefix), (a_SegName), (a_pVmcs)->u64Host##a_Seg##Base.u); \
3765 } while (0)
3766
3767#define CPUMVMX_DUMP_HOST_FS_GS_TR(a_pHlp, a_pVmcs, a_Seg, a_SegName, a_pszPrefix) \
3768 do { \
3769 (a_pHlp)->pfnPrintf((a_pHlp), " %s%-4s = {%04x base=%016RX64}\n", \
3770 (a_pszPrefix), (a_SegName), (a_pVmcs)->Host##a_Seg, (a_pVmcs)->u64Host##a_Seg##Base.u); \
3771 } while (0)
3772
3773#define CPUMVMX_DUMP_GUEST_SEGREG(a_pHlp, a_pVmcs, a_Seg, a_SegName, a_pszPrefix) \
3774 do { \
3775 (a_pHlp)->pfnPrintf((a_pHlp), " %s%-4s = {%04x base=%016RX64 limit=%08x flags=%04x}\n", \
3776 (a_pszPrefix), (a_SegName), (a_pVmcs)->Guest##a_Seg, (a_pVmcs)->u64Guest##a_Seg##Base.u, \
3777 (a_pVmcs)->u32Guest##a_Seg##Limit, (a_pVmcs)->u32Guest##a_Seg##Attr); \
3778 } while (0)
3779
3780#define CPUMVMX_DUMP_GUEST_XDTR(a_pHlp, a_pVmcs, a_Seg, a_SegName, a_pszPrefix) \
3781 do { \
3782 (a_pHlp)->pfnPrintf((a_pHlp), " %s%-4s = {base=%016RX64 limit=%08x}\n", \
3783 (a_pszPrefix), (a_SegName), (a_pVmcs)->u64Guest##a_Seg##Base.u, (a_pVmcs)->u32Guest##a_Seg##Limit); \
3784 } while (0)
3785
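   /* To make the macros above easier to follow: the a_Seg token is pasted into the VMCS field
      names, so a use like CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Cs, "cs", pszPrefix) further
      down expands to roughly the following (a sketch of the preprocessor output, nothing more):
          pHlp->pfnPrintf(pHlp, " %s%-4s = {%04x base=%016RX64 limit=%08x flags=%04x}\n",
                          pszPrefix, "cs", pVmcs->GuestCs, pVmcs->u64GuestCsBase.u,
                          pVmcs->u32GuestCsLimit, pVmcs->u32GuestCsAttr); */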
3786 /* Header. */
3787 {
3788 pHlp->pfnPrintf(pHlp, "%sHeader:\n", pszPrefix);
3789 pHlp->pfnPrintf(pHlp, " %sVMCS revision id = %#RX32\n", pszPrefix, pVmcs->u32VmcsRevId);
3790 pHlp->pfnPrintf(pHlp, " %sVMX-abort id = %#RX32 (%s)\n", pszPrefix, pVmcs->enmVmxAbort, VMXGetAbortDesc(pVmcs->enmVmxAbort));
3791 pHlp->pfnPrintf(pHlp, " %sVMCS state = %#x (%s)\n", pszPrefix, pVmcs->fVmcsState, VMXGetVmcsStateDesc(pVmcs->fVmcsState));
3792 }
3793
3794 /* Control fields. */
3795 {
3796 /* 16-bit. */
3797 pHlp->pfnPrintf(pHlp, "%sControl:\n", pszPrefix);
3798 pHlp->pfnPrintf(pHlp, " %sVPID = %#RX16\n", pszPrefix, pVmcs->u16Vpid);
3799 pHlp->pfnPrintf(pHlp, " %sPosted intr notify vector = %#RX16\n", pszPrefix, pVmcs->u16PostIntNotifyVector);
3800 pHlp->pfnPrintf(pHlp, " %sEPTP index = %#RX16\n", pszPrefix, pVmcs->u16EptpIndex);
3801
3802 /* 32-bit. */
3803 pHlp->pfnPrintf(pHlp, " %sPin ctls = %#RX32\n", pszPrefix, pVmcs->u32PinCtls);
3804 pHlp->pfnPrintf(pHlp, " %sProcessor ctls = %#RX32\n", pszPrefix, pVmcs->u32ProcCtls);
3805 pHlp->pfnPrintf(pHlp, " %sSecondary processor ctls = %#RX32\n", pszPrefix, pVmcs->u32ProcCtls2);
3806 pHlp->pfnPrintf(pHlp, " %sVM-exit ctls = %#RX32\n", pszPrefix, pVmcs->u32ExitCtls);
3807 pHlp->pfnPrintf(pHlp, " %sVM-entry ctls = %#RX32\n", pszPrefix, pVmcs->u32EntryCtls);
3808 pHlp->pfnPrintf(pHlp, " %sException bitmap = %#RX32\n", pszPrefix, pVmcs->u32XcptBitmap);
3809 pHlp->pfnPrintf(pHlp, " %sPage-fault mask = %#RX32\n", pszPrefix, pVmcs->u32XcptPFMask);
3810 pHlp->pfnPrintf(pHlp, " %sPage-fault match = %#RX32\n", pszPrefix, pVmcs->u32XcptPFMatch);
3811 pHlp->pfnPrintf(pHlp, " %sCR3-target count = %RU32\n", pszPrefix, pVmcs->u32Cr3TargetCount);
3812 pHlp->pfnPrintf(pHlp, " %sVM-exit MSR store count = %RU32\n", pszPrefix, pVmcs->u32ExitMsrStoreCount);
3813 pHlp->pfnPrintf(pHlp, " %sVM-exit MSR load count = %RU32\n", pszPrefix, pVmcs->u32ExitMsrLoadCount);
3814 pHlp->pfnPrintf(pHlp, " %sVM-entry MSR load count = %RU32\n", pszPrefix, pVmcs->u32EntryMsrLoadCount);
3815 pHlp->pfnPrintf(pHlp, " %sVM-entry interruption info = %#RX32\n", pszPrefix, pVmcs->u32EntryIntInfo);
3816 {
3817 uint32_t const fInfo = pVmcs->u32EntryIntInfo;
3818 uint8_t const uType = VMX_ENTRY_INT_INFO_TYPE(fInfo);
3819 pHlp->pfnPrintf(pHlp, " %sValid = %RTbool\n", pszPrefix, VMX_ENTRY_INT_INFO_IS_VALID(fInfo));
3820 pHlp->pfnPrintf(pHlp, " %sType = %#x (%s)\n", pszPrefix, uType, VMXGetEntryIntInfoTypeDesc(uType));
3821 pHlp->pfnPrintf(pHlp, " %sVector = %#x\n", pszPrefix, VMX_ENTRY_INT_INFO_VECTOR(fInfo));
3822 pHlp->pfnPrintf(pHlp, " %sNMI-unblocking-IRET = %RTbool\n", pszPrefix, VMX_ENTRY_INT_INFO_IS_NMI_UNBLOCK_IRET(fInfo));
3823 pHlp->pfnPrintf(pHlp, " %sError-code valid = %RTbool\n", pszPrefix, VMX_ENTRY_INT_INFO_IS_ERROR_CODE_VALID(fInfo));
3824 }
3825 pHlp->pfnPrintf(pHlp, " %sVM-entry xcpt error-code = %#RX32\n", pszPrefix, pVmcs->u32EntryXcptErrCode);
3826 pHlp->pfnPrintf(pHlp, " %sVM-entry instr length = %u byte(s)\n", pszPrefix, pVmcs->u32EntryInstrLen);
3827 pHlp->pfnPrintf(pHlp, " %sTPR threshold = %#RX32\n", pszPrefix, pVmcs->u32TprThreshold);
3828 pHlp->pfnPrintf(pHlp, " %sPLE gap = %#RX32\n", pszPrefix, pVmcs->u32PleGap);
3829 pHlp->pfnPrintf(pHlp, " %sPLE window = %#RX32\n", pszPrefix, pVmcs->u32PleWindow);
3830
3831 /* 64-bit. */
3832 pHlp->pfnPrintf(pHlp, " %sIO-bitmap A addr = %#RX64\n", pszPrefix, pVmcs->u64AddrIoBitmapA.u);
3833 pHlp->pfnPrintf(pHlp, " %sIO-bitmap B addr = %#RX64\n", pszPrefix, pVmcs->u64AddrIoBitmapB.u);
3834 pHlp->pfnPrintf(pHlp, " %sMSR-bitmap addr = %#RX64\n", pszPrefix, pVmcs->u64AddrMsrBitmap.u);
3835 pHlp->pfnPrintf(pHlp, " %sVM-exit MSR store addr = %#RX64\n", pszPrefix, pVmcs->u64AddrExitMsrStore.u);
3836 pHlp->pfnPrintf(pHlp, " %sVM-exit MSR load addr = %#RX64\n", pszPrefix, pVmcs->u64AddrExitMsrLoad.u);
3837 pHlp->pfnPrintf(pHlp, " %sVM-entry MSR load addr = %#RX64\n", pszPrefix, pVmcs->u64AddrEntryMsrLoad.u);
3838 pHlp->pfnPrintf(pHlp, " %sExecutive VMCS ptr = %#RX64\n", pszPrefix, pVmcs->u64ExecVmcsPtr.u);
3839 pHlp->pfnPrintf(pHlp, " %sPML addr = %#RX64\n", pszPrefix, pVmcs->u64AddrPml.u);
3840 pHlp->pfnPrintf(pHlp, " %sTSC offset = %#RX64\n", pszPrefix, pVmcs->u64TscOffset.u);
3841 pHlp->pfnPrintf(pHlp, " %sVirtual-APIC addr = %#RX64\n", pszPrefix, pVmcs->u64AddrVirtApic.u);
3842 pHlp->pfnPrintf(pHlp, " %sAPIC-access addr = %#RX64\n", pszPrefix, pVmcs->u64AddrApicAccess.u);
3843 pHlp->pfnPrintf(pHlp, " %sPosted-intr desc addr = %#RX64\n", pszPrefix, pVmcs->u64AddrPostedIntDesc.u);
3844 pHlp->pfnPrintf(pHlp, " %sVM-functions control = %#RX64\n", pszPrefix, pVmcs->u64VmFuncCtls.u);
3845 pHlp->pfnPrintf(pHlp, " %sEPTP ptr = %#RX64\n", pszPrefix, pVmcs->u64EptpPtr.u);
3846 pHlp->pfnPrintf(pHlp, " %sEOI-exit bitmap 0 addr = %#RX64\n", pszPrefix, pVmcs->u64EoiExitBitmap0.u);
3847 pHlp->pfnPrintf(pHlp, " %sEOI-exit bitmap 1 addr = %#RX64\n", pszPrefix, pVmcs->u64EoiExitBitmap1.u);
3848 pHlp->pfnPrintf(pHlp, " %sEOI-exit bitmap 2 addr = %#RX64\n", pszPrefix, pVmcs->u64EoiExitBitmap2.u);
3849 pHlp->pfnPrintf(pHlp, " %sEOI-exit bitmap 3 addr = %#RX64\n", pszPrefix, pVmcs->u64EoiExitBitmap3.u);
3850 pHlp->pfnPrintf(pHlp, " %sEPTP-list addr = %#RX64\n", pszPrefix, pVmcs->u64AddrEptpList.u);
3851 pHlp->pfnPrintf(pHlp, " %sVMREAD-bitmap addr = %#RX64\n", pszPrefix, pVmcs->u64AddrVmreadBitmap.u);
3852 pHlp->pfnPrintf(pHlp, " %sVMWRITE-bitmap addr = %#RX64\n", pszPrefix, pVmcs->u64AddrVmwriteBitmap.u);
3853 pHlp->pfnPrintf(pHlp, " %sVirt-Xcpt info addr = %#RX64\n", pszPrefix, pVmcs->u64AddrXcptVeInfo.u);
3854 pHlp->pfnPrintf(pHlp, " %sXSS-bitmap = %#RX64\n", pszPrefix, pVmcs->u64XssBitmap.u);
3855 pHlp->pfnPrintf(pHlp, " %sENCLS-exiting bitmap = %#RX64\n", pszPrefix, pVmcs->u64EnclsBitmap.u);
3856 pHlp->pfnPrintf(pHlp, " %sSPPT ptr = %#RX64\n", pszPrefix, pVmcs->u64SpptPtr.u);
3857 pHlp->pfnPrintf(pHlp, " %sTSC multiplier = %#RX64\n", pszPrefix, pVmcs->u64TscMultiplier.u);
3858
3859 /* Natural width. */
3860 pHlp->pfnPrintf(pHlp, " %sCR0 guest/host mask = %#RX64\n", pszPrefix, pVmcs->u64Cr0Mask.u);
3861 pHlp->pfnPrintf(pHlp, " %sCR4 guest/host mask = %#RX64\n", pszPrefix, pVmcs->u64Cr4Mask.u);
3862 pHlp->pfnPrintf(pHlp, " %sCR0 read shadow = %#RX64\n", pszPrefix, pVmcs->u64Cr0ReadShadow.u);
3863 pHlp->pfnPrintf(pHlp, " %sCR4 read shadow = %#RX64\n", pszPrefix, pVmcs->u64Cr4ReadShadow.u);
3864 pHlp->pfnPrintf(pHlp, " %sCR3-target 0 = %#RX64\n", pszPrefix, pVmcs->u64Cr3Target0.u);
3865 pHlp->pfnPrintf(pHlp, " %sCR3-target 1 = %#RX64\n", pszPrefix, pVmcs->u64Cr3Target1.u);
3866 pHlp->pfnPrintf(pHlp, " %sCR3-target 2 = %#RX64\n", pszPrefix, pVmcs->u64Cr3Target2.u);
3867 pHlp->pfnPrintf(pHlp, " %sCR3-target 3 = %#RX64\n", pszPrefix, pVmcs->u64Cr3Target3.u);
3868 }
3869
3870 /* Guest state. */
3871 {
3872 char szEFlags[80];
3873 cpumR3InfoFormatFlags(&szEFlags[0], pVmcs->u64GuestRFlags.u);
3874 pHlp->pfnPrintf(pHlp, "%sGuest state:\n", pszPrefix);
3875
3876 /* 16-bit. */
3877 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Cs, "cs", pszPrefix);
3878 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Ss, "ss", pszPrefix);
3879 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Es, "es", pszPrefix);
3880 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Ds, "ds", pszPrefix);
3881 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Fs, "fs", pszPrefix);
3882 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Gs, "gs", pszPrefix);
3883 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Ldtr, "ldtr", pszPrefix);
3884 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Tr, "tr", pszPrefix);
3885 CPUMVMX_DUMP_GUEST_XDTR(pHlp, pVmcs, Gdtr, "gdtr", pszPrefix);
3886 CPUMVMX_DUMP_GUEST_XDTR(pHlp, pVmcs, Idtr, "idtr", pszPrefix);
3887 pHlp->pfnPrintf(pHlp, " %sInterrupt status = %#RX16\n", pszPrefix, pVmcs->u16GuestIntStatus);
3888 pHlp->pfnPrintf(pHlp, " %sPML index = %#RX16\n", pszPrefix, pVmcs->u16PmlIndex);
3889
3890 /* 32-bit. */
3891 pHlp->pfnPrintf(pHlp, " %sInterruptibility state = %#RX32\n", pszPrefix, pVmcs->u32GuestIntrState);
3892 pHlp->pfnPrintf(pHlp, " %sActivity state = %#RX32\n", pszPrefix, pVmcs->u32GuestActivityState);
3893 pHlp->pfnPrintf(pHlp, " %sSMBASE = %#RX32\n", pszPrefix, pVmcs->u32GuestSmBase);
3894 pHlp->pfnPrintf(pHlp, " %sSysEnter CS = %#RX32\n", pszPrefix, pVmcs->u32GuestSysenterCS);
3895 pHlp->pfnPrintf(pHlp, " %sVMX-preemption timer value = %#RX32\n", pszPrefix, pVmcs->u32PreemptTimer);
3896
3897 /* 64-bit. */
3898 pHlp->pfnPrintf(pHlp, " %sVMCS link ptr = %#RX64\n", pszPrefix, pVmcs->u64VmcsLinkPtr.u);
3899 pHlp->pfnPrintf(pHlp, " %sDBGCTL = %#RX64\n", pszPrefix, pVmcs->u64GuestDebugCtlMsr.u);
3900 pHlp->pfnPrintf(pHlp, " %sPAT = %#RX64\n", pszPrefix, pVmcs->u64GuestPatMsr.u);
3901 pHlp->pfnPrintf(pHlp, " %sEFER = %#RX64\n", pszPrefix, pVmcs->u64GuestEferMsr.u);
3902 pHlp->pfnPrintf(pHlp, " %sPERFGLOBALCTRL = %#RX64\n", pszPrefix, pVmcs->u64GuestPerfGlobalCtlMsr.u);
3903 pHlp->pfnPrintf(pHlp, " %sPDPTE 0 = %#RX64\n", pszPrefix, pVmcs->u64GuestPdpte0.u);
3904 pHlp->pfnPrintf(pHlp, " %sPDPTE 1 = %#RX64\n", pszPrefix, pVmcs->u64GuestPdpte1.u);
3905 pHlp->pfnPrintf(pHlp, " %sPDPTE 2 = %#RX64\n", pszPrefix, pVmcs->u64GuestPdpte2.u);
3906 pHlp->pfnPrintf(pHlp, " %sPDPTE 3 = %#RX64\n", pszPrefix, pVmcs->u64GuestPdpte3.u);
3907 pHlp->pfnPrintf(pHlp, " %sBNDCFGS = %#RX64\n", pszPrefix, pVmcs->u64GuestBndcfgsMsr.u);
3908 pHlp->pfnPrintf(pHlp, " %sRTIT_CTL = %#RX64\n", pszPrefix, pVmcs->u64GuestRtitCtlMsr.u);
3909
3910 /* Natural width. */
3911 pHlp->pfnPrintf(pHlp, " %scr0 = %#RX64\n", pszPrefix, pVmcs->u64GuestCr0.u);
3912 pHlp->pfnPrintf(pHlp, " %scr3 = %#RX64\n", pszPrefix, pVmcs->u64GuestCr3.u);
3913 pHlp->pfnPrintf(pHlp, " %scr4 = %#RX64\n", pszPrefix, pVmcs->u64GuestCr4.u);
3914 pHlp->pfnPrintf(pHlp, " %sdr7 = %#RX64\n", pszPrefix, pVmcs->u64GuestDr7.u);
3915 pHlp->pfnPrintf(pHlp, " %srsp = %#RX64\n", pszPrefix, pVmcs->u64GuestRsp.u);
3916 pHlp->pfnPrintf(pHlp, " %srip = %#RX64\n", pszPrefix, pVmcs->u64GuestRip.u);
3917 pHlp->pfnPrintf(pHlp, " %srflags = %#RX64 %31s\n", pszPrefix, pVmcs->u64GuestRFlags.u, szEFlags);
3918 pHlp->pfnPrintf(pHlp, " %sPending debug xcpts = %#RX64\n", pszPrefix, pVmcs->u64GuestPendingDbgXcpts.u);
3919 pHlp->pfnPrintf(pHlp, " %sSysEnter ESP = %#RX64\n", pszPrefix, pVmcs->u64GuestSysenterEsp.u);
3920 pHlp->pfnPrintf(pHlp, " %sSysEnter EIP = %#RX64\n", pszPrefix, pVmcs->u64GuestSysenterEip.u);
3921 }
3922
3923 /* Host state. */
3924 {
3925 pHlp->pfnPrintf(pHlp, "%sHost state:\n", pszPrefix);
3926
3927 /* 16-bit. */
3928 pHlp->pfnPrintf(pHlp, " %scs = %#RX16\n", pszPrefix, pVmcs->HostCs);
3929 pHlp->pfnPrintf(pHlp, " %sss = %#RX16\n", pszPrefix, pVmcs->HostSs);
3930 pHlp->pfnPrintf(pHlp, " %sds = %#RX16\n", pszPrefix, pVmcs->HostDs);
3931 pHlp->pfnPrintf(pHlp, " %ses = %#RX16\n", pszPrefix, pVmcs->HostEs);
3932 CPUMVMX_DUMP_HOST_FS_GS_TR(pHlp, pVmcs, Fs, "fs", pszPrefix);
3933 CPUMVMX_DUMP_HOST_FS_GS_TR(pHlp, pVmcs, Gs, "gs", pszPrefix);
3934 CPUMVMX_DUMP_HOST_FS_GS_TR(pHlp, pVmcs, Tr, "tr", pszPrefix);
3935 CPUMVMX_DUMP_HOST_XDTR(pHlp, pVmcs, Gdtr, "gdtr", pszPrefix);
3936 CPUMVMX_DUMP_HOST_XDTR(pHlp, pVmcs, Idtr, "idtr", pszPrefix);
3937
3938 /* 32-bit. */
3939 pHlp->pfnPrintf(pHlp, " %sSysEnter CS = %#RX32\n", pszPrefix, pVmcs->u32HostSysenterCs);
3940
3941 /* 64-bit. */
3942 pHlp->pfnPrintf(pHlp, " %sEFER = %#RX64\n", pszPrefix, pVmcs->u64HostEferMsr.u);
3943 pHlp->pfnPrintf(pHlp, " %sPAT = %#RX64\n", pszPrefix, pVmcs->u64HostPatMsr.u);
3944 pHlp->pfnPrintf(pHlp, " %sPERFGLOBALCTRL = %#RX64\n", pszPrefix, pVmcs->u64HostPerfGlobalCtlMsr.u);
3945
3946 /* Natural width. */
3947 pHlp->pfnPrintf(pHlp, " %scr0 = %#RX64\n", pszPrefix, pVmcs->u64HostCr0.u);
3948 pHlp->pfnPrintf(pHlp, " %scr3 = %#RX64\n", pszPrefix, pVmcs->u64HostCr3.u);
3949 pHlp->pfnPrintf(pHlp, " %scr4 = %#RX64\n", pszPrefix, pVmcs->u64HostCr4.u);
3950 pHlp->pfnPrintf(pHlp, " %sSysEnter ESP = %#RX64\n", pszPrefix, pVmcs->u64HostSysenterEsp.u);
3951 pHlp->pfnPrintf(pHlp, " %sSysEnter EIP = %#RX64\n", pszPrefix, pVmcs->u64HostSysenterEip.u);
3952 pHlp->pfnPrintf(pHlp, " %srsp = %#RX64\n", pszPrefix, pVmcs->u64HostRsp.u);
3953 pHlp->pfnPrintf(pHlp, " %srip = %#RX64\n", pszPrefix, pVmcs->u64HostRip.u);
3954 }
3955
3956 /* Read-only fields. */
3957 {
3958 pHlp->pfnPrintf(pHlp, "%sRead-only data fields:\n", pszPrefix);
3959
3960 /* 16-bit (none currently). */
3961
3962 /* 32-bit. */
3963 pHlp->pfnPrintf(pHlp, " %sExit reason = %u (%s)\n", pszPrefix, pVmcs->u32RoExitReason, HMGetVmxExitName(pVmcs->u32RoExitReason));
3964 pHlp->pfnPrintf(pHlp, " %sExit qualification = %#RX64\n", pszPrefix, pVmcs->u64RoExitQual.u);
3965 pHlp->pfnPrintf(pHlp, " %sVM-instruction error = %#RX32\n", pszPrefix, pVmcs->u32RoVmInstrError);
3966 pHlp->pfnPrintf(pHlp, " %sVM-exit intr info = %#RX32\n", pszPrefix, pVmcs->u32RoExitIntInfo);
3967 {
3968 uint32_t const fInfo = pVmcs->u32RoExitIntInfo;
3969 uint8_t const uType = VMX_EXIT_INT_INFO_TYPE(fInfo);
3970 pHlp->pfnPrintf(pHlp, " %sValid = %RTbool\n", pszPrefix, VMX_EXIT_INT_INFO_IS_VALID(fInfo));
3971 pHlp->pfnPrintf(pHlp, " %sType = %#x (%s)\n", pszPrefix, uType, VMXGetExitIntInfoTypeDesc(uType));
3972 pHlp->pfnPrintf(pHlp, " %sVector = %#x\n", pszPrefix, VMX_EXIT_INT_INFO_VECTOR(fInfo));
3973 pHlp->pfnPrintf(pHlp, " %sNMI-unblocking-IRET = %RTbool\n", pszPrefix, VMX_EXIT_INT_INFO_IS_NMI_UNBLOCK_IRET(fInfo));
3974 pHlp->pfnPrintf(pHlp, " %sError-code valid = %RTbool\n", pszPrefix, VMX_EXIT_INT_INFO_IS_ERROR_CODE_VALID(fInfo));
3975 }
3976 pHlp->pfnPrintf(pHlp, " %sVM-exit intr error-code = %#RX32\n", pszPrefix, pVmcs->u32RoExitIntErrCode);
3977 pHlp->pfnPrintf(pHlp, " %sIDT-vectoring info = %#RX32\n", pszPrefix, pVmcs->u32RoIdtVectoringInfo);
3978 {
3979 uint32_t const fInfo = pVmcs->u32RoIdtVectoringInfo;
3980 uint8_t const uType = VMX_IDT_VECTORING_INFO_TYPE(fInfo);
3981 pHlp->pfnPrintf(pHlp, " %sValid = %RTbool\n", pszPrefix, VMX_IDT_VECTORING_INFO_IS_VALID(fInfo));
3982 pHlp->pfnPrintf(pHlp, " %sType = %#x (%s)\n", pszPrefix, uType, VMXGetIdtVectoringInfoTypeDesc(uType));
3983 pHlp->pfnPrintf(pHlp, " %sVector = %#x\n", pszPrefix, VMX_IDT_VECTORING_INFO_VECTOR(fInfo));
3984 pHlp->pfnPrintf(pHlp, " %sError-code valid = %RTbool\n", pszPrefix, VMX_IDT_VECTORING_INFO_IS_ERROR_CODE_VALID(fInfo));
3985 }
3986 pHlp->pfnPrintf(pHlp, " %sIDT-vectoring error-code = %#RX32\n", pszPrefix, pVmcs->u32RoIdtVectoringErrCode);
3987 pHlp->pfnPrintf(pHlp, " %sVM-exit instruction length = %u byte(s)\n", pszPrefix, pVmcs->u32RoExitInstrLen);
3988 pHlp->pfnPrintf(pHlp, " %sVM-exit instruction info = %#RX32\n", pszPrefix, pVmcs->u32RoExitInstrInfo);
3989
3990 /* 64-bit. */
3991 pHlp->pfnPrintf(pHlp, " %sGuest-physical addr = %#RX64\n", pszPrefix, pVmcs->u64RoGuestPhysAddr.u);
3992
3993 /* Natural width. */
3994 pHlp->pfnPrintf(pHlp, " %sI/O RCX = %#RX64\n", pszPrefix, pVmcs->u64RoIoRcx.u);
3995 pHlp->pfnPrintf(pHlp, " %sI/O RSI = %#RX64\n", pszPrefix, pVmcs->u64RoIoRsi.u);
3996 pHlp->pfnPrintf(pHlp, " %sI/O RDI = %#RX64\n", pszPrefix, pVmcs->u64RoIoRdi.u);
3997 pHlp->pfnPrintf(pHlp, " %sI/O RIP = %#RX64\n", pszPrefix, pVmcs->u64RoIoRip.u);
3998 pHlp->pfnPrintf(pHlp, " %sGuest-linear addr = %#RX64\n", pszPrefix, pVmcs->u64RoGuestLinearAddr.u);
3999 }
4000
4001#ifdef DEBUG_ramshankar
4002 if (pVmcs->u32ProcCtls & VMX_PROC_CTLS_USE_TPR_SHADOW)
4003 {
4004 void *pvPage = RTMemTmpAllocZ(VMX_V_VIRT_APIC_SIZE);
4005 Assert(pvPage);
4006 RTGCPHYS const GCPhysVirtApic = pVmcs->u64AddrVirtApic.u;
4007 int rc = PGMPhysSimpleReadGCPhys(pVCpu->CTX_SUFF(pVM), pvPage, GCPhysVirtApic, VMX_V_VIRT_APIC_SIZE);
4008 if (RT_SUCCESS(rc))
4009 {
4010 pHlp->pfnPrintf(pHlp, " %sVirtual-APIC page\n", pszPrefix);
4011 pHlp->pfnPrintf(pHlp, "%.*Rhxs\n", VMX_V_VIRT_APIC_SIZE, pvPage);
4012 pHlp->pfnPrintf(pHlp, "\n");
4013 }
4014 RTMemTmpFree(pvPage);
4015 }
4016#else
4017 NOREF(pVCpu);
4018#endif
4019
4020#undef CPUMVMX_DUMP_HOST_XDTR
4021#undef CPUMVMX_DUMP_HOST_FS_GS_TR
4022#undef CPUMVMX_DUMP_GUEST_SEGREG
4023#undef CPUMVMX_DUMP_GUEST_XDTR
4024}
4025
4026
4027/**
4028 * Display the guest's hardware-virtualization cpu state.
4029 *
4030 * @param pVM The cross context VM structure.
4031 * @param pHlp The info helper functions.
4032 * @param pszArgs Arguments, ignored.
4033 */
4034static DECLCALLBACK(void) cpumR3InfoGuestHwvirt(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
4035{
4036 RT_NOREF(pszArgs);
4037
4038 PVMCPU pVCpu = VMMGetCpu(pVM);
4039 if (!pVCpu)
4040 pVCpu = pVM->apCpusR3[0];
4041
4042 /*
4043 * Figure out what to dump.
4044 */
4045 /** @todo perhaps make this configurable through pszArgs, depending on how much
4046 * noise we wish to accept when nested hwvirt. isn't used. */
4047#define CPUMHWVIRTDUMP_NONE (0)
4048#define CPUMHWVIRTDUMP_SVM RT_BIT(0)
4049#define CPUMHWVIRTDUMP_VMX RT_BIT(1)
4050#define CPUMHWVIRTDUMP_COMMON RT_BIT(2)
4051#define CPUMHWVIRTDUMP_LAST CPUMHWVIRTDUMP_VMX
4052
4053 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
4054 static const char *const s_aHwvirtModes[] = { "No/inactive", "SVM", "VMX", "Common" };
4055 bool const fSvm = pVM->cpum.s.GuestFeatures.fSvm;
4056 bool const fVmx = pVM->cpum.s.GuestFeatures.fVmx;
4057 uint8_t const idxHwvirtState = fSvm ? CPUMHWVIRTDUMP_SVM : (fVmx ? CPUMHWVIRTDUMP_VMX : CPUMHWVIRTDUMP_NONE);
4058 AssertCompile(CPUMHWVIRTDUMP_LAST <= RT_ELEMENTS(s_aHwvirtModes));
4059 Assert(idxHwvirtState < RT_ELEMENTS(s_aHwvirtModes));
4060 const char *pcszHwvirtMode = s_aHwvirtModes[idxHwvirtState];
4061 uint32_t fDumpState = idxHwvirtState | CPUMHWVIRTDUMP_COMMON;
4062
4063 /*
4064 * Dump it.
4065 */
4066 pHlp->pfnPrintf(pHlp, "VCPU[%u] hardware virtualization state:\n", pVCpu->idCpu);
4067
4068 if (fDumpState & CPUMHWVIRTDUMP_COMMON)
4069 pHlp->pfnPrintf(pHlp, "fLocalForcedActions = %#RX32\n", pCtx->hwvirt.fLocalForcedActions);
4070
4071 pHlp->pfnPrintf(pHlp, "%s hwvirt state%s\n", pcszHwvirtMode, (fDumpState & (CPUMHWVIRTDUMP_SVM | CPUMHWVIRTDUMP_VMX)) ?
4072 ":" : "");
4073 if (fDumpState & CPUMHWVIRTDUMP_SVM)
4074 {
4075 pHlp->pfnPrintf(pHlp, " fGif = %RTbool\n", pCtx->hwvirt.fGif);
4076
4077 char szEFlags[80];
4078 cpumR3InfoFormatFlags(&szEFlags[0], pCtx->hwvirt.svm.HostState.rflags.u);
4079 pHlp->pfnPrintf(pHlp, " uMsrHSavePa = %#RX64\n", pCtx->hwvirt.svm.uMsrHSavePa);
4080 pHlp->pfnPrintf(pHlp, " GCPhysVmcb = %#RGp\n", pCtx->hwvirt.svm.GCPhysVmcb);
4081 pHlp->pfnPrintf(pHlp, " VmcbCtrl:\n");
4082 cpumR3InfoSvmVmcbCtrl(pHlp, &pCtx->hwvirt.svm.pVmcbR3->ctrl, " " /* pszPrefix */);
4083 pHlp->pfnPrintf(pHlp, " VmcbStateSave:\n");
4084 cpumR3InfoSvmVmcbStateSave(pHlp, &pCtx->hwvirt.svm.pVmcbR3->guest, " " /* pszPrefix */);
4085 pHlp->pfnPrintf(pHlp, " HostState:\n");
4086 pHlp->pfnPrintf(pHlp, " uEferMsr = %#RX64\n", pCtx->hwvirt.svm.HostState.uEferMsr);
4087 pHlp->pfnPrintf(pHlp, " uCr0 = %#RX64\n", pCtx->hwvirt.svm.HostState.uCr0);
4088 pHlp->pfnPrintf(pHlp, " uCr4 = %#RX64\n", pCtx->hwvirt.svm.HostState.uCr4);
4089 pHlp->pfnPrintf(pHlp, " uCr3 = %#RX64\n", pCtx->hwvirt.svm.HostState.uCr3);
4090 pHlp->pfnPrintf(pHlp, " uRip = %#RX64\n", pCtx->hwvirt.svm.HostState.uRip);
4091 pHlp->pfnPrintf(pHlp, " uRsp = %#RX64\n", pCtx->hwvirt.svm.HostState.uRsp);
4092 pHlp->pfnPrintf(pHlp, " uRax = %#RX64\n", pCtx->hwvirt.svm.HostState.uRax);
4093 pHlp->pfnPrintf(pHlp, " rflags = %#RX64 %31s\n", pCtx->hwvirt.svm.HostState.rflags.u64, szEFlags);
4094 PCPUMSELREG pSel = &pCtx->hwvirt.svm.HostState.es;
4095 pHlp->pfnPrintf(pHlp, " es = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
4096 pSel->Sel, pSel->u64Base, pSel->u32Limit, pSel->Attr.u);
4097 pSel = &pCtx->hwvirt.svm.HostState.cs;
4098 pHlp->pfnPrintf(pHlp, " cs = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
4099 pSel->Sel, pSel->u64Base, pSel->u32Limit, pSel->Attr.u);
4100 pSel = &pCtx->hwvirt.svm.HostState.ss;
4101 pHlp->pfnPrintf(pHlp, " ss = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
4102 pSel->Sel, pSel->u64Base, pSel->u32Limit, pSel->Attr.u);
4103 pSel = &pCtx->hwvirt.svm.HostState.ds;
4104 pHlp->pfnPrintf(pHlp, " ds = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
4105 pSel->Sel, pSel->u64Base, pSel->u32Limit, pSel->Attr.u);
4106 pHlp->pfnPrintf(pHlp, " gdtr = %016RX64:%04x\n", pCtx->hwvirt.svm.HostState.gdtr.pGdt,
4107 pCtx->hwvirt.svm.HostState.gdtr.cbGdt);
4108 pHlp->pfnPrintf(pHlp, " idtr = %016RX64:%04x\n", pCtx->hwvirt.svm.HostState.idtr.pIdt,
4109 pCtx->hwvirt.svm.HostState.idtr.cbIdt);
4110 pHlp->pfnPrintf(pHlp, " cPauseFilter = %RU16\n", pCtx->hwvirt.svm.cPauseFilter);
4111 pHlp->pfnPrintf(pHlp, " cPauseFilterThreshold = %RU32\n", pCtx->hwvirt.svm.cPauseFilterThreshold);
4112 pHlp->pfnPrintf(pHlp, " fInterceptEvents = %u\n", pCtx->hwvirt.svm.fInterceptEvents);
4113 pHlp->pfnPrintf(pHlp, " pvMsrBitmapR3 = %p\n", pCtx->hwvirt.svm.pvMsrBitmapR3);
4114 pHlp->pfnPrintf(pHlp, " pvMsrBitmapR0 = %RKv\n", pCtx->hwvirt.svm.pvMsrBitmapR0);
4115 pHlp->pfnPrintf(pHlp, " pvIoBitmapR3 = %p\n", pCtx->hwvirt.svm.pvIoBitmapR3);
4116 pHlp->pfnPrintf(pHlp, " pvIoBitmapR0 = %RKv\n", pCtx->hwvirt.svm.pvIoBitmapR0);
4117 }
4118
4119 if (fDumpState & CPUMHWVIRTDUMP_VMX)
4120 {
4121 pHlp->pfnPrintf(pHlp, " GCPhysVmxon = %#RGp\n", pCtx->hwvirt.vmx.GCPhysVmxon);
4122 pHlp->pfnPrintf(pHlp, " GCPhysVmcs = %#RGp\n", pCtx->hwvirt.vmx.GCPhysVmcs);
4123 pHlp->pfnPrintf(pHlp, " GCPhysShadowVmcs = %#RGp\n", pCtx->hwvirt.vmx.GCPhysShadowVmcs);
4124 pHlp->pfnPrintf(pHlp, " enmDiag = %u (%s)\n", pCtx->hwvirt.vmx.enmDiag, HMGetVmxDiagDesc(pCtx->hwvirt.vmx.enmDiag));
4125 pHlp->pfnPrintf(pHlp, " uDiagAux = %#RX64\n", pCtx->hwvirt.vmx.uDiagAux);
4126 pHlp->pfnPrintf(pHlp, " enmAbort = %u (%s)\n", pCtx->hwvirt.vmx.enmAbort, VMXGetAbortDesc(pCtx->hwvirt.vmx.enmAbort));
4127 pHlp->pfnPrintf(pHlp, " uAbortAux = %u (%#x)\n", pCtx->hwvirt.vmx.uAbortAux, pCtx->hwvirt.vmx.uAbortAux);
4128 pHlp->pfnPrintf(pHlp, " fInVmxRootMode = %RTbool\n", pCtx->hwvirt.vmx.fInVmxRootMode);
4129 pHlp->pfnPrintf(pHlp, " fInVmxNonRootMode = %RTbool\n", pCtx->hwvirt.vmx.fInVmxNonRootMode);
4130 pHlp->pfnPrintf(pHlp, " fInterceptEvents = %RTbool\n", pCtx->hwvirt.vmx.fInterceptEvents);
4131 pHlp->pfnPrintf(pHlp, " fNmiUnblockingIret = %RTbool\n", pCtx->hwvirt.vmx.fNmiUnblockingIret);
4132 pHlp->pfnPrintf(pHlp, " uFirstPauseLoopTick = %RX64\n", pCtx->hwvirt.vmx.uFirstPauseLoopTick);
4133 pHlp->pfnPrintf(pHlp, " uPrevPauseTick = %RX64\n", pCtx->hwvirt.vmx.uPrevPauseTick);
4134 pHlp->pfnPrintf(pHlp, " uEntryTick = %RX64\n", pCtx->hwvirt.vmx.uEntryTick);
4135 pHlp->pfnPrintf(pHlp, " offVirtApicWrite = %#RX16\n", pCtx->hwvirt.vmx.offVirtApicWrite);
4136 pHlp->pfnPrintf(pHlp, " fVirtNmiBlocking = %RTbool\n", pCtx->hwvirt.vmx.fVirtNmiBlocking);
4137 pHlp->pfnPrintf(pHlp, " VMCS cache:\n");
4138 cpumR3InfoVmxVmcs(pVCpu, pHlp, pCtx->hwvirt.vmx.pVmcsR3, " " /* pszPrefix */);
4139 }
4140
4141#undef CPUMHWVIRTDUMP_NONE
4142#undef CPUMHWVIRTDUMP_COMMON
4143#undef CPUMHWVIRTDUMP_SVM
4144#undef CPUMHWVIRTDUMP_VMX
4145#undef CPUMHWVIRTDUMP_LAST
4146#undef CPUMHWVIRTDUMP_ALL
4147}
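/* A minimal sketch of triggering this dump programmatically. It assumes the callback above is
   registered as a DBGF info handler under the name "cpumhwvirt"; the registration itself is not
   part of this excerpt, so treat the handler name as an assumption:
       DBGFR3Info(pVM->pUVM, "cpumhwvirt", NULL, DBGFR3InfoLogRelHlp());
   The handler ignores pszArgs, so passing NULL for the argument string is fine. */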
4148
4149/**
4150 * Display the current guest instruction.
4151 *
4152 * @param pVM The cross context VM structure.
4153 * @param pHlp The info helper functions.
4154 * @param pszArgs Arguments, ignored.
4155 */
4156static DECLCALLBACK(void) cpumR3InfoGuestInstr(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
4157{
4158 NOREF(pszArgs);
4159
4160 PVMCPU pVCpu = VMMGetCpu(pVM);
4161 if (!pVCpu)
4162 pVCpu = pVM->apCpusR3[0];
4163
4164 char szInstruction[256];
4165 szInstruction[0] = '\0';
4166 DBGFR3DisasInstrCurrent(pVCpu, szInstruction, sizeof(szInstruction));
4167 pHlp->pfnPrintf(pHlp, "\nCPUM%u: %s\n\n", pVCpu->idCpu, szInstruction);
4168}
4169
4170
4171/**
4172 * Display the hypervisor cpu state.
4173 *
4174 * @param pVM The cross context VM structure.
4175 * @param pHlp The info helper functions.
4176 * @param pszArgs Arguments, ignored.
4177 */
4178static DECLCALLBACK(void) cpumR3InfoHyper(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
4179{
4180 PVMCPU pVCpu = VMMGetCpu(pVM);
4181 if (!pVCpu)
4182 pVCpu = pVM->apCpusR3[0];
4183
4184 CPUMDUMPTYPE enmType;
4185 const char *pszComment;
4186 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
4187 pHlp->pfnPrintf(pHlp, "Hypervisor CPUM state: %s\n", pszComment);
4188
4189 pHlp->pfnPrintf(pHlp,
4190 ".dr0=%016RX64 .dr1=%016RX64 .dr2=%016RX64 .dr3=%016RX64\n"
4191 ".dr4=%016RX64 .dr5=%016RX64 .dr6=%016RX64 .dr7=%016RX64\n",
4192 pVCpu->cpum.s.Hyper.dr[0], pVCpu->cpum.s.Hyper.dr[1], pVCpu->cpum.s.Hyper.dr[2], pVCpu->cpum.s.Hyper.dr[3],
4193 pVCpu->cpum.s.Hyper.dr[4], pVCpu->cpum.s.Hyper.dr[5], pVCpu->cpum.s.Hyper.dr[6], pVCpu->cpum.s.Hyper.dr[7]);
4194 pHlp->pfnPrintf(pHlp, "CR4OrMask=%#x CR4AndMask=%#x\n", pVM->cpum.s.CR4.OrMask, pVM->cpum.s.CR4.AndMask);
4195}
4196
4197
4198/**
4199 * Display the host cpu state.
4200 *
4201 * @param pVM The cross context VM structure.
4202 * @param pHlp The info helper functions.
4203 * @param pszArgs Arguments, ignored.
4204 */
4205static DECLCALLBACK(void) cpumR3InfoHost(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
4206{
4207 CPUMDUMPTYPE enmType;
4208 const char *pszComment;
4209 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
4210 pHlp->pfnPrintf(pHlp, "Host CPUM state: %s\n", pszComment);
4211
4212 PVMCPU pVCpu = VMMGetCpu(pVM);
4213 if (!pVCpu)
4214 pVCpu = pVM->apCpusR3[0];
4215 PCPUMHOSTCTX pCtx = &pVCpu->cpum.s.Host;
4216
4217 /*
4218 * Format the EFLAGS.
4219 */
4220 uint64_t efl = pCtx->rflags;
4221 char szEFlags[80];
4222 cpumR3InfoFormatFlags(&szEFlags[0], efl);
4223
4224 /*
4225 * Format the registers.
4226 */
4227 pHlp->pfnPrintf(pHlp,
4228 "rax=xxxxxxxxxxxxxxxx rbx=%016RX64 rcx=xxxxxxxxxxxxxxxx\n"
4229 "rdx=xxxxxxxxxxxxxxxx rsi=%016RX64 rdi=%016RX64\n"
4230 "rip=xxxxxxxxxxxxxxxx rsp=%016RX64 rbp=%016RX64\n"
4231 " r8=xxxxxxxxxxxxxxxx r9=xxxxxxxxxxxxxxxx r10=%016RX64\n"
4232 "r11=%016RX64 r12=%016RX64 r13=%016RX64\n"
4233 "r14=%016RX64 r15=%016RX64\n"
4234 "iopl=%d %31s\n"
4235 "cs=%04x ds=%04x es=%04x fs=%04x gs=%04x eflags=%08RX64\n"
4236 "cr0=%016RX64 cr2=xxxxxxxxxxxxxxxx cr3=%016RX64\n"
4237 "cr4=%016RX64 ldtr=%04x tr=%04x\n"
4238 "dr[0]=%016RX64 dr[1]=%016RX64 dr[2]=%016RX64\n"
4239 "dr[3]=%016RX64 dr[6]=%016RX64 dr[7]=%016RX64\n"
4240 "gdtr=%016RX64:%04x idtr=%016RX64:%04x\n"
4241 "SysEnter={cs=%04x eip=%08x esp=%08x}\n"
4242 "FSbase=%016RX64 GSbase=%016RX64 efer=%08RX64\n"
4243 ,
4244 /*pCtx->rax,*/ pCtx->rbx, /*pCtx->rcx,
4245 pCtx->rdx,*/ pCtx->rsi, pCtx->rdi,
4246 /*pCtx->rip,*/ pCtx->rsp, pCtx->rbp,
4247 /*pCtx->r8, pCtx->r9,*/ pCtx->r10,
4248 pCtx->r11, pCtx->r12, pCtx->r13,
4249 pCtx->r14, pCtx->r15,
4250 X86_EFL_GET_IOPL(efl), szEFlags,
4251 pCtx->cs, pCtx->ds, pCtx->es, pCtx->fs, pCtx->gs, efl,
4252 pCtx->cr0, /*pCtx->cr2,*/ pCtx->cr3,
4253 pCtx->cr4, pCtx->ldtr, pCtx->tr,
4254 pCtx->dr0, pCtx->dr1, pCtx->dr2,
4255 pCtx->dr3, pCtx->dr6, pCtx->dr7,
4256 pCtx->gdtr.uAddr, pCtx->gdtr.cb, pCtx->idtr.uAddr, pCtx->idtr.cb,
4257 pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp,
4258 pCtx->FSbase, pCtx->GSbase, pCtx->efer);
4259}
4260
4261/**
4262 * Structure used when disassembling an instruction in DBGF.
4263 * This is used so the reader function can get the stuff it needs.
4264 */
4265typedef struct CPUMDISASSTATE
4266{
4267 /** Pointer to the CPU structure. */
4268 PDISCPUSTATE pCpu;
4269 /** Pointer to the VM. */
4270 PVM pVM;
4271 /** Pointer to the VMCPU. */
4272 PVMCPU pVCpu;
4273 /** Pointer to the first byte in the segment. */
4274 RTGCUINTPTR GCPtrSegBase;
4275 /** Pointer to the byte after the end of the segment. (might have wrapped!) */
4276 RTGCUINTPTR GCPtrSegEnd;
4277 /** The size of the segment minus 1. */
4278 RTGCUINTPTR cbSegLimit;
4279 /** Pointer to the current page - R3 Ptr. */
4280 void const *pvPageR3;
4281 /** Pointer to the current page - GC Ptr. */
4282 RTGCPTR pvPageGC;
4283 /** The lock information that PGMPhysReleasePageMappingLock needs. */
4284 PGMPAGEMAPLOCK PageMapLock;
4285 /** Whether the PageMapLock is valid or not. */
4286 bool fLocked;
4287 /** 64 bits mode or not. */
4288 bool f64Bits;
4289} CPUMDISASSTATE, *PCPUMDISASSTATE;
4290
4291
4292/**
4293 * @callback_method_impl{FNDISREADBYTES}
4294 */
4295static DECLCALLBACK(int) cpumR3DisasInstrRead(PDISCPUSTATE pDis, uint8_t offInstr, uint8_t cbMinRead, uint8_t cbMaxRead)
4296{
4297 PCPUMDISASSTATE pState = (PCPUMDISASSTATE)pDis->pvUser;
4298 for (;;)
4299 {
4300 RTGCUINTPTR GCPtr = pDis->uInstrAddr + offInstr + pState->GCPtrSegBase;
4301
4302 /*
4303 * Need to update the page translation?
4304 */
4305 if ( !pState->pvPageR3
4306 || (GCPtr >> PAGE_SHIFT) != (pState->pvPageGC >> PAGE_SHIFT))
4307 {
4308 /* translate the address */
4309 pState->pvPageGC = GCPtr & PAGE_BASE_GC_MASK;
4310
4311 /* Release mapping lock previously acquired. */
4312 if (pState->fLocked)
4313 PGMPhysReleasePageMappingLock(pState->pVM, &pState->PageMapLock);
4314 int rc = PGMPhysGCPtr2CCPtrReadOnly(pState->pVCpu, pState->pvPageGC, &pState->pvPageR3, &pState->PageMapLock);
4315 if (RT_SUCCESS(rc))
4316 pState->fLocked = true;
4317 else
4318 {
4319 pState->fLocked = false;
4320 pState->pvPageR3 = NULL;
4321 return rc;
4322 }
4323 }
4324
4325 /*
4326 * Check the segment limit.
4327 */
4328 if (!pState->f64Bits && pDis->uInstrAddr + offInstr > pState->cbSegLimit)
4329 return VERR_OUT_OF_SELECTOR_BOUNDS;
4330
4331 /*
4332 * Calc how much we can read.
4333 */
4334 uint32_t cb = PAGE_SIZE - (GCPtr & PAGE_OFFSET_MASK);
4335 if (!pState->f64Bits)
4336 {
4337 RTGCUINTPTR cbSeg = pState->GCPtrSegEnd - GCPtr;
4338 if (cb > cbSeg && cbSeg)
4339 cb = cbSeg;
4340 }
4341 if (cb > cbMaxRead)
4342 cb = cbMaxRead;
4343
4344 /*
4345 * Read and advance or exit.
4346 */
4347 memcpy(&pDis->abInstr[offInstr], (uint8_t *)pState->pvPageR3 + (GCPtr & PAGE_OFFSET_MASK), cb);
4348 offInstr += (uint8_t)cb;
4349 if (cb >= cbMinRead)
4350 {
4351 pDis->cbCachedInstr = offInstr;
4352 return VINF_SUCCESS;
4353 }
4354 cbMinRead -= (uint8_t)cb;
4355 cbMaxRead -= (uint8_t)cb;
4356 }
4357}
4358
4359
4360/**
4361 * Disassemble an instruction and return the information in the provided structure.
4362 *
4363 * @returns VBox status code.
4364 * @param pVM The cross context VM structure.
4365 * @param pVCpu The cross context virtual CPU structure.
4366 * @param pCtx Pointer to the guest CPU context.
4367 * @param GCPtrPC Program counter (relative to CS) to disassemble from.
4368 * @param pCpu Disassembly state.
4369 * @param pszPrefix String prefix for logging (debug only).
4370 *
4371 */
4372VMMR3DECL(int) CPUMR3DisasmInstrCPU(PVM pVM, PVMCPU pVCpu, PCPUMCTX pCtx, RTGCPTR GCPtrPC, PDISCPUSTATE pCpu,
4373 const char *pszPrefix)
4374{
4375 CPUMDISASSTATE State;
4376 int rc;
4377
4378 const PGMMODE enmMode = PGMGetGuestMode(pVCpu);
4379 State.pCpu = pCpu;
4380 State.pvPageGC = 0;
4381 State.pvPageR3 = NULL;
4382 State.pVM = pVM;
4383 State.pVCpu = pVCpu;
4384 State.fLocked = false;
4385 State.f64Bits = false;
4386
4387 /*
4388 * Get selector information.
4389 */
4390 DISCPUMODE enmDisCpuMode;
4391 if ( (pCtx->cr0 & X86_CR0_PE)
4392 && pCtx->eflags.Bits.u1VM == 0)
4393 {
4394 if (!CPUMSELREG_ARE_HIDDEN_PARTS_VALID(pVCpu, &pCtx->cs))
4395 return VERR_CPUM_HIDDEN_CS_LOAD_ERROR;
4396 State.f64Bits = enmMode >= PGMMODE_AMD64 && pCtx->cs.Attr.n.u1Long;
4397 State.GCPtrSegBase = pCtx->cs.u64Base;
4398 State.GCPtrSegEnd = pCtx->cs.u32Limit + 1 + (RTGCUINTPTR)pCtx->cs.u64Base;
4399 State.cbSegLimit = pCtx->cs.u32Limit;
4400 enmDisCpuMode = (State.f64Bits)
4401 ? DISCPUMODE_64BIT
4402 : pCtx->cs.Attr.n.u1DefBig
4403 ? DISCPUMODE_32BIT
4404 : DISCPUMODE_16BIT;
4405 }
4406 else
4407 {
4408 /* real or V86 mode */
4409 enmDisCpuMode = DISCPUMODE_16BIT;
4410 State.GCPtrSegBase = pCtx->cs.Sel * 16;
4411 State.GCPtrSegEnd = 0xFFFFFFFF;
4412 State.cbSegLimit = 0xFFFFFFFF;
4413 }
4414
4415 /*
4416 * Disassemble the instruction.
4417 */
4418 uint32_t cbInstr;
4419#ifndef LOG_ENABLED
4420 RT_NOREF_PV(pszPrefix);
4421 rc = DISInstrWithReader(GCPtrPC, enmDisCpuMode, cpumR3DisasInstrRead, &State, pCpu, &cbInstr);
4422 if (RT_SUCCESS(rc))
4423 {
4424#else
4425 char szOutput[160];
4426 rc = DISInstrToStrWithReader(GCPtrPC, enmDisCpuMode, cpumR3DisasInstrRead, &State,
4427 pCpu, &cbInstr, szOutput, sizeof(szOutput));
4428 if (RT_SUCCESS(rc))
4429 {
4430 /* log it */
4431 if (pszPrefix)
4432 Log(("%s-CPU%d: %s", pszPrefix, pVCpu->idCpu, szOutput));
4433 else
4434 Log(("%s", szOutput));
4435#endif
4436 rc = VINF_SUCCESS;
4437 }
4438 else
4439 Log(("CPUMR3DisasmInstrCPU: DISInstr failed for %04X:%RGv rc=%Rrc\n", pCtx->cs.Sel, GCPtrPC, rc));
4440
4441 /* Release mapping lock acquired in cpumR3DisasInstrRead. */
4442 if (State.fLocked)
4443 PGMPhysReleasePageMappingLock(pVM, &State.PageMapLock);
4444
4445 return rc;
4446}
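/* A minimal usage sketch; the local names are illustrative and the guest context is assumed to
   be fetched with the usual CPUMQueryGuestCtxPtr helper:
       DISCPUSTATE DisState;
       PCPUMCTX    pGuestCtx = CPUMQueryGuestCtxPtr(pVCpu);
       int rc = CPUMR3DisasmInstrCPU(pVM, pVCpu, pGuestCtx, pGuestCtx->rip, &DisState, "EMU");
       if (RT_SUCCESS(rc))
           Log(("Instruction at %04x:%RGv is %u bytes long\n",
                pGuestCtx->cs.Sel, pGuestCtx->rip, DisState.cbInstr));
 */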
4447
4448
4449
4450/**
4451 * API for controlling a few of the CPU features found in CR4.
4452 *
4453 * Currently only X86_CR4_TSD is accepted as input.
4454 *
4455 * @returns VBox status code.
4456 *
4457 * @param pVM The cross context VM structure.
4458 * @param fOr The CR4 OR mask.
4459 * @param fAnd The CR4 AND mask.
4460 */
4461VMMR3DECL(int) CPUMR3SetCR4Feature(PVM pVM, RTHCUINTREG fOr, RTHCUINTREG fAnd)
4462{
4463 AssertMsgReturn(!(fOr & ~(X86_CR4_TSD)), ("%#x\n", fOr), VERR_INVALID_PARAMETER);
4464 AssertMsgReturn((fAnd & ~(X86_CR4_TSD)) == ~(X86_CR4_TSD), ("%#x\n", fAnd), VERR_INVALID_PARAMETER);
4465
4466 pVM->cpum.s.CR4.OrMask &= fAnd;
4467 pVM->cpum.s.CR4.OrMask |= fOr;
4468
4469 return VINF_SUCCESS;
4470}
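/* Example calls that satisfy the asserts above (only X86_CR4_TSD may be toggled):
       CPUMR3SetCR4Feature(pVM, X86_CR4_TSD, ~(RTHCUINTREG)0);           sets the TSD override
       CPUMR3SetCR4Feature(pVM, 0, ~(RTHCUINTREG)X86_CR4_TSD);           clears it again
   Both calls only adjust the OR mask; how CPUM applies that mask to the guest visible CR4 is
   outside the scope of this function. */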
4471
4472
4473/**
4474 * Enters REM, gets and resets the changed flags (CPUM_CHANGED_*).
4475 *
4476 * Only REM should ever call this function!
4477 *
4478 * @returns The changed flags.
4479 * @param pVCpu The cross context virtual CPU structure.
4480 * @param puCpl Where to return the current privilege level (CPL).
4481 */
4482VMMR3DECL(uint32_t) CPUMR3RemEnter(PVMCPU pVCpu, uint32_t *puCpl)
4483{
4484 Assert(!pVCpu->cpum.s.fRemEntered);
4485
4486 /*
4487 * Get the CPL first.
4488 */
4489 *puCpl = CPUMGetGuestCPL(pVCpu);
4490
4491 /*
4492 * Get and reset the flags.
4493 */
4494 uint32_t fFlags = pVCpu->cpum.s.fChanged;
4495 pVCpu->cpum.s.fChanged = 0;
4496
4497 /** @todo change the switcher to use the fChanged flags. */
4498 if (pVCpu->cpum.s.fUseFlags & CPUM_USED_FPU_SINCE_REM)
4499 {
4500 fFlags |= CPUM_CHANGED_FPU_REM;
4501 pVCpu->cpum.s.fUseFlags &= ~CPUM_USED_FPU_SINCE_REM;
4502 }
4503
4504 pVCpu->cpum.s.fRemEntered = true;
4505 return fFlags;
4506}
4507
4508
4509/**
4510 * Leaves REM.
4511 *
4512 * @param pVCpu The cross context virtual CPU structure.
4513 * @param fNoOutOfSyncSels This is @c false if there are out of sync
4514 * registers.
4515 */
4516VMMR3DECL(void) CPUMR3RemLeave(PVMCPU pVCpu, bool fNoOutOfSyncSels)
4517{
4518 Assert(pVCpu->cpum.s.fRemEntered);
4519
4520 RT_NOREF_PV(fNoOutOfSyncSels);
4521
4522 pVCpu->cpum.s.fRemEntered = false;
4523}
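/* The two functions are meant to bracket recompiler execution, roughly like this (illustrative
   only, error handling omitted):
       uint32_t uCpl;
       uint32_t fChanged = CPUMR3RemEnter(pVCpu, &uCpl);
       ... let REM consume the CPUM_CHANGED_* flags and run at uCpl ...
       CPUMR3RemLeave(pVCpu, true);
 */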
4524
4525
4526/**
4527 * Called when the ring-3 init phase completes.
4528 *
4529 * @returns VBox status code.
4530 * @param pVM The cross context VM structure.
4531 * @param enmWhat Which init phase.
4532 */
4533VMMR3DECL(int) CPUMR3InitCompleted(PVM pVM, VMINITCOMPLETED enmWhat)
4534{
4535 switch (enmWhat)
4536 {
4537 case VMINITCOMPLETED_RING3:
4538 {
4539 /*
4540 * Figure out if the guest uses 32-bit or 64-bit FPU state at runtime for 64-bit capable VMs.
4541 * Only applicable/used on 64-bit hosts; see CPUMR0A.asm and @bugref{7138}.
4542 */
4543 bool const fSupportsLongMode = VMR3IsLongModeAllowed(pVM);
4544 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
4545 {
4546 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
4547
4548 /* While loading a saved-state we fix it up in cpumR3LoadDone(). */
4549 if (fSupportsLongMode)
4550 pVCpu->cpum.s.fUseFlags |= CPUM_USE_SUPPORTS_LONGMODE;
4551 }
4552
4553 /* Register statistic counters for MSRs. */
4554 cpumR3MsrRegStats(pVM);
4555
4556 /* Create VMX-preemption timer for nested guests if required. */
4557 if (pVM->cpum.s.GuestFeatures.fVmx)
4558 {
4559 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
4560 {
4561 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
4562 /* The string cannot live on the stack. It should be safe to call MMR3HeapAPrintf here as
4563 MMR3HyperInitFinalize has already completed at this point. */
4564 char *pszTimerName = MMR3HeapAPrintf(pVM, MM_TAG_CPUM_CTX, "Nested Guest VMX-preempt. timer %u", idCpu);
4565 int rc = TMR3TimerCreateInternal(pVM, TMCLOCK_VIRTUAL_SYNC, cpumR3VmxPreemptTimerCallback, pVCpu,
4566 pszTimerName, &pVCpu->cpum.s.pNestedVmxPreemptTimerR3);
4567 AssertLogRelRCReturn(rc, rc);
4568 pVCpu->cpum.s.pNestedVmxPreemptTimerR0 = TMTimerR0Ptr(pVCpu->cpum.s.pNestedVmxPreemptTimerR3);
4569 }
4570 }
4571 break;
4572 }
4573
4574 default:
4575 break;
4576 }
4577 return VINF_SUCCESS;
4578}
4579
4580
4581/**
4582 * Called when the ring-0 init phase has completed.
4583 *
4584 * @param pVM The cross context VM structure.
4585 */
4586VMMR3DECL(void) CPUMR3LogCpuIdAndMsrFeatures(PVM pVM)
4587{
4588 /*
4589 * Enable log buffering as we're going to log a lot of lines.
4590 */
4591 bool const fOldBuffered = RTLogRelSetBuffering(true /*fBuffered*/);
4592
4593 /*
4594 * Log the cpuid.
4595 */
4596 RTCPUSET OnlineSet;
4597 LogRel(("CPUM: Logical host processors: %u present, %u max, %u online, online mask: %016RX64\n",
4598 (unsigned)RTMpGetPresentCount(), (unsigned)RTMpGetCount(), (unsigned)RTMpGetOnlineCount(),
4599 RTCpuSetToU64(RTMpGetOnlineSet(&OnlineSet)) ));
4600 RTCPUID cCores = RTMpGetCoreCount();
4601 if (cCores)
4602 LogRel(("CPUM: Physical host cores: %u\n", (unsigned)cCores));
4603 LogRel(("************************* CPUID dump ************************\n"));
4604 DBGFR3Info(pVM->pUVM, "cpuid", "verbose", DBGFR3InfoLogRelHlp());
4605 LogRel(("\n"));
4606 DBGFR3_INFO_LOG_SAFE(pVM, "cpuid", "verbose"); /* macro */
4607 LogRel(("******************** End of CPUID dump **********************\n"));
4608
4609 /*
4610 * Log VT-x extended features.
4611 *
4612 * SVM features are currently all covered under CPUID so there is nothing
4613 * to do here for SVM.
4614 */
4615 if (pVM->cpum.s.HostFeatures.fVmx)
4616 {
4617 LogRel(("*********************** VT-x features ***********************\n"));
4618 DBGFR3Info(pVM->pUVM, "cpumvmxfeat", "default", DBGFR3InfoLogRelHlp());
4619 LogRel(("\n"));
4620 LogRel(("******************* End of VT-x features ********************\n"));
4621 }
4622
4623 /*
4624 * Restore the log buffering state to what it was previously.
4625 */
4626 RTLogRelSetBuffering(fOldBuffered);
4627}
4628