
[PATCH v14 22/22] KVM: SEV: Provide support for SNP_EXTENDED_GUEST_REQUEST NAE event
Version 2 of the GHCB specification added support for the SNP Extended
Guest Request Message NAE event. This event serves a nearly identical
purpose to the previously-added SNP_GUEST_REQUEST event, but
additionally allows certificate data to be supplied via a separate
guest-supplied buffer, mainly for verifying the signature of an
attestation report as returned by firmware.

This certificate data is supplied by userspace, so unlike with
SNP_GUEST_REQUEST events, SNP_EXTENDED_GUEST_REQUEST events are first
forwarded to userspace via a KVM_EXIT_VMGEXIT exit type, and the
firmware request is made only afterward.

Implement handling for these events.
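
For illustration, here is a rough sketch of how a userspace VMM might
service the new exit, following the contract documented in the api.rst
additions below. It is only a sketch: gpa_to_hva() and
vmm_get_cert_blob() are hypothetical stand-ins for VMM-specific logic,
and the kvm_run layout assumes the uapi changes in this patch:

  #include <string.h>
  #include <linux/kvm.h>  /* with this series' uapi changes applied */

  #define GUEST_PAGE_SIZE 4096

  /* Hypothetical VMM helpers, not part of this series */
  extern void *gpa_to_hva(__u64 gpa);
  extern void *vmm_get_cert_blob(size_t *len); /* GHCB-defined cert table */

  void handle_ext_guest_req(struct kvm_run *run)
  {
          __u64 npages = run->vmgexit.ext_guest_req.data_npages;
          size_t len;
          void *certs = vmm_get_cert_blob(&len);

          if (!certs) {
                  /* Temporarily unable to handle the request: guest retries */
                  run->vmgexit.ext_guest_req.ret = 2;
                  return;
          }

          if (len > npages * GUEST_PAGE_SIZE) {
                  /* Guest buffer too small: report the required page count */
                  run->vmgexit.ext_guest_req.data_npages =
                          (len + GUEST_PAGE_SIZE - 1) / GUEST_PAGE_SIZE;
                  run->vmgexit.ext_guest_req.ret = 1;
                  return;
          }

          memcpy(gpa_to_hva(run->vmgexit.ext_guest_req.data_gpa), certs, len);
          run->vmgexit.ext_guest_req.ret = 0; /* success */
  }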

There is a potential for race conditions where the userspace-supplied
certificate data may be out-of-sync relative to the reported TCB or
VLEK that firmware will use when signing attestation reports. To guard
against this, make use of the synchronization mechanisms wired up to
the SNP_{PAUSE,RESUME}_ATTESTATION SEV device ioctls so that the guest
is told to retry the request while attestation is paused due to an
update being underway on the system.

Signed-off-by: Michael Roth <michael.roth@amd.com>
---
Documentation/virt/kvm/api.rst | 26 +++++++++++
arch/x86/include/asm/sev.h | 6 +++
arch/x86/kvm/svm/sev.c | 82 ++++++++++++++++++++++++++++++++++
arch/x86/kvm/svm/svm.h | 3 ++
arch/x86/virt/svm/sev.c | 37 +++++++++++++++
include/uapi/linux/kvm.h | 6 +++
6 files changed, 160 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 85099198a10f..6cf186ed8f66 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -7066,6 +7066,7 @@ values in kvm_run even if the corresponding bit in kvm_dirty_regs is not set.
struct kvm_user_vmgexit {
#define KVM_USER_VMGEXIT_PSC_MSR 1
#define KVM_USER_VMGEXIT_PSC 2
+ #define KVM_USER_VMGEXIT_EXT_GUEST_REQ 3
__u32 type; /* KVM_USER_VMGEXIT_* type */
union {
struct {
@@ -7079,6 +7080,11 @@ values in kvm_run even if the corresponding bit in kvm_dirty_regs is not set.
__u64 shared_gpa;
__u64 ret;
} psc;
+ struct {
+ __u64 data_gpa;
+ __u64 data_npages;
+ __u32 ret;
+ } ext_guest_req;
};
};

@@ -7108,6 +7114,26 @@ private/shared state. Userspace will return a value in 'ret' that is in
agreement with the GHCB-defined return values that the guest will expect
in the SW_EXITINFO2 field of the GHCB in response to these requests.

+For the KVM_USER_VMGEXIT_EXT_GUEST_REQ type, the ext_guest_req union type
+is used. The kernel will supply in 'data_gpa' the value the guest supplies
+via the RAX field of the GHCB when issuing extended guest requests.
+'data_npages' will similarly contain the value the guest supplies in RBX
+denoting the number of shared pages available to write the certificate
+data into.
+
+ - If the supplied number of pages is sufficient, userspace should write
+ the certificate data blob (in the format defined by the GHCB spec) at
+ the address indicated by 'data_gpa' and set 'ret' to 0.
+
+ - If the number of pages supplied is not sufficient, userspace must write
+ the required number of pages in 'data_npages' and then set 'ret' to 1.
+
+ - If userspace is temporarily unable to handle the request, 'ret' should
+ be set to 2 to inform the guest to retry later.
+
+ - If some other error occurred, userspace should set 'ret' to a non-zero
+ value that is distinct from the specific return values mentioned above.
+
6. Capabilities that can be enabled on vCPUs
============================================

diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index ee24ef815e35..dfc28ac4dd0e 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -276,6 +276,9 @@ void snp_leak_pages(u64 pfn, unsigned int npages);
void kdump_sev_callback(void);
int snp_pause_attestation(u64 *transaction_id);
void snp_resume_attestation(u64 *transaction_id);
+u64 snp_transaction_get_id(void);
+bool __snp_transaction_is_stale(u64 transaction_id);
+bool snp_transaction_is_stale(u64 transaction_id);
#else
static inline bool snp_probe_rmptable_info(void) { return false; }
static inline int snp_lookup_rmpentry(u64 pfn, bool *assigned, int *level) { return -ENODEV; }
@@ -291,6 +294,9 @@ static inline void snp_leak_pages(u64 pfn, unsigned int npages) {}
static inline void kdump_sev_callback(void) { }
static inline int snp_pause_attestation(u64 *transaction_id) { return 0; }
static inline void snp_resume_attestation(u64 *transaction_id) {}
+static inline u64 snp_transaction_get_id(void) { return 0; }
+static inline bool __snp_transaction_is_stale(u64 transaction_id) { return false; }
+static inline bool snp_transaction_is_stale(u64 transaction_id) { return false; }
#endif

#endif
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 68db390b19d0..1cec466e593b 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -3292,6 +3292,7 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
case SVM_VMGEXIT_PSC:
case SVM_VMGEXIT_TERM_REQUEST:
case SVM_VMGEXIT_GUEST_REQUEST:
+ case SVM_VMGEXIT_EXT_GUEST_REQUEST:
break;
default:
reason = GHCB_ERR_INVALID_EVENT;
@@ -3812,6 +3813,84 @@ static void snp_handle_guest_req(struct vcpu_svm *svm, gpa_t req_gpa, gpa_t resp
ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, SNP_GUEST_ERR(vmm_ret, fw_err));
}

+static int snp_complete_ext_guest_req(struct kvm_vcpu *vcpu)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+ struct vmcb_control_area *control;
+ struct kvm *kvm = vcpu->kvm;
+ sev_ret_code fw_err = 0;
+ int vmm_ret;
+
+ vmm_ret = vcpu->run->vmgexit.ext_guest_req.ret;
+ if (vmm_ret) {
+ if (vmm_ret == SNP_GUEST_VMM_ERR_INVALID_LEN)
+ vcpu->arch.regs[VCPU_REGS_RBX] =
+ vcpu->run->vmgexit.ext_guest_req.data_npages;
+ goto abort_request;
+ }
+
+ control = &svm->vmcb->control;
+
+ /*
+ * To avoid the message sequence number getting out of sync between the
+ * actual value seen by firmware versus the value expected by the guest,
+ * make sure attestations can't get paused on the write-side at this
+ * point by holding the lock for the entire duration of the firmware
+ * request so that there is no situation where SNP_GUEST_VMM_ERR_BUSY
+ * would need to be returned after firmware sees the request.
+ */
+ mutex_lock(&snp_pause_attestation_lock);
+
+ if (__snp_transaction_is_stale(svm->snp_transaction_id))
+ vmm_ret = SNP_GUEST_VMM_ERR_BUSY;
+ else if (!__snp_handle_guest_req(kvm, control->exit_info_1,
+ control->exit_info_2, &fw_err))
+ vmm_ret = SNP_GUEST_VMM_ERR_GENERIC;
+
+ mutex_unlock(&snp_pause_attestation_lock);
+
+abort_request:
+ ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, SNP_GUEST_ERR(vmm_ret, fw_err));
+
+ return 1; /* resume guest */
+}
+
+static int snp_begin_ext_guest_req(struct kvm_vcpu *vcpu)
+{
+ int vmm_ret = SNP_GUEST_VMM_ERR_GENERIC;
+ struct vcpu_svm *svm = to_svm(vcpu);
+ unsigned long data_npages;
+ sev_ret_code fw_err = 0;
+ gpa_t data_gpa;
+
+ if (!sev_snp_guest(vcpu->kvm))
+ goto abort_request;
+
+ data_gpa = vcpu->arch.regs[VCPU_REGS_RAX];
+ data_npages = vcpu->arch.regs[VCPU_REGS_RBX];
+
+ if (!IS_ALIGNED(data_gpa, PAGE_SIZE))
+ goto abort_request;
+
+ svm->snp_transaction_id = snp_transaction_get_id();
+ if (snp_transaction_is_stale(svm->snp_transaction_id)) {
+ vmm_ret = SNP_GUEST_VMM_ERR_BUSY;
+ goto abort_request;
+ }
+
+ vcpu->run->exit_reason = KVM_EXIT_VMGEXIT;
+ vcpu->run->vmgexit.type = KVM_USER_VMGEXIT_EXT_GUEST_REQ;
+ vcpu->run->vmgexit.ext_guest_req.data_gpa = data_gpa;
+ vcpu->run->vmgexit.ext_guest_req.data_npages = data_npages;
+ vcpu->arch.complete_userspace_io = snp_complete_ext_guest_req;
+
+ return 0; /* forward request to userspace */
+
+abort_request:
+ ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, SNP_GUEST_ERR(vmm_ret, fw_err));
+ return 1; /* resume guest */
+}
+
static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
{
struct vmcb_control_area *control = &svm->vmcb->control;
@@ -4076,6 +4155,9 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
snp_handle_guest_req(svm, control->exit_info_1, control->exit_info_2);
ret = 1;
break;
+ case SVM_VMGEXIT_EXT_GUEST_REQUEST:
+ ret = snp_begin_ext_guest_req(vcpu);
+ break;
case SVM_VMGEXIT_UNSUPPORTED_EVENT:
vcpu_unimpl(vcpu,
"vmgexit: unsupported event - exit_info_1=%#llx, exit_info_2=%#llx\n",
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 8a8ee475ad86..28140bc8af27 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -303,6 +303,9 @@ struct vcpu_svm {

/* Guest GIF value, used when vGIF is not enabled */
bool guest_gif;
+
+ /* Transaction ID associated with SNP config updates */
+ u64 snp_transaction_id;
};

struct svm_cpu_data {
diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
index b75f2e7d4012..f1f7486a3dcf 100644
--- a/arch/x86/virt/svm/sev.c
+++ b/arch/x86/virt/svm/sev.c
@@ -72,6 +72,7 @@ static unsigned long snp_nr_leaked_pages;

/* For synchronizing TCB/certificate updates with extended guest requests */
DEFINE_MUTEX(snp_pause_attestation_lock);
+EXPORT_SYMBOL_GPL(snp_pause_attestation_lock);
static u64 snp_transaction_id;
static bool snp_attestation_paused;

@@ -611,3 +612,39 @@ void snp_resume_attestation(u64 *transaction_id)
mutex_unlock(&snp_pause_attestation_lock);
}
EXPORT_SYMBOL_GPL(snp_resume_attestation);
+
+u64 snp_transaction_get_id(void)
+{
+ u64 id;
+
+ mutex_lock(&snp_pause_attestation_lock);
+ id = snp_transaction_id;
+ mutex_unlock(&snp_pause_attestation_lock);
+
+ return id;
+}
+EXPORT_SYMBOL_GPL(snp_transaction_get_id);
+
+/* Must be called with snp_pause_attestation_lock held */
+bool __snp_transaction_is_stale(u64 transaction_id)
+{
+ lockdep_assert_held(&snp_pause_attestation_lock);
+
+ return (snp_attestation_paused ||
+ transaction_id != snp_transaction_id);
+}
+EXPORT_SYMBOL_GPL(__snp_transaction_is_stale);
+
+bool snp_transaction_is_stale(u64 transaction_id)
+{
+ bool stale;
+
+ mutex_lock(&snp_pause_attestation_lock);
+
+ stale = __snp_transaction_is_stale(transaction_id);
+
+ mutex_unlock(&snp_pause_attestation_lock);
+
+ return stale;
+}
+EXPORT_SYMBOL_GPL(snp_transaction_is_stale);
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index e33c48bfbd67..585de3a2591e 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -138,6 +138,7 @@ struct kvm_xen_exit {
struct kvm_user_vmgexit {
#define KVM_USER_VMGEXIT_PSC_MSR 1
#define KVM_USER_VMGEXIT_PSC 2
+#define KVM_USER_VMGEXIT_EXT_GUEST_REQ 3
__u32 type; /* KVM_USER_VMGEXIT_* type */
union {
struct {
@@ -151,6 +152,11 @@ struct kvm_user_vmgexit {
__u64 shared_gpa;
__u64 ret;
} psc;
+ struct {
+ __u64 data_gpa;
+ __u64 data_npages;
+ __u32 ret;
+ } ext_guest_req;
};
};

--
2.25.1
Re: [PATCH v14 22/22] KVM: SEV: Provide support for SNP_EXTENDED_GUEST_REQUEST NAE event
On Sun, Apr 21, 2024, Michael Roth wrote:
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index 85099198a10f..6cf186ed8f66 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -7066,6 +7066,7 @@ values in kvm_run even if the corresponding bit in kvm_dirty_regs is not set.
> struct kvm_user_vmgexit {
> #define KVM_USER_VMGEXIT_PSC_MSR 1
> #define KVM_USER_VMGEXIT_PSC 2
> + #define KVM_USER_VMGEXIT_EXT_GUEST_REQ 3

Assuming we can't massage this into a vendor-agnostic exit, there's gotta be
a better name than EXT_GUEST_REQ, which is completely meaningless to me and probably
most other people who aren't intimately familiar with the specs. Request what?

> __u32 type; /* KVM_USER_VMGEXIT_* type */
> union {
> struct {
> @@ -3812,6 +3813,84 @@ static void snp_handle_guest_req(struct vcpu_svm *svm, gpa_t req_gpa, gpa_t resp
> ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, SNP_GUEST_ERR(vmm_ret, fw_err));
> }
>
> +static int snp_complete_ext_guest_req(struct kvm_vcpu *vcpu)
> +{
> + struct vcpu_svm *svm = to_svm(vcpu);
> + struct vmcb_control_area *control;
> + struct kvm *kvm = vcpu->kvm;
> + sev_ret_code fw_err = 0;
> + int vmm_ret;
> +
> + vmm_ret = vcpu->run->vmgexit.ext_guest_req.ret;
> + if (vmm_ret) {
> + if (vmm_ret == SNP_GUEST_VMM_ERR_INVALID_LEN)
> + vcpu->arch.regs[VCPU_REGS_RBX] =
> + vcpu->run->vmgexit.ext_guest_req.data_npages;
> + goto abort_request;
> + }
> +
> + control = &svm->vmcb->control;
> +
> + /*
> + * To avoid the message sequence number getting out of sync between the
> > + * actual value seen by firmware versus the value expected by the guest,
> + * make sure attestations can't get paused on the write-side at this
> + * point by holding the lock for the entire duration of the firmware
> + * request so that there is no situation where SNP_GUEST_VMM_ERR_BUSY
> + * would need to be returned after firmware sees the request.
> + */
> + mutex_lock(&snp_pause_attestation_lock);

Why is this in KVM? IIUC, KVM only needs to get involved to translate GFNs to
PFNs, the rest can go in the sev-dev driver, no? The whole split is weird,
seemingly due to KVM "needing" to take this lock. E.g. why is core kernel code
in arch/x86/virt/svm/sev.c at all dealing with attestation goo, when pretty much
all of the actual usage is (or can be) in sev-dev.c

> +
> + if (__snp_transaction_is_stale(svm->snp_transaction_id))
> + vmm_ret = SNP_GUEST_VMM_ERR_BUSY;
> + else if (!__snp_handle_guest_req(kvm, control->exit_info_1,
> + control->exit_info_2, &fw_err))
> + vmm_ret = SNP_GUEST_VMM_ERR_GENERIC;
> +
> + mutex_unlock(&snp_pause_attestation_lock);
> +
> +abort_request:
> + ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, SNP_GUEST_ERR(vmm_ret, fw_err));
> +
> + return 1; /* resume guest */
> +}
> +
> +static int snp_begin_ext_guest_req(struct kvm_vcpu *vcpu)
> +{
> + int vmm_ret = SNP_GUEST_VMM_ERR_GENERIC;
> + struct vcpu_svm *svm = to_svm(vcpu);
> + unsigned long data_npages;
> > + sev_ret_code fw_err = 0;
> + gpa_t data_gpa;
> +
> + if (!sev_snp_guest(vcpu->kvm))
> + goto abort_request;
> +
> + data_gpa = vcpu->arch.regs[VCPU_REGS_RAX];
> + data_npages = vcpu->arch.regs[VCPU_REGS_RBX];
> +
> + if (!IS_ALIGNED(data_gpa, PAGE_SIZE))
> + goto abort_request;
> +
> + svm->snp_transaction_id = snp_transaction_get_id();
> + if (snp_transaction_is_stale(svm->snp_transaction_id)) {

Why bother? I assume this is rare, so why not handle it on the backend, i.e.
after userspace does its thing? Then KVM doesn't even have to care about
checking for stale IDs, I think?

None of this makes much sense to my eyes, e.g. AFAICT, the only thing that can
pause attestation is userspace, yet the kernel is responsible for tracking whether
or not a transaction is fresh?
Re: [PATCH v14 22/22] KVM: SEV: Provide support for SNP_EXTENDED_GUEST_REQUEST NAE event
On Wed, Apr 24, 2024 at 05:10:31PM -0700, Sean Christopherson wrote:
> On Sun, Apr 21, 2024, Michael Roth wrote:
> > diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> > index 85099198a10f..6cf186ed8f66 100644
> > --- a/Documentation/virt/kvm/api.rst
> > +++ b/Documentation/virt/kvm/api.rst
> > @@ -7066,6 +7066,7 @@ values in kvm_run even if the corresponding bit in kvm_dirty_regs is not set.
> > struct kvm_user_vmgexit {
> > #define KVM_USER_VMGEXIT_PSC_MSR 1
> > #define KVM_USER_VMGEXIT_PSC 2
> > + #define KVM_USER_VMGEXIT_EXT_GUEST_REQ 3
>
> Assuming we can't massage this into a vendor-agnostic exit, there's gotta be
> a better name than EXT_GUEST_REQ, which is completely meaningless to me and probably
> most other people who aren't intimately familiar with the specs. Request what?

EXT_CERT_REQ maybe? That's effectively all it boils down to, fetching
the GHCB-defined certificate blob from userspace.

>
> > __u32 type; /* KVM_USER_VMGEXIT_* type */
> > union {
> > struct {
> > @@ -3812,6 +3813,84 @@ static void snp_handle_guest_req(struct vcpu_svm *svm, gpa_t req_gpa, gpa_t resp
> > ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, SNP_GUEST_ERR(vmm_ret, fw_err));
> > }
> >
> > +static int snp_complete_ext_guest_req(struct kvm_vcpu *vcpu)
> > +{
> > + struct vcpu_svm *svm = to_svm(vcpu);
> > + struct vmcb_control_area *control;
> > + struct kvm *kvm = vcpu->kvm;
> > + sev_ret_code fw_err = 0;
> > + int vmm_ret;
> > +
> > + vmm_ret = vcpu->run->vmgexit.ext_guest_req.ret;
> > + if (vmm_ret) {
> > + if (vmm_ret == SNP_GUEST_VMM_ERR_INVALID_LEN)
> > + vcpu->arch.regs[VCPU_REGS_RBX] =
> > + vcpu->run->vmgexit.ext_guest_req.data_npages;
> > + goto abort_request;
> > + }
> > +
> > + control = &svm->vmcb->control;
> > +
> > + /*
> > + * To avoid the message sequence number getting out of sync between the
> > > + * actual value seen by firmware versus the value expected by the guest,
> > + * make sure attestations can't get paused on the write-side at this
> > + * point by holding the lock for the entire duration of the firmware
> > + * request so that there is no situation where SNP_GUEST_VMM_ERR_BUSY
> > + * would need to be returned after firmware sees the request.
> > + */
> > + mutex_lock(&snp_pause_attestation_lock);
>
> Why is this in KVM? IIUC, KVM only needs to get involved to translate GFNs to
> PFNs, the rest can go in the sev-dev driver, no? The whole split is weird,
> seemingly due to KVM "needing" to take this lock. E.g. why is core kernel code
> in arch/x86/virt/svm/sev.c at all dealing with attestation goo, when pretty much
> all of the actual usage is (or can be) in sev-dev.c

It would be very tempting to have all this locking tucked away in the
sev-dev driver, but KVM is a part of that sequence because it needs to
handle fetching the certificate that will be returned to the guest as
part of the attestation request. The transaction ID updates from the
PAUSE/RESUME commands are technically enough to satisfy this
requirement, in which case KVM wouldn't need to take the lock directly,
only check that the transaction ID isn't stale and report
-EBUSY/-EAGAIN to the guest if it is.

But if the request actually gets sent to firmware, and we decide after
that point that the transaction is stale, and thus the attestation
response and certificate might not be in sync, then reporting
-EBUSY/-EAGAIN will permanently hose the guest, because the message seq
fields will be out of sync between firmware and the guest kernel.
That's why we need to hold the lock to actually block a transaction ID
update from occurring once we've committed to sending the attestation
request to firmware.
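
To make that constraint explicit, here is the completion path from the
patch again, restated as a sketch with the ordering requirement spelled
out in comments (this mirrors snp_complete_ext_guest_req() above rather
than proposing new code):

  mutex_lock(&snp_pause_attestation_lock);
  if (__snp_transaction_is_stale(svm->snp_transaction_id)) {
          /*
           * Firmware has not yet seen the request, so BUSY is still a
           * safe answer: the guest can retry without its message
           * sequence number diverging from firmware's.
           */
          vmm_ret = SNP_GUEST_VMM_ERR_BUSY;
  } else if (!__snp_handle_guest_req(kvm, control->exit_info_1,
                                     control->exit_info_2, &fw_err)) {
          /*
           * Firmware has now consumed the request and advanced its
           * sequence number; past this point the guest must see the
           * real result, never BUSY, hence the lock is held across
           * the firmware call.
           */
          vmm_ret = SNP_GUEST_VMM_ERR_GENERIC;
  }
  mutex_unlock(&snp_pause_attestation_lock);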

>
> > +
> > + if (__snp_transaction_is_stale(svm->snp_transaction_id))
> > + vmm_ret = SNP_GUEST_VMM_ERR_BUSY;
> > + else if (!__snp_handle_guest_req(kvm, control->exit_info_1,
> > + control->exit_info_2, &fw_err))
> > + vmm_ret = SNP_GUEST_VMM_ERR_GENERIC;
> > +
> > + mutex_unlock(&snp_pause_attestation_lock);
> > +
> > +abort_request:
> > + ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, SNP_GUEST_ERR(vmm_ret, fw_err));
> > +
> > + return 1; /* resume guest */
> > +}
> > +
> > +static int snp_begin_ext_guest_req(struct kvm_vcpu *vcpu)
> > +{
> > + int vmm_ret = SNP_GUEST_VMM_ERR_GENERIC;
> > + struct vcpu_svm *svm = to_svm(vcpu);
> > + unsigned long data_npages;
> > > + sev_ret_code fw_err = 0;
> > + gpa_t data_gpa;
> > +
> > + if (!sev_snp_guest(vcpu->kvm))
> > + goto abort_request;
> > +
> > + data_gpa = vcpu->arch.regs[VCPU_REGS_RAX];
> > + data_npages = vcpu->arch.regs[VCPU_REGS_RBX];
> > +
> > + if (!IS_ALIGNED(data_gpa, PAGE_SIZE))
> > + goto abort_request;
> > +
> > + svm->snp_transaction_id = snp_transaction_get_id();
> > + if (snp_transaction_is_stale(svm->snp_transaction_id)) {
>
> Why bother? I assume this is rare, so why not handle it on the backend, i.e.
> after userspace does its thing? Then KVM doesn't even have to care about
> checking for stale IDs, I think?

That's true, it's little more than a very minor performance
optimization to do it here rather than on the return path, so it could
definitely be dropped.

>
> None of this makes much sense to my eyes, e.g. AFAICT, the only thing that can
> pause attestation is userspace, yet the kernel is responsible for tracking whether
> or not a transaction is fresh?

Pausing is essentially the start of an update transaction initiated by
userspace. So the kernel is tracking whether a transaction has
potentially been started or completed since it first began servicing
the attestation request; otherwise there could be a mismatch between
the endorsement key used to sign the attestation report and the
certificate for that endorsement key that was fetched from userspace.
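
For reference, the pause-side bookkeeping this relies on (added earlier
in the series) presumably looks something like the sketch below. This
is inferred from the helpers visible in this patch, not copied from the
actual SNP_{PAUSE,RESUME}_ATTESTATION implementation, so treat it as a
sketch of the semantics only:

  /* Hypothetical sketch; the real code lives in an earlier patch */
  int snp_pause_attestation(u64 *transaction_id)
  {
          mutex_lock(&snp_pause_attestation_lock);

          if (snp_attestation_paused) {
                  /* Only one update transaction may be in flight */
                  mutex_unlock(&snp_pause_attestation_lock);
                  return -EBUSY;
          }

          snp_attestation_paused = true;
          /* Bumping the ID invalidates in-flight extended requests */
          *transaction_id = ++snp_transaction_id;

          mutex_unlock(&snp_pause_attestation_lock);
          return 0;
  }

  void snp_resume_attestation(u64 *transaction_id)
  {
          mutex_lock(&snp_pause_attestation_lock);

          snp_attestation_paused = false;
          *transaction_id = ++snp_transaction_id;

          mutex_unlock(&snp_pause_attestation_lock);
  }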

-Mike