[vdso-time] Avoid using rdtsc for SGX 1
The vdso-time crate uses the rdtscp instruction internally. This causes an SGX exception when running on SGX 1 hardware, since SGX 1 does not allow rdtscp inside an enclave. I think there are three strategies to fix it:
- Detect the SGX version at runtime in the LibOS. If the version is SGX 1, then avoid using vdso-time.
- Extend the vdso-time crate so that it can fall back to the OCall version on SGX 1 hardware.
- Extend the vdso-time crate so that it avoids using rdtscp in the vDSO execution path on SGX 1 hardware. This may lose some accuracy, but the performance is still good.
I think option 3 is the most promising.
For option 3, it means that we return coarse clock time when the user wants high-resolution clock time, e.g., returning CLOCK_MONOTONIC_COARSE values when the user asks for CLOCK_MONOTONIC.
In Linux, the resolution of a high-resolution clock (e.g., CLOCK_MONOTONIC) might be 1 ns (as reported by clock_getres()), while the resolution of a coarse clock (CLOCK_MONOTONIC_COARSE) might be 4,000,000 ns, i.e., 4 ms.
However, an OCall that performs clock_gettime only costs thousands of nanoseconds, e.g., 4,000 ns. Since 4,000,000 / 4,000 = 1,000, the coarse clock's granularity is about 1,000 times the cost of a single OCall, so I am afraid the coarse resolution cannot meet users' requirements.
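The arithmetic above can be checked directly. The figures are the examples from this thread (a 4 ms coarse resolution and a 4,000 ns OCall), not measurements:

```rust
fn main() {
    // Example figures from the discussion above.
    let coarse_res_ns: u64 = 4_000_000; // CLOCK_MONOTONIC_COARSE resolution, ~4 ms
    let ocall_cost_ns: u64 = 4_000;     // cost of one clock_gettime OCall

    // The coarse clock's granularity is ~1000x the time one OCall takes,
    // so falling back to the coarse clock can be far worse for accuracy
    // than simply paying for the OCall.
    let ratio = coarse_res_ns / ocall_cost_ns;
    assert_eq!(ratio, 1000);
    println!("coarse resolution is {}x the OCall cost", ratio);
}
```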
In SGX 1, if we emulate rdtsc via a LibOS exception handler, it might cause problems. The rdtsc instruction is invoked inside a loop; if each emulated rdtsc costs too much time, we might retry many times in the loop, or even retry forever. (These are just my thoughts; I haven't tried it on SGX 1.)
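The retry concern can be illustrated with a toy model of the seqlock-style read loop the vDSO uses: the kernel bumps a sequence counter while updating its timekeeping data, and readers retry whenever the counter changes mid-read. All names below are made up for illustration, and the "slow read" is simulated rather than a real emulated rdtsc; the point is that the slower the read, the more likely a concurrent update lands inside it and forces a retry.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

static SEQ: AtomicU32 = AtomicU32::new(0);

// Stand-in for reading the timekeeping data; in the real vDSO this
// includes reading the TSC. Here we bump SEQ for the first few reads to
// model kernel updates landing while a slow (emulated) rdtsc is in flight.
fn slow_read(updates_left: &mut u32) -> u64 {
    if *updates_left > 0 {
        *updates_left -= 1;
        SEQ.fetch_add(2, Ordering::SeqCst); // concurrent update: seq changes
    }
    42 // dummy timestamp
}

// Returns (timestamp, number of retries the loop needed).
fn vdso_style_gettime(mut concurrent_updates: u32) -> (u64, u32) {
    let mut retries = 0;
    loop {
        let seq_before = SEQ.load(Ordering::SeqCst);
        let t = slow_read(&mut concurrent_updates);
        let seq_after = SEQ.load(Ordering::SeqCst);
        if seq_before == seq_after {
            return (t, retries); // consistent snapshot
        }
        retries += 1; // data changed mid-read: must retry
    }
}

fn main() {
    // Each update that lands during the slow read forces one more retry;
    // a read slow enough to always overlap an update would never return.
    let (_, retries) = vdso_style_gettime(3);
    assert_eq!(retries, 3);
    println!("ok");
}
```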
Hence, I think option 2 is better.