Add `time` crate
This PR is a simplification of #569. It adds the timer abstraction without implementing or using it anywhere.
We would implement `ClockSource` for various ZSTs, e.g. `Pit`, `Tsc`, and `Hpet`.
Then in `captain` we'd register them like so:
```rust
time::register_early_sleeper::<pit::Pit>().expect("couldn't set PIT as early sleeper");

// Discover devices including the HPET.
device_manager::early_init(kernel_mmi_ref.lock().deref_mut())?;

// Now we try to set the HPET as the early sleeper. We prefer it over the PIT as it's more accurate.
time::register_early_sleeper::<hpet::Hpet>();

time::register_clock_source::<rtc::Rtc>().expect("failed to initialise a realtime clock");

time::register_clock_source::<tsc::Tsc>()
    .or_else(|_| time::register_clock_source::<hpet::Hpet>())
    .or_else(|_| time::register_clock_source::<pit::Pit>())
    .expect("failed to initialise a monotonic clock");
```
You mentioned in our discussion that you would prefer clock sources to be registered upon discovery, like device drivers. The issue with that is that certain clock sources, like the TSC, aren't discovered. In the case of the TSC, we have to query CPUID to determine whether or not it's supported.

I feel like choosing a clock source based on certain characteristics exposed by the trait, rather than a fixed order, will significantly complicate things. A number of different factors should be considered when choosing a clock, including accuracy, access speed, whether it uses interrupts, and resolution.
> You mentioned in our discussion that you would prefer clock sources to be registered upon discovery, like device drivers. The issue with that is that certain clock sources, like the TSC, aren't discovered. In the case of the TSC, we have to query CPUID to determine whether or not it's supported.
Yes, I do think that's the best way forward. In this case, by "discovery" I mean that the crate that implements support for each clocksource should be the one to determine whether that clock source is valid before configuring it and registering it with whatever central timekeeping or clock-provider crate. There's really no other way to do it, since each clocksource crate is effectively the only source of truth for whether a clocksource exists and how to configure it.

In your TSC example, the `tsc` crate should be the one issuing CPUID commands to determine whether the TSC exists and whether it's a stable clocksource, and to determine its period/frequency/resolution/whatever other clocksource characteristics. The alternative is that the `captain` or whatever other init crate has to encode that information within it (?), which is poor design for myriad reasons.

It is indeed true that those `init` routines (e.g., `tsc::init()`) will have to be explicitly invoked, but that's true of many devices. That doesn't stop them from being "discoverable", it just means that we have to explicitly invoke their "discovery" (`init`) routines in order to check whether they exist. IMO, that's the job of the `device_manager`'s `init`/`early_init` routines.
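Roughly, what I have in mind for the `tsc` crate is something like the sketch below; every name in it is illustrative, including the `time::register_monotonic` registration function:

```rust
use core::time::Duration;

// Stand-in for the central timekeeping crate's registration API;
// the real function name and signature are up for discussion.
mod time {
    use core::time::Duration;
    pub fn register_monotonic(_now: fn() -> Duration) -> Result<(), &'static str> {
        Ok(())
    }
}

/// The `tsc` crate's own "discovery" routine: it alone decides whether a
/// usable TSC exists, configures it, and only then registers it.
pub fn init() -> Result<(), &'static str> {
    // A real implementation would query CPUID here for TSC presence and
    // invariance, and calibrate the tick period against a known clock.
    let tsc_usable = cfg!(target_arch = "x86_64"); // placeholder check
    if !tsc_usable {
        return Err("no usable TSC");
    }
    time::register_monotonic(now)
}

fn now() -> Duration {
    // Placeholder: a real implementation would read the TSC and scale it by
    // the calibrated period.
    Duration::ZERO
}
```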
> I feel like choosing a clock source based on certain characteristics exposed by the trait, rather than a fixed order, will significantly complicate things. A number of different factors should be considered when choosing a clock, including accuracy, access speed, whether it uses interrupts, and resolution.
Absolutely, it's totally fine to have a fixed order of clocksources. That fixed order can either use specific clocksources by name, or simply keep track of a priority list of each clocksource category based on characteristics. In the general case, it's impossible to identify a fixed order by name since that's arch-specific and even platform-specific, but it's okay for now.
The clocksource trait is missing quite a few components describing the characteristics of the clocksource. In my mind, that trait exists solely for two reasons: to abstract access to concrete hardware clocks, and to describe the frequency, resolution, etc. of each clock.

I don't mind the idea of using an associated type to establish what kind of clocksource it is, but I do question why it's beneficial to do it that way. Typically an associated type is used as more than just a simple marker trait, e.g., in `Iterator` the `Item` associated type is used to set the return type of `next()`. We're not really using it as anything beyond a marker, so it'd be more ergonomic and easier to understand if we just returned the type of the clocksource via a standard trait method.

If we ever wanted to offer specific methods that differ between a monotonic clock and a wall-clock clocksource, then the associated-type-based design would be more appropriate.
In general, I feel like this design forces the usage of generic types for no benefit, both in the clocksource trait and in the registration function.
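As a point of comparison, a registration function without generics could just take the clock's reading function directly, along these lines (illustrative only):

```rust
use core::sync::atomic::{AtomicUsize, Ordering};
use core::time::Duration;

static MONOTONIC_CLOCK_FUNCTION: AtomicUsize = AtomicUsize::new(0);

/// Non-generic registration: the clocksource crate probes and configures its
/// hardware itself, then simply hands over a reading function.
pub fn register_monotonic_clock(now: fn() -> Duration) {
    MONOTONIC_CLOCK_FUNCTION.store(now as usize, Ordering::SeqCst);
}
```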
Aaaand GitHub removed everything that I typed up :smile:.
> In your TSC example, the `tsc` crate should be the one issuing CPUID commands to determine whether the TSC exists and whether it's a stable clocksource, and to determine its period/frequency/resolution/whatever other clocksource characteristics. The alternative is that the `captain` or whatever other init crate has to encode that information within it (?), which is poor design for myriad reasons.
I'm not sure if I understand. In the API proposed by this crate, the only information about clock sources encoded in `captain` (or any other init crate) is the order. For example, the CPUID commands would be run in `tsc::Tsc`'s implementation of `ClockSource::exists`, found in the `tsc` crate.
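For what it's worth, that check might look something like this (x86_64 only; a real implementation would also verify that extended leaf `0x8000_0007` is supported before querying it):

```rust
use core::arch::x86_64::__cpuid;

// Simplified stand-in for `<Tsc as ClockSource>::exists`.
pub fn tsc_exists() -> bool {
    unsafe {
        // CPUID leaf 1, EDX bit 4: a TSC is present.
        let has_tsc = __cpuid(0x1).edx & (1 << 4) != 0;
        // CPUID leaf 0x8000_0007, EDX bit 8: the TSC is invariant
        // (constant rate), which is what makes it usable as a clock source.
        let invariant = __cpuid(0x8000_0007).edx & (1 << 8) != 0;
        has_tsc && invariant
    }
}
```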
> It is indeed true that those `init` routines (e.g., `tsc::init()`) will have to be explicitly invoked, but that's true of many devices. That doesn't stop them from being "discoverable", it just means that we have to explicitly invoke their "discovery" (`init`) routines in order to check whether they exist. IMO, that's the job of the `device_manager`'s `init`/`early_init` routines.
`ClockSource::exists` is the "discovery", not `ClockSource::init`. `init` is only run if `exists` successfully "discovers" the clock. This is already done in `register_hardware_clock`.
> The clocksource trait is missing quite a few components describing the characteristics of the clocksource. In my mind, that trait exists solely for two reasons: to abstract access to concrete hardware clocks, and to describe the frequency, resolution, etc. of each clock.
How would we expose the frequency? Would we define functions like so:
```rust
pub fn frequency(ty: ClockType) -> usize;
```
As soon as we include more than one function, we can no longer technically use atomics because there could be race conditions if someone calls a function while we're in the middle of registering a clock. In practice, I don't think this is of much concern because we don't ever modify the atomics after startup. I don't mind adding the functions, but I'm not quite sure of the use case. When would a user need the frequency of the currently running clock?
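To illustrate the race I mean (names here are made up for the example): with two separate atomics, a reader can observe the new clock's `now` function paired with the previous clock's frequency.

```rust
use core::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
use core::time::Duration;

// Two independent atomics cannot be updated as a single unit.
static MONOTONIC_NOW: AtomicUsize = AtomicUsize::new(0);
static MONOTONIC_FREQUENCY_HZ: AtomicU64 = AtomicU64::new(0);

pub fn register(now: fn() -> Duration, frequency_hz: u64) {
    MONOTONIC_NOW.store(now as usize, Ordering::SeqCst);
    // A caller that reads both values between these two stores observes the
    // new clock's function paired with the previous clock's frequency.
    MONOTONIC_FREQUENCY_HZ.store(frequency_hz, Ordering::SeqCst);
}
```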
Also, what does etc. specifically include? Resolution is just the inverse of frequency, and I don't think there's a way of determining the accuracy of a hardware clock.
> I don't mind the idea of using an associated type to establish what kind of clocksource it is, but I do question why it's beneficial to do it that way. Typically an associated type is used as more than just a simple marker trait, e.g., in `Iterator` the `Item` associated type is used to set the return type of `next()`.
IIUC you're proposing something like this:
```rust
use core::sync::atomic::{AtomicUsize, Ordering};
use core::time::Duration;

// Addresses of the registered clocks' `now` functions; 0 means "not set".
static REALTIME_CLOCK_FUNCTION: AtomicUsize = AtomicUsize::new(0);
static MONOTONIC_CLOCK_FUNCTION: AtomicUsize = AtomicUsize::new(0);

pub enum RegisterError {
    /// The clock source's hardware doesn't exist.
    NonExistent,
    /// The clock source exists but failed to initialise.
    Init(&'static str),
}

impl From<&'static str> for RegisterError {
    fn from(e: &'static str) -> Self {
        Self::Init(e)
    }
}

pub trait ClockSource {
    fn exists() -> bool;
    fn init() -> Result<(), &'static str>;
    fn now() -> Duration;
    fn clock_type() -> ClockType;
}

pub enum ClockType {
    Realtime,
    Monotonic,
}

impl ClockType {
    fn func_addr(&self) -> &'static AtomicUsize {
        match self {
            Self::Realtime => &REALTIME_CLOCK_FUNCTION,
            Self::Monotonic => &MONOTONIC_CLOCK_FUNCTION,
        }
    }
}

pub fn register_clock_source<T>() -> Result<(), RegisterError>
where
    T: ClockSource,
{
    // Probe the clock; only initialise and register it if it actually exists.
    if T::exists() {
        T::init()?;
        T::clock_type().func_addr().store(T::now as usize, Ordering::SeqCst);
        Ok(())
    } else {
        Err(RegisterError::NonExistent)
    }
}

pub fn now(ty: ClockType) -> Duration {
    let addr = ty.func_addr().load(Ordering::SeqCst);
    if addr == 0 {
        panic!("time function not set");
    } else {
        // Reconstruct the registered `now` function from its stored address.
        let f: fn() -> Duration = unsafe { core::mem::transmute(addr) };
        f()
    }
}
```
(excuse the atrocious indenting)
I thought that these kinds of type shenanigans weren't uncommon in Rust. I guess the only things that GATs provide are a compile-time guarantee that the clock type for a clock source won't change and a tiny performance improvement, but I don't think either of those is particularly important. I'm fine with changing it to use enums.
This crate may also be of interest, though we probably don't need to use it directly: https://lib.rs/crates/embedded-time
There are ways of avoiding floating-point operations by expressing the frequency in GHz (1 GHz means one tick per nanosecond) when calculating subsecond intervals. It is probably simpler to use the period, as that is a simple multiplication: `period * counter`.
FWIW I think Linux uses frequency internally and OS Dev Wiki suggests calculating the frequency of the HPET, although neither of these is particularly important.
The HPET provides the period, whereas the TSC and PIT provide the frequency.
None of this is currently a problem, as we only use the frequency to compare clocks. However, if we know the frequency (or period), we don't need the `instant_to_duration` and `duration_to_instant` functions, as they are `period * counter` and `duration / period`, respectively, regardless of the clock source.
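For illustration, the two conversions reduce to the following (period in nanoseconds per tick to keep the example simple; the HPET reports its period in femtoseconds, so real code needs wider arithmetic and overflow care):

```rust
use core::time::Duration;

/// Counter ticks -> elapsed time, i.e. counter * period.
/// (Overflow is ignored for the sake of the example.)
fn instant_to_duration(counter: u64, period_ns: u64) -> Duration {
    Duration::from_nanos(counter * period_ns)
}

/// Elapsed time -> counter ticks, i.e. duration / period.
fn duration_to_instant(duration: Duration, period_ns: u64) -> u64 {
    (duration.as_nanos() / period_ns as u128) as u64
}
```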