
Addition of a bunch of info on CPU and RAM

Open moonheart08 opened this issue 5 years ago • 5 comments

This is mostly a note for myself, and so other people can see what I'm doing.

CPU additions

Ability to read back the following:

  • CPU Name (model name in /proc/cpuinfo)
  • CPU Supported Features (flags in /proc/cpuinfo. The field name seems inconsistent between x86 and ARM, so this may need extra logic.)
  • CPU Bugs (bugs in /proc/cpuinfo. May be None.)
  • CPU Cache info
  • CPU Family

Sudo mandatory features

  • CPU Socket(s)
  • Other DMI data?

RAM additions

All cards

  • ECC Enabled (Seems like it'll be stupidly difficult; Linux provides poor facilities for checking, and Windows may not provide them at all. Will need to see.)
  • Buffered (See above)
  • Offline/Online memory (May be Linux-only)

Per card

  • RAM speed (Requires sudo(!). Will need to know how that should be implemented.)
  • Card Size (See above)
  • Type (DDR4, DDR3, LPDDR2, etc etc etc) (See above)

moonheart08 avatar Feb 21 '20 15:02 moonheart08

Great things to add to heim!

A few thoughts for discussion:

  • I would really appreciate separate PRs for each feature, for the sake of easier code review
  • CPU flags can be represented with an enum, and this abstract Core struct of yours can return something that implements Iterator<Item = Flag>
  • As with heim::disk::FileSystem, there should be a Flag::Other(String) fallback variant in case an unknown flag occurs
  • The same goes for CPU core bugs

So, this could probably look something like

fn cpu_cores() -> impl Stream<Item = Result<Core>> { /* .. */ }

where each Core struct has an implementation like

impl Core {  // Or CpuCore?
    fn name(&self) -> &str { /* .. */ }
    fn flags(&self) -> Flags { /* .. */ } // `Flags` in here is an Iterator over `Flag` enum variants
    fn bugs(&self) -> Bugs { /* .. */ } // Same as with `Bugs`
}
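The Flag enum with its Other(String) fallback could be sketched as below. This is only an illustration, not heim's actual API: the variant names, the parse_flags helper, and the use of FromStr are all assumptions.

```rust
use std::str::FromStr;

// Sketch of a `Flag` enum with an `Other(String)` fallback, mirroring the
// heim::disk::FileSystem approach. Variant names here are illustrative only.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Flag {
    Fpu,
    Sse2,
    Avx2,
    // Fallback for flags this enum does not know about yet.
    Other(String),
}

impl FromStr for Flag {
    type Err = std::convert::Infallible;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        Ok(match s {
            "fpu" => Flag::Fpu,
            "sse2" => Flag::Sse2,
            "avx2" => Flag::Avx2,
            other => Flag::Other(other.to_string()),
        })
    }
}

// Turn the raw whitespace-separated `flags` field from /proc/cpuinfo
// into an iterator of `Flag` values (hypothetical helper).
pub fn parse_flags(raw: &str) -> impl Iterator<Item = Flag> + '_ {
    raw.split_whitespace().map(|s| s.parse().unwrap())
}
```

Unknown flags degrade gracefully to Flag::Other instead of failing the whole parse, which matches the fallback-variant idea above.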

I'm not sure how to implement the DMI stuff yet. Shell commands are strictly forbidden (so we cannot use dmidecode, for example); maybe some DMI/SMBIOS crates already exist? So far it is hard to say how they could be plugged into Core, so let's gather some info about how we can fetch these details.

As for the RAM modules: as long as fetching any of this information requires admin privileges, I see no problem with adding a note to the function documentation saying that on Linux this method requires extra privileges or it will fail with an error.

svartalf avatar Feb 21 '20 18:02 svartalf

There's an incomplete DMI crate that I can probably work on to enhance. Hah, this has just become a walk upstream: Nushell, to here, to the dmidecode crate.

moonheart08 avatar Feb 21 '20 19:02 moonheart08

Will work on one feature at a time. To start, I'll bring in a bunch of common CPU info, like cache size (just figured out how to read it: cache info is scattered across a bunch of files in /sys/devices/system/cpu/cpu${cpunum}/cache/ and in more files under /sys/devices/system/cpu/cpu${cpunum}/cache/index${cacheid}/, so it's complicated nevertheless). Will probably introduce structs for cache info.
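As a rough illustration of what parsing those sysfs files involves: the `size` file under each index${cacheid} directory holds a short string such as "32K". A minimal, hedged sketch of a parser for that format (the function name and the M/plain-bytes handling are assumptions; Linux typically reports these sizes with a "K" suffix):

```rust
// Sketch: parse the size strings found in
// /sys/devices/system/cpu/cpu${n}/cache/index${m}/size (e.g. "32K\n").
// The "M" and plain-byte branches are defensive guesses, not observed formats.
pub fn parse_cache_size(raw: &str) -> Option<u64> {
    let s = raw.trim();
    let (digits, mult) = match s.as_bytes().last()? {
        b'K' | b'k' => (&s[..s.len() - 1], 1024),
        b'M' | b'm' => (&s[..s.len() - 1], 1024 * 1024),
        _ => (s, 1),
    };
    digits.parse::<u64>().ok().map(|n| n * mult)
}
```

Returning Option lets a caller skip malformed or missing entries instead of aborting the whole cache walk.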

Will need to figure out how I want to handle CPU-specific data. I personally own an AMD Zen 2 CPU, and have access to a server that has an Intel Kaby Lake Xeon in it. For example, the Intel CPU has CPU-specific info on power states, and I imagine the AMD one does too (can't check right now; will check when I'm home).

moonheart08 avatar Feb 21 '20 19:02 moonheart08

Great, that sounds really cool!

Since reading CPU cache info sounds like an expensive operation, maybe it should be moved to a separate method? E.g., calling cores() would get some basic information on the CPU cores, and if you want the CPU cache details too, you would use

async fn cache(&self) -> Cache { /* .. */ }

Please note that this is not my final decision on this matter, just an idea for discussion. In the end it might look like the heim::process::Process struct, where fetching any info requires calling another async method, which in this case signals that the call might take some time.
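The on-demand idea above could be sketched roughly as follows. This is a hypothetical illustration only, shown synchronously for brevity (the real heim method would be async, as in the cache() signature above); the Cache fields and the injected reader are assumptions.

```rust
// Hypothetical cache details; field layout is a guess for illustration.
#[derive(Debug, Clone, PartialEq)]
pub struct Cache {
    pub l1d: u64,        // bytes
    pub l1i: u64,        // bytes
    pub l2: u64,         // bytes
    pub l3: Option<u64>, // bytes; not every CPU exposes an L3
}

pub struct Core {
    id: usize,
    // The expensive sysfs walk is injected as a closure, so enumerating
    // cores stays cheap and the walk only happens when `cache()` is called.
    cache_reader: Box<dyn Fn(usize) -> Cache>,
}

impl Core {
    pub fn new(id: usize, cache_reader: Box<dyn Fn(usize) -> Cache>) -> Self {
        Core { id, cache_reader }
    }

    pub fn id(&self) -> usize {
        self.id
    }

    /// Expensive: only touches the cache hierarchy when actually called.
    pub fn cache(&self) -> Cache {
        (self.cache_reader)(self.id)
    }
}
```

Keeping the reader behind the method mirrors the heim::process::Process pattern: cheap enumeration up front, with each extra detail paid for only when requested.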

svartalf avatar Feb 21 '20 19:02 svartalf

Yea, a lot of this requires FS calls on Linux (not the best perf, not the worst), and probably an annoyingly deep tree of API calls on Windows. I'll probably move stuff like cache info, RAM info, etc. behind structs with async calls. Gathering cache info, for example, would require ~15 filesystem operations if my count is right, and that's only on systems with the usual L1i/L1d -> L2 -> L3 structure. Some Intel CPUs have an L4 cache.

moonheart08 avatar Feb 21 '20 19:02 moonheart08