node-problem-detector
Add UEFI Common Platform Error Record (CPER) support
CPER is the format used to describe platform hardware errors in various ACPI tables, such as ERST, BERT, and HEST.
The event severity message is printed here: https://github.com/torvalds/linux/blob/v6.7/drivers/firmware/efi/cper.c#L639
Example kernel log output for each severity is shown below.
Corrected error:

```
kernel: {37}[Hardware Error]: Hardware error from APEI Generic Hardware Error Source: 162
kernel: {37}[Hardware Error]: It has been corrected by h/w and requires no further action
kernel: {37}[Hardware Error]: event severity: corrected
kernel: {37}[Hardware Error]: Error 0, type: corrected
kernel: {37}[Hardware Error]: section_type: memory error
kernel: {37}[Hardware Error]: error_status: 0x0000000000000400
kernel: {37}[Hardware Error]: physical_address: 0x000000b50c68ce80
kernel: {37}[Hardware Error]: node: 1 card: 4 module: 0 rank: 0 bank: 1 device: 14 row: 58165 column: 816
kernel: {37}[Hardware Error]: error_type: 2, single-bit ECC
kernel: {37}[Hardware Error]: DIMM location: CPU 2 DIMM 30
```
Recoverable error:

```
kernel: {3}[Hardware Error]: Hardware error from APEI Generic Hardware Error Source: 4
kernel: {3}[Hardware Error]: event severity: recoverable
kernel: {3}[Hardware Error]: Error 0, type: recoverable
kernel: {3}[Hardware Error]: fru_text: B1
kernel: {3}[Hardware Error]: section_type: memory error
kernel: {3}[Hardware Error]: error_status: 0x0000000000000400
kernel: {3}[Hardware Error]: physical_address: 0x000000393cfe5040
kernel: {3}[Hardware Error]: node: 2 card: 0 module: 0 rank: 0 bank: 3 device: 0 row: 34719 column: 320
kernel: {3}[Hardware Error]: DIMM location: not present. DMI handle: 0x0000
```
Fatal error:

```
kernel: BERT: Error records from previous boot:
kernel: [Hardware Error]: event severity: fatal
kernel: [Hardware Error]: Error 0, type: fatal
kernel: [Hardware Error]: fru_text: DIMM B5
kernel: [Hardware Error]: section_type: memory error
kernel: [Hardware Error]: error_status: 0x0000000000000400
kernel: [Hardware Error]: physical_address: 0x000000393d7e4040
kernel: [Hardware Error]: node: 2 card: 4 module: 0 rank: 0 bank: 3 device: 0 row: 34743 column: 256
```
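As an illustrative sketch (Go, not the actual rule set this PR adds to node-problem-detector), a single regexp keyed on the `event severity:` token that cper.c prints is enough to classify all three examples above; the severity names are the ones linked in the kernel source:

```go
package main

import (
	"fmt"
	"regexp"
)

// Hypothetical pattern for illustration only: it matches the
// "event severity: <level>" line emitted by drivers/firmware/efi/cper.c.
var cperSeverity = regexp.MustCompile(`\[Hardware Error\]: event severity: (corrected|recoverable|fatal|info)`)

func main() {
	lines := []string{
		"kernel: {37}[Hardware Error]: event severity: corrected",
		"kernel: {3}[Hardware Error]: event severity: recoverable",
		"kernel: [Hardware Error]: event severity: fatal",
	}
	for _, l := range lines {
		if m := cperSeverity.FindStringSubmatch(l); m != nil {
			fmt.Printf("CPER record with severity %q\n", m[1])
		}
	}
}
```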
Hi @wenjianhn. Thanks for your PR.
I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
/test pull-npd-e2e-test
Thanks @wenjianhn!
/lgtm
/cc @vteratipally
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: hakman, wenjianhn

Once this PR has been reviewed and has the lgtm label, please assign xueweiz for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/retest
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten