HOTP concerns
How much effort would be required to add such support? The aim would be to not force users onto OpenPGP-compatible smartcards/tokens. There is the gnupg-pkcs11-scd project, but I'm unsure how feasible it would be to incorporate it.
I hope this isn't too ignorant a question — unfortunately, I'm not very good with technicalities.
The Nitrokey 3 (the main token used with Heads, with reverse-HOTP support) does have PIV slots as well, i.e. you can use both with it. If you skip reverse HOTP, Yubico tokens support both as well.
But that wasn't really the question, at least I hope I didn't make it sound this way. I'd like to use smartcards/tokens that are not OpenPGP-compatible, like Smartcard-HSM.
I don't quite get the use case here @clinkist. You want to replace the OpenPGP smartcard with a PKCS#11 one (that answers the what), but not the why nor the how.
Purism and Heads developed reverse HOTP on top of the Nitrokey Pro version 1, which became the Nitrokey Pro v2/Librem Key, so that the OpenPGP-secured private key could continue to be used to sign content, while the Nitrokey could be used through reverse HOTP for remote attestation. So the OpenPGP smartcard + firmware of the USB security dongle can be used both to sign and for remote attestation.
Yes, Heads could use another standard for signing, with HSM dongles if there is a need for them. But why and how? What would be used instead, which technology would be used for remote attestation, and who would do the work?
Does it really need that big justification? Some people, like myself, consider smartcards to be more secure in some specific threat models. Also, what does the "reverse HOTP" bring to the table? User could verify the OTP themselves, manually, like they could always do in Heads, IIRC.
I'm also not sure what the reason is for the question of "who would do the work". I am merely asking about the effort needed, to get a better view and maybe some pointers for dabbling in it myself. I'm not sure why such feature proposals/questions should carry the burden of planning/assigning the work – isn't proposing new features and improvements a benefit for open-source projects in itself? Of course, as long as they don't completely lack merit and aren't completely off-topic, and I hope my proposal meets these criteria.
@clinkist please take a look at http://osresearch.net/Prerequisites#usb-security-dongles-aka-security-token-aka-smartcard which was updated last week.
And tell me what is left unclear about what HOTP brings to the table in terms of remote attestation, about TOTP's limitation of requiring clocks to be in sync for manual verification (which, from experience, is often skipped), and about why an HSM smartcard would be better than any Yubikey/Solo key or the other currently supported keys (without reverse HOTP).
I'm asking who would do the work because whatever additional tools would need to be packed in, they would need to be wrapped under oem-factory-reset for the OEM use case + user re-ownership, tested, and supported forever in the init up to kexec boot for everything signature related.
This is why I ask for the "why" first. What card, and why is it better? As a community member answered earlier, the currently supported USB security dongles could also support PKCS#11. There is a desire to switch from gnupg to another OpenPGP toolstack (so OpenPGP smartcards stay supported forever), space in the SPI chips is limited, and adding tools needs really good justification.
Sorry to insist here: why the need for additional smartcard support? Server space? Cloud computing? Unattended smartcard interaction? I need to understand the use case better.
For example, if post-quantum crypto is better supported in XYZ, that opens the door to justify looking at alternatives and at SPI chip space consumption. Ideally we would use a toolstack that supports the smartcard the user needs, but first we need to understand why it is needed.
Current codebase:
- use gpg and the distro's public key to verify the integrity+authenticity of Tails, Arch Linux, QubesOS, or a user's detach-signed ISO.
- use gpg + smartcard + the user's public key fused in the firmware to sign the /boot digest and verify /boot integrity and authenticity at each boot, so the user can be assured things are as last signed with the smartcard-protected private key
- reverse HOTP is used to automatically verify/attest the TPMTOTP secret + counter (as opposed to the time-based TOTP protocol) against a supported USB security dongle; it returns success/failure to Heads and visually attests the firmware state to the user (flashing green/red) prior to continuing with automated boot if successful.
One advantage of smartcards over hardware keys: PIN pad support. Technically, this only shifts the attacker's PIN-sniffing efforts from software to "real life", but for some, this might be precisely the goal.
As for HOTP/TOTP, clock syncing can definitely be problematic. However, with HOTP's "automatic attestation", there is a risk that my security key could be replaced by a device designed to confuse me and subvert my trust. I know that's a very specific threat scenario, but it is one that is made possible by that "automation".
Also, if I understand correctly, HOTP would be vulnerable to replay attacks. What would stop me from launching an "evil maid" attack where I boot your laptop a few times to gather HOTP codes and then provide those to my custom malware that will deceive your hardware token?
There are no HOTP codes to gather, replay is not a feasible scenario. There are attacks to extract the HOTP secret from a token (like there are attacks for any secret on them, incl. smartcards). For previous token generations there were also CVEs assigned for successful attacks against their secrets (incl. HOTP, but not limited to it). That's a general risk and can happen anytime. To my knowledge, none of the CVE-assigned attacks were feasible remotely, i.e. physical access to the token and disassembly was necessary. The latter mitigates the risk more or less, depending on individual usage.
Your argument re PIN pad support certainly holds: smartcard readers with a PIN pad avoid the USB interface. That would be an advantage of a smartcard. On the other hand, a current token PIN can be alphanumeric (higher entropy), and there is no limit on changing it. For example, if you had to use the token on a PC of unknown integrity, you could change the PIN next time and, given the token stays physically safe in the meantime, the risk of attacking the measured boot via PIN sniffing is limited by those preconditions.
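To put rough numbers on the entropy point (the PIN shapes below are chosen purely for illustration, not taken from any particular token or reader):

```python
import math

def pin_entropy_bits(alphabet_size: int, length: int) -> float:
    # Entropy in bits of a uniformly random PIN of the given shape
    return length * math.log2(alphabet_size)

print(f"6-digit numeric PIN:     {pin_entropy_bits(10, 6):.1f} bits")   # ~19.9
print(f"8-char alphanumeric PIN: {pin_entropy_bits(62, 8):.1f} bits")   # ~47.6
```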
There are no HOTP codes to gather, replay is not a feasible scenario.
How is that so? As I understand it, an OTP code is generated in Heads (after the secret is unsealed) and sent to the hardware token. I can boot your system, insert a dummy token to receive the codes and then replace Heads with my own software that will send the OTP codes from memory. This will trick you into thinking that everything is OK. This scenario wouldn't work if the codes sent to the hardware keys weren't really OTP codes, and if there were a challenge-response mechanism instead. However, my understanding is that this is not the case.
Documentation for HOTP is under the projects usage section of README.md https://github.com/Nitrokey/nitrokey-hotp-verification?tab=readme-ov-file#verifying-hotp-code
Yes, I've read that. I do not see anything preventing the evil maid / replay attack I described.
The counter being incremented, a calculated HOTP value cannot be reused ("replayed") unless the shared secret is known and combined with the counter to generate an HOTP code to be validated, no?
The counter can be read/written under /boot. It is incremented on successful validation of HOTP. Am I missing something?
Am I missing something?
Believe me, it's awkward for me to suggest so, but I think you may be!
- Leave your laptop somewhere I have access to.
- I turn your laptop on a few times with my own version of Nitrokey plugged in. This device saves the HOTP codes received from Heads.
- I replace Heads on your laptop with a fake one, that will send the codes I got in step two.
- You return to your laptop unaware of what happened. You insert your Nitrokey and boot the system. The Nitrokey flashes green – it received a valid HOTP code from the fake firmware I planted. Your Nitrokey's counter wasn't increased, and I have saved quite a few codes, which will hopefully last for enough boots to give me the opportunity to get to your laptop again and obtain the decryption password you entered.
The reverse HOTP value is calculated from the shared secret + counter, where the counter is incremented at each success of reverse HOTP verification.
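As a rough model of this counter-based flow (a minimal sketch only: the helper names, the simplified code derivation, and the look-ahead window size are illustrative assumptions, not the actual unseal-hotp or hotp-verification implementation; it models only the token-side counter bookkeeping):

```python
import hashlib

def code_for(secret: bytes, counter: int) -> str:
    # Stand-in for the real RFC 4226 HOTP truncation; only the fact that the
    # code is derived from (secret, counter) matters for this illustration.
    return hashlib.sha1(secret + counter.to_bytes(8, "big")).hexdigest()[:6]

class Token:
    """Toy model of the dongle side of reverse HOTP."""
    def __init__(self, secret: bytes, window: int = 5):   # window size is an assumption
        self.secret, self.counter, self.window = secret, 0, window

    def verify(self, code: str) -> bool:
        # Accept a code for the stored counter or a few counters ahead
        # (look-ahead resync), then advance past the matched counter.
        for c in range(self.counter, self.counter + self.window + 1):
            if code_for(self.secret, c) == code:
                self.counter = c + 1     # "green LED": success, counter moves forward
                return True
        return False                     # "red LED": mismatch

secret = b"shared secret provisioned at OEM factory reset"
token = Token(secret)

# Host side (Heads): unseal the shared secret (only possible if measurements
# match), read its own counter copy from /boot, send the computed code.
boot_counter = 0
assert token.verify(code_for(secret, boot_counter)) is True    # normal boot
assert token.verify(code_for(secret, boot_counter)) is False   # same code again:
# rejected, because the token's counter has already advanced past it.
```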
Your attack scenario requires the attacker to have both the USB security dongle and the platform, to compromise both devices at once, capture the exchanged data, and put the counter on the USB security dongle back to its pre-increment value. Correct?
No, I don't think my description of the attack scenario suggested that it's necessary to compromise both devices.
OK, let me ask differently. What defines "successful" verification? Could I program a fake Nitrokey so that it would tell Heads "thanks for the OTP, verification successful, go on"?
Please see the beginning of the doc link again. Heads just relays the reverse-HOTP reply; the verification against the secret is done in the token (green/red LED). Let's not forget the user could simultaneously verify the TOTP as well, or decide to disregard any mismatch and boot anyway. Heads only keeps a counter allowing X unverified boots, and getting around that is easy by removing the SSD - nothing to do with the firmware/token.
Also consider that this reverse-HOTP feature we discuss is specific to Nitrokey so far. It is open source. One aspect to consider for your original issue/idea is also how much that would remain the case when switching to a different technology stack. The project maintainer already stated the "why" should be answered first. For the HOTP support that's an open question, are there smartcard/pinpad combos on the market that would allow to implement HOTP functionality and support a visual status display? If not, I don't understand why it is discussed. (If reverse-HOTP has an issue, that should be opened on its own because it affects most current heads users.)
Heads just relays the reverse-HOTP reply; the verification against the secret is done in the token (green/red LED).
Yet again, I fail to see how this is supposed to answer my question. If you could refer to its merit without vaguely pointing at the documentation, I would appreciate that!
For the HOTP support that's an open question, are there smartcard/pinpad combos on the market that would allow to implement HOTP functionality and support a visual status display?
No, I don't think so — not yet, at least.
If not, I don't understand why it is discussed.
It's quite easy to trace that back: this comment by @tlaurion, as I understood it, pointed to reverse HOTP verification as one of the reasons why the currently supported technology is superior to smartcards (which would then be an argument against efforts to support them). My perception is that this feature actually does users a disservice, by introducing a vulnerability stemming mainly from the convenience it is supposed to provide.
(If reverse-HOTP has an issue, that should be opened on its own because it affects most current heads users.)
For now, the discussion has naturally shifted in this direction. However, once we agree that the scenario I suggest presents a previously unconsidered vulnerability, then I agree that this matter should find its home in another, more appropriately named issue.
The reference by @tlaurion to HOTP was to show that a single token is employed for multiple purposes. It's a widely used, optional (yes), convenience (yes) feature of the firmware, and it is important to consider what automatism (checks) could replace it. It was put forward as an example of the considerations to be made regarding the effort to switch technologies.
(NB I'm not sure you are aware, but there are qemu images for heads. In case you want to try it.)
What defines "successful" verification? Could I program a fake Nitrokey so that it would tell Heads "thanks for the OTP, verification successful, go on"?
In my last reply I gave examples to illustrate that Heads does not care (no need for a fake token; 9 times - and the user would not know by default). Still, in your scenario we have the original token safe, so you'd have to either manipulate the Heads secrets in place without tripping the token verification, or produce a fake token to either replace (physically) or clone the original.
I think what the documentation would benefit from is a flow to answer your first question. RFC4226 Section 9 has an example, perhaps we can adapt it.
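For reference, the RFC 4226 computation itself is short. A minimal sketch, checked against the test vector from the RFC's appendix (plain HOTP as specified, not Heads or Nitrokey code):

```python
import hmac, hashlib, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the 8-byte big-endian counter,
    dynamic truncation, then reduction modulo 10^digits."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # low nibble of last byte picks the offset
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector: ASCII secret "12345678901234567890"
secret = b"12345678901234567890"
expected = ["755224", "287082", "359152", "969429", "338314",
            "254676", "287922", "162583", "399871", "520489"]
for counter, want in enumerate(expected):
    assert hotp(secret, counter) == want
print("RFC 4226 test vector verified")
```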
Further ideas for checks that could optionally replace HOTP when switching to a different smartcard: smartcards are sturdy but simple. A QR scanner (even a pinpad) is an extra burden for laptop users and a cost. Perhaps using an extra key slot? That's something the cards have plenty of.
@clinkist @Ingo-Albrecht is this misunderstanding stemming from https://github.com/linuxboot/heads-wiki/pull/203 having become too high level?
@clinkist: reverse HOTP (HMAC) is not perfect, but the attack scenario here is an oversimplification as well. I have attempted many times in the past to document how things work under Heads: TPM measured boot, the root of trust being in coreboot's bootblock, and physical access staying the main issue if there is no security in depth (and Heads has been criticized a lot in that regard, since there is no bootguard, no bootguard-related proprietary blobs, and no third-party key verifying the bootblock, which could theoretically fake the first reported measurement to the TPM and tamper with the root of trust). Then marketing speech complicates things, since a lot of people say that the root of trust is in the USB security dongle, which is also an oversimplification.
TLDR of previous attempts (https://github.com/linuxboot/heads-wiki/issues/116 https://github.com/linuxboot/heads-wiki/issues/62):
- the root of trust is in the bootblock, which reports FMAP and the bootblock (please do cbmem -L and cbmem -1 from the Heads recovery shell, and/or put Heads informational output into DEBUG/INFO mode so that you can see what happens at boot, which is otherwise hidden now that Heads board configs turned QUIET informational output on).
- coreboot extends TPM PCR2, then boots Heads (the payload), which extends TPM PCR7 and PCR5 (see https://osresearch.net/Keys/#tpm-pcrs)
- on OEM Factory Reset/Re-Ownership, secrets are provisioned into the different security components with a user-provided secret, with the OpenPGP public key fused into the rom, measured and extended in Heads' TPM PCR extend operations.
- then Heads tries to unseal the TPMTOTP secret, which is sealed into TPM nvram against TPM PCR values. If those values are the same (untampered bootblock, romstage, ramstage, postcar, cmos, payload, user-related public key, keyring, coreboot config values, Heads kernel modules loaded, etc.), then TPMTOTP is unsealed without error and TOTP/HOTP can be calculated. If a single bit is different in the measured content, no secret can be unsealed, no reverse HOTP happens, and no passphrase is asked from the user for the disk encryption key. A perfect tampering of bootblock up to payload would be needed to reproduce the measurements needed to unseal the TPM nvram region and generate a valid HOTP code to be verified within the +10 counter window, which will increment if valid (a conceptual sketch of this measure/extend/unseal gating follows this list).
- the user is asked to verify the TOTP on a secondary device, the same device from which the QR code was scanned or the secret was manually entered, which generates the same TOTP code if the system clocks are in sync
- if the board is an HOTP variant, then the TPMTOTP-related secret is verified against the USB Security dongle with the /boot-stored counter value (this is where there is a 'known' design vulnerability: you can change that counter value, yes, and Heads will generate different HOTP code to verify against USB Security dongle. BUT! That counter is increased in the USB Security dongle AND /boot at each success, so you cannot replay them.)
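As referenced in the list above, a conceptual sketch of the measure/extend/unseal gating (illustration only: SHA-256 and a plain equality check stand in for the real TPM PCR banks and sealing policy, and the stage names are placeholders):

```python
import hashlib

def extend(pcr: bytes, measured_blob: bytes) -> bytes:
    # TPM PCR extend: new PCR = H(old PCR || H(measured data))
    return hashlib.sha256(pcr + hashlib.sha256(measured_blob).digest()).digest()

def measure_boot(stages: list) -> bytes:
    pcr = bytes(32)                                 # PCRs start at all zeros
    for blob in stages:
        pcr = extend(pcr, blob)
    return pcr

# "Sealing": remember the PCR value the TPMTOTP/HOTP secret is bound to.
good_stages = [b"bootblock", b"romstage", b"ramstage", b"heads payload"]
sealed_against = measure_boot(good_stages)

# An untampered boot reproduces the same PCR, so the secret would unseal.
assert measure_boot(good_stages) == sealed_against

# Flip a single bit anywhere in any measured stage: the final PCR is unrelated,
# the policy is not satisfied, and the secret cannot be unsealed.
tampered = list(good_stages)
blob = bytearray(tampered[-1])
blob[0] ^= 0x01
tampered[-1] = bytes(blob)
assert measure_boot(tampered) != sealed_against
print("single-bit change -> different PCR -> no unseal")
```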
There are discussions for alternative implementations. And Heads is not yet perfect in terms of how it handles TPM2, primary handles and related issues (https://github.com/linuxboot/heads/issues/1651 https://github.com/linuxboot/heads/issues/1006 etc).
Tagging Nitrokey/Purism's Librem Key people (Heads integrates their Nitrokey module / hotp-verification toolstack for visual attestation, currently based on reverse HOTP): @JonathonHall-Purism @daringer (feel free to add others as needed to turn this discussion into issues that could be fixed, minimally, by more proper documentation).
As of now, docs are at https://osresearch.net/Prerequisites#usb-security-dongles-aka-security-token-aka-smartcard, recently updated by https://github.com/linuxboot/heads-wiki/pull/203. Maybe https://osresearch.net/Heads-threat-model/ should include HOTP-related threat modeling (Nitrokey/Purism dongles and HOTP-handling related design flaws/features?)
This issue shows how an answer to a high-level question triggers detail questions that remain unanswered in the doc. I think we can all take that for granted. If I skim over the commits in https://github.com/linuxboot/heads-wiki/pull/203, it does indeed remove some detail that might have helped. For example, "HOTP ...is strongly recommended" does not convey the convenience factor discussed above. The previous version was more agnostic. I don't mind that, but your answer above does contain detail about the measurement that I don't recall reading in the doc - would be good to cover, yes. I'll look again.
Heads will generate different HOTP code to verify against USB Security dongle. BUT! That counter is increased in the USB Security dongle AND /boot at each success, so you cannot replay them.
@tlaurion At least on my machine (Nitrokey's final x230-legacy) it's feasible to retract the hotp_counter, prompting an "Invalid counter", then simply refresh from the menu, and it will succeed and increment automagically. I actually noticed this a while ago, since the OS hashes the /boot files separately on each boot/resume, and then started copying the current counter value to a log to monitor it when I play with it. If this is still current, "at each success" should read "at each success and on refreshing HOTP/TOTP manually via the heads menu".
@tlaurion I wish I were proficient enough to understand most of the terms you described. At the very least, though, you have provided some useful pointers for future learning. Thank you!
BUT!
I still don't see why the replay is impossible. Let's say the counter value is 1002 when you boot your computer this morning. The HOTP verification is successful, so the counter increases to 1003 on both the boot partition and the USB dongle. Later today, you leave your laptop in your hotel room — this is where I come in. I sneak in, try to boot your computer, and use my custom dongle to record the HOTP value presented by your laptop for counter 1003. Then I replace your firmware with one that records your boot passphrase, having hardcoded the obtained HOTP code into it just a moment earlier. The next time you boot your laptop, your USB dongle will expect the correct HOTP code for counter 1003, and it will receive it.
One thing I hadn't thought through very well earlier is that it's probably not possible for me, as an attacker, to make that boot successful because I didn't obtain the TPM Disk Unlock Key. You will therefore know immediately after typing in your passphrase that something is wrong. I could generate some animations to deceive you into thinking it's a hardware display problem, but that would probably be futile. The point is that you revealed your passphrase to my firmware, and now I can try to steal your laptop overtly, which will give me access to the encrypted contents.
Again, I'm sorry that I can't discuss this with you at the same technical level. By the way, as far as documentation is concerned, it would really be nice to have everything described at the low level but in layman's terms, explaining every concept, so that inexperienced users could quickly get up to speed and understand exactly what it is they can expect from their system.
@tlaurion At least on my machine (Nitrokey's final x230-legacy) it's feasible to retract the hotp_counter, prompting an "Invalid counter", then simply refresh from the menu, and it will succeed and increment automagically.
Yes. It makes sense looking at the unseal-hotp code, which back in 2023 didn't die on a failed tpmr unseal operation, so the code continued instead of failing. Would need to retest, but x230-legacy dates from 2023, and gui-init and a lot of things have changed since then. Would need to fully read and retest https://github.com/linuxboot/heads/compare/f7019e80af3549fb38a5f661534b851a7bf9404f...aaeb63df78f8563c46d140f1dcdb51d380392048#diff-d0832bfa8bcbd1128aa957fad283dcdc4ae9c3c4bcac03791a4f5653d121c126
- https://github.com/linuxboot/heads/blob/f7019e80af3549fb38a5f661534b851a7bf9404f/initrd/bin/unseal-hotp (your version of unseal-hotp)
- https://github.com/linuxboot/heads/blob/aaeb63df78f8563c46d140f1dcdb51d380392048/initrd/bin/unseal-hotp (master version of it)
That codepath was fixed by https://github.com/linuxboot/heads/pull/1650. Otherwise the HOTP counter was incremented to the point of getting out of sync even when no dongle was connected; documented in issue https://github.com/linuxboot/heads/issues/1648, which this PR resolved.
I still don't see why the replay is impossible. Let's say the counter value is 1002 when you boot your computer this morning. The HOTP verification is successful, so the counter increases to 1003 on both the boot partition and the USB dongle. Later today, you leave your laptop in your hotel room — this is where I come in. I sneak in, try to boot your computer, and use my custom dongle to record the HOTP value presented by your laptop for counter 1003.
Up to here, I can follow you; a PoC of this could succeed, under the following conditions:
- USB Security dongle left with laptop, both unattended: this is not like the case of an unattended server left in a server room with a camera pointing at it. This is not a supported use case; if you leave your house keys behind, a single picture of them might be enough to create a copy and enter your home. This is why you keep your keys in your pocket; and if you didn't know, now you won't leave your keys behind. The USB Security dongle, in the case of Heads, is traditionally not only used for Heads through reverse HOTP, but also as an authenticity/encryption source. If someone replaces your USB Security dongle, you wouldn't be able to use it, and tampering should be detected soon enough; at the latest when you update your system, need to use the dongle with your private key inside of it to detach-sign the /boot digests, and can't.
- "I sneak in, try to boot your computer, and use my custom dongle to record the HOTP value": well again, the HOTP code is calculated from shared secret, unsealed from TPM nvram (TPM memory) that can only be accessed with the same value it was sealed with, which is the measurements of the firmware content (a signle bit difference won't unseal) and that value being used to be checked against the dongle with a valid counter+-5 being generated in the USB security dongle. See https://github.com/linuxboot/heads/pull/1650/files#diff-d0832bfa8bcbd1128aa957fad283dcdc4ae9c3c4bcac03791a4f5653d121c126R268 and prior comment
Then I replace your firmware with one that records your boot passphrase, having hardcoded the obtained HOTP code into it just a moment earlier. The next time you boot your laptop, your USB dongle will expect the correct HOTP code for counter 1003, and it will receive it.
That's where I stop agreeing, as documented at https://osresearch.net/Keys/#tpm_unseal-errors
If you change a single bit of the bootblock, romstage, ramstage, or the Heads scripts, kernel, etc. that are part of what is measured and extended in TPM operations (measured boot), then the tpmr unseal operation will fail at https://github.com/linuxboot/heads/pull/1650/files#diff-f283c90269e0b29d99776f2788d042dfc6125d259723c92e8691a0bec51530d9R41 and then the check of that secret+counter will fail: https://github.com/linuxboot/heads/pull/1650/files#diff-d0832bfa8bcbd1128aa957fad283dcdc4ae9c3c4bcac03791a4f5653d121c126R270
TLDR: this attack would succeed at capturing one valid code within the +-5 counter range for a valid TPMTOTP secret (HOTP and TOTP share the same measured-boot-calculated shared secret, TOTP using time and HOTP using the counter value instead of time), as long as the prior hotp_verification calls succeed in code (i.e. https://github.com/linuxboot/heads/pull/1650/files#diff-d0832bfa8bcbd1128aa957fad283dcdc4ae9c3c4bcac03791a4f5653d121c126R255)
One thing I hadn't thought through very well earlier is that it's probably not possible for me, as an attacker, to make that boot successful because I didn't obtain the TPM Disk Unlock Key. You will therefore know immediately after typing in your passphrase that something is wrong.
No, it would fail really early, the exact same way Heads warns the end user after a firmware upgrade that changes any coreboot measurements made into TPM PCR2 (see https://osresearch.net/Keys/#tpm-pcrs). Enabling a TPM DUK requires not only PCR2 (coreboot measurements of the bootblock up to the Heads payload, which is initramfs+kernel, meaning everything is reproducible builds here, and a single bit changed in any script would change PCR2), but also PCR4-7.
TLDR: changing a single bit of Heads would fail PCR2 reconstruction, which would fail unsealing the TPM-nvram-sealed secret used for TOTP/HOTP (it won't unseal), and verifying it would fail (in the case it was resealed, just like one does when upgrading/tampering with the firmware, which requires the end user to reseal the TPM shared secret in TPM nvram, generating a new QR code to scan (TOTP) and requiring the GPG Admin PIN (Librem Key/Nitrokey Pro v2) or the Secrets app PIN (Nitrokey 3)). A reminder here that Heads doesn't pretend to be tamper proof but tamper evident, and that requires the "convenience" of the USB Security dongle (green flashing and automatic boot once verified, up to the TPM DUK if enabled) to respect the separation of laptop and USB Security dongle. Otherwise it is as secure as a home key left behind and a house theft without any sign of intrusion. The extremely paranoid can reflash the expected flash ROM content prior to booting the system; TPMTOTP would be the same; the TPM unseal operation would result in both TOTP/HOTP being valid, the TPM DUK passphrase being asked, and the system booting into the final OS with data at rest decrypted.
This inconvenience is the price to pay to have user-owned keys and reproducible roms that can be flashed directly under Heads from a USB dongle, and to be the owner of your keys. Otherwise, there are advancements in the realm of bootguard key fusing: 3mdeb's first release of trustedroot (for a trusted root of trust) permits either trusting Novacustom to deliver firmware upgrades forever, or the end user fusing his own key into the laptop's efuses and enabling bootguard with his own keys, to flash a firmware that only he will be able to maintain and upgrade; rendering Heads tamper evidence obsolete on newer hardware, with a superior root of trust anchor (see the bootguard docs, which should be updated).
Again, I'm sorry that I can't discuss this with you at the same technical level. By the way, as far as documentation is concerned, it would really be nice to have everything described on the low-level but with layman's terms, explaining every concept, so that unexperienced users could quickly get up to speed and understand exactly what is it they can expect from their system.
This issue should be renamed to "HOTP concerns" or something, and PKCS#11 should be reopened separately @clinkist.
Any help welcome in improving the docs as https://osresearch.net/Contributing-to-Heads-wiki/ documents. I now have less paid time (free contributions only for now: maintenance mode) to invest in Heads, and I'm no documentation expert.
The challenge is to turn this discussion/these concerns into valid documentation permitting those questions to never be asked again. I was not so successful accomplishing this, so anyone better at writing docs is welcome here; I would help.
USB Security dongle left with laptop, both unattended
No, I don't think I need your dongle for the described scenario. I just boot your laptop to Heads with my dummy dongle that is going to receive the HOTP code.
Agreed here. You could even go to the recovery shell (if authenticated Heads is not activated: the default. Or extract the hard drive, modify the /boot hotp counter externally, reintegrate the hard drive and iterate), modify the hotp counter value and capture the value on the USB port, again if the dongle plays nicely with Heads' expectations and gives proper responses (hotp_verification info output) up to "invalid hotp", with the current valid firmware unsealing the TPM secret. You could capture a couple of valid HOTP values without needing both the laptop and the user's security dongle.
well again, the HOTP code is calculated from the shared secret, unsealed from TPM nvram (TPM memory), which can only be accessed with the same value it was sealed with
Yes, I understand — and in the beginning of my scenario your original firmware calculates it correctly, since there aren't any modifications made at this point. I just boot your laptop exactly as you left it.
If you change a single bit of the bootblock, romstage, ramstage, or the Heads scripts, kernel, etc. that are part of what is measured and extended in TPM operations (measured boot), then the tpmr unseal operation will fail and then the check of that secret+counter will fail.
But I replace Heads with something that is only going to deceive you into thinking it's the original firmware. I don't care that the unseal operation will fail, because I recorded the HOTP code earlier. I can't really fake knowing secret for TOTP (at least, not that easily), but I was already provided the next HOTP code by your laptop when I booted it.
That is, once again, where I disagree on the practicality of the attack scenario. You would need a firmware that boots. Not saying impossible, but highly improbable. Let's say you create a firmware that gives the exact same decoy output on screen and deceives the user up to typing the TPM Disk Unlock Key (DUK) passphrase. That firmware won't unseal TPM nvram (which requires both the passphrase and the same TPM PCR values to unseal), which you could probably obtain with a backup of the firmware and enough time. Typing the TPM DUK passphrase might be accepted on your crafted rom, but the final system won't boot from it (the disk won't decrypt) and the user will be asked for the Disk Recovery Key passphrase and should be worried.
Tamper evidence with a user-controlled root of trust in the bootloader is the flaw here, because it's not tamper proof, only tamper evident: that is, by design, its own flaw. If someone has enough resources to craft a firmware that extends the same hashes needed into the different PCRs, that attacker will be able to extract the TPMTOTP nvram secret, and the TPM DUK secret if he observes the TPM DUK passphrase. Again, it would be way easier to physically implant a keyboard keylogger than to go through all those loops to capture the TPM DUK passphrase today and steal the laptop. Or to put a camera at the proper angle to film keypresses, record keypress sounds and reconstruct them. The firmware here is not the easy part to tamper with. Which is why I keep repeating that proper opsec is needed, even if the firmware provides tamper protection through bootguard or trustedroot or other technologies. Such an implant would defeat any security today, as long as no chassis intrusion detection + TPM wipe exists, or something better is made in collaboration with hardware makers/product owners. An attacker with resources will always go for the easiest attack scenario, and the one here is not as easy as it seems compared to a physical keylogger implant/camera installation. But then again, who has tamper-evident seals on their screws to detect a keylogger, or would spot a camera in the environment; who checks for that. That again is criticized, with truth, when everyone today expects plug-and-pray security.
TLDR: you replaced the firmware with your own. The user has TPM DUK enabled, and now that passphrase fails to unseal the TPM DUK, with a TPM error given (which still requires a Heads-based firmware for the proper look and feel), or it boots but asks the user for the DRK passphrase at OS boot. Each attempt with the user-owned USB security dongle increases the HOTP counter. The user should verify TOTP and spot the mismatch.
The user should be worried and stop trusting the integrity of the firmware after rebooting once or twice, checking TOTP, and making sure capslock is not on. Someone with proper opsec would follow the next advice...
The extremely paranoid can reflash the expected flash ROM content prior to booting the system; TPMTOTP would be the same; the TPM unseal operation would result in both TOTP/HOTP being valid, the TPM DUK passphrase being asked, and the system booting into the final OS with data at rest decrypted.
I'm not sure I follow you on this, could you explain it a bit differently?
Once you have the current Heads firmware version on a USB stick (the zip file previously flashed internally), or under /boot (measured, part of the signed digest: that's what I do), you can easily reflash internally (the standard upgrade path) a known-good firmware version from the USB thumb drive.
Flashrom/flashprog will overwrite every bit changed, and memory training will happen again on the next boot. The firmware will provide measured-boot measurements to the TPM from the coreboot side, extended by measured boot from the Heads payload. And the TPM-sealed TPMTOTP secret will unseal, resulting in both TOTP/HOTP remote attestation succeeding and the TPM DUK passphrase unsealing the DUK and booting the final OS as usual. At this point, if the prior boot was funky and things didn't boot cleanly, the end user shouldn't trust the computer. He should do untrusted backups if needed and burn the laptop/inspect it thoroughly/start fresh....
TLDR: Heads having its RoT in the bootblock is a known limitation if it is not write protected (epoxy, which makes it impractical to upgrade the bootblock later if coreboot changes it, which it does). Heads prevents OS write access to the SPI but can't protect against physical attacks, which need security in depth.
Bootguard is the only current RoT residing in the PCH (tamper-proofing the firmware) that offers tamper protection (unless flawed, as in the past), but bootguard removes user control and places faith and firmware update releases in the OEM's sole hands, otherwise needing the user to sign his own firmware images on newer platforms requiring way more blobs to implement efuse-based security (fuse once, no ownership transfer possible, etc).
Heads is intended still today for users having "trust but verify" needs/threat model.
Agreed that TOTP is "better", if the user can keep the system clocks of the devices in sync (some users don't have phones with GPS/GSM-synced time, and a time drift of 10 seconds a day is not exaggerated for some platforms). That has been a curse for Heads since forever, and more and more so for layman users. HOTP is imperfect as well, agreed, and does release values that can (possibly) be used to bring an attacker closer to having the end user use a compromised system (not all users enable authenticated Heads, nor a TPM DUK).
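To illustrate the clock-sync constraint with numbers (a minimal sketch: RFC 6238 TOTP feeds floor(unix_time / 30 s) into the HOTP computation, and the one-step tolerance assumed for the verifier below is an illustrative assumption, not a Heads specification):

```python
STEP = 30  # RFC 6238 default time step in seconds

def totp_counter(unix_time: int, step: int = STEP) -> int:
    # TOTP is HOTP with this time-derived moving factor instead of a stored counter.
    return unix_time // step

# Only the relative offset between the two clocks matters, so keep one fixed.
laptop_clock = 1_700_000_000
drift_per_day = 10                   # seconds/day of drift on the verifying device

for day in range(8):
    phone_clock = laptop_clock + day * drift_per_day
    accepted = abs(totp_counter(phone_clock) - totp_counter(laptop_clock)) <= 1
    print(f"day {day}: drift {day * drift_per_day:2d}s -> "
          f"{'accepted' if accepted else 'rejected'} (with one-step tolerance)")
```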
A lot of users will not follow any guidelines, including users that won't plug in the USB security device at boot, nor validate TOTP prior to proceeding to boot. To me, the best security Heads provides is the TPM DUK. But not everybody enforces that either.
Documenting the bits needed to clarify the limitations of each still seems needed if this discussion keeps occurring over and over, while something better than reverse HOTP is needed (i.e. TPM challenge based, with the shared secret and HOTP secret encrypted in the dongle, so no secret nor counter is ever exposed), but that doesn't exist yet, nor consequently does Heads have support for it yet.
Can you recap your current understanding in terms that make sense for you? If anything still unclear, let me know.
I'd like to inject a focus on the "HOTP concerns" title once again. @clinkist's playbook starts by using a malicious token capable of faking a reply to Heads to obtain an iterated HOTP code to replay after planting their firmware later.
@tlaurion counters with "yes, possible, but it won't get you far, since the difficult part is creating a firmware that tricks the user into disclosing the DUK."
... if the dongle plays nicely with Heads' expectations and gives proper responses (hotp_verification info output) up to "invalid hotp", with the current valid firmware unsealing the TPM secret. You could capture a couple of valid HOTP values without needing both the laptop and the user's security dongle.
Does "give proper responses" imply having obtained the hotp_verification info output for the token tied to the firmware already? For example, does it include having obtained the token serial, which is digested by hotp_verification during "validation"? L63 https://github.com/linuxboot/heads/pull/1650/files#diff-f283c90269e0b29d99776f2788d042dfc6125d259723c92e8691a0bec51530d9R63