meteor-feature-requests
Password hashing on client side is insecure
Migrated from: meteor/meteor#4363
Salting just makes things more complicated, but it does not really add extra security to the communication between client and server at login time. If you want to protect passwords, you have to use SSL on the communication channel. I think adding salt would give just a false sense of (better) security.
The current approach hashes the password just to prevent it from being sent in clear text, not to really provide any security (beyond hiding the length of the password inside an encrypted channel). I think this is clearly documented in the guide:
Every production Meteor app that handles user data should run with SSL.
Also, if you, as an app developer, want to force the use of SSL, you can add the force-ssl package and distribute your app with it:
meteor add force-ssl
Hey @mitar, the solution that I proposed in meteor/meteor#4363 is less about hashing+salt and more about encrypting the password before sending it to the server, to mitigate replay attacks, since there are often improper TLS implementations, occasional TLS exploits, and a few cases where Certificate Authorities have been compromised.
I'm happy to take on this work, so feel free to assign the issue to me. I'll diagram the model and update my thoughts about the proposal. I have some other things going on next week, but expect progress later this month.
For anyone new to this issue, I'll quote myself:
nothing about this issue suggests Meteor is insecure today - just that most TLS implementations are not perfect and those imperfections can be exploited by a motivated attacker against any application framework.
What is the threat model you are trying to protect against? Active attacker? Passive attacker? MITM? Active attacker with broken TLS between client and server?
Definitely just looking at an active attack scenario. Likely via a mis-issued certificate (not necessarily broken TLS) that would allow the attacker to MITM the user, force re-login, and capture credentials. There are a few other things that have to go wrong before this scenario becomes plausible, but CAs have been compromised in the past and TLS bugs happen from time to time. Adding another layer would give Meteor developers and their users extra protection when other bad things happen.
That Meteor already hashes the password on the client (instead of sending in plaintext over TLS) puts this framework ahead of many less-secure options, but relying on simple hashing and assuming TLS is always bulletproof makes me nervous. Meteor has, what, ~38k stars on GitHub... someone in that crowd screwed up their TLS implementation.
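For anyone following along, here is a rough sketch of how I understand the current client-side hashing to work (a simplification, not the exact accounts-password internals):

```js
// Rough sketch of the current scheme as I understand it (not the exact
// accounts-password internals): the client sends a SHA-256 digest instead of
// the raw password, and the server bcrypts that digest before storing it.
import { SHA256 } from 'meteor/sha';

// Hypothetical helper, just for illustration.
function buildLoginPayload(email, password) {
  return {
    user: { email },
    password: { digest: SHA256(password), algorithm: 'sha-256' },
  };
}

// Server side (conceptually): the digest is the "password" as far as the
// server is concerned; it is compared against the stored bcrypt hash,
// e.g. bcrypt.compareSync(digest, storedBcryptHash).
```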
If you have an active MITM attacker, then they can also send malicious JavaScript to the client, which leaks the password there, and your encryption scheme does not help at all. So that threat model is not resolved by your proposal to additionally encrypt.
I think you are solving things at the wrong end. TLS issues are really not something Meteor can do anything about. Maybe you want Mylar then.
Moreover, encryption would also require adding some polyfill to the client side, which would take additional space (and add complexity) on the client. So we should really be clear about what additional protection you want to get.
Right, I addressed this in my comments on the original issue; see my comment on Mar 31 (2nd to last paragraph). This gets us trust-on-first-use (similar to HPKP), which mitigates the attack vector in my scenario.
Sadly, your suggestion does not work. You always have to first load some code from the server to go and check your server keys in local storage. An attacker can provide different code there, which simply ignores the keys in local storage.
The issue is that browsers currently trust the server. They will always execute the code from the server first. You would have to get browsers to run something on the client before going to the server. Service workers might help here (until the user clicks reload, at which point they are bypassed).
I have spent most of my last year developing a secured version of Meteor. Until we have service workers, suborigins, or a browser extension, I have not found a way to bootstrap correct execution with a compromised server (or compromised connection). I would love to be mistaken.
And I really do not think trying to protect against a compromised TLS connection is a worthwhile way to spend resources. If anything, Meteor should hire a 3rd party company to do a security audit of its codebase. That would increase security dramatically for common cases, not just for very rare edge cases.
Sadly, your suggestion does not work. You always have to first load some code from the server to go and check your server keys in local storage. An attacker can provide different code there, which simply ignores the keys in local storage.
Respectfully, I disagree. There will, of course, be incremental code and load on the client (as I mentioned and quantified in my comment on Mar 31 in the previous issue). There's no free lunch, and the additional load time and computation should be part of the decision to accept or reject the eventual PR... but we're getting rather ahead of ourselves. The onus is still on the server to accept encrypted and signed login credentials, as I proposed.

But the point of my proposal isn't really protecting the integrity of that channel before credentials are entered by the user. Rather, I want to protect the transit of credentials to the server in the face of an active MITM attack as well as the return of the login token after authentication*. I think issue #35 is the right way to ensure code integrity in transit and a worthwhile enhancement, so I'll leave discussion of code integrity to that issue.

As I proposed, keygen and signing on both sides of the message should guarantee that the messages are authentic and that, when the session token returns to the client, it is only decryptable by the client. Essentially, pinning the public side of the server's key and using that information in the signed messages should ensure no intermediate access or tampering. When I work on this (after next week), I'll diagram it out so it's clearer to other folks who are less familiar with encryption than the two of us.

I will be working on this issue, since our threat model at Legal Robot includes MITM'ing a TLS connection due to known and potential future issues with TLS. It would be unfortunate for the community if I and my team have to maintain a local fork of accounts-password to do that.
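To make the shape of what I'm proposing a bit more concrete before I get to the diagrams, here is a rough sketch using tweetnacl (names like serverPublicKeyB64 and sealLogin are placeholders I made up; this is not a finished design, just the pinning + box idea):

```js
// Sketch only: seal the already-hashed credential to a pinned server public
// key with tweetnacl (nacl.box gives encryption + authentication together).
import nacl from 'tweetnacl';
import { decodeUTF8, encodeBase64, decodeBase64 } from 'tweetnacl-util';

// serverPublicKeyB64 is a placeholder for the server's box public key,
// pinned on first use (e.g. cached after the first successful connection).
function sealLogin(passwordDigest, serverPublicKeyB64) {
  const clientKeys = nacl.box.keyPair();                 // ephemeral client key pair
  const nonce = nacl.randomBytes(nacl.box.nonceLength);
  const message = decodeUTF8(JSON.stringify({ digest: passwordDigest, ts: Date.now() }));
  const sealed = nacl.box(message, nonce, decodeBase64(serverPublicKeyB64), clientKeys.secretKey);
  return {
    sealed: encodeBase64(sealed),
    nonce: encodeBase64(nonce),
    // The server opens the box with its secret key plus this public key, and
    // can seal the login token back so only this client can decrypt it.
    clientPublicKey: encodeBase64(clientKeys.publicKey),
  };
}
```

Anything that doesn't open cleanly on the server side gets rejected before authentication is even attempted, which is the "detect and reject tampering" part.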
The issue is that browsers currently trust the server. They will always execute the code from the server first. You would have to get browsers to run something on the client before going to the server. Secure workers might help here (until the user clicks reload, when they are bypassed).
Yep. Trusting the source of code you're loading is clearly an issue (#35 is important), but my proposal doesn't involve trusting that code. Rather, I propose pinning the public side of a key and using that information in the response to the supposed server, so that if the message is intercepted or tampered with, the ultimate authority (the real server) will reject verification and not even try to authenticate. I haven't ever worked with secure workers - I'll certainly look into that. I wonder if that could give this proposal some element of code integrity before #35 is complete. I'm quite interested in what other benefits there are.
I have spent most of my last year developing a secured version of Meteor. Until we have secure workers, suborigins, or a browser extension, I have not found a way to bootstrap correct execution with a compromised server (or compromised connection). I would love to be mistaken.
Cool! I'd love to know more. Perhaps, this proposed effort could give us a model for other sensitive client-server interactions. I mean, who knows... we could pin the server's public key(s) and wrap every DDP message in a tweetnacl box. I think that's a bit beyond our threat model at Legal Robot, but as a thought experiment it could be worthwhile to explore. tweetnacl is quite fast.
And I really do not think trying to protect against a compromised TLS connection is a worthwhile way to spend resources. If anything, Meteor should hire a 3rd party company to do a security audit of its codebase. That would increase security dramatically for common cases, not just for very rare edge cases.
Agreed, MDG and other community members probably have better things to do than building this. I suppose I'm just asking for their guidance along with the community's wisdom, and then a solid review of the PR once it's built. Now, an independent security audit... that would be a fantastic use of their resources, IMHO.
* = we use CloudFlare, so my understanding is that we're pretty much passively MITM'ing ourselves... voluntarily. Anyone like us that terminates TLS at a CDN is doing the same. I believe there are fairly good controls at CloudFlare and there are other mitigating factors. <tinfoil hat> But wouldn't it be nice to know that your CDN, or a "hypothetical" adversary that has sufficient legal authority to collect internet communications content (not to mention the authority to issue gag orders on such techniques), would have a more difficult time viewing your users' hashed passwords as they pass through the servers where you terminate TLS? </tinfoil hat>
Rather, I want to protect the transit of credentials to the server in the face of an active MITM attack as well as the return of the login token after authentication*.
Why would an active attacker start attacking just at the moment the user tries to enter the password and not when they try to load the page? Your threat model is unrealistic. If an attacker is around and active, we can assume they are like that for the whole session. They might not have been there in a previous session (so trust on first use can work in some scenarios), but during one session we should assume they can attack at any point.
I do not think we should be adding more complexity to this very sensitive part of the code just to address unrealistic and unreasonable threat models.
I think issue #35 is the right way to ensure code integrity in transit and a worthwhile enhancement, so I'll leave discussion of code integrity to that issue.
#35 does not help if the initial HTML file you loaded from the server is controlled by the attacker. It helps only if you are loading external files from CDNs you do not want to have to trust. But the HTML content you have to trust. Otherwise I just replace the hash on any of those files and load my malicious script.
I am sorry that it does not work like that. Really. I wanted to make it happen myself as well. It would be so great if it worked. We could use client-side crypto for so many crazy interesting things. We would not have to trust the server, for example. We could use servers really just to store data, without having to place trust in them. But the things you are describing do not work and do not solve this problem.
I haven't ever worked with secure workers - I'll certainly look into that.
My bad, I meant service workers. Typo. I edited it above as well.
Cool! I'd love to know more. Perhaps, this proposed effort could give us a model for other sensitive client-server interactions.
What we did was port Meteor to run inside an Intel SGX secure enclave. Then you do not have to trust the server at all. Even for computation. But you still first have to get the client code which validates the secure enclave to the client. And this we were not able to solve without a browser extension.
"Your threat model is unrealistic."
I'd say that it is rather plausible that one of our users, who normally accesses our service on a safe connection, one day connects via an unsecured wifi connection at a coffee shop. By default, their browser would use the cached version of the HTML and JS code. In that scenario, which I described on Mar 26 in meteor/meteor#4363, an attacker that has compromised that unsecured connection could observe the traffic, decide that our service looks like an interesting target, and start MITM'ing the victim.
#35 does not help if the initial HTML file you loaded from the server is controlled by the attacker.
Agreed, but my goal with this proposal is not to address every potential vulnerability. Code integrity is still a concern... just not a concern that I am trying to address with this proposal. All I'm trying to do here is 1) protect credentials in transit over compromised TLS and 2) allow the server to detect and reject a tampered login attempt. If you can inject scripts, you can steal credentials in other (easier) ways. I'm not trying to mitigate every potential attack vector or reinvent TLS, just trying to mitigate this specific one.
Anyway, at this point (two and a half months and 2700 words in) I feel like I'm just shouting into the wind. I guess we will just use a local fork of accounts-password and the community will be poorer for it. If a Meteor core contributor eventually sees the value of this proposal, you know where to find me.
By default, their browser would use the cached version of the HTML and JS code.
No, it wouldn't. It would still make an HTTP request to the server asking if anything changed. An attacker could respond "yes, it did, here is the new index file". Try to make an app load and execute JavaScript without contacting the server at all, or one that can prevent contacting the server after it loads.
I'm not trying to mitigate every potential attack vector or reinvent TLS, just trying to mitigate this specific one.
But your proposal does not mitigate it. This is the problem. It does not defend your user in the coffee shop. You cannot assume that the attacker in the coffee shop will not do something. That is the whole point of an attacker: they do things you didn't expect them to do, or do not want them to do. They are great at exploiting rare situations.
If your threat model is what you described, you have to show how you would defend against a smart attacker in that setting.
Anyway, at this point (two and a half months and 2700 words in) I feel like I'm just shouting into the wind. I guess we will just use a local fork of accounts-password and the community will be poorer for it. If a Meteor core contributor eventually sees the value of this proposal, you know where to find me.
Or maybe you have a bug in your thought process and the community is trying to help you understand the issue. You are of course allowed to do whatever you want. But we are trying to explain to you that what you believe is not going to work.
If you want to protect your users, to my knowledge, the only approach is to develop a browser extension you ask them to use (or require them to).
You can also join some W3C body to try to standardize some way to allow good bootstrapping of JavaScript client code.
Or maybe service workers might be good enough for you. Try them out.
Also, I would propose that you investigate encrypting all DDP messages instead. I think that would be a much better approach than just encrypting logins. Encryption is not that expensive, so why not do it on all messages? If the attacker is an active one, they do not necessarily need a password; they would be able to inject a DDP message anyway and, for example, change the user's password by calling a Meteor method to do so.
I'm talking about an attacker observing the first request(s) over the unsecured wifi and then initiating a MITM. Yes, the attacker could easily replace the code by telling the victim's client that there's a new version and then loading a version that bypasses the whole encryption and signing part... but then, when the victim's request to log in to the real server isn't properly signed, the real server can reject that login. My point is that I'm not addressing code integrity in my proposal because that opens up a whole different set of requirements, just protecting/authenticating the message. One thing at a time.
Nonetheless, I'm done trying to convince you of something you don't want to pursue. This exchange just doesn't feel constructive. I'll be fine, but in my opinion, this is a shining example of how lots of people get discouraged from contributing to open source.
The issue is in our threat model, so we'll just deal with it on our own. I'm ready to move on to other issues.
Oh, this is a shining example of something else. But I will leave it at that.
Good luck with defending against your attackers. I hope you will learn at some point that somebody tried to do what you are imagining and that they failed because of protections you put in place.
@dhrubins @mitar First off, please don’t get discouraged - rest assured there are many people listening! This has been a really great discussion, and there are valid points on both sides of the argument. Any input you guys have that helps make Meteor more secure is invaluable!
It looks like we’re at a bit of a stalemate. What can we do to help move this feature request forward? Are there any changes we can make to client side password hashing (based on either the discussion here or in https://github.com/meteor/meteor/issues/4363), that you guys both agree would make sense to implement? There has been a wide range of suggestions between these 2 threads:
- Stop client side password hashing completely, and force users to use SSL (more secure but increases the barrier to entry - ie. how do we force SSL when using localhost, without jumping through several hoops?).
- Go back to using SRP (which Meteor used prior to 0.8.2 - the reason for the switch is outlined here; the pros of the switch were decided to outweigh the cons).
- Implement a client side hash+salt approach (may improve password security, but doesn’t add much value if SSL isn’t being used).
- Come up with a way to encrypt + sign the hashed password before sending it to the server (more secure, but are we adding too much extra complexity when SSL could be sufficient?).
Is there some specific area we can shift the focus of this FR to, that you guys both agree would add value?
I think one point to consider is the state of the art in other frameworks and login systems.
Since this issue was migrated from https://github.com/meteor/meteor/issues/4363, I guess that makes me the OP.
@dhrubins: Stop with the client side hashing - it's security theatrics. Submit plain text passwords over TLS. The focus should be on having TLS enabled by default, and making sure the server has a sufficient amount of cryptographic work-factor during authentication to render brute-force attacks ineffective (eg: use server-side bcrypt).
Honestly, JavaScript password shenanigans in the browser are as dumb as backing up HOTP/TOTP secrets from your 2FA app in case you lose your phone. Hint: that changes "something you have" into "something you know", and authenticating with two things you know is single-factor authentication. And yet, a group of "smart people" created an app that allows just that. Don't be another one of those "smart people".
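To be concrete about the server-side work factor I mean, here is a minimal sketch using the npm bcrypt package (the cost factor of 12 is illustrative, not a recommendation):

```js
// Minimal sketch: all of the cryptographic work factor lives on the server.
const bcrypt = require('bcrypt');

const COST_FACTOR = 12; // illustrative; tune it so hashing takes a noticeable fraction of a second

async function setPassword(plaintextPassword) {
  // bcrypt generates and embeds the salt for you
  return bcrypt.hash(plaintextPassword, COST_FACTOR);
}

async function checkPassword(plaintextPassword, storedHash) {
  return bcrypt.compare(plaintextPassword, storedHash);
}
```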
If you're still unsure about ditching client-side password hashing, have a read on what others have said:
- https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2011/august/javascript-cryptography-considered-harmful/
- http://thisinterestsme.com/client-side-hashing-secure/
- https://security.stackexchange.com/questions/53594/why-is-client-side-hashing-of-a-password-so-uncommon
- https://cybergibbons.com/security-2/stop-doing-client-side-password-hashing/
- https://www.reddit.com/r/crypto/comments/375lor/is_client_side_hashing_of_passwords_viable_to/

One of the comments specifically mentions SRP, but goes on to say:
You'll probably be interested in the Secure Remote Password protocol (SRP). It uses a variant of the Diffie-Hellman key negotiation protocol to simultaneously authenticate the client with the server, the server with the client, and establish a session key for sending secrets between the client and server. It's not very useful with web applications, because you still have to trust the encrypted channel to deliver the right version of the JavaScript to manage the communication. It could be useful for other client/server applications, where the client application can be verified and isn't retransmitted every session.
If you're still not convinced after reading all of the above, I'd suggest you contact a notable cryptographer for further advice. Prof. David A. Wagner might be a good choice.
I think nobody is saying that use of Meteor authentication without TLS is secure. This is not mentioned anywhere. I am not sure why you are claiming this? You are saying that it raises a false sense of security, but you only know that the password is being hashed if you really go into the internals of how things work. The documentation does not mention client-side hashing either.
I agree with you that client-side hashing as the only protection is a bad design. Meteor does not claim that. I agree that client-side crypto (as more than just hashing) is currently not possible to achieve without a secure channel, but once you have a secure channel the crypto is in most cases unnecessary.
But if Meteor already has hashing, there is nothing wrong with keeping it. Unless Meteor were making bold claims that it is secure without TLS because of it.
(Hashing does help hide the length of the password, which TLS does not necessarily protect against.)
Yes, I can go ask David Wagner. He literally sits in the office next to mine. I doubt he cares about this question though. So he will just say to me: Mitar, what are you spending your time on? Do something more meaningful.
(Nemo lost in David's office)
I consider that responding with the "laugh" emoji in this context is borderline demeaning (ie: causing someone to lose their dignity and the respect of others), and therefore is questionable behaviour under the Meteor Code of Conduct.
I feel this way because I interpret it like the "ha ha" retort popularised by Nelson Muntz in The Simpsons. I accept that it may not have been the intent of those involved, but feel somewhat demeaned regardless. Sadly, I am shamed when reading back through my own comments above - I've not been a shining light of humility and dignity. :disappointed:
Meanwhile, client side password hashing hinders upgrades to the password hashing scheme, and nothing has been done.
What can be done to move this forward? Have we come to an impasse?
Are you sure this is preventing upgrading the password scheme? It can still be done in the same way on the client, just with more rounds?
But I completely agree that the password hashing scheme should be configurable, that rounds should be configurable, and that maybe the current approach of hashing on the client could be seen as just one such scheme, even a legacy one. So you could have a client and a server component to the hashing, and then you can pick which one you want.
Are you sure this is preventing upgrading the password scheme? It can still be done in the same way on the client, just with more rounds?
Hindering, not preventing. Of course you can send many hashed variants of the password from the client to the server to account for whichever legacy hash exists in the DB, but in my view that weakens the system by allowing many hashes to be submitted simultaneously (or in short order) with no work factor. The result is that brute-forcing an account becomes much easier.
So you are saying that now the load is on the client, while otherwise it would be on the server? But isn't that good? It means that the client needs a lot of CPU and/or memory, which means the attack costs a lot for the attacker, instead of putting the load on the server.
And if you want to limit the number of attempts, you can just rate limit attempts on the server, without having to do any CPU work to achieve that. Why would the server have to do any work at all? Why rate limit with work?
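For example, something like this (a minimal sketch with Meteor's ddp-rate-limiter package; the numbers are arbitrary):

```js
// Minimal sketch: throttle login attempts per connection on the server,
// without burning any CPU on hashing. 5 attempts per 60 seconds is arbitrary.
import { DDPRateLimiter } from 'meteor/ddp-rate-limiter';

DDPRateLimiter.addRule(
  {
    type: 'method',
    name: 'login',
    connectionId() { return true; }, // apply the rule to every connection
  },
  5,      // at most 5 calls...
  60000   // ...per 60 seconds
);
```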
But the attacker might not even really run any hashing; in the current scheme the attacker could just be generating strings which look like hashes, without hashing anything. They can just guess the hash, not the underlying password. But hopefully the space of the hash is large enough that, with any rate limiting on the server, this is infeasible.
So, I think there may be several unrelated things being talked about here, @mitar:
- MITM scenarios
  - Hashing alone won't help, you need something else. This has already been covered.
- Password DB compromise scenarios
  - If your password database stores server-side hashes, then there's a massive workload on the attacker to 'reverse' the hash, which involves brute-forcing. This is still non-ideal, as it's no longer server-side rate-limited, so you should still reset all of your users' passwords upon a breach, but it massively increases the effort involved in pulling off an attack in this scenario. This is where having compute-intensive hash functions really comes into play.
  - On the other hand, if it stores client-side hashes, then upon acquiring the DB the attacker can still successfully auth to any account, and as you've stated the attacker can just generate things that look like hashes (see the sketch after this list).
  - This is what I think of when someone says "Client-side hashing is insecure."
- Client-side hashing hinders upgrading the hash scheme
  - The client side doesn't have the password scheme for a given account, so you'd have to send hashes for all previous hash schemes (at once? separately?). Compare this to the server knowing the hashing scheme and computing a single hash of the password. It's just plain a lot simpler, doesn't punish people who decide to play around with the hashing scheme, and doesn't require the use of JavaScript on the login page to compute hashes.
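To make the DB-compromise contrast above concrete, here's a quick sketch (the helper names are made up; bcrypt is the npm package, crypto is Node's built-in):

```js
// Sketch contrasting the two storage choices after a password DB leak.
const bcrypt = require('bcrypt');
const crypto = require('crypto');

// (a) DB stores bcrypt(clientDigest): a leaked row still has to be
//     brute-forced, because login messages carry clientDigest, not the
//     bcrypt output.
async function verifyWithServerHash(submittedClientDigest, storedBcryptOfDigest) {
  return bcrypt.compare(submittedClientDigest, storedBcryptOfDigest);
}

// (b) DB stores clientDigest itself: the leaked value IS the credential, so
//     an attacker holding the DB can authenticate to every account directly.
function verifyWithStoredClientHash(submittedClientDigest, storedClientDigest) {
  return crypto.timingSafeEqual(
    Buffer.from(submittedClientDigest, 'hex'),
    Buffer.from(storedClientDigest, 'hex')
  );
}
```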
In addition, I don't know of any situation where client-side hashing improves security. It seems more like an obfuscation/security through obscurity. This is most likely why it was called "security theatrics" earlier.
Does this help in any way?
I think this is a very good summary and analysis. Thanks for making it very clear. I agree.
I would just add that if you worry about compromised servers, then you might also worry about the server seeing the plain text password at any point, because this way it does not ever have to reverse the hash. Of course, this matters if you reuse the password on other sites or something like that. Which you should not.
Well, the server itself is often separate from the database -- you may be able (and often are able?) to compromise the database without being able to compromise the server itself, and of course, if the server itself is compromised, there's nothing we can do. However, there have been many, many password database leaks over the years, from both big and small companies.
And, again, it doesn't matter if the server sees the plain-text password or a client-side hashed version of it or whatever -- whatever the server sees for passwords when starting authentication is effectively a plain-text password from the server's point of view, and should be treated as such.
But, you're right. A """good""" thing about client side hashing is that nothing ever sees the plaintext password, so the user can't be as bitten by password reuse. Assuming that someone doesn't decide to brute-force the hash anyway to see if they used that password anywhere else. shrug It doesn't help your site not get compromised, though.
A """good""" thing about client side hashing is that nothing ever sees the plaintext password, so the user can't be as bitten by password reuse. Assuming that someone doesn't decide to brute-force the hash anyway to see if they used that password anywhere else.
I don't mean to necro this thread but can I ask why the semi-sarcastic triple air-quotes around "good" here?
Client-side hashing offers nothing for security along a compromised line of communication, sure. But, it protects the user against cross-site impersonation by hiding the human-memorable input. Password reuse is unfortunately very common; even those that know to avoid this may use patterns or "variations" (like supersecret_sitename) to increase memorability. Thus, knowledge of the "content" of the plaintext has greater value than the plaintext itself. Sure, cracking the hash is possible for a dedicated attacker, but the same techniques we use server-side (salts, slow hashes, etc.) continue to be deterrents.
Client-side hashing objectively makes a passive eavesdropper's job harder if their goal spans beyond gaining access to a single service. What's wrong with that?
if their goal spans beyond gaining access to a single server
Indeed, which is why I mentioned it in the first place. The real solution is not to reuse passwords, or, if you're going to do something on the client, implement something like https://en.wikipedia.org/wiki/Secure_Remote_Password_protocol . I guess the reason I put it in sarcastic quotes is that client-side hashing alone doesn't help your own security -- it protects a user's bad habit and lulls you into a false sense of security: "oh, my password is already hashed, I can store it directly in the database", even though it's now the password equivalent.
The other issue is that either way, you require JavaScript in order to log in. Pretty ubiquitous, but not universal. Unless SRP is supported in browsers now -- but if it were, I'd think more sites would be using it.
Protecting users from themselves is something we should strive to do! Unfortunately, we can't expect to educate everyone.
Thank you for the link to SRP; it seems like a proper formalization of the obviously-simplistic "just send the hashed password" thought process. Likewise, excellent point with regards to JavaScript!
Thank you all for having this in-depth discussion - essentially every other discussion of client-side hashing (as an extra layer on top of SSL and server-side hashing) that I've found hasn't had nearly as much good-faith back-and-forth.
Often the other discussions assume that folks are just trying to prevent MITM and ignore a key question, which usually revolves around a malicious employee, insecure logging, or partial compromise (compromise of the receiving server but not the client). They also often ignore cross-site user habits in favor of treating the compromised site as existing in a vacuum.
The reason I'm commenting is that I believe this year shed some light on this question, with a couple of high-visibility cases like this one:
https://www.wired.com/story/facebook-passwords-plaintext-change-yours/
It is possible that the scale of the website, and thus the amount of visibility that trained security personnel have into the day-to-day work of the engineers, would factor into whether or not client-side hashing of passwords has a large benefit.
In larger organizations, it could be harder to keep track, across teams, of all the evolving systems that touch user information and how they store it. Having defaults that straight-up prevent engineers from easily messing up could have merit.
At the same time I see, through this discussion, that it could also lead to a false sense of security or confusion among engineers, which may lead to other security vulnerabilities being created.