
Crypto and Security Audit

Open danpalmer opened this issue 11 years ago • 40 comments

It would be good to get this audited by an external person, group, or company so the public can have a bit more faith in the security of the application, especially after the recent Cryptocat failures, which stemmed from the developers' lack of understanding of cryptographic principles.

danpalmer avatar Jul 29 '13 15:07 danpalmer

I understand what you're saying, but early audits could be crucial for finding architectural mistakes that would be costly to undo later, once the project has grown large. Instead of conceiving of an audit as something to be redone on every commit or build, how about establishing development milestones along the way, with an audit at each one?

Now that tox.im is undergoing a public launch, and asking for public trust, the first milestone should be now.

konklone avatar Jul 29 '13 16:07 konklone

Also, if it is later found out to be insecure (as happened with Cryptocat) then a huge amount of user trust will be lost, not to mention the possibility of secure communications being intercepted.

I think the project should explicitly state whether a security audit has happened, or have much more in the way of evidence that it is secure.

In its current state I think the site and application should probably have a warning that the security has not yet been tested.


danpalmer avatar Jul 29 '13 16:07 danpalmer

Agreed. I believe we should have some sort of disclaimer on tox.im, basically saying not to trust your life or any mission- or safety-critical information to the software until further notice.

ghost avatar Jul 29 '13 20:07 ghost

This project has to go through a security audit after the public exposure of the last two days... What are the options here? Will anyone do this for free, or do we have to pay a firm to do an audit? How much time would this take?

@nfkd I agree with you that no top cryptographer would have time for every single build on every single platform, but a first security audit of the protocol and the implementation would be very fair at this point in time.

ghost avatar Jul 31 '13 09:07 ghost

Tox was pointed out to me recently by a friend. Being a security conscious person, I was interested. After looking at your draft crypto proposal, I'm worried.

First of all, the main issue with public key crypto is authenticating the public key, or in other words, making sure the public key you have corresponds to the private key of the individual you think you're talking to. I don't see that addressed anywhere. During friend requests, the only check that I see a user doing is looking to see if their friend's name is in the message. As a MITM, I could easily create a keypair, then send friend requests to people claiming to be someone else.

PGP addresses this with key signing parties and the web of trust. In theory this is nice; in practice, nobody but crypto nerds actually goes to them, so for your intended user base, that's out. Off-the-Record (OTR) uses the Socialist Millionaires' Protocol to let users authenticate each other's public keys. They've also done a usability study that should be of interest if you want to know how the average person will react to this sort of thing.
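
A lower-friction option worth considering (sketched below purely as an illustration, not anything Tox currently specifies) is an OTR-style fingerprint check: derive a short fingerprint from each public key and have both users compare it over a channel they already trust, such as a phone call or a face-to-face meeting. The hash choice, grouping, and function names here are my own assumptions, and I'm using libsodium (a NaCl-compatible library) for convenience:

```c
#include <sodium.h>
#include <stdio.h>

/* Derive a human-comparable fingerprint from a 32-byte public key.
 * Illustrative sketch only: the hash and formatting are arbitrary choices,
 * not part of any Tox specification. */
static void print_fingerprint(const unsigned char pk[crypto_box_PUBLICKEYBYTES])
{
    unsigned char digest[crypto_hash_sha256_BYTES];
    crypto_hash_sha256(digest, pk, crypto_box_PUBLICKEYBYTES);

    /* Print the first 20 bytes as five groups of 8 hex digits,
     * similar in spirit to OTR fingerprints. */
    for (int i = 0; i < 20; i++) {
        printf("%02X", digest[i]);
        if (i % 4 == 3 && i != 19)
            printf(" ");
    }
    printf("\n");
}

int main(void)
{
    if (sodium_init() < 0)
        return 1;

    unsigned char pk[crypto_box_PUBLICKEYBYTES];
    unsigned char sk[crypto_box_SECRETKEYBYTES];
    crypto_box_keypair(pk, sk);

    /* Both parties run this and read the result to each other out of band. */
    print_fingerprint(pk);
    return 0;
}
```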

My second worry is that the draft crypto protocol is ambiguous and makes me think that whoever wrote it doesn't understand the crypto primitives they're using. Ambiguity is not an option in serious crypto; it is where mistakes and misunderstandings occur, leading to broken crypto.

To give some examples of the ambiguity and misunderstandings: the draft mentions messages being "encrypted with the public key of the receiver and the private key of the sender". I can think of three different possible meanings for this, which I've outlined below.

The first possibility is that the writer thinks both a public and a private key are needed to encrypt a message, which is false. I can encrypt a message using just the public key of the receiver, and the receiver will decrypt it using just their private key. I don't even need a private key of my own to send an encrypted message.

The second possibility is the traditional hybrid approach: the sender chooses a symmetric secret key, encrypts the message under it, encrypts that symmetric key under the receiver's public key, and sends both the encrypted message and the encrypted symmetric key to the receiver. The receiver decrypts the symmetric key with their private key and then decrypts the message with it. Hybrid cryptography is used because public key encryption tends to be slower than symmetric encryption.

The third possibility is that "encrypted with the sender's private key" actually means the message has been signed with the sender's secret key, a measure intended to add integrity and authenticity protection to the protocol. (Side note: as mentioned above, I don't believe the public key of the sender is ever authenticated properly, so I wouldn't trust it.)
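
(There is a fourth reading, too: an authenticated public-key box such as NaCl's crypto_box genuinely takes the receiver's public key and the sender's secret key together and produces authenticated ciphertext. The sketch below, using libsodium's crypto_box_easy wrapper for brevity, only demonstrates that primitive; it says nothing about whether the draft actually uses it this way, and it still does nothing to establish that the public key belongs to the person you think it does.)

```c
#include <sodium.h>
#include <string.h>

int main(void)
{
    if (sodium_init() < 0)
        return 1;

    /* Long-term keypairs for the sender and the receiver. */
    unsigned char sender_pk[crypto_box_PUBLICKEYBYTES], sender_sk[crypto_box_SECRETKEYBYTES];
    unsigned char recv_pk[crypto_box_PUBLICKEYBYTES], recv_sk[crypto_box_SECRETKEYBYTES];
    crypto_box_keypair(sender_pk, sender_sk);
    crypto_box_keypair(recv_pk, recv_sk);

    const unsigned char msg[] = "hello";
    unsigned char nonce[crypto_box_NONCEBYTES];
    unsigned char cipher[crypto_box_MACBYTES + sizeof msg];
    randombytes_buf(nonce, sizeof nonce);

    /* Encrypt *and* authenticate: receiver's public key plus sender's secret key. */
    crypto_box_easy(cipher, msg, sizeof msg, nonce, recv_pk, sender_sk);

    /* The receiver uses the mirror-image pair of keys; a forged or
     * tampered box makes this call return nonzero. */
    unsigned char plain[sizeof msg];
    if (crypto_box_open_easy(plain, cipher, sizeof cipher, nonce, sender_pk, recv_sk) != 0)
        return 1; /* authentication failed */

    return memcmp(plain, msg, sizeof msg) != 0;
}
```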

Another problem I see is the complete lack of attention to unexpected situations. The draft reads as if everything is going smoothly, but the whole point of crypto is to provide protection while under malicious attack by powerful entities. What happens if a duplicate message is received? What about message reordering, message loss, or subtly corrupted messages? Is there cryptographic protection for those message properties, or at least guidance on what to do when those situations occur? If so, I'm not seeing it in the proposal.
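
(For concreteness, one conventional answer, sketched here with made-up names and not anything the draft specifies, is to bind a monotonically increasing sequence number into each authenticated packet and refuse anything that does not advance it; the authenticator catches corruption, and the counter check catches duplicates and replays:

```c
#include <stdint.h>
#include <stdbool.h>

/* Per-peer replay protection, sketch only. The sequence number is assumed
 * to be covered by the packet's authenticator (e.g. folded into the nonce),
 * so an attacker cannot forge or rewrite it. A real protocol would also
 * need a sliding window if moderate reordering should be tolerated. */
struct peer_state {
    uint64_t highest_seq_seen;
};

static bool accept_sequence(struct peer_state *peer, uint64_t seq)
{
    if (seq <= peer->highest_seq_seen)
        return false;              /* duplicate or replayed packet: drop it */
    peer->highest_seq_seen = seq;
    return true;
}
```

Whether Tox does anything like this is exactly the kind of thing the draft should spell out.)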

It's been said repeatedly that roll-your-own crypto is possibly more harmful than no crypto, due to a false sense of security. Please consider using an established, well researched, already analyzed crypto protocol such as OTR, rather than trying to build your own. I'm by no means a top tier crypto person, but if I can skim your proposal and find flaws, there are almost certainly more in there.

That said, I'm still interested in Tox. You've got an admirable set of goals, but I fear that the implementation will be lacking. I'm open for conversation if you want it.

WillMorrison avatar Jul 31 '13 15:07 WillMorrison

http://www.reddit.com/r/crypto/comments/1jdhpl/hi_im_a_contributor_to_tox_and_we_would_love_to/cbdue3s First, try to correct all of that.

ghost avatar Jul 31 '13 16:07 ghost

recent Cryptocat failures

Thanks for bringing that to my attention. I've uninstalled Cryptocat (not that I ever used it). That specific vulnerability might be fixed, but it has lost my confidence.


I remain sceptical about this project. The "crypto plan" discusses encryption and forward secrecy but neglects authenticity and integrity. How are public keys authenticated? Is the communication vulnerable to a man-in-the-middle attack?

hickford avatar Aug 01 '13 18:08 hickford

I want to chime in here to say that authentication and MITM protection are at least as important as perfect forward secrecy. Right now you have crypto, and that sounds like "well yeah, we are secure!", but when I can't know who I am actually speaking to, this becomes quite moot.

Natanji avatar Feb 21 '14 15:02 Natanji

I suggest that those who think Tox isn't secure read: http://nacl.cr.yp.to/

This is more specifically what Tox uses for crypto: http://nacl.cr.yp.to/box.html

Then, if you still have doubts, you can open the code in a text editor and try to find issues with it.

Tox is secure.

A real crypto audit would however be very welcome.

irungentoo avatar Feb 21 '14 17:02 irungentoo

Tox is secure.

That statement is always hyperbole, whether or not you have a security audit. The people I respect who make secure systems don't ever make statements like that.

konklone avatar Feb 21 '14 17:02 konklone

@konklone OK, please find the security bug in NaCl, and I think you'll receive money from djb :) Good luck! :)

dcegielka avatar Feb 21 '14 17:02 dcegielka

Your defensive posture will bite this project in the long term. Your project will be more likely to succeed if you accept that all code is fallible, that yours is definitely broken somewhere, and that regular audits and humble instincts will keep you as close to secure as possible (without ever reaching it).

konklone avatar Feb 21 '14 17:02 konklone

@konklone How do you want to do an audit? Powerful institutions are trying to find weaknesses in crypto all the time. Do you think we can compare with them? Auditing the Tox code may make sense, but nothing more.

dcegielka avatar Feb 21 '14 17:02 dcegielka

I agree with @konklone that your attitude is the wrong one to take. You are stating that "we use a good crypto library so we are secure, and you can't prove us wrong." This is utter and complete nonsense.

I myself have used good crypto libraries in the past and used them wrong until someone else pointed out that I was doing so. I did not turn around and say "but I'm using crypto so it must be secure"; I went and fixed it, and thanked the person who pointed out the flaw in my code. My point here is that even if the crypto library were perfect, your code is not going to be. Don't claim that it is.

Second, you invite us to open the code and poke around. This is another bad sign that raises warning flags for me. Design documentation first, then code, even in security audits. An auditor needs to know exactly what you intend to implement from design documents before they go poke around in the code to see if you're doing it. As well, flaws in the design (not the code) will be found more easily from design documents. See my previous comment in this thread where I point out issues in your design (not your code), comments which are as-yet unacknowledged by any of the Tox developers.

Fix your attitude, then fix your design, then we'll talk about code quality.

WillMorrison avatar Feb 21 '14 17:02 WillMorrison

After reading @WillMorrison's comments, I think authentication is a very serious problem in Tox. How can I be sure the person I'm talking to is actually who I think they are? Without authentication or some kind of key-server infrastructure like PGP, there is no way to tell if a public key really belongs to someone.

aitjcize avatar Feb 21 '14 17:02 aitjcize

I probably should have said "The design of Tox has a very high probability of being secure" instead, but I find that saying "Tox is secure" is more to the point.

irungentoo avatar Feb 21 '14 17:02 irungentoo

@WillMorrison "You are stating that "we use a good crypto library so we are secure, and you can't prove us wrong."" - I never said that. I believe that examination of the tox code may make sense, but not a nacl audit.

dcegielka avatar Feb 21 '14 17:02 dcegielka

@irungentoo And yet the only proof you've given is "we use a good crypto library". The wording is unimportant. This is insufficient for claiming proper security. At this point, I wouldn't trust Tox to protect my communication, because I see baseless assumptions of correctness, not openness to being wrong. And I still see no response to my earlier concerns about authentication and the network crypto protocol, echoed by others in this thread.

Quite frankly, your attitude makes me less likely to point out anything else, because I can see you aren't interested in fixing what has already been brought up. Until you realize that audits mean bringing up and fixing issues with both design and code, I'm done here.

@silentbits Your words: "ok, please find the sec bug in nacl"

WillMorrison avatar Feb 21 '14 17:02 WillMorrison

@aitjcize

there is no way to tell if a public key really belongs to someone.

Tox IDs contain public keys. Unless someone MITMs the ID and replaces it when you send it to someone, that person can be sure that it's yours.

irungentoo avatar Feb 21 '14 17:02 irungentoo

@WillMorrison Your words: "ok, please find the sec bug in nacl". That's right, but Tox != NaCl. @aitjcize OTR?

dcegielka avatar Feb 21 '14 17:02 dcegielka

@irungentoo

But that is possible, is it not? If the NSA really wants to tap your conversations, that is exactly what they will do. Is it possible to implement something like the web of trust in PGP?

aitjcize avatar Feb 21 '14 18:02 aitjcize

I heard about Tox for the first time yesterday on reddit and spent the night looking at it. I am convinced that it can be, as it claims, a competitor to Skype, and that it will be impractical for agencies like the NSA or an internet service provider to apply mass surveillance to the network.

Of course I am not qualified to say any of this, but I do trust Tox; I think I understand what it does and what it does not do.

You can add as many security features as you want, but you will never be completely protected from the NSA. If the NSA has a special interest in you, you should probably not use Tox. You will, however, drive end users away: if Tox is too complicated, they will use Facebook instead.

I think this kind of discussion is completely counterproductive.

If you haven't seen it I recommend this talk: NSA operation ORCHESTRA Annual Status Report.

baltoche avatar Feb 21 '14 18:02 baltoche

@aitjcize Hypothetically, yes, it's possible.

In practice, though, MITMing the hex strings (or whatever other formats your Tox ID is distributed in) and replacing them is very hard to do, especially if users all use different channels to give their keys to other people.

irungentoo avatar Feb 21 '14 18:02 irungentoo

@irungentoo You just handwaved away a major threat by saying "it's hard, nobody would do that" and suggested that all your users are smart and hardworking enough to use and verify multiple channels for passing around their IDs. If this is the world you live in, that's great, but the rest of us use TLS when doing online banking for a reason, and are surrounded by people who are perfectly willing to click through the big warning screens browsers give when they hit certificate errors.

It's clear to me you don't understand how security happens, and don't care enough to learn.

WillMorrison avatar Feb 21 '14 18:02 WillMorrison

@WillMorrison People have been securely giving their "insert popular chat service here" IDs to other people, without being MITMed, since the dawn of the internet.

I don't see why Tox ids would be any different.

irungentoo avatar Feb 21 '14 19:02 irungentoo

This is getting harsh and unproductive. Let's return to the point of this thread -

The Tox project is live now, and telling people that it's secure. It says "coming soon" and that nightlies "may be buggy", but also says "Tox is an easy to use application that allows you to connect with friends and loved ones without anyone else listening in."

So, it's time to either dial that back or do an audit of Tox's code, and to publish Tox's design choices, as @WillMorrison says. It doesn't matter that you know the roadmap is still early on and that this is a rough version 1. You're promising a secure product at a time when people are anxious and looking for answers.

For the kind of project you're doing, an audit makes a lot of sense as an early step, and could provide a lot of valuable feedback for version 2. As you plan rapid iterations of the base system, expect and plan for iterations of the security systems as well as the UX. Independent audits, free of your team's assumptions and biases, will give people something closer to the level of reliability you're promising.

konklone avatar Feb 21 '14 19:02 konklone

Cryptographers don't have all free time to audit a program that changes every single day.

Then have your security assumptions audited, if not your code. Invite (pay) a team in to review your design, review your roadmap, review your current choices, and to publish their feedback. Do it once early on, and then do it again later on after you've had a chance to stabilize and respond to their feedback.

While I appreciate that "security isn't black and white", this is not encouraging:

Tox is as secure as it can be right now. It might be improved in the future, if that's possible without compromising bandwidth and user experience.

You have most definitely not positioned Tox in your marketing as "the highest level of security we're able to provide without compromising bandwidth/UX"; you've positioned it as "the highest quality UX we're able to provide without compromising security".

That's Tox's entire promise. That's what this says:

(screenshot of the tox.im homepage tagline, 2014-02-21)

That means you absolutely should expect to sacrifice UX someday, if a community member or auditor finds a security flaw that necessitates a UX compromise. End of discussion.

Consider carefully what you're promising, and what standard you will be held to if your users are compromised.

konklone avatar Feb 21 '14 20:02 konklone

@irungentoo I don't have a problem with you people stating that Tox is as secure as it can be right now. It's a fresh project, fine.

But crypto is not soy sauce for security, and it needs to be designed into the application from the start. You use NaCl, which is trustworthy, sure, but a state-of-the-art crypto implementation definitely needs to be designed to thwart MITM attacks. That is only possible when users authenticate each other, and when this authentication process is explained and made prominent in the UI and UX.

Threema, for instance, does a very good job at this by indicating in every single chat whether the identity of the other person has been verified. Is this something planned for Tox? Your current replies make it seem like "we encrypt our stuff" is all that you do. But crypto is more than encryption; for instance, who ensures that nobody can modify the DHT and give out fake public keys? Does Tox even alert and warn a user if someone's public key changed? If not, that would be a very important feature to implement.
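
Warning about a changed key is essentially trust-on-first-use pinning. Here is a minimal sketch of the idea, with hypothetical names and structures of my own, not Tox code:

```c
#include <sodium.h>
#include <stdbool.h>
#include <string.h>

/* Trust-on-first-use pinning, hypothetical sketch: remember the key we
 * first saw for a contact and flag it if the DHT later hands us a
 * different one, so the UI can warn the user. */
struct contact {
    unsigned char pinned_pk[crypto_box_PUBLICKEYBYTES];
    bool have_pin;
};

/* Returns true if the key is acceptable; false means "warn the user". */
static bool check_contact_key(struct contact *c,
                              const unsigned char seen_pk[crypto_box_PUBLICKEYBYTES])
{
    if (!c->have_pin) {
        memcpy(c->pinned_pk, seen_pk, crypto_box_PUBLICKEYBYTES);
        c->have_pin = true;
        return true;           /* first contact: pin the key */
    }
    /* Constant-time comparison; 0 means the keys match. */
    return sodium_memcmp(c->pinned_pk, seen_pk, crypto_box_PUBLICKEYBYTES) == 0;
}
```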

What about PFS? What about deniability? If nobody has ever thought about this, it is about time you did. Only half of security is even concerned with crypto at all: you don't just need good crypto libraries, you also need good software engineering.

I am not asking you to immediately put in those features. I am asking the core Tox developers to issue a statement regarding the security design, and either clearly state that deeply integrated support for authentication, PFS, and deniability is not planned, or say which of them are planned. Work on a roadmap. Write a threat model and explain which attacks Tox protects against and which it is helpless against.

There are many security experts who gladly get involved in free software and will point out the most important flaws in your threat model for free. Really, now is the very best time to do so. As I see it, the security design is ready to be written down, reviewed, and audited. Or do you disagree, @kigu?

Natanji avatar Feb 21 '14 20:02 Natanji

@konklone is right, I was being hotheaded and I apologize. I should not have resorted to personal insults.

@irungentoo It's different because in your case, the IDs contain public keys that you then use to authenticate the other party. This is exactly why PGP key signing exists and why OTR uses the socialist millionaires protocol, because allowing unauthenticated people to give you a key does not magically turn them into authenticated people.

By way of illustration, consider the classic example of Alice, Bob, and Mallory. Alice gets a message via "popular chat service" saying "Hi, I'm Bob, and here's my Tox ID X." The problem is that Mallory sent this message and included her own public key. Alice is now communicating with Mallory thinking it's Bob, and Bob was never consulted. In fact, Mallory doesn't even need to go to the bother of mounting a MITM attack.

This is a problem. Either you depend on the security of unknown third-party channels for authentication, in which case your authentication guarantee is the lowest common denominator of whatever people use, or you find some way of authenticating the keys properly instead of just trusting that they're fine, and make it dead simple for users to do that. See my first comment in this issue for my suggestions on that.

But this thread is not just about authenticating users, it's about the overall security of the product. There are multiple things to audit, starting with the design. Once you have a solid security design you can worry about having coded it right.

You do not need to audit all over again on every single change. You should have code that is modular enough that a UI change or a change to how data is stored has zero effect on the security of the product. (Aside from things such as encrypting user data on disk or changing how users know they're communicating securely, which should be audited, but separately from the communication protocol) The security should be provided by a handful of well audited protocols that the rest of Tox uses.

To drive this point home, have a look at how OTR does things. They provide libOTR to do the security critical stuff, then write software around that, like their plugin for Pidgin. LibOTR itself implements a protocol that is based on the design work done by their papers, also linked from their site. This is why I trust OTR to secure my communications and do not trust Tox. OTR has provided for authentication, PFS, and deniability, in addition to confidentiality. They have explained their design so that I can look at it and see that they know what they're doing, and have backed up their claims. They clearly state the assumptions and dependencies that they make, and work to reduce that to as small a set as possible.

To wrap up, I've looked at the documentation in the repository again, and it looks to me like this project is trying to reimplement TLS, Tor, and NAT traversal all at once, and stick a chat program on top of all this. TLS is a protocol that has been battle tested for over a decade and undergone multiple iterations to deal with the flaws that have been pointed out. TLS 1.2 has mathematical proofs of the security guarantees it provides. Tor has been used around the globe by hundreds of thousands of people in openly hostile situations, and apparently annoys the NSA enough to make a slide titled "Tor stinks". Tor is built using TLS for communication security.

I can see no reason to throw out these tried and tested protocols in favour of something you came up with. Like I said to begin with, you should be using existing, already audited protocols, not rolling your own. My suggestion is to use something like OTR for chat sessions and perhaps ZRTP for voice and video. If you want to give users the option to hide their IP, allow them to connect through a SOCKS proxy so they can use Tor, or use libtor to do fancier things like setting up hidden services. If you do think about listening for incoming connections, you have yet another attack surface to consider, and a problem with most firewalls, so I would recommend against it.

WillMorrison avatar Feb 21 '14 23:02 WillMorrison

The need for UDP instead of TCP is somewhat of a given, I think. Hole punching is much more reliable with UDP than with TCP, and audio/video communication could also benefit from out-of-order delivery over UDP. But it might be a better idea to use libraries that do this job, like tcpoudp.
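
To make the hole-punching idea concrete, here is a bare-bones sketch; the address and port are made up, and the rendezvous step that tells each peer the other's public endpoint is omitted entirely:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Bare-bones UDP hole punching sketch. Both peers learn each other's
 * public IP:port out of band (normally via a rendezvous/bootstrap node,
 * omitted here) and keep sending datagrams to each other; the outgoing
 * packets open a mapping in each NAT that lets the peer's packets in. */
int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0)
        return 1;

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof peer);
    peer.sin_family = AF_INET;
    peer.sin_port = htons(33445);                      /* hypothetical port */
    inet_pton(AF_INET, "203.0.113.7", &peer.sin_addr); /* documentation-range address */

    /* Punch: send a few packets so our NAT creates the mapping. */
    for (int i = 0; i < 5; i++) {
        sendto(sock, "punch", 5, 0, (struct sockaddr *)&peer, sizeof peer);
        sleep(1);
    }

    /* If the peer does the same towards us, datagrams now flow both ways. */
    char buf[1500];
    ssize_t n = recvfrom(sock, buf, sizeof buf, 0, NULL, NULL);
    if (n > 0)
        printf("received %zd bytes from peer\n", n);

    close(sock);
    return 0;
}
```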

Natanji avatar Feb 21 '14 23:02 Natanji