This is a topic that comes up every now and then, and it’s one I have a number of fairly carefully formed opinions on, but I’ve not taken the time to organize those opinions and write them down, so this post is an effort to do that.
First though, let’s lay out the context: why do we care at all? I think most of us are already on the same page here, but for the sake of clarity, let’s explore it anyways. There’s a million messaging mechanisms out there, and some of them are already quite widespread, with email and SMS (texting) being your basic, as-close-to-universal-as-anything-gets options. Why do we want something else?
Well, we each have to answer that question for ourselves, but at this point I know that emails and texts are being collected and stored by people I don’t know or trust. Private conversations between me and people I do know and trust are being intercepted, and could easily be tampered with in transit, by people I do not know or trust. From the start, that knowledge alone rubs me the wrong way. Even if I don’t have something I actively want to hide, we’re talking about my intimate details not merely being laid bare, but indexed and made searchable by people I do not know or trust. Why would I want that? All else being equal, why would I not prefer to have that data kept private? Of course I prefer that it be kept private.
But let’s not be coy: we all have secrets and private thoughts and discussions. If you claim not to have secrets, I’ll know never to trust you with any of mine. I absolutely do work with some information that I want kept private, and I want good technical solutions for accomplishing that. Moreover, I know that there are people out there whom I actively distrust, who are actively recording my conversations, and everyone else’s, online.
So what, then, do I want in a messaging system? Well, thanks to my background in cybersecurity, I understand quite a lot about what is possible and what is desirable, and perhaps I can help some of my readers shortcut directly to the end of this search without going to grad school for cryptography like I did.
A Bit About Security in General
Everything in security is about adversaries. I want X, someone else wants (not X). Plug in whatever you want for X, and start figuring out strategies: we employ one strategy to get X, someone else employs a counterstrategy. Security is about playing those games out in our heads until we come up with our strategy for getting X, such that there is no effective counterstrategy.
And security is always open-ended: is there actually an effective counterstrategy that we didn’t foresee? We don’t know. If there is, and our adversary finds it before we do, we obviously want to figure out what that counterstrategy is, and then define our counter-counterstrategy to thwart it.
In practice, these games tend to continue indefinitely with each side either finding a still more effective strategy, or losing interest and giving up. In practice, there is almost always a more effective counterstrategy out there still to be found, it’s just a question of motivation. This is the origin of the truism “every system can be hacked.”
So from the start, we can state with confidence: there is no such thing as a truly secure X app, but there are probably a whole string of options, each a closer approximation to that standard than the last, having been designed to withstand all known counterstrategies. This general pattern holds true for pretty much every area in security, be it cyber or otherwise. Every lock can be picked, but if you care, you can get one that makes it really, really hard so that most lock-pickers will give up before they succeed.
In information security specifically, which is the most relevant field to messaging systems, there are three main goals that all secure systems try to attain. These are known as the CIA Triad (no relation to the Central Intelligence Agency):
- Confidentiality — Only those who are supposed to know the information can see it
- Integrity — Only those who are supposed to be able to modify the information can modify it, and they can only modify it in the appropriate ways
- Availability — All those who are supposed to have access to the information do have access to it, readily and easily
Within the study of information security, a system’s security is formally defined as that system’s ability to achieve those three goals. If a system fails to meet any of those goals, it is insecure to the extent it falls short of them. The point to understand here is that security is not all-or-nothing. A system might have awesome confidentiality and integrity, but be really hard to use, and within information security, that system is not as secure as it could be. Whether that system is more or less secure than a system that has awesome confidentiality and availability, but makes no promises of integrity, is formally undefined and is entirely a matter of opinion.
There is no official badge of secure-ness that a system can get; it doesn’t work that way. Information security gives us a language with which to understand and discuss what parts of a system are or are not secure against what attack strategies. It doesn’t give us an objective rule or score as to how secure a system is.
And Messaging Apps, Specifically?
In messaging apps, then, security means that only the people I intended to be able to read my message can read it (confidentiality); that the message they got was exactly what I sent (integrity); and that all of them could easily read it (availability). So to warm up, let’s look at our examples from earlier, email and texting:
- Email makes no attempt to hide the contents of communications from parties other than the addressed recipient
- Text messages are encrypted between the cell phone and the tower, but this encryption has been thoroughly broken since the 1990s, and no attempt is made to hide the contents of messages while in transit between cell towers
- Email makes no attempt to prevent tampering of the contents of the message in transit, nor does it make any attempt to render such tampering evident after the fact
- SMS makes no attempt to prevent tampering of the contents of the message in transit, nor does it make any attempt to render such tampering evident after the fact
- Emails are usually delivered and people usually don’t have much trouble getting them, but no formal guarantees are made that emails will be delivered in order or at all
- Texts are usually delivered and people usually don’t have much trouble getting them, but no formal guarantees are made that text messages will be delivered in order or at all
So neither of these systems ranks high in confidentiality or integrity, but both do pretty well in availability. SMS is arguably better at confidentiality, but when its feeble attempt at encryption has been breakable to every hacker in his mom’s basement since 1999, it’s hardly even worth mentioning.
OK, So What Do We Want?
In general, we want an option that covers all three areas (confidentiality, integrity, availability) reasonably well. So let’s briefly discuss the state of the art in each of these areas:
Confidentiality is generally provided by encryption. Encryption means scrambling the message so unauthorized people can’t read it. There are a lot of different encryption algorithms out there, and many of them are broken: they can be decrypted by people who aren’t supposed to be able to. So we want to be sure to use an encryption algorithm that isn’t broken, such as AES (the Advanced Encryption Standard). “AES” is really just a title that the National Institute of Standards and Technology, or NIST, confers on the algorithm it most trusts at any given time; the title is currently held by an algorithm named Rijndael.
But encryption is a bit more complicated than that. Consider SMS: as we discussed above, SMS uses broken encryption, and it only uses this encryption between the cell tower and the cell phone. Everywhere else, no encryption is used. So even if SMS used AES, it wouldn’t be very confidential because it would only hide the message for part of its journey. For encryption to give us full confidentiality, it must be End to End, which means that the sender encrypts the message so that no one except the intended recipient can decrypt it. Even if the message is not decrypted by a middleman, the mere existence of a middleman who could decrypt the message breaks the End to End property of an encrypted system. In practice, designing a system where no such middlemen can exist is quite tricky, and just because a system is called “End-to-End Encrypted” doesn’t mean it really is.
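The difference between hop-by-hop encryption (SMS-style) and end-to-end encryption can be sketched in a few lines. This is a toy model: the XOR “cipher” below is deliberately insecure and stands in for a real algorithm like AES, purely to show *who* holds the keys and therefore who can read the plaintext.

```python
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy XOR "cipher" for illustration only -- NOT secure.
    # Applying it twice with the same key recovers the original data.
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

message = b"meet me at noon"

# Hop-by-hop (SMS-style): each link has its own key, so the relay in the
# middle decrypts and re-encrypts -- and therefore sees the plaintext.
link1_key = secrets.token_bytes(16)
link2_key = secrets.token_bytes(16)
at_relay = xor_cipher(link1_key, xor_cipher(link1_key, message))  # relay decrypts link 1
relay_sees_plaintext = at_relay == message
forwarded = xor_cipher(link2_key, at_relay)  # relay re-encrypts for link 2

# End-to-end: only sender and recipient share the key; the relay can only
# forward ciphertext it has no way to decrypt.
e2e_key = secrets.token_bytes(16)
ciphertext = xor_cipher(e2e_key, message)
relay_can_read = ciphertext == message
received = xor_cipher(e2e_key, ciphertext)

print(relay_sees_plaintext)   # True: hop-by-hop exposes the message mid-route
print(relay_can_read)         # False: the E2E relay sees only ciphertext
print(received == message)    # True: the intended recipient still reads it
```

The point of the sketch is the key placement, not the cipher: in the first scenario the middleman holds a decryption key, so the end-to-end property is broken even if he never actually looks.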
So to sum up confidentiality, we achieve this through encryption, but not only do we need encryption, we need a trustworthy encryption algorithm, and that algorithm has to be deployed in such a way that we don’t accidentally empower unauthorized parties to decrypt our messages. This is quite tricky to do in practice, and people make mistakes at it every day.
Integrity is sometimes provided by the encryption algorithm, but is sometimes provided by other algorithms. For example, AES alone does not provide any guarantees of integrity — an AES encrypted message might have been tampered with, even if the tamperer didn’t know what the message said. Suppose Eve has recorded several encrypted messages from Alice to Bob, including one that says “Yes” and another that says “No.” Without actually knowing which one is which, Eve could simply swap one for the other, and this will destroy the integrity of the conversation without necessarily compromising its confidentiality.
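Eve’s swap attack can be demonstrated concretely. Again this uses a toy keystream cipher (a stand-in for any cipher that provides secrecy without integrity), plus HMAC-SHA256 as one standard way a protocol can detect the tampering; the `"question-1"` context label is my own invented detail.

```python
import hashlib
import hmac
import secrets

def toy_encrypt(key: bytes, msg: bytes) -> bytes:
    # Keystream derived from the key via SHA-256 -- a toy stand-in for a
    # real cipher, used only to show that secrecy != integrity.
    stream = hashlib.sha256(key).digest()
    return bytes(m ^ stream[i % len(stream)] for i, m in enumerate(msg))

enc_key = secrets.token_bytes(32)
mac_key = secrets.token_bytes(32)

ct_yes = toy_encrypt(enc_key, b"Yes")
ct_no = toy_encrypt(enc_key, b"No!")

# Eve swaps the ciphertexts without ever decrypting either one...
delivered = ct_no  # ...where Bob expected the answer "Yes"
# ...and Bob decrypts a perfectly valid-looking message.
print(toy_encrypt(enc_key, delivered))  # b'No!' -- integrity broken, secrecy intact

# An HMAC tag binds each ciphertext to its context, so the swap is caught:
tag_for_yes = hmac.new(mac_key, b"question-1" + ct_yes, hashlib.sha256).digest()
tag_recomputed = hmac.new(mac_key, b"question-1" + delivered, hashlib.sha256).digest()
swap_undetected = hmac.compare_digest(tag_for_yes, tag_recomputed)
print(swap_undetected)  # False: the tag does not verify, tampering detected
```

Modern designs typically fuse the two properties into one authenticated-encryption primitive (e.g. AES-GCM) rather than bolting a MAC on by hand, which is one reason these details belong in a protocol, not in application code.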
Cryptographic protocols ensure integrity in a number of ways, and the issues at hand are complex enough to warrant several posts, so I won’t attempt to cover them in detail here. It is important to note, however, that integrity and confidentiality often go hand in hand: while it’s entirely possible to have either one without the other, we usually secure them both together, and when one goes, the other often goes with it.
The state of the art is that computers and software are now quite good at establishing an encrypted link with guarantees of confidentiality and integrity for the messages that cross it. What the software cannot guarantee is that the party on the other end of that link is who you think it is. To be sure of that, the humans involved must take some additional steps to verify that no third party has sneaked into the middle and started quietly passing messages back and forth between you and your intended recipient, possibly reading and/or changing them in transit. This is known as a Man in the Middle attack, or MITM.
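The manual verification step usually amounts to both parties computing a short fingerprint over the session’s public keys and comparing it out-of-band (in person, or over a phone call). Here is a minimal sketch of the idea; the digit-grouped format is my own invention, loosely inspired by schemes like Signal’s safety numbers, and the “keys” are just random bytes standing in for real public keys.

```python
import hashlib
import secrets

def fingerprint(key_a: bytes, key_b: bytes) -> str:
    # Hash both parties' public keys in a canonical (sorted) order so both
    # sides compute the identical code, then render it as digit groups
    # that humans can read aloud and compare.
    digest = hashlib.sha256(b"".join(sorted([key_a, key_b]))).digest()
    digits = f"{int.from_bytes(digest[:9], 'big'):024d}"
    return " ".join(digits[i:i + 6] for i in range(0, 24, 6))

alice_pub = secrets.token_bytes(32)
bob_pub = secrets.token_bytes(32)
mallory_pub = secrets.token_bytes(32)

# No MITM: both sides derive the same code and can confirm it aloud.
print(fingerprint(alice_pub, bob_pub) == fingerprint(bob_pub, alice_pub))  # True

# MITM: Mallory substitutes her own key on each side of the connection,
# so the two ends see different codes and the comparison fails.
alice_view = fingerprint(alice_pub, mallory_pub)
bob_view = fingerprint(mallory_pub, bob_pub)
print(alice_view == bob_view)  # False: the mismatch exposes the middleman
```

The crucial property is that Mallory cannot make both codes match without breaking the hash, so a successful out-of-band comparison rules out a silent middleman.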
Usually we solve this problem by trusting a central server to keep track of who is who and make sure that everyone is really talking to who they think they’re talking to, but that server could just as easily lie and grant itself or someone else MITM access. Blockchain technology provides a decentralized, trustless solution to this problem, allowing software to associate a human-provided username to a particular account without trusting anyone who might lie about that pairing, but this is pretty cutting edge, and I don’t know if anyone is doing this securely yet or not.
Availability is the red-headed stepchild of information security. While security blowhards will pontificate long and hard about confidentiality and integrity, frequently speaking as though these are the only goals that matter, in practice availability is the metric that actually guides people’s choices in software. Consider email and SMS: although they both abysmally fail at confidentiality and integrity, they’re highly available, which is to say they’re easy and reliable, so everyone uses them. Simultaneously, other systems like GPG/PGP may have strong confidentiality and integrity guarantees, but only cryptography experts know how to use them, and even they rarely actually do because they’re so much effort. So while confidentiality and integrity get all the press, availability is what makes the decision, and availability essentially boils down to “Yeah, but can my grandma use it?”
And therein lies the rub: a ‘secure’ messaging system that no one uses is not actually secure, because it’s not available. Usability is part of security. It is a key part. Don’t let anyone tell you otherwise. A truly secure system isn’t just hard to break technically, it must also be easy to use correctly, and hard to use incorrectly.
In many cases, security boils down to a choice: do we take confidentiality and integrity at the cost of availability, or do we take availability at the cost of confidentiality and integrity? My goal is to find an acceptable balance of both.
Down to Brass Tacks: My Recommendations
In case it isn’t yet clear from all of my discussion on security so far, determining whether a particular messaging app is secure or not is really freaking hard. To emphasize, when someone out there releases some app and says “This is a secure messaging app,” that claim means absolutely nothing until a lot of really smart people who understand security at least as well as I do spend a lot of time and effort reviewing the underlying protocols and application code to verify that claim. I consider myself qualified to do such a verification, but I rarely do because it’s a bloody ton of work to do well. For this reason, it’s generally only security professionals (expensive) and academic institutions that go to the trouble.
Of the secure messaging apps I have made any effort to review, my recommendations today boil down to two different apps: Wire and Signal. To be clear, I have not undertaken a formal review of these apps myself (someone would have to pay me to do that, and I would charge a lot); however, I have read their own security claims and have examined the formal reviews of others.
Wire (https://wire.com) is my favorite, since it ranks pretty well in all categories. It’s easy to use, it has pretty rigorous security standards, it’s a partially open source system (the client apps and parts of the server code are open source), it has undergone formal third party security review (with acceptable results), and there don’t appear to be any known serious flaws or vulnerabilities.
If Wire is my favorite, Signal (https://whispersystems.org) is my second favorite. Though far less feature-rich than Wire, it is based on the well-known, thoroughly reviewed, and widely implemented Double Ratchet protocol designed by Open Whisper Systems. This is probably the best protocol out there for confidentiality and integrity, since it’s so well-known and battle-tested, being the protocol behind Signal, WhatsApp, Google Allo, and Facebook Messenger, to name a few just off the top of my head.
The reason I recommend Signal over these other apps is that, although these other apps are more user friendly and widely used (availability), they are also privacy risks: they are all closed source apps owned by companies known for hoovering up and storing forever any and all private data they can find. Thus we can safely assume that, unless additional evidence shows otherwise, when using these apps we have no confidentiality from their makers. Signal, in contrast, is open source, and the company behind it, Open Whisper Systems, publicly commits to protecting its users’ privacy by retaining as little information as possible about their communications, even where doing so prevents it from implementing user-friendly features. This improves confidentiality at the cost of availability.
So Signal may offer a bit more confidentiality and integrity, but Wire is a lot easier and more fun to use (roughly: higher availability). I also note that Signal requires a phone number, which they use to improve integrity at the cost of privacy (a facet of confidentiality). Also, for maximum security, both of these apps support an additional manual key verification step to ensure that no Man in the Middle has crept into the connection.
Honorable Mention goes to Keybase (https://keybase.io), which was recently pointed out to me. Based on the widely respected, but rarely used, GPG/PGP protocol, Keybase makes GPG easy enough that people can now use it painlessly. Furthermore, Keybase leverages the Bitcoin blockchain to help provide confidentiality and integrity guarantees without the manual verification steps that most other apps require. Of course, manual verification can still be performed for optimal security. From what I see so far, Keybase might be more secure than Wire or Signal; however, I haven’t spent enough time looking into it to form a trustworthy opinion.
And Now for the Snake in the Grass
Extreme Dishonorable Mention goes to Telegram (https://telegram.org), which I want to highlight specifically as an app which, in my opinion, is not secure at all.
Telegram is marketed, quite emphatically (“Telegram is more secure than mass market messengers like WhatsApp” is a direct quote from their FAQ page), as a secure messaging app; however, shortly after its release, Moxie Marlinspike (a well-known and respected hacker, and co-author of the Double Ratchet protocol that powers Signal and others) pointed out irregularities in the protocol which render its security claims suspicious.
One would expect a reasonable team acting in good faith to re-evaluate their protocol’s security, and perhaps enlist a respected security firm to review their designs, after such a cold reception by the cryptography community. Instead, Telegram doubled down and launched an open challenge to break Telegram’s security. This would seem to indicate their confidence in the security of their protocol, and put the ball in the court of those claiming it is flawed. Instead, Marlinspike pointed out that this challenge was designed in such a way that it can’t be won, no matter how bad the crypto is. He even provided an example of a trivially breakable crypto protocol, and pointed out that even that protocol can’t be broken according to the rules of the challenge.
This conversation is fairly old at this point, but Telegram persists, and continues to market itself as a secure messaging app. There are plenty of unsubstantiated claims in the wild that Telegram is secure, but I’ve never seen one with any substantiation based on the underlying cryptography. There are, however, plenty of articles on how it’s not secure, from respected sources that provide substantial evidence for their claims. And there are now at least two papers formally presenting actual attacks on Telegram’s protocol: 1, 2 (I have not reviewed these papers in detail; I see no reason to spend the time on it).
So why so much hate for Telegram? Because they still actively market their app as secure, and at this point, I can only assume that claim is an intentional lie. I try to give people the benefit of the doubt, and apply Hanlon’s Razor (“Never attribute to malice that which is adequately explained by stupidity”), but at some point I have to ask myself: can I really believe they’re that stupid? Or, are they trying to deceive people? I honestly cannot imagine that someone can be that stupid; I think anyone acting in good faith would have questioned themselves by this point, and in this case, once the question is honestly asked, the answer is honestly obvious. So while I have no positive proof that they are intentionally lying, all signs seem to point that way. Please tell me, dear reader, am I being unreasonable?
So to wrap things up, let me emphasize that this is a complex issue, and it’s one that I do not take lightly. I have a great deal of experience that I believe qualifies me to opine on what is and is not a secure messaging app, but I do so with hesitation because even for me, it’s a lot of work to form a quality opinion. It is for that reason that I don’t have an opinion on every messaging app out there. I have found a couple of apps that I do trust for my day-to-day messaging, and I’m always on the lookout for more, but at the end of the day, this is a game of one-upping that we’ll be playing forever, because that’s how security works.
I recommend Wire and Signal, and possibly Keybase. I strongly warn all to actively distrust Telegram. These opinions are based on thorough and thoughtful, if not professional-grade, reviews of the software and security in question, grounded in formal training in cryptography and cryptographic software protocol design, reverse engineering, analysis, and exploitation at Rensselaer Polytechnic Institute, thanks to which I am able to understand and participate in technical security reviews.
My opinions are my own, and they are only intended to be good enough to satisfy me, which is a highly subjective standard. They are provided in the hopes that they are useful, but I make no promises that they are valid. If they aren’t, please let me know. 🙂
Thanks for reading
With a background in software development and a passion for security, Nathan has identified blockchain technology as his niche. He is dedicated to creating applications which empower individuals to shape a better world for themselves and others.