Encryption over Hidden Services
Mike Perry
mikeperry at fscked.org
Fri Aug 6 10:07:12 UTC 2010
Thus spake Nathan Freitas (nathan at freitas.net):
> Is transport or message layer encryption redundant between a tor client
> and a hidden service?
The short answer is yes, we believe it is redundant and unnecessary.
The long answer is "My god man, hidden services? Nobody touches that
code. Who knows what lurks in the deep..."
In particular, the Tor Hidden Service protocol for some reason
discounts the threat of hash collisions, opting for 80-bit hashes for
"usability" reasons in .onion urls. These urls are the only real means
a user has of authenticating their destination.
In an ideal world, users are perfectly capable of memorizing every
character of the 80-bit base32 key hash in the .onion url.
In the real world, it is disturbingly practical to compute .onion urls
that share a significant number of characters with an arbitrary target
url, in arbitrary positions of the url.
There was a program called 'shallot' which optimized hidden service
key generation to accomplish exactly this using THC's Fuzzy
Fingerprint technique. It seems to exist only in rumor and legend
these days, but if you would like an arbitrary snapshot of the code
that calls itself 0.0.1, I can post it somewhere.
It was originally created for the sake of creating vanity .onion urls.
However, the author optimized it far enough so that the hash could
have something like 8 characters in common with a target .onion url,
in either the prefix, or the suffix, or both, with just a few
machine-days of computation. Their implementation also only created
"strong" RSA keys for the resulting .onion urls. If they allowed weak
key generation for their targets, much more optimization was possible
(and if your goal is to deceive a user into visiting or chatting with
your spoofed hidden service, why not use weak keys?).
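Back-of-envelope, the cost of forcing k characters at fixed positions
scales as 32^k, since each base32 character carries 5 bits. A sketch,
assuming (as shallot reportedly does) that the attacker varies the public
exponent rather than regenerating the modulus, so each candidate costs
roughly one SHA-1; the keys/sec figure below is an illustrative guess:

```python
def expected_keys(k_chars: int) -> int:
    # Each base32 character pins 5 bits of the hash, so forcing
    # k characters at fixed positions needs ~2**(5*k) candidates.
    return 2 ** (5 * k_chars)

# 8 forced characters -> 2**40 ~ 1.1e12 candidates. At an optimistic
# million candidates per second that is on the order of two weeks of
# machine time; allowing the match to land at either the prefix or the
# suffix (or split between them) shaves off a further constant factor,
# which is consistent with "a few machine-days".
for k in (4, 6, 8):
    print(k, expected_keys(k))
```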
So, while in theory hidden services are probably just barely beyond
the horizon of the cryptographically insecure, in practice they are a
usability nightmare for things like chat, where humans need to
establish visual continuity with the endpoint they are communicating
with.
Sadly, this issue may just be the tip of the iceberg in the hidden
service design, and there may be other edge cases that could be a huge
problem for uses like chat. Denial of service conditions are
particularly easy to game in a chat setting, and the hidden service
code may have quite a few of those. "Oh, I'm sorry, my .onion is down
because of reason X. It will never come back. This is my new .onion. I
made them as similar looking as possible for security reasons. Trust
me."
In an ideal world, we'd have a distributed dictionary that safely maps
full-strength hidden service key hashes to unique, human-readable
names, a la I2P eepsites, to avoid lookalikes and gaming.
In the real world, secure distributed dictionaries for this may not
exist, and/or may have subtle vulnerabilities of their own. They probably
do exist for our centralized directory model, but we haven't really
devoted enough thought to the hidden service design to bother with
them. Perhaps you could build one as part of your chat protocol.
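A cheap local approximation, if a full distributed dictionary is out of
reach for your chat protocol: pin the full .onion address under a
human-chosen petname on first contact, SSH-known_hosts-style, so a later
lookalike fails closed instead of fooling the eye. A minimal sketch (the
class and its in-memory storage are hypothetical, not anything in Tor):

```python
class PetnameStore:
    """Map human-chosen names to full .onion addresses, first-use pinning."""

    def __init__(self):
        self._pins = {}  # petname -> full .onion address

    def add(self, name: str, onion: str) -> None:
        # Refuse to silently rebind a name to a different address;
        # a lookalike .onion must not inherit an established petname.
        if name in self._pins and self._pins[name] != onion:
            raise ValueError(
                "refusing to rebind %r: pinned to %s" % (name, self._pins[name])
            )
        self._pins[name] = onion

    def resolve(self, name: str) -> str:
        # KeyError for unknown names: fail closed rather than guess.
        return self._pins[name]
```

The user then compares a short memorable name once, at introduction time,
instead of eyeballing 16 base32 characters on every chat session.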
All that said, hidden services do have one significant advantage over
regular Tor usage that is probably worth mentioning: Hidden services
overall are generally more secure than using Tor to contact an
arbitrary Internet endpoint using an arbitrary application.
This is due to the likelihood of user error in the form of using
insecure, non-torified applications. Given the choice between an
insecure app that has every feature they want, and a secure app
that is missing a feature or two, the average user will choose the
insecure app *every* time, and recommend it eagerly to their friends.
Hidden services at least have the advantage that they tend to be
harder to connect to in a fashion that is insecure. Either they work,
or they don't. Especially if you trust the hidden service you are
attempting to connect to.
> We are working on a simple "p2p" messaging service between Android
> devices running Tor, with each running a hidden service to accept
> messages. I am trying to figure out if we need to OTR or SSL on top of
> this for any reason.
Have you seen torchat? https://code.google.com/p/torchat/
I've not used it, but perhaps it could be worth looking at, at least.
> I am just trying to understand how far we should rely on encryption with
> the Tor stack for this type of thing, vs. adding in our own bits (err,
> bytes).
Probably your primary threat at this point is user error, more so than
interception or the direct subversion of the hidden service protocol,
which most likely is end-to-end secure...
If your secondary crypto system could somehow improve upon the utterly
broken usability model of hidden services, then it would be a good
idea to add it. Most likely though, a secondary system would just add
confusion, and therefore insecurity. Ultimately it depends on the
userbase and the UI design.
--
Mike Perry
Mad Computer Scientist
fscked.org evil labs