Encrypting content of hidden service descriptors
Karsten Loesing
karsten.loesing at gmx.net
Mon May 7 21:31:42 UTC 2007
Hi,
is this discussion still alive? ;) It took me some time to think about
the problem (and to discuss it with a colleague of mine), but I think I
finally have a possible solution! :)
Roger Dingledine wrote:
> On Sat, Apr 14, 2007 at 08:01:21PM +0200, Karsten Loesing wrote:
>> The reason why I asked the question is, that I am rethinking the format
>> of rendezvous service descriptors at the moment. And there still is this
>> idea around of encrypting the descriptor content which is, as you wrote,
>> not completely solved.
>
> Right. [...]
Thinking about the problem did not change my mind: encrypting the
complete content of hidden service descriptors is *not* the solution.
The storing node needs to verify the publisher's authenticity. If it
cannot do that, anyone who knows a descriptor index can publish as many
random "descriptors" for that index as she likes. This prevents us from
simply implementing Lasse's and Paul's Valet Service approach in its
proposed form -- though it has many interesting parts which we can
adopt. If I am wrong about this, please correct me, especially Lasse and
Paul! :)
> [...] I remember a discussion a couple years ago about how to encrypt
> the hidden service descriptors. I wanted three things at once:
>
> a) The places that store and serve the descriptors can't learn the
> introduction points,
> b) But they can make sure they're signed correctly and can pick out the
> newest descriptor.
> c) Select clients can learn them through some extra key or whatever
> they're given.
These three things are absolutely reasonable! But I want more... :)
d) The places that store and serve the descriptors change periodically.
e) It is hard for such a storing and serving node to track the activity
of a hidden service, even if a set of such nodes colludes.
f) For all nodes other than 1) the subset of nodes currently storing and
serving the descriptor and 2) the clients, it is impossible to track the
activity of a hidden service.
> Doing all three of these with just one key (or derivatives of that key)
> seems hard. It probably requires crypto magic that I don't have.
Right, neither do I.
> An easier option might be to use two keys. I haven't worked out the
> details, which means all the hard work still remains ;), but the idea
> would be that we'd have a signing key which is publicly known, and
> the name, timestamp, signing key, and signing key signature would be
> clearly visible to everybody. The rest of the descriptor would be the
> introduction points, encrypted with the second (encryption) key. Then
> the client would be given both keys (e.g. x-y.onion), whereas the public
> only knows the signing key.
I agree with your arguments. What follows next is the "hard work"... :)
1. Bob registers his hidden service at some introduction points. In
contrast to the original approach, he passes a private key from a fresh
key pair to the introduction point and does not pass his own onion key.
The reason is that the introduction point should not be able to
recognize Bob and track his activity. Bob includes the corresponding
public key in the rendezvous service descriptor. This idea is copied 1:1
from the Valet Service approach; whatever I may have described
incorrectly is meant as written in Lasse's and Paul's paper.
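To make this step a bit more concrete, here is a minimal Python sketch
(function and variable names are mine and purely illustrative, as is the
1024-bit key size) of generating one fresh key pair per introduction
point, so that no introduction point ever gets to see Bob's permanent
onion key:

  # one fresh, independent RSA key pair per introduction point; the key
  # material is handed over at registration time as described above, and
  # the corresponding public service key later goes into the descriptor
  from cryptography.hazmat.primitives.asymmetric import rsa

  def fresh_intro_keys(intro_points):
      keys = {}
      for ip in intro_points:
          keys[ip] = rsa.generate_private_key(public_exponent=65537,
                                              key_size=1024)
      return keys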
2. Bob creates a rendezvous service descriptor with the following
contents and stores it at those nodes in the ID circle which are
"responsible" for storing or replicating the given ID:
1) ID = h(h(PK_B) + h(date + cookie_AB))
2) PK_B
3) h(date + cookie_AB)
4) timestamp
5) {introduction point IP + port + public service key}_AB
6) signature of 1) to 5) using Bob's private key
The symbols are:
- h: a cryptographically secure hash function
- PK_B: Bob's public, permanent onion key
- +: string concatenation
- h(PK_B): Bob's permanent onion ID
- date: a dynamically changing component, e.g. the current date or
something changing more often (1/2 day, ...)
- cookie_AB: a secret shared by Bob and all of Bob's clients A; though
it is assumed that such a cookie is only given to a subset of
authenticated clients, it could also be passed to all of them together
with h(PK_B); the latter would then represent a hidden service without
authentication
- timestamp: Bob's current time in milliseconds to ensure freshness
- everything in 5): contact information encrypted using cookie_AB
This approach adopts some important ideas from Lasse's and Paul's paper,
but changes some parts to fulfill requirement b).
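To illustrate how Bob could assemble such a descriptor, here is a rough
Python sketch. The field encodings, the use of AES-CTR keyed from
cookie_AB for 5), and PKCS#1 v1.5 with SHA-1 for the signature in 6) are
my own assumptions for illustration -- the proposal above does not fix
these details:

  import hashlib, os, time
  from cryptography.hazmat.primitives import hashes, serialization
  from cryptography.hazmat.primitives.asymmetric import padding
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

  def h(data):
      return hashlib.sha1(data).digest()

  def build_descriptor(onion_key, cookie_ab, date, intro_info):
      # PK_B: Bob's permanent public onion key, DER-encoded
      pk_b = onion_key.public_key().public_bytes(
          serialization.Encoding.DER,
          serialization.PublicFormat.SubjectPublicKeyInfo)
      secret_id_part = h(date + cookie_ab)               # field 3)
      descriptor_id = h(h(pk_b) + secret_id_part)        # field 1)
      timestamp = str(int(time.time() * 1000)).encode()  # field 4), millis
      # field 5): intro point contact info, encrypted under a key derived
      # from cookie_AB (AES-CTR with a random IV is just one possible choice)
      iv = os.urandom(16)
      enc = Cipher(algorithms.AES(h(cookie_ab)[:16]), modes.CTR(iv)).encryptor()
      encrypted_intro = iv + enc.update(intro_info) + enc.finalize()
      # field 6): signature over 1) to 5) with Bob's private onion key
      body = b"".join([descriptor_id, pk_b, secret_id_part, timestamp,
                       encrypted_intro])
      signature = onion_key.sign(body, padding.PKCS1v15(), hashes.SHA1())
      return (descriptor_id, pk_b, secret_id_part, timestamp,
              encrypted_intro, signature)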
The ID is used to store and retrieve the descriptor. It needs to change
its position in the identifier circle, which is why the date is
included. Its position needs to be unpredictable to everyone other than
Bob's clients, so it is based on cookie_AB. But the storing node needs
to be sure that it is Bob who stores something under this ID; this is
why h(PK_B) is not included in the inner hash function but concatenated
separately, and why PK_B is given in plain in 2). Hence, the storing
node can use 3) (which looks like random noise to it) and 2) to verify
that only the person holding the private key for PK_B could have created
the descriptor, or more precisely the signature in 6). All of Bob's
clients are able to compute the same ID, retrieve the descriptor for it,
and decrypt the contact information given in 5). But they are not able
to create a valid descriptor under this ID.
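Here is a corresponding sketch of the two sides of this check, under the
same illustrative assumptions as in the snippet above (AES-CTR for 5),
PKCS#1 v1.5/SHA-1 for 6)). The storing node only sees fields 1) to 6)
and verifies the ID and the signature; a client additionally knows
cookie_AB and can therefore derive the ID on its own and decrypt the
contact information:

  import hashlib
  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives import hashes, serialization
  from cryptography.hazmat.primitives.asymmetric import padding
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

  def h(data):
      return hashlib.sha1(data).digest()

  def storing_node_accepts(descriptor_id, pk_b, secret_id_part, timestamp,
                           encrypted_intro, signature):
      # the ID must be derived from PK_B in 2) and the opaque value in 3),
      # so only the holder of the matching private key can claim this ID ...
      if descriptor_id != h(h(pk_b) + secret_id_part):
          return False
      # ... and the signature in 6) must verify under PK_B
      body = b"".join([descriptor_id, pk_b, secret_id_part, timestamp,
                       encrypted_intro])
      try:
          serialization.load_der_public_key(pk_b).verify(
              signature, body, padding.PKCS1v15(), hashes.SHA1())
          return True
      except InvalidSignature:
          return False

  def client_fetch_and_decrypt(pk_b, cookie_ab, date, encrypted_intro):
      # the client recomputes the ID on its own to know where to look ...
      descriptor_id = h(h(pk_b) + h(date + cookie_ab))
      # ... and decrypts field 5) with the key derived from cookie_AB
      iv, ciphertext = encrypted_intro[:16], encrypted_intro[16:]
      dec = Cipher(algorithms.AES(h(cookie_ab)[:16]), modes.CTR(iv)).decryptor()
      return descriptor_id, dec.update(ciphertext) + dec.finalize()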
When thinking about attacks I found the following possible attackers and
their capabilities:
- Evil node that is selected for storing a descriptor: It knows about
Bob's current activity but cannot decrypt the introduction point
information. However, the chance of being picked as a storing node is
quite low, so it is nearly impossible -- or very expensive -- to track
Bob's activity all the time.
- Evil node that is not selected for storing a descriptor: As long as it
does not collude with a node that has been selected, it does not learn
anything. It cannot even predict which temporary ID Bob has calculated.
- Evil introduction point: It does not know that it is working for Bob,
at least as long as nobody tells it.
- Evil client or former client: She knows everything. Bob needs to trust
his clients if he wants them to contact him. But at least his clients
cannot publish false descriptors on Bob's behalf. If he distrusts a
client, he needs to change cookie_AB.
> The next step would be fixing it up so knowing the encryption key doesn't
> necessarily mean you can always decrypt things in the future. Rotating
> the encryption key periodically might do it, or we could do something
> more complex.
Why would you want to do that? To exclude some clients? That would
require Bob to tell the new encryption key to all of the remaining
clients, which can be very expensive.
> But of course, if the rest of the protocol remains the same, then
> the adversary can still enumerate introduction points pretty easily
> by attempting to introduce himself at each Tor server one by one
> using the public onion name. Fixing that starts to make things more
> complicated. Hm.
This should be fixed by using a fresh key pair for each introduction
point, shouldn't it?
> When we originally designed this, we had no intention of keeping
> descriptors private. Putting them on the dirservers was just a hack
> because I didn't have anywhere better to put them. But the notion that the
> dirservers give them more "secrecy" has gradually sprung up since then. So
> the question is: so what? What happens if all descriptors are public?
Given the above proposal, we should rather discuss what happens if
descriptors are stored on routers that are less trustworthy than the
directory nodes, but with an encrypted introduction-point part. The
question is whether we are more secure if all responsibility lies in the
hands of a few mostly trustworthy people or if it is distributed among
many more or less untrustworthy ones...
> First, you can DoS the intro points of a hidden service, even if the
> hidden service hasn't revealed itself to you. [...]
That's possible. But then Bob can simply change his introduction points.
> [...] This is an issue, but if
> you want to enumerate hidden services, you have other options -- a less
> efficient approach would be to run a bunch of stable nodes and hope you
> get picked as an intro point a lot.
This would be solved in the above setting with fresh keys.
> Second, you can visit a hidden service, even if it hasn't revealed itself
> to you. You won't know what ports it supports, but you can portscan the
> whole thing, and besides it's probably just on port 80 anyway. [...]
How can I portscan the hidden service when I don't know its IP?
> [...] Is the
> answer that hidden services that want authorization should implement
> it end-to-end (e.g. http auth) and not try to keep their address itself
> a secret?
Sure, authentication is better than hiding the location. But in the
design above, clients can only connect to a hidden service if they have
the correct key, which is stored within the descriptor in encrypted
form. This is a kind of authentication, too.
> What other issues are there if each hidden service descriptor is public?
> Rather than postponing this mail another day while I sleep, I'm going to
> send it and hope other people bring up the other issues. :)
Good question. What else can happen? Let's discuss it before we build it.
All in all, this is the idea. Now pick it to pieces. ;)
--Karsten