[tor-dev] Temporary hidden services
George Kadianakis
desnacked at riseup.net
Fri Oct 19 13:01:26 UTC 2018
Michael Rogers <michael at briarproject.org> writes:
> On 18/10/2018 13:26, George Kadianakis wrote:
>> Michael Rogers <michael at briarproject.org> writes:
>>
>>> Hi George,
>>>
>>> On 15/10/2018 19:11, George Kadianakis wrote:
>>>> Nick's trick seems like a reasonable way to avoid the issue with both parties
>>>> knowing the private key.
>>>
>>> Thanks! Good to know. Any thoughts about how to handle the conversion
>>> between ECDH and EdDSA keys?
>>>
>>
>> Hmm, that's a tricky topic! Using the same x25519 keypair for DH and
>> signing is something that should be done only under supervision by a
>> proper cryptographer(tm). I'm not a proper cryptographer so I'm
>> literally unable to evaluate whether doing it in your case would be
>> secure. If possible, I would avoid it altogether...
>>
>> I think one of the issues is that when you transform your x25519 DH key
>> to an ed25519 key and use it for signing, if the attacker is able to
>> choose what you sign, the resulting signature will basically provide a
>> DH oracle to the attacker, which can result in your privkey getting
>> completely pwned. We actually do this x25519<->ed25519 conversion for
>> onion key cross-certificates (proposal 228), but we had the design
>> carefully reviewed by people who know what's going on (unlike me).
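
(Side note: the conversion in question maps an x25519 public key, which
is a Montgomery u-coordinate, to an ed25519 public key, which is an
Edwards y-coordinate, via the standard birational map from RFC 7748.
A rough, unreviewed pure-Python sketch of just the public-key map, to
show what gets converted; note that the sign of the Edwards x-coordinate
is lost, so a real conversion also has to carry a sign bit:

    p = 2**255 - 19  # field prime shared by curve25519 and ed25519

    def ed25519_y_from_x25519_u(u: int) -> int:
        # Birational map from RFC 7748: y = (u - 1) / (u + 1) mod p.
        # This recovers only the Edwards y-coordinate; the sign of the
        # x-coordinate has to be transmitted separately.
        return (u - 1) * pow(u + 1, -1, p) % p

Again, this shows what the conversion is, not how to use it safely.)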
>>
>> In your case, the resulting ed25519 key would be used to sign the
>> temporary HS descriptor. The HS descriptor is of course not entirely
>> attacker-controlled data, but part of it *could be considered* attacker
>> controlled (e.g. the encrypted introduction points), and I really don't
>> know whether security can be impacted in this case. Also there might be
>> other attacks that I'm unaware of... Again, you need a proper
>> cryptographer for this.
>
> Thanks, that confirms my reservations about converting between ECDH and
> EdDSA keys, especially when we don't fully control what each key will be
> used for. I think we'd better hold off on that approach unless/until the
> crypto community comes up with idiot-proof instructions.
>
>> A cheap way to avoid this, might be to include both an x25519 and an
>> ed25519 key in the "link" you send to the other person. You use the
>> x25519 key to do the DH and derive the shared secret, and then both
>> parties use the shared secret to blind the ed25519 key and derive the
>> blinded (aka hierarchically key derived) temporary onion service
>> address... Maybe that works for you, but it will double the link size,
>> which might impact UX.
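
(To make that concrete, here is a rough sketch of what the link exchange
could look like, using the x25519 primitives from the pyca/cryptography
library; all variable names and the KDF context string are made up for
illustration, and the blinding step itself is sketched further down:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import x25519
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each party's "link" would carry two public keys: an x25519 key for
    # the DH, and an ed25519 key whose blinded version becomes the
    # temporary onion service identity.

    alice_dh_priv = x25519.X25519PrivateKey.generate()
    bob_dh_priv = x25519.X25519PrivateKey.generate()

    alice_dh_pub = alice_dh_priv.public_key()  # goes into Alice's link
    bob_dh_pub = bob_dh_priv.public_key()      # goes into Bob's link

    # Both sides arrive at the same shared secret...
    alice_shared = alice_dh_priv.exchange(bob_dh_pub)
    bob_shared = bob_dh_priv.exchange(alice_dh_pub)
    assert alice_shared == bob_shared

    # ...and turn it into a 32-byte blinding seed with a KDF.  The exact
    # KDF and domain-separation string are assumptions, not anything
    # taken from rend-spec.
    blinding_seed = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"temporary-hs-blinding",
    ).derive(alice_shared)

Whether HKDF over the raw DH output is the right derivation here is
exactly the kind of thing a proper cryptographer should check.)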
>
> Nice! Link size aside, that sounds like it ought to work.
>
> A given user's temporary hidden service addresses would all be related
> to each other in the sense of being derived from the same root Ed25519
> key pair. If I understand right, the security proof for the key blinding
> scheme says the blinded keys are unlinkable from the point of view of
> someone who doesn't know the root public key (and obviously that's a
> property the original use of key blinding requires). I don't think the
> proof says whether the keys are unlinkable from the point of view of
> someone who does know the root public key, but doesn't know the blinding
> factors (which would apply to the link-reading adversary in this case,
> and also to each contact who received a link). It seems like common sense
> that you can't use the root key (and one blinding factor, in the case of
> a contact) to find or distinguish other blinded keys without knowing the
> corresponding blinding factors. But what seems like common sense to me
> doesn't count for much in crypto...
>
Hm, where did you get that claim about the security proof? The only
security proof I know of is
https://www-users.cs.umn.edu/~hoppernj/basic-proof.pdf and I don't see
that assumption anywhere in there, but it's also been a long while since
I read it.
I think in general you are OK here. An informal argument: according to
rend-spec-v3.txt appendix A.2 the key derivation is as follows:
   derived private key: a' = h*a (mod l)
   derived public key:  A' = h*A = (h*a)*B
(where B is the ed25519 base point and l is the group order)
In your case, the attacker does not know 'h' (the blinding factor),
whereas in the normal onion-service case the attacker does not know 'a' or
'a*B' (the private/public key). In both cases, the attacker is missing
knowledge of a secret scalar, so it does not seem to make a difference
which scalar the attacker does not know.
Of course, the above is super informal, and I'm not a cryptographer,
yada yada.
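
To spell the blinding step out in (equally unreviewed) code: at the
scalar level the derivation is just a multiplication mod the group
order, and since (h*a)*B = h*(a*B), the side that only holds the public
key A and the blinding factor h ends up at the same blinded public key
A'. Below is a pure-Python sketch of the scalar arithmetic only;
deriving h by hashing the DH shared secret is an assumption about your
scheme, and the point operations are left as comments:

    import hashlib
    import secrets

    # ed25519 group order, the "l" in a' = h*a (mod l)
    L = 2**252 + 27742317777372353535851937790883648493

    def blinding_factor(shared_secret: bytes) -> int:
        # Hypothetical derivation: hash the DH shared secret and reduce
        # mod l.  (rend-spec derives h from the identity key and the
        # time period instead.)
        return int.from_bytes(hashlib.sha256(shared_secret).digest(),
                              "little") % L

    # Stand-in for Alice's long-term ed25519 secret scalar 'a'
    # (public key A = a*B).
    a = secrets.randbelow(L)

    h = blinding_factor(b"shared secret from the x25519 exchange")

    # Alice, who knows the secret scalar, derives the blinded secret key:
    a_blinded = (h * a) % L          # a' = h*a (mod l)

    # Bob, who only knows A = a*B and h, would compute A' = h*A on the
    # curve; since (h*a)*B == h*(a*B), both sides get the same blinded
    # public key, and hence the same temporary onion address.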
> We'd also have to be careful about the number of blinded keys generated
> from a given root key. The security proof uses T = 2^16 as an example
> for the maximum number of epochs, giving a 16-bit security loss vs
> normal Ed25519. In this scheme T would be the maximum number of contacts
> rather than epochs. 2^16 is more than enough for the current context,
> where contacts are added manually, but we'd have to bear in mind that it
> wouldn't be safe to use this for automatic exchanges initiated by other
> parties.
>
>> And speaking of UX, this is definitely a messy protocol. One person has
>> to wait for the other person to start up a temporary HS. What happens if
>> the HS never comes up? How often does the other person check for the HS?
>> How long does the first person keep the HS up? You can
>> probably hide all these details on the UI, but it still seems like a
>> messy situation.
>
> Messy? Yeah, welcome to P2P. ;-)
>
> We're testing a prototype of the UX at the moment.
>
> Bringing up the hidden service tends to take around 30 seconds, which is
> a long time if you make the user sit there and watch a progress wheel,
> but not too bad if you let them go away and do other things until a
> notification tells them it's done.
>
> Of course that's the happy path, where the contact's online and has
> already opened the user's link. If the contact sent their link and then
> went offline, the user has to wait for them to come back online. So we
> keep a list of pending contact requests and show the status for each
> one. After some time, perhaps 7 days, we stop trying to connect and mark
> the contact request as failed.
>
Yeah, I don't think a progress wheel is what you want here. You probably
want a greyed-out contact saying "Contact pending...", like Ricochet does
when adding a contact.
> (In my first email I mentioned an alternative approach, where we set up
> a temporary hidden service in advance and just send its address in the
> link, which expires after 24 hours. In that case we can shave 30 seconds
> off the happy path, but we need to work out the UX for explaining that
> links will expire and dealing with expired links. So there are pros and
> cons to both approaches.)
>