[tor-dev] Open topics of prop247: Defending Against Guard Discovery Attacks using Vanguards
teor
teor2345 at gmail.com
Sun Jul 2 23:16:17 UTC 2017
> On 3 Jul 2017, at 06:02, s7r <s7r at sky-ip.org> wrote:
>
> George Kadianakis wrote:
>> Hello,
>>
>> here is some background information and summarizing of proposal 247
>> "Defending Against Guard Discovery Attacks using Vanguards" for people
>> who plan to work on this in the short-term future.
>>
>
> Hello,
>
> I briefly discussed this with David in Amsterdam and want to discuss
> it further here, where everyone can have a look at it. I have an idea
> for a different technique that would replace the concept of vanguards
> and would only activate when the hidden service might be under attack
> - I am referring to the Hidden Service Guard Discovery attack, which
> is currently fast, easy, effective and (relatively, depending on the
> adversary) cheap. It would also address the load balancing questions
> we have in the context of using vanguards, and make it much harder
> and more expensive for an adversary to mount an HS Guard Discovery
> attack.
>
> The main idea was discussed last year a little bit:
> https://lists.torproject.org/pipermail/tor-dev/2016-January/010291.html
>
> but its initial logic (banning a suspicious rendezvous point for a
> period of time) had some issues as pointed out by arma:
> https://lists.torproject.org/pipermail/tor-dev/2016-January/010292.html
I recommend people interested in this proposal re-read the old thread.
It contains several objections, many of which apply to this proposal as
well.
> ...
>
> A rendezvous relay is considered suspicious when the number of
> successfully established circuits through that relay as the
> rendezvous point in the last 24 hours exceeds the expected number of
> such circuits by more than a factor of 2 (x2).
Why 2x?
Is it just a number you picked?
In general, why the particular numbers in this proposal?
Are they just guesses (most of our proposal numbers are), or are they
evidence-based?
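For concreteness, here is roughly how I read the check quoted above.
Every name and constant in this sketch is mine, not the proposal's:

  # Sketch of the proposed suspicion check as I read it; every name
  # and constant here is mine, not from the proposal.

  SUSPICION_FACTOR = 2.0   # the proposal's "x2"
  MIN_EXPECTED = 1.0       # floor for relays whose expected count is < 1

  def is_suspicious(observed_circuits, middle_probability,
                    total_rend_circuits_24h):
      """observed_circuits: rend circuits through this relay in 24 hours.
      middle_probability: the relay's middle-position selection
      probability from the consensus weights.
      total_rend_circuits_24h: all rend circuits the service saw."""
      expected = middle_probability * total_rend_circuits_24h
      return observed_circuits > SUSPICION_FACTOR * max(expected,
                                                        MIN_EXPECTED)

Note that whatever floor we put on the expected count for low-weight
relays effectively sets the minimum number of circuits needed to trip
the check, which matters for the cost estimate further down.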
> ...
>
> When a relay triggers it, instead of banning it and refusing to use
> it any longer, we just reuse hop 2 and hop 3 from the last circuit
> when building new rendezvous circuits with this relay as the
> rendezvous point, for a random period of between 24 and 72 hours.
> This mitigates the issue where an attacker DoSes the HS by making
> every relay in the consensus suspicious, hitting the limit for each
> relay one by one.
Here's an attack that's enabled by this proposal:
1. Send lots of traffic through various rend points until you trigger the
limit on a particular hop2 or hop3 or rend you control.
2. Stop sending traffic on that particular rend.
3. Observe encrypted client traffic/paths on hop2, hop3 or rend for
24 to 72 hours.
4. When hop2 or hop3 rotate, repeat from 1.
This attack can be performed in parallel on multiple rend points for the
same service, and only needs to succeed once.
How much effort would it take to bind all the rend points in the consensus
to a particular hop2, hop3 for a service?
(I think the minimum answer is min(Q) * count(rend), or about 15000-20000
connections.)
Why not just use this defence (slow hop2, hop3 rotation) all the time?
If we did, this attack would be pointless, because an attacker couldn't
keep forcing fast hop2, hop3 rotation until they got the relays they
want. (I've sketched what always-on slow rotation could look like below.)
Why not also use this defence (slow hop2, hop3 rotation) for clients?
In the last thread, you said that clients can't be forced to make
circuits. But with features like refresh and JavaScript, this just
isn't true.
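For comparison, "use this defence all the time" is basically the
prop247 vanguard idea: pick hop2/hop3 up front and rotate them on a
slow, randomised timer, with no suspicion trigger at all. Something
like the sketch below (the rotation bounds are placeholders, not a
recommendation):

  import random
  import time

  ROTATE_MIN_SECS = 24 * 3600   # placeholder bounds, not a recommendation
  ROTATE_MAX_SECS = 72 * 3600

  class SlowRotatingHop:
      """Always-on slow rotation for one hop position: keep the same
      relay until a randomised timer expires, then pick a new one."""
      def __init__(self, pick_relay):
          self.pick_relay = pick_relay
          self.relay = pick_relay()
          self.expires = self._new_expiry()

      def _new_expiry(self):
          return time.time() + random.uniform(ROTATE_MIN_SECS,
                                              ROTATE_MAX_SECS)

      def current(self):
          if time.time() > self.expires:
              self.relay = self.pick_relay()
              self.expires = self._new_expiry()
          return self.relay

  # Every rend circuit then uses hop2.current() and hop3.current(),
  # so an attacker can't force faster rotation by hammering rend points.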
In general, how do we know the suspicion thresholds are right?
Also, in general, it is harder to test and maintain software that
changes its behaviour in rare circumstances. That doesn't mean this
is a bad design: just that it costs extra to implement correctly and
to verify that it is correct. How would you test this?
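As a first step, I'd want to see something like the toy simulation
below run against realistic circuit volumes, to check how often honest
clients of a busy service trip the limit by chance. Uniform relay
selection and all the numbers here are simplifications of mine:

  import random

  def false_positive_rate(num_relays=6500, circuits_per_day=1000,
                          factor=2.0, min_expected=1.0, trials=100):
      """Simulate honest clients picking rend points uniformly at
      random (real selection is bandwidth-weighted; this is only a
      toy) and count how often at least one relay trips the
      threshold sketched earlier."""
      triggered = 0
      for _ in range(trials):
          counts = {}
          for _ in range(circuits_per_day):
              r = random.randrange(num_relays)
              counts[r] = counts.get(r, 0) + 1
          expected = float(circuits_per_day) / num_relays
          limit = factor * max(expected, min_expected)
          if any(c > limit for c in counts.values()):
              triggered += 1
      return float(triggered) / trials

  # print(false_positive_rate())

Even a toy like this would tell us whether a busy service trips the
limit on low-weight relays purely by chance.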
> ...
>
> It is assumed that the protection is not usually triggered, only in
> exceptional cases (a normal Tor client will just pick rendezvous
> points at random based on middle probability, which should not be
> able to trigger the protection). In the exceptional cases where we
> reuse hop 2 and hop 3 of the last circuit for a 24 to 72 hour period,
> the load balancing issues shouldn't be a problem, given that we are
> talking about isolated cases.
How much would it cost an attacker to *not* make it an isolated case?
Could an attacker bring down a relay by making multiple hidden services
go through the same hop2 or hop3?
For example:
1. Perform the attack above until the victim relay is in the hop3 position
(with a malicious rend point, the client knows hop3).
2. Repeat 1 with a different service and the same malicious rend point.
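Rough load arithmetic for that concern (all numbers below are invented
for illustration):

  # Can an attacker concentrate enough traffic on a victim relay by
  # getting it pinned as hop2/hop3 for several services? All numbers
  # below are invented for illustration.
  services_pinned_to_victim = 20   # services "bound" via the attack above
  mbit_per_busy_service = 5        # rend traffic the attacker drives
  victim_relay_mbit = 50           # a mid-sized relay

  offered_mbit = services_pinned_to_victim * mbit_per_busy_service
  print(offered_mbit > victim_relay_mbit)   # True: 100 Mbit offered vs 50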
Also, Tor2web with Tor2webRendezvousPoints will always trigger this case,
as I said in response to the last proposal:
(for "break" read "trigger on")
> * This will break some Tor2Web installations, which deliberately choose
> rendezvous points on the same server or network for latency reasons.
> (Forcing Tor2Web installations to choose multiple RPs may be a worthwhile
> security tradeoff.)
https://lists.torproject.org/pipermail/tor-dev/2016-January/010293.html
I won't repeat the entire thread here, but if this protection will always
be triggered when Tor2webRendezvousPoints is on, please document that in
the proposal, and talk about the load balancing implications.
(Tor2webRendezvousPoints allows a Tor2web client to choose a fixed set
of rendezvous points for every connection. Please re-read the thread or
the tor man page for details.)
> One question:
> Are we creating an additional risk by keeping this additional
> information (hop2, hop3 of every rendezvous circuit) on the hidden
> service server side? How useful can this historical information be for
> an attacker that becomes aware of the location of the hidden service?
It provides the entire path to the rendezvous point.
This is useful for an attacker that knows the rendezvous point. It is also
useful for an attacker whose priority is to locate clients, rather than
locate the service.
> We
> already keep this information regarding the Guard. From my point of view
> this is irrelevant, given this information only becomes available after
> the location of the hidden service is already discovered (which is
> pretty much maximum damage).
... to the hidden service, not necessarily its clients.
T
--
Tim Wilson-Brown (teor)
teor2345 at gmail dot com
PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B
ricochet:ekmygaiu4rzgsk6n
xmpp: teor at torproject dot org