Compromise of multiple Directory Authorities' keys: a partial solution for protecting Tor clients
unknown
unknown at pgpru.com
Sat Mar 1 18:15:59 UTC 2008
On Wed, 13 Feb 2008 04:08:04 -0500
Roger Dingledine <arma at xxx.xxx> wrote:
> Ygrek just reminded me that nobody had answered this. Sorry for the delay;
> I'll try to provide a brief response here.
Thanks, Roger. We continue to discuss the problem on our website, with our heads
overheating. :-)
> Actually, of the current v3 authorities:
> 3 are in the US (moria1, lefkada, ides)
> 3 are in Europe (tor26, gabelmoo, dannenberg)
> but yes, it's definitely an issue we need to stay aware of.
[...]
> Worse, since these attacks rarely (never?) show up in practice, we're
> never going to get feedback that the code is still working.
This issue can lead to a crisis of trust. Under such an attack, 3+1 corrupted
authorities plus one controlled ISP is no better than a single anonymizer with
access to user logs. It wouldn't be "rare" when the adversary is a law enforcement
agency or another "political" adversary
that is able to obtain DA keys and install a Tor virtual network simulator at an
ISP. And all of this is far easier than pretending to be a "global adversary". Worse,
this "hypothetical" attack would be deterministic, not probabilistic.
That is to say, deterring this attack is similar to lightning protection. You
can place a lightning rod on a house's roof and avoid a direct lightning
strike. Lightning striking an unprotected area is uncommon, so even without a
lightning rod it's very rare for your house to be struck. You may think that
the lightning rod is unnecessary. But if a direct lightning strike does happen,
it will be devastating.
> The most
> likely scenario for any balance that we pick between too brittle and
> not brittle enough is that in about a year people will start seeing the
> warnings when no actual attack is occurring, and we'll wonder why the
> code is there.
Agreed. The core problem is that we don't have clear criteria for detecting an
attack, and we don't have a simple scenario for defense.
We assume that the adversary conducts this attack against a single Tor user or a
very small fraction of isolated Tor users, in order to avoid disclosing such
capabilities. The attack is based on creating a distorted view of the Tor network
for the targeted user(s): simulating a virtual network and afterwards rerouting
the intercepted and decrypted/re-encrypted traffic to the real Tor network.
The user can detect this attack in two ways: autonomously (collecting anomalies
in the network statistics), and relatively (comparing the network statistics with
other trusted users by exchanging suspicious network statuses over secure
out-of-band channels: encrypted email, personal face-to-face contacts). If
concerned users can identify suspicious node keys (signed by rogue DAs) with the
help of more users, they can publish those keys or their hashes for "peer review"
as a demonstration of evidence. Concerned users would play the role of "Tor watchers".
Here we propose a very simple (and maybe naive) approach for the first step,
autonomous detection (just as an example; it hasn't been tested in the wild).
If the user has N_k node keys from the current network status and N_(k-1) from
the previous network status, then
C = N_k / N_(k-1) is the change in the number of keys (and of the nodes themselves).
Let I be the number of keys found in both of the lists; its maximum possible value is
I_max = min(N_k, N_(k-1)).
Let D = I / I_max be the change of the keys themselves (the differences).
We can replace this with D = (I+1) / (I_max+1) to avoid zero values (not always
needed; in reality I is large).
If C >= 1, we calculate S = C / D; if C < 1, we calculate S = C * D.
The closer S is to 1, the more stable the Tor network is. If S changes significantly,
we have network instability (S < 1 is better than S > 1). S can easily be
calculated from the Tor network status file. But we can't easily count broken or
blocked connections to nodes from the status list (which could itself be a sign
of attack).
A more correct solution is to calculate the independent sum S = (C-1)^2 + (D-1)^2
and keep this value as close to zero as possible.
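For illustration, here is a minimal Python sketch of this autonomous check. It
assumes the client has already extracted the router identity keys (or their
fingerprints) from two consecutive network-status documents; the function name
and the sample data below are purely illustrative:

def stability(prev_keys, curr_keys):
    n_prev, n_curr = len(prev_keys), len(curr_keys)
    i = len(set(prev_keys) & set(curr_keys))   # keys present in both statuses
    i_max = min(n_prev, n_curr)                # largest possible overlap

    c = n_curr / n_prev                        # change in the number of keys
    d = (i + 1) / (i_max + 1)                  # change of the keys themselves
    s = c / d if c >= 1 else c * d             # closer to 1 = more stable
    s_sq = (c - 1) ** 2 + (d - 1) ** 2         # alternative form: keep near 0
    return s, s_sq

# Example: replacing 5% of 1000 keys keeps both metrics near their stable values.
prev = ["key%d" % i for i in range(1000)]
curr = prev[50:] + ["new%d" % i for i in range(50)]
print(stability(prev, curr))                   # approx (1.05, 0.0025)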
Another possible approach is to measure descriptors via the intersection
M = M1 AND M2 and consider Z(M1, M2) = min( |M|/|M1| , |M|/|M2| ) as a normalized,
time-direction-invariant value in the range [0..1]: 0 means most unstable,
1 means fully stable. Finally, calculate R = 1 - Z.
Using the differences D, we can compute such values over any node attributes from
the status (consensus) file (IP, node type), so that large changes in the numbers
of entry and exit nodes and other distortions are caught as well.
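A similarly small sketch for the intersection-based metric, again under the
assumption that M1 and M2 are descriptor sets (keys, IP:port pairs, node types, ...)
taken from two consecutive statuses:

def z_metric(m1, m2):
    m = set(m1) & set(m2)                          # M = M1 AND M2
    return min(len(m) / len(m1), len(m) / len(m2)) # in [0..1]: 0 unstable, 1 stable

def r_metric(m1, m2):
    return 1 - z_metric(m1, m2)                    # R = 1 - Z

Comparing, say, only the exit-flagged subsets of two consensuses this way would
flag a sudden replacement of most exit nodes even if the total node count stays
stable.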
We have also discussed a "delayed partial cache poisoning" or "multistep attack"
by the attacker: the ability to avoid any visible statistical disturbance. Our
guess is that this is impractical for the attacker and can be easily detected
with our criteria.
For example, if the attacker changes only 10% of the node keys per step, the user
has a very small chance of downloading the false status 9 times in succession:
Pr = 0.1 * 0.2 * 0.3 * ... * 0.9 ~= 0.00036
But the user has a large chance, 1 - Pr, at one of those 9 steps, of downloading
the real Tor network status (node consensus) from a node (mirror) not controlled
by the adversary, and of detecting the attack by measuring S.
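The arithmetic can be checked in a couple of lines of Python (assuming, as the
product above suggests, that at step i the chance of fetching the false status
is i * 0.1):

from math import prod

step_probs = [i / 10 for i in range(1, 10)]   # 0.1, 0.2, ..., 0.9
pr = prod(step_probs)                         # false status fetched 9 times in a row
print(round(pr, 5))                           # 0.00036
print(round(1 - pr, 5))                       # chance of hitting a real mirror at least once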
Our conjecture is that "adaptive (or selective) blocking" of connections is
possible but doesn't give the adversary a significant advantage.
Another problem for the adversary is too many broken connections when real and
false nodes are mixed. In another example, the user's view contains half false
nodes and half real ones. He can randomly select 8 types of circuits
(1 = false/simulated node, 0 = real node):
(1-1-1) - fully controlled; the connection goes to the virtual Tor simulator, is
deanonymized by the adversary, and is rerouted to the real Tor network without
arousing the user's suspicion.
(0-0-0) - fully unknown to and free of the adversary; it can be passively relayed
by the adversary to the real Tor cloud without deanonymization.
(0-0-1), (0-1-0), (0-1-1), (1-0-1) - partially known to the adversary, but they
cause broken connections (non-working circuits).
(1-0-0), (1-1-0) - partially known, but not deanonymized by the adversary; they
work if the adversary reroutes them from the virtual Tor simulator to the real
Tor cloud.
With 50% false keys and virtualized nodes the adversary gets only 1/8 of circuits
deanonymized, and the user gets 50% broken circuits.
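The 1/8 and 50% figures can be reproduced by enumerating all eight
entry-middle-exit patterns; this small Python sketch simply encodes the
classification from the list above (1 = simulated node, 0 = real node):

from itertools import product

DEANONYMIZED = {(1, 1, 1)}
BROKEN = {(0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 1)}

circuits = list(product((0, 1), repeat=3))         # all 8 patterns
deanon = sum(c in DEANONYMIZED for c in circuits)
broken = sum(c in BROKEN for c in circuits)

print(deanon, "/", len(circuits))                  # 1 / 8 deanonymized
print(broken, "/", len(circuits))                  # 4 / 8 = 50% broken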
Too many broken circuits or blocked connections, together with a persistently
high disturbance of S over consecutive consensus refreshes, is also a sign of attack.
A realistic attack scenario is a single-step one: block all nodes (except the
false DAs), wait for the user to download the false consensus from the rogue DAs,
and replace 100% of the node keys to simulate a virtual Tor network, giving the
user no chance to escape from the virtual Tor into the real Tor.
From these examples it is possible to develop a working and more refined
numerical recipe for determining a hazardous difference between two network-status
documents. Such offline statistics gathering and analysis can be performed with
separate tools used by enthusiastic paranoids :)
Maybe these statistics and this information about network status can be distributed
for peer review by means of a web of trust, based on the existing infrastructure
and not relying on Tor, thus providing one more way of external verification that
doesn't require any changes to the Tor source.
Concerned users ("Tor watchers") could print diagrams of suspicious changes and
publish them, or exchange and analyze data using a web of trust or other means of
communicating such DA audits to the public.
We believe the problem needs a lot of research and experimentation before an
acceptable solution is reached.
"OpenPGP-in-Russia Team"