tor-spec comments
Joel N. Weber II
ordev at joelweber.com
Sat Aug 30 17:05:53 UTC 2003
Brandon Wiley (blanu) also suggested just using ssh port-forwarding to
build up a bunch of ssh tunnels within tunnels.
Doing it with nested ssh sessions is quite inflexible, though, because we
want to be able to experiment with link padding, use fixed-size cells so
an observer can't distinguish control cells from relay cells, and so on.
The sshv2 protocol has message types that exist explicitly to support
padding, so I'm not sure I buy this.
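For concreteness, a minimal sketch (in Python, purely illustrative and
not from the spec) of how a protocol with an explicit ignore/padding
message, like sshv2's SSH_MSG_IGNORE, could round a burst of traffic up
to a uniform size.  The 512-byte target and the pad_burst helper are
hypothetical, and the sketch ignores the SSH binary-packet framing and
encryption entirely.

    import os
    import struct

    SSH_MSG_IGNORE = 2   # message number from the SSH transport spec
    CELL_SIZE = 512      # hypothetical target size

    def ignore_message(n_filler: int) -> bytes:
        """Build an SSH_MSG_IGNORE payload carrying n_filler random bytes."""
        filler = os.urandom(n_filler)
        return bytes([SSH_MSG_IGNORE]) + struct.pack(">I", len(filler)) + filler

    def pad_burst(data: bytes) -> bytes:
        """Append an ignore message so the burst is a multiple of CELL_SIZE."""
        overhead = 1 + 4                  # message byte + string length field
        shortfall = (-len(data)) % CELL_SIZE
        if shortfall < overhead:          # no room for the header: pad a full cell
            shortfall += CELL_SIZE
        return data + ignore_message(shortfall - overhead)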
Longer term, having the option of using either sshv2 or TLS to
interconnect onion routers strikes me as possibly interesting. It
would be neat to have an onion router connection to a friend's machine
any time I was logged into it, although that makes different
assumptions about threat models, and about how to address them, than
you seem to be making.
It does seem that if you can run onion routing on less trusted nodes
with low-bandwidth connections, initiating a connection anonymously
will be a lot more effective. Otherwise, anyone who can watch the
connections between the exit nodes and the Internet, and between the
onion proxies and the Internet, is going to be able to figure
everything out, I think (modulo any services that are running on onion
routers themselves). Unless the connection between the user's machine
and the onion proxy is encrypted and you do clever padding; but even
then there are probably attacks involving noticing when a packet is
present, and if you can forward other people's traffic through, you
might confuse things further.
(But I need to read a lot more literature before I'll really feel like
I understand all the security implications of things.)
I'd actually like to solve link key rotation at the same time as I update
the handshake protocol. It would be nice if establishing the first key
is pretty much the same operation as rotating the key, so we can keep
the design simple. More design work remains. Feel free. :)
I don't know offhand whether TLS does this, but sshv2 certainly takes
care of all of it.
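To illustrate the "first key setup looks just like rekeying" idea, a
small sketch follows; the primitives (X25519 plus HKDF-SHA256 from a
recent version of the Python cryptography package) are stand-ins chosen
for brevity, not anything taken from the spec or from sshv2.

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey)
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def start_handshake():
        """Generate an ephemeral keypair; used for the first key and every rekey."""
        priv = X25519PrivateKey.generate()
        pub = priv.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)
        return priv, pub

    def finish_handshake(priv, peer_pub: bytes) -> bytes:
        """Derive the (new) link key from the peer's ephemeral public value."""
        shared = priv.exchange(X25519PublicKey.from_public_bytes(peer_pub))
        return HKDF(algorithm=hashes.SHA256(), length=32,
                    salt=None, info=b"link key").derive(shared)

Rotating the link key is then just running the same two steps again and
swapping the new key in once both sides acknowledge it.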
The cells are fixed size to reduce vulnerability to traffic analysis.
Fixed-size cells plus encryption is a great combination against the
simplest traffic analysis attacks. We could do mixing, batching, or
delaying down the road to get better protection; whether this will be
worth it requires more research though.
I think if you use a protocol with variable-length cells that allows
padding, you ought to be able to simulate fixed lengths if you need
to.
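A minimal sketch of that simulation, assuming a hypothetical framing
with a 2-byte length prefix and borrowing tor's 512-byte cell size:
every variable-length payload goes out as one uniform cell filled out
with random padding.

    import os
    import struct

    CELL_SIZE = 512
    MAX_PAYLOAD = CELL_SIZE - 2   # two bytes reserved for the length prefix

    def to_cell(payload: bytes) -> bytes:
        """Wrap a variable-length payload in one fixed-size cell."""
        if len(payload) > MAX_PAYLOAD:
            raise ValueError("payload must be fragmented first")
        padding = os.urandom(MAX_PAYLOAD - len(payload))
        return struct.pack(">H", len(payload)) + payload + padding

    def from_cell(cell: bytes) -> bytes:
        """Recover the payload, discarding the padding."""
        (length,) = struct.unpack(">H", cell[:2])
        return cell[2:2 + length]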
I'm also not sure whether resisting the simplest traffic analysis
actually buys you anything if there's another attack that takes an
extra week of programmer time but the same amount of computing
resources.
Yes, the onion routers are going to be a relatively stable group. This is
so we can resist Sybil attacks (http://freehaven.net/anonbib/#sybil) by ...
I'm really not convinced that's the right approach. It means that I
have to trust one particular group of operators.
What I would like is to be able to use OpenPGP to verify the identity
of the various servers, with separate trust paths to each, and some
sense that the various server operators I'm depending on aren't really
all that close to each other.
Also, if the middle of my onion path is some random untrusted node,
that doesn't actually add any risk, as far as I can tell, and defends
me against the cabal a bit, assuming the cabal doesn't happen to be
secretly running it.
I'm not convinced either way either. On the other hand, having ways
of doing things which the developers don't use is a sure way to have
lurking bugs.
And you always have the potential for lurking buffer overflows in code
paths that you don't test. I suspect if you were using OpenSSL rather
than a homegrown thing, the amount of code unique to tor would go
down, and thus the number of security bugs unique to tor would go
down. And presumably lots of people have been auditing OpenSSL over
the years, or, um, something.
If the user cares, he should be doing his own end-to-end authentication
and integrity checking.
Yes, and this is something that we should be encouraging independently
of anonymity development. (And why I've been working on making
OpenPGP an option for host identity verification in various protocols.)
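As a concrete (and purely illustrative) example of application-level
end-to-end integrity checking, assuming the two endpoints already share
a secret key; how they authenticate each other and agree on that key,
for instance via OpenPGP, is the separate problem discussed above:

    import hashlib
    import hmac

    def seal(key: bytes, message: bytes) -> bytes:
        """Append an HMAC-SHA256 tag so the receiver can detect tampering."""
        return message + hmac.new(key, message, hashlib.sha256).digest()

    def open_sealed(key: bytes, blob: bytes) -> bytes:
        """Verify and strip the tag, raising if the message was modified."""
        message, tag = blob[:-32], blob[-32:]
        expected = hmac.new(key, message, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("end-to-end integrity check failed")
        return message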
I think it's definitely the case that we will have dozens of onion
routers with full 10Mbit or faster pipes to the Internet. I've already
had that many people tell me they want to run routers and describe their
hosting facilities, and several of them are talking about 100Mbit or
more. And if we're going to have a fairly limited number of nodes, then
we want people with big pipes.
Whether our code can saturate big pipes in real-world situations remains
to be seen (we're still working on getting approval for an Internet-wide
beta network). But even if we can only barely fill up a 10Mbit machine
(I think my 1GHz Athlon can fill between 5 and 10Mbit now), I don't
want to be asking these volunteers to give me all of their CPU too:
it would be really nice if this were a typical unnoticed network daemon.
What do 1GHz Athlons cost these days? Is it really the case that
people aren't going to be able to buy a stack of 10 or 20 machines if
that's what it takes?
Are you using optimized versions of all the crypto algorithms? You
really should be micro-optimizing the assembly code long before you
start micro-optimizing the protocol.
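One rough way to ground this is to measure what the OpenSSL-backed
symmetric crypto can actually push on a given machine before worrying
about the protocol.  A sketch using the Python cryptography package;
the two-second duration and 1MB chunk size are arbitrary choices, and
the numbers are of course machine- and build-dependent.

    import os
    import time
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def aes_ctr_throughput(seconds: float = 2.0, chunk: int = 1 << 20) -> float:
        """Return approximate AES-128-CTR throughput in megabits per second."""
        enc = Cipher(algorithms.AES(os.urandom(16)),
                     modes.CTR(os.urandom(16))).encryptor()
        data = os.urandom(chunk)
        done, start = 0, time.perf_counter()
        while time.perf_counter() - start < seconds:
            enc.update(data)
            done += chunk
        return done * 8 / (time.perf_counter() - start) / 1e6

    if __name__ == "__main__":
        print("AES-128-CTR: ~%.0f Mbit/s on this machine" % aes_ctr_throughput())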
What do you expect these people are going to do with their CPUs if we
don't give them crypto to run?
What's going to happen in two or three years with Moore's Law? Is CPU
performance growing faster than internet traffic?