[tor-talk] Spoofing a browser profile to prevent fingerprinting
Mirimir
mirimir at riseup.net
Wed Jul 30 00:06:59 UTC 2014
On 07/29/2014 03:28 PM, Seth David Schoen wrote:
Thank you :)
> Mirimir writes:
>
>> Discussions of measured entropy and stuff are too abstract for me. Maybe
>> someone can help me with a few simpleminded questions.
>>
>> About 2.2 million clients are using Tor these days. Let's say that I've
>> toggled NoScript to block by default, and that I have a unique pattern
>> of enabling particular scripts on particular sites. That is, I'm unique
>> among all Tor users. In what ways does that put my Tor use at risk of
>> being linked to IP addresses seen by my entry guards?
>
> It means that if you go to site A today, and site B next week, the site
> operators (or the exit node operators, or people spying on the network
> links between the exit nodes and the sites) might realize that you're
> the same person, even though you took mostly or completely separate paths
> through the Tor network and were using Tor on totally different occasions.
OK, I get that. If I cared about site A and site B knowing that I'm the
same person, I'd never visit them using the same machine/VM. But using a
separate machine/VM per site isn't the norm, of course.
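To put a rough number on the "unique among 2.2 million" framing, here is a
back-of-the-envelope sketch in Python; the 2.2 million figure is the one
cited above, everything else is just illustration:

    import math

    tor_clients = 2200000                    # approximate Tor clients, as cited above
    bits_to_single_me_out = math.log2(tor_clients)
    print(round(bits_to_single_me_out, 1))   # ~21.1 bits

    # A per-site NoScript whitelist is effectively a long bit vector
    # (one bit per site: allowed or blocked), so a stable, idiosyncratic
    # enabling pattern can easily carry more than those ~21 bits if the
    # pattern is observable by the sites or the exits.

So a unique enabling pattern is, by itself, enough information to pick one
client out of the whole current Tor population.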
> There are several ways of looking at why this is a privacy problem.
> One is just to say that there's less uncertainty about who you are,
> because even if there are lots of site A users and lots of site B users,
> there might not be that many people who use both. Another is that you
> might have revealed something about your offline identity to one of the
> sites (for example, some people log in to a Twitter account from Tor
> just to hide their physical location, but put their real name into their
> Twitter profile) but not to the other. If you told site A who you are,
> now there's a possible path for site B to realize who you are, too, if
> the sites or people spying on the sites cooperate sufficiently.
Expecting Tor to protect against mistakes like that is quite a stretch!
But yes, such users need maximal uncertainty.
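A toy illustration of the linking path described above, assuming (purely
hypothetically) that both sites, or observers watching them, log some
browser-fingerprint value next to whatever identity each visitor revealed;
all the names and values are invented:

    # Hypothetical per-site logs keyed by an observed fingerprint value.
    site_a_log = {"fp-7d3a": "logged in as @real_name"}
    site_b_log = {"fp-7d3a": "anonymous visitor, sensitive activity"}

    # Pooling the logs and matching on the fingerprint ties the records
    # together, so the identity revealed to site A attaches to the site B
    # visit as well.
    for fp in site_a_log.keys() & site_b_log.keys():
        print(fp, "->", site_a_log[fp], "|", site_b_log[fp])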
> In terms of identifying your real-world IP address, it provides more
> data points that people can try to feed into their observations. For
> example, if someone is doing pretty coarse-grained monitoring ("who
> was using Tor at all during this hour?") rather than fine-grained
> monitoring ("exactly what times were packets sent into the Tor network,
> and how many packets, and how big were they?"), having a link between
> one time that you used Tor and another time that you used Tor would be
> useful for eliminating some candidate users from the coarse-grained
> observations.
You're considering network adversaries here. It seems like obfuscated
bridges would be a better strategy against them. That way, they couldn't
so easily monitor Tor use.
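A minimal sketch of the elimination step Seth describes, assuming the
observer only learns which candidates were using Tor at all during each
coarse window; the names and sets are made up:

    # Who, among some candidate pool, was on Tor during each window.
    on_tor_during_visit_a = {"alice", "bob", "carol", "dave"}
    on_tor_during_visit_b = {"bob", "carol", "erin"}

    # Taken separately, each visit has its own candidate pool. If
    # fingerprinting says both visits came from the same browser, only
    # people present in *both* windows remain plausible.
    remaining = on_tor_during_visit_a & on_tor_during_visit_b
    print(remaining)                         # {'bob', 'carol'}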
> For instance, suppose that you went to site A at 16:00 one day and to
> site B at 20:00 the following day. If site A and site B (or people
> spying on them) can realize that you're actually the same person through
> browser fingerprinting methods, then if someone has an approximate
> observation that you were using Tor at both of those times, it becomes
> much more likely that you are the person in question who was using the
> two sites. Whereas if the observations are taken separately (without
> knowing whether the site A user and the site B user are the same person
> or not), they could have less confirmatory power.
That's getting perilously close to traffic confirmation, isn't it?
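And to put rough numbers on why linked observations confirm so much more
than separate ones: suppose (numbers invented for illustration) there are
N candidate users and a fraction p of them happens to be on Tor during any
given hour, independently across hours:

    N = 2200000   # candidate pool, reusing the figure from earlier
    p = 0.05      # fraction on Tor in any given hour -- an invented number

    single_window = N * p          # ~110,000 plausible candidates for one visit
    linked_pair   = N * p * p      # ~5,500 once the two visits are tied together
    print(int(single_window), int(linked_pair))

Each additional linked observation multiplies in another factor of p, which
is why fingerprint linkage pushes this toward traffic confirmation.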