[tbb-dev] User Safety Proposals
Tom Ritter
tom at ritter.vg
Sun Apr 7 01:18:23 UTC 2019
I have updated the attached proposals from our last conversation and
I'd like to request they be committed to the tbb-dev proposal
repository.
-tom
-------------- next part --------------
Filename: xxx-cryptocurrency-user-safety.txt
Title: Protecting Against Malicious Exit Nodes Performing Cryptocurrency Hijacking
Author: Tom Ritter
Created: 06-Mar-2019
Status: Open
1. Motivation
Sometimes, exit nodes are malicious. One activity malicious exit nodes
perform is rewriting cryptocurrency addresses to hijack and steal the
funds users are trying to send to the original address. Tor Project and
volunteers scan and report malicious exit relays, whereupon they are
given the BadExit flag.
In the period of time between the nodes being identified and being
blocklisted, users are put at risk from these nodes.
2. Proposal
2.1. Required Infrastructure
This proposal is complementary to the xxx-selfsigned-user-safety.txt proposal.
We assume that (only) one of the following is in place.
2.1.1 selfsigned-user-safety
The selfsigned-user-safety proposal is implemented.
2.1.2 Self-signed certificate error detection
As in selfsigned-user-safety, we classify TLS Certificate Errors into two
categories.
Class 1: Suspicious Certificate Errors
- A self-signed Certificate
- A certificate signed by a Trust Anchor but for a different hostname
- A certificate that appears to be signed by a Trust Anchor, but is
missing an intermediate allowing a full path to be built
Class 2: Unsuspicious Certificate Errors
- An expired certificate signed by a Trust Anchor
- A certificate that requires an OCSP staple, but the staple is not
present
The browser will detect a Class 1 error and make this state available for
the browser to base decisions off of.
2.2. Browser Logic
The browser will be able to recognize addresses of common cryptocurrencies
and, when a user executes a copy event, will search for such an address in the
copied text.
If an address is detected and:
- the page is loaded over HTTP
or
- selfsigned-user-safety is not implemented, the page is loaded over HTTPS,
and the certificate has a Class 1 Suspicious Certificate Error
Then the text MUST NOT be copied to the clipboard.
Basically this prevents the address from being copied if the address could
have been changed by the exit node.
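As an illustration only, a minimal content-script sketch of this logic might
look as follows. The BTC_ADDRESS pattern (Bitcoin-only), the alert text, and
the pageIsInsecure flag (populated by a background script, see the sketch in
Section 7) are assumptions made for the sketch, not part of the proposal.

  // TypeScript content script (sketch)
  declare const browser: any; // provided by the Firefox WebExtension runtime

  // Illustrative pattern covering common Bitcoin address formats only; a real
  // implementation would need patterns for other cryptocurrencies as well.
  const BTC_ADDRESS = /\b(?:[13][a-km-zA-HJ-NP-Z1-9]{25,34}|bc1[a-z0-9]{25,59})\b/;

  let pageIsInsecure = false;
  // Ask the background script whether this page was loaded over HTTP or with a
  // Class 1 certificate error (see the sketch in Section 7 for that side).
  browser.runtime.sendMessage('isPageInsecure')
    .then((insecure: boolean) => { pageIsInsecure = insecure; });

  document.addEventListener('copy', (event: ClipboardEvent) => {
    const selected = document.getSelection()?.toString() ?? '';
    if (pageIsInsecure && BTC_ADDRESS.test(selected)) {
      // Cancel the default copy so nothing reaches the clipboard.
      event.preventDefault();
      alert('Copy blocked: this address was served over an insecure connection ' +
            'and may have been rewritten by a malicious exit node.');
    }
  });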
3. False Positives
Not every cryptocurrency address served over HTTP is being attacked by a
malicious exit node.
4. False Negatives (Attacker-Controlled)
An attacker could change the address to a QR code and prompt the user to
scan it with their phone. This would not be detectable if, for example, the
attacker rendered the QR code using background-colored <div> elements.
There are likely other bypasses to consider.
5. User Interface/Experience
The text will not be copied. But when the user executes the copy shortcut or
menu item, a modal dialog (like alert()) could be presented explaining why the
copy failed.
We could also use a doorhanger or information bar - but both of these seem prone
to being missed or ignored; while a modal dialog will be immediate, come with a
sound, and require the user to acknowledge it.
6. User Bypass
The user can, of course, manually type the address.
7. Implementation
In Firefox, this entire concept can likely be implemented as a WebExtension
using the TLS Web Extension APIs and the Clipboard APIs.
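As a rough sketch of the plumbing (again, only an illustration, and assuming
the webRequest permission plus host permissions for all URLs), a background
script could track which tabs are insecure and answer queries from the content
script sketched in Section 2.2. Detecting the Class 1 certificate state itself
is not shown: a failed TLS handshake never reaches onHeadersReceived, so that
piece needs either the selfsigned-user-safety proposal or browser-internal
support.

  // TypeScript background script (sketch)
  declare const browser: any; // provided by the Firefox WebExtension runtime

  // Tabs whose top-level document the extension considers insecure.
  const insecureTabs = new Set<number>();

  browser.webRequest.onHeadersReceived.addListener(
    (details: any) => {
      if (details.url.startsWith('http:')) {
        insecureTabs.add(details.tabId);
      } else {
        // NOTE: a Class 1 certificate error aborts the load before this
        // listener fires, so that state cannot be detected here without
        // extra support.
        insecureTabs.delete(details.tabId);
      }
    },
    { urls: ['<all_urls>'], types: ['main_frame'] }
  );

  // The content script asks whether its tab is considered insecure.
  browser.runtime.onMessage.addListener((message: any, sender: any) => {
    if (message === 'isPageInsecure') {
      return Promise.resolve(insecureTabs.has(sender.tab.id));
    }
  });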
-------------- next part --------------
Filename: xxx-selfsigned-user-safety.txt
Title: Protecting Against Malicious Exit Nodes Performing TLS Interception
Author: Tom Ritter
Created: 06-Mar-2019
Status: Open
1. Motivation
Sometimes, exit nodes are malicious and perform TLS Interception using self-
signed or otherwise invalid TLS certificates. Tor Project and volunteers
scan and report malicious exit relays, whereupon they are given the BadExit
flag.
In the period of time between the nodes being identified and being
blocklisted, users are put at risk from these nodes.
2. Proposal
2.1. Classifying TLS Certificate Errors
First we classify TLS Certificate Errors into two categories. We will use
these classifications later.
Class 1: Suspicious Certificate Errors
- A self-signed Certificate
- A certificate signed by a Trust Anchor but for a different hostname
- A certificate that appears to be signed by a Trust Anchor, but is
missing an intermediate allowing a full path to be built
Class 2: Unsuspicious Certificate Errors
- An expired certificate signed by a Trust Anchor
- A certificate that requires an OCSP staple, but the staple is not
present
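As an illustration, the classification could be expressed over the fields
Firefox exposes in a SecurityInfo object (browser.webRequest.getSecurityInfo());
the mapping of those flags onto the two classes is my assumption, and the
missing-OCSP-staple case is omitted for brevity.

  // TypeScript (sketch): map certificate-error flags onto the classes above.
  type ErrorClass = 'class1-suspicious' | 'class2-unsuspicious' | 'none';

  interface CertErrorInfo {
    isUntrusted: boolean;          // chain does not terminate at a trust anchor
    isDomainMismatch: boolean;     // trust-anchor-signed, but wrong hostname
    isNotValidAtThisTime: boolean; // e.g. an expired certificate
  }

  function classifyCertificateError(info: CertErrorInfo): ErrorClass {
    // Self-signed certs and missing intermediates both surface as "untrusted".
    if (info.isUntrusted || info.isDomainMismatch) {
      return 'class1-suspicious';
    }
    if (info.isNotValidAtThisTime) {
      return 'class2-unsuspicious';
    }
    return 'none';
  }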
2.2. Browser Logic
If the browser encounters an invalid TLS Certificate when connecting to a
hostname, and the type of invalidness is a Suspicious Certificate Error,
the browser will not _immediately_ allow the user to bypass the error and
add an exception.
Instead, it will create a new circuit through a new exit node (making sure
the new exit is not in the same Family as the original), begin a TLS
handshake, and obtain the certificate offered.
If the certificate is the same as the one offered through the initial
circuit, the user is allowed to add an exception and continue. If the
certificate is different, the user is not allowed to bypass the error.
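A minimal sketch of the comparison step follows. fetchCertOverNewCircuit() is
a hypothetical helper: ordinary WebExtensions cannot request a fresh circuit,
so this step needs Tor-Browser-specific support. Comparing SHA-256 fingerprints
is my choice of what "the same certificate" means.

  // TypeScript (sketch): only permit the exception if both circuits saw the
  // identical certificate.
  declare function fetchCertOverNewCircuit(host: string): Promise<{ sha256: string }>;

  async function mayBypassError(host: string, firstCertSha256: string): Promise<boolean> {
    const second = await fetchCertOverNewCircuit(host);
    return second.sha256 === firstCertSha256;
  }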
2.3. Optional Extension
If a certificate mismatch occurs, the browser could prompt the user to
send a report to Tor Project.
The simple version of this feature could open an email message with
details prepopulated and addressed to badrelays at .
The more advanced version could submit the information to an onion
service operated by Tor Project. On the backend, we could build an
automatic verification process as well.
The details would include the hostname visited, time, exit nodes, and
certificates received over which exit nodes.
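For illustration, a structured report carrying those details might look like
the following; the field names are placeholders, not a defined format.

  // TypeScript (sketch): one possible shape for a mismatch report.
  interface CertMismatchReport {
    hostname: string;              // site the user was visiting
    observedAt: string;            // ISO-8601 timestamp
    observations: Array<{
      exitFingerprint: string;     // identity fingerprint of the exit relay
      certificateSha256: string;   // fingerprint of the certificate seen via it
      certificateDer?: string;     // optionally, the full certificate (base64 DER)
    }>;
  }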
3. False Positives
It is possible, although I suspect uncommon, that a server may have
geographic or other load balancing that presents different self-signed
certificates to different exit nodes.
If we receive reports of such occurrences, we could either relax protections
for such domains (hardcoded into the browser), or perform the new-circuit
verification choosing an exit node in the same country.
4. User Interface/Experience
While the certificate is being verified over another circuit, it would be
best to provide feedback to the user.
a. The button can appear disabled and say something like
'Pending (Verifying Certificate)'
b. A small progress bar can appear under the button that tracks the progress
of creating and extending the circuit, sending the request and getting
the reply.
c. A small underlined, clickable 'Retry' link could sit by the progress
bar to retry the circuit in case it gets stalled.
If the certificate comes back and is a mismatch, we could replace the entire
error page with more information, including a Cloudflare-style diagram[0] showing
the malicious exit node, and prompt the user to submit the information.
[0] https://external-preview.redd.it/S65-yhtC6IAqpzS6AMhMnrrFwvtyRA6WjuM_hQpJLg0.png?auto=webp&s=285e86af8e638df6ecc143a52af024f006389151
If the certificate comes back with a match, we could add some text noting that
some amount of verification has been performed. However, it seems bad to
automatically accept the certificate or relax the warning too much, since it
is still possible a TLS attack is occurring (just not inside the Tor network).
Alternately, we could not change the warning page at all.
5. Concerns
An exit node who observes an aborted TLS handshake will learn that a user
encountered a self-signed certificate error for this server on another circuit.
What would this tell them? It leaks a user's browsing activity. It also leaks
the presence of a malicious exit node on the network (assuming the exit node
observes a valid TLS certificate).
Exit nodes who lie about their family have a chance to successfully attack the
user.
-------------- next part --------------
Filename: xxx-download-user-safety.txt
Title: Protecting Against Malicious Exit Nodes Performing File Infection
Author: Tom Ritter
Created: 06-Mar-2019
Status: Open
1. Motivation
Sometimes, exit nodes are malicious. One activity malicious exit nodes
perform is infecting files (most commonly executables) downloaded over
insecure or otherwise compromised connections. Tor Project and
volunteers scan and report malicious exit relays, whereupon they are
given the BadExit flag.
In the period of time between the nodes being identified and being
blocklisted, users are put at risk from these nodes.
2. Proposal
2.1. Required Infrastructure
Firstly, we assume that for each operating system, we have devised two lists of
file types for that system.
Executable File Types: These files are programs or other things that
can definitely and intentionally execute code.
Examples: .exe .deb
Transparent File Types: These files are trivial or simple file types where
the risk presented is very low.
Examples: .txt .html .jpg .png
Additionally, it would be ideal if, for file archive types (e.g. .zip), we read
the file archive manifest and classified the file archive accordingly.
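As an illustration of these lists in code, a per-OS lookup might look like the
following; the concrete extension sets shown here are placeholders that a real
implementation would have to flesh out per operating system, and archives fall
into 'other' until their manifests are inspected.

  // TypeScript (sketch): classify a download by filename extension.
  type FileClass = 'executable' | 'transparent' | 'other';

  const EXECUTABLE_EXTENSIONS = new Set(['exe', 'msi', 'deb', 'rpm', 'dmg', 'apk']);
  const TRANSPARENT_EXTENSIONS = new Set(['txt', 'html', 'jpg', 'jpeg', 'png']);

  function classifyDownload(filename: string): FileClass {
    const ext = filename.toLowerCase().split('.').pop() ?? '';
    if (EXECUTABLE_EXTENSIONS.has(ext)) return 'executable';
    if (TRANSPARENT_EXTENSIONS.has(ext)) return 'transparent';
    return 'other'; // includes archives until their manifests are read
  }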
Secondly, this proposal is complementary to the xxx-selfsigned-user-safety.txt
proposal. We assume that (only) one of the following is in place, and we concern
ourselves only with downloads that meet one of the following criteria:
Criteria:
- the resource of concern is loaded over HTTP
or
- selfsigned-user-safety is not implemented, the resource of concern is loaded
over HTTPS, and the certificate has a Class 1 Suspicious Certificate Error
(defined below)
2.1.1 selfsigned-user-safety
The selfsigned-user-safety proposal is implemented.
2.1.2 Self-signed certificate error detection
As in selfsigned-user-safety, we classify TLS Certificate Errors into two
categories.
Class 1: Suspicious Certificate Errors
- A self-signed Certificate
- A certificate signed by a Trust Anchor but for a different hostname
- A certificate that appears to be signed by a Trust Anchor, but is
missing an intermediate allowing a full path to be built
Class 2: Unsuspicious Certificate Errors
- An expired certificate signed by a Trust Anchor
- A certificate that requires an OCSP staple, but the staple is not
present
The browser will detect a Class 1 error and make this state available for
the browser to base decisions off of.
2.2. The difference between the download _link_ and the download
We concern ourselves with two situations: whether or not the page the
download link appears on is secure (defined as not having a Class 1
Error for any active page content).
When the download link comes from a secure page, but the target of the
download is insecure (defined as being HTTP or having a Class 1 Error),
the link itself imparts some amount of authenticity to the download.
However, when the download link comes from an insecure page, no
authenticity is possible, as a MITM attacker can point the download link
at a malicious file.
2.3. Browser Logic for Executable Files
Option 1: If the filetype of a download is one of a predefined set of executable
formats, the download is prevented entirely.
Option 2: If the filetype of a download is one of a predefined set of executable
formats, we attempt to verify the download.
If the download link is secure, we could consider either option.
If the download link is insecure, we should only consider Option 1. Verification
can impart no additional value.
2.4. Browser Logic for Non-Transparent, Non-Executable Files
This essentially reverses the option numbers from above, to reflect the reduced
risk of infection of non-executable files.
To be clear, however, the risk is still non-zero. Complex types such as .doc can
include macros or other executable code; alternatively, they are prime candidates
for client-side exploits.
Option 1: If the filetype of a download is NOT one of a predefined set of executable
formats, we attempt to verify the download.
Option 2: If the filetype of a download is NOT one of a predefined set of executable
formats, the download is prevented entirely.
Option 3: No verification is performed, and the download is allowed.
Again, if the download link is secure, we could consider Option 1 or 2.
If the download link is insecure, we should only consider Options 2 or 3. (Option 3
given the reduced risk for this filetype, Option 2 given the non-zero risk.)
2.5. Browser Logic for Transparent Files
To be exhaustive, no special action is taken for transparent files.
2.6. Verifying a File Download
To verify a file download, several different approaches could be taken:
Option 1: The entire file could be downloaded over a new circuit (taking care to
avoid the same exit family) and compared.
Option 2: Assuming the server supports range requests, random parts of the file could
be requested over a new circuit and compared (see the sketch after this list).
This would save bandwidth and time.
Note that we must choose random parts of the file; otherwise an attacker
could rewrite the binary in a way that avoids altering the checked parts.
[[ How probable is it that we catch an alteration? We'd need to check a
component of the file already downloaded, and for large files we'd need to
check a lot, because there'd be a lot of places malicious code could hide... ]]
Option 3: If the file supports Authenticode or a similar signature extension, we could
a) Check if the file has an authenticode signature. If not, verify by
some other means
b) Download the PE Header and the signature block at the end of the
file over a new circuit
c) Compare the Header, Signature Block, and Signature Public Key and
confirm they match
d) Confirm that the signature isn't a weak signature that would verify
any file
At this point we should be assured that a) the OS will check the
Authenticode block, b) if the file hash doesn't match the signature
block the OS won't run it, and c) the file hash was the same on both
circuit downloads.
[[ Are there other approaches to file verification we could do that would work? ]]
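The following is a hedged sketch of Option 2 above. fetchRangeOverNewCircuit()
is hypothetical: it stands in for issuing an HTTP Range request through a fresh
circuit (avoiding the original exit's family), which needs Tor-Browser-specific
plumbing; the sample count and chunk size are arbitrary.

  // TypeScript (sketch): spot-check random ranges of a finished download.
  declare function fetchRangeOverNewCircuit(
    url: string, start: number, end: number): Promise<Uint8Array>;

  function bytesEqual(a: Uint8Array, b: Uint8Array): boolean {
    return a.length === b.length && a.every((value, i) => value === b[i]);
  }

  async function spotCheckDownload(
      url: string, downloaded: Uint8Array, samples = 8, chunk = 4096): Promise<boolean> {
    for (let i = 0; i < samples; i++) {
      // Random offsets so an attacker cannot predict which regions are checked.
      // (A real implementation should use a cryptographically secure RNG.)
      const start = Math.floor(Math.random() * Math.max(1, downloaded.length - chunk));
      const end = Math.min(start + chunk, downloaded.length) - 1;
      const remote = await fetchRangeOverNewCircuit(url, start, end);
      if (!bytesEqual(remote, downloaded.subarray(start, end + 1))) {
        return false; // mismatch: the original download may have been altered
      }
    }
    return true;
  }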
2.7. Optional Extension
If a download verification fails, the browser could prompt the user to
send a report to Tor Project.
The simple version of this feature could open an email message with
details prepopulated and addressed to badrelays at .
The more advanced version could submit the information to an onion
service operated by Tor Project. On the backend, we could build an
automatic verification process as well.
The details would include the hostname visited, time, exit nodes, and
file data received over which exit nodes.
3. False Positives
False Positives during verification could occur if a server provides customized
binaries or only allows download of a file once.
4. User Interface/Experience
We should, in some way, alter the download screens to ensure that they do not register
a download as complete before the verification process has occurred.
Similarly, we should not rename in-progress .part download files until the verification
has completed.
5. Concerns
An exit node who observes a range request will learn that a user is downloading
this file on another circuit. What would this tell them? It leaks a user's browsing
activity. Anything else?
Exit nodes who lie about their family have a chance to successfully attack the
user.
6. Research
It would probably be possible to perform a research experiment at one or more
exit nodes to determine how frequently users download these filetypes over HTTP.
We could record the number of such downloads over HTTP and the total
amount of exit traffic pushed by Tor. (No other records should be kept, for
safety reasons.) This would give us some ratio indicating how frequently such
files are downloaded, which may guide us toward more or less stringent blocking.