[metrics-bugs] #31435 [Metrics]: Emulate different Fast/Guard cutoffs in historical consensuses
Tor Bug Tracker & Wiki
blackhole at torproject.org
Fri Aug 23 11:13:02 UTC 2019
#31435: Emulate different Fast/Guard cutoffs in historical consensuses
---------------------+------------------------------
 Reporter:  irl      |          Owner:  metrics-team
     Type:  project  |         Status:  new
 Priority:  Medium   |      Milestone:
Component:  Metrics  |        Version:
 Severity:  Normal   |     Resolution:
 Keywords:           |  Actual Points:
Parent ID:           |         Points:
 Reviewer:           |        Sponsor:
---------------------+------------------------------
Comment (by karsten):
Fun stuff! I have given this project some thought and came up with a
couple of questions and suggestions:
1. Is the scope of this project to run simulations using historical data
only, or is the plan to also use modified consensuses as input for
performing OnionPerf runs with changed Fast/Guard flag assignments? Both
are conceivable; the latter is just more work.
2. What's the purpose of generating modified vote documents in #31436? Is
the idea to evaluate how parameter changes produce different flag
assignments? If so, couldn't we start with a first version that outputs
statistics on flag assignments (and similar characteristics), rather than
writing code to generate votes? (See the first sketch below this list for
what such statistics might look like.) We could always add a vote exporter
at a later point, but if the main result is the simulation, then we might
not need the votes.
3. Same as 2, but for consensuses. If the plan is just to run simulations
(and not use consensuses as input for new OnionPerf measurements, cf. 1),
then we might just keep the relevant consensus information internally.
Thinking about an MVP here.
4. It seems to me that faster parsers (#31434) would be an optimization,
but not strictly necessary for the MVP. We might want to put that on the
nice-to-have list and try to deliver the MVP without it.
5. The approach of excluding "impossible" paths from existing OnionPerf
measurements was also my initial idea when thinking about this topic a
while back. But maybe we can do something better here. Either including or
excluding a measurement only works if paths become impossible or remain
possible; it doesn't reflect whether paths become more or less likely. For
example, if half of the Guard flags go away, and we look at a path
including one of the remaining guards, that path would become more likely;
and if a Stable flag goes away for a relay in the path, that path would
become less likely. I wonder if we could take an approach where we
''resample'' OnionPerf measurements by selecting k paths using old/new
path selection probabilities as weights (see the second sketch below). We
might want to consult somebody who has done such a thing before.
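To make point 2 more concrete, here is a minimal sketch of what a first
statistics output could look like. It assumes we parse a locally stored
consensus with stem and treat the consensus "w" bandwidth weights as a
stand-in for the authorities' internal measurements (a simplification);
the file name and cutoff value are made up:
{{{#!python
# Count how Fast flag assignments would change under a hypothetical
# bandwidth cutoff, using consensus "w" weights as a proxy for the
# authorities' measured bandwidths (a simplification for illustration).
from stem.descriptor import parse_file

HYPOTHETICAL_FAST_CUTOFF = 100  # made-up cutoff, in the "w" line's units

kept = gained = lost = 0
for entry in parse_file('cached-consensus',
                        'network-status-consensus-3 1.0'):
    had_fast = 'Fast' in entry.flags
    would_have_fast = (entry.bandwidth or 0) >= HYPOTHETICAL_FAST_CUTOFF
    if had_fast and would_have_fast:
        kept += 1
    elif would_have_fast:
        gained += 1
    elif had_fast:
        lost += 1

print('Fast flag: %d kept, %d gained, %d lost' % (kept, gained, lost))
}}}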
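And for the resampling idea in point 5, here is a minimal sketch of
importance resampling using Python's random.choices. The measurement
objects and the two path probability functions are assumptions; nothing
like them exists yet:
{{{#!python
import random

def resample(measurements, old_path_prob, new_path_prob, k):
    """measurements: OnionPerf results, each with a .path attribute
    (the circuit's relay fingerprints); old_path_prob/new_path_prob:
    assumed functions mapping a path to its selection probability
    under the original/modified flag assignments."""
    weights = []
    for m in measurements:
        old_p = old_path_prob(m.path)
        new_p = new_path_prob(m.path)  # 0.0 if the path became impossible
        # Importance weight: how much more (or less) likely this path
        # is under the modified flag assignments.
        weights.append(new_p / old_p if old_p > 0 else 0.0)
    # Draw k measurements with replacement, weighted accordingly.
    return random.choices(measurements, weights=weights, k=k)
}}}
Note that excluding impossible paths falls out as a special case: their
new probability, and hence their weight, is zero.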
Maybe we can discuss this project some more and write down a project plan
that starts with the MVP plus the possible extensions.
--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/31435#comment:1>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online