[tor-bugs] #33076 [Metrics/Analysis]: Graph onionperf and consensus information from Rob's experiments
Tor Bug Tracker & Wiki
blackhole at torproject.org
Thu Feb 13 08:26:54 UTC 2020
#33076: Graph onionperf and consensus information from Rob's experiments
-------------------------------------------------+-------------------------
 Reporter:  mikeperry                            |  Owner:  metrics-team
     Type:  task                                 |  Status:  needs_review
 Priority:  Medium                               |  Milestone:
Component:  Metrics/Analysis                     |  Version:
 Severity:  Normal                               |  Resolution:
 Keywords:  metrics-team-roadmap-2020Q1,         |  Actual Points:  3
            sbws-roadmap                         |
Parent ID:  #33121                               |  Points:  6
 Reviewer:                                       |  Sponsor:
-------------------------------------------------+-------------------------
Comment (by karsten):
Thanks, dennis_jackson, for the great input!
I like your percentiles graph with the moving 24-hour window. We should
include that graph type in our candidate list of graphs to be added to
OnionPerf's visualization mode. Is that moving 24-hour window a standard
visualization, or did you further process the data I gave you?
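For what it's worth, here is a minimal sketch of how such a graph could be
produced with pandas; the CSV file name and the columns "start" (timestamp)
and "ttfb" (time to first byte in seconds) are assumptions, not any actual
export format:

import matplotlib.pyplot as plt
import pandas as pd

# Load measurements; "start" and "ttfb" are hypothetical column names.
df = pd.read_csv("onionperf-measurements.csv", parse_dates=["start"])
df = df.sort_values("start").set_index("start")

# Quantiles over a moving 24-hour time window.
rolling = df["ttfb"].rolling("24h")
for q in (0.25, 0.50, 0.75, 0.95):
    rolling.quantile(q).plot(label="p%d" % int(q * 100))

plt.ylabel("time to first byte (s)")
plt.legend()
plt.show()

The time-based rolling window has the nice property that we don't have to
resample the measurements onto a fixed grid first.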
Regarding the dataset behind bandwidth measurements, I wonder if we should
kill the 50 KiB downloads in the deployed OnionPerfs and keep only the
1 MiB and 5 MiB downloads. If we later think that we need time-to-50KiB, we
can always obtain that from the tgen logs. The main cost would be that
OnionPerfs would consume more bandwidth and also put more load on the Tor
network. The effect for graphs like these would be that we'd have 5 times
as many measurements.
But I think (and hope) that you're wrong about measurements not having
finished. If DATAPERC100 is non-null, that means the measurement reached
the point where it received 100% of the expected bytes. See also the
[https://metrics.torproject.org/collector.html#type-torperf Torperf and
OnionPerf Measurement Results data format description].
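To illustrate, here's a quick sketch for counting finished vs. unfinished
measurements directly from a .tpf file; the file name is made up, and the
format is whitespace-separated key=value pairs with @type lines as headers:

def parse_line(line):
    # Each measurement is one line of whitespace-separated key=value pairs.
    return dict(field.split("=", 1) for field in line.split())

completed = unfinished = 0
with open("op-nl-1048576-2019-08.tpf") as f:  # hypothetical file name
    for line in f:
        if line.startswith("@") or not line.strip():
            continue  # skip the @type header and blank lines
        fields = parse_line(line)
        # A non-null DATAPERC100 means the measurement received 100% of
        # the expected bytes.
        if fields.get("DATAPERC100"):
            completed += 1
        else:
            unfinished += 1

print("%d completed, %d unfinished" % (completed, unfinished))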
It's quite possible that op-nl and op-us did not report measurements
during the stated days. We have a reliability problem with the deployed
OnionPerfs, which is why we included work on better notifications and
easier deployment in our funding proposal. But we should also keep in mind
that the main purpose of the currently deployed OnionPerfs is to have a
baseline over the years. If we're planning experiments like this in the
future, we might want to spin up a couple of OnionPerfs and watch them
much more closely for a week or two.
Are you sure about that 10k ttfb measurements number for the month of
August? In theory, every OnionPerf instance should make a new measurement
every 5 minutes. That's 12*24*31 = 8928 measurements per instance in
August, or 8928*4 = 35712 measurements performed by all four instances in
August. So, okay, a single instance's 8928 is not quite 10k, but your 10k
is also not that many more than that. We should spin up more OnionPerf
instances as soon as it becomes easier to operate them. What's a good
number to keep running continuously, in your opinion? 10? 20? And maybe we
should consider deploying more than one instance per host or data center,
so that we have more measurements with comparable network properties.
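As a sanity check, one could count the observed measurements per SOURCE
and compare them against the theoretical 8928; the .tpf file names below
are hypothetical:

from collections import Counter

EXPECTED = 12 * 24 * 31  # one measurement every 5 minutes, all of August

counts = Counter()
for path in ("op-ab.tpf", "op-hk.tpf", "op-nl.tpf", "op-us.tpf"):
    with open(path) as f:
        for line in f:
            if line.startswith("@") or not line.strip():
                continue  # skip @type headers and blank lines
            fields = dict(field.split("=", 1) for field in line.split())
            counts[fields.get("SOURCE", "unknown")] += 1

for source, n in sorted(counts.items()):
    print("%s: %d of %d expected (%.0f%%)" % (source, n, EXPECTED,
                                              100.0 * n / EXPECTED))

Anything well below 100% for an instance would point at the reliability
problem mentioned above rather than at the data format.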
To summarize: we have a new candidate visualization, a best practice of
setting up additional OnionPerfs when running experiments, and suggestions
to kill the 50 KiB measurements and to deploy more OnionPerf instances.
Does this make sense?
--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/33076#comment:24>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online