[tor-bugs] #22453 [Core Tor/Tor]: Relays should regularly do a larger bandwidth self-test
Tor Bug Tracker & Wiki
blackhole at torproject.org
Wed Nov 21 23:26:26 UTC 2018
#22453: Relays should regularly do a larger bandwidth self-test
-------------------------------------------------+-------------------------
 Reporter:  arma                  |      Owner:  juga
     Type:  defect                |     Status:  needs_information
 Priority:  Medium                |  Milestone:  Tor: 0.4.0.x-final
Component:  Core Tor/Tor          |    Version:
 Severity:  Normal                |  Resolution:
 Keywords:  034-triage-20180328,  |  Actual Points:
            034-removed-20180328, tor-bwauth,
            035-backport, 034-backport-maybe,
            033-backport-maybe, 029-backport-maybe-not
Parent ID:  #25925                |     Points:
 Reviewer:  teor                  |    Sponsor:
-------------------------------------------------+-------------------------
Comment (by teor):
Replying to [comment:40 arma]:
> Replying to [comment:33 teor]:
> > More realistically, the top 10% of relays are at 125 megabits per
> > second:
> > https://metrics.torproject.org/advbwdist-relay.html?start=2018-08-21&end=2018-11-19&n=500
> >
> > Therefore, it would take log2(125 / 20) * 5 ≈ 13 days for sbws to
> > get an accurate bandwidth for most (90%) of relays, if there was no
> > client traffic.
> >
> > Do you think that's ok?
>
> This is a fun analysis!
I think it's slightly wrong, because sbws' target download time is 5-10
seconds, not 11 seconds. (We'd need 11 seconds to make sure we covered
tor's 10 seconds, when the seconds don't line up.)
But still, we could live with 1x - 2x growth.
The important thing is that relays won't go backwards (like they sometimes
do with Torflow).
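The estimate quoted above can be sketched in a few lines. This is purely illustrative (the function name is made up): it assumes measured bandwidth at most doubles per cycle, with results valid for 5 days, growing from tor's 20 Mbit/s self-test ceiling to the ~125 Mbit/s top-10% figure cited above:

```python
import math

def days_to_converge(start_mbits, target_mbits, days_per_cycle=5):
    """Days needed if measured bandwidth at most doubles each cycle.

    Each doubling takes one measurement cycle (results are valid for
    days_per_cycle days), so total time is log2(target/start) cycles.
    """
    doublings = math.log2(target_mbits / start_mbits)
    return doublings * days_per_cycle

# 20 Mbit/s self-test ceiling -> 125 Mbit/s: about 13 days
print(round(days_to_converge(20, 125)))
```

With sbws's actual 5-10 second target download time the per-cycle growth factor may differ, but the shape of the estimate is the same.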
> First, I'll start with a "yes we could live with that."
>
> But second, if we have six sbws's going, and there's a lot of variance
with each test (sometimes it's faster than expected, sometimes slower), I
think the time until one of the tests happens to hit some great
throughput, on a relay that's way faster than its self-advertised number,
would end up quite a bit less than this analysis predicts. That is, I
think it would be quite common to more-than-double, not just double, at
each iteration.
It is possible that we would get more than 2x.
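arma's point about variance across several scanners can be illustrated with a toy simulation. This is not sbws code and the noise model is an assumption: each of six scanners measures the relay with multiplicative noise, and the best single measurement is systematically above the true value, so per-iteration growth can exceed a noise-free doubling:

```python
import random

random.seed(0)

def best_measurement_boost(n_scanners, noise=0.5, trials=10000):
    """Average of the best noisy measurement across scanners,
    relative to the relay's true capacity (1.0).

    Each scanner's measurement is uniform in [1-noise, 1+noise];
    taking the max across scanners biases the result upward.
    """
    total = 0.0
    for _ in range(trials):
        total += max(random.uniform(1 - noise, 1 + noise)
                     for _ in range(n_scanners))
    return total / trials

# A single scanner averages out to the true capacity, but the best of
# six scanners averages noticeably above it.
print(round(best_measurement_boost(1), 2))
print(round(best_measurement_boost(6), 2))
```

Since sbws takes the median of valid results rather than the best one, the real effect would be weaker than this max-based sketch suggests.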
> And third, what's the "every five days" parameter?
Results are valid for 5 days, and we take the median.
> Should we teach sbws that when its recent measurements have shown the
relay to be way faster than its consensus weight, that means we "don't
have enough good recent measurements" and we need to get more (and better)
measurements? It seems like there's a lot of space for sbws to be smarter
about doing tests for relays that appear to be on an upward trajectory.
(But this said, I would still want to try to keep the priority function as
simple as possible, to avoid making it hard to analyze what's going wrong:
#28519)
I think we could open a ticket to try to make fast relays grow faster. But
there's a tradeoff: fast growth means less network stability and more
chance of a Sybil attack.
Let's do this kind of optimisation in sbws 1.1.
> Ultimately, in a design where we base our changes proportional to the
self-advertised bandwidth, we are limited by the feedback cycle between
"we induce load on relay" and "relay publishes descriptor with higher
number". We intentionally slowed down that feedback cycle in the #23856
fix, so I don't see a way around accepting that -- even best case -- it
will take some days to get to the proper number.
And that's ok. Again, there's a growth/stability/security tradeoff.
We won't know if the growth is acceptable until we deploy. So let's stay
with what we've got for now?
--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/22453#comment:42>