[tor-bugs] #28424 [Core Tor/Tor]: Refactor hs_service_callback() to no longer need to run once per second?
Tor Bug Tracker & Wiki
blackhole at torproject.org
Wed Nov 28 16:35:43 UTC 2018
--------------------------+------------------------------------
Reporter: nickm | Owner: (none)
Type: defect | Status: new
Priority: Medium | Milestone: Tor: 0.4.0.x-final
Component: Core Tor/Tor | Version:
Severity: Normal | Resolution:
Keywords: | Actual Points:
Parent ID: | Points:
Reviewer: | Sponsor: Sponsor8-can
--------------------------+------------------------------------
Comment (by akwizgran):
At the risk of going off-topic, I just wanted to mention one other thing I
noticed when running the simulations. The lookup success rate can be
improved by changing the parameters without storing more copies of
descriptors.
Reducing `hsdir_spread_fetch`, without changing the other parameters,
improves the mean lookup success rate. Intuitively, if a replica is stored
on dirs d_1...d_n at positions 1...n, churn is less likely to displace d_i
from its position than d_{i+1} because there's less hashring distance into
which churn could insert a new dir. Thus a client is more likely to find
the descriptor at position i than position i+1 after churn. Reducing
`hsdir_spread_fetch` concentrates more of the client's lookups on
positions that are more likely to hold the descriptor.
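The position effect can be sketched with a toy hashring simulation (relays as random points on a unit ring, churn modelled as new relays joining; the function name and all parameter values here are illustrative, not Tor's consensus values):

```python
import random

random.seed(7)

def displacement_rates(num_dirs=500, n_positions=6, new_dirs=25, trials=1000):
    # Per trial: place relays on a unit ring, record which relays hold
    # positions 1..n_positions clockwise from a random descriptor point,
    # add churn (new relays joining at random points), then check whether
    # each original holder kept its position.
    displaced = [0] * n_positions
    for _ in range(trials):
        ring = sorted(random.random() for _ in range(num_dirs))
        p = random.random()
        start = next((i for i, x in enumerate(ring) if x >= p), 0)
        holders = [ring[(start + i) % num_dirs] for i in range(n_positions)]
        churned = sorted(ring + [random.random() for _ in range(new_dirs)])
        start2 = next((i for i, x in enumerate(churned) if x >= p), 0)
        after = [churned[(start2 + i) % len(churned)]
                 for i in range(n_positions)]
        for i in range(n_positions):
            if holders[i] != after[i]:
                displaced[i] += 1
    return [d / trials for d in displaced]

rates = displacement_rates()
# d_1 is displaced less often than d_n: the arc between the descriptor
# point and d_i, into which a joining relay can land, grows with i
```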
However, while the mean lookup success rate is improved, the effect on the
first percentile is more nuanced. The success rate remains at 100% for
longer, but then falls faster. I think this is because each lookup chooses
from a smaller set of positions to query, so there's a smaller set of
possible outcomes. When `hsdir_spread_fetch == 1`, all lookups query the
same positions.
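A toy model of the smaller-set-of-outcomes effect (the per-position survival probabilities are assumptions, and a lookup is modelled as querying one position chosen uniformly among the first `hsdir_spread_fetch`):

```python
import random
import statistics

random.seed(1)

def lookup_stats(survival, spread_fetch, services=10000):
    # survival[i] is an assumed probability that position i+1 still holds
    # the descriptor after churn (position 1 being the most stable). Each
    # simulated service gets an independent churn outcome; its success
    # rate is over lookups querying one uniform position among the first
    # spread_fetch. Returns (mean, ~1st percentile) across services.
    rates = []
    for _ in range(services):
        alive = [random.random() < s for s in survival[:spread_fetch]]
        rates.append(sum(alive) / spread_fetch)
    return statistics.mean(rates), sorted(rates)[services // 100]

low_churn = [0.999, 0.95, 0.90]   # assumed survival per position
high_churn = [0.95, 0.90, 0.85]

m1_low, p1_low = lookup_stats(low_churn, spread_fetch=1)
m3_low, p3_low = lookup_stats(low_churn, spread_fetch=3)
m1_high, p1_high = lookup_stats(high_churn, spread_fetch=1)
m3_high, p3_high = lookup_stats(high_churn, spread_fetch=3)
# mean: spread_fetch == 1 wins in both regimes (lookups concentrate on
# the most stable position), but its per-service outcome is all-or-
# nothing: the 1st percentile stays at 1.0 under low churn, then
# collapses to 0.0 under high churn, while spread_fetch == 3 degrades
# more gradually
```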
This is relevant in the real world if clients can retry failed lookups. We
might accept a lower mean success rate if clients can have another chance
at success by retrying.
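Extending the same toy model (survival probabilities again assumed) to retries, where each try queries a distinct position when one is available:

```python
import random

random.seed(2)

def retry_success(survival, spread_fetch, tries, services=10000):
    # Toy model: per service, each of the first spread_fetch positions
    # independently retains the descriptor; each try queries a distinct
    # uniformly chosen position, so with spread_fetch == 1 a retry can
    # only re-ask the same (failed) position.
    ok = 0
    for _ in range(services):
        alive = [random.random() < s for s in survival[:spread_fetch]]
        picks = random.sample(range(spread_fetch), min(tries, spread_fetch))
        ok += any(alive[i] for i in picks)
    return ok / services

survival = [0.95, 0.90, 0.85]  # assumed per-position survival probabilities
narrow = retry_success(survival, spread_fetch=1, tries=2)
wide_once = retry_success(survival, spread_fetch=3, tries=1)
wide_retry = retry_success(survival, spread_fetch=3, tries=2)
# narrow beats wide_once (higher mean on the first try), but with these
# numbers wide_retry beats narrow: the second try at a fresh position
# recovers more than the first-try mean gave up
```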
Another interesting possibility is to trade a larger `hsdir_n_replicas`
for a smaller `hsdir_spread_store`. In other words, use more replicas and
store fewer copies at each replica. This improves the lookup success rate
(both mean and first percentile), without increasing the number of copies
of the descriptor that are stored.
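One way to explore that trade-off is a harness along these lines. The churn model here is crude (a single batch of departures and joins, deliberately heavy, with all values invented), real client fetch behaviour differs, and this sketch need not reproduce the simulation's findings; it only shows the shape of the comparison:

```python
import random

random.seed(3)

def lookup_success(n_replicas, spread_store, spread_fetch, num_dirs=500,
                   leave_frac=0.5, new_dirs=250, trials=1000):
    # Toy hashring: store the descriptor on the first spread_store relays
    # after each of n_replicas random replica points, churn the ring
    # (relays leave, new relays join), then query the first spread_fetch
    # relays after each replica point until a stored copy is found.
    hits = 0
    for _ in range(trials):
        ring = sorted(random.random() for _ in range(num_dirs))
        points = [random.random() for _ in range(n_replicas)]
        stored = set()
        for p in points:
            start = next((i for i, x in enumerate(ring) if x >= p), 0)
            stored.update(ring[(start + i) % num_dirs]
                          for i in range(spread_store))
        survivors = [x for x in ring if random.random() >= leave_frac]
        churned = sorted(survivors +
                         [random.random() for _ in range(new_dirs)])
        found = False
        for p in points:
            start = next((i for i, x in enumerate(churned) if x >= p), 0)
            if any(churned[(start + i) % len(churned)] in stored
                   for i in range(spread_fetch)):
                found = True
                break
        hits += found
    return hits / trials

# eight stored copies either way; here the client fetches from every
# stored position of a replica
few_wide = lookup_success(n_replicas=2, spread_store=4, spread_fetch=4)
many_narrow = lookup_success(n_replicas=4, spread_store=2, spread_fetch=2)
```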
However, I'm not sure how increasing `hsdir_n_replicas` would affect query
bandwidth. Do clients try replicas in parallel or in series? If in
parallel, increasing `hsdir_n_replicas` would increase bandwidth for every
query. If in series, the worst-case bandwidth would increase (a failing
query would visit more replicas before giving up).
--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/28424#comment:6>