[tor-bugs] #4086 [Analysis]: Compare performance of TokenBucketRefillInterval params in simulated network
Tor Bug Tracker & Wiki
torproject-admin at torproject.org
Thu Mar 15 16:14:45 UTC 2012
#4086: Compare performance of TokenBucketRefillInterval params in simulated
network
-------------------------------------+--------------------------------------
Reporter: arma | Owner:
Type: task | Status: new
Priority: normal | Milestone:
Component: Analysis | Version:
Keywords: performance flowcontrol | Parent: #4465
Points: | Actualpoints:
-------------------------------------+--------------------------------------
Comment(by robgjansen):
Replying to [comment:24 arma]:
> Exciting! For your "no ewma, bulk download, refill 1/s" case, it looks
> like 60% of them finish in a reasonable time, and the other 40%...what?
> That's a high fraction of cases that look basically broken.
I noticed that, but some results are better than no results ;-)
> It looks like in the ewma case, refilling more than 1/s is the best
> option for bulk downloaders? Why would refilling more often slow them
> down so much? Are we just seeing network breakdown because we kept the
> load the same while reducing the capacity too much? Hm.
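As background on what the parameter under test changes: with a fixed byte rate, refilling the token bucket once per second deposits the whole second's worth of tokens in one burst, while a shorter refill interval deposits smaller amounts more often, smoothing traffic at the cost of more frequent wakeups. A minimal illustrative sketch (the function name and units are my own, not from Tor's implementation):

```python
def refill_schedule(rate_bytes_per_sec, interval_ms, duration_ms=1000):
    """Return (time_ms, tokens_added) pairs over one second of refills,
    assuming tokens are added in equal chunks at each interval boundary."""
    per_tick = rate_bytes_per_sec * interval_ms // 1000
    return [(t, per_tick) for t in range(0, duration_ms, interval_ms)]

# At 100 KB/s: refilling 1/s adds 100000 tokens in a single burst;
# refilling every 100 ms adds 10000 tokens at each of ten ticks.
burst = refill_schedule(100000, 1000)
smooth = refill_schedule(100000, 100)
```

Either way the same total is added per second; the question the experiments probe is whether the finer-grained schedule helps or hurts end-to-end performance.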
One thing that comes to mind is an increased CPU load. I currently model
CPU for each node by measuring the actual time the experiment box takes to
run the Tor parts of the simulation. This CPU delay time is then
multiplied by the ratio of the node's configured CPU speed and the
experiment box's CPU speed. Future events for that node are then delayed
if it becomes "blocked on CPU".
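The model described above can be sketched roughly as follows; this is my own illustrative reconstruction, not the simulator's actual code, and the class and parameter names are hypothetical. Here the scaling factor is taken as host speed over node speed, so a node configured slower than the experiment box accrues proportionally more delay:

```python
class NodeCpu:
    """Toy model of per-node CPU delay: measured wall-clock time on the
    experiment box is scaled by the ratio of host to node CPU speed, and
    later events are pushed back while the node is 'blocked on CPU'."""

    def __init__(self, node_cpu_hz, host_cpu_hz):
        self.ratio = host_cpu_hz / node_cpu_hz  # >1 for a slower node
        self.busy_until = 0.0                   # simulated seconds

    def schedule(self, event_time, measured_seconds):
        """Return when the event actually runs, delaying it if the node's
        virtual CPU is still busy, and account for its scaled cost."""
        start = max(event_time, self.busy_until)
        self.busy_until = start + measured_seconds * self.ratio
        return start
```

For example, a node configured at half the host's speed that measures 0.1 s of real work occupies 0.2 s of simulated CPU time, and an event arriving during that window is deferred until the CPU frees up.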
My previous experiments were run on EC2 and with consensus weights as
capacity, as opposed to my server with observed bandwidth as capacity. (I
made the classic mistake here of changing too many variables.) I can turn
off the CPU delay model, and we can take a look at performance under the
assumption that CPU will never be a bottleneck, if you'd like.
> Do you think there's a lot of variance from one run to the next? Is the
> variance from the choice of topology? You're already averaging lots of
> individual fetches from clients I believe. What else might be big
> contributing factors to variance?
It may depend on how clients pick paths, since path selection directs
network congestion. Smaller networks may show more variance: if you get
unlucky and happen to clog up some important nodes, it affects a high
fraction of clients.
--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/4086#comment:25>