[metrics-bugs] #29370 [Metrics/Onionperf]: Measure mode with arbitrary tgen traffic models
Tor Bug Tracker & Wiki
blackhole at torproject.org
Wed Jun 3 20:50:20 UTC 2020
#29370: Measure mode with arbitrary tgen traffic models
---------------------------------------+------------------------------
Reporter: irl | Owner: metrics-team
Type: enhancement | Status: reopened
Priority: Low | Milestone:
Component: Metrics/Onionperf | Version:
Severity: Normal | Resolution:
Keywords: metrics-team-roadmap-2020 | Actual Points: 0.1
Parent ID: #33321 | Points: 1
Reviewer: | Sponsor: Sponsor59
---------------------------------------+------------------------------
Comment (by karsten):
Thank you, robgjansen and acute, for your comments above! The more I think
about this feature, the more I come to the conclusion that we do not need
it.
I'll start by addressing acute's thoughts:
Replying to [comment:9 acute]:
> Having a pass-through feature could be very useful for research. For
> example, evaluating Tor performance with clients in mobile or other
> types of bandwidth-constrained networks would require a model that
> minimises the used bandwidth; or if a user wanted to fill the pipe for
> a congestion control experiment, a larger file size model would be
> needed, etc.
I can see how different network environments would require different
measurement models. But maybe we can identify how these models should
differ and then add parameters to OnionPerf's command line that feed into
the generated TGen models. For example, the size of the downloaded file
could easily be exposed as a `--filesize` parameter on the command line.
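As a rough sketch of what such command-line parameters might look like --
the flag names here are illustrative only, not OnionPerf's actual
interface:

```python
import argparse

# Illustrative sketch only: hypothetical flags that could feed into the
# internally generated TGen model. These are not OnionPerf's real options.
parser = argparse.ArgumentParser(prog="onionperf")
parser.add_argument("--filesize", default="5 MiB",
                    help="size of the file downloaded in each transfer")
parser.add_argument("--transfer-count", type=int, default=1,
                    help="number of transfers per measurement cycle")

def build_model_params(argv):
    """Turn command-line flags into parameters for model generation."""
    args = parser.parse_args(argv)
    return {"filesize": args.filesize, "count": args.transfer_count}

print(build_model_params(["--filesize", "50 KiB"]))
# -> {'filesize': '50 KiB', 'count': 1}
```

Each new parameter would then flow into the TGen file that OnionPerf
writes out itself, rather than requiring a hand-edited model file.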
> If there is no appetite for implementing this feature, what we could
> have instead is documentation that explains to users how to use their
> own model if they want to, and keep our own models (including oneshot)
> internally as suggested here.
If researchers need to change parts of a model that cannot be configured
using the OnionPerf command line interface, they will have to change the
OnionPerf sources to do what they want. I'd say that that's still easier
than editing a TGen XML file. If the missing piece is better
documentation, we should provide that.
Now to robgjansen's thoughts:
Replying to [comment:8 robgjansen]:
> The use case was to allow OnionPerf to measure traffic patterns other
> than the usual "file download" pattern (the model that OnionPerf
> generates for itself internally). So, for example, you could set up a
> tgen traffic model to act like a ping utility, sending a few bytes to
> the server side and back to the client to measure circuit round-trip
> times.
The ping model is something we're considering implementing in #30798, so
let's look at what that would entail:
- We would need a different TGen client model that sends a few bytes
every second, or at some other configurable interval.
- We would also need to update the analysis code in OnionPerf. We'd no
longer be interested in elapsed seconds between start and reaching a
certain stage of the download. We'd want to extract the time between
sending a ping and receiving a pong, for every one of them.
- In fact, we might want to parse the TGen server logs as well, to learn
when a ping arrived and when the pong was sent back. We have that
information, which is not available to the typical ping application, so
why not use it?
- We would have to update the visualization code to extract and display
different metrics than in the bulk download case.
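The analysis step in the second bullet could be as simple as pairing send
and receive events. A minimal sketch, assuming a made-up
(timestamp, event) representation of already-parsed log lines rather than
TGen's real log format:

```python
# Illustrative only: pair each ping send with its pong receive and
# compute per-ping round-trip times. The (timestamp, event) tuples are a
# made-up stand-in for parsed TGen client log lines.
def ping_rtts(events):
    """events: chronological list of (timestamp_seconds, kind) tuples,
    where kind is 'ping-sent' or 'pong-received'; returns one RTT per
    completed ping."""
    rtts, pending = [], []
    for ts, kind in events:
        if kind == "ping-sent":
            pending.append(ts)
        elif kind == "pong-received" and pending:
            # Pings and pongs arrive in order, so pair first-in-first-out.
            rtts.append(ts - pending.pop(0))
    return rtts

log = [(0.0, "ping-sent"), (0.42, "pong-received"),
       (1.0, "ping-sent"), (1.39, "pong-received")]
print([round(r, 3) for r in ping_rtts(log)])  # -> [0.42, 0.39]
```

The visualization code would then plot these RTTs over time instead of
the time-to-byte metrics used for bulk downloads.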
Thinking about different traffic models, what if we wanted to measure
something like an `HTTP POST` rather than an `HTTP GET`? I'd assume that
we'd have to provide a different TGen ''server'' model file as well, but I
don't know for sure. And even if that's still possible by replacing just
the TGen client model, there's probably another model that requires a
custom TGen server model which we just haven't thought of yet.
All in all, it's more than just ''the'' TGen model. We'd have to write a
fair amount of code in order to implement a useful ping model in
OnionPerf.
> My initial idea was not to make OnionPerf generate these models, but
> rather to create them externally and have OnionPerf just "pass them
> through" to tgen. That means whoever generated the models would need
> to correctly set the server addresses and SOCKS ports, etc.
>
> I'm not sure that Tor wants this feature for OnionPerf. An alternative
> could just be to wait until you have a specific model in mind that
> you've decided you want to start measuring, and then generate that
> model internally as we do now with the 5MiB downloads. In that case
> the pass-through feature would not be needed, and you wouldn't have to
> maintain something that you don't use.
>
> Do you think the pass-through feature is actually useful for Tor, or
> does generating models internally make more sense?
>
> (The tgen models in Shadow are generated with the correct
> addresses/ports during the phase when we generate the Shadow
> experiment configuration. This is currently done using the Shadow-Tor
> config generator [https://github.com/shadow/tornetgen here].)
If there had been existing models that we could have plugged in easily,
that would have been a good argument in favor of this feature. But it
seems like we could just as well reuse the model-generating code from the
Shadow-Tor config generator if we wanted to support these models in
OnionPerf. And still, we would have to write new analysis and
visualization code to evaluate those new measurements.
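As a rough illustration of what "model-generating code" means here --
emitting a TGen GraphML file from a handful of parameters -- a minimal
sketch; the node names and attribute keys below are placeholders, not
TGen's actual schema:

```python
import xml.etree.ElementTree as ET

# Illustrative sketch of internal model generation: build a tiny GraphML
# document from parameters. Node ids and data keys ("peers", "size") are
# placeholders and do not match TGen's real model schema.
def generate_client_model(server, filesize="5 MiB"):
    root = ET.Element("graphml")
    graph = ET.SubElement(root, "graph", edgedefault="directed")
    start = ET.SubElement(graph, "node", id="start")
    ET.SubElement(start, "data", key="peers").text = server
    transfer = ET.SubElement(graph, "node", id="transfer")
    ET.SubElement(transfer, "data", key="size").text = filesize
    # Edge: after starting, perform the transfer.
    ET.SubElement(graph, "edge", source="start", target="transfer")
    return ET.tostring(root, encoding="unicode")

print(generate_client_model("server.example:8080", "50 KiB"))
```

The point being: once such a generator exists, supporting a new model is
a matter of adding parameters to it, with the addresses and ports filled
in by OnionPerf itself.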
The internally generated model also has the advantage that it's easier to
use. All it takes to start a measurement is a single (potentially quite
long) command with several parameters, rather than a (still potentially
long) command plus one or two extra model files. Describing an experiment
then becomes a matter of listing all software versions and the OnionPerf
command used to start the measurements.
My suggestions are that we:
- make the current bulk transfer model more configurable by adding
parameters like initial pause, transfer count, or filesize as part of
#33432;
- develop a ping model as an internal model, where OnionPerf generates
the TGen files, plus the necessary analysis and visualization code, as
part of #30798, assuming there's a need for such a model; and
- remove the `-m/--traffic-model` parameter from the codebase and close
this ticket as something we considered carefully but decided against.
Oops, this comment turned out longer than I expected when I started
writing it. Thanks for making it to the end, and thanks again for sharing
your thoughts above. I'm open to discussing this further if there are
aspects that I didn't acknowledge as much as I should have.
--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/29370#comment:10>