[tor-commits] [stem/master] Drop BytesIO TODO comment
atagar at torproject.org
Sun Jan 5 21:39:28 UTC 2020
commit 21faefda9f4bdfafd7d454ecf2909daaad654140
Author: Damian Johnson <atagar at torproject.org>
Date: Sun Jan 5 12:31:41 2020 -0800
Drop BytesIO TODO comment
BytesIO is great for concatenating large data or turning strings into file objects,
but I'm kinda unsure why I wanted one here. This buffer both adds *and* pops off
data, and BytesIO is kinda clunky at the latter. Perhaps there's a good reason
I'm forgetting and we'll do so in the end, but for now just dropping the TODO.
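For context, a rough sketch of the tradeoff described above (illustrative only, not
stem's actual code): a plain bytes buffer pops data off the front with a slice, while
a BytesIO needs seek bookkeeping and a rebuild to discard consumed bytes.

    import io

    # Plain bytes buffer, as Relay keeps it: appending is concatenation and
    # popping the first n bytes is a pair of slices.
    buf = b''
    buf += b'cell one'                # add data
    popped, buf = buf[:4], buf[4:]    # pop the first four bytes

    # The BytesIO equivalent needs explicit cursor bookkeeping, and consumed
    # bytes linger until the object is rebuilt.
    bio = io.BytesIO()
    bio.write(b'cell one')            # add data (cursor now at the end)
    size = bio.getbuffer().nbytes     # the size check the dropped TODO referenced
    bio.seek(0)                       # rewind before reading
    popped = bio.read(4)              # pop the first four bytes
    bio = io.BytesIO(bio.read())      # drop consumed bytes by recreating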
---
stem/client/__init__.py | 8 --------
stem/process.py | 9 +++------
2 files changed, 3 insertions(+), 14 deletions(-)
diff --git a/stem/client/__init__.py b/stem/client/__init__.py
index 307bd4e4..2abeac88 100644
--- a/stem/client/__init__.py
+++ b/stem/client/__init__.py
@@ -64,14 +64,6 @@ class Relay(object):
"""
def __init__(self, orport, link_protocol):
- # TODO: Python 3.x adds a getbuffer() method which
- # lets us get the size...
- #
- # https://stackoverflow.com/questions/26827055/python-how-to-get-iobytes-allocated-memory-length
- #
- # When we drop python 2.x support we should replace
- # self._orport_buffer with an io.BytesIO.
-
self.link_protocol = LinkProtocol(link_protocol)
self._orport = orport
self._orport_buffer = b'' # unread bytes
diff --git a/stem/process.py b/stem/process.py
index 4ed10dfb..3f1a0e19 100644
--- a/stem/process.py
+++ b/stem/process.py
@@ -143,12 +143,9 @@ def launch_tor(tor_cmd = 'tor', args = None, torrc_path = None, completion_perce
last_problem = 'Timed out'
while True:
- # Tor's stdout will be read as ASCII bytes. This is fine for python 2, but
- # in python 3 that means it'll mismatch with other operations (for instance
- # the bootstrap_line.search() call later will fail).
- #
- # It seems like python 2.x is perfectly happy for this to be unicode, so
- # normalizing to that.
+ # Tor's stdout will be read as ASCII bytes. That means it'll mismatch
+ # with other operations (for instance the bootstrap_line.search() call
+ # later will fail), so normalizing to unicode.
init_line = tor_process.stdout.readline().decode('utf-8', 'replace').strip()
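As a hedged illustration of the mismatch that comment describes (the pattern below is
made up for the example; stem's real bootstrap regex lives in stem/process.py),
re.search() with a str pattern refuses bytes input, so the line has to be decoded first:

    import re

    bootstrap_line = re.compile('Bootstrapped ([0-9]+)%')   # illustrative pattern
    raw = b'[notice] Bootstrapped 100%: Done\n'              # bytes from tor's stdout

    # bootstrap_line.search(raw) would raise:
    #   TypeError: cannot use a string pattern on a bytes-like object

    init_line = raw.decode('utf-8', 'replace').strip()      # normalize to unicode
    print(bootstrap_line.search(init_line).group(1))        # '100'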