[tor-talk] (D)DOS over Tor network ? Help !
fuckyouhosting at ruggedinbox.com
Mon Dec 8 22:24:12 UTC 2014
Yes, thanks for confirming.
Still wondering how big onion websites, for example major black markets,
fight this problem :|
On 2014-12-07 12:45, Cyrus wrote:
> I concur that this is a serious problem and at the moment anyone can
> censor any hidden service at will. These attacks use little bandwidth,
> yet each request seems to involve a new rendezvous attempt, which uses
> a lot of resources on the service side. I can't imagine a fix myself at
> this point.
>
> fuckyouhosting at ruggedinbox.com wrote:
>> On 2014-12-02 21:45, l.m wrote:
>>>> Perhaps the new implementation of the hidden services will be better?
>>>> How is it going?
>>>
>>> I don't see anything in the improvements suggested for hidden services
>>> that would help this situation, though I would be grateful to be
>>> corrected.
>>>
>>> First, I just want to say I only meant sheep(s) to emphasize that you
>>> don't know how many black sheep you have participating. I mentioned the
>>> part about this potentially being an attack external to Tor out of
>>> concern for your participation in a de-anonymizing attack on your
>>> hosted HS. I see your HS's are offline while you troubleshoot this, so
>>> that's good. Next, I'm confused by what you describe. Sorry, I deleted
>>> your previous email so I may repeat some things you already said.
>>>
>>> - No evidence in the logs of any HS being flooded. No evidence of a
>>> load on any particular HS.
>>> and
>>> - Next to no bandwidth consumption. Does this include no processor
>>> use? I don't recall if that was mentioned before.
>>>
>>> - No evidence is apparent from checking REND_QUERY=HS, so no requests
>>> to rendezvous. That might make sense given little/no traffic.
>>>
>>> - Your guards go offline. This is contradictory. If the attack is
>>> within Tor via a HS, it implies the HS traffic *reliably* makes it to
>>> at least your guard before you experience the symptomatic overload and
>>> timeout. Meaning there must be traffic you can detect. Otherwise the
>>> attacker would likely lose their connection to the rendezvous point
>>> (at least sometimes) by committing to the attack. What I mean is, for
>>> this to be an attack via a malicious HS, it would need to succeed in
>>> not timing out until the traffic reaches your guard and server. That's
>>> two circuits that must work before failing at your guard. Not to
>>> mention you already tried changing the guards. It just seems
>>> implausible for that to occur reliably enough to take your server
>>> down. This assumes little/no traffic and no heavy CPU usage.
>>>
>>> -- Now, I don't know how you set up your logging, but I assume you
>>> would know if there was a load on any particular site, or a flood. I
>>> can suggest beefing up this part of the audit trail. You could put a
>>> proxy (on the same server) in your server blocks (nginx?) for each HS
>>> (or in batches). Then you can use SPI to analyse the traffic of each
>>> proxy for a build-up of use that might be causing your timeouts.
>>> Though I don't see the use if your logging is as good as you
>>> mentioned.
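>>>
>>> To make that concrete, here is a rough, untested sketch of the kind of
>>> watcher I mean. It assumes each HS vhost proxies to its own local
>>> backend port and that the box is Linux (it parses /proc/net/tcp); the
>>> port-to-HS mapping below is purely made up:
>>>
>>> #!/usr/bin/env python
>>> # Count established TCP connections per local backend port so a
>>> # build-up of load on any one HS stands out. One local port per HS
>>> # vhost is assumed; the mapping below is hypothetical.
>>> import time
>>>
>>> BACKENDS = {8081: "hs-one", 8082: "hs-two"}  # hypothetical port -> HS
>>>
>>> def established_counts():
>>>     counts = dict.fromkeys(BACKENDS, 0)
>>>     with open("/proc/net/tcp") as f:
>>>         next(f)  # skip the header line
>>>         for line in f:
>>>             fields = line.split()
>>>             local_port = int(fields[1].split(":")[1], 16)
>>>             if fields[3] == "01" and local_port in counts:  # 01 = ESTABLISHED
>>>                 counts[local_port] += 1
>>>     return counts
>>>
>>> while True:
>>>     counts = established_counts()
>>>     for port, name in sorted(BACKENDS.items()):
>>>         print("%s (port %d): %d established" % (name, port, counts[port]))
>>>     time.sleep(10)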
>>>
>>> If there's no traffic, no CPU usage, and no evidence of HS load except
>>> that your guards are timing out, then I'm back to the implausible: two
>>> circuits that reliably take down your guards. There *has* to be
>>> traffic on your side you can measure, or some load indicator. Either
>>> that or the attack is external to Tor. On the other hand you could
>>> reply and say 'yeh, lots of CPU use', in which case sorry for wasting
>>> your time. If there is a lot of CPU use, the VM-partitioning solution
>>> is the best option, as it would guarantee at least some guards remain
>>> available to your other HS's. It also gives you more granular control
>>> over hardware allocation. Either way you have to assume that at some
>>> point you will be targeted externally (outside Tor) to de-anonymize
>>> your HS's. Shared hosting... many HS's... you're an eavesdropping
>>> goldmine.
>>>
>>> -- leeroy bearr
>>
>>
>> Hi, thanks for the support.
>>
>> As a reminder, here is the archive for this thread:
>> https://lists.torproject.org/pipermail/tor-talk/2014-December/035807.html
>>
>> And here is the December thread index:
>> https://lists.torproject.org/pipermail/tor-talk/2014-December/thread.html
>>
>> The problem is still going on, so let's recap:
>>
>> Tor goes to 100% CPU only when (re)starting and publishing the HS; this
>> takes around 5 minutes. After that, it uses about 1% - 5% of one core.
>>
>> We _are_ able to see the access.log and error.log of each virtual host,
>> since they are simple nginx vhosts.
>> We also have a script that records the access.log size in MySQL; we use
>> that script to delete unused accounts.
>> We can confirm that neither access.log nor error.log grows at all;
>> basically there is no HTTP traffic.
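>>
>> For reference, the log-size recorder is nothing fancy. A stripped-down
>> sketch of the idea (table, column, and path names here are made up, and
>> it assumes the pymysql module rather than whatever client we actually
>> use):
>>
>> #!/usr/bin/env python
>> # Record the size of each vhost's access.log in MySQL so unused
>> # accounts can be spotted and deleted later. Names/paths illustrative.
>> import glob
>> import os
>> import time
>>
>> import pymysql
>>
>> conn = pymysql.connect(host="localhost", user="hosting",
>>                        password="secret", database="hosting")
>> with conn.cursor() as cur:
>>     for path in glob.glob("/var/log/nginx/*_access.log"):
>>         vhost = os.path.basename(path).replace("_access.log", "")
>>         cur.execute(
>>             "INSERT INTO log_sizes (vhost, bytes, checked_at)"
>>             " VALUES (%s, %s, %s)",
>>             (vhost, os.path.getsize(path), int(time.time())))
>> conn.commit()
>> conn.close()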
>>
>> Bandwidth usage is consistently around 5KB - 15KB / second.
>>
>> Currently we have disabled the most visited vhosts in torrc; have a
>> look at this:
>>
>> http://fuckyouhotwkd3xh.onion/ (working)
>> http://ijaw6sx25rzose3c.onion/ (working)
>> http://3xsrosyv52vefh2l.onion/ (working)
>> http://v5uhidj456rn4cra.onion/ (working)
>> http://za3uovobxijq4grb.onion/ (not working)
>> http://4vxnpigkblvmud4i.onion/ (not working)
>> http://3jfbvyg5pggg7mfg.onion/ (not working)
>>
>> This is the list of HS we are currently hosting. We tried more
>> addresses, but in short it looks like only 10% of the HS are working.
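>>
>> (For anyone wondering what disabling in torrc means in practice: each
>> HS is just a HiddenServiceDir / HiddenServicePort pair, and we comment
>> out the pair to take that HS offline. The directories and ports below
>> are placeholders, not our real layout.)
>>
>> HiddenServiceDir /var/lib/tor/hs_example1/
>> HiddenServicePort 80 127.0.0.1:8081
>>
>> #HiddenServiceDir /var/lib/tor/hs_example2/
>> #HiddenServicePort 80 127.0.0.1:8082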
>>
>> Those are the Tor logs of the current session:
>>
>> Dec 03 XXX [notice] Bootstrapped 90%: Establishing a Tor circuit.
>> Dec 03 XXX [notice] Tor has successfully opened a circuit. Looks like
>> client functionality is working.
>> Dec 03 XXX [notice] Bootstrapped 100%: Done.
>> Dec 03 XXX [notice] Your Guard regar42 (XXX) is failing more circuits
>> than usual. Most likely this means the Tor network is overloaded.
>> Success counts are 120/181. Use counts are 453/453. 120 circuits
>> completed, 0 were unusable, 0 collapsed, and 0 timed out. For reference,
>> your timeout cutoff is 91 seconds.
>> Dec 03 XXX [warn] Your Guard regar42 (XXX) is failing a very large
>> amount of circuits. Most likely this means the Tor network is
>> overloaded, but it could also mean an attack against you or potentially
>> the guard itself. Success counts are 120/241. Use counts are 453/453.
>> 120 circuits completed, 0 were unusable, 0 collapsed, and 0 timed out.
>> For reference, your timeout cutoff is 91 seconds.
>> Dec 03 XXX [warn] Your Guard regar42 (XXX) is failing an extremely large
>> amount of circuits. This could indicate a route manipulation attack,
>> extreme network overload, or a bug. Success counts are 60/241. Use
>> counts are 453/453. 60 circuits completed, 0 were unusable, 0 collapsed,
>> and 0 timed out. For reference, your timeout cutoff is 91 seconds.
>> Dec 03 XXX [notice] We'd like to launch a circuit to handle a
>> connection, but we already have 32 general-purpose client circuits
>> pending. Waiting until some finish.
>> Dec 03 XXX [notice] Your Guard TorKuato (XXX) is failing more circuits
>> than usual. Most likely this means the Tor network is overloaded.
>> Success counts are 105/151. Use counts are 81/81. 105 circuits
>> completed, 0 were unusable, 0 collapsed, and 2063 timed out. For
>> reference, your timeout cutoff is 60 seconds.
>> (after 5 minutes)
>> Dec 03 XXX [notice] We'd like to launch a circuit to handle a
>> connection, but we already have 32 general-purpose client circuits
>> pending. Waiting until some finish. [749254 similar message(s)
>> suppressed in last 600 seconds]
>> (after another 5 minutes)
>> Dec 03 XXX [notice] We'd like to launch a circuit to handle a
>> connection, but we already have 32 general-purpose client circuits
>> pending. Waiting until some finish. [398 similar message(s) suppressed
>> in last 600 seconds]
>> (after 20 minutes)
>> Dec 03 XXX [notice] We'd like to launch a circuit to handle a
>> connection, but we already have 32 general-purpose client circuits
>> pending. Waiting until some finish. [6 similar message(s) suppressed in
>> last 600 seconds]
>>
>> We are running all this on a small VPS with 1GB of RAM.
>> The RAM fills up quickly, but there is plenty of free swap.
>> The service was running fine with many more hidden services on the same
>> hardware.
>> We could increase the RAM, but we doubt that's the problem.
>>
>> Again... is any Tor developer willing to dig into this?
>> Ours is an experimental project; there is nothing business-oriented
>> here.
>> If we are pushing the limits of hidden service capabilities, or really
>> are under a low-level Tor-based attack, isn't this a good chance to
>> improve Tor's hidden service code, or at least its hidden service
>> _logging_ code?
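>>
>> In the meantime we may try to scrape more visibility out of the control
>> port ourselves. A rough sketch of what we have in mind (it assumes
>> "ControlPort 9051" and "CookieAuthentication 1" in torrc, plus the
>> python-stem library; the handler itself is just illustrative):
>>
>> #!/usr/bin/env python
>> # Tally circuit events from Tor's control port to see how many circuits
>> # are launched/built/failed per minute while the HS are under load.
>> import collections
>> import time
>>
>> from stem.control import Controller, EventType
>>
>> counts = collections.Counter()
>>
>> def on_circ(event):
>>     # event.status is e.g. LAUNCHED, BUILT, FAILED, CLOSED
>>     counts[str(event.status)] += 1
>>
>> with Controller.from_port(port=9051) as controller:
>>     controller.authenticate()
>>     controller.add_event_listener(on_circ, EventType.CIRC)
>>     while True:
>>         time.sleep(60)
>>         print("circuit events in the last minute: %s" % (dict(counts),))
>>         counts.clear()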
>>
>> Perhaps we should post this on tor-dev?
>
> --
> CYRUSERV Onionland Hosting: http://cyruservvvklto2l.onion/
> PGP public key: http://cyruservvvklto2l.onion/contact
> This email is just for mailing lists and private correspondence.
> Please use cyrus_the_great at lelantos.org for business inquiries.