[tor-dev] [RFC] Proposal: A First Take at PoW Over Introduction Circuits
Mike Perry
mikeperry at torproject.org
Wed May 13 21:00:40 UTC 2020
Hi tevador,
On 5/8/20 2:53 PM, tevador wrote:
> Including the effort has 2 benefits:
>
> 1. In case the Intro Priority Queue is full, the service doesn't need to
> waste time verifying PoW solutions that have effort lower than the last
> element in the queue. While an attacker can always lie about the actual
> effort of their nonce value, I think this field can still save some CPU
> cycles when the service is under heavy load.
>
> 2. The service can conveniently verify the reported effort with
> the following inequality:
>
> POW_EFFORT * pow_function(POW_NONCE, SEED) <= MAX_RESULT
>
> where MAX_RESULT is the highest possible output from the pow_function.
> In the case of Example A, that would be:
>
> 165 * 6317 = 1042305 <= 1048576
>
>
>
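For what it's worth, that check is cheap enough to do inline before any
full verification. A minimal Python sketch of the inequality above (the
names are placeholders, assuming pow_function() outputs values in
[1, MAX_RESULT]):

    def effort_is_valid(pow_effort, pow_result, max_result=2**20):
        # pow_result is the output of pow_function(POW_NONCE, SEED); the
        # claimed effort is plausible only if effort * result stays within
        # the highest possible pow_function output (MAX_RESULT).
        return pow_effort * pow_result <= max_result

    # Example A from above: claimed effort 165, pow output 6317.
    assert effort_is_valid(165, 6317)  # 165 * 6317 = 1042305 <= 1048576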
>> Similar to how our cell scheduler works, the onion service subsystem will
>> poll the priority queue every 100ms tick and process the first 20 cells from
>> the priority queue (if they exist). The service will perform the rendezvous
>> and the rest of the onion service protocol as normal.
>
> I suggest using a different selection method rather than always taking
> the first 20 requests.
>
> Selecting cells from the front of the queue actually minimizes the effort that
> an attacker needs to expend to completely block access to the service.
> The attacker can always submit the minimum effort required to occupy the first
> 20 slots in each 100 ms window. This can be done by submitting just 200 requests
> per second, observing how many circuits are successfully opened, and adjusting
> the effort until no other request can go through. This is the "Total overwhelm
> strategy" described in § 5.1.1.
Hrm, you seem to have read the original proposal and missed some of the
follow-on threads.
We moved away from the 100ms tick-based system to a top-half and
bottom-half handler design, which also updates the difficulty target.
I tried to describe this in the top-half and bottom-half handler steps
in my reply:
https://lists.torproject.org/pipermail/tor-dev/2020-April/014219.html
In short, we let the queue grow at a faster rate than we serve it, and we
trim it occasionally. Those steps set the descriptor difficulty based on
what the service can actually serve from the queue and on the queue trim
point. We allow clients to arbitrarily pick a higher difficulty to jump to
the head of the queue, if they notice they are still not getting service at
the descriptor difficulty. This also eliminates the need for a "default"
difficulty.
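To make that concrete, here's a rough Python sketch of the kind of
top-half/bottom-half split and trim-driven difficulty update I mean. The
class name, trim threshold, and the exact update rule are illustrative
placeholders, not the precise algorithm from the thread:

    import heapq
    import itertools

    class IntroQueue:
        """Illustrative effort-ordered intro queue (not tor's actual code)."""

        def __init__(self, trim_threshold=1000):
            self.heap = []                    # min-heap keyed on negated effort
            self.counter = itertools.count()  # tie-breaker for equal efforts
            self.trim_threshold = trim_threshold
            self.descriptor_difficulty = 1

        def top_half(self, effort, intro_cell):
            # Top half: cheap work only -- after the PoW is verified, enqueue
            # the request keyed by its effort.
            heapq.heappush(self.heap, (-effort, next(self.counter), intro_cell))

        def bottom_half(self, n_serveable):
            # Bottom half: serve however many of the highest-effort requests
            # the service can actually handle right now.
            n = min(n_serveable, len(self.heap))
            served = [heapq.heappop(self.heap) for _ in range(n)]

            # Occasionally trim the tail when the queue has grown too large,
            # and advertise the effort at the trim point as the descriptor
            # difficulty (a real implementation would also let this decay
            # when load drops).
            if len(self.heap) > self.trim_threshold:
                keep = heapq.nsmallest(self.trim_threshold, self.heap)
                self.descriptor_difficulty = -keep[-1][0]
                self.heap = keep
                heapq.heapify(self.heap)

            return [cell for _, _, cell in served]

In that model the advertised difficulty tracks the effort level at which
requests start getting trimmed, i.e. roughly what the service can keep up
with, rather than anything tied to a 100ms tick.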
So in order for the attacker to "total overwhelm" that system, don't they
have to not just submit 200 requests per second, but *continuously* send
requests with higher difficulty than anyone else in the queue, to fully
deny service?
Are there other reasons to do stochastic sampling over a priority queue,
given this top-half and bottom-half design?
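(By "stochastic sampling" I'm picturing effort-weighted random selection
instead of a strict top-N, something like this rough Python sketch; the
+1 weighting is purely illustrative:)

    import random

    def stochastic_select(pending, n):
        # pending: list of (effort, intro_cell) pairs. Each request gets a
        # selection probability proportional to its effort (+1 so that
        # zero-effort requests still have some chance), instead of strictly
        # serving the n highest efforts.
        pool = list(pending)
        chosen = []
        for _ in range(min(n, len(pool))):
            weights = [effort + 1 for effort, _ in pool]
            idx = random.choices(range(len(pool)), weights=weights, k=1)[0]
            chosen.append(pool.pop(idx))
        return chosen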
--
Mike Perry