Filename: 324-rtt-congestion-control.txt
Title: RTT-based Congestion Control for Tor
Author: Mike Perry
Created: 02 July 2020
Status: Finished
0. Motivation [MOTIVATION]
This proposal specifies how to incrementally deploy RTT-based congestion
control and improved queue management in Tor. It is written to allow us
to first deploy the system only at Exit relays, and then incrementally
improve the system by upgrading intermediate relays.
Lack of congestion control is the reason why Tor has an inherent speed
limit of about 500KB/sec for downloads and uploads via Exits, and even
slower for onion services. Because our stream SENDME windows are fixed
at 500 cells per stream, and only ~500 bytes can be sent in one cell,
the max speed of a single Tor stream is 500*500/circuit_latency. This
works out to about 500KB/sec max sustained throughput for a single
download, even if circuit latency is as low as 500ms.
Because onion service paths are more than twice the length of Exit
paths (and thus have more than twice the circuit latency), onion service
throughput will always be less than half of Exit throughput, until we
deploy proper congestion control with dynamic windows.
Proper congestion control will remove this speed limit for both Exits
and onion services, as well as reduce memory requirements for fast Tor
relays, by reducing queue lengths.
The high-level plan is to use Round Trip Time (RTT) as a primary
congestion signal, and compare the performance of two different
congestion window update algorithms that both use RTT as a congestion
signal.
The combination of RTT-based congestion signaling, a congestion window
update algorithm, and Circuit-EWMA will get us most, if not all, of the
benefits we seek, and only requires clients and Exits to upgrade to
use it. Once this is deployed, circuit bandwidth will no longer be
capped at ~500KB/sec by the fixed window sizes of SENDME; queue latency
will fall significantly; memory requirements at relays should plummet;
and transient bottlenecks in the network should dissipate.
Extended background information on the choices made in this proposal can
be found at:
https://lists.torproject.org/pipermail/tor-dev/2020-June/014343.html
https://lists.torproject.org/pipermail/tor-dev/2020-January/014140.html
An exhaustive list of citations for further reading is in Section
[CITATIONS].
A glossary of common congestion control acronyms and terminology is in
Section [GLOSSARY].
1. Overview [OVERVIEW]
This proposal has seven main sections, after this overview. These
sections are referenced [IN_ALL_CAPS] rather than by number, for easy
searching.
Section [CONGESTION_SIGNALS] specifies how to use Tor's SENDME flow
control cells to measure circuit RTT, for use as an implicit congestion
signal. It also mentions an explicit congestion signal, which can be
used as a future optimization once all relays upgrade.
Section [CONTROL_ALGORITHMS] specifies three candidate congestion window
update algorithms, which will be compared for performance in simulation
in Shadow, as well as evaluated on the live network, and tuned via
consensus parameters listed in [CONSENSUS_PARAMETERS].
Section [FLOW_CONTROL] specifies how to handle back-pressure when one of
the endpoints stops reading data, but data is still arriving. In
particular, it specifies what to do with streams that are not being read
by an application, but still have data arriving on them.
Section [SYSTEM_INTERACTIONS] describes how congestion control will
interact with onion services, circuit padding, and conflux-style traffic
splitting.
Section [EVALUATION] describes how we will evaluate and tune our
options for control algorithms and their parameters.
Section [PROTOCOL_SPEC] describes the specific cell formats and
descriptor changes needed by this proposal.
Section [SECURITY_ANALYSIS] provides information about the DoS and
traffic analysis properties of congestion control.
2. Congestion Signals [CONGESTION_SIGNALS]
In order to detect congestion at relays on a circuit, Tor will use
circuit Round Trip Time (RTT) measurement. This signal will be used in
slightly different ways in our various [CONTROL_ALGORITHMS], which will
be compared against each other for optimum performance in Shadow and on
the live network.
To facilitate this, we will also change SENDME accounting logic
slightly. These changes only require clients, exits, and dirauths to
update.
As a future optimization, it is possible to send a direct ECN congestion
signal. This signal *will* require all relays on a circuit to upgrade to
support it, but it will reduce congestion by making the first congestion event
on a circuit much faster to detect.
To reduce confusion and complexity of this proposal, this signal has been
moved to the ideas repository, under xxx-backward-ecn.txt [BACKWARD_ECN].
2.1 RTT measurement
Recall that Tor clients, exits, and onion services send
RELAY_COMMAND_SENDME relay cells every CIRCWINDOW_INCREMENT (100) cells
of received RELAY_COMMAND_DATA.
This allows those endpoints to measure the current circuit RTT, by
measuring the amount of time between sending a RELAY_COMMAND_DATA cell
that would trigger a SENDME from the other endpoint, and the arrival of
that SENDME cell. This means that RTT is measured every 'cc_sendme_inc'
data cells.
Circuits will record the minimum and maximum RTT measurements, as well as
a smoothed value representing the current RTT. The smoothing for the
current RTT is performed as specified in [N_EWMA_SMOOTHING].
Algorithms that make use of this RTT measurement for congestion
window update are specified in [CONTROL_ALGORITHMS].
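A minimal sketch of this measurement logic, assuming a hypothetical microsecond clock value passed in as 'now' (the class and method names are illustrative, not C Tor's):

```python
class CircuitRTT:
    """Sketch: record the send time of each DATA cell expected to
    trigger a SENDME from the peer, and compute a raw RTT sample
    when the corresponding SENDME arrives."""

    def __init__(self, cc_sendme_inc=100):
        self.inc = cc_sendme_inc
        self.data_sent = 0
        self.pending = []   # send times of cells that trigger a SENDME

    def note_data_cell_sent(self, now):
        self.data_sent += 1
        # Every cc_sendme_inc'th cell triggers a SENDME from the peer.
        if self.data_sent % self.inc == 0:
            self.pending.append(now)

    def note_sendme_received(self, now):
        # Raw RTT sample; N_EWMA smoothing is applied separately.
        return now - self.pending.pop(0)
```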
2.1.1. Clock Jump Heuristics [CLOCK_HEURISTICS]
The timestamps for RTT (and BDP) are measured using Tor's
monotime_absolute_usec() API. This API is designed to provide a monotonic
clock that only moves forward. However, depending on the underlying system
clock, this may result in the same timestamp value being returned for long
periods of time, which would result in RTT 0-values. Alternatively, the clock
may jump forward, resulting in abnormally large RTT values.
To guard against this, we perform a series of heuristic checks on the time
delta measured by the RTT estimator, and if these heuristics detect a
stall or a jump, we do not use that value to update RTT or BDP, nor do we
update any congestion control algorithm information that round.
If the time delta is 0, that is always treated as a clock stall, the RTT is
not used, congestion control is not updated, and this fact is cached globally.
If the circuit does not yet have an EWMA RTT or it is still in Slow Start, then
no further checks are performed, and the RTT is used.
If the circuit has stored an EWMA RTT and has exited Slow Start, then every
sendme ACK, the new candidate RTT is compared to the stored EWMA RTT. If the
new RTT is 5000 times larger than the EWMA RTT, then the circuit does not
record that estimate, and does not update BDP or the congestion control
algorithms for that SENDME ack. If the new RTT is 5000 times smaller than the
EWMA RTT, then the circuit uses the globally cached value from above (ie: it
assumes the clock is stalled *only* if there was previously *also* a 0-delta RTT).
If both ratio checks pass, the globally cached clock stall state is set to
false (no stall), and the RTT value is used.
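The checks above can be sketched as follows; the function and cache names are hypothetical, and the 5000x ratio is taken from the text:

```python
RTT_RATIO_LIMIT = 5000  # stall/jump ratio from the heuristics above

def rtt_delta_usable(delta, ewma_rtt, in_slow_start, cache):
    """Sketch of [CLOCK_HEURISTICS]; 'cache' holds the globally
    cached stall state as {'stalled': bool}."""
    if delta == 0:
        cache['stalled'] = True       # always treated as a clock stall
        return False
    if ewma_rtt is None or in_slow_start:
        return True                   # no baseline yet: use the RTT
    if delta > RTT_RATIO_LIMIT * ewma_rtt:
        return False                  # clock jumped forward
    if ewma_rtt > RTT_RATIO_LIMIT * delta:
        # Suspiciously small delta: assume a stall *only* if a
        # 0-delta RTT was previously seen (the cached state).
        return not cache['stalled']
    cache['stalled'] = False          # both ratio checks passed
    return True
```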
2.1.2. N_EWMA Smoothing [N_EWMA_SMOOTHING]
RTT estimation requires smoothing, to reduce the effects of packet jitter.
This smoothing is performed using N_EWMA[27], which is an Exponential
Moving Average with alpha = 2/(N+1):
N_EWMA = RTT*2/(N+1) + N_EWMA_prev*(N-1)/(N+1)
= (RTT*2 + N_EWMA_prev*(N-1))/(N+1).
Note that the second rearranged form MUST be used in order to ensure that
rounding errors are handled in the same manner as other implementations.
Flow control rate limiting uses this function.
During Slow Start, N is set to `cc_ewma_ss`, for RTT estimation.
After Slow Start, N is the number of SENDME acks between congestion window
updates, divided by the value of consensus parameter 'cc_ewma_cwnd_pct', and
then capped at a max of 'cc_ewma_max', but always at least 2:
N = MAX(MIN(CWND_UPDATE_RATE(cc)*cc_ewma_cwnd_pct/100, cc_ewma_max), 2);
CWND_UPDATE_RATE is normally just round(CWND/cc_sendme_inc), but after
slow start, it is round(CWND/(cc_cwnd_inc_rate*cc_sendme_inc)).
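A sketch of both the smoothing step and the choice of N (function names are illustrative; the actual parameter values come from the consensus):

```python
def n_ewma(rtt, prev_ewma, n):
    # Rearranged integer form, to match rounding across
    # implementations: (RTT*2 + N_EWMA_prev*(N-1)) / (N+1)
    return (rtt * 2 + prev_ewma * (n - 1)) // (n + 1)

def ewma_n(cwnd_update_rate, cc_ewma_cwnd_pct, cc_ewma_max):
    # N = MAX(MIN(CWND_UPDATE_RATE*cc_ewma_cwnd_pct/100, cc_ewma_max), 2)
    return max(min(cwnd_update_rate * cc_ewma_cwnd_pct // 100,
                   cc_ewma_max), 2)
```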
2.2. SENDME behavior changes
We will make four major changes to SENDME behavior to aid in computing
and using RTT as a congestion signal.
First, we will need to establish a ProtoVer of "FlowCtrl=2" to signal
support by Exits for the new SENDME format and congestion control
algorithm mechanisms. We will need a similar announcement in the onion
service descriptors of services that support congestion control.
Second, we will turn CIRCWINDOW_INCREMENT into a consensus parameter
cc_sendme_inc, instead of using a hardcoded value of 100 cells. It is
likely that more frequent SENDME cells will provide quicker reaction to
congestion, since the RTT will be measured more often. If
experimentation in Shadow shows that more frequent SENDMEs reduce
congestion and improve performance but add significant overhead, we can
reduce SENDME overhead by allowing SENDME cells to carry stream data, as
well, using Proposal 325. The method for negotiating a common value of
cc_sendme_inc on a circuit is covered in [ONION_NEGOTIATION] and
[EXIT_NEGOTIATION].
Third, authenticated SENDMEs can remain as-is in terms of protocol
behavior, but will require some implementation updates to account for
variable window sizes and variable SENDME pacing. In particular, the
sendme_last_digests list for auth sendmes needs updated checks for
larger windows and CIRCWINDOW_INCREMENT changes. Other functions to
examine include:
- circuit_sendme_cell_is_next()
- sendme_record_cell_digest_on_circ()
- sendme_record_received_cell_digest()
- sendme_record_sending_cell_digest()
- send_randomness_after_n_cells
Fourth, stream level SENDMEs will be eliminated. Details on handling
streams and backpressure are covered in [FLOW_CONTROL].
3. Congestion Window Update Algorithms [CONTROL_ALGORITHMS]
In general, the goal of congestion control is to ensure full and fair
utilization of the capacity of a network path -- in the case of Tor the spare
capacity of the circuit. This is accomplished by setting the congestion window
to target the Bandwidth-Delay Product[28] (BDP) of the circuit in one way or
another, so that the total data outstanding is roughly equal to the actual
transit capacity of the circuit.
There are several ways to update a congestion window to target the BDP. Some
use direct BDP estimation, whereas others use backoff properties to achieve
this. We specify three BDP estimation algorithms in the [BDP_ESTIMATION]
sub-section, and three congestion window update algorithms in [TOR_WESTWOOD],
[TOR_VEGAS], and [TOR_NOLA].
Note that the congestion window update algorithms differ slightly from the
background tor-dev mails[1,2], due to corrections and improvements. Hence they
have been given different names than in those two mails. The third algorithm,
[TOR_NOLA], simply uses the latest BDP estimate directly as its congestion
window.
These algorithms were evaluated by running Shadow simulations, to help
determine parameter ranges, and with experimentation on the live network.
After this testing, we have converged on using [TOR_VEGAS], with RTT-based
BDP estimation using the congestion window. We leave the other algorithms
in place for historical reference.
All of these algorithms have rules to update 'cwnd' - the current congestion
window, which starts out at a value controlled by consensus parameter
'cc_cwnd_init'. The algorithms also keep track of 'inflight', which is a count
of the number of cells currently not yet acked by a SENDME. The algorithm MUST
ensure that cells cease being sent if 'cwnd - inflight <= 0'. Note that this
value CAN become negative in the case where the cwnd is reduced while packets
are inflight.
While these algorithms are in use, updates and checks of the current
'package_window' field are disabled. Where a 'package_window' value is
still needed, for example by cell packaging schedulers, 'cwnd - inflight' is
used (with checks to return 0 in the event of negative values).
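A sketch of this accounting, assuming illustrative function names:

```python
def package_window(cwnd, inflight):
    # Replacement for the legacy 'package_window' field: cwnd - inflight,
    # clamped to 0 because cwnd can shrink while cells are still in flight.
    return max(cwnd - inflight, 0)

def can_send(cwnd, inflight):
    # Cells cease being sent once cwnd - inflight <= 0.
    return cwnd - inflight > 0
```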
The 'deliver_window' field is still used to decide when to send a SENDME. In C
tor, the deliver window is initially set at 1000, but it never gets below 900,
because authenticated sendmes (Proposal 289) require that we must send only
one SENDME at a time, and send it immediately after 100 cells are received.
Implementation of the different algorithms should be very simple: each
algorithm has its own update function, selected via the consensus
parameter 'cc_alg'.
For C Tor's current flow control, these functions are defined in sendme.c,
and are called by relay.c:
- sendme_note_circuit_data_packaged()
- sendme_circuit_data_received()
- sendme_circuit_consider_sending()
- sendme_process_circuit_level()
Despite the complexity of the following algorithms in their TCP
implementations, their Tor equivalents are extremely simple, each being
just a handful of lines of C. This simplicity is possible because Tor
does not have to deal with out-of-order delivery, packet drops,
duplicate packets, and other network issues at the circuit layer, due to
the fact that Tor circuits already have reliability and in-order
delivery at that layer.
We are also removing the aspects of TCP that cause the congestion
algorithm to reset into slow start after being idle for too long, or
after too many congestion signals. These are deliberate choices that
simplify the algorithms and also should provide better performance for
Tor workloads.
In all cases, variables in these sections are either consensus parameters
specified in [CONSENSUS_PARAMETERS], or scoped to the circuit. Consensus
parameters for congestion control are all prefixed by cc_. Everything else
is circuit-scoped.
3.1. Estimating Bandwidth-Delay Product [BDP_ESTIMATION]
At a high-level, there are three main ways to estimate the Bandwidth-Delay
Product: by using the current congestion window and RTT, by using the inflight
cells and RTT, and by measuring SENDME arrival rate. After extensive shadow
simulation and live testing, we have arrived at using the congestion window
RTT based estimator, but we will describe all three for background.
All three estimators are updated every SENDME ack arrival.
The SENDME arrival rate is the most direct way to estimate BDP, but it
requires averaging over multiple SENDME acks to do so. Unfortunately,
this approach suffers from what is called "ACK compression", where returning
SENDMEs build up in queues, causing over-estimation of the BDP.
The congestion window and inflight estimates rely on the congestion algorithm
more or less correctly tracking an approximation of the BDP, and then use
current and minimum RTT to compensate for overshoot. These estimators tend to
under-estimate BDP, especially when the congestion window is below the BDP.
This under-estimation is corrected for by the increase of the congestion
window in congestion control algorithm rules.
3.1.1. SENDME arrival BDP estimation
It is possible to directly measure BDP via the amount of time between SENDME
acks. In this period of time, we know that the endpoint successfully received
'cc_sendme_inc' cells.
This means that the bandwidth of the circuit is then calculated as:
BWE = cc_sendme_inc/sendme_ack_timestamp_delta
The bandwidth delay product of the circuit is calculated by multiplying this
bandwidth estimate by the *minimum* RTT time of the circuit (to avoid counting
queue time):
BDP = BWE * RTT_min
In order to minimize the effects of ack compression (aka SENDME responses
becoming close to one another due to queue delay on the return path), we
maintain a history of a full congestion window's worth of previous SENDME
timestamps.
With this, the calculation becomes:
BWE = (num_sendmes-1) * cc_sendme_inc / num_sendme_timestamp_delta
BDP = BWE * RTT_min
Note that because we are counting the number of cells *between* the first
and last sendme of the congestion window, we must subtract 1 from the number
of sendmes actually received. Over the time period between the first and last
sendme of the congestion window, the other endpoint successfully read
(num_sendmes-1) * cc_sendme_inc cells.
Furthermore, because the timestamps are microseconds, to avoid integer
truncation, we compute the BDP using multiplication first:
BDP = (num_sendmes-1) * cc_sendme_inc * RTT_min / num_sendme_timestamp_delta
After all of this, the BDP is smoothed using [N_EWMA_SMOOTHING].
This smoothing means that the SENDME BDP estimation will only work after two
(2) SENDME acks have been received. Additionally, it tends not to be stable
unless at least 'cc_bwe_min' sendme's are used. This is controlled by the
'cc_bwe_min' consensus parameter. Finally, if [CLOCK_HEURISTICS] have detected
a clock jump or stall, this estimator is not updated.
If all edge connections no longer have data available to send on a circuit
and all circuit queues have drained without blocking the local orconn, we stop
updating this BDP estimate and discard old timestamps. However, we retain the
actual estimator value.
Unfortunately, even after all of this, SENDME BDP estimation proved unreliable
in Shadow simulation, due to ack compression.
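Despite its unreliability, the estimator itself is simple. A sketch, with hypothetical naming, following the multiply-first formula above:

```python
def sendme_bdp(sendme_timestamps_usec, cc_sendme_inc, rtt_min_usec):
    """Sketch of the SENDME-arrival BDP estimate over one congestion
    window's worth of SENDME timestamps (in microseconds)."""
    num = len(sendme_timestamps_usec)
    delta = sendme_timestamps_usec[-1] - sendme_timestamps_usec[0]
    # Multiply before dividing, to avoid integer truncation:
    # BDP = (num-1) * cc_sendme_inc * RTT_min / timestamp_delta
    return (num - 1) * cc_sendme_inc * rtt_min_usec // delta
```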
3.1.2. Congestion Window BDP Estimation
This is the BDP estimator we use.
Assuming that the current congestion window is at or above the current BDP,
the bandwidth estimate is the current congestion window size divided by the
RTT estimate:
BWE = cwnd / RTT_current_ewma
The BDP estimate is computed by multiplying the Bandwidth estimate by
the *minimum* circuit latency:
BDP = BWE * RTT_min
Simplifying:
BDP = cwnd * RTT_min / RTT_current_ewma
The RTT_min for this calculation comes from the minimum RTT_current_ewma seen
in the lifetime of this circuit. If the congestion window falls to
`cc_cwnd_min` after slow start, implementations MAY choose to reset RTT_min
for use in this calculation to either the RTT_current_ewma, or a
percentile-weighted average between RTT_min and RTT_current_ewma, specified by
`cc_rtt_reset_pct`. This helps with escaping starvation conditions.
The net effect of this estimation is to correct for any overshoot of
the cwnd over the actual BDP. It will obviously underestimate BDP if cwnd
is below BDP.
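A sketch of this estimator (illustrative naming, integer division as elsewhere in this proposal):

```python
def cwnd_bdp(cwnd, rtt_min, rtt_cur_ewma):
    # BDP = cwnd * RTT_min / RTT_current_ewma; equals cwnd when the
    # circuit is uncongested (RTT at its minimum), and shrinks as
    # queue delay inflates the current RTT.
    return cwnd * rtt_min // rtt_cur_ewma
```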
3.1.3. Inflight BDP Estimation
Similar to the congestion window based estimation, the inflight estimation
uses the current inflight packet count to derive BDP. It also subtracts local
circuit queue use from the inflight packet count. This means it will be strictly
less than or equal to the cwnd version:
BDP = (inflight - circ.chan_cells.n) * RTT_min / RTT_current_ewma
If all edge connections no longer have data available to send on a circuit
and all circuit queues have drained without blocking the local orconn, we stop
updating this BDP estimate, because there are not sufficient inflight cells
to properly estimate BDP.
While the research literature for Vegas says that inflight estimators
performed better due to their ability to avoid overshoot, we had better
performance results using other methods to control overshoot. Hence, we do
not use the inflight BDP estimator.
3.1.4. Piecewise BDP estimation
A piecewise BDP estimation could be used to help respond more quickly in the
event the local OR connection is blocked, which indicates congestion somewhere
along the path from the client to the guard (or between Exit and Middle). In
this case, it takes the minimum of the inflight and SENDME estimators.
When the local OR connection is not blocked, this estimator uses the max of
the SENDME and cwnd estimator values.
When the SENDME estimator has not gathered enough data, or has cleared its
estimates based on lack of edge connection use, this estimator uses the
Congestion Window BDP estimator value.
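A sketch of the piecewise selection logic described above (names are illustrative):

```python
def piecewise_bdp(bdp_sendme, bdp_cwnd, bdp_inflight,
                  orconn_blocked, sendme_estimate_valid):
    if not sendme_estimate_valid:
        # Not enough SENDME data, or it was cleared due to lack of
        # edge connection use: fall back to the cwnd estimator.
        return bdp_cwnd
    if orconn_blocked:
        # Congestion toward the guard (or middle): take the minimum
        # to respond more quickly.
        return min(bdp_inflight, bdp_sendme)
    return max(bdp_sendme, bdp_cwnd)
```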
3.2. Tor Westwood: TCP Westwood using RTT signaling [TOR_WESTWOOD]
http://intronetworks.cs.luc.edu/1/html/newtcps.html#tcp-westwood
http://nrlweb.cs.ucla.edu/nrlweb/publication/download/99/2001-mobicom-0.pdf
http://cpham.perso.univ-pau.fr/TCP/ccr_v31.pdf
https://c3lab.poliba.it/images/d/d7/Westwood_linux.pdf
Recall that TCP Westwood is basically TCP Reno, but it uses BDP estimates
for "Fast recovery" after a congestion signal arrives.
We will also be using the RTT congestion signal as per BOOTLEG_RTT_TOR
here, from the Options mail[1] and Defenestrator paper[3].
This system must keep track of RTT measurements per circuit: RTT_min, RTT_max,
and RTT_current. These are measured using the time delta between every
'cc_sendme_inc' relay cells and the SENDME response. The first RTT_min can be
measured arbitrarily, so long as it is larger than what we would get from
SENDME.
RTT_current is N-EWMA smoothed over 'cc_ewma_cwnd_pct' percent of
congestion windows worth of SENDME acks, up to a max of 'cc_ewma_max' acks, as
described in [N_EWMA_SMOOTHING].
Recall that BOOTLEG_RTT_TOR emits a congestion signal when the current
RTT falls below some fractional threshold ('cc_westwood_rtt_thresh') fraction
between RTT_min and RTT_max. This check is:
      RTT_current < (1 - cc_westwood_rtt_thresh)*RTT_min
                    + cc_westwood_rtt_thresh*RTT_max
Additionally, if the local OR connection is blocked at the time of SENDME ack
arrival, this is treated as an immediate congestion signal.
(We can also optionally use the ECN signal described in
ideas/xxx-backward-ecn.txt, to exit Slow Start.)
Congestion signals from RTT, blocked OR connections, or ECN are processed only
once per congestion window. This is achieved through the next_cc_event flag,
which is initialized to a cwnd worth of SENDME acks, and is decremented
each ack. Congestion signals are only evaluated when it reaches 0.
Note that because the congestion signal threshold of TOR_WESTWOOD is a
function of RTT_max, and excessive queuing can cause an increase in RTT_max,
TOR_WESTWOOD may have runaway conditions. Additionally, if stream activity is
constant, but of a lower bandwidth than the circuit, this will not drive the
RTT upwards, and this can result in a congestion window that continues to
increase in the absence of any other concurrent activity.
Here is the complete congestion window algorithm for Tor Westwood. This will run
each time we get a SENDME (aka sendme_process_circuit_level()):
    # Update acked cells
    inflight -= cc_sendme_inc

    if next_cc_event:
      next_cc_event--

    # Do not update anything if we detected a clock stall or jump,
    # as per [CLOCK_HEURISTICS]
    if clock_stalled_or_jumped:
      return

    if next_cc_event == 0:
      # BOOTLEG_RTT_TOR threshold; can also be BACKWARD_ECN check.
      # A blocked orconn is an immediate congestion signal, so it
      # must force the backoff branch below:
      if (RTT_current <
          (100 - cc_westwood_rtt_thresh)*RTT_min/100 +
          cc_westwood_rtt_thresh*RTT_max/100) and not orconn_blocked:
        if in_slow_start:
          cwnd += cwnd * cc_cwnd_inc_pct_ss   # Exponential growth
        else:
          cwnd = cwnd + cc_cwnd_inc           # Linear growth
      else:
        if cc_westwood_backoff_min:
          cwnd = min(cwnd * cc_westwood_cwnd_m, BDP)  # Window shrink
        else:
          cwnd = max(cwnd * cc_westwood_cwnd_m, BDP)  # Window shrink
        in_slow_start = 0

        # Back off RTT_max (in case of runaway RTT_max)
        RTT_max = RTT_min + cc_westwood_rtt_m * (RTT_max - RTT_min)

      cwnd = MAX(cwnd, cc_circwindow_min)
      next_cc_event = cwnd / (cc_cwnd_inc_rate * cc_sendme_inc)
3.3. Tor Vegas: TCP Vegas with Aggressive Slow Start [TOR_VEGAS]
http://intronetworks.cs.luc.edu/1/html/newtcps.html#tcp-vegas
http://pages.cs.wisc.edu/~akella/CS740/F08/740-Papers/BOP94.pdf
http://www.mathcs.richmond.edu/~lbarnett/cs332/assignments/brakmo_peterson_vegas.pdf
ftp://ftp.cs.princeton.edu/techreports/2000/628.pdf
The TCP Vegas control algorithm estimates the queue lengths at relays by
subtracting the current BDP estimate from the current congestion window.
After extensive shadow simulation and live testing, we have settled on this
congestion control algorithm for use in Tor.
Assuming the BDP estimate is accurate, any amount by which the congestion
window exceeds the BDP will cause data to queue.
Thus, Vegas estimates the queue use caused by congestion as:

    queue_use = cwnd - BDP
Original TCP Vegas used a cwnd BDP estimator only. We added the ability to
switch this BDP estimator in the implementation, and experimented with various
options. We also parameterized this queue_use calculation as a tunable
weighted average between the cwnd-based BDP estimate and the piecewise
estimate (consensus parameter 'cc_vegas_bdp_mix'). After much testing of
various ways to compute BDP, we were still unable to do much better than the
original cwnd estimator. So while this capability to change the BDP estimator
remains in the C implementation, we do not expect it to be used.
However, it was useful to use a local OR connection block at the time of
SENDME ack arrival, as an immediate congestion signal. Note that in C-Tor,
this orconn_block state is not derived from any socket info, but instead is a
heuristic that declares an orconn as blocked if any circuit cell queue
exceeds the 'cellq_high' consensus parameter.
(As an additional optimization, we could also use the ECN signal described in
ideas/xxx-backward-ecn.txt, but this is not implemented. It is likely only of
any benefit during Slow Start, and even that benefit is likely small.)
During Slow Start, we use RFC3742 Limited Slow Start[32], which checks the
congestion signals from RTT, blocked OR connections, or ECN every single
SENDME ack. It also provides a `cc_sscap_*` parameter for each path length,
which reduces the congestion window increment rate after it is crossed, as
per the rules in RFC3742:
  rfc3742_ss_inc(cwnd):
    if cwnd <= cc_ss_cap_pathtype:
      # Below the cap, we increment as per cc_cwnd_inc_pct_ss percent:
      return round(cc_cwnd_inc_pct_ss*cc_sendme_inc/100)
    else:
      # This returns an increment equivalent to RFC3742, rounded,
      # with a minimum of inc=1.
      # From RFC3742:
      #   K = int(cwnd/(0.5 max_ssthresh));
      #   inc = int(MSS/K);
      return MAX(round((cc_sendme_inc*cc_ss_cap_pathtype)/(2*cwnd)), 1);
During both Slow Start and Steady State, if the congestion window is not full,
we never increase the congestion window. We can still decrease it, or exit slow
start, in this case. This is done to avoid causing overshoot. The original TCP
Vegas addressed this problem by computing BDP and queue_use from inflight,
instead of cwnd, but we found that approach to have significantly worse
performance.
Because C-Tor is single-threaded, multiple SENDME acks may arrive during one
processing loop, before edge connections resume reading. For this reason,
we provide two heuristics to provide some slack in determining the full
condition. The first is to allow a gap between inflight and cwnd,
parameterized as 'cc_cwnd_full_gap' multiples of 'cc_sendme_inc':
  cwnd_is_full(cwnd, inflight):
    if inflight + 'cc_cwnd_full_gap'*'cc_sendme_inc' >= cwnd:
      return true
    else:
      return false
The second heuristic immediately resets the full state if it falls below
'cc_cwnd_full_minpct' full:
  cwnd_is_nonfull(cwnd, inflight):
    if 100*inflight < 'cc_cwnd_full_minpct'*cwnd:
      return true
    else:
      return false
This full status is cached once per cwnd if 'cc_cwnd_full_per_cwnd=1';
otherwise it is cached once per cwnd update. These two helper functions
determine the number of acks in each case:
  SENDME_PER_CWND(cwnd):
    return ((cwnd + 'cc_sendme_inc'/2)/'cc_sendme_inc')

  CWND_UPDATE_RATE(cwnd, in_slow_start):
    # In Slow Start, update every SENDME
    if in_slow_start:
      return 1
    else: # Otherwise, update as per the 'cc_inc_rate' (31)
      return ((cwnd + 'cc_cwnd_inc_rate'*'cc_sendme_inc'/2)
              / ('cc_cwnd_inc_rate'*'cc_sendme_inc'));
Shadow experimentation indicates that 'cc_cwnd_full_gap=2' and
'cc_cwnd_full_per_cwnd=0' minimizes queue overshoot, whereas
'cc_cwnd_full_per_cwnd=1' and 'cc_cwnd_full_gap=1' is slightly better
for performance. Since there may be a difference between Shadow and live,
we leave this parameterization in place.
Here is the complete pseudocode for TOR_VEGAS with RFC3742, which is run every
time an endpoint receives a SENDME ack. All variables are scoped to the
circuit, unless prefixed by an underscore (local), or in single quotes
(consensus parameters):
    # Decrement counters that signal either an update or cwnd event
    if next_cc_event:
      next_cc_event--
    if next_cwnd_event:
      next_cwnd_event--

    # Do not update anything if we detected a clock stall or jump,
    # as per [CLOCK_HEURISTICS]
    if clock_stalled_or_jumped:
      inflight -= 'cc_sendme_inc'
      return

    if BDP > cwnd:
      _queue_use = 0
    else:
      _queue_use = cwnd - BDP

    if cwnd_is_full(cwnd, inflight):
      cwnd_full = 1
    else if cwnd_is_nonfull(cwnd, inflight):
      cwnd_full = 0

    if in_slow_start:
      if _queue_use < 'cc_vegas_gamma' and not orconn_blocked:
        # Only increase cwnd if the cwnd is full
        if cwnd_full:
          _inc = rfc3742_ss_inc(cwnd);
          cwnd += _inc

          # If the RFC3742 increment drops below steady-state increment
          # over a full cwnd worth of acks, exit slow start.
          if _inc*SENDME_PER_CWND(cwnd) <= 'cc_cwnd_inc'*'cc_cwnd_inc_rate':
            in_slow_start = 0
      else: # Limit hit. Exit Slow start (even if cwnd not full)
        in_slow_start = 0
        cwnd = BDP + 'cc_vegas_gamma'

      # Provide an emergency hard-max on slow start:
      if cwnd >= 'cc_ss_max':
        cwnd = 'cc_ss_max'
        in_slow_start = 0
    else if next_cc_event == 0:
      if _queue_use > 'cc_vegas_delta':
        cwnd = BDP + 'cc_vegas_delta' - 'cc_cwnd_inc'
      elif _queue_use > 'cc_vegas_beta' or orconn_blocked:
        cwnd -= 'cc_cwnd_inc'
      elif cwnd_full and _queue_use < 'cc_vegas_alpha':
        # Only increment if queue is low, *and* the cwnd is full
        cwnd += 'cc_cwnd_inc'

      cwnd = MAX(cwnd, 'cc_circwindow_min')

    # Specify next cwnd and cc update
    if next_cc_event == 0:
      next_cc_event = CWND_UPDATE_RATE(cwnd)
    if next_cwnd_event == 0:
      next_cwnd_event = SENDME_PER_CWND(cwnd)

    # Determine if we need to reset the cwnd_full state
    # (Parameterized)
    if 'cc_cwnd_full_per_cwnd' == 1:
      if next_cwnd_event == SENDME_PER_CWND(cwnd):
        cwnd_full = 0
    else:
      if next_cc_event == CWND_UPDATE_RATE(cwnd):
        cwnd_full = 0

    # Update acked cells
    inflight -= 'cc_sendme_inc'
3.4. Tor NOLA: Direct BDP tracker [TOR_NOLA]
Based on the theory that congestion control should track the BDP,
the simplest possible congestion control algorithm could just set the
congestion window directly to its current BDP estimate, every SENDME ack.
Such an algorithm would need to overshoot the BDP slightly, especially in the
presence of competing algorithms. But other than that, it can be exceedingly
simple. Like Vegas, but without putting on airs. Just enough strung together.
After meditating on this for a while, it also occurred to me that no one has
named a congestion control algorithm after New Orleans. We have Reno, Vegas,
and scores of others. What's up with that?
Here's the pseudocode for TOR_NOLA that runs on every SENDME ack:
    # Do not update anything if we detected a clock stall or jump,
    # as per [CLOCK_HEURISTICS]
    if clock_stalled_or_jumped:
      return

    # If the orconn is blocked, do not overshoot BDP
    if orconn_blocked:
      cwnd = BDP
    else:
      cwnd = BDP + cc_nola_overshoot

    cwnd = MAX(cwnd, cc_circwindow_min)
4. Flow Control [FLOW_CONTROL]
Flow control provides what is known as "pushback" -- the property that
if one endpoint stops reading data, the other endpoint stops sending
data. This prevents data from accumulating at points in the network, if
it is not being read fast enough by an application.
Because Tor must multiplex many streams onto one circuit, and each
stream is mapped to another TCP socket, Tor's current pushback is rather
complicated and under-specified. In C Tor, it is implemented in the
following functions:
- circuit_consider_stop_edge_reading()
- connection_edge_package_raw_inbuf()
- circuit_resume_edge_reading()
The decision on when a stream is blocked is performed in:
- sendme_note_stream_data_packaged()
- sendme_stream_data_received()
- sendme_connection_edge_consider_sending()
- sendme_process_stream_level()
Tor currently maintains separate windows for each stream on a circuit,
to provide individual stream flow control. Circuit windows are SENDME
acked as soon as a relay data cell is decrypted and recognized. Stream
windows are only SENDME acked if the data can be delivered to an active
edge connection. This allows the circuit to continue to operate if an
endpoint refuses to read data off of one of the streams on the circuit.
Because Tor streams can connect to many different applications and
endpoints per circuit, it is important to preserve the property that if
only one endpoint edge connection is inactive, it does not stall the
whole circuit, in case one of those endpoints is malfunctioning or
malicious.
However, window-based stream flow control also imposes a speed limit on
individual streams. If the stream window size is below the circuit
congestion window size, then it becomes the speed limit of a download,
as we saw in the [MOTIVATION] section of this proposal.
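As a concrete check of that limit, the cap from [MOTIVATION] can be computed
directly (a sketch; 500 usable bytes per cell is an approximation):

```python
def window_speed_limit(window_cells, bytes_per_cell, rtt_sec):
    """Max sustained throughput (bytes/sec) of one windowed stream:
    at most one full window of data can be in flight per round trip."""
    return window_cells * bytes_per_cell / rtt_sec

# A fixed 500-cell window at 500ms circuit latency caps out at ~500KB/sec,
# no matter how much bandwidth the circuit actually has.
limit = window_speed_limit(500, 500, 0.5)
```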
So for performance, it is optimal that each stream window is the same
size as the circuit's congestion window. However, large stream windows
are a vector for OOM attacks, because malicious clients can force Exits
to buffer a full stream window for each stream while connecting to a
malicious site and uploading data that the site does not read from its
socket. This attack is significantly easier to perform at the stream
level than on the circuit level, because of the multiplier effects of
only needing to establish a single fast circuit to perform the attack on
a very large number of streams.
This catch-22 means that if we use windows for stream flow control, we
either have to commit to allocating a full congestion window's worth of
memory for each stream, or impose a speed limit on our streams.
Hence, we will discard stream windows entirely, and instead use a
simpler buffer-based design that uses XON/XOFF to signal when this
buffer is too large. Additionally, the XON cell will contain advisory
rate information based on the rate at which that edge connection can
write data while it has data to write. The other endpoint can rate limit
sending data for that stream to the rate advertised in the XON, to avoid
excessive XON/XOFF chatter and sub-optimal behavior.
This will allow us to make full use of the circuit congestion window for
every stream in combination, while still avoiding buffer buildup inside
the network.
4.1. Stream Flow Control Without Windows [WINDOWLESS_FLOW]
Each endpoint (client, Exit, or onion service) sends circuit-level
SENDME acks for all circuit cells as soon as they are decrypted and
recognized, but *before* delivery to their edge connections.
This means that if the edge connection is blocked because an
application's SOCKS connection or a destination site's TCP connection is
not reading, data will build up in a queue at that endpoint,
specifically in the edge connection's outbuf.
Consensus parameters will govern the length of this queue that
determines when XON and XOFF cells are sent, as well as when advisory
XON cells that contain rate information can be sent. These parameters
are separate for the queue lengths of exits, and of clients/services.
(Because clients and services will typically have localhost connections
for their edges, they will need similar buffering limits. Exits may have
different properties, since their edges will be remote.)
The trunnel relay cell payload definitions for XON and XOFF are:
  struct xoff_cell {
    u8 version IN [0x00];
  }

  struct xon_cell {
    u8 version IN [0x00];

    u32 kbps_ewma;
  }
Parties SHOULD treat XON or XOFF cells with unrecognized versions as a
protocol violation.
In `xon_cell`, a zero value for `kbps_ewma` means that the stream's rate is
unlimited. Parties should therefore not send "0" to mean "do not send data".
4.1.1. XON/XOFF behavior
If the length of an edge outbuf queue exceeds the size provided in the
appropriate client or exit XOFF consensus parameter, a
RELAY_COMMAND_STREAM_XOFF will be sent, which instructs the other endpoint to
stop sending from that edge connection.
Once the queue is expected to empty, a RELAY_COMMAND_STREAM_XON will be sent,
which allows the other end to resume reading on that edge connection. This XON
also indicates the average rate of queue drain since the XOFF.
Advisory XON cells are also sent whenever the edge connection's drain
rate changes by more than 'cc_xon_change_pct' percent compared to
the previously sent XON cell's value.
4.1.2. Edge bandwidth rate advertisement [XON_ADVISORY]
As noted above, the XON cell provides a field to indicate the N_EWMA rate which
edge connections drain their outgoing buffers.
To compute the drain rate, we maintain a timestamp and a byte count of how many
bytes were written onto the socket from the connection outbuf.
In order to measure the drain rate of a connection, we need to measure the time
it took between flushing N bytes on the socket and when the socket is available
for writing again. In other words, we are measuring the time it took for the
kernel to send N bytes between the first flush on the socket and the next
poll() write event.
For example, suppose we just wrote 100 bytes on the socket at time t = 0sec,
and at time t = 2sec the socket becomes writeable again. We then estimate
that the rate of the socket is 100 bytes / 2sec, thus 50B/sec.
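Expressed as a sketch, this estimate is a straightforward division:

```python
def drain_rate(bytes_flushed, t_flush_sec, t_writeable_sec):
    """Estimate the socket drain rate, in bytes/sec, from one flush cycle:
    the time between flushing N bytes and the next poll() write event."""
    return bytes_flushed / (t_writeable_sec - t_flush_sec)

# 100 bytes flushed at t=0sec, socket writeable again at t=2sec -> 50 B/sec.
rate = drain_rate(100, 0.0, 2.0)
```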
To make this measurement, we start the timer by recording a timestamp as soon
as data begins to accumulate in an edge connection's outbuf, currently at 16KB
(32 cells). We use this value because Tor writes up to 32 cells at once to a
connection outbuf, and so this burst of data is an indicator that bytes are
starting to accumulate.
After 'cc_xon_rate' cells worth of stream data, we use N_EWMA to average this
rate into a running EWMA average, with N specified by consensus parameter
'cc_xon_ewma_cnt'. Every EWMA update, the byte count is set to 0 and a new
timestamp is recorded. In this way, the EWMA counter is averaging N counts of
'cc_xon_rate' cells worth of bytes each.
If the buffers are non-zero, and we have sent an XON before, and the N_EWMA
rate has changed more than 'cc_xon_change_pct' since the last XON, we send an
updated rate. Because the EWMA rate is only updated every 'cc_xon_rate' cells
worth of bytes, such advisory XON updates cannot be sent more frequently than
this, and should be sent much less often in practice.
When the outbuf completely drains to 0, and has been 0 for 'cc_xon_rate' cells
worth of data, we double the EWMA rate. We continue to double it while the
outbuf is 0, every 'cc_xon_rate' cells. The measurement timestamp is also set
back to 0.
When an XOFF is sent, the EWMA rate is reset to 0, to allow fresh calculation
upon drain.
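The rate averaging and advisory-XON decision described above can be sketched
as follows. This assumes the N_EWMA form with alpha = 2/(N+1) from
[N_EWMA_SMOOTHING]; the consensus parameter values shown are hypothetical:

```python
CC_XON_EWMA_CNT = 2      # hypothetical N for the N_EWMA average
CC_XON_CHANGE_PCT = 25   # hypothetical advisory XON threshold, in percent

def n_ewma(prev, sample, n=CC_XON_EWMA_CNT):
    """Fold one drain-rate sample into the running N_EWMA average."""
    alpha = 2.0 / (n + 1)
    return alpha * sample + (1 - alpha) * prev

def advisory_xon_needed(last_sent_rate, ewma_rate):
    """True if the rate moved more than cc_xon_change_pct since the
    last XON we sent (a rate of 0 means no XON has been sent yet)."""
    if last_sent_rate == 0:
        return True
    change_pct = abs(ewma_rate - last_sent_rate) * 100.0 / last_sent_rate
    return change_pct > CC_XON_CHANGE_PCT
```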
If a clock stall or jump is detected by [CLOCK_HEURISTICS], we also
clear the fields, but do not record them in ewma.
NOTE: Because our timestamps are microseconds, we chose to compute and
transmit both of these rates as 1000 byte/sec units, as this reduces the
number of multiplications and divisions and avoids precision loss.
4.1.3. Oomkiller behavior
A malicious client can attempt to exhaust memory in an Exit's outbufs, by
ignoring XOFF and advisory XONs. Implementations MAY choose to close specific
streams with outbufs that grow too large, but since the exit does not know
with certainty the client's congestion window, it is non-trivial to determine
the exact upper limit a well-behaved client might send on a blocked stream.
Implementations MUST close the streams with the oldest chunks present in their
outbufs, while under global memory pressure, until memory pressure is
relieved.
4.1.4. Sidechannel mitigation
In order to mitigate DropMark attacks[28], both XOFF and advisory XON
transmission must be restricted. Because DropMark attacks are most severe
before data is sent, clients MUST ensure that an XOFF does not arrive before
they have sent the appropriate XOFF limit of bytes on a stream ('cc_xoff_exit'
for exits, 'cc_xoff_client' for onion services).
Clients also SHOULD ensure that advisory XONs do not arrive before the
minimum of the XOFF limit and 'cc_xon_rate' full cells worth of bytes have
been transmitted.
Clients SHOULD ensure that advisory XONs do not arrive more frequently than
every 'cc_xon_rate' cells worth of sent data. Clients also SHOULD ensure that
XOFFs do not arrive more frequently than every XOFF limit worth of sent data.
Implementations SHOULD close the circuit if these limits are violated on the
client-side, to detect and resist dropmark attacks[28].
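One way to implement this client-side accounting is to track the bytes sent
on each stream and treat early or too-frequent XOFF/XON cells as violations.
This is a sketch of that interpretation; the parameter values and the
498-byte cell payload figure are illustrative assumptions:

```python
CC_XOFF_CLIENT = 31 * 500   # hypothetical XOFF limit, in bytes
CC_XON_RATE_CELLS = 500     # hypothetical 'cc_xon_rate' value, in cells
CELL_PAYLOAD = 498          # approximate usable bytes per data cell

def xoff_is_valid(bytes_sent_on_stream, xoffs_received):
    """An XOFF is only valid after each XOFF limit worth of sent data."""
    allowed = bytes_sent_on_stream // CC_XOFF_CLIENT
    return xoffs_received + 1 <= allowed

def advisory_xon_is_valid(bytes_sent_on_stream, xons_received):
    """Advisory XONs may arrive at most once per cc_xon_rate cells sent."""
    allowed = bytes_sent_on_stream // (CC_XON_RATE_CELLS * CELL_PAYLOAD)
    return xons_received + 1 <= allowed
```

A client detecting an invalid XOFF or XON under these checks would close the
circuit, as the SHOULD above describes.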
Additionally, because edges no longer use stream SENDME windows, we alter the
half-closed connection handling to be time based instead of data quantity
based. Half-closed connections are allowed to receive data up to the larger
value of the congestion control max_rtt field or the circuit build timeout
(for onion service circuits, we use twice the circuit build timeout). Any data
or relay cells after this point are considered invalid data on the circuit.
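Under one reading of the paragraph above, the allowed receive window for a
half-closed stream is (a sketch, with times in seconds):

```python
def half_closed_grace_period(max_rtt, circuit_build_timeout, is_onion):
    """Time during which a half-closed stream may still receive data:
    the larger of the congestion control max_rtt and the circuit build
    timeout (doubled for onion service circuits)."""
    timeout = 2 * circuit_build_timeout if is_onion else circuit_build_timeout
    return max(max_rtt, timeout)
```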
Recall that all of the dropped cell enforcement in C-Tor is performed by
accounting data provided through the control port CIRC_BW fields, currently
enforced only by using the vanguards addon[29].
The C-Tor implementation exposes all of these properties to CIRC_BW for
vanguards to enforce, but does not enforce them itself. So violations of any
of these limits do not cause circuit closure unless that addon is used (as
with the rest of the dropped cell side channel handling in C-Tor).
5. System Interactions [SYSTEM_INTERACTIONS]
Tor's circuit-level SENDME system currently has special cases in the
following situations: Intropoints, HSDirs, onion services, and circuit
padding. Additionally, proper congestion control will allow us to very
easily implement conflux (circuit traffic splitting).
This section details those special cases and interactions of congestion
control with other components of Tor.
5.1. HSDirs
Because HSDirs use the tunneled dirconn mechanism and thus also use
RELAY_COMMAND_DATA, they are already subject to Tor's flow control.
We may want to make sure our initial circuit window for HSDir circuits
is set custom for those circuit types, so a SENDME is not required to
fetch long descriptors. This will ensure HSDir descriptors can be
fetched in one RTT.
5.2. Introduction Points
Introduction Points are not currently subject to any flow control.
Because Intropoints accept INTRODUCE1 cells from many client circuits
and then relay them down a single circuit to the service as INTRODUCE2
cells, we cannot provide end-to-end congestion control all the way from
client to service for these cells.
We can run congestion control from the service to the Intropoint, and probably
should, since this is already subject to congestion control.
As an optimization, if that congestion window reaches zero (because the
service is overwhelmed), then we start sending NACKS back to the clients (or
begin requiring proof-of-work), rather than just let clients wait for timeout.
5.3. Rendezvous Points
Rendezvous points are already subject to end-to-end SENDME control,
because all relay cells are sent end-to-end via the rendezvous circuit
splice in circuit_receive_relay_cell().
This means that rendezvous circuits will use end-to-end congestion
control, as soon as individual onion clients and onion services upgrade
to support it. There is no need for intermediate relays to upgrade at
all.
5.4. Circuit Padding
Recall that circuit padding is negotiated between a client and a middle
relay, with one or more state machines running on circuits at the middle
relay that decide when to add padding.
https://github.com/torproject/tor/blob/master/doc/HACKING/CircuitPaddingDevelopment.md
This means that the middle relay can send padding traffic towards the
client that contributes to congestion, and the client may also send
padding towards the middle relay that likewise creates congestion.
For low-traffic padding machines, such as the currently deployed circuit
setup obfuscation, this padding is inconsequential.
However, higher traffic circuit padding machines that are designed to
defend against website traffic fingerprinting will need additional care
to avoid inducing additional congestion, especially after the client or
the exit experiences a congestion signal.
The current overhead percentage rate limiting features of the circuit
padding system should handle this in some cases, but in other cases, an
XON/XOFF circuit padding flow control command may be required, so that
clients may signal to the machine that congestion is occurring.
5.5. Conflux
Conflux (aka multi-circuit traffic splitting) becomes significantly
easier to implement once we have congestion control. However, much like
congestion control, it will require experimentation to tune properly.
Recall that Conflux uses a 256-bit UUID to bind two circuits together at
the Exit or onion service. The original Conflux paper specified an
equation based on RTT to choose which circuit to send cells on.
https://www.cypherpunks.ca/~iang/pubs/conflux-pets.pdf
However, with congestion control, we will already know which circuit has
the larger congestion window, and thus has the most available cells in
its current congestion window. This will also be the faster circuit.
Thus, the decision of which circuit to send a cell on only requires
comparing congestion windows (and choosing the circuit with more packets
remaining in its window).
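That selection rule reduces to a one-line comparison. As a sketch, with each
circuit represented by a hypothetical (cwnd, inflight) pair:

```python
def conflux_pick_circuit(circuits):
    """Pick the circuit with the most room left in its congestion window.
    Each circuit is a (cwnd, inflight) tuple; available = cwnd - inflight."""
    return max(circuits, key=lambda c: c[0] - c[1])
```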
Conflux will require sequence numbers on data cells, to ensure that the
two circuits' data is properly re-assembled. The resulting out-of-order
buffer can potentially be as large as an entire congestion window, if
the circuits are very desynced (or one of them closes). It will be very
expensive for Exits to maintain this much memory, and exposes them to
OOM attacks.
This is not as much of a concern in the client download direction, since
clients will typically only have a small number of these out-of-order
buffers to keep around. But for the upload direction, Exits will need
to send some form of early XOFF on the faster circuit if this
out-of-order buffer begins to grow too large, since simply halting the
delivery of SENDMEs will still allow a full congestion window full of
data to arrive. This will also require tuning and experimentation, and
optimum results will vary between simulator and live network.
6. Performance Evaluation [EVALUATION]
Congestion control for Tor will be easy to implement, but difficult to
tune to ensure optimal behavior.
6.1. Congestion Signal Experiments
Our first experiments were to conduct client-side experiments to
determine how stable the RTT measurements of circuits are across the
live Tor network, to determine if we need more frequent SENDMEs, and/or
need to use any RTT smoothing or averaging.
These experiments were performed using onion service clients and services on
the live Tor network. From these experiments, we tuned the RTT and BDP
estimators, and arrived at reasonable values for EWMA smoothing and the
minimum number of SENDME acks required to estimate BDP.
Additionally, we specified that the algorithms maintain previous congestion
window estimates in the event that a circuit goes idle, rather than revert to
slow start. We experimented with intermittent idle/active live onion clients
to make sure that this behavior is acceptable, and it appeared to be.
In Shadow experimentation, the primary thing to test will be if the OR conn on
Exit relays blocks too frequently when under load, thus causing excessive
congestion signals, and overuse of the Inflight BDP estimator as opposed
to SENDME or CWND BDP. It may also be the case that this behavior is optimal,
even if it does happen.
Finally, we should test small variations in the EWMA smoothing and minimum BDP
ack counts in Shadow experimentation, to check for high variability in these
estimates, and other surprises.
6.2. Congestion Algorithm Experiments
In order to evaluate performance of congestion control algorithms, we will
need to implement [TOR_WESTWOOD], [TOR_VEGAS], and [TOR_NOLA]. We will need to
simulate their use in the Shadow Tor network simulator.
Simulation runs will need to evaluate performance on networks that use
only one algorithm, as well as on networks that run a combination of
algorithms - particularly each type of congestion control in combination
with Tor's current flow control. Depending upon the number of current
flow control clients, more aggressive parameters of these algorithms may
need to be set, but this will result in additional queueing as well as
sub-optimal behavior once all clients upgrade.
In particular, during live onion service testing, we noticed that these
algorithms required particularly aggressive default values to compete against
a network full of current clients. As more clients upgrade, we may be able
to lower these defaults. We should get a good idea of what values we can
choose at what upgrade point, from mixed Shadow simulation.
If Tor's current flow control is so aggressive that it causes problems with
any amount of remaining old clients, we can experiment with kneecapping these
legacy flow control Tor clients by setting a low 'circwindow' consensus
parameter for them. This will allow us to set more reasonable parameter
values, without waiting for all clients to upgrade.
Because custom congestion control can be deployed by any Exit or onion
service that desires better service, we will need to be particularly careful
about how congestion control algorithms interact with rogue implementations
that more aggressively increase their window sizes. During these
adversarial-style experiments, we must verify that cheaters do not get
better service, and that Tor's circuit OOM killer properly closes circuits
that seriously abuse the congestion control algorithm, as per
[SECURITY_ANALYSIS]. This may require tuning 'circ_max_cell_queue_size',
and 'CircuitPriorityHalflifeMsec'.
Additionally, we will need to experiment with reducing the cell queue limits
on OR conns before they are blocked (OR_CONN_HIGHWATER), and study the
interaction of that with treating the or conn block as a congestion signal.
Finally, we will need to monitor our Shadow experiments for evidence of ack
compression, which can cause the BDP estimator to over-estimate the congestion
window. We will instrument our Shadow simulations to alert if they discover
excessive congestion window values, and tweak 'cc_bwe_min' and
'cc_sendme_inc' appropriately. We can set the 'cc_cwnd_max' parameter value
to low values (eg: ~2000 or so) to watch for evidence of this in Shadow, and
log. Similarly, we should watch for evidence that the 'cc_cwnd_min' parameter
value is rarely hit in Shadow, as this indicates that the cwnd may be too
small to measure BDP (for cwnd less than 'cc_sendme_inc'*'cc_bwe_min').
6.3. Flow Control Algorithm Experiments
Flow control only applies when the edges outside of Tor (SOCKS application,
onion service application, or TCP destination site) are *slower* than Tor's
congestion window. This typically means that the application is either
suspended or reading too slow off its SOCKS connection, or the TCP destination
site itself is bandwidth throttled on its downstream.
To examine these properties, we will perform live onion service testing, where
curl is used to download a large file. We will test no rate limit, and
verify that XON/XOFF was never sent. We then suspend this download, verify
that an XOFF is sent, and transmission stops. Upon resuming this download, the
download rate should return to normal. We will also use curl's --limit-rate
option, to exercise that the flow control properly measures the drain rate and
limits the buffering in the outbuf, modulo kernel socket and localhost TCP
buffering.
However, flow control can also get triggered at Exits in a situation where
either TCP fairness issues or Tor's mainloop does not properly allocate
enough capacity to edge uploads, causing them to be rate limited below the
circuit's congestion window, even though the TCP destination actually has
sufficient downstream capacity.
Exits are also most vulnerable to the buffer bloat caused by such uploads,
since there may be many uploads active at once.
To study this, we will run shadow simulations. Because Shadow does *not*
rate limit its tgen TCP endpoints, and only rate limits the relays
themselves, if *any* XON/XOFF activity happens in Shadow *at all*, it is
evidence that such fairness issues can occur.
Just in case Shadow does not have sufficient edge activity to trigger such
emergent behavior, when congestion control is enabled on the live network, we
will also need to instrument a live exit, to verify that XON/XOFF is not
happening frequently on it. Relays may also report these statistics in their
extra-info descriptors, to help with monitoring live network conditions, but
this might also require aggregation or minimization.
If excessive XOFF/XON activity happens at Exits, we will need to investigate
tuning the libevent mainloop to prioritize edge writes over orconn writes.
Additionally, we can lower 'cc_xoff_exit'. Linux Exits can also lower the
'net.ipv[46].tcp_wmem' sysctl value, to reduce the amount of kernel socket
buffering they do on such streams, which will improve XON/XOFF responsiveness
and reduce memory usage.
6.4. Performance Metrics [EVALUATION_METRICS]
The primary metrics that we will be using to measure the effectiveness
of congestion control in simulation are TTFB/RTT, throughput, and utilization.
We will calibrate the Shadow simulator so that it has similar CDFs for all of
these metrics as the live network, without using congestion control.
Then, we will want to inspect CDFs of these three metrics for various
congestion control algorithms and parameters.
The live network testing will also spot-check performance characteristics of
a couple algorithm and parameter sets, to ensure we see similar results as
Shadow.
On the live network, because congestion control will affect so many aspects of
performance, from throughput to RTT, to load balancing, queue length,
overload, and other failure conditions, the full set of performance metrics
will be required, to check for any emergent behaviors:
https://gitlab.torproject.org/legacy/trac/-/wikis/org/roadmaps/CoreTor/PerformanceMetrics
We will also need to monitor network health for relay queue lengths,
relay overload, and other signs of network stress (and particularly the
alleviation of network stress).
6.5. Consensus Parameter Tuning [CONSENSUS_PARAMETERS]
During Shadow simulation, we will determine reasonable default
parameters for our consensus parameters for each algorithm. We will then
re-run these tuning experiments on the live Tor network, as described
in:
https://gitlab.torproject.org/tpo/core/team/-/wikis/NetworkTeam/Sponsor61/PerformanceExperiments
6.5.1. Parameters common to all algorithms
These are sorted in order of importance to tune, most important first.
cc_alg:
- Description:
Specifies which congestion control algorithm clients should
use, as an integer.
- Range: 0 or 2 (0=fixed windows, 2=Vegas)
- Default: 2
- Tuning Values: [2,3]
- Tuning Notes:
These algorithms need to be tested against percentages of current
fixed alg client competition, in Shadow. Their optimal parameter
values, and even the optimal algorithm itself, will likely depend
upon how much fixed sendme traffic is in competition. See the
algorithm-specific parameters for additional tuning notes.
As of Tor 0.4.8, Vegas is the default algorithm, and support
for algorithms 1 (Westwood) and 3 (NOLA) have been removed.
- Shadow Tuning Results:
Westwood exhibited responsiveness problems, drift, and overshoot.
NOLA exhibited ack compression resulting in over-estimating the
BDP. Vegas, when tuned properly, kept queues low and throughput
high.
cc_bwe_min:
- Description:
The minimum number of SENDME acks to average over in order to
estimate bandwidth (and thus BDP).
- Range: [2, 20]
- Default: 5
- Tuning Values: 4-10
- Tuning Notes:
The lower this value is, the sooner we can get an estimate of
the true BDP of a circuit. Low values may lead to massive
over-estimation, due to ack compression. However, if this
value is above the number of acks that fit in cc_cwnd_init, then
we won't get a BDP estimate within the first use of the circuit.
Additionally, if this value is above the number of acks that
fit in cc_cwnd_min, we may not be able to estimate BDP
when the congestion window is small. If we need small congestion
windows, we should also lower cc_sendme_inc, which will get us more
frequent acks with less data.
- Shadow Tuning Results:
Regardless of how high this was set, there were still cases where
queues built up, causing BDP over-estimation. As a result, we disable
use of the BDP estimator, and only use the Vegas CWND estimator.
cc_sendme_inc:
- Description: Specifies how many cells a SENDME acks
- Range: [1, 254]
- Default: 31
- Tuning Values: 25,33,50
- Tuning Notes:
Increasing this increases overhead, but also increases BDP
estimation accuracy. Since we only use circuit-level sendmes,
and the old code sent sendmes at both every 50 cells, and every
100, we can set this as low as 33 to have the same amount of
overhead.
- Shadow Tuning Results:
This was optimal at 31-32 cells, which is also the number of
cells that fit in a TLS frame. Much of the rest of Tor has
processing values at 32 cells, as well.
- Consensus Update Notes:
This value MUST only be changed by +/- 1, every 4 hours.
If greater changes are needed, they MUST be spread out over
multiple consensus updates.
cc_cwnd_init:
- Description: Initial congestion window for new congestion
control Tor clients. This can be set much higher
than TCP, since actual TCP to the guard will prevent
buffer bloat issues at local routers.
- Range: [31, 10000]
- Default: 4*31
- Tuning Values: 150,200,250,500
- Tuning Notes:
Higher initial congestion windows allow the algorithms to
measure initial BDP more accurately, but will lead to queue bursts
and latency. Ultimately, the ICW should be set to approximately
'cc_bwe_min'*'cc_sendme_inc', but the presence of competing
fixed window clients may require higher values.
- Shadow Tuning Results:
Setting this too high caused excessive cell queues at relays.
4*31 ended up being a sweet spot.
- Consensus Update Notes:
This value must never be set below cc_sendme_inc.
cc_cwnd_min:
- Description: The minimum allowed cwnd.
- Range: [31, 1000]
- Default: 31
- Tuning Values: [100, 150, 200]
- Tuning Notes:
If the cwnd falls below cc_sendme_inc, connections can't send
enough data to get any acks, and will stall. If it falls below
cc_bwe_min*cc_sendme_inc, connections can't use SENDME BDP
estimates. Likely we want to set this around
cc_bwe_min*cc_sendme_inc, but no lower than cc_sendme_inc.
- Shadow Tuning Results:
We set this at 31 cells, the cc_sendme_inc.
- Consensus Update Notes:
This value must never be set below cc_sendme_inc.
cc_cwnd_max:
- Description: The maximum allowed cwnd.
- Range: [500, INT32_MAX]
- Default: INT32_MAX
- Tuning Values: [5000, 10000, 20000]
- Tuning Notes:
If cc_bwe_min is set too low, the BDP estimator may over-estimate the
congestion window in the presence of large queues, due to SENDME ack
compression. Once all clients have upgraded to congestion control,
queues large enough to cause ack compression should become rare. This
parameter exists primarily to verify this in Shadow, but we preserve it
as a consensus parameter for emergency use in the live network, as well.
- Shadow Tuning Results:
We kept this at INT32_MAX.
circwindow:
- Description: Initial congestion window for legacy Tor clients
- Range: [100, 1000]
- Default: 1000
- Tuning Values: 100,200,500,1000
- Tuning Notes:
If the above congestion algorithms are not optimal until an
unreasonably high percentage of clients upgrade, we can reduce
the performance of ossified legacy clients by reducing their
circuit windows. This will allow old clients to continue to
operate without impacting optimal network behavior.
cc_cwnd_inc_rate:
- Description: How often we update our congestion window, per cwnd worth
of packets
- Range: [1, 250]
- Default: 1
- Tuning Values: [1,2,5,10]
- Tuning Notes:
Congestion control theory says that the congestion window should
only be updated once every cwnd worth of packets. We may find it
better to update more frequently, but this is probably unlikely
to help a great deal.
- Shadow Tuning Results:
Increasing this during slow start caused overshoot and excessive
queues. Increasing this after slow start was suboptimal for
performance. We keep this at 1.
cc_ewma_cwnd_pct:
- Description: This specifies the N in N-EWMA smoothing of RTT and BDP
estimation, as a percent of the number of SENDME acks
in a congestion window. It allows us to average these RTT
values over a percentage of the congestion window,
capped by 'cc_ewma_max' below, and specified in
[N_EWMA_SMOOTHING].
- Range: [1, 255]
- Default: 50,100
- Tuning Values: [25,50,100]
- Tuning Notes:
Smoothing our estimates reduces the effects of ack compression and
other ephemeral network hiccups; changing this much is unlikely
to have a huge impact on performance.
- Shadow Tuning Results:
Setting this to 50 seemed to reduce cell queues, but this may also
have impacted performance.
cc_ewma_max:
- Description: This specifies the max N in N_EWMA smoothing of RTT and BDP
estimation. It allows us to place a cap on the N of EWMA
smoothing, as specified in [N_EWMA_SMOOTHING].
- Range: [2, INT32_MAX]
- Default: 10
- Tuning Values: [10,20]
- Shadow Tuning Results:
We ended up needing this to make Vegas more responsive to
congestion, to avoid overloading slow relays. Values of 10 or 20
were best.
cc_ewma_ss:
- Description: This specifies the N in N_EWMA smoothing of RTT during
Slow Start.
- Range: [2, INT32_MAX]
- Default: 2
- Tuning Values: [2,4]
- Shadow Tuning Results:
Setting this to 2 helped reduce overshoot during Slow Start.
cc_rtt_reset_pct:
- Description: Describes a percentile average between RTT_min and
RTT_current_ewma, for use to reset RTT_min, when the
congestion window hits cwnd_min.
- Range: [0, 100]
- Default: 100
- Shadow Tuning Results:
cwnd_min is not hit in Shadow simulations, but it can be hit
on the live network while under DoS conditions, and with cheaters.
cc_cwnd_inc:
- Description: How much to increment the congestion window by during
steady state, every cwnd.
- Range: [1, 1000]
- Default: 31
- Tuning Values: 25,50,100
- Tuning Notes:
We are unlikely to need to tune this much, but it might be worth
trying a couple values.
- Shadow Tuning Results:
Increasing this negatively impacted performance. Keeping it at
cc_sendme_inc is best.
cc_cwnd_inc_pct_ss:
- Description: Percentage of the current congestion window to increment
by during slow start, every cwnd.
- Range: [1, 500]
- Default: 50
- Tuning Values: 50,100,200
- Tuning Notes:
On the current live network, the algorithms tended to exit slow
start early, so we did not exercise this much. This may not be the
case in Shadow, or once clients upgrade to the new algorithms.
- Shadow Tuning Results:
Setting this above 50 caused excessive queues to build up in
Shadow. This may have been due to imbalances in Shadow client
allocation, though. Values of 50-100 will be explored after
examining Shadow Guard Relay Utilization.
6.5.2. Westwood parameters
Westwood has runaway conditions. Because the congestion signal threshold of
TOR_WESTWOOD is a function of RTT_max, excessive queuing can cause an
increase in RTT_max. Additionally, if stream activity is constant, but of
a lower bandwidth than the circuit, this will not drive the RTT upwards,
and this can result in a congestion window that continues to increase in the
absence of any other concurrent activity.
For these reasons, we are unlikely to spend much time deeply investigating
Westwood in Shadow, beyond a simulation or two to check these behaviors.
cc_westwood_rtt_thresh:
- Description:
Specifies the cutoff for BOOTLEG_RTT_TOR to deliver
congestion signal, as fixed point representation
divided by 1000.
- Range: [1, 1000]
- Default: 33
- Tuning Values: [20, 33, 40, 50]
- Tuning Notes:
The Defenestrator paper set this at 23, but did not justify it. We
may need to raise it to compete with current fixed window SENDME.
cc_westwood_cwnd_m:
- Description: Specifies how much to reduce the congestion
window after a congestion signal, as a fraction of
100.
- Range: [0, 100]
- Default: 75
- Tuning Values: [50, 66, 75]
- Tuning Notes:
Congestion control theory started out using 50 here, and then
decided 70-75 was better.
cc_westwood_min_backoff:
- Description: If 1, take the min of BDP estimate and westwood backoff.
If 0, take the max of BDP estimate and westwood backoff.
- Range: [0, 1]
- Default: 0
- Tuning Notes:
This parameter can make the westwood backoff less aggressive, if
need be. We're unlikely to need it, though.
cc_westwood_rtt_m:
- Description: Specifies a backoff percent of RTT_max, upon receipt of
a congestion signal.
- Range: [50, 100]
- Default: 100
- Tuning Notes:
Westwood technically has a runaway condition where congestion can
cause RTT_max to grow, which increases the congestion threshold.
This has not yet been observed, but because it is possible, we
include this parameter.
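Putting these parameters together, the following hedged sketch shows our reading of the BOOTLEG_RTT_TOR signal and the Westwood backoff; the cutoff formula and all names are our interpretation of the descriptions above, not Tor's code:

```python
# Illustrative sketch of the Westwood congestion signal and backoff,
# per cc_westwood_rtt_thresh, cc_westwood_cwnd_m, and cc_westwood_rtt_m.

CC_WESTWOOD_RTT_THRESH = 33   # fixed point, divided by 1000
CC_WESTWOOD_CWND_M = 75       # percent of cwnd kept after a signal
CC_WESTWOOD_RTT_M = 100       # percent backoff of RTT_max on a signal

def congestion_signal(rtt, rtt_min, rtt_max):
    """Signal congestion when RTT crosses the cutoff between min and max."""
    t = CC_WESTWOOD_RTT_THRESH / 1000.0
    return rtt > (1 - t) * rtt_min + t * rtt_max

def on_congestion(cwnd, rtt_max):
    """Reduce cwnd; optionally back off RTT_max to counter the runaway."""
    cwnd = cwnd * CC_WESTWOOD_CWND_M // 100
    rtt_max = rtt_max * CC_WESTWOOD_RTT_M // 100   # 100 means no backoff
    return cwnd, rtt_max
```

Note that with cc_westwood_rtt_m at its default of 100, RTT_max is never reduced, which is what permits the runaway condition described above.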
6.5.3. Vegas Parameters
cc_vegas_alpha_{exit,onion,sbws}:
cc_vegas_beta_{exit,onion,sbws}:
cc_vegas_gamma_{exit,onion,sbws}:
cc_vegas_delta_{exit,onion,sbws}:
- Description: These parameters govern the number of cells
that [TOR_VEGAS] can detect in queue before reacting.
- Range: [0, 1000] (except delta, which has max of INT32_MAX)
- Defaults:
# OUTBUF_CELLS=62
cc_vegas_alpha_exit (3*OUTBUF_CELLS)
cc_vegas_beta_exit (4*OUTBUF_CELLS)
cc_vegas_gamma_exit (3*OUTBUF_CELLS)
cc_vegas_delta_exit (5*OUTBUF_CELLS)
cc_vegas_alpha_onion (3*OUTBUF_CELLS)
cc_vegas_beta_onion (6*OUTBUF_CELLS)
cc_vegas_gamma_onion (4*OUTBUF_CELLS)
cc_vegas_delta_onion (7*OUTBUF_CELLS)
- Tuning Notes:
The amount of queued cells that Vegas should tolerate is heavily
dependent upon competing congestion control algorithms. The specified
defaults are necessary to compete against current fixed SENDME traffic,
but are much larger than necessary otherwise. These values also
need a large-ish range between alpha and beta, to allow some degree of
variance in traffic, as per [33]. The tuning of these parameters
happened in two tickets[34,35]. The onion service parameters were
set on the basis that they should tolerate as much total queue delay
as Exit circuits, while allowing for up to 6 hops of outbuf delay.
Lack of visibility into the onion service congestion window on the
live network prevented confirming this.
- Shadow Tuning Results:
We found that the best values for 3-hop Exit circuits was to set
alpha and gamma to the size of the outbufs times the number of
hops. Beta is set to one TLS record/sendme_inc above this value.
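For reference, a small sketch showing how the defaults above derive from OUTBUF_CELLS; the helper function is hypothetical, but the multipliers match the default table:

```python
# Hypothetical helper reproducing the Vegas default table above from
# OUTBUF_CELLS multiples. Not Tor's code.

OUTBUF_CELLS = 62

def vegas_defaults(alpha_m, beta_m, gamma_m, delta_m):
    """Scale the four Vegas thresholds by multiples of the outbuf size."""
    return {"alpha": alpha_m * OUTBUF_CELLS, "beta": beta_m * OUTBUF_CELLS,
            "gamma": gamma_m * OUTBUF_CELLS, "delta": delta_m * OUTBUF_CELLS}

exit_params = vegas_defaults(3, 4, 3, 5)    # 3-hop Exit circuits
onion_params = vegas_defaults(3, 6, 4, 7)   # longer onion service paths
```

Per the Shadow results, alpha and gamma scale with the number of hops (3 for Exits), and beta sits roughly one TLS record/sendme_inc above alpha.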
cc_sscap_{exit,onion,sbws}:
- Description: These parameters describe the RFC3742 'cap', after which
congestion window increments are reduced. INT32_MAX disables
RFC3742.
- Range: [100, INT32_MAX]
- Defaults:
sbws: 400
exit: 600
onion: 475
- Shadow Tuning Results:
We picked these defaults based on the average congestion window
seen in Shadow sims for exits and onion service circuits.
cc_ss_max:
- Description: This parameter provides a hard-max on the congestion
window in slow start.
- Range: [500, INT32_MAX]
- Default: 5000
- Shadow Tuning Results:
The largest congestion window seen in Shadow is ~3000, so this was
set as a safety valve above that.
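To make the interaction of cc_sscap_* and cc_ss_max concrete, here is a sketch loosely following RFC3742's limited slow start; the exact per-update increment shape is an assumption on our part, not Tor's implementation:

```python
# Hedged sketch: slow start growth reduced above the RFC3742-style cap
# (cc_sscap_*), with a hard ceiling at cc_ss_max. Illustrative only.

def ss_increment(cwnd, sscap):
    """Per-update cwnd increase during slow start."""
    if cwnd <= sscap:
        return cwnd // 2        # unrestricted growth (50% per cwnd)
    # Above the cap, divide the increment down, in the spirit of RFC3742.
    k = max(1, cwnd // (sscap // 2))
    return max(1, (cwnd // 2) // k)

def slow_start_step(cwnd, sscap=600, ss_max=5000):
    """One slow start update for an exit circuit, using the defaults above."""
    return min(cwnd + ss_increment(cwnd, sscap), ss_max)
```

With the exit defaults, growth is exponential-style up to 600 cells, slows considerably past the cap, and can never exceed the cc_ss_max safety valve of 5000.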
cc_cwnd_full_gap:
- Description: This parameter defines the integer number of
'cc_sendme_inc' multiples of gap allowed between inflight and
cwnd, to still declare the cwnd full.
- Range: [0, INT16_MAX]
- Default: 4
- Shadow Tuning Results:
Low values resulted in a slight loss of performance, and increased
variance in throughput. Setting this at 4 seemed to achieve a good
balance between throughput and queue overshoot.
cc_cwnd_full_minpct:
- Description: This parameter defines a low watermark in percent. If
inflight falls below this percent of cwnd, the congestion window
is immediately declared non-full.
- Range: [0, 100]
- Default: 25
cc_cwnd_full_per_cwnd:
- Description: This parameter governs how often a cwnd must be
full, in order to allow congestion window increase. If it is 1,
then the cwnd only needs to be full once per cwnd worth of acks.
If it is 0, then it must be full once every cwnd update (ie:
every SENDME).
- Range: [0, 1]
- Default: 1
- Shadow Tuning Results:
A value of 0 resulted in a slight loss of performance, and increased
variance in throughput. The optimal number here likely depends on
edgeconn inbuf size, edgeconn kernel buffer size, and eventloop
behavior.
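The cwnd-full parameters above can be sketched as two predicates; the names and exact comparison forms are assumptions from the descriptions, and cc_cwnd_full_per_cwnd then governs how often the "full" predicate must hold (once per cwnd of acks, at its default of 1) before the window may grow:

```python
# Illustrative sketch of the cwnd-full checks governed by
# cc_cwnd_full_gap and cc_cwnd_full_minpct. Not Tor's code.

CC_SENDME_INC = 31
CC_CWND_FULL_GAP = 4
CC_CWND_FULL_MINPCT = 25

def cwnd_became_full(cwnd, inflight):
    """Full if inflight is within full_gap sendme_incs of the cwnd."""
    return inflight + CC_CWND_FULL_GAP * CC_SENDME_INC >= cwnd

def cwnd_became_nonfull(cwnd, inflight):
    """Immediately non-full if inflight drops below the low watermark."""
    return 100 * inflight < CC_CWND_FULL_MINPCT * cwnd
```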
6.5.4. NOLA Parameters
cc_nola_overshoot:
- Description: The number of cells to add to the BDP estimator to obtain
the NOLA cwnd.
- Range: [0, 1000]
- Default: 100
- Tuning Values: 0, 50, 100, 150, 200
- Tuning Notes:
In order to compete against current fixed sendme, and to ensure
that the congestion window has an opportunity to grow, we must
set the cwnd above the current BDP estimate. How much above will
be a function of competing traffic. It may also turn out that
absent any more aggressive competition, we do not need to overshoot
the BDP estimate.
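A minimal sketch of the NOLA window computation implied above: the cwnd simply tracks the BDP estimate plus cc_nola_overshoot (the clamp at 'cc_cwnd_min' is our assumption, consistent with the minimum window used elsewhere in this proposal):

```python
# Minimal sketch of the NOLA cwnd rule: BDP estimate plus a fixed
# overshoot, clamped at the minimum window. Illustrative only.

CC_NOLA_OVERSHOOT = 100
CC_CWND_MIN = 100

def nola_cwnd(bdp_estimate):
    """Set the congestion window directly from the BDP estimator."""
    return max(bdp_estimate + CC_NOLA_OVERSHOOT, CC_CWND_MIN)
```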
6.5.5. Flow Control Parameters
As with previous sections, the parameters in this section are sorted with
the parameters that are most important to tune, first.
These parameters have been tuned using onion services. The defaults are
believed to be good.
cc_xoff_client
cc_xoff_exit
- Description: Specifies the outbuf length, in relay cell multiples,
before we send an XOFF.
- Range: [1, 10000]
- Default: 500
- Tuning Values: [500, 1000]
- Tuning Notes:
This threshold plus the sender's cwnd must be greater than the
cc_xon_rate value, or a rate cannot be computed. Unfortunately,
unless it is sent, the receiver does not know the cwnd. Therefore,
this value should always be higher than cc_xon_rate minus
'cc_cwnd_min' (100) minus the xon threshold value (0).
cc_xon_rate
- Description: Specifies how many full packed cells of bytes must arrive
before we can compute a rate, as well as how often we can
send XONs.
- Range: [1, 5000]
- Default: 500
- Tuning Values: [500, 1000]
- Tuning Notes:
Setting this high will prevent excessive XONs, as well as reduce
side channel potential, but it will delay response to queuing, and
will hinder our ability to detect rate changes. However, low
values will also reduce our ability to accurately measure drain
rate. This value should always be lower than 'cc_xoff_*' +
'cc_cwnd_min', so that a rate can be computed solely from the outbuf
plus inflight data.
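The relation between 'cc_xoff_*', 'cc_xon_rate', and 'cc_cwnd_min' described in these tuning notes can be expressed as a simple sanity check (illustrative only):

```python
# Sketch of the parameter sanity relation from the tuning notes above:
# a drain rate can only be computed if the outbuf threshold plus the
# minimum inflight covers cc_xon_rate cells. Illustrative only.

CC_CWND_MIN = 100

def xon_rate_computable(cc_xoff, cc_xon_rate):
    """True if a rate can be computed from outbuf plus inflight data."""
    return cc_xon_rate <= cc_xoff + CC_CWND_MIN
```

With both defaults at 500, the relation holds with room to spare; raising cc_xon_rate well above cc_xoff would break it.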
cc_xon_change_pct
- Description: Specifies how much the edge drain rate can change before
we send another advisory cell.
- Range: [1, 99]
- Default: 25
- Tuning values: [25, 50, 75]
- Tuning Notes:
Sending advisory updates due to a rate change may help us avoid
hitting the XOFF limit, but it may also not help much unless we
are already above the advise limit.
cc_xon_ewma_cnt
- Description: Specifies the N in the N_EWMA of rates.
- Range: [2, 100]
- Default: 2
- Tuning values: [2, 3, 5]
- Tuning Notes:
Setting this higher will smooth over changes in the rate field,
and thus avoid XONs, but will reduce our reactivity to rate changes.
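A sketch of the N_EWMA smoothing and the cc_xon_change_pct advisory decision follows; the EWMA weighting of 2/(N+1) and the function names are our assumptions, not Tor's exact code:

```python
# Illustrative sketch of drain-rate smoothing (cc_xon_ewma_cnt) and the
# cc_xon_change_pct advisory XON decision.

CC_XON_EWMA_CNT = 2
CC_XON_CHANGE_PCT = 25

def n_ewma(prev, value, n=CC_XON_EWMA_CNT):
    """N_EWMA update with weight 2/(N+1) on the newest value (assumed)."""
    return (2.0 * value + (n - 1) * prev) / (n + 1)

def should_send_advisory_xon(last_advised_rate, current_rate):
    """Send another advisory XON if the rate moved more than change_pct."""
    delta = abs(current_rate - last_advised_rate)
    return 100 * delta > CC_XON_CHANGE_PCT * last_advised_rate
```

With the default N=2, the newest sample gets two thirds of the weight, which is why the smoothing is only mild and reactivity stays high.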
6.5.6. External Performance Parameters to Tune
The following parameters are from other areas of Tor, but tuning them
will improve congestion control performance. They are again sorted
by most important to tune, first.
cbtquantile
- Description: Specifies the percentage cutoff for the circuit build
timeout mechanism.
- Range: [60, 80]
- Default: 80
- Tuning Values: [70, 75, 80]
- Tuning Notes:
The circuit build timeout code causes Tor to use only the fastest
'cbtquantile' percentage of paths to build through the network.
Lowering this value will help avoid congested relays, and improve
latency.
CircuitPriorityHalflifeMsec
- Description: The CircEWMA half-life specifies the time period after
which the cell count on a circuit is halved. This allows
circuits to regain their priority if they stop being bursty.
- Range: [1, INT32_MAX]
- Default: 30000
- Tuning Values: [5000, 15000, 30000, 60000]
- Tuning Notes:
When we last tuned this, it was before KIST[31], so previous values may
have little relevance to today. According to the CircEWMA paper[30], values
that are too small will fail to differentiate bulk circuits from interactive
ones, and values that are too large will allow new bulk circuits to keep
priority over interactive circuits for too long. The paper does say
that the system was not overly sensitive to specific values, though.
CircuitPriorityTickSecs
- Description: This specifies how often in seconds we readjust circuit
priority based on their EWMA.
- Range: [1, 600]
- Default: 10
- Tuning Values: [1, 5, 10]
- Tuning Notes:
Even less is known about the optimal value for this parameter. At a
guess, it should be more often than the half-life. Changing it also
influences the half-life decay, though, at least according to the
CircEWMA paper[30].
KISTSchedRunInterval
- If 0, KIST is disabled. (We should also test KIST disabled)
6.5.7. External Memory Reduction Parameters to Tune
The following parameters are from other areas of Tor, but tuning them
will reduce memory utilization in relays. They are again sorted by most
important to tune, first.
circ_max_cell_queue_size
- Description: Specifies the maximum number of cells that are allowed
to accumulate in a relay queue before the circuit is closed.
- Range: [1000, INT32_MAX]
- Default: 50000
- Tuning Values: [1000, 2500, 5000]
- Tuning Notes:
Once all clients have upgraded to congestion control, relay circuit
queues should be minimal. We should minimize this value, as any
large amount of queueing likely indicates a violation of the algorithm.
cellq_low
cellq_high
- Description: Specifies the number of cells that can build up in
a circuit's queue for delivery onto a channel (from edges)
before we either block or unblock reading from streams
attached to that circuit.
- Range: [1, 1000]
- Default: low=10, high=256
- Tuning Values: low=[0, 2, 4, 8]; high=[16, 32, 64]
- Tuning Notes:
When data arrives from edges into Tor, it gets packaged up into cells
and then delivered to the cell queue, and from there is dequeued and
sent on a channel. If the channel has blocked (see below params), then
this queue grows until the high watermark, at which point Tor stops
reading on all edges associated with a circuit, and a congestion
signal is delivered to that circuit. At 256 cells, this is ~130k of
data for *every* circuit, which is far more than Tor can write in a
channel outbuf. Lowering this will reduce latency, reduce memory
usage, and improve responsiveness to congestion. However, if it is
too low, we may incur additional mainloop invocations, which are
expensive. We will need to trace or monitor epoll() invocations in
Shadow or on a Tor exit to verify that low values do not lead to
more mainloop invocations.
- Shadow Tuning Results:
After extensive tuning, it turned out that the defaults were optimal
in terms of throughput.
orconn_high
orconn_low
- Description: Specifies the number of bytes that can be held in an
orconn's outbuf before we block or unblock the orconn.
- Range: [509, INT32_MAX]
- Default: low=16k, high=32k
- Tuning Notes:
When the orconn's outbuf is above the high watermark, cells begin
to accumulate in the cell queue as opposed to being added to the
outbuf. It may make sense to lower this to be more in-line with the
cellq values above. Also note that the low watermark is only used by
the vanilla scheduler, so tuning it may be relevant when we test with
KIST disabled. Just like the cell queue, if this is set lower, congestion
signals will arrive sooner to congestion control when orconns become
blocked, and less memory will occupy queues. It will also reduce latency.
Note that if this is too low, we may not fill TLS records, and we may
incur excessive epoll()/mainloop invocations. Tuning this is likely
less beneficial than tuning the above cell_queue, unless KIST is
disabled.
MaxMemInQueues
- Should be possible to set much lower, similarly to help with
OOM conditions due to protocol violation. Unfortunately, this
is just a torrc option, and a bad one at that.
7. Protocol format specifications [PROTOCOL_SPEC]
TODO: This section needs details once we close out other TODOs above.
7.1. Circuit window handshake format
TODO: We need to specify a way to communicate the currently seen
cc_sendme_inc consensus parameter to the other endpoint,
due to consensus sync delay. Probably during the CREATE
onionskin (and RELAY_COMMAND_EXTEND).
TODO: We probably want stricter rules on the range of values
for the per-circuit negotiation - something like
it has to be between [cc_sendme_inc/2, 2*cc_sendme_inc].
That way, we can limit weird per-circuit values, but still
allow us to change the consensus value in increments.
7.2. XON/XOFF relay cell formats
TODO: We need to specify XON/XOFF for flow control. This should be
simple.
TODO: We should also allow it to carry stream data, as in Prop 325.
7.3. Onion Service formats
TODO: We need to specify how to signal support for congestion control
in an onion service, to both the intropoint and to clients.
7.4. Protocol Version format
TODO: We need to pick a protover to signal Exit and Intropoint
congestion control support.
7.5. SENDME relay cell format
TODO: We need to specify how to add stream data to a SENDME as an
optimization.
7.6. Extrainfo descriptor formats
TODO: We will want to gather information on circuitmux and other
relay queues, as well as XON/XOFF rates, and edge connection
queue lengths at exits.
8. Security Analysis [SECURITY_ANALYSIS]
The security risks of congestion control come in three forms: DoS
attacks, fairness abuse, and side channel risk.
8.1. DoS Attacks (aka Adversarial Buffer Bloat)
The most serious risk of eliminating our current window cap is that
endpoints can abuse this situation to create huge queues and thus DoS
Tor relays.
This form of attack was already studied against the Tor network in the
Sniper attack:
https://www.freehaven.net/anonbib/cache/sniper14.pdf
We had two fixes for this. First, we implemented a circuit-level OOM
killer that closed circuits whose queues became too big, before the
relay OOMed and crashed.
Second, we implemented authenticated SENDMEs, so clients could not
artificially increase their window sizes with honest exits:
https://gitweb.torproject.org/torspec.git/tree/proposals/289-authenticated-sendmes.txt
We can continue this kind of enforcement by having Exit relays ensure that
clients are not transmitting SENDMEs too often, and do not appear to be
inflating their send windows beyond what the Exit expects by calculating a
similar estimated receive window. Note that such an estimate may have error
and may become negative if the estimate is jittery.
Unfortunately, authenticated SENDMEs do *not* prevent the same attack
from being done by rogue exits, or rogue onion services. For that, we
rely solely on the circuit OOM killer. During our experimentation, we
must ensure that the circuit OOM killer works properly to close circuits
in these scenarios.
But in any case, it is important to note that we are not any worse off
with congestion control than we were before, with respect to these kinds
of DoS attacks. In fact, the deployment of congestion control by honest
clients should reduce queue use and overall memory use in relays,
allowing them to be more resilient to OOM attacks than before.
8.2. Congestion Control Fairness Abuse (aka Cheating)
On the Internet, significant research and engineering effort has been
devoted to ensuring that congestion control algorithms are "fair" in
that each connection receives equal throughput. This fairness is
provided both via the congestion control algorithm, as well as via queue
management algorithms at Internet routers.
One of the most unfortunate early results was that TCP Vegas, despite
being near-optimal at minimizing queue lengths at routers, was easily
out-performed by more aggressive algorithms that tolerated larger queue
delay (such as TCP Reno).
Note that because the most common direction of traffic for Tor is from
Exit to client, unless Exits are malicious, we do not need to worry
about rogue algorithms as much, but we should still examine them in our
experiments because of the possibility of malicious Exits, as well as
malicious onion services.
Queue management can help further mitigate this risk, too. When RTT is
used as a congestion signal, our current Circuit-EWMA queue management
algorithm is likely sufficient for this. Because Circuit-EWMA will add
additional delay to loud circuits, "cheaters" who use alternate
congestion control algorithms to inflate their congestion windows should
end up with more RTT congestion signals than those who do not, and the
Circuit-EWMA scheduler will also relay fewer of their cells per time
interval.
In this sense, we do not need to worry about fairness and cheating as a
security property, but a lack of fairness in the congestion control
algorithm *will* increase memory use in relays to queue these
unfair/loud circuits, perhaps enough to trigger the OOM killer. So we
should still be mindful of these properties in selecting our congestion
control algorithm, to minimize relay memory use, if nothing else.
These two properties (honest Exits and Circuit-EWMA) may even be enough
to make it possible to use [TOR_VEGAS] even in the presence of other
algorithms, which would be a huge win in terms of memory savings as well
as vastly reduced queue delay. We must verify this experimentally,
though.
8.3. Side Channel Risks
Vastly reduced queue delay and predictable amounts of congestion on the
Tor network may make certain forms of traffic analysis easier.
Additionally, the ability to measure RTT and have it be stable due to
minimal network congestion may make geographical inference attacks
easier:
https://www.freehaven.net/anonbib/cache/ccs07-latency-leak.pdf
https://www.robgjansen.com/publications/howlow-pets2013.pdf
It is an open question as to whether these risks are serious enough to
warrant eliminating the ability to measure RTT at the protocol level and
abandoning it as a congestion signal, in favor of other approaches
(which have their own side channel risks). It will be difficult to
comprehensively eliminate RTT measurements, too.
On the plus side, Conflux traffic splitting (which is made easy once
congestion control is implemented) does show promise as providing
defense against traffic analysis:
https://www.comsys.rwth-aachen.de/fileadmin/papers/2019/2019-delacadena-splitting-defense.pdf
There is also literature on shaping circuit bandwidth to create a side
channel. This can be done regardless of the use of congestion control,
and is not an argument against using congestion control. In fact, the
Backlit defense may be an argument in favor of endpoints monitoring
circuit bandwidth and latency more closely, as a defense:
https://www.freehaven.net/anonbib/cache/ndss09-rainbow.pdf
https://www.freehaven.net/anonbib/cache/ndss11-swirl.pdf
https://www.freehaven.net/anonbib/cache/acsac11-backlit.pdf
Finally, recall that we are considering ideas/xxx-backward-ecn.txt
[BACKWARD_ECN] to use a circuit-level cell_t.command to signal
congestion. This allows all relays in the path to signal congestion in
under RTT/2 in either direction, and it can be flipped on existing relay
cells already in transit, without introducing any overhead. However,
because cell_t.command is visible and malleable to all relays, it can
also be used as a side channel. So we must limit its use to a couple of
cells per circuit, at most.
https://blog.torproject.org/tor-security-advisory-relay-early-traffic-confirmation-attack
9. Onion Service Negotiation [ONION_NEGOTIATION]
Onion services require us to advertise the protocol version and congestion
control parameters in a different way, since the endpoints do not know each
other the way a client knows all the relays and what they support.
Additionally, we cannot use ntorv3 for onion service negotiation, because it
is not supported at all rendezvous and introduction points.
To address this, negotiation is done in two parts. First, the service needs to
advertise to the world that it supports congestion control, and its view of
the current cc_sendme_inc consensus parameter. This is done through a new
line in the onion service descriptor, see section 9.1 below.
Second, the client needs to inform the service that it wants to use congestion
control on the rendezvous circuit. This is done through the INTRODUCE cell as
an extension, see section 9.2 below.
9.1. Onion Service Descriptor
We propose to add a new line to advertise the flow control protocol version,
in the encrypted section of the onion service descriptor:
"flow-control" SP version-range SP sendme-inc NL
The "version-range" value is the same as the "FlowCtrl" protocol version
that relays advertise, as defined earlier in this proposal. The current
value is "1-2".
The "sendme-inc" value comes from the service's current cc_sendme_inc
consensus parameter.
Clients MUST ignore additional unknown versions in "version-range", and MUST
ignore any additional values on this line.
Clients SHOULD use the highest value in "version-range" to govern their
protocol choice for "FlowCtrl" and INTRODUCE cell format, as per Section 9.2
below.
If clients do not support any of the versions in "version-range", they SHOULD
reject the descriptor. (They MAY choose to ignore this line instead, but doing
so means using the old fixed-window SENDME flow control, which will likely be
bad for the network).
Clients that are able to parse this line and know the protocol version
MUST validate that the "sendme-inc" value is within a multiple of 2 of the
"cc_sendme_inc" in the consensus that they see. If "sendme-inc" is not within
range, they MUST reject the descriptor.
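A sketch of this descriptor-side check, reading "within a multiple of 2" as the range [cc_sendme_inc/2, 2*cc_sendme_inc] (consistent with the per-circuit range discussed in Section 7.1); the function is illustrative:

```python
# Illustrative client-side check of the descriptor's "sendme-inc" value
# against the client's view of the cc_sendme_inc consensus parameter.

def sendme_inc_acceptable(advertised, consensus_inc):
    """Accept only values within a multiple of 2 of the consensus value."""
    return consensus_inc // 2 <= advertised <= 2 * consensus_inc
```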
If their consensus also lists a non-zero "cc_alg", they MAY then send the
congestion control request extension field in the INTRODUCE1 cell, which is
detailed in the next section.
A service should only advertise its flow control version if congestion control is
enabled. It MUST remove this line if congestion control is disabled.
If the service observes a change in 'cc_sendme_inc' consensus parameter since
it last published its descriptor, it MUST immediately close its introduction
points, and publish a new descriptor with the new "sendme-inc" value. The
additional step of closing the introduction points ensures that no clients
arrive using a cached descriptor, with the old "sendme-inc" value.
9.2 INTRODUCE cell extension
We propose a new extension to the INTRODUCE cell which can be used to send
congestion control parameters down to the service. It is important to note
that this extension is used in the encrypted section of the cell, not in the
section readable by the introduction point.
If used, it needs to be encoded within the ENCRYPTED section of the INTRODUCE1
cell defined in rend-spec-v3.txt section 3.3. The content is defined as
follows:
EXT_FIELD_TYPE:
[01] -- Congestion Control Request.
This field has zero payload length. Its presence signifies that the client wants to
use congestion control. The client MUST NOT set this field, or use
ntorv3, if the service did not list "2" in the "FlowCtrl" line in the
descriptor. The client SHOULD NOT provide this field if the consensus parameter
'cc_alg' is 0.
The service MUST ignore any unknown fields.
9.3 Protocol Flow
First, the client reads the "flow-control" line in the descriptor and
selects the maximum version supported by both itself and the service, per
that line's "version-range". As an example, if the client supports 2-3-4
and the service supports 2-3, then 3 is chosen.
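The version selection above amounts to taking the maximum of the intersection of the two supported sets; a minimal sketch:

```python
# Minimal sketch of the Section 9.3 version choice: pick the highest
# protocol version supported by both client and service.

def choose_flowctrl_version(client_versions, service_versions):
    """Return the highest common version, or None if there is none."""
    common = set(client_versions) & set(service_versions)
    return max(common) if common else None
```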
It then sends that value, along with its desired cc_sendme_inc value, in
the INTRODUCE1 cell extension.
The service will then validate that it does support version 3 and that the
parameter cc_sendme_inc is within range of the protocol. Congestion control is
then applied to the rendezvous circuit.
9.4 Circuit Behavior
If the extension is not found in the cell, the service MUST NOT use congestion
control on the rendezvous circuit.
Any invalid values received in the extension should result in closing the
introduction circuit, thus not continuing the rendezvous process. An
invalid value is one that is either unsupported or outside the defined
range.
9.5 Security Considerations
Advertising a new line in a descriptor does leak that a service is running at
least a certain tor version. We believe that this is an acceptable risk in
order for services to be able to take advantage of congestion control. Once a
new tor stable is released, we hope that most services upgrade, and thus
everyone looks the same again.
The new extension is located in the encrypted part of the INTRODUCE1 cell and
thus the introduction point can't learn its content.
10. Exit negotiation [EXIT_NEGOTIATION]
Similar to onion services, clients and exits will need to negotiate the
decision to use congestion control, as well as a common value for
'cc_sendme_inc', for a given circuit.
10.1. When to negotiate
Clients decide to initiate a negotiation attempt for a circuit if the
consensus lists a non-zero 'cc_alg' parameter value, and the protover line
for their chosen exit includes a value of 2 in the "FlowCtrl" field.
If the FlowCtrl=2 subprotocol is absent, a client MUST NOT attempt negotiation.
If 'cc_alg' is absent or zero, a client SHOULD NOT attempt
negotiation, or use ntorv3.
If the protover and consensus conditions are met, clients SHOULD negotiate
with the Exit if the circuit is to be used for exit stream activity. Clients
SHOULD NOT negotiate congestion control for one-hop circuits, or internal
circuits.
10.2. What to negotiate
Clients and exits need not agree on a specific congestion control algorithm,
or any aspects of its behavior. Each endpoint's management of its congestion
window is independent. However, because the new algorithms no longer use
stream SENDMEs or fixed window sizes, they cannot be used with an endpoint
expecting the old behavior.
Additionally, each endpoint must agree on the SENDME increment rate, in
order to synchronize SENDME authentication and pacing.
For this reason, negotiation needs to establish a boolean: "use congestion
control", and an integer value for SENDME increment.
No other parameters need to be negotiated.
10.3. How to negotiate
Negotiation is performed by sending an ntorv3 onionskin, as specified in
Proposal 332, to the Exit node. The encrypted payload contents from the
clients are encoded as an extension field, as in the onion service INTRO1
cell:
EXT_FIELD_TYPE:
[01] -- Congestion Control Request.
As in the INTRO1 extension field, this field has zero payload length.
Its presence signifies that the client wants to use congestion control.
Again, the client MUST NOT set this field, or use ntorv3, if this exit did not
list "2" in the "FlowCtrl" version line. The client SHOULD NOT set this to 1
if the consensus parameter 'cc_alg' is 0.
The Exit MUST ignore any additional unknown extension fields.
The server's encrypted ntorv3 reply payload is encoded as:
EXT_FIELD_TYPE:
[02] -- Congestion Control Response.
If this field is present, the client should use it to learn
the congestion control parameters to use on this circuit.
EXT_FIELD content payload is a single byte:
sendme_inc [1 byte]
The Exit MUST provide its current view of 'cc_sendme_inc' in this payload if it
observes a non-zero 'cc_alg' consensus parameter. Exits SHOULD only include
this field once.
The client MUST use the FIRST such field value, and ignore any duplicate field
specifiers. The client MUST ignore any unknown additional fields.
10.5. Client checks
The client MUST reject any ntorv3 replies for non-ntorv3 onionskins.
The client MUST reject an ntorv3 reply with field EXT_FIELD_TYPE=02, if the
client did not include EXT_FIELD_TYPE=01 in its handshake.
The client SHOULD reject a sendme_inc field value that differs from the
current 'cc_sendme_inc' consensus parameter by more than 1, in either
direction.
If a client rejects a handshake, it MUST close the circuit.
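The sendme_inc acceptance check above can be sketched as follows (illustrative function name):

```python
# Illustrative client-side check of the Exit's sendme_inc reply (Section
# 10.5): accept only values within +/- 1 of the consensus parameter.

def exit_sendme_inc_ok(reply_inc, consensus_inc):
    """True if the Exit's reply is close enough to the client's consensus."""
    return abs(reply_inc - consensus_inc) <= 1
```

On failure, the client closes the circuit, per the rule above.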
10.6. Managing consensus updates
The pedantic reader will note that a rogue consensus can cause all clients
to decide to close circuits by changing 'cc_sendme_inc' by a large margin.
As a matter of policy, the directory authorities MUST NOT change
'cc_sendme_inc' by more than +/- 1.
In Shadow simulation, the optimal 'cc_sendme_inc' value was found to be ~31
cells, or one (1) TLS record worth of cells. We do not expect to change this
value significantly.
11. Acknowledgements
Immense thanks to Toke Høiland-Jørgensen for considerable input into all
aspects of the TCP congestion control background material for this proposal,
as well as review of our versions of the algorithms.
12. Glossary [GLOSSARY]
ACK - Acknowledgment. In congestion control, this is a type of packet that
signals that the endpoint received a packet or packet set. In Tor, ACKs are
called SENDMEs.
BDP - Bandwidth Delay Product. This is the quantity of bytes that are actively
in transit on a path at any given time. Typically, this does not count packets
waiting in queues. It is essentially (RTT - queue_delay)*BWE.
BWE - BandWidth Estimate. This is the estimated throughput on a path.
CWND - Congestion WiNDow. This is the total number of packets that are allowed
to be "outstanding" (aka not ACKed) on a path at any given time. An ideal
congestion control algorithm sets CWND=BDP.
EWMA - Exponential Weighted Moving Average. This is a mechanism for smoothing
out high-frequency changes in a value, due to temporary effects.
ICW - Initial Congestion Window. This is the initial value of the congestion
window at the start of a connection.
RTT - Round Trip Time. This is the time it takes for one endpoint to send a
packet to the other endpoint, and get a response.
SS - Slow Start. This is the initial phase of most congestion control
algorithms. Despite the name, it is an exponential growth phase, to quickly
increase the congestion window from the ICW value up to the path BDP. After Slow
Start, changes to the congestion window are linear.
XOFF - Transmitter Off. In flow control, XOFF means that the receiver is
receiving data too fast and is beginning to queue. It is sent to tell the
sender to stop sending.
XON - Transmitter On. In flow control, XON means that the receiver is ready to
receive more data. It is sent to tell the sender to resume sending.
13. [CITATIONS]
1. Options for Congestion Control in Tor-Like Networks.
https://lists.torproject.org/pipermail/tor-dev/2020-January/014140.html
2. Towards Congestion Control Deployment in Tor-like Networks.
https://lists.torproject.org/pipermail/tor-dev/2020-June/014343.html
3. DefenestraTor: Throwing out Windows in Tor.
https://www.cypherpunks.ca/~iang/pubs/defenestrator.pdf
4. TCP Westwood: Bandwidth Estimation for Enhanced Transport over Wireless Links
http://nrlweb.cs.ucla.edu/nrlweb/publication/download/99/2001-mobicom-0.pdf
5. Performance Evaluation and Comparison of Westwood+, New Reno, and Vegas TCP Congestion Control
http://cpham.perso.univ-pau.fr/TCP/ccr_v31.pdf
6. Linux 2.4 Implementation of Westwood+ TCP with rate-halving
https://c3lab.poliba.it/images/d/d7/Westwood_linux.pdf
7. TCP Westwood
http://intronetworks.cs.luc.edu/1/html/newtcps.html#tcp-westwood
8. TCP Vegas: New Techniques for Congestion Detection and Avoidance
http://pages.cs.wisc.edu/~akella/CS740/F08/740-Papers/BOP94.pdf
9. Understanding TCP Vegas: A Duality Model
ftp://ftp.cs.princeton.edu/techreports/2000/628.pdf
10. TCP Vegas
http://intronetworks.cs.luc.edu/1/html/newtcps.html#tcp-vegas
11. Controlling Queue Delay
https://queue.acm.org/detail.cfm?id=2209336
12. Controlled Delay Active Queue Management
https://tools.ietf.org/html/rfc8289
13. How Much Anonymity does Network Latency Leak?
https://www.freehaven.net/anonbib/cache/ccs07-latency-leak.pdf
14. How Low Can You Go: Balancing Performance with Anonymity in Tor.
https://www.robgjansen.com/publications/howlow-pets2013.pdf
15. POSTER: Traffic Splitting to Counter Website Fingerprinting.
https://www.comsys.rwth-aachen.de/fileadmin/papers/2019/2019-delacadena-splitting-defense.pdf
16. RAINBOW: A Robust And Invisible Non-Blind Watermark for Network Flows.
https://www.freehaven.net/anonbib/cache/ndss09-rainbow.pdf
17. SWIRL: A Scalable Watermark to Detect Correlated Network Flows.
https://www.freehaven.net/anonbib/cache/ndss11-swirl.pdf
18. Exposing Invisible Timing-based Traffic Watermarks with BACKLIT.
https://www.freehaven.net/anonbib/cache/acsac11-backlit.pdf
19. The Sniper Attack: Anonymously Deanonymizing and Disabling the Tor Network.
https://www.freehaven.net/anonbib/cache/sniper14.pdf
20. Authenticating sendme cells to mitigate bandwidth attacks.
https://gitweb.torproject.org/torspec.git/tree/proposals/289-authenticated-sendmes.txt
21. Tor security advisory: "relay early" traffic confirmation attack.
https://blog.torproject.org/tor-security-advisory-relay-early-traffic-confirmation-attack
22. The Path Less Travelled: Overcoming Tor’s Bottlenecks with Traffic Splitting.
https://www.cypherpunks.ca/~iang/pubs/conflux-pets.pdf
23. Circuit Padding Developer Documentation.
https://github.com/torproject/tor/blob/master/doc/HACKING/CircuitPaddingDevelopment.md
24. Plans for Tor Live Network Performance Experiments.
https://gitlab.torproject.org/tpo/core/team/-/wikis/NetworkTeam/Sponsor61/PerformanceExperiments
25. Tor Performance Metrics for Live Network Tuning.
https://gitlab.torproject.org/legacy/trac/-/wikis/org/roadmaps/CoreTor/PerformanceMetrics
26. Bandwidth-Delay Product.
https://en.wikipedia.org/wiki/Bandwidth-delay_product
27. Exponentially Weighted Moving Average.
https://corporatefinanceinstitute.com/resources/knowledge/trading-investing/exponentially-weighted-moving-average-ewma/
28. Dropping on the Edge.
https://www.petsymposium.org/2018/files/papers/issue2/popets-2018-0011.pdf
29. The Bandguards Subsystem (Vanguards addon documentation).
https://github.com/mikeperry-tor/vanguards/blob/master/README_TECHNICAL.md#the-bandguards-subsystem
30. An Improved Algorithm for Tor Circuit Scheduling.
https://www.cypherpunks.ca/~iang/pubs/ewma-ccs.pdf
31. KIST: Kernel-Informed Socket Transport for Tor.
https://matt.traudt.xyz/static/papers/kist-tops2018.pdf
32. RFC 3742: Limited Slow-Start for TCP with Large Congestion Windows.
https://datatracker.ietf.org/doc/html/rfc3742#section-2
33. Starvation in End-to-End Congestion Control.
https://people.csail.mit.edu/venkatar/cc-starvation.pdf
34. https://gitlab.torproject.org/tpo/core/tor/-/issues/40642
35. https://gitlab.torproject.org/tpo/network-health/analysis/-/issues/49