Dynamic MultiPath Routing
Illinois Institute of Technology, 10 W 31 Street, Stuart Building, Chicago, IL 60565, US
+1 312 567 5329, kapoor@iit.edu, http://www.cs.iit.edu/~kapoor
Category: General
Keywords: Routing, MultiPath, Internet-Draft (I-D)

In this draft we consider dynamic multipath routing
and introduce two methods that use additive increase and
multiplicative decrease for flow control, similar to TCP.
Our first method allows for congestion control and re-routing
flows as users join in or leave the network.
As the number of applications and services supported by the Internet grows,
bandwidth requirements increase dramatically, so
it is imperative to design methods that ensure not only that network throughput
is maximized but also that a level of fairness is maintained in
network resource allocation.
Our second method provides fairness over multiple streams of traffic.
We drive the multiplicative decrease part of the
algorithm with link queue occupancy data provided by an enhanced routing protocol.
Internet packet traffic keeps growing as the number of applications and services it supports, as well as their
bandwidth requirements, explodes.
It has thus become necessary to find ways to ensure that network throughput is maximized.
In this draft we propose dynamic multi-path routing to improve network throughput.
Multipath routing is important, not only for throughput but also
for reliability and security.
In multipath routing, improvements in performance are achieved by
utilizing more than one feasible path. This approach to
routing makes for more effective network resource utilization.
Various research efforts on multipath routing have addressed network
redundancy, congestion, and QoS issues.
Prior work on multipath routing includes work on bounding delays as well as delay variance.
The prior work is primarily from the viewpoint
of static network design but, in practice,
congestion control is necessary to prevent some user
flows from being choked due to link bottlenecks.
Single path routing implementations of TCP achieve that by rate control on specified paths.
TCP is able to handle elastic traffic from applications and establishes
a degree of fairness by reducing the rate of transmission rapidly upon detecting
congestion.
Regular TCP has been shown to provide Pareto-optimal
allocation of resources.
However, unlike the single path approach of TCP, we consider multipath routing with associated issues of
path selection and congestion.
We may note that multipath TCP (MPTCP) has been studied extensively,
with a number of
IETF proposals.
Prior work on multipath TCP is defined over a specific set of paths, and
the choice of paths or the routing is independent of congestion control; determining
the right number of paths thus becomes a problem.
The variation of throughput with the number of paths has been illustrated in prior work.
Along with consideration of congestion,
we also need to ensure a level of fairness in network resource allocation.
Factoring fairness into the protocol is important in order to prevent some users' flows from suffering due to bottlenecks in some links.
Based on mathematical optimization formulations, we consider
route determination methods that ensure fairness where all users can achieve at least a minimum percentage of their demand.
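As a toy illustration of this fairness criterion, the following hypothetical helper (not part of the draft) computes the minimum fraction of demand satisfied across source-sink pairs, i.e. the guarantee level that fair route determination should maintain:

```python
def fairness_level(flows, demands):
    """Minimum fraction of demand satisfied over all source-sink
    pairs (illustrative helper; names are assumptions)."""
    return min(flows[st] / demands[st] for st in demands)

# Example: every pair receives at least 50% of its demand.
flows = {("s1", "t1"): 5.0, ("s2", "t2"): 8.0}
demands = {("s1", "t1"): 10.0, ("s2", "t2"): 8.0}
level = fairness_level(flows, demands)  # 0.5
```

A routing that guarantees fairness coefficient Gamma would keep this value at or above Gamma for all pairs.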
We introduce an algorithm that uses additive increase and multiplicative decrease for flow control, and
we present experiments that illustrate its stability and convergence.
The algorithm may be considered as a generalization of TCP.
We have performed an extensive set of simulations using the NS-3 simulation environment.
In our implementation we drive the multiplicative decrease part of the algorithm using
queue occupancy data at each outgoing network link, with that data provided by an enhanced routing protocol.
For a more in-depth evaluation of the algorithm's performance we simulated not only the fairness algorithm
but also a version of the same without the fairness component.
We also performed and compared simulations using standard TCP and TCP with ECMP enabled.
The Joint Routing and Congestion Control algorithm utilizes the link state of the network.
The algorithm utilizes a price variable that models congestion at each link and
a variable that models the fairness coefficient.
The fairness coefficient is used to ensure that the same percentage of traffic
is being routed for multiple source-sink pairs.
Each edge of the network has a price function associated with it, referred to as P(e).
The price function measures the congestion on the link.
The price function lies in the interval [0,1] and is 0 if the edge occupancy is low and
1 if the edge occupancy is high.
In theory the edge occupancy is given by f(e)/C(e) where f(e) is the amount of traffic on
the link and C(e) is the capacity of the link. In practice, the edge occupancy is measured by the
congestion in the queue serving the link.
The price function increases as the congestion grows. This function's values will be referred
to by a price variable on link e, which is denoted by PBQ(e).
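A minimal sketch of such a price function follows. The draft only fixes the behavior at the extremes (0 when occupancy is low, 1 when it is high); the linear ramp and the `low`/`high` thresholds below are assumptions for illustration:

```python
def price(occupancy, low=0.2, high=0.8):
    """Illustrative price function P(e) in [0, 1]: 0 at low edge
    occupancy f(e)/C(e), 1 at high occupancy, linear in between.
    Thresholds are assumed, not taken from the draft."""
    if occupancy <= low:
        return 0.0
    if occupancy >= high:
        return 1.0
    return (occupancy - low) / (high - low)

# PBQ(e) for a link carrying f(e) = 6 units over capacity C(e) = 10
pbq = price(6 / 10)  # occupancy 0.6 -> price 2/3
```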
The price function is complemented by an "increase" function, i.e. a variable
that regulates changes in the amount of traffic based on the fraction of the commodity's traffic
that has been routed.
This function, whose values are represented by the variable PBF(s-t), is used to model fairness.
This variable is related to
the source-destination pair, denoted by s-t, whose requirements are being satisfied.
The variable PBF(s-t) starts with an initial value and goes down to zero as the
requirement of s-t is being increasingly met.
The rate at which PBF decreases is dictated by a fairness coefficient, Gamma.
The formula for PBF(s-t) is 1 - y(s-t)/[Gamma * d(s-t)],
where Gamma is the fairness coefficient, d(s-t) is the demand, and y(s-t) is
the amount of the requirement that is being met.
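The formula above can be sketched directly; clamping PBF at zero once the (scaled) demand is met follows the draft's statement that PBF "goes down to zero", and the function name is an assumption:

```python
def pbf(y, demand, gamma):
    """Fairness variable PBF(s-t) = 1 - y(s-t) / (Gamma * d(s-t)),
    clamped at 0 once the pair's scaled demand is met."""
    return max(0.0, 1.0 - y / (gamma * demand))

# A pair with none of its demand met has PBF = 1; as y(s-t) grows
# toward Gamma * d(s-t), PBF falls to 0.
start = pbf(0.0, 10.0, 0.5)   # 1.0
half  = pbf(2.5, 10.0, 0.5)   # 0.5
done  = pbf(5.0, 10.0, 0.5)   # 0.0
```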
We present the details of the algorithms below.
Let T be the time interval used to increment or modify routing.
We use two coefficients for each path Pi:
Additive increase coefficient:
a positive value ai by which we increment the flow on a path at each iteration:
xi(t) = xi(t-T) + ai
Multiplicative decrease coefficient:
the coefficient bi that we apply to decrement flows:
xi(t) = (1 - bi) xi(t-T)
where xi, t, and T are as above.
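The two update rules can be sketched as follows. Applying the decrease when the path is congested and the increase otherwise reflects how the rules are used later in the draft, but the exact trigger logic here is an assumption of this sketch:

```python
def update_flow(x_prev, a_i, b_i, congested):
    """One flow update for path Pi at interval T: multiplicative
    decrease when the path is congested, additive increase otherwise."""
    if congested:
        return (1.0 - b_i) * x_prev   # xi(t) = (1 - bi) xi(t - T)
    return x_prev + a_i               # xi(t) = xi(t - T) + ai

decreased = update_flow(10.0, 1.0, 0.5, congested=True)   # 5.0
increased = update_flow(10.0, 1.0, 0.5, congested=False)  # 11.0
```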
We utilize multiple methods for calculating bi:
METHOD 1: bi may be computed as follows:
bi = 0 if no edge on the path Pi is congested;
bi = 0.5 if one edge on the path Pi is congested;
bi = 1.0 if more than one edge on the path Pi is congested.
METHOD 2: bi may be computed as follows:
bi = 1 - 1/2^c, where c is the number of congested edges.
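Both methods can be sketched directly from the definitions above (the function names are illustrative). Note that they agree for c <= 1 and diverge only when several edges are congested:

```python
def b_method1(num_congested):
    """METHOD 1: step function of the number of congested edges on Pi."""
    if num_congested == 0:
        return 0.0
    if num_congested == 1:
        return 0.5
    return 1.0

def b_method2(num_congested):
    """METHOD 2: bi = 1 - 1/2^c for c congested edges on Pi."""
    return 1.0 - 1.0 / (2 ** num_congested)

# Method 2 approaches 1 asymptotically instead of saturating at c = 2.
pairs = [(b_method1(c), b_method2(c)) for c in range(4)]
```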
We propose two routing mechanisms. The first, presented here, is a basic mechanism that is primarily based
on multiplicative decrease and additive increase.
If the number of paths used is excessive then no new paths need be generated.
In order to ensure that different source-sink pairs are treated fairly, the
coefficient bi for path Pi is chosen as PBQ − PBF with two
components: a congestion component PBQ and a
fairness component PBF .
PBQ is calculated in the same way as bi above;
PBF is calculated using the formula
PBF(s-t) = 1 - Total_current_Flow(s-t)/(Gamma * Demand(s-t))
where Gamma is the fairness parameter and Demand(s-t) is the demand.
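The combined coefficient can be sketched as below. Clamping bi to [0, 1] so it remains a valid decrease factor is an assumption of this sketch, as is the function name:

```python
def decrease_coefficient(pbq, total_flow, demand, gamma):
    """Fair decrease coefficient bi = PBQ - PBF for a path serving
    pair s-t, with PBF(s-t) = 1 - total_flow / (gamma * demand).
    The [0, 1] clamp is an illustrative assumption."""
    pbf = 1.0 - total_flow / (gamma * demand)
    return min(1.0, max(0.0, pbq - pbf))

# A pair far from its demand (large PBF) is decreased less than a
# pair that has already met its scaled demand (PBF = 0).
unfed = decrease_coefficient(0.5, 0.0, 10.0, 0.5)  # 0.0
fed   = decrease_coefficient(0.5, 5.0, 10.0, 0.5)  # 0.5
```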
If the number of paths becomes excessive then they can be curtailed. At that stage no additional
flow is pushed until congestion is relieved.
We implemented the two algorithms using
the NS-3 simulation environment.
We modeled the network topology on the network of a large service provider, with link capacities proportional to capacities in the actual physical network.
In our implementation we used, for routing, a combination of link-state routing protocol and source routing.
For the link-state part we augmented the NS-3 implementation of the OSPF routing protocol
by adding link queue occupancy to the data exchanged by nodes in Link State Advertisement (LSA) messages,
a minimal increase in LSA data.
That allows for more sophisticated monitoring of network status: if the queue occupancy for one or more links of a path exceeds a given threshold we conclude that the path is experiencing congestion
and that the multiplicative decrease has to be applied to adjust the allocation of flow to the paths.
The additive increase is applied at each iteration, if demand is not met, to augment the sending rate.
Details of congestion measurement are as follows:
In a node-to-node connection (data link) both the source (originating) node and the sink (destination) node
have a PointToPointNetDevice. The PointToPointNetDevice at the originating node is associated with a
queue function of type DropTailQueue. The queue function stores the number of packets in the
queue (waiting to be sent to the destination node).
A queueOccupancy parameter is calculated as numPacketsInQueue/queueMaxSize and compared to a queueThreshold parameter.
If queueOccupancy> queueThreshold the path that uses the data link is flagged as congested.
queueThreshold is an input parameter declared at the start of the simulation.
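The congestion check described above amounts to the following (shown in Python rather than the NS-3 C++ of the actual implementation; the names mirror the parameters mentioned in the text):

```python
def is_congested(num_packets_in_queue, queue_max_size, queue_threshold):
    """Flag a data link as congested when the occupancy of its
    DropTailQueue exceeds the simulation's queueThreshold parameter."""
    queue_occupancy = num_packets_in_queue / queue_max_size
    return queue_occupancy > queue_threshold

# With queueThreshold = 0.8, a 100-packet queue is congested at 90
# packets but not at 50.
hot  = is_congested(90, 100, 0.8)  # True
cool = is_congested(50, 100, 0.8)  # False
```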
At the same time path delay is also calculated.
The source node uses OSPF to find the shortest path to the destination and,
based on available network data, builds a source-routing vector that is inserted in the packet
and used by intermediate nodes to route the packet to the destination.
To implement the source-routing function we augmented the NS-3 Nix-Vector protocol that builds
the source-routing vector from the list of nodes to be traversed, a list that is obtained from OSPF.
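Conceptually, building the source-routing vector from the OSPF node list can be sketched as follows; the (node, next-hop) pair representation is an assumption for illustration, and the actual Nix-Vector encoding in NS-3 is more compact:

```python
def build_source_route(ospf_node_list):
    """Turn the OSPF shortest-path node list into a source-routing
    vector: one (node, next_hop) entry per hop, consulted by each
    intermediate node to forward the packet (simplified sketch)."""
    return [(node, next_hop)
            for node, next_hop in zip(ospf_node_list, ospf_node_list[1:])]

route = build_source_route(["A", "B", "C", "D"])
# [("A", "B"), ("B", "C"), ("C", "D")]
```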
The main process is iterative as we refresh LSA at a fixed interval: for our simulations
we experimented with updating LSAs every 50 ms and 500 ms.
Our results for the throughput improvement are presented below and compared with throughput results
for the standard TCP and TCP using ECMP. We show the average throughput over multiple runs.
In conclusion we found that our algorithm with fairness provides throughput
improvement over both regular TCP and TCP with ECMP.
In addition, its ability to discover additional paths dynamically eliminates the
need to set a preselected set of paths, allowing the traffic load to be spread
amongst a wider but still reasonable set of paths.
Further results may be found at www.cs.iit.edu/~kapoor/papers/reducerate.pdf.
References
Dispersity routing.
Analysis of multi-path routing.
Fast bandwidth reservation with multiline and multipath routing in ATM networks.
Minimizing path delay in multipath networks.
Concurrent multipath routing over bounded paths: minimizing delay variance.
MPTCP is not Pareto-optimal: performance issues and a possible solution.
Data center networking with multipath TCP.
Design, implementation and evaluation of congestion control for multipath TCP.
Experimenting with multipath TCP.
Non-renegable selective acknowledgments (NR-SACKs) for MPTCP.
Coupled congestion control for multipath transport protocols.
Multipath TCP (MPTCP) application interface considerations.
TCP extensions for multipath operation with multiple addresses.
Architectural guidelines for multipath TCP development.