My research interests are in the area of computer networking, concentrated around the themes of traffic management and high-performance wireless networking. In particular, I work on congestion control, reliability, connectionless traffic engineering, quality of service (QoS), last-mile community wireless networks, low-cost free-space-optical networks, automated network management using online simulation, multicast, multimedia networking (including peer-to-peer multimedia systems), and performance analysis. My special interest lies in developing interdisciplinary connections between network architecture and fields like control theory, economics, scalable simulation technologies, video compression and optoelectronics.
We also focus on the impact of our projects: by partnering with universities on student exchange programs and the use of remote resources (e.g., UC Berkeley, INRIA Sophia-Antipolis, and the Utah Emulab and PlanetLab projects); with research labs for internships and collaboration (AT&T Research, the Intel Research Council and IXA Program); with broader industry and startups (e.g., Nortel Networks, Reuters, Packeteer); with standards bodies (WiMAX Forum, IETF, ATM Forum, IEEE wireless standards); and by exploiting entrepreneurial and technology-transfer opportunities.
There are three broad sub-themes within our traffic management activities: reducing demand during overloads (e.g., congestion control for unicast and multicast), provisioning capacity, services and routes (e.g., new QoS service architectures, VPNs, traffic engineering), and increasing network capacity at lower cost (e.g., last-mile community wireless networks and cheap free-space-optical networks). In unreliable wireless and p2p overlay networks, we also need to design reliability mechanisms at the link and transport layers to handle high loss rates and effectively transport multimedia traffic.
Congestion control is the problem of managing "traffic jams," or situations where demand exceeds capacity. It is a core infrastructural problem stemming from the packet-switched and statistically multiplexed nature of the Internet. Like other such core problems (routing, etc.), it has a fundamental bearing on the stability and manageability of the Internet.
Why does congestion occur? Consider a network bottleneck shared by m users, where the users have no explicit limits on consumption and providers provision below peak demand. If an individual user increases his or her demand by N%, he or she stands to gain nearly N% more of the resource. Yet if all users increase their demands by N%, the total demand may well exceed the carrying capacity of the resource, resulting in little net gain, or even a collapse. This is similar to the "tragedy of the commons" in economics and public policy: a user's narrowly construed self-interest conflicts with the collective interest of all users. Further, due to the fragmented nature of Internet administration, a user's traffic may encounter overloads at access hierarchies and at transit/peering points; end-to-end performance is then limited by the most congested element along the single end-to-end path.
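To make this arithmetic concrete, here is a minimal sketch of a single bottleneck shared by m users (the capacity, demand and N values are purely illustrative, and the fair-share model is deliberately simplistic):

    # Tragedy-of-the-commons arithmetic: a unilateral demand increase pays
    # off, but the same increase by everyone does not.
    def goodput_per_user(demands, capacity):
        # Single bottleneck, equal split under overload (leftover
        # redistribution omitted for brevity).
        total = sum(demands)
        if total <= capacity:
            return list(demands)          # no congestion: demand is met
        share = capacity / len(demands)
        return [min(d, share) for d in demands]

    capacity, m = 100.0, 10
    base = [9.5] * m                      # typical demand, below capacity

    solo = base[:]                        # one user raises demand by 20%...
    solo[0] *= 1.2
    print(goodput_per_user(solo, capacity)[0])       # 11.4: the full 20% gain

    everyone = [d * 1.2 for d in base]    # ...but if all users do the same,
    print(goodput_per_user(everyone, capacity)[0])   # 10.0: the gain evaporates

With a FIFO queue and retransmissions in place of this idealized fair share, the "everyone" case can do even worse, which is the collapse referred to above.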
The conceptual solution framework is simple: admit a packet into the network only when the network has the capacity (bandwidth and buffers) to get the packet across. This idealized abstraction is known as a "perfect callback." It is hard to implement, however, because capacity state is distributed across the network and feedback reaches senders only after a propagation delay, by which time conditions may have changed.
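A minimal sketch of the abstraction itself (the class names and single-bottleneck view are illustrative; a real network has no such omniscient admission point):

    from collections import deque

    class PerfectCallbackLink:
        # Idealized bottleneck: a packet is admitted only if a buffer slot
        # is free; otherwise the sender waits and is "called back" when
        # capacity appears.
        def __init__(self, buffer_slots):
            self.free = buffer_slots
            self.waiting = deque()            # senders awaiting a callback

        def try_send(self, sender):
            if self.free > 0:
                self.free -= 1                # capacity reserved; admitted
                return True
            self.waiting.append(sender)       # blocked until capacity frees
            return False

        def packet_departed(self):
            self.free += 1
            if self.waiting:
                sender = self.waiting.popleft()
                self.free -= 1
                sender.callback()             # this sender's packet now goes

    class Sender:
        def __init__(self, name): self.name = name
        def callback(self): print(self.name, "admitted after callback")

    link = PerfectCallbackLink(buffer_slots=1)
    a, b = Sender("A"), Sender("B")
    link.try_send(a)                          # admitted: slot free
    link.try_send(b)                          # blocked: B waits
    link.packet_departed()                    # prints: B admitted after callback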
The sub-projects in this area include (with some overlaps):
Edge-Based Traffic Management: QoS Architectures, VPN Provisioning, Traffic Engineering and Pricing: In the congestion control problem, we assumed that all users are equal. This model is not sustainable from an economic point of view, because ISPs cannot sustain the huge sunk-cost investments needed to build the next-generation infrastructure. The need therefore is to develop differentiated services and offer them to different customer segments, which value them differently. QoS services also need to be provisioned rapidly. The problem of providing "Quality of Service" (QoS) involves two key functions:
The former function has traditionally been achieved through mechanisms such as signaling, admission control, QoS routing and provisioning/policy. The latter is achieved through components such as schedulers, traffic conditioners (shapers, markers, etc.) and buffer management. In general these schemes require support throughout the network to offer quantitative guarantees, which slows deployment. A recent trend in traffic management architecture is a demand for a simpler core network focusing on packet forwarding, with complex TM functions moved to the edge of the network. Taking advantage of this trend, we have made several novel contributions to edge-based traffic management in areas such as pricing, dynamic provisioning, traffic engineering and multi-path routing. These techniques address the resourcing problem dynamically at different time-scales. In a new, ongoing project with AT&T Labs Research, we are taking a quantitative, resource-based and cost-based perspective on the widely publicized network neutrality debate.
PhD Students: Murat Yuksel, Satish Raghunath, Hema Tahilramani Kaur.
The specific sub-projects in this area include:
Our Papers in Wireless Networking
Performance analysis is difficult, especially at large scale and when capturing myriad interactions. Moreover, as articulated by Sally Floyd and Vern Paxson, the ever-changing nature of the Internet makes it harder. The goal of performance analysis is to discover invariants or slowly varying features of the underlying system, and to understand sensitivities to parameters as well as the nature and impact of component interactions.
Our interest is in the areas of large-scale and online performance analysis. In a series of DARPA-funded programs, we showed how large-scale experiment design using a recursive random search (RRS) method can quickly find good parameter settings for a variety of online network management (parameter optimization) problems. Specifically, assuming that network protocol performance is sensitive to parameter settings, the online simulation searches the parameter state space to see if a better parameter setting would yield improved performance. The complexity of the problem arises because the state space has high dimensionality, and evaluating each point in the state space requires a large simulation. Moreover, the simulation requires online models that estimate current workload conditions in a quasi-stationary manner. PhD Students: Ye Tao, David Harrison, Hema Tahilramani Kaur.
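The idea behind RRS can be sketched in a few lines (the published algorithm also restarts its global sampling phase and adapts its shrink schedule; the objective, constants and box-shrinking policy below are illustrative):

    import random

    def recursive_random_search(objective, bounds, n_explore=50,
                                shrink=0.5, rounds=6):
        # Explore: sample the current box uniformly at random.
        # Exploit: re-center and shrink the box around the best sample.
        def sample(box):
            return [random.uniform(lo, hi) for lo, hi in box]

        box = list(bounds)
        best_x, best_y = None, float("inf")
        for _ in range(rounds):
            for _ in range(n_explore):
                x = sample(box)
                y = objective(x)              # each call = one simulation run
                if y < best_y:
                    best_x, best_y = x, y
            box = [(max(lo, bx - shrink * (hi - lo) / 2),
                    min(hi, bx + shrink * (hi - lo) / 2))
                   for (lo, hi), bx in zip(box, best_x)]
        return best_x, best_y

    # Toy stand-in for "simulate the network at this parameter setting":
    cost = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 70) ** 2 / 1e4
    print(recursive_random_search(cost, [(0.0, 1.0), (1.0, 100.0)]))

The appeal in an online setting is that uniform random sampling finds a "good enough" region quickly with high probability, and each shrink step concentrates the expensive simulation budget where it pays off.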
We have combined this technique with a new large-scale simulation platform (ROSS) in an NSF NeTS-NR project to begin systematically exploring performance interactions between large-scale protocols using large-scale measurement-driven inputs. Our RRS technique has also found use in bioimaging (automated segmentation of retina images). Our DARPA projects also led to the formation of a startup company, Premonitia Inc., focused on proactive network management, and significantly influenced the content of a recent course on experimental networking. PhD Students: David Bauer, Ye Tao, Garrett Yuan.
Our Papers in Performance Analysis Tools and Network Management
Given the dramatic technical and economic changes in bandwidth services, a natural question is: "How do multimedia applications leverage these capabilities to make distributed multimedia a reality?"
Phil Chou's (Microsoft Research) position paper makes a nice case for joint source-channel coding. Shannon's source/channel coding theorem states that communication with distortion D(R) can be achieved over a channel with capacity C whenever R is less than C. This communication can also be achieved with a block code partitioned into a source code producing R' > R bits per second and a block channel code consuming R' < C bits per second. The caveat is that as R approaches C the block length increases; nevertheless, such partitioning of source and channel coding has been common: video sources compress using a source code, and the channel adds FEC separately. However, when the channel is time-varying, or when the source/channel is unknown, or when there are multiple channels (among a variety of other cases), joint source/channel methods can achieve greater gains.
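In symbols, restating the separation argument above (same notation):

    \[
      R \;<\; R' \;<\; C
      \quad\Longrightarrow\quad
      \text{end-to-end distortion } D(R) \text{ is achievable},
    \]

with the required block length, and hence delay, growing as R' approaches C.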
Our work has involved collaboration with ECSE researchers in the fields of video compression and biomedical imaging. Particular contributions include the joint design of congestion control, packetization, FEC techniques, multicast and multi-path techniques with scalable video coding. Applications of our work include multimedia in overlay, peer-to-peer and wireless networks. We have also applied our recursive random search (RRS) scheme (see the performance analysis tools section) to bioimaging problems as part of the NSF Engineering Research Center to investigate the broad area of Sub-surface Sensing and Imaging. PhD Students: Omesh Tickoo, Ivan Bajic, Yufeng Shan, Su Yi, Ye Tao, Muhammad Amri Abdul Karim.
Our Papers in Multimedia Networking
Congestion Control | Related Papers | Our Papers |
Our work in the area of ATM explicit rate congestion control (during my PhD with Prof. Raj Jain) has influenced the ATM ABR international standards. In particular, selected ATM Forum contributions and the ERICA scheme have been incorporated into ATM ABR standards documents (see the ATM Forum Contributions List).
A recent trend in traffic management architecture is a demand for a simpler core network focusing on packet forwarding, with complex TM functions moved to the edge of the network. However, the design of these edge-based mechanisms is non-trivial, involving a mix of control theory, estimation techniques, measurement and tomography. Congestion control contributions to this area include our work on overlay QoS using closed-loop control, emulation of AQM functions (randomization, queue control) from the edge, edge-based control of uncooperative users, and TCP rate control (with Packeteer). PhD Students: Yong Xia, David Harrison, Satish Raghunath, Karthikeya Chandrayana, Xingzhe Fan.
We have developed an edge-to-edge congestion control architecture that can be transparently applied to ISP domains. The basic architecture works at the network (IP, ATM or frame-relay) layer and involves pushing congestion back from the interior of a network, distributing it across edge nodes where the smaller congestion problems can be handled with flexible, sophisticated and cheaper methods. The edge-to-edge virtual link thus created can be used as the basis of several applications, including controlling TCP and non-TCP flows, improving buffer management scalability, developing simple differentiated services, and isolating bandwidth-based denial-of-service attacks. The edge-to-edge ideas were an outgrowth of an effort to apply the lessons learnt in designing rate-based congestion control for ATM networks and studying TCP/IP-over-ATM performance in my PhD dissertation. This work was made possible by generous funding from NSF interdisciplinary grants ANI9806660 and ANI9819112, and by grants from Intel Corp. and Nortel Networks.
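The flavor of the ingress-edge control loop can be conveyed with a minimal sketch (the AIMD constants and the binary congestion feedback from the egress edge are illustrative assumptions, not the published design):

    class EdgeToEdgeController:
        # Ingress-edge rate controller for one edge-to-edge "virtual link".
        def __init__(self, capacity_hint, alpha=0.05, beta=0.5):
            self.rate = 0.1 * capacity_hint   # conservative starting rate
            self.alpha = alpha                # additive increase (fraction of hint)
            self.beta = beta                  # multiplicative decrease
            self.hint = capacity_hint

        def on_feedback(self, congested):
            if congested:                     # interior queue building up:
                self.rate *= self.beta        # pull congestion back to the edge
            else:
                self.rate += self.alpha * self.hint

    # Packets exceeding the current rate queue at the ingress edge, where
    # flexible per-flow treatment (scheduling, pricing, policing) is cheap.
    ctrl = EdgeToEdgeController(capacity_hint=100e6)
    for congested in (False, False, True, False):
        ctrl.on_feedback(congested)
    print(ctrl.rate / 1e6, "Mb/s")            # 15.0 after two ups, one down, one up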
An interesting thing happens once congestion points are consolidated at the edges without packet losses in the network interior: the edge-to-edge congestion-control architecture becomes the basis for edge-based service provisioning, dynamic congestion-based pricing, overlay QoS, etc. Such edge-based congestion-sensitive services lie at the intersection of congestion control and QoS. PhD Students: David Harrison, Yong Xia, Satish Raghunath, Murat Yuksel.
Our work in wireless congestion control is centered on LT-TCP, a loss-tolerant TCP that incorporates adaptive FEC, adaptive segment sizing and ECN to operate over very high loss-rate paths. We are currently extending this framework to support cross-layer optimization of congestion control and reliability functions. PhD Students: Omesh Tickoo and Vijay Subramanian.
Our Papers in Congestion Control
Control-theoretic Analysis | Related Papers | Our Papers (1) | Our Papers (2) |
Congestion control lends itself to multiple modeling approaches: stochastic, optimization-based and control-theoretic. We have made contributions using all of these approaches. Stochastic models include TCP SACK models, end-system randomization and AQM analysis. The optimization framework has been used for accumulation-based congestion control, QoS emulation, uncooperative congestion control, and a 2-bit congestion control design for large bandwidth-delay product networks.
There is growing interest in the theoretical foundations of congestion control. Static optimization-based formulations have been made by Kelly et al. and Low et al. However, a full understanding of congestion control in a dynamic optimization framework is lacking, primarily because the control methods developed over the years are non-linear, which makes it hard to prove robust stability and performance convergence over a wide range of parameter uncertainties. Over the last few years, control theorists have started to partner with networking researchers to study this problem. Our work (in collaboration with Prof. Hitay Özbay and funded by an NSF interdisciplinary grant, ANI9806660) is one such example: we apply H-infinity control techniques to model and study the robust stability of a large class of existing congestion control schemes.
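For concreteness, the primal algorithm from Kelly et al.'s static optimization framework (quoted here as background, not as our own model) drives each source rate x_r according to

    \[
      \dot{x}_r(t) \;=\; \kappa_r \Bigl( w_r \;-\; x_r(t) \sum_{l \in r}
        p_l\bigl(\textstyle\sum_{s :\, l \in s} x_s(t)\bigr) \Bigr),
    \]

where w_r is source r's willingness to pay and p_l(.) is the congestion price generated at link l. Establishing robust stability of such non-linear dynamics under feedback delay and parameter uncertainty is precisely the difficulty noted above.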
More recently, I am collaborating with the RPI Controls Group (Profs. John Wen and Murat Arcak), who have developed a new unifying passivity-based framework for congestion control that captures prior static optimization frameworks as special cases. This framework also provides substantial flexibility and tools to aid the design of non-linear congestion controllers. Our activities include the use of non-linear control techniques (two-timescale design, small-gain approaches) to model the dynamics and robustness of uncooperative congestion control and edge-based AQM designs. We are also considering a mix of information- and control-theoretic models to investigate the impact of information increase or distortion in feedback, building on our work in explicit rate control and our ACM SIGCOMM paper ("one more bit is enough," in collaboration with Ion Stoica, UC Berkeley). These techniques become important as we push the Internet to scale to large bandwidth-delay product systems, move functions to the edges and depend upon unreliable estimation at small time-scales, or deal with disruptions and performance volatility in multi-hop wireless networks at multiple time-scales. PhD Students: Yong Xia, Xingzhe Fan, Karthikeya Chandrayana, Stephen Fitzhugh.
Multicast Congestion Control | Related Papers | Our Papers |
Reliable multicast transport (RMT) protocol standards development today is at the stage TCP was in the 1980s. One key IETF requirement is that congestion control must be specified before any formal RMT standard is approved. The new problems in multicast congestion control arise as we move from managing a "congested path" to managing a "large dynamic tree with multiple congestion points." A tree has multiple, possibly independent congestion points, which affects parameter estimation procedures (e.g., loss rate, RTT) in congestion control. The challenge is to develop a good filtering technique that gathers the required information without requiring support at too many points in the network ("generic" schemes). At the same time, it is important to be fair to the installed base of existing unicast TCP applications. We have been actively working with the Reliable Multicast Transport (RMT) group at the IETF. This project was initially funded by Reuters, and is now integrated with our work on multimedia networking. Our contributions include superior end-to-end feedback generation and filtering schemes for single-rate control (captured in a series of schemes: LE-SBCC, MCA, ERMCC), and a generalized method (GMCC) for building multi-rate controllers using single-rate controllers as the basis. PhD Students: Jiang Li and Murat Yuksel (post-doc).
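The filtering idea can be illustrated with a minimal sketch (this mirrors the generic "limiting receiver" pattern for single-rate control, not the specific LE-SBCC/MCA/ERMCC algorithms; the rate formula is the standard Mathis et al. TCP-friendly approximation):

    class LimitingReceiverFilter:
        # Reduce each receiver's (loss, RTT) report to a TCP-friendly rate
        # estimate, and let the slowest receiver drive the sender while
        # redundant reports are suppressed.
        def __init__(self, mss=1460):
            self.mss = mss
            self.limiting = None                 # (receiver_id, rate)

        def report(self, rid, loss_rate, rtt):
            # Mathis et al.: rate ~ (MSS/RTT) * sqrt(3 / (2 p))
            rate = (self.mss / rtt) * (1.5 / loss_rate) ** 0.5 \
                   if loss_rate > 0 else float("inf")
            if (self.limiting is None or rate < self.limiting[1]
                    or rid == self.limiting[0]):
                self.limiting = (rid, rate)      # new or updated worst path
            return self.limiting                 # sender paces to this rate

    f = LimitingReceiverFilter()
    f.report("r1", 0.01, 0.05)                   # low loss, 50 ms path
    print(f.report("r2", 0.05, 0.20))            # r2 becomes the limiting receiver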
TCP Performance and Enhancements: | Related Papers | Our Papers |
This project addresses the question: how good is best-effort service from the point of view of TCP performance? Best-effort performance is defined in terms of "provider" and "user" metrics. The goal is then to examine TCP performance problems in terms of these metrics and to design buffer management or congestion management solutions for them. We are following two tracks in this work. The first track involves modeling: we have developed an analytical model of TCP performance for short transfers that also generalizes to long transfers. The second track comprises several sub-projects to enhance TCP performance under various conditions: asymmetry, lossy wireless conditions, lack of AQM in networks, and enhanced fair sharing using TCP rate control.
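For reference, the classical steady-state throughput approximation that short-transfer models must generalize (Mathis et al.; quoted as background, not the model developed here) is

    \[
      T \;\approx\; \frac{MSS}{RTT}\,\sqrt{\frac{3}{2p}},
    \]

where p is the packet loss probability; short transfers additionally spend most of their lifetime in slow start, where this formula does not apply.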
TCP intertwines its loss-recovery and congestion control goals. Packet loss, especially burst loss or loss when TCP windows are small, therefore causes a complex set of side effects. We have partnered with a company, Packeteer, to research a loss-free and explicit method of controlling TCP called TCP rate control, and have shown that loss avoidance and explicit rate control improve "best-effort" service significantly. This work also derives from my PhD work on explicit rate control in ATM networks.
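The core arithmetic behind TCP rate control can be sketched as follows (a simplification: the actual product also paces acknowledgements, and the constants here are illustrative):

    def clamp_advertised_window(target_bps, rtt_s, mss=1460, rwnd=None):
        # Hold a flow near target_bps by rewriting the receiver window on
        # acks to roughly one bandwidth-delay product.
        bdp_bytes = target_bps / 8 * rtt_s    # bandwidth-delay product
        window = max(mss, int(bdp_bytes))     # at least one segment
        if rwnd is not None:
            window = min(window, rwnd)        # never exceed the real window
        return window

    # Cap a flow at 1 Mb/s on a 50 ms path -> about a 6250-byte window.
    print(clamp_advertised_window(1_000_000, 0.05))

Because the sender's window, rather than packet drops, limits the rate, the flow is steered without triggering loss recovery.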
The edge-to-edge architecture described earlier also addresses the problem of "better" best effort through a simple "divide-and-conquer" strategy: congestion at the interior nodes is broken up, and the smaller problems are consolidated at the edges. This effectively increases the bandwidth-delay product during congestion periods. The combination of closed-loop edge-to-edge control and TCP rate control at the edge can deliver zero-loss service for TCP, provided the traffic does not cross over to other underprovisioned domains. It also complements existing buffer management schemes by improving their scalability dramatically. Other edge-based or TCP-based schemes include edge-based AQM and Randomized TCP, which essentially try to emulate AQM capabilities over a network that does not provide them. PhD Students: David Harrison, Yong Xia, Karthikeya Chandrayana and Xingzhe Fan.
We have also investigated "open-loop distributed buffer management," exploiting another dimension in buffer management: packet marking. With a combination of TCP-friendly traffic conditioners and differentiated packet-dropping algorithms, we can improve best-effort performance and decrease the drop rate and timeouts seen by TCP.
As new technologies for broadband access emerge, it is important to ensure TCP/IP performance over these technologies. In a Pulsecom-supported project, we have modeled TCP/IP-over-ADSL performance and designed improvements to buffer management schemes in the ADSL components. With AT&T Research, we have also developed a version of TCP, LT-TCP, to handle heavy loss rates in wireless channels. A key problem here is that packet loss is confused with congestion. Moreover, even if this distinction were made, the reliability mechanisms need to be redesigned to reduce the risk of timeouts and raise the effective goodput. Our LT-TCP scheme uses adaptive FEC and adaptive segment-sizing techniques and has attracted the interest of AFOSR and DARPA (ongoing). PhD Students: Omesh Tickoo and Vijay Subramanian (ongoing).
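A minimal sketch of the adaptive-FEC provisioning step, in the spirit of LT-TCP (the safety margin, rounding policy and clamp are assumptions, not the published tuning):

    import math

    def repair_segments(k_data, loss_estimate, safety=1.1):
        # Size a k-of-n erasure-coded block so that, at the estimated loss
        # rate, the expected number of surviving segments still exceeds k.
        p = min(max(loss_estimate, 0.0), 0.95)     # clamp the estimate
        n = math.ceil(k_data * safety / (1.0 - p))
        return n - k_data                          # repair segments to add

    for p in (0.1, 0.3, 0.5):                      # heavier loss -> more FEC
        print(p, repair_segments(k_data=10, loss_estimate=p))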
BANANAS: A Connectionless Traffic Engineering Framework | Related Papers | Our Papers |
Edge-based QoS: Closed-Loop, Statistical and Point-to-Set QoS | Related Papers | Our Papers |
Overlay QoS using Closed-Loop Building Blocks: Traditionally, QoS data-plane building blocks are open-loop (schedulers, shapers, etc.). When we move functions to the edge, we can introduce a new building block: closed-loop congestion control. Basically, we make a network of FIFO queues appear like a network of schedulers (though only in the steady state), and modulate the fairness characteristics of closed-loop schemes at the edge to achieve QoS objectives. We have shown how to scalably emulate assured services (as in IETF diff-serv) using this model. The model also allows flexibility in handling admission control, permitting instead graceful degradation of lower-priority classes when the sum of assured rates exceeds bottleneck bandwidths. PhD Students: David Harrison and Yong Xia.
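A sketch of the mapping from assured rates to controller weights under this model (the proportional-degradation policy shown is an illustrative choice, not the published design):

    def edge_weights(assured_rates, bottleneck_hint):
        # Map each aggregate's assured rate to a weight for a weighted-fair
        # closed-loop edge controller. If the assured rates oversubscribe
        # the bottleneck, scale all classes down proportionally instead of
        # rejecting any of them (graceful degradation, no admission control).
        total = sum(assured_rates.values())
        scale = min(1.0, bottleneck_hint / total) if total else 1.0
        return {agg: rate * scale / bottleneck_hint
                for agg, rate in assured_rates.items()}

    print(edge_weights({"gold": 60e6, "silver": 60e6}, bottleneck_hint=100e6))
    # -> {'gold': 0.5, 'silver': 0.5}: both classes degrade together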
Dynamic Provisioning, Point-to-set QoS and VPNs: Traditionally, services are point-to-point because the underlying technologies used to implement them were connection-oriented. IP is connectionless, and hence IP services could potentially be point-to-anywhere in nature. However, point-to-anywhere services are a nightmare to provision statically. We have studied prediction and estimation techniques to see if short-term dynamic provisioning can reduce the wasteful provisioning headroom in such a system. Statistical provisioning of point-to-set services avoids dynamic estimation by trading off some multiplexing gains. We have also studied provisioning problems for VPNs based upon the hose model, in collaboration with Dr. K.K. Ramakrishnan (AT&T Research); this last piece of work involved issues in measurement, tomography and QoS provisioning. PhD Student: Satish Raghunath.
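The attraction of the hose model can be seen in one line of arithmetic (a sketch; the rates are illustrative). A link in the provider's tree never needs more capacity than the smaller of the total send commitment upstream of it and the total receive commitment downstream:

    def hose_link_bound(send_hoses_upstream, receive_hoses_downstream):
        # Hose-model capacity bound for a single link of the VPN tree.
        return min(sum(send_hoses_upstream), sum(receive_hoses_downstream))

    # Site A (may send up to 45 Mb/s) feeds sites B and C (may receive
    # 30 and 25 Mb/s): 45 suffices, versus 55 for the pipe-model worst case.
    print(hose_link_bound([45], [30, 25]))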
Dynamic QoS Services: | Related Papers | Our Papers |
Congestion-sensitive pricing: From an economic perspective, the marginal cost of bandwidth is zero except during congestion. The use of edge-to-edge control brings knowledge of the congestion cost to the edge systems in a timely manner, allowing the setup of short-term bidding or contracting systems for premium bandwidth. The pricing system can also be viewed as a traffic management mechanism on a longer time-scale, because price is also a mechanism for matching demand and supply. With T. Ravichandran (Lally School of Management, RPI), we have developed a "Dynamic Capacity Contracting" framework as the basis for such a system and are studying the interdisciplinary implications of pricing. PhD student: Murat Yuksel.
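The flavor of the price dynamics, as a minimal sketch (a simple tatonnement update; gamma and the contract interval are illustrative, not the Dynamic Capacity Contracting parameters):

    def update_price(price, demand, capacity, gamma=0.01):
        # Marginal-cost pricing: the price stays at zero while the resource
        # is uncongested and rises while demand exceeds capacity.
        return max(0.0, price + gamma * (demand - capacity))

    price = 0.0
    for demand in (80, 120, 150, 90, 60):     # offered load per interval (Mb/s)
        price = update_price(price, demand, capacity=100)
        print(demand, round(price, 2))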
Spot and Options Pricing: In collaboration with Aparna Gupta (Dept. of Decision Sciences, RPI), we have extended the short-term contracting framework to multiple ISPs (i.e., building end-to-end QoS contracts). We have modeled and developed spot/options pricing techniques to manage the risk and congestion costs of such end-to-end composite QoS contracts. PhD Student: Lingyi Zhang (advised by Prof. Gupta).