Note! This project ended on 31.12.2004.

FIT - Future Internet: Traffic Handling and Performance Analysis (2001-2004)

Research tasks

Congestion control and avoidance in the Internet

Traditionally, congestion control in the Internet has been handled by TCP, which currently carries the majority of the traffic (e.g. Web, FTP, and e-mail traffic). As TCP traffic volumes are expected to grow in the future, it is important to gain a thorough understanding of its dynamics. However, TCP alone is not enough to guarantee the stability of the network, and for this reason the IETF has promoted the use of active queue management (AQM) methods. These attempt to inhibit the build-up of congestion in the buffers by signaling the traffic sources to reduce their sending rates before the buffer becomes overloaded.

One of the most prominent active queue management methods, also implemented in commercial routers, is the Random Early Detection (RED) algorithm. Simulation studies have shown that the algorithm is able to reduce the global synchronization effect and to increase fairness during congestion. In the literature, numerous variants of the RED algorithm have been proposed to deal with the problem of non-TCP (non-responsive) flows.
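The core of RED can be sketched in a few lines: the router tracks an exponentially weighted moving average of the queue length and drops (or marks) arriving packets with a probability that grows linearly between two thresholds. The sketch below follows the parameter names of the original RED proposal (min_th, max_th, max_p, w_q); the concrete parameter values are illustrative, not taken from this project.

```python
import random

def red_drop_probability(avg_q, min_th, max_th, max_p):
    """Piecewise-linear RED drop/mark probability as a function of the
    average queue length (the "gentle" variants differ above max_th)."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

class RedQueue:
    """Minimal sketch of a RED-controlled buffer (illustrative only)."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, w_q=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.w_q = max_p, w_q
        self.avg = 0.0          # EWMA of the instantaneous queue length
        self.queue = []

    def enqueue(self, pkt):
        # update the exponentially weighted moving average of the queue length
        self.avg = (1 - self.w_q) * self.avg + self.w_q * len(self.queue)
        p = red_drop_probability(self.avg, self.min_th,
                                 self.max_th, self.max_p)
        if random.random() < p:
            return False        # packet dropped (or ECN-marked) early
        self.queue.append(pkt)
        return True
```

Because the drop probability rises gradually, different TCP flows back off at different times, which is precisely how RED mitigates global synchronization.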

In the COM2 project (see publications 14, 17, 18, 23, and 29), the mathematical analysis of the RED algorithm was initiated and a differential-equation model was developed to describe the dynamics of a buffer controlled by the RED algorithm. The model was later extended to cover the interaction of the RED-controlled buffer with an idealized TCP population. However, there are still significant opportunities for further analysis and generalization, notably with regard to stability and improved user adaptation algorithms.
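To illustrate what such a differential-equation model looks like, the sketch below integrates a heavily simplified TCP/RED fluid model by the Euler method. It is only in the spirit of the models cited above: feedback delays are omitted, and all parameter values (number of flows, link capacity, RED thresholds) are illustrative assumptions, not the project's actual model.

```python
def simulate_tcp_red_fluid(n_flows=10, capacity=1000.0, rtt=0.1,
                           min_th=20.0, max_th=80.0, max_p=0.1,
                           dt=1e-4, t_end=10.0):
    """Euler integration of a simplified TCP/RED fluid model
    (delay-free illustration only).  W is the per-flow congestion
    window (packets), q the queue length (packets); n_flows flows
    share a link of `capacity` packets/s.  Returns the time-averaged
    (W, q) over the second half of the run."""
    W, q = 1.0, 0.0
    w_sum = q_sum = 0.0
    steps = int(t_end / dt)
    for step in range(steps):
        # instantaneous RED marking probability from the queue length
        if q < min_th:
            p = 0.0
        elif q >= max_th:
            p = 1.0
        else:
            p = max_p * (q - min_th) / (max_th - min_th)
        dW = 1.0 / rtt - (W * W / (2.0 * rtt)) * p   # AIMD window dynamics
        dq = n_flows * W / rtt - capacity            # aggregate input minus capacity
        W = max(W + dW * dt, 1.0)
        q = max(q + dq * dt, 0.0)
        if step >= steps // 2:                       # average over the second half
            w_sum += W
            q_sum += q
    n_avg = steps - steps // 2
    return w_sum / n_avg, q_sum / n_avg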

QoS through traffic handling and pricing

There have been numerous attempts to bring Quality of Service (QoS) into the Internet. At present, the aim is to develop models for end-to-end QoS by combining the advantages of IntServ and DiffServ. The benefit of DiffServ, especially in the ISP core networks, is that the network only sees a discrete set of service classes or aggregates. Although a variety of DiffServ mechanisms and service models exist, the applicability to users and implementation by service providers are still open issues.

Pricing mechanisms have been proposed, either on their own or on top of differentiation mechanisms, as a way to provide QoS. Here again a variety of schemes exists, but they have mostly been considered as alternatives to differentiation mechanisms, rather than as a means of combining differentiation with economic incentives. Congestion pricing, developed and advocated by F. Kelly and his coworkers, is one of the prominent pricing schemes. It is a kind of ECN (Explicit Congestion Notification) scheme in which the network uses marks, each coupled with a price, to signal to the sources the impact their traffic has on congestion. Rational users attempt to maximize the difference between their utility and their cost. As a whole, the system consisting of the network and the users maximizes, in a distributed fashion, the total welfare. The objective in this study is to develop new marking mechanisms (based, e.g., on Markov decision theoretic models) and to study their behavior together with the user algorithms, with regard to both the accuracy of the system optimization and the fairness between different users.
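The distributed welfare maximization can be illustrated with a sketch of Kelly's primal rate-control algorithm: each user with logarithmic utility w_i log x_i adjusts its rate toward the point where its marginal utility equals the congestion price fed back by the network. The single-link price function p(y) = (y/C)^4 below is an illustrative assumption (any increasing mark-rate function would do), not a mechanism from this project.

```python
def kelly_primal(weights, capacity, gain=0.01, iters=20000):
    """Sketch of Kelly's primal algorithm on one link: user i updates
    x_i += gain * (w_i - x_i * price), where `price` is the per-unit
    congestion price signaled by marks.  At equilibrium, rates are
    weighted proportionally fair: x_i proportional to w_i."""
    x = [0.1 for _ in weights]
    for _ in range(iters):
        y = sum(x)
        price = (y / capacity) ** 4      # illustrative congestion-price function
        x = [xi + gain * (wi - xi * price) for xi, wi in zip(x, weights)]
    return x

# two users with utility weights 1 and 2 sharing a link of capacity 3:
rates = kelly_primal([1.0, 2.0], capacity=3.0)
```

With these weights the rates converge close to (1, 2): the users split the capacity in proportion to their willingness to pay, without any central coordination.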

The proposed study on QoS mechanisms is a broad research topic that builds on ongoing and planned research by the HUT group and elaborates on research on scheduling and pricing mechanisms. The objective is to evaluate and compare the proposed DiffServ mechanisms and models as well as pricing schemes. The mathematical models considered include packet-level models for the scheduling mechanisms together with queue management and TCP sources, and flow-level models for QoS mechanisms to study the division of bandwidth and fairness. The study will thus give service providers guidelines on both which mechanisms to choose and which traits to look for given a specific customer profile.

Performance bounds by stochastic network calculus

Network calculus is a relatively new tool, based on min-plus algebra, for analyzing queueing problems arising in communication networks. In particular, network calculus allows one to derive deterministic bounds on quantities such as the maximum queue length and the end-to-end delay over a whole network. The concepts were introduced in the seminal work of Cruz (1991) and have since been developed further by several researchers (e.g. Baccelli, Chang, and Le Boudec). Important attributes are that the bounds are strict and that they apply end-to-end. It is required that the incoming traffic streams are policed so that the traffic is upper-bounded by an arrival curve, and that the service in each node is provisioned so that the service received by each stream is lower-bounded by a service curve. The application of network calculus has led to many important results and has provided insight into service quality issues (e.g. delay bounds for the EF service in the DiffServ architecture).
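The best-known special case makes this concrete: for a token-bucket arrival curve alpha(t) = sigma + rho*t and a rate-latency service curve beta(t) = R*max(0, t - T), the standard deterministic bounds are delay <= T + sigma/R and backlog <= sigma + rho*T, valid whenever rho <= R. The helpers below simply encode these textbook formulas; the parameter values in the usage note are arbitrary.

```python
def delay_bound(sigma, rho, R, T):
    """Deterministic network-calculus delay bound for a token-bucket
    arrival curve alpha(t) = sigma + rho*t served by a rate-latency
    service curve beta(t) = R*max(0, t - T); requires rho <= R."""
    if rho > R:
        raise ValueError("unstable: sustained arrival rate exceeds service rate")
    return T + sigma / R

def backlog_bound(sigma, rho, R, T):
    """Corresponding deterministic backlog bound: alpha(T) = sigma + rho*T."""
    if rho > R:
        raise ValueError("unstable: sustained arrival rate exceeds service rate")
    return sigma + rho * T
```

For example, a flow policed to a burst of 10 packets at a rate of 5 packets/s, served at 20 packets/s after a latency of 0.1 s, is delayed at most 0.1 + 10/20 = 0.6 s, whatever the actual arrival pattern.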

As important as it is to have strict bounds, this approach has the weakness that the bounds may not be tight and may indeed be far from the typical behavior. Recently, there have been attempts to develop methods that provide statistical bounds, i.e. bounds that are exceeded only with some (small) probability, starting from a statistical service definition. The notion of effective envelopes was introduced by Boorstyn et al. (2000), but that work still calls for an extension to a real network context, i.e. the development of a stochastic network calculus. A central tool here is large deviations theory (LDT).

Traffic engineering for multicast transmissions

Multicast is the most natural and efficient platform for group communications, such as audio/video distribution, software upgrading, multimedia conferencing, and group collaboration. In its basic form, multicast allows one sender to transmit data simultaneously to many receivers, say N, using a multicast distribution tree. In principle, the same could be achieved with a set of ordinary unicast (i.e. point-to-point) transmissions, but then, in the worst case, the same data would have to be sent N times, compared with the single copy needed for a multicast transmission.
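The saving is easy to quantify: with unicast, every link on every sender-to-receiver path carries one copy per receiver behind it, whereas with multicast each link of the distribution tree carries the data exactly once. The sketch below counts link crossings for both cases on a small hypothetical tree (node names are made up for illustration).

```python
def unicast_cost(paths):
    """Total link crossings when each receiver gets its own copy:
    the sender transmits along every path separately."""
    return sum(len(p) for p in paths)

def multicast_cost(paths):
    """Total link crossings on the multicast distribution tree:
    each edge of the union of the paths carries the data once."""
    edges = set()
    for p in paths:
        edges.update(p)
    return len(edges)

# hypothetical two-level tree: root -> a, b;  a -> r1, r2;  b -> r3, r4
paths = [
    [("root", "a"), ("a", "r1")],
    [("root", "a"), ("a", "r2")],
    [("root", "b"), ("b", "r3")],
    [("root", "b"), ("b", "r4")],
]
```

Here unicast crosses 8 links against 6 for multicast; when all N receivers sit behind one long shared path, the unicast cost approaches N times the multicast cost, which is the worst case mentioned above.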

IP Multicast is said to be "a requirement, not an option, if the Internet is going to scale." This observation is supported by the inclusion of multicast capabilities in the next-generation Internet protocol, IPv6. Already today, IP Multicast is possible over the experimental multicast backbone, Mbone, which relies on the plain best-effort service; thus there are no guarantees on the QoS of the multicast transmission. One solution to this is the use of advance resource reservations, as proposed in the IntServ architecture. As soon as advance reservations are made, the blocking probability (i.e., the probability that there are not enough resources available at the time of a request) becomes one of the main performance measures.
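The simplest instance of such a blocking-probability computation, and a building block for the multiservice models studied here, is the classical Erlang-B formula for a single link with Poisson arrivals; the sketch below uses the standard numerically stable recursion rather than the factorial form.

```python
def erlang_b(servers, offered_load):
    """Erlang-B blocking probability for a single link with `servers`
    circuits and `offered_load` erlangs of Poisson traffic -- the
    classical single-service special case of multiservice blocking
    models.  Recursion: B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b
```

For example, one erlang offered to two circuits is blocked with probability 0.2; adding circuits drives the blocking probability down very quickly, which is why exact but efficient algorithms matter for the much larger multiservice state spaces.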

The HUT group has studied multicast networks for several years within both the COM2 and COST257 projects (see the publications therein). The focus has been on the development of exact methods for calculating the blocking probabilities of distribution-type multicast applications in multiservice circuit-switched networks. In this study, the objectives are to further generalize the existing multicast network model and to derive exact algorithms for computing the blocking probabilities. Approximations of the blocking probabilities, based on simulation techniques (importance sampling) or reduced-load approximations, can also be developed. Another task is to apply Markov decision theory to the optimal routing problem.

The Networking Laboratory is now part of the Department of Communications and Networking. The information on this page may be outdated.

This page was last updated on 11.02.2005 at 11:32.