Flow-aware scheduling and resource sharing in wireless cellular networks
Circuit-switched services offering a constant bit rate to the applications are handled in 3G systems by controlling the transmission power of the active users. However, for so-called elastic applications, which tolerate variations in the bit rate offered by the service, it has been shown to be more efficient to send the data in a time-slotted manner, which also effectively eliminates the intra-cell interference between different users. Elimination of inter-cell interference is not considered in present-day 3G standards. For example, HDR (High Data Rate) and HSDPA (High Speed Downlink Packet Access), defined in 3GPP2 and 3GPP, respectively, offer high-speed wireless data services where, in the downlink direction, the base station always sends to one active user at a time. The problem then is to determine the optimal scheduling policy among the competing users. It has been shown that, for a fixed number of users, an opportunistic scheduling scheme that reasonably balances system performance and fairness is the so-called PF (Proportionally Fair) policy, where the right to transmit is given to the user with the highest instantaneous data rate relative to its current average throughput.
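The PF criterion can be sketched in a few lines of code. The following is a minimal toy model, not the HDR/HSDPA scheduler itself: the feasible rates are drawn from an invented distribution, and the function names and the averaging time constant are illustrative assumptions only.

```python
import random

def pf_schedule(inst_rates, avg_throughputs):
    """Return the index of the user with the highest ratio of instantaneous
    feasible rate to its exponentially averaged throughput (PF criterion)."""
    return max(range(len(inst_rates)),
               key=lambda i: inst_rates[i] / avg_throughputs[i])

def simulate(num_users=4, slots=10000, tc=1000.0, seed=1):
    """Toy time-slotted downlink: in each slot every user reports a random
    feasible rate (user i has a higher mean rate than user i-1); the PF
    scheduler serves one user per slot, and the average throughputs are
    updated with an exponential moving average of time constant tc slots.
    Returns the number of slots each user was served."""
    rng = random.Random(seed)
    avg = [1e-6] * num_users          # small positive initial averages
    served = [0] * num_users
    for _ in range(slots):
        rates = [rng.uniform(0.1, 1.0) * (i + 1) for i in range(num_users)]
        k = pf_schedule(rates, avg)
        served[k] += 1
        for i in range(num_users):
            r = rates[i] if i == k else 0.0
            avg[i] += (r - avg[i]) / tc  # EWMA throughput update
    return served
```

In this toy setting each user ends up being served a roughly equal share of the slots even though the mean feasible rates differ, which illustrates the fairness side of PF; the opportunistic side shows up in that each user tends to be served in slots where its own rate is relatively high.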
Much of the traditional performance analysis of HDR/HSDPA systems relies on the assumption of a fixed number of active flows. In reality, the number of flows varies randomly in time. New models have recently been introduced by Borst and by Bonald and Proutière, which take into account the statistical nature of the traffic at the flow level (hence the name flow-level models). The models are essentially PS (processor sharing) models, which possess, from a robustness point of view, the very appealing insensitivity property: the performance depends only on the mean values of the traffic characteristics. The impact of the scheduling policy employed at the packet level is that the scheduling can either increase or decrease the cell capacity via the so-called scheduling gain factor. The influence of user mobility within a cell has been studied by Bonald et al.
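The flow-level PS model boils down to a handful of classical M/G/1-PS formulas. The sketch below is our own illustration (the function name and parameters are not from the cited works); a packet-level scheduling gain could be folded into the effective cell capacity.

```python
def ps_flow_level(arrival_rate, mean_flow_size, capacity):
    """Flow-level performance of a single cell modelled as an M/G/1-PS
    queue.  By insensitivity, the results depend on the flow-size
    distribution only through its mean.  Returns (load, mean number of
    active flows, mean per-flow throughput); stability requires load < 1."""
    rho = arrival_rate * mean_flow_size / capacity
    if rho >= 1:
        raise ValueError("unstable: offered load reaches or exceeds capacity")
    mean_flows = rho / (1 - rho)            # E[N] of M/G/1-PS
    mean_throughput = capacity * (1 - rho)  # average rate seen by a flow
    return rho, mean_flows, mean_throughput
```

For example, at load 0.5 a flow sees on average half the cell capacity, and one flow is active on average, regardless of whether the flow sizes are exponential, deterministic, or heavy-tailed.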
Performance analysis yields insights not directly apparent from qualitative inspection alone. Additionally, accurate and robust models are essential, for example, for the proper dimensioning of HDR/HSDPA systems. As seen above, PS models have proven to be a powerful class of models even for wireless systems. Our research group already has experience in applying these models (the balanced-fairness concept, too, is related to PS models), and we are hence in a good position to pursue this research direction.
Insensitive traffic engineering - applications of the balanced fairness principle
In queueing theory, insensitivity refers to a property a system may have, namely that the state distribution, and consequently all performance metrics, are independent of detailed traffic characteristics and depend only on the traffic loads. As said before, this is a very attractive property, as it makes traffic engineering of the system robust. The idea of insensitivity has a long history in the traffic theory literature. Indeed, the celebrated Erlang formula was the first example of an insensitive result. Since then, the concept has reappeared from time to time in different contexts, and understanding of the underlying principles and conditions has developed over the years.
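The Erlang B formula mentioned above is easy to make concrete. The snippet below uses the standard numerically stable recursion; the function name is our own.

```python
def erlang_b(servers, offered_load):
    """Erlang B blocking probability for a loss system with the given
    number of servers and offered load (in erlangs), computed with the
    stable recursion B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1)).
    The result depends on the holding-time distribution only through its
    mean -- the classic example of insensitivity."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b
```

For instance, 10 circuits offered 5 erlangs block about 1.8% of the calls, whether the holding times are exponential or deterministic.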
Recently, Bonald and Proutière have shown that it is possible to define a bandwidth sharing scheme among the competing flows (file transfers) in the Internet such that the resulting system is insensitive and yet the network resources are shared efficiently. Their bandwidth sharing scheme, Balanced Fairness (BF), is based on the theory of balanced processor-sharing-type queueing networks, known as Whittle networks.
Besides providing a robust basis for traffic engineering, BF has another attractive property: the performance of a network under BF in a dynamic setting is amenable to analysis, in contrast to better-known concepts such as max-min fairness, for which an analytical solution for anything but trivial toy networks is beyond reach even under simplifying Markovian assumptions. An efficient computational algorithm for evaluating the performance of BF in certain types of network topologies was presented by Virtamo and co-authors. It turns out that the performance of BF is not very different from that of, e.g., max-min fairness, which means that BF provides an approximate computational tool for, broadly speaking, any "fair" network. BF also allows one to derive upper and lower performance bounds for many sensitive systems.
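The balance condition behind BF can be illustrated in the simplest possible case, a single link shared by several flow classes, where balanced fairness reduces to plain processor sharing. The sketch below is our own illustration (the function names are invented): the allocation is derived from a balance function Phi via phi_i(x) = Phi(x - e_i) / Phi(x), and on one link this yields class i the share capacity * x_i / |x|.

```python
from math import factorial

def balance_function(x, capacity):
    """Balance function of a single link of given capacity shared by
    several flow classes: Phi(x) = |x|! / (prod_i x_i!) / C^|x|,
    with Phi(0) = 1."""
    n = sum(x)
    phi = factorial(n) / capacity ** n
    for xi in x:
        phi /= factorial(xi)
    return phi

def bf_rates(x, capacity):
    """Total rate of each class in state x under balanced fairness:
    phi_i(x) = Phi(x - e_i) / Phi(x).  On a single link this reduces to
    processor sharing, i.e. class i receives capacity * x_i / |x|."""
    rates = []
    for i, xi in enumerate(x):
        if xi == 0:
            rates.append(0.0)
            continue
        xm = list(x)
        xm[i] -= 1
        rates.append(balance_function(xm, capacity) / balance_function(x, capacity))
    return rates
```

With flows (2, 1, 1) on a 10 Mbit/s link the classes receive 5, 2.5 and 2.5 Mbit/s, and the allocations derived this way automatically satisfy the balance property that makes the stationary distribution insensitive.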
Penttinen and Virtamo introduced an extension of the BF principle, which allows efficient insensitive bandwidth sharing also in systems with more complex constraints than those set by fixed-capacity links, e.g. wireless networks with interfering links. Similar extensions are possible for systems where traffic flows are split over a number of routes.
In insensitive traffic engineering, the performance is independent of detailed traffic characteristics but does depend on the traffic matrix, which defines the traffic demands between all origin-destination (OD) pairs. The traffic matrix is thus the primary input to all traffic engineering. Unfortunately, in a large network, gaining full knowledge of the traffic matrix is difficult. What is usually available are link load measurements over certain intervals. Inferring the traffic matrix from these measurements is, however, a heavily under-determined problem: one has to utilize prior information or assumptions, possibly augmented with collected link load statistics, including covariance data, and use Bayesian inference methods. This is an important and interesting field of research, and work in this area was already started within the FIT project.
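The under-determined nature of the problem and the role of the prior can be demonstrated with a deliberately simple stand-in for the Bayesian machinery: regularized least squares solved by gradient descent. This is our own toy sketch, not a method from the literature above, and all names and parameters are illustrative.

```python
def estimate_traffic_matrix(routes, link_loads, prior, lam=0.1,
                            steps=5000, lr=0.01):
    """Toy estimator for the traffic matrix inference problem: minimise
    ||A x - y||^2 + lam * ||x - prior||^2 by gradient descent, where A is
    the 0/1 routing matrix (routes, one row per link), y the measured link
    loads, and the prior encodes a priori knowledge of the OD demands.
    A sketch only; a real estimator would use proper Bayesian inference."""
    x = list(prior)
    n_links, n_od = len(routes), len(prior)
    for _ in range(steps):
        # residual load on each link for the current estimate
        r = [sum(routes[l][j] * x[j] for j in range(n_od)) - link_loads[l]
             for l in range(n_links)]
        for j in range(n_od):
            g = 2 * sum(routes[l][j] * r[l] for l in range(n_links))
            g += 2 * lam * (x[j] - prior[j])
            x[j] = max(0.0, x[j] - lr * g)  # demands are nonnegative
    return x
```

As a minimal example, one link carrying two OD flows gives a single measurement y = x1 + x2, which cannot separate the demands by itself; with the prior (6, 2) and measured load 10, the estimate lands near (7, 3), splitting the unexplained load according to the prior.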
Peer-to-peer networking - performance and traffic issues
Peer-to-peer (P2P) technology is a new paradigm that has already challenged the traditional client/server applications in the Internet. For example, over half of the Internet backbone traffic today is generated by peer-to-peer file sharing applications such as Kazaa and Gnutella. In the peer-to-peer approach, networking is extended to the application layer: the network layer forms a basis on which an overlay is constructed at the application layer to convey the service requests.
In the traditional client/server approach, the application layer entities are either transitory clients consuming the services or permanent servers producing them, and the conveying task is left to the network layer. Thus, the more clients, the more workload. In the peer-to-peer approach, the autonomous and transitory application layer entities, now called peers, are at the same time consumers and producers of the services, and even conveyors of the service requests. Thus, the more peers, the more workload but also the more service capacity. In fact, the relation is not that simple: in addition to well-behaved peers taking care of their service tasks, peer-to-peer systems typically contain many free-riders that act only as consumers, owing to the lack of any centralized control.
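The scaling argument above can be summarized in a back-of-the-envelope model. This is purely our own illustration, not a model from the literature cited below:

```python
def p2p_service_ratio(peers, free_rider_fraction, upload_per_peer,
                      demand_per_peer):
    """Toy capacity model: only a fraction (1 - f) of the peers contribute
    upload capacity, while every peer generates demand.  Returns the ratio
    of contributed capacity to total demand; the system scales with the
    population iff (1 - f) * u >= d, independently of the number of peers."""
    capacity = peers * (1 - free_rider_fraction) * upload_per_peer
    demand = peers * demand_per_peer
    return capacity / demand
```

For instance, with 60% free-riders, unit upload capacity and per-peer demand 0.5, the ratio is 0.8: the population grows, but the contributed capacity persistently falls short of the demand.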
Another prominent feature distinguishing peer-to-peer file sharing systems from, e.g., the traditional client/server-based WWW is the magnitude of the file sizes: typical Kazaa files are either audio files of the order of a few megabytes or video files of the order of hundreds of megabytes, while typical web pages are in the range of a few kilobytes or tens of kilobytes. On the other hand, once an audio or video file has been transferred to a user, there is no need for the user to repeat the request in a peer-to-peer file sharing system, while web pages, even static ones, are reloaded again and again by a user (at least the more rarely fetched ones that are not cached by the client).
The performance of such peer-to-peer systems is an extremely important but still to a great extent unexplored area of research. Central general concepts here are connectivity, reliability, scalability and service availability. Regarding file sharing applications, the various delays in both the file lookup and the file transfer phases play an important role, as does the probability that a given file is ever found. These performance parameters are affected, e.g., by the ways the neighbourhood relations are defined in the file lookup and file transfer phases. Due to the enormous scale of such systems, simulation techniques are hard to apply. Thus, simple but still illuminating models are needed for the performance analysis. Until now, only a few such models have been presented in the literature. Ge et al. present a multi-class closed queueing network model that offers a global but rather complicated view of peer-to-peer file sharing systems. Zou and Ammar take a different, local approach, focusing on a Markovian model describing the evolution of a single shared file among the user population. Clevenot and Nain propose a fluid approach for modelling peer-to-peer content distribution systems, where the content dynamics are modelled as a fluid.
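To give a flavour of such simple models, the sketch below simulates a birth-death chain for the number of copies of a single shared file, loosely in the spirit of the single-file models mentioned above. All rates, parameters and names are invented for illustration and do not reproduce any of the cited models.

```python
import random

def simulate_file_copies(n_peers=100, beta=0.5, mu=0.1,
                         t_end=200.0, seed=2):
    """Toy continuous-time birth-death chain: n = number of peers holding
    a copy of the file, starting from 10 initial copies.  New copies appear
    at rate beta * n * (N - n) / N (a peer without the file downloads it
    from one that has it); copies disappear at rate mu * n (a holder leaves
    the system).  Returns the number of copies at time t_end."""
    rng = random.Random(seed)
    n, t = 10, 0.0
    while t < t_end and n > 0:
        birth = beta * n * (n_peers - n) / n_peers
        death = mu * n
        total = birth + death
        t += rng.expovariate(total)          # time to the next event
        n += 1 if rng.random() < birth / total else -1
    return n
```

The quasi-stationary level solves beta * (1 - n/N) = mu, i.e. n is near N * (1 - mu/beta) = 80 copies for these parameters, around which the simulated population fluctuates; with mu > beta the file eventually dies out, giving a simple availability criterion.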
Tietoverkkolaboratorio is now part of the Department of Communications and Networking. The information on this page may be outdated.
This page was last updated on 8 February 2005 at 11:00.