Dynamic Resource Allocation and Self-Organizing Signalling Optimisation in LTE-A Downlink

Publication date: 2015-10-09

Author:

Chiumento, Alessandro

Abstract:

Modern cellular networks present many interesting challenges to the telecommunication engineers of today. The idea of a static configuration with clearly defined cell borders, typical of older networks, is no longer representative of the current situation, and most certainly will not be for the next generations of communication technologies. Future mobile networks will involve a high number of base stations with varied capabilities and make use of a plethora of communication technologies and access media; they will be able to recognise dynamically shifting network conditions and requirements. The first real step towards a ubiquitous, high-performance network is represented by the 3GPP LTE technology, now widespread as 4G in many countries. Successive iterations of this technology, such as LTE-A, have brought (and will continue to bring) additional increases in performance by increasing network density and allowing self-organisation and self-healing.

The two main challenges addressed in this work for modern and future networks are, firstly, the interference management and self-organisation of heterogeneous networks and, secondly, the minimisation of the signalling control information necessary for the correct functioning of the network. First, a heterogeneous LTE-A downlink network is analysed. The various components of the downlink network are discussed and the effects of resource allocation within each cell are analysed. Novel proposed scheduling methods show that improvement over the state of the art is still possible and that, by taking the practical limitations of a real network into consideration, additional gains can be achieved. Second, a low-complexity, distributed and cooperative interference mitigation method, which is aware of network load and propagation conditions, is conceived and discussed. The proposed method is fully scalable and addresses the interference received by the different layers composing the network separately.
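For context on the per-cell resource allocation discussed above, the usual LTE downlink baseline against which novel schedulers are compared is proportional-fair (PF) scheduling. The sketch below is a minimal, generic PF scheduler, not the thesis's proposed method; all numbers (user count, resource blocks, rates) are illustrative.

```python
# Toy proportional-fair (PF) downlink scheduler: assign each resource
# block (RB) to the user maximising instantaneous rate divided by past
# average throughput. This is the common LTE baseline, NOT the novel
# scheduling method described in the thesis.
import random

def pf_schedule(inst_rates, avg_thr):
    """inst_rates[u][rb]: achievable rate of user u on resource block rb.
    avg_thr[u]: exponentially averaged past throughput of user u.
    Returns a list mapping each RB to the selected user index."""
    n_rb = len(inst_rates[0])
    assignment = []
    for rb in range(n_rb):
        best_user = max(range(len(avg_thr)),
                        key=lambda u: inst_rates[u][rb] / avg_thr[u])
        assignment.append(best_user)
    return assignment

random.seed(0)
n_users, n_rbs = 4, 6  # illustrative sizes
inst = [[random.uniform(0.1, 1.0) for _ in range(n_rbs)]
        for _ in range(n_users)]
avg = [0.5] * n_users  # equal history -> pure max-rate on this TTI
alloc = pf_schedule(inst, avg)
print(alloc)  # one user index per resource block
```

In a full scheduler the averages `avg` would be updated after every transmission interval, so users starved in the past see their PF metric rise and eventually win resource blocks, which is what yields the throughput/fairness trade-off.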
Finally, the impact that channel state information has on the network's performance is studied. The channel state information of the users' channels is necessary in order for the base station to assign frequency resources. On the other hand, this feedback information comes at a cost in uplink bandwidth which is traditionally not considered. The impact that reduced user feedback, in both time and frequency, has on an LTE network is studied. A model which considers the trade-off between downlink performance and uplink overhead is presented, and novel feedback allocation strategies, which follow the same structure as those in the LTE standard, are proposed in order to improve overall performance. Intelligent machine learning solutions are proposed to adapt the base station's feedback choice to the users' conditions and requirements. This way, the network can choose how much information it requires from its users, in both the time and the frequency domains, to minimise the control information overhead.
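The trade-off described above can be illustrated with a toy model: finer-grained channel feedback (more reported sub-bands) improves downlink scheduling gain with diminishing returns, while the uplink overhead grows linearly with the number of reports. The utility function and all constants below are hypothetical illustrations, not the model developed in the thesis.

```python
# Toy downlink-gain vs uplink-overhead trade-off. The logarithmic gain
# curve, the linear overhead term and every constant are hypothetical,
# chosen only to make the trade-off visible.
import math

def net_utility(n_subbands, gain_scale=10.0, bits_per_report=4,
                uplink_cost=2.0):
    """Net benefit of reporting CQI for n_subbands sub-bands:
    sub-linear downlink gain minus linear uplink feedback cost."""
    downlink_gain = gain_scale * math.log2(1 + n_subbands)
    overhead = uplink_cost * bits_per_report * n_subbands / 10.0
    return downlink_gain - overhead

# Sweep feedback granularities and pick the best one under this model.
best = max(range(1, 33), key=net_utility)
print(best, round(net_utility(best), 2))
```

Even this crude model shows why "report everything" is not optimal: past a certain granularity the extra uplink bits cost more than the scheduling gain they buy, which is the motivation for the adaptive feedback allocation strategies summarised in the abstract.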