The distant ancestors of the complex, high-performance 4G and 5G broadband wireless systems we know today were the very first generation of cellular systems. These 1G networks were characterized not only by impenetrable system designations – AMPS in the Americas, TACS in the UK, and NMT in the Nordic countries – but also by a myriad of static, manually configured network design parameters. These included the frequency planning of channel assignments to cells, base station transmission power levels, handoff thresholds and procedures, and antenna beamwidths, pointing angles, and downtilt, plus many more arcane but critical parameters. Needless to say, as these early networks evolved and scaled into the first, very large, 2G digital GSM deployments, network maintenance required a small army of operations staff. Maintaining and tuning networks was an art of the blackest kind.
The resulting massive operational expense was supportable at a time of frenzied growth in cellular service and proliferating use of wireless communications. But as the industry matured, as mobile penetration rates approached and even passed 100%, and as growth in average revenue per user (ARPU) stalled, a phalanx of field technicians was no longer an option. So what was the humble network operator to do in the face of oppressive operational expenses and flattening revenues?
The answer began to emerge with the first 4G LTE systems in the late 2000s, in the form of so-called self-organizing networks, or SON. SON allowed network engineers, for the first time, to delegate selected elementary network design, optimization, and self-healing tasks to the network itself. The network might, for example, automatically plan, and hence maximally separate in space, the cell identifiers used to drive handoff decisions. Or it might detect an interference overload at a particular base station and adjust handoff thresholds to drive user traffic into a neighboring cell. Or it might recruit mobile devices passing through a cell as unwitting agents, sending measurement reports on serving and interfering cells that let the network adjust base station radiated power levels or antenna tilt.
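To make the flavor of this early automation concrete, here is a minimal sketch of a rule-based load-balancing step of the kind SON popularized, written in Python. The data model, thresholds, and adjustment step are assumptions chosen for illustration; real SON implementations are vendor-specific and far more involved.

```python
# Illustrative rule-based SON load balancing (hypothetical thresholds
# and data model, not drawn from any vendor implementation).
from dataclasses import dataclass

@dataclass
class Cell:
    cell_id: int
    prb_utilization: float    # fraction of physical resource blocks in use
    handoff_offset_db: float  # offset biasing handoff decisions

OVERLOAD_THRESHOLD = 0.85     # assumed overload trigger level
OFFSET_STEP_DB = 1.0          # assumed adjustment granularity
MAX_OFFSET_DB = 6.0           # assumed safety cap

def rebalance(serving: Cell, neighbor: Cell) -> None:
    """If the serving cell is overloaded and the neighbor has headroom,
    bias handoffs toward the neighbor by raising the serving cell's offset."""
    if (serving.prb_utilization > OVERLOAD_THRESHOLD
            and neighbor.prb_utilization < OVERLOAD_THRESHOLD):
        serving.handoff_offset_db = min(
            serving.handoff_offset_db + OFFSET_STEP_DB, MAX_OFFSET_DB)

cell_a = Cell(cell_id=101, prb_utilization=0.92, handoff_offset_db=0.0)
cell_b = Cell(cell_id=102, prb_utilization=0.40, handoff_offset_db=0.0)
rebalance(cell_a, cell_b)
print(cell_a.handoff_offset_db)  # 1.0: traffic nudged toward cell 102
```

Multiply a handful of such rules by thousands of cells and dozens of interacting parameters, and the limits of the approach become apparent.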
SON, though, while an exciting first step – at least for network engineers – had serious limitations. Perhaps most concerning was the overwhelming number of network parameters that had to be self-optimized. These often numbered in the thousands, more than the simple rule-based SON designs of the 2010s could comprehend and manage. To the rescue came contemporaneous breakthroughs in the fields of big data and deep machine learning. As a result, a new frontier of intelligent network operation has emerged today, one that offers significant innovation and investment opportunities.
So, what trends have driven this new phase of network design innovation? First is the introduction of artificial intelligence and machine learning (AIML) techniques to network management. Initially, this meant applying classical AIML methods to network event classification and to assisting SON; methods with exotic names such as support vector machines and random forests were brought to bear. More recently, however, a much broader and richer set of AIML algorithms has come into play, including convolutional neural networks (CNNs) from the image classification domain, graph neural networks (GNNs) from knowledge management and social networks, and deep reinforcement learning (DRL) and deep federated learning (DFL) from autonomous vehicles and financial analysis. These algorithms, when combined with contemporary compute engines, have delivered very promising results in a variety of network tasks, including scheduling transmissions to mobile users, selecting base station antenna array beams, managing 'slices' of network routing and transport resources to guarantee quality of service (QoS), and detecting anomalous flows within the network that might indicate a security vulnerability.
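To give a feel for the reinforcement learning flavor of these techniques, the sketch below learns a beam choice by trial and error. It is a toy epsilon-greedy bandit, a radically simplified stand-in for the DRL beam-selection agents described above; the beam count, reward model, and learning rate are all assumptions.

```python
# Toy epsilon-greedy bandit for beam selection: a drastically simplified
# stand-in for a DRL agent (beam count and reward model are assumptions).
import random

NUM_BEAMS = 8
EPSILON = 0.1        # exploration probability
LEARNING_RATE = 0.05

q_values = [0.0] * NUM_BEAMS  # running estimate of throughput per beam

def measured_throughput(beam: int) -> float:
    """Placeholder for a real channel measurement; here beam 3 is best."""
    return max(0.0, random.gauss(1.0 if beam == 3 else 0.5, 0.2))

for _ in range(5000):
    if random.random() < EPSILON:
        beam = random.randrange(NUM_BEAMS)                      # explore
    else:
        beam = max(range(NUM_BEAMS), key=q_values.__getitem__)  # exploit
    reward = measured_throughput(beam)
    q_values[beam] += LEARNING_RATE * (reward - q_values[beam])

print("learned best beam:", max(range(NUM_BEAMS), key=q_values.__getitem__))
```

A production DRL agent would replace the table of q_values with a deep neural network conditioned on rich channel state, but the explore-measure-update loop is the same.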
A second key trend is the emergence of standardized methods to systematically organize and expose the operational state of the network: in other words, the data used to train and deploy the AIML algorithms. These initiatives, hosted by standards bodies such as 3GPP, often travel under obscure acronyms such as the network data analytics function (NWDAF) and the management data analytics function (MDAF). Regardless, they share a common objective: to expose the necessary data to AIML functions operating within the network, and to support the implementation of the policies and control schemes those functions produce. The emergence of network virtualization methods has only hastened their deployment.
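As a rough illustration, an analytics consumer subscribing to load analytics might look like the following. The sketch is loosely modeled on the 3GPP NWDAF events-subscription service, but the URI, payload fields, and hostnames here are illustrative assumptions, not copied from the normative API (3GPP TS 29.520).

```python
# Sketch of a consumer subscribing to network load analytics, loosely
# modeled on the 3GPP NWDAF events-subscription service. URI, payload
# fields, and hostnames are illustrative assumptions only.
import requests

NWDAF_URL = ("https://nwdaf.operator.example"
             "/nnwdaf-eventssubscription/v1/subscriptions")

subscription = {
    "eventSubscriptions": [
        {"event": "LOAD_LEVEL_INFORMATION"}  # request slice-load analytics
    ],
    "notificationURI": "https://consumer.operator.example/notify",
}

response = requests.post(NWDAF_URL, json=subscription, timeout=5)
response.raise_for_status()
print("subscription created at:", response.headers.get("Location"))
```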
A third and final key trend is the emergence of AIML software frameworks, and associated acceleration hardware, to host, execute, and manage the deployment life cycle of the AIML algorithms. This has included open-source initiatives such as the Linux Foundation's Open Network Automation Platform (ONAP) and Acumos AIML model management projects. Also significant is the work of the Open Radio Access Network (O-RAN) Alliance, whose Non-Real-Time and Near-Real-Time RAN Intelligent Controller (RIC) specifications are laying the foundation for an ecosystem of hosted third-party AIML models known as rApps and xApps, respectively.
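The skeleton below suggests the shape of the control loop a near-real-time xApp implements. It deliberately avoids any specific framework API: real xApps are built on the O-RAN Software Community frameworks and interact with the RAN over the E2 interface, for which the fetch and control functions here are placeholders.

```python
# Schematic control loop of a near-real-time RIC xApp. The metric and
# control functions are placeholders for real E2 subscriptions and
# control messages; the 0.9 policy threshold is an assumption.
import time

def fetch_e2_metrics() -> dict:
    """Placeholder for an E2 subscription delivering per-cell KPIs."""
    return {"cell_id": 101, "prb_utilization": 0.93}

def send_e2_control(cell_id: int, action: str) -> None:
    """Placeholder for an E2 control message back to the RAN."""
    print(f"E2 control -> cell {cell_id}: {action}")

def run_xapp(poll_interval_s: float = 1.0, iterations: int = 3) -> None:
    """Poll metrics and issue a control action when policy is breached."""
    for _ in range(iterations):
        metrics = fetch_e2_metrics()
        if metrics["prb_utilization"] > 0.9:
            send_e2_control(metrics["cell_id"], "offload_traffic")
        time.sleep(poll_interval_s)

run_xapp()
```

The value of the RIC specifications is precisely that loops like this can be written, certified, and deployed by third parties rather than baked into vendor equipment.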
These three trends offer the tantalizing and compelling prospect of a fully autonomous network: a network that manages itself both to improve performance and to radically reduce the operations and maintenance burden, a burden that can consume as much as 80% of network OPEX.
So where do the challenges and opportunities for innovation and investment lie? First, the design and delivery of high-performance data management architectures that capture the network state and deliver it, with low latency, to the AIML framework remain an open problem. The hardening and commercialization of the tools that drive AIML model lifecycle management is another critical opportunity, as is the development of the high-performance AIML models and applications needed to perform the myriad automated tasks the autonomous network will require. Further opportunity for innovation and new products lies in making the decisions and policies that the AIML models deliver, which are often difficult or impossible for human observers to understand, both observable and explainable to the much smaller team of engineers tasked with operating the network; one simple form of this explainability is sketched below. Combining these intelligent network management tools with the systems used to plan and deploy networks represents yet more opportunity, since the configuration and provisioning of network compute, switching, and radio subsystems lies at the root of their subsequent management. And then there is the enablement of 6G systems, targeted for deployment at the end of this decade and being designed today with AIML models and data structures profoundly embedded in the fabric of the network.
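One widely used explainability technique is permutation importance: scramble one input feature at a time and observe how much the model's accuracy degrades. The sketch below applies it to a stand-in model with assumed feature names; in practice the scorer would be the deployed AIML model and the features real network KPIs.

```python
# Minimal permutation-importance sketch. The fixed linear scorer is a
# stand-in for a deployed AIML model; feature names are assumptions.
import numpy as np

rng = np.random.default_rng(0)
FEATURES = ["prb_utilization", "interference_dbm", "active_users"]

X = rng.normal(size=(1000, 3))
weights = np.array([2.0, 0.5, 0.0])  # hidden "model" weights
y = X @ weights + rng.normal(scale=0.1, size=1000)

def model_error(inputs: np.ndarray) -> float:
    """Mean squared error of the stand-in model on the given inputs."""
    return float(np.mean((inputs @ weights - y) ** 2))

baseline = model_error(X)
for i, name in enumerate(FEATURES):
    X_perm = X.copy()
    rng.shuffle(X_perm[:, i])  # destroy this feature's signal
    print(f"{name}: importance = {model_error(X_perm) - baseline:.3f}")
```

Here the printout immediately tells an operator that the model's decisions hinge on prb_utilization, depend mildly on interference_dbm, and ignore active_users, which is exactly the kind of visibility a lean operations team needs.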
The 1G pioneers of today's 5G networks were familiar with the fundamental concepts of machine learning but lacked access to the graphics processing units (GPUs), dedicated compute architectures, and million- and billion-parameter deep neural networks that are becoming commonplace today. That availability is changing the game for 5G and 6G systems. The road to the autonomous network is finally coming into view, and in that journey lies opportunity for innovation and investment.