I thought it might be interesting to zoom in on what AI means for 5G networks: the challenges that need tackling to realise the benefits, and ongoing work on standardisation. Inspiration for this post comes from Yue Wang, Samsung UK.
Smart connectivity will combine AI with IoT and 5G, with potentially significant impacts on the production of future intelligent services and products. The convergence of these technologies is expected to transform how industries innovate and operate, including transport systems, smart cities, health monitoring and entertainment. Many AI applications will rely on 5G in the future, from virtual reality (VR) and augmented reality (AR) to autonomous vehicles and robotics.
However, we are still some way off, as Yue notes in her article, Key Factors Driving the Adoption of AI for 5G and Beyond.
When #AI in the network becomes large-scale, we need to consider not only how to use AI to enhance network efficiency but also how to use #AI efficiently: the reusability of data and AI modules, and the synergy among them and with the network - scalable and deployable #AI.
In summary, the three major challenges are:
- The data challenge - a lack of relevant and mature data sets for AI in the network. This requires the industry to adopt a unified approach, with a common language key to correctly interpreting data sets from the large-scale 5G infrastructure.
- The reliability challenge - a lack of confidence in the reliability of AI solutions. This requires a benchmark for assessing the various AI solutions, as well as validation and integration across the network end-to-end.
- The deployability challenge - a lack of scalability and deployability in existing AI solutions. This requires the validation, integration and network deployment of AI solutions that are scalable and use unified data sets.
The key to the market using AI for 5G and beyond network operation and management is the deployability of AI solutions in the network. Because existing solutions are designed for specific parts of the network and for specific problems and applications, they work in isolation and lack scalability. The industry therefore needs a strategy for developing and scaling AI solutions with unified data sets, validated, integrated and deployed in the network. Such an approach is key both to improving network efficiency and to using AI efficiently, with re-usable data and AI modules creating synergy across them and with the network. This takes us back to the need for uniform data use across the network, common tools, and platforms for validation and integration.
On the research front, we need to focus on developing AI solutions that address safety, privacy, security and trustworthiness. Pressing issues include compliance with privacy regulations, tackling bias in algorithms, and mitigating risks and threats with suitable techniques and methods.
We also need to tackle the broader societal challenges that AI poses, including transparency, the right of verification, and ethical issues.
In this respect, future research and innovation (R&I) actions need to look at societal readiness levels, not just technology and market readiness levels. This key point was also raised in SpeakNGI.eu discussions with European stakeholders working on robotics, who noted the fascination of small children (aged 5-10) with robots during the International Robotics Festival (September 2018, Pisa). In our view, more work is needed to understand societal readiness levels across age groups, class systems and countries.
Besides this, industry collaboration across domains - for AI applications in transport, medicine, finance, robotics, manufacturing and others - should become a top priority. Other synergies could come from combining AI with analytics, big data and the Internet of Things (IoT), among others, across market segments.
On the standards front, we are engaging with ETSI and the chair of the Experiential Networked Intelligence group (ETSI ISG ENI), which is defining a cognitive network management architecture that uses AI techniques and context-aware policies to adjust the services offered to users based on changes in user needs, environmental conditions and business goals. ENI is developing standards for a cognitive network management system aimed at delivering a metric for optimising and adjusting the operator experience over time by taking advantage of machine learning and reasoning. Using the 'monitor-analyse-plan-execute' control model enables the system to adjust the offered services based on changing conditions. The group is also conducting a gap analysis of context-aware and policy-based standards work in other Standards Developing Organisations, so that existing standardised solutions can be re-used for legacy and evolving network functions wherever possible. Its work plan further includes closed-loop AI mechanisms based on context-aware, metadata-driven policies, to more quickly recognise and incorporate new and changed knowledge and hence make actionable decisions in day-to-day operations, as well as security and a closed-loop learning policy model.
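To make the 'monitor-analyse-plan-execute' control model concrete, here is a minimal sketch of such a closed loop in Python. This is purely illustrative and not ETSI ENI's actual interfaces or architecture: the `Policy` thresholds, the utilisation metric and the scaling action are all hypothetical, chosen only to show how each stage of the loop feeds the next.

```python
# Illustrative sketch of a 'monitor-analyse-plan-execute' closed loop.
# NOT ETSI ENI's real API: Policy, the utilisation metric and the scaling
# action are hypothetical examples of a context-aware policy in action.
from dataclasses import dataclass


@dataclass
class Policy:
    """Context-aware policy: target utilisation band for a network function."""
    low: float = 0.3
    high: float = 0.8


def monitor(metrics: dict) -> float:
    # Monitor: extract the observed utilisation from telemetry.
    return metrics["utilisation"]


def analyse(utilisation: float, policy: Policy) -> str:
    # Analyse: classify the observation against the policy band.
    if utilisation > policy.high:
        return "overloaded"
    if utilisation < policy.low:
        return "underloaded"
    return "ok"


def plan(state: str) -> int:
    # Plan: decide a scaling delta (in instances) for each state.
    return {"overloaded": 1, "underloaded": -1, "ok": 0}[state]


def execute(instances: int, delta: int) -> int:
    # Execute: apply the scaling action, never dropping below one instance.
    return max(1, instances + delta)


def control_step(instances: int, metrics: dict, policy: Policy) -> int:
    # One pass of the closed loop: monitor -> analyse -> plan -> execute.
    utilisation = monitor(metrics)
    state = analyse(utilisation, policy)
    return execute(instances, plan(state))
```

For example, `control_step(2, {"utilisation": 0.9}, Policy())` would scale up to 3 instances, while a reading of 0.5 would leave the count unchanged. In a real system the loop would run continuously, and the 'analyse' and 'plan' stages are where machine learning and reasoning would replace these fixed rules.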
One of our Early Adopters, Ray Walshe, is helping drive AI standardisation within the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). Here, work is undertaken in ISO/IEC JTC 1/SC 42, which has set up a systems integration committee offering guidance on AI applications to IEC, ISO and JTC 1 committees, drawing on the support of committees covering horizontal and vertical areas. As AI matures, SC 42 is adopting a broad approach, looking at the full AI ecosystem and beyond traditional interoperability. On top of this, it is running several projects on big data, foundational AI standards, AI trustworthiness, use cases and AI governance.
Thanks for reading and commenting!