These days I keep getting drawn into discussions about the soft spots of AI: what the best uses of AI/ML are, its utility in generative AI, and its role in network automation, optimisation and autonomic functions.
In many cases, these discussions stumble over misconceptions about the mechanics of statistics and their applications.
To put it simply, many do not distinguish between complexity and complication, which has a great effect on expectations around problem solving, automation and outcome prediction. A complex problem is an assembly of problems that can be broken down into subsets until simple, discrete problems can be identified, tagged, troubleshot and resolved. These problems are ideal targets for automation. No matter how complex the task, if it can be broken down, and if a method of procedure (MOP) can be written for each subtask and eventually for the whole problem, then it can be measured, automated and predicted, and efficiency gains can be achieved.
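To make the decomposition concrete, here is a minimal sketch of that idea in Python. The subtasks, names and checks are hypothetical placeholders; the point is only that once each step has a written MOP, it becomes a measurable, executable unit.

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class Subtask:
    """One step of a method of procedure (MOP): a named check with a remediation."""
    name: str
    check: Callable[[], bool]       # returns True if the subtask is healthy
    remediate: Callable[[], None]   # fix to apply when the check fails

def run_mop(subtasks: list[Subtask]) -> dict[str, float]:
    """Execute each subtask, remediating on failure, and measure elapsed time."""
    timings = {}
    for task in subtasks:
        start = time.monotonic()
        if not task.check():
            task.remediate()
        timings[task.name] = time.monotonic() - start
    return timings

# Hypothetical decomposition of a "cell site down" problem into simple subtasks.
mop = [
    Subtask("power_ok", lambda: True, lambda: None),
    Subtask("backhaul_up", lambda: True, lambda: None),
    Subtask("config_in_sync", lambda: False, lambda: print("re-pushing config")),
]
print(run_mop(mop))
```

Because every step is now a named, timed unit with a known outcome, it can be scheduled, monitored and improved like any other piece of software.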
Complicated problems are a different animal altogether. They might have some subtasks that can be identified and broken down, but other parts carry a large degree of unknowns and uncertainty.
Large Language Models can try to reduce the uncertainty by drawing on larger samples, enabling even outlier patterns to emerge and be identified, but in many cases complicated problems have dependencies that cannot be easily resolved from a purely mathematical standpoint.
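As a toy illustration of how a larger sample lets an outlier emerge, consider the sketch below. The alarm counts and the three-sigma threshold are arbitrary assumptions; real telemetry would need far more careful statistics.

```python
import statistics

# Hypothetical per-hour counts of a rare alarm; with enough samples, the
# anomalous hour (42) becomes statistically distinguishable from noise.
counts = [3, 2, 4, 3, 5, 2, 3, 42, 4, 3, 2, 4]

mean = statistics.mean(counts)
stdev = statistics.pstdev(counts)

# Flag points more than 3 standard deviations from the mean.
outliers = [c for c in counts if abs(c - mean) > 3 * stdev]
print(outliers)  # [42]
```

Detecting the outlier is the easy part; explaining it still requires knowing what else was happening in the network at that hour, which is exactly where pure mathematics runs out.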
This is where domain expertise comes in. When issues arise in a telecoms network, the symptom does not necessarily surface at the source of the problem. Troubleshooting in many cases requires knowledge of network topology, call flows and protocols, and multi-domain expertise across core, transport, access, peering points, connectivity, data centers...
It is not possible to automate what you do not operate well. You can't operate well a system that you can't measure well, and you can't measure well a system without a consolidated data storage and management strategy. In many cases, telco systems still produce logs in proprietary formats on siloed systems, and collecting, cleaning, exporting, processing and storing these data in a fully integrated data system is still in its infancy. This, however, is the very first step before even the categorization into complex or complicated issues can take place.
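As a simplified illustration of that first step, the sketch below normalizes two invented "proprietary" log formats into one consolidated record before storage. The formats, field names and schema are assumptions made up for the example.

```python
import re
from datetime import datetime, timezone

# Two invented vendor log formats from siloed systems.
VENDOR_A = re.compile(r"\[(?P<ts>[\d:T-]+)\] (?P<sev>\w+) node=(?P<node>\S+) msg=(?P<msg>.+)")
VENDOR_B = re.compile(r"(?P<node>\S+)\|(?P<sev>\d)\|(?P<ts>\d+)\|(?P<msg>.+)")

SEVERITY_B = {"1": "CRITICAL", "2": "MAJOR", "3": "MINOR"}

def normalize(line: str) -> dict | None:
    """Map either vendor format onto one consolidated record schema."""
    if m := VENDOR_A.match(line):
        return {"node": m["node"], "severity": m["sev"].upper(),
                "time": m["ts"], "message": m["msg"]}
    if m := VENDOR_B.match(line):
        return {"node": m["node"],
                "severity": SEVERITY_B.get(m["sev"], "UNKNOWN"),
                "time": datetime.fromtimestamp(int(m["ts"]), tz=timezone.utc).isoformat(),
                "message": m["msg"]}
    return None  # in practice, unparseable lines would go to a dead-letter queue

for raw in ["[2024-05-01T10:00:00] major node=bts17 msg=link flap",
            "sgw03|1|1714557600|session setup failure"]:
    print(normalize(raw))
```

Only once every source speaks this common schema can issues be reliably measured, compared and sorted into complex versus complicated.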
In many cases, data literacy needs to pervade the entire organization to ensure that a data-driven strategy can be enacted, let alone a move to automation, autonomic or AI-predictive systems.
It therefore becomes very important to isolate complex from complicated systems and issues, and to apply as much data science and automation as possible to the former before trying to force AI/ML onto the latter. As a rule of thumb, as the number of tasks or variables and the complexity increase, one can move from optimization using scripting, to automation using scripting + ML, to prediction using AI/ML. As the number of unknowns and the complication increase, one has to move from subject matter experts and domain experts to multi-domain experts with an end-to-end view of the system.
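That rule of thumb can be written down as a simple triage function. The thresholds and labels below are illustrative assumptions, not a validated model.

```python
def triage(n_variables: int, n_unknowns: int) -> str:
    """Map a rough complexity/complication score to a handling strategy.
    The cut-off values are arbitrary placeholders for illustration only."""
    if n_unknowns > 5:
        return "multi-domain experts, end-to-end view"
    if n_unknowns > 2:
        return "subject matter / domain experts"
    if n_variables > 50:
        return "prediction (AI/ML)"
    if n_variables > 10:
        return "automation (scripting + ML)"
    return "optimization (scripting)"

print(triage(n_variables=8, n_unknowns=0))    # optimization (scripting)
print(triage(n_variables=120, n_unknowns=1))  # prediction (AI/ML)
print(triage(n_variables=30, n_unknowns=7))   # multi-domain experts, end-to-end view
```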
As complications and tasks increase, the possibility of achieving autonomous systems decreases, while the need for human expertise and manual intervention increases. Data science then becomes less an operator than an attendant or assistant: it detects and automates the subset of tasks with identified outcomes and patterns, accelerating resolution of the more complicated problem.