How does the increased adoption of GenAI impact data security for telcos? 

At MWC this year, we caught up with Ari Banerjee, Senior Vice President at Netcracker Technology, to discuss the importance of data security as telcos increasingly adopt Generative AI (GenAI) solutions.

Undoubtedly, the biggest topic at this year’s MWC was GenAI and the many ways telcos are looking to incorporate it into their businesses, both to increase efficiency and to generate new revenues. But for telcos to embrace AI effectively, they must first undergo a successful digital transformation.

“What we’re seeing with our customers is that first you need to really digitally transform yourself. You need to have the right data,” says Banerjee.  

“So, data transformation becomes a precursor to any AI/ML (machine learning) strategy, because at the end of the day, if you have garbage data – duplicate data and old legacy data – it just doesn’t match up with services. You are not going to be able to use AI in the right way.” 

Even when a digital transformation has been successfully completed, the issue of data security still looms. As more and more companies adopt GenAI, data security will become a growing problem. EY’s global head of telecoms has declared the issue the biggest risk facing the entire telco sector in 2024, because the rise of GenAI is putting a strain on data governance. For example, much of the data fed into GenAI models is highly sensitive and cannot be exposed to the public cloud. 

Netcracker are tackling this complexity with their GenAI Telco Solution, launched last September. Operating alongside the telco’s GenAI models (such as large language models, or LLMs), its GenAI users, and its BSS/OSS databases, the solution supplements the GenAI model with real-time instructions to elicit the most relevant responses, while shielding sensitive customer data from public models. 
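The pattern described here – enriching prompts with real-time operational context while keeping sensitive customer data out of public models – can be illustrated with a redaction-first middleware sketch. This is not Netcracker’s actual implementation (which is not public); all function and field names below are hypothetical, purely to show the shape of the idea:

```python
import re

# Hypothetical sketch: sensitive values are masked before a prompt leaves
# the telco's boundary; real-time BSS/OSS context is injected alongside it;
# the real values are restored locally after the public model responds.

SENSITIVE_PATTERNS = {
    "msisdn": re.compile(r"\b\d{10,15}\b"),               # phone numbers
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
}

def redact(text: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive values with placeholder tokens; keep the mapping locally."""
    mapping: dict[str, str] = {}
    for label, pattern in SENSITIVE_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

def build_prompt(user_query: str, bss_context: str) -> tuple[str, dict[str, str]]:
    """Combine real-time operational context with the redacted customer query."""
    safe_query, mapping = redact(user_query)
    prompt = (
        "Context from operational systems:\n"
        f"{bss_context}\n\n"
        f"Customer question: {safe_query}"
    )
    return prompt, mapping

def restore(answer: str, mapping: dict[str, str]) -> str:
    """Re-insert the real values locally, after the public model has responded."""
    for token, value in mapping.items():
        answer = answer.replace(token, value)
    return answer
```

Only the placeholder tokens ever reach the public model; the token-to-value mapping stays inside the operator’s environment.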

In this way, Netcracker are playing the key role of an integrator, allowing telcos to make use of multiple LLMs and SLMs (small language models), each specialised for a specific purpose. 

“Somebody in the middle needs to be able to take the best parts of it and then interface that and use that with the information from the network information databases […] and provide the right contextual information, whether to the internal team who’s dealing with let’s say, BSS/OSS operations, or the external teams, which is your customer,” Banerjee explained. 
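The integrator role Banerjee describes – sitting in the middle, picking the right model for each task and grounding it with information from network databases – could be sketched as a simple model router. The model names, categories, and keyword-based classifier below are all hypothetical, for illustration only:

```python
# Hypothetical sketch of an integrator that routes each request to a
# specialised model (LLM or SLM) and grounds the prompt with data
# pulled from network/BSS/OSS inventories.

ROUTES = {
    "billing": "billing-slm",      # small model specialised for billing queries
    "network": "network-ops-slm",  # small model for network diagnostics
    "general": "general-llm",      # large general-purpose model as fallback
}

KEYWORDS = {
    "billing": ("invoice", "charge", "bill", "payment"),
    "network": ("outage", "latency", "signal", "coverage"),
}

def classify(query: str) -> str:
    """Pick a task category via simple keyword matching (illustrative only)."""
    q = query.lower()
    for category, words in KEYWORDS.items():
        if any(w in q for w in words):
            return category
    return "general"

def route(query: str, inventory: dict[str, str]) -> tuple[str, str]:
    """Return (model_name, grounded_prompt) for a customer or internal query."""
    category = classify(query)
    model = ROUTES[category]
    context = inventory.get(category, "")
    prompt = f"Context: {context}\nQuestion: {query}"
    return model, prompt
```

A production integrator would use a far richer classifier, but the design point is the same: the routing and grounding logic lives with the operator, not inside any one model.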

“This is one of the most exciting areas for this new technology,” said Banerjee. “Providing the right contextual offer to the customer through an automated channel.”   

You can check out our full interview with Ari Banerjee, Senior Vice President at Netcracker, via the link below: 
