Network GDT: GenAI-Based Digital Twin for Automated Network Performance Evaluation

By Sukhdeep Singh Samsung R&D Institute India-Bangalore
By Swaraj Kumar Samsung R&D Institute India-Bangalore
By Peter Moonki Hong Samsung Research
By Ashish Jain Samsung R&D Institute India-Bangalore
By Madhan Raj Kanagarathinam Samsung R&D Institute India-Bangalore

1. Introduction

Telecommunication networks are evolving rapidly, moving beyond 5G (B5G) into a world of hyper-connectivity, intelligent automation, and immersive services. With billions of connected devices and surging data traffic, operators must constantly introduce new AI/ML-based features to keep networks stable and efficient.

But here lies the challenge: How do we evaluate new features reliably before deploying them in live networks? Traditional methods rely heavily on manual testing or direct field trials. These approaches are slow, costly, and risky, with potential service disruptions, unmet SLAs, and ballooning operational expenses.

This is where Network GDT (Generative AI-based Digital Twin) comes into play: an automated, scalable, and intelligent evaluation platform that leverages the power of Generative AI and Digital Twin technology to ensure robust AI/ML solutions for future networks.

2. Problem Statement

Traditional performance evaluation methods in telecom face several bottlenecks:

· High costs - manual KPI tracking can increase OPEX and CAPEX by up to 30%.
· Time-intensive processes - operators must sift through thousands of KPIs for every new feature.
· Poor adaptability - existing digital twins require retraining for every update, making them unsuitable for fast-paced B5G innovation.
· Risk of instability - deploying untested AI/ML models directly into live networks can degrade Quality of Service (QoS) and Quality of Experience (QoE).

3. Motivation

As the industry shifts to B5G, the need for intelligent automation has never been greater. The O-RAN Alliance has already paved the way by embedding AI/ML into RAN operations. However, operators still lack a scalable, automated platform that can:

· Simulate real-world scenarios (like congestion, failures, and heavy traffic).
· Test new AI features before live deployment.
· Reduce costs, risks, and delays.


This motivated the development of Network GDT, a next-generation framework that unites Generative AI with dynamic digital twins for proactive performance evaluation.

4. Proposed Solution: Network GDT

To solve these challenges, we propose Network GDT, a platform that integrates conditional Generative Adversarial Networks (cGANs) with a novel Digital Twin Augmenting Condition (DTAC) mechanism.

Figure 1. System Model for evaluating AI models using Network GDT: This figure illustrates the stages of the Network GDT solution

A. System Model

The platform works as a bridge between AI model development and real-world deployment.

· Data Collection: Real field KPIs form the baseline.
· AI Server: Models are developed and trained.
· Network GDT Module: A cGAN-powered digital twin simulates performance under varied conditions.
· Service Orchestration: Validated models are deployed safely into live networks.


B. GenAI-Powered Digital Twin

The cGAN engine lies at the heart of the system:

· Generator: Produces synthetic KPI data based on noise + DTAC conditions.
· Discriminator: Validates realism by distinguishing real from generated data.
· DTAC: Encodes both environment conditions (traffic loads, cell capacity, failures) and new AI features (network functions, algorithms).


This makes the digital twin adaptable: it stays relevant as networks evolve without retraining from scratch.
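A minimal sketch of this conditioning scheme in plain Python may help: the generator and discriminator both consume the same DTAC condition vector alongside their primary input. The dimensions, toy dense layers, and the way a DTAC slot maps to a condition are all illustrative assumptions, not the actual engine (only the DTAC length of 15 comes from the experimental setup).

```python
import math
import random

# Toy cGAN forward pass: dimensions, weight ranges, and DTAC slot
# assignments below are illustrative assumptions, not the real engine.
NOISE_DIM, DTAC_DIM, KPI_DIM = 8, 15, 3   # DTAC size 15 mirrors the setup in Sec. 5
rng = random.Random(42)

def dense(vec, out_dim):
    # Stand-in dense layer with fixed random weights (no training here).
    return [sum(x * rng.uniform(-0.1, 0.1) for x in vec) for _ in range(out_dim)]

def generator(noise, dtac):
    """Map noise concatenated with a DTAC condition to a synthetic KPI sample."""
    return dense(noise + dtac, KPI_DIM)

def discriminator(kpi_sample, dtac):
    """Score how real a KPI sample looks under the same DTAC condition."""
    logit = dense(kpi_sample + dtac, 1)[0]
    return 1.0 / (1.0 + math.exp(-logit))   # probability the sample is real

noise = [rng.gauss(0.0, 1.0) for _ in range(NOISE_DIM)]
dtac = [0.0] * DTAC_DIM
dtac[0] = 0.9                          # e.g. a high-traffic-load condition
fake_kpis = generator(noise, dtac)     # one synthetic (PRB, throughput, RRC) sample
realism = discriminator(fake_kpis, dtac)
```

Because both networks see the same condition vector, the discriminator judges realism relative to the scenario, which is what lets the twin generate condition-aware KPI data.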

C. Training Workflow

I. Data Augmentation: Collect and enrich real KPI datasets.
II. Feature Selection: Identify critical KPIs such as PRB utilization, throughput, and RRC users.
III. DTAC Encoding: Embed environment conditions + new feature information.
IV. Adversarial Training: Generator and Discriminator iteratively refine performance.
V. Simulation & Evaluation: The GenAI twin simulates realistic scenarios and evaluates the AI model.
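Step III is the distinctive part of the workflow, so here is one hypothetical way a DTAC vector could be packed. The slot layout, field names, and scaling are assumptions made for illustration; only the vector length of 15 comes from the experimental setup.

```python
# Hypothetical DTAC layout: slots 0-2 carry environment conditions and the
# remaining slots carry one-hot flags for the AI feature under test. Only
# the vector length of 15 comes from the article; the layout is assumed.
DTAC_DIM = 15

def encode_dtac(traffic_load, cell_capacity, failure, feature_flags):
    vec = [0.0] * DTAC_DIM
    vec[0] = traffic_load                  # normalised offered load, 0..1
    vec[1] = cell_capacity / 100.0         # users per cell, scaled
    vec[2] = 1.0 if failure else 0.0       # failure-scenario indicator
    for i, flag in enumerate(feature_flags[:DTAC_DIM - 3]):
        vec[3 + i] = float(flag)           # which new feature is being evaluated
    return vec

# Condition for a congested cell with a congestion-prediction feature active.
dtac = encode_dtac(traffic_load=0.85, cell_capacity=70,
                   failure=False, feature_flags=[1, 0, 0])
```

The key property is that one fixed-length vector carries both the environment and the feature under test, so new features only change the flag slots rather than forcing a new model architecture.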


Figure 2. Network GDT Block Diagram: This diagram outlines the key components of the Network GDT process, including data augmentation, feature selection, GenAI training, and the final generation of the dynamic digital twin (GDT)

D. Dual-Phase Evaluation Strategy

Network GDT ensures safe and reliable deployment through two phases:

· Phase 1 - Simulation: AI/ML models are rigorously tested under controlled yet realistic scenarios.
· Phase 2 - Deployment: Only validated models are integrated into live networks by the Service Management and Orchestration (SMO).


This strategy minimizes downtime, optimizes performance, and reduces risks.
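The two phases can be reduced to a simple gate. In this sketch the 95% pass bar, the field names, and the toy threshold model are all illustrative assumptions, not values from the system itself.

```python
# Sketch of the dual-phase gate; the 95% bar, field names, and the toy
# threshold model are illustrative assumptions.
PASS_THRESHOLD = 0.95

def phase1_simulate(model, twin_scenarios):
    """Phase 1: score the candidate model on twin-generated scenarios."""
    correct = sum(model(s) == s["congested"] for s in twin_scenarios)
    return correct / len(twin_scenarios)

def phase2_decide(accuracy):
    """Phase 2: only validated models are handed to the SMO for rollout."""
    return "deploy" if accuracy >= PASS_THRESHOLD else "reject"

scenarios = [{"prb_util": u, "congested": u > 0.8}
             for u in (0.30, 0.55, 0.70, 0.85, 0.95)]
candidate = lambda s: s["prb_util"] > 0.8        # stand-in congestion predictor
decision = phase2_decide(phase1_simulate(candidate, scenarios))
```

The point of the gate is that a model never touches the live network until its twin-simulated accuracy clears the bar, which is where the risk reduction comes from.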

5. Experimental Setup

Dataset:

· Generated using an in-house 5G simulator.
· 60,000 samples across 5 environments.
· Included congestion scenarios with one gNB (16 cells, 70-user capacity, plus 40–50 extra users for overload).
· Key KPIs: Downlink PRB utilization, RRC users, IP throughput.


Training:

· Conducted on high-performance servers.
· Used cGAN with DTAC vectors of size 15.
· Iterative adversarial training ensured accurate, condition-aware data generation.
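The adversarial step can be pictured with a deliberately tiny stand-in: one scalar KPI, a logistic discriminator, and hand-written gradients. The learning rates, iteration count, and the one-dimensional setup are arbitrary choices for illustration; the real engine trains a full cGAN over KPI vectors.

```python
import math

# One-dimensional GAN toy: the generator learns to emit a value the
# discriminator cannot tell apart from the real KPI. All constants here
# are arbitrary choices for illustration.
REAL_KPI = 5.0
sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

a, b = 0.0, 0.0     # discriminator D(x) = sigmoid(a * x + b)
w = 0.0             # generator output G() = w (noise input omitted)
lr_d, lr_g = 0.1, 0.1

for _ in range(500):
    # Discriminator step: raise D(real), lower D(fake).
    sr, sf = sigmoid(a * REAL_KPI + b), sigmoid(a * w + b)
    a -= lr_d * (-(1 - sr) * REAL_KPI + sf * w)
    b -= lr_d * (-(1 - sr) + sf)
    # Generator step: move w so the fake sample fools the discriminator.
    sf = sigmoid(a * w + b)
    w += lr_g * (1 - sf) * a
# After training, w has been pulled toward the real KPI value.
```

Even in this toy, each side's update depends on the other's current parameters, which is the iterative refinement the workflow describes.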

6. Results

· Use Case: Congestion prediction.
· KPIs Evaluated: PRB utilization, throughput, and RRC users.
· Performance:
  - Achieved 97.34% accuracy (Cell 4) and 96.83% accuracy (Cell 15).
  - Predictions on synthetic data matched real data trends, proving the reliability of Network GDT.


Figure 3. Comparison of Original Data, GenAI Engine Generated Data, Predictions on Original Data, and Predictions on GenAI Engine Generated Data

This demonstrates that AI models tested on GDT-generated data can be safely trusted for real deployments.

7. Conclusion

The proposed Network GDT platform transforms how telecom operators evaluate new features. By combining Generative AI with Digital Twins, it delivers:

· Automated evaluations (replacing manual processes).
· Scalable adaptability via DTAC encoding.
· High reliability by simulating realistic network conditions.
· Business value through reduced costs, minimized downtime, and improved QoS/QoE.


As B5G networks expand, solutions like Network GDT will be key to enabling fast, safe, and intelligent innovation in telecom. Network GDT makes future networks smarter, safer, and more efficient, before they even go live.

References

[1] W. Wang, X. Chen, and J. Zhu, “AI in Wireless Communications: A Financial Perspective,” IEEE Transactions on Wireless Communications, vol. 19, no. 3, pp. 1976–1988, 2020.

[2] A. A. Khan, A. A. Laghari, A. M. Baqasah, R. Alroobaea, T. R. Gadekallu, G. A. Sampedro, and Y. Zhu, “ORAN-B5G: A next generation open radio access network architecture with machine learning for beyond 5G in industrial 5.0,” IEEE Transactions on Green Communications and Networking, 2024.

[3] M. Mirza and S. Osindero, “Conditional Generative Adversarial Nets,” in Proceedings of the 32nd International Conference on Machine Learning (ICML 2014), 2014.

[4] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-Image Translation with Conditional Adversarial Networks,” in proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.