Professor Zhi-Li Zhang

Computer Science & Engineering (CSENG)
College of Science & Engineering
Twin Cities
Project Title: 
NextGPT: Large-Scale Pretrained Model for Next-Gen Mobile Networks

This group is beginning a study that uses a large-scale pre-trained model to optimize next-generation (NextG) mobile networks. Their approach is driven by real measurement data, incorporates AI-based cross-layer designs, and makes the NextG mobile network “environment aware.” Concretely, the project integrates three main thrusts to enhance the quality of service of next-generation networks:

  • Dynamic real-world environment perception and prediction: Preliminary 5G measurement studies revealed that 5G network performance is significantly affected by the characteristics of the surrounding deployment environment. Challenging environments with many moving objects and potential obstructions can cause large fluctuations in 5G throughput. For highly directional, high-band radios (such as mmWave 5G) in particular, even pedestrians moving nearby may block the 5G signal. The resulting transmission delays are detrimental to latency-sensitive applications such as teleoperated vehicles, where remote drivers issue driving commands based on streamed video and sensor feedback from the target vehicle’s environment.
    • The researchers argue that understanding the dynamics of an environment is essential to accurately predicting 5G performance and improving the service quality of many 5G downstream applications. With the anticipated wide deployment of sensors alongside 5G base stations and user equipment, they seek to take full advantage of this intelligent ecosystem and tackle the following multi-modal machine learning problem: perceive and predict the environment using data captured from GPS, camera, Lidar, and Radar sources (see the first sketch after this list).
  • Digital twin construction and AI radio interface with real-time channel estimation: The quality of radio channels is a pivotal factor in 5G performance. Overestimating or underestimating the current channel quality reduces data transmission efficiency. Current channel estimation relies predominantly on the so-called “limited feedback” mechanism, in which reference signals are dispersed across the entire spectral resource grid to measure channel quality, wasting a significant portion of spectral resources. Meanwhile, techniques grounded in signal propagation simulation, such as ray tracing, suffer from high computational complexity and fail to meet the requirements of real-time channel estimation.

    • These researchers seek to construct digital twins from the environment captured and predicted in Thrust 1, and to adopt AI models (deep neural networks, such as transformers) to efficiently generate the spatial distribution of channel quality. The generated radio propagation channel can be used directly for modulation and coding, thereby enhancing the efficiency of the PHY layer (see the second sketch after this list).

  • Spectrum resource scheduling and configuration: The scheduler at the MAC layer orchestrates resource allocation and the use of advanced techniques such as carrier aggregation and traffic steering. Current scheduling relies primarily on threshold-based rules or simple linear algorithms, which must be manually configured by hardware vendors (e.g., Nokia, Ericsson, or Samsung) or operators (e.g., AT&T, Verizon, or T-Mobile) based on their operational expertise. However, real traffic is highly variable: distinct regions (urban vs. rural), or even the same region at different times (day vs. night), exhibit distinct characteristics. Inadequate configurations or delayed algorithm updates diminish overall network efficiency.
    • These researchers advocate cooperative cross-layer optimization for NextG mobile networks. Leveraging the predicted environment changes from Thrust 1 and the estimated channel models from Thrust 2, they employ the resulting AI model(s) to dynamically allocate spectrum resources and generate site-specific configurations (see the third sketch after this list).
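
To make the Thrust 1 task concrete, the first sketch below is a minimal PyTorch model for multi-modal environment perception. It is an illustrative assumption, not the project's actual architecture: the modality encoders, the 64x64 camera and Lidar/Radar inputs, the feature dimensions, and the throughput-prediction head are all placeholders for whatever the researchers build from their measurement data.

```python
# Minimal sketch (not the project's actual model): a multi-modal fusion network
# that encodes GPS/motion features, a camera frame, and a Lidar/Radar occupancy
# grid, then predicts near-future 5G throughput. All layer sizes are illustrative.
import torch
import torch.nn as nn

class EnvPerceptionNet(nn.Module):
    def __init__(self, gps_dim=8, embed_dim=64, horizon=5):
        super().__init__()
        # GPS / UE-motion features (position, speed, heading, ...): a small MLP.
        self.gps_enc = nn.Sequential(nn.Linear(gps_dim, embed_dim), nn.ReLU())
        # Camera frame: a tiny CNN encoder (3 x 64 x 64 input assumed).
        self.cam_enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, embed_dim),
        )
        # Lidar/Radar rendered as a 1 x 64 x 64 bird's-eye occupancy grid.
        self.lidar_enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, embed_dim),
        )
        # Fusion head: predict throughput for the next `horizon` time steps.
        self.head = nn.Sequential(
            nn.Linear(3 * embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, horizon),
        )

    def forward(self, gps, camera, lidar):
        z = torch.cat([self.gps_enc(gps), self.cam_enc(camera),
                       self.lidar_enc(lidar)], dim=-1)
        return self.head(z)  # predicted throughput (e.g., Mbps) per future step

# Usage with random stand-in tensors (batch of 4 samples).
model = EnvPerceptionNet()
pred = model(torch.randn(4, 8), torch.randn(4, 3, 64, 64), torch.randn(4, 1, 64, 64))
print(pred.shape)  # torch.Size([4, 5])
```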
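
For Thrust 2, the second sketch shows one plausible form of a transformer-based surrogate for ray tracing: it maps a set of digital-twin "scene tokens" to a per-resource-block channel-quality map. The token representation, model sizes, and the 52 x 14 output grid are assumptions made only for illustration.

```python
# Minimal sketch (illustrative only): a transformer encoder that maps digital-twin
# "scene tokens" (e.g., embedded geometry/material patches plus transmitter and
# receiver positions) to a per-resource-block channel-quality grid, standing in
# for expensive ray tracing. Token and grid sizes are assumptions.
import torch
import torch.nn as nn

class ChannelTwinTransformer(nn.Module):
    def __init__(self, token_dim=32, d_model=128, n_heads=4, n_layers=3,
                 grid_cells=52 * 14):  # e.g., 52 subcarrier groups x 14 symbols
        super().__init__()
        self.embed = nn.Linear(token_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Pool over scene tokens, then decode the spatial channel-quality map.
        self.decoder = nn.Linear(d_model, grid_cells)

    def forward(self, scene_tokens):
        # scene_tokens: (batch, n_tokens, token_dim) from the digital twin.
        h = self.encoder(self.embed(scene_tokens))
        return self.decoder(h.mean(dim=1))  # (batch, grid_cells) predicted SNR/CQI

# Usage with a random stand-in scene of 100 tokens.
model = ChannelTwinTransformer()
cqi_map = model(torch.randn(2, 100, 32))
print(cqi_map.shape)  # torch.Size([2, 728])
```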
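
For Thrust 3, the third sketch pairs a classic proportional-fair baseline with a small learned policy that scores user equipments (UEs) per resource block from cross-layer features such as those produced by Thrusts 1 and 2. The feature set and the policy network are hypothetical stand-ins, not the project's scheduler.

```python
# Minimal sketch (an assumption, not the project's scheduler): a small policy
# network that takes per-UE features (e.g., predicted throughput, estimated
# channel quality, queue backlog) and outputs a soft allocation of resource
# blocks across UEs. A proportional-fair rule is shown as a baseline.
import torch
import torch.nn as nn

def proportional_fair(inst_rate, avg_rate, eps=1e-6):
    # Classic PF metric: pick the UE with the highest instantaneous-to-average
    # rate ratio. Returns the index of the chosen UE for each resource block.
    return torch.argmax(inst_rate / (avg_rate + eps), dim=-1)

class SchedulerPolicy(nn.Module):
    def __init__(self, ue_feat_dim=3, hidden=32):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(ue_feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1),
        )

    def forward(self, ue_features):
        # ue_features: (n_rbs, n_ues, ue_feat_dim) -> soft allocation per RB.
        scores = self.score(ue_features).squeeze(-1)  # (n_rbs, n_ues)
        return torch.softmax(scores, dim=-1)          # allocation weights

# Usage: 10 resource blocks, 4 UEs, 3 features per UE (all random stand-ins).
feats = torch.randn(10, 4, 3)
alloc = SchedulerPolicy()(feats)
baseline = proportional_fair(feats[..., 0].abs(), feats[..., 1].abs())
print(alloc.shape, baseline.shape)  # torch.Size([10, 4]) torch.Size([10])
```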

The researchers will use self-collected measurement data to develop such large learning model(s), which integrate the above thrusts (multiple learning tasks) and thereby optimize the NextG mobile network across layers. This requires substantial computing resources.

Project Investigators

Xinyue Hu
Steven Sleder
Evan Way
Junhan Wu
Wei Ye
Professor Zhi-Li Zhang
Qixin Zhang
 