TY - JOUR
T1 - Toward Practical Fluid Antenna Systems
T2 - Co-Optimizing Hardware and Software for Port Selection and Beamforming
AU - Xu, Sai
AU - Wong, Kai-Kit
AU - Du, Yanan
AU - Hong, Hanjiang
AU - Chae, Chan-Byoung
AU - Liu, Baiyang
AU - Tong, Kin-Fai
N1 - Publisher Copyright:
© 2002-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - This paper proposes a hardware-software co-design approach to efficiently optimize beamforming and port selection in fluid antenna systems (FASs). First, a fluid-antenna (FA)-enabled downlink multi-cell multiple-input multiple-output (MIMO) network is modeled, and a weighted sum-rate (WSR) maximization problem is formulated. Second, a method that integrates graph neural networks (GNNs) with random port selection (RPS) is proposed to jointly optimize beamforming and port selection, while also assessing the benefits and limitations of random selection. Third, an instruction-driven deep learning accelerator based on a field-programmable gate array (FPGA) is developed to minimize inference latency. To further enhance efficiency, a scheduling algorithm is introduced to reduce redundant computations and minimize the idle time of computing cores. Simulation results demonstrate that the proposed GNN-RPS approach achieves competitive communication performance. Furthermore, experimental evaluations indicate that the FPGA-based accelerator maintains low latency while simultaneously executing beamforming inference for multiple port selections.
AB - This paper proposes a hardware-software co-design approach to efficiently optimize beamforming and port selection in fluid antenna systems (FASs). First, a fluid-antenna (FA)-enabled downlink multi-cell multiple-input multiple-output (MIMO) network is modeled, and a weighted sum-rate (WSR) maximization problem is formulated. Second, a method that integrates graph neural networks (GNNs) with random port selection (RPS) is proposed to jointly optimize beamforming and port selection, while also assessing the benefits and limitations of random selection. Third, an instruction-driven deep learning accelerator based on a field-programmable gate array (FPGA) is developed to minimize inference latency. To further enhance efficiency, a scheduling algorithm is introduced to reduce redundant computations and minimize the idle time of computing cores. Simulation results demonstrate that the proposed GNN-RPS approach achieves competitive communication performance. Furthermore, experimental evaluations indicate that the FPGA-based accelerator maintains low latency while simultaneously executing beamforming inference for multiple port selections.
KW - FAS
KW - FPGA
KW - GNN
KW - WSR
UR - https://www.scopus.com/pages/publications/105024119399
U2 - 10.1109/TWC.2025.3637390
DO - 10.1109/TWC.2025.3637390
M3 - Article
AN - SCOPUS:105024119399
SN - 1536-1276
JO - IEEE Transactions on Wireless Communications
JF - IEEE Transactions on Wireless Communications
ER -