
Full-Stack End-to-End Sub-THz Simulations at 140 GHz using NYUSIM Channel Model in ns-3

Authors:
Hitesh Poddar, Akhileswar Chowdary, Theodore S. Rappaport, Marwa Chafii
Keywords:
Computer Science, Information Theory (cs.IT)
Journal:
--
Date:
2023-12-26
Abstract
The next generation of wireless communication is expected to harness the potential of the sub-THz bands to achieve exceptional performance and ubiquitous connectivity. However, network simulators such as ns-3 currently lack support for channel models above 100 GHz, which limits the ability of researchers to study, design, and evaluate systems operating above 100 GHz. Here, we show that the drop-based NYUSIM channel model can be used to simulate channels above 100 GHz in all 3GPP scenarios, including urban microcell (UMi), urban macrocell (UMa), rural macrocell (RMa), indoor hotspot (InH), and indoor factory (InF). We evaluate the full-stack downlink end-to-end performance (throughput, latency, and packet drops) experienced by a single user equipment (UE) connected to a Next Generation Node B (gNB) operating in the sub-THz bands for three gNB-UE antenna configurations (8x8-4x4, 16x16-4x4, and 64x64-8x8), using the NYUSIM channel model at 140 GHz in the ns-3 mmWave module. Additionally, we find that determining the exact number of realizations required to obtain statistically significant results on simulation platforms such as ns-3 remains challenging, as end-to-end performance metrics vary strongly with the number of realizations. Hence, we show the variation of throughput with the number of realizations and find the number of realizations required to obtain statistically significant results. We strongly encourage researchers worldwide to adopt a similar approach, as it enables readers to assess the accuracy and reliability of the reported results and enhances the overall interpretability of the findings.
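The point about choosing the number of realizations can be illustrated with a minimal sketch. One common approach is to track the 95% confidence interval of the running mean throughput across independent realizations and stop once its half-width falls below a chosen fraction of the mean. The abstract does not give the paper's exact stopping rule, so the per-realization throughput values, the 2% tolerance, and the normal-approximation confidence interval below are purely illustrative assumptions; in practice the samples would come from repeated ns-3 runs with independent NYUSIM channel realizations.

```cpp
// Illustrative sketch (not from the paper): estimating how many simulation
// realizations are needed before the mean throughput stabilizes.
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    // Hypothetical per-realization downlink throughput samples in Mbps.
    std::vector<double> throughputMbps = {512.3, 498.7, 530.1, 505.9, 521.4,
                                          489.2, 517.8, 502.6, 526.0, 511.5};

    const double z95 = 1.96;        // normal approximation for a 95% CI
                                    // (a Student-t quantile is more precise
                                    //  for small sample counts)
    const double tolerance = 0.02;  // stop when CI half-width < 2% of mean

    double sum = 0.0;
    double sumSq = 0.0;
    for (std::size_t n = 1; n <= throughputMbps.size(); ++n)
    {
        double x = throughputMbps[n - 1];
        sum += x;
        sumSq += x * x;

        double mean = sum / n;
        if (n < 2)
        {
            continue; // sample variance is undefined for a single sample
        }
        double var = (sumSq - n * mean * mean) / (n - 1);
        double halfWidth = z95 * std::sqrt(var / n);

        std::printf("n=%zu  mean=%.1f Mbps  95%% CI half-width=%.1f Mbps\n",
                    n, mean, halfWidth);

        if (halfWidth < tolerance * mean)
        {
            std::printf("Mean throughput stabilized after %zu realizations.\n", n);
            break;
        }
    }
    return 0;
}
```

Reporting such a convergence curve (metric versus number of realizations) alongside the final numbers is what lets readers judge whether the reported end-to-end results are statistically meaningful.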