
On Achieving High-Fidelity Grant-free Non-Orthogonal Multiple Access

Author:
Haoran Mei, Limei Peng, Pin-Han Ho
Keywords:
Computer Science; Information Theory (cs.IT); Networking and Internet Architecture (cs.NI)
journal:
--
date:
2024-01-10
Abstract
Grant-free access (GFA) has been envisioned to play an active role in massive Machine Type Communication (mMTC) under 5G and beyond mobile systems, targeting a significant reduction of signaling overhead and access latency in the presence of sporadic traffic and small-size data. This paper focuses on a novel K-repetition GFA (K-GFA) scheme that incorporates Reed-Solomon (RS) coding with contention resolution diversity slotted ALOHA (CRDSA), aiming to achieve high-reliability and low-latency access in the presence of massive uncoordinated MTC devices (MTCDs). We first define a MAC-layer transmission structure at each MTCD that supports message-level RS coding of a data message of $Q$ packets, where an RS codeword of $KQ$ packets is generated and sent in a super time frame (STF) composed of $Q$ time frames. The access point (AP) can recover the original $Q$ packets of the data message if at least $Q$ of the $KQ$ codeword packets are successfully received. The AP buffers the received MTCD signals of each resource block (RB) within an STF and performs CRDSA-based multi-user detection (MUD) by exploiting signal-level inter-RB correlation via iterative interference cancellation (IIC). For the proposed CRDSA-based K-GFA scheme, we provide a complexity analysis and derive a closed-form analytical model of the per-MTCD access probability, along with a simplified approximate form. Extensive numerical experiments are conducted to validate the effectiveness of the proposed scheme and to gain a deep understanding of its performance with respect to key operational parameters.
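To make the recovery rule and the IIC loop concrete, below is a minimal Monte Carlo sketch in Python. It abstracts the paper's signal-level MUD into a simple collision-channel model (a packet is decodable only when it sits alone in its RB), which is a simplification for illustration, not the paper's receiver. The names `N`, `R`, and `simulate_stf` are illustrative assumptions; only `Q` and `K` follow the abstract's notation. Each MTCD places $K$ of its $KQ$ RS-coded packets in random RBs of each of the $Q$ frames; once any $Q$ packets of a device are decoded, the MDS property of the RS code lets the AP regenerate all $KQ$ packets and cancel them from the buffered STF, which may free further singleton RBs.

```python
import random
from collections import defaultdict

# Illustrative Monte Carlo sketch (not the paper's analytical model).
# Assumed parameters: N active MTCDs, R resource blocks (RBs) per time
# frame, Q frames per super time frame (STF), repetition factor K.
# Each MTCD RS-encodes its Q-packet message into K*Q coded packets and
# sends K of them in uniformly random distinct RBs of each frame.

def simulate_stf(num_devices, rbs_per_frame, Q, K, rng):
    """Return the number of MTCDs whose Q-packet message is recovered."""
    occupancy = defaultdict(list)   # (frame, rb) -> list of (device, pkt)
    placements = defaultdict(list)  # device -> list of (frame, rb) it used
    for dev in range(num_devices):
        pkt = 0
        for frame in range(Q):
            for rb in rng.sample(range(rbs_per_frame), K):
                occupancy[(frame, rb)].append((dev, pkt))
                placements[dev].append((frame, rb))
                pkt += 1
    decoded = defaultdict(set)      # device -> indices of decoded packets
    resolved = set()
    progress = True
    while progress:                 # CRDSA-style iterative cancellation
        progress = False
        # 1) Decode every packet that is alone in its RB (singleton).
        for slot, pkts in occupancy.items():
            if len(pkts) == 1:
                dev, pkt = pkts[0]
                if pkt not in decoded[dev]:
                    decoded[dev].add(pkt)
                    progress = True
        # 2) A device with >= Q of its K*Q coded packets decoded is
        #    resolved (MDS property): regenerate all of its packets and
        #    cancel them from every RB it used within the STF.
        for dev in range(num_devices):
            if dev not in resolved and len(decoded[dev]) >= Q:
                resolved.add(dev)
                for slot in placements[dev]:
                    occupancy[slot] = [p for p in occupancy[slot]
                                       if p[0] != dev]
                progress = True
    return len(resolved)

rng = random.Random(7)
N, R, Q, K, trials = 20, 30, 4, 2, 200   # placeholder values
hits = sum(simulate_stf(N, R, Q, K, rng) for _ in range(trials))
print(f"empirical access probability ~ {hits / (N * trials):.3f}")
```

The printed empirical access probability can serve as a cross-check against the closed-form analytical model derived in the paper; the parameter values above are placeholders for experimentation, not values taken from the paper.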