
Friendly Attacks to Improve Channel Coding Reliability

Authors:
Anastasiia Kurmukova, Deniz Gunduz
Keywords:
Computer Science, Information Theory (cs.IT), Machine Learning (cs.LG)
Journal:
--
Date:
2024-01-25
Abstract
This paper introduces a novel approach called "friendly attack" aimed at enhancing the performance of error correction channel codes. Inspired by the concept of adversarial attacks, our method leverages the observation that slight perturbations to a neural network's input can have a substantial impact on its performance. By introducing small perturbations to fixed-point modulated codewords before transmission, we effectively improve the decoder's performance without violating the input power constraint. The perturbation design is accomplished by a modified iterative fast gradient method. This study investigates various decoder architectures suitable for computing the gradients needed to obtain the desired perturbations. Specifically, we consider belief propagation (BP) for LDPC codes; the error correcting code transformer, BP, and neural BP (NBP) for polar codes; and neural BCJR for convolutional codes. We demonstrate that the proposed friendly attack method improves reliability across different channels, modulations, codes, and decoders. The method allows us to increase the reliability of communication with a legacy receiver simply by modifying the transmitted codeword appropriately.
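The abstract describes a gradient-based perturbation of the transmitted, power-constrained codeword: a modified iterative fast gradient method moves the modulated symbols in the direction that makes a differentiable decoder *more* likely to succeed. Below is a minimal, hypothetical sketch of such a loop in PyTorch. It assumes a differentiable `decoder` that maps channel observations to bit logits, BPSK modulation, an AWGN channel, and an average-power constraint enforced by renormalization; the function name `friendly_perturb` and the hyper-parameters `num_steps` and `step_size` are illustrative and not taken from the paper's implementation.

```python
import torch

def friendly_perturb(decoder, codeword_bits, sigma, num_steps=10, step_size=0.05):
    """Iteratively perturb a BPSK-modulated codeword so that a differentiable
    decoder recovers `codeword_bits` more reliably after AWGN noise
    (a 'friendly' rather than adversarial perturbation)."""
    x = 1.0 - 2.0 * codeword_bits.float()      # BPSK mapping: bit 0 -> +1, bit 1 -> -1
    power = x.pow(2).mean()                    # reference average transmit power
    x_pert = x.clone()

    for _ in range(num_steps):
        x_step = x_pert.detach().requires_grad_(True)
        noise = sigma * torch.randn_like(x_step)   # sample one AWGN realisation
        logits = decoder(x_step + noise)           # decoder's soft bit estimates
        loss = torch.nn.functional.binary_cross_entropy_with_logits(
            logits, codeword_bits.float())
        loss.backward()
        with torch.no_grad():
            # Step *against* the gradient to reduce the decoding loss
            # (the opposite direction of an adversarial attack) ...
            x_pert = x_step - step_size * x_step.grad.sign()
            # ... and rescale so the average power constraint still holds.
            x_pert = x_pert * torch.sqrt(power / x_pert.pow(2).mean())
    return x_pert.detach()
```

In the paper, this idea is instantiated with the decoders listed above (BP, NBP, the error correcting code transformer, and neural BCJR); the sketch only illustrates the general gradient-based, power-constrained perturbation step that all of these share.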
PDF: Friendly Attacks to Improve Channel Coding Reliability.pdf