
Deep Recurrent Neural Network Based Detector for OFDM With Index Modulation




Abstract:

Index modulation (IM) leads to a decrease in power consumption and transmitter complexity compared to classical orthogonal frequency division multiplexing (OFDM) systems. The overall bit-error rate (BER) performance of the OFDM with IM (OFDM-IM) system is significantly influenced by the accuracy of index-bit detection. To take advantage of IM, in this paper, we propose a recurrent neural network-based signal detection scheme for OFDM-IM. In addition, we introduce a more effective long short-term memory (LSTM)-based detection technique to improve the BER performance of the OFDM-IM system. The Adam optimization algorithm is utilized to reduce the total system loss. Before entering the network, the received signal and channel matrix are pre-processed based on domain knowledge to enhance the performance of the proposed system. First, the model is trained offline to minimize the BER using a simulated dataset, and then the trained model is employed in the online phase to detect the OFDM-IM signal. We compare the performance of the proposed LSTM-based detector with traditional detectors and other deep learning (DL) detectors. The simulation outcomes show that our proposed detector outperforms conventional detectors and other DL detectors under perfect and imperfect channel conditions.
Published in: IEEE Access ( Volume: 12)
Page(s): 89538 - 89547
Date of Publication: 26 June 2024
Electronic ISSN: 2169-3536


SECTION I.

Introduction

Orthogonal frequency division multiplexing (OFDM), introduced by Robert W. Chang of Bell Labs in 1966, operates as a frequency division multiplexing (FDM) system [1]. In OFDM, the complete channel is subdivided into numerous narrow-band subchannels, which are transmitted concurrently to sustain high-data-rate transmission [2]. Moreover, OFDM effectively mitigates the intersymbol interference (ISI) induced by the frequency selectivity of the wireless channel. Due to this capability, OFDM has emerged as the predominant multicarrier transmission technique in wireless communications and has been widely adopted as an essential component of the IEEE 802.16 standards [3]. However, in frequency-selective fading channels where mobile terminals reach high vehicular speeds, the rapid fluctuation of the wireless channel over an OFDM block transmission compromises the orthogonality of the subchannels. It is therefore challenging to design an OFDM system that operates efficiently under high-mobility conditions, even though mobility support stands out as a crucial feature of next-generation broadband wireless communication systems [4], [5]. Due to its low spectral decay rate, traditional OFDM is also inadequate for accommodating asynchronous access, customized subcarrier spacing and symbol periods, and spectrum aggregation, all of which are anticipated as requirements for fifth-generation (5G) wireless networks [6].

OFDM with index modulation (OFDM-IM) appears to be an attractive candidate for next-generation wireless communication because of its appealing benefits over traditional communication systems in terms of high energy efficiency, superior bit error rate (BER) performance, and lower hardware complexity [7], [8], [9]. It uses the indices of the building blocks of the corresponding communication system to transmit extra information bits [10]. IM maps information bits by adjusting the ON/OFF state of the transmission entities. By selectively deactivating specific elements of the system, OFDM-IM schemes can transmit information in a more energy-efficient way than traditional OFDM systems [11].

Many studies on OFDM-IM have been conducted in the past few years [3], [12], [13], [14], [15], [16], [17], [18], [19]. In [3], low-complexity near-optimal detectors for OFDM-IM were introduced to address the very high computational complexity of maximum likelihood (ML) detection, which is primarily attributed to the presence of index bits. For the family of OFDM subcarrier IM systems, the authors of [18] provided a signaling technique supported by compressed sensing (CS). After considering joint ML detection, they suggested a lower-complexity detection approach, called the iterative residual check (IRC)-based detector, for the CS-assisted index-modulated (CSIM) symbols and the conventional amplitude phase modulation (APM) symbols, although its complexity remains high. In [13], the authors presented a greedy detector (GD) utilizing energy detection to estimate the performance of OFDM-IM with very low complexity; however, it fails to achieve near-optimal performance. The log-likelihood ratio (LLR) detector was introduced in [14]. It can achieve near-ML performance, but with the drawback that the noise power spectral density of the received signal must be known.

Recently, numerous studies have applied deep learning (DL) in the field of wireless communication [20], [21], [22], [23], [24], [25], [26]. DL-based detectors have been explored in various experiments to decrease complexity and attain near-optimal performance in OFDM-IM systems [27], [28], [29], [30], [31], [32]. In [27], the authors proposed a deep learning-based detector called DeepIM for OFDM-IM. DeepIM consists of a fully connected neural network (FNN), and the authors used both rectified linear unit (ReLU) and hyperbolic tangent (Tanh) activation functions to detect the information bits. The performance of DeepIM is near optimal for imperfect channel state information (CSI), and it outperforms the GD method. Although this model provides very low complexity and a short runtime, it leaves a performance gap to ML under perfect CSI. To reduce this disparity, a bi-directional long short-term memory (Bi-LSTM)-based Y-BLSTM detector was proposed in [28]. The Y-BLSTM architecture employs two parallel sub-neural networks to independently learn the constellation and the information regarding the active indices. In [29], the authors introduced a detector based on convolutional neural networks (CNN), known as CNN-IM. In the CNN-IM framework, the received symbols are converted to polar coordinates to help the neural network determine the indices of the activated subcarriers. Both the CNN-IM and Y-BLSTM models outperform the GD and DeepIM detectors, but they are not able to attain near-optimal bit error rate (BER) performance. A transformer-based detector known as TransIM was proposed in [33]. TransIM operates with a mid-level modulation order of 16-QAM and produces soft probabilities for the various transmitted symbols. Due to the structure of the transformer, the complexity of TransIM is high, but it demonstrates enhanced BER performance compared to DeepIM and CNN-IM. A deep learning-based detector named IMNet was proposed in [34] to detect the transmitted signal in OFDM-IM for multiple-input, multiple-output (MIMO) systems. The IMNet model contains two CNNs, referred to as the antenna detection (AD) subnet and the signal detection (SD) subnet. The AD subnet consists of four CNN layers, and the SD subnet adopts a state-of-the-art denoising network commonly used in image processing. The BER performance of IMNet is better than that of the ML and LLR detectors, but the model complexity is higher than that of other DL models. A detector for dual-mode (DM) OFDM-IM, called DeepDM, was proposed in [30]. Both CNN and deep neural network (DNN) models are utilized in that work, where a CNN (IndexNet) detects the index bits and a DNN (CarrierNet) detects the carrier bits. The authors in [35] introduced a TSIMNet detector for a two-stage index-modulated universal filtered multi-carrier (TSIM-UFMC) system, aimed at improving performance and bringing the emerging UFMC technology to underwater acoustic (UWA) communications.

LSTM is a recurrent neural network (RNN) designed to process sequence data with long-term dependencies [36]. Many studies have utilized LSTM in wireless communication [37], [38], [39]. LSTM architectures add gating mechanisms, such as the forget gate, which enable them to regulate the flow of information and gradients throughout the network. This aids in preventing gradients from diminishing significantly during the training process [40]. LSTMs can thereby effectively tackle the vanishing gradient problem generated by backpropagation [41]. Due to the sequential nature of subcarrier activations, OFDM-IM signals usually display temporal dependence, and LSTM networks are well suited to modeling these temporal dynamics given their strength in capturing and analyzing sequential data. OFDM-IM systems may involve many subcarriers and complicated channel conditions; LSTMs can model such systems efficiently while providing a balance between performance and complexity [42]. LSTMs are also robust to changes in channel conditions and signal characteristics because they adapt their internal states based on the input data and prior states; this adaptability is essential for OFDM-IM systems operating in dynamic and noisy wireless environments. In summary, LSTMs are well suited to the particular difficulties presented by OFDM-IM communication systems because of their capacity to capture sequential dependencies, preserve long-term memory, manage complexity, adapt to changing conditions, and take advantage of parallel processing [43]. Motivated by the above-mentioned advantages of DL and the literature, in this paper we propose an LSTM-based detector for the OFDM-IM system. The proposed LSTM-IM detector can achieve better performance than existing manually designed detectors. The proposed model contains only one non-linear LSTM unit, with a Tanh activation layer, to efficiently detect the received signal under the Rayleigh fading channel. The key contributions of this paper can be outlined as follows:

  • An LSTM-based OFDM-IM detector is proposed in this paper, which can extract features very efficiently by capturing information from earlier time steps and retaining it for an extended period to process sequence data with long-term dependencies. Furthermore, the number of nodes in the hidden layer can be dynamically adjusted to detect the received signal with a suitable balance between complexity and performance.

  • Before being fed to the LSTM-IM, the received signal and channel data are pre-processed based on the OFDM-IM domain knowledge. This mechanism improves the detection accuracy of LSTM to identify the indices of the activated subcarriers.

  • We evaluate the proposed LSTM-IM detector’s BER performance at various signal-to-noise ratios (SNRs). The results confirm that the proposed LSTM-IM detector achieves better detection performance under both perfect and imperfect channel conditions.

The remainder of the paper is structured as follows: Section II presents the system model. Section III elaborates on the proposed model, including the offline training and online testing procedures. Section IV demonstrates the simulation results, Section V analyzes the computational complexity, and Section VI presents the conclusions.

SECTION II.

System Model

In OFDM-IM, the information bits are conveyed not only by standard amplitude phase modulation (APM) symbols but also by the indices of the activated subcarriers [44]. We consider that the total transmission bandwidth is divided into $G$ groups, each containing $N$ subcarriers, so the total number of transmitted subcarriers is $N_t$ and $N = N_t / G$. The signal processing at the transmitter is identical and independent for each OFDM-IM group; therefore, for simplicity, we focus on a single group. According to the principle of OFDM-IM, only $K$ subcarriers are activated, and the remaining $(N-K)$ subcarriers are zero-padded. In particular, a total of $p = p_1 + p_2$ data bits are transmitted in every transmission of each group: $p_1 = K \log_2 M$ bits are carried by the APM symbols and $p_2 = \lfloor \log_2 C(N, K) \rfloor$ bits are carried by the active subcarrier indices, where $M$ denotes the size of the $M$-ary modulation scheme. The mapping from the $p_2$ bits to a set of $K$ active indices can be implemented using combinatorial techniques. Consequently, by allocating $K$ non-zero data symbols to the $K$ active subcarriers, the transmitted vector $\mathbf{x} = [x_1, \ldots, x_N]$ is formed from the $p$ incoming bits. Hence, $x_i$ is non-zero if subcarrier $i$ is active, and $x_i = 0$ otherwise, for $i = 1, \ldots, N$. This bit-to-symbol mapping is represented by the function $\mathbf{x} = f_{\text{OFDM-IM}}(\mathbf{b}_g)$, where $\mathbf{b}_g$ denotes the incoming sequence of $p$ bits in a single group. The OFDM-IM signal transmission system is shown in Fig. 1. The input bits are split into several groups by the bit splitter, and each group contains both index bits and classic bits. The index bits select the indices of the active subcarriers, and the classic bits are mapped to the data symbols of the active subcarriers.
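As a concrete illustration of this bit split and index mapping, the following minimal sketch assumes NumPy, a unit-energy 4-QAM constellation, and a simple look-up-table realization of the combinatorial mapping; the variable and helper names are ours, not from the paper.

```python
import numpy as np
from itertools import combinations
from math import comb, floor, log2

N, K, M = 4, 1, 4                          # subcarriers per group, active subcarriers, M-ary order
p1 = K * int(log2(M))                      # bits carried by the APM symbols
p2 = floor(log2(comb(N, K)))               # bits carried by the active subcarrier indices
p = p1 + p2                                # total bits per group

# Look-up-table realization of the combinatorial mapping: the first 2**p2
# combinations of K active indices out of N subcarriers.
index_table = list(combinations(range(N), K))[:2 ** p2]

b_g = np.random.randint(0, 2, p)           # incoming bit sequence of one group
idx_bits, apm_bits = b_g[:p2], b_g[p2:]
active = index_table[int("".join(map(str, idx_bits)), 2)]   # chosen active subcarriers

# Map the APM bits to unit-energy 4-QAM (QPSK) symbols on the active subcarriers.
qpsk = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}
x = np.zeros(N, dtype=complex)
for k, sc in enumerate(active):
    x[sc] = qpsk[tuple(apm_bits[2 * k: 2 * k + 2])] / np.sqrt(2)
```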

FIGURE 1. The OFDM-IM signal transmission system with its different blocks.

The frequency-domain received signal at the receiver is represented by
\begin{equation*} \mathbf{y} = \mathbf{H} \odot \mathbf{x} + \mathbf{w}, \tag{1}\end{equation*}
where $\mathbf{H} = [H_1, \ldots, H_N]$ represents the Rayleigh fading channel with $H_i \sim \mathcal{CN}(0, 1)$, $\odot$ denotes element-wise multiplication, and $\mathbf{w}$ is the additive white Gaussian noise (AWGN) with $w_i \sim \mathcal{CN}(0, \sigma^2)$ for $i = 1, \ldots, N$. We assume that the average energy of the $M$-ary transmitted symbols is $E_a$, so the average SNR at the receiver is $\overline{\gamma} = E_a / \sigma^2$.
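For reference, a minimal NumPy sketch of the per-subcarrier channel model in (1), assuming unit-energy symbols ($E_a = 1$); the function name and signature are our own illustration.

```python
import numpy as np

def awgn_rayleigh_channel(x, snr_db, rng=np.random.default_rng()):
    """Pass one OFDM-IM group x through the per-subcarrier model y = H ⊙ x + w."""
    N = x.shape[0]
    H = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # H_i ~ CN(0, 1)
    sigma2 = 10 ** (-snr_db / 10)                       # noise variance for unit-energy symbols
    w = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    return H * x + w, H                                 # received signal and true channel
```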

SECTION III.

LSTM-IM Based Detection Framework

First, we describe the structure of the LSTM-IM model in this section. Then we present the offline training and online testing procedures of the proposed model with the generated data.

A. LSTM-IM Model Structure

A general structure of the proposed LSTM-IM model is shown in Fig. 2. As in existing detection methods for OFDM-IM, it is presumed that the channel information is known at the receiver. Therefore, the channel $\mathbf{H}$ and the received signal $\mathbf{y}$ are regarded as the preliminary inputs to the LSTM-IM model. For the imperfect CSI condition, we study a practical system in which the receiver's CSI estimate is inaccurate. Denoting the estimate of $h(\alpha)$ by $\hat{h}(\alpha)$, the channel is modeled as
\begin{equation*} h\left ({{\alpha }}\right )=\hat {h}\left ({{\alpha }}\right )+e\left ({{\alpha }}\right ), \tag {2}\end{equation*}
where $e(\alpha)$ represents the channel estimation error with $e(\alpha) \sim \mathcal{CN}(0, \epsilon^2)$, and $\hat{h}(\alpha) \sim \mathcal{CN}(0, 1 - \epsilon^2)$, where $\epsilon^2$ denotes the error variance of the CSI estimation. In particular, we utilize the variable imperfect CSI model with a minimum mean square error (MMSE) basis, as described in [45]. In this model, the CSI error variance depends on the average SNR, i.e., $\epsilon^2 = (1 + \overline{\gamma})^{-1}$.
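A small sketch of how this imperfect CSI model can be generated in simulation, assuming the MMSE-based error variance $\epsilon^2 = (1 + \overline{\gamma})^{-1}$; the helper is our own illustration, not the paper's code.

```python
import numpy as np

def rayleigh_with_imperfect_csi(N, snr_db, rng=np.random.default_rng()):
    """Generate a true channel h = h_hat + e following (2), with MMSE-based eps2 = 1/(1 + SNR)."""
    eps2 = 1.0 / (1.0 + 10 ** (snr_db / 10))            # CSI error variance
    h_hat = np.sqrt((1 - eps2) / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    e = np.sqrt(eps2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    return h_hat + e, h_hat                              # true channel, receiver's estimate
```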

FIGURE 2. The proposed deep RNN-based LSTM-IM model architecture and working flow process.

Domain knowledge here describes how IM assigns subcarriers based on the data to be conveyed, activating or deactivating subcarriers to express additional information. Furthermore, domain knowledge denotes an awareness of the properties of the communication channel, including impairments such as noise, interference, and multipath fading [46]. Before being fed as input to the LSTM-IM model, $\mathbf{y}$ and $\mathbf{H}$ are pre-processed based on this OFDM-IM domain knowledge. Specifically, the widely used zero-forcing (ZF) equalizer is applied first to obtain an equalized received signal vector, $\overline{\mathbf{y}} = \mathbf{y} \odot \mathbf{H}^{-1}$, where $\mathbf{H}^{-1}$ denotes the element-wise inverse of $\mathbf{H}$. This approach is expected to simplify the reconstruction of the $M$-ary symbols on the active subcarriers. To create the input of the LSTM-IM detector, the received signal energy $\mathbf{y}_e$ is computed and combined with $\overline{\mathbf{y}}$; note that $\mathbf{y}_e$ is also employed in the GD to decode the active subcarrier indices, so including it enhances index detection. The real and imaginary parts of $\overline{\mathbf{y}}$ are concatenated with $\mathbf{y}_e$ to form the $3N$-dimensional input vector.
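A minimal sketch of this pre-processing step, assuming NumPy; the feature ordering and helper name are our own choices rather than the paper's exact implementation.

```python
import numpy as np

def preprocess(y, H):
    """Build the 3N-dimensional LSTM-IM input: [Re(y/H), Im(y/H), |y|^2]."""
    y_bar = y / H                    # zero-forcing equalization, y ⊙ H^{-1}
    y_e = np.abs(y) ** 2             # received-signal energy, as used by the greedy detector
    return np.concatenate([y_bar.real, y_bar.imag, y_e])   # shape (3N,)
```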

Our proposed model is constructed with an LSTM layer and a fully connected (FC) layer with a sigmoid activation. The pre-processed data $\mathbf{D}$ is fed to the LSTM-IM input layer. Since the LSTM layer expects a three-dimensional (3D) input, the pre-processed data is reshaped to match the LSTM input shape by the following function:
\begin{equation*} \mathbf {D}= \mathrm{reshape} (\mathbf {y}, [-1, N, 3]). \tag {3}\end{equation*}

Then this reshaped data is fed to the LSTM hidden layer. The internal structure of the LSTM layer is shown in Fig. 3. It consists of an input gate, a forget gate, and an output gate [47]. Each gate computes its value using a fully connected layer with a sigmoid activation function, so the value of each gate falls within the range $(0, 1)$. These three gates can be expressed mathematically as follows:
\begin{align*} \mathbf{i}_t &= \sigma_s(\mathbf{D}_t \mathbf{W}_{di} + \mathbf{h}_{t-1} \mathbf{W}_{hi} + \mathbf{b}_i), \tag{4}\\ \mathbf{f}_t &= \sigma_s(\mathbf{D}_t \mathbf{W}_{df} + \mathbf{h}_{t-1} \mathbf{W}_{hf} + \mathbf{b}_f), \tag{5}\\ \mathbf{o}_t &= \sigma_s(\mathbf{D}_t \mathbf{W}_{do} + \mathbf{h}_{t-1} \mathbf{W}_{ho} + \mathbf{b}_o), \tag{6}\end{align*}
where $\mathbf{D}_t$ is the input at time step $t$ and $\sigma_s$ denotes the sigmoid activation function. At time step $t$, $\mathbf{i}_t$, $\mathbf{o}_t$, and $\mathbf{f}_t$ represent the input, output, and forget gates, respectively. $\mathbf{h}_{t-1}$ is the hidden state of the previous time step. $\mathbf{W}_{hi}$, $\mathbf{W}_{hf}$, $\mathbf{W}_{ho}$ and $\mathbf{W}_{di}$, $\mathbf{W}_{df}$, $\mathbf{W}_{do}$ are weight parameters, and $\mathbf{b}_i$, $\mathbf{b}_f$, $\mathbf{b}_o$ are bias parameters.

FIGURE 3. The internal structure of the LSTM layer with its different gates.

Next, the model updates the memory cell and the hidden state using the input node at time step $t$ as follows:
\begin{align*} \check{\mathbf{c}}_t &= \tanh(\mathbf{D}_t \mathbf{W}_{dc} + \mathbf{h}_{t-1} \mathbf{W}_{hc} + \mathbf{b}_c), \tag{7}\\ \mathbf{c}_t &= \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t \odot \check{\mathbf{c}}_t, \tag{8}\\ \mathbf{H}_t &= \mathbf{o}_t \odot \tanh(\mathbf{c}_t), \tag{9}\end{align*}
where $\mathbf{c}_t$ is the internal state of the memory cell and $\mathbf{H}_t$ is the output of the hidden layer at the current time step. $\tanh$ denotes the hyperbolic tangent activation function with range $(-1, 1)$. $\mathbf{W}_{dc}$ and $\mathbf{W}_{hc}$ are weight parameters, $\mathbf{b}_c$ is a bias parameter, and $\odot$ denotes the Hadamard product for element-wise multiplication.
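For concreteness, a NumPy sketch of one LSTM time step implementing (4)-(9); the dictionary-based weight layout is an assumption made for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(D_t, h_prev, c_prev, W_d, W_h, b):
    """One LSTM time step following (4)-(9).
    W_d, W_h, b are dicts keyed by 'i', 'f', 'o', 'c' for the gates and the cell candidate."""
    i_t = sigmoid(D_t @ W_d['i'] + h_prev @ W_h['i'] + b['i'])    # input gate, (4)
    f_t = sigmoid(D_t @ W_d['f'] + h_prev @ W_h['f'] + b['f'])    # forget gate, (5)
    o_t = sigmoid(D_t @ W_d['o'] + h_prev @ W_h['o'] + b['o'])    # output gate, (6)
    c_hat = np.tanh(D_t @ W_d['c'] + h_prev @ W_h['c'] + b['c'])  # cell candidate, (7)
    c_t = f_t * c_prev + i_t * c_hat                              # memory cell update, (8)
    H_t = o_t * np.tanh(c_t)                                      # hidden-state output, (9)
    return H_t, c_t
```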

The output of the LSTM hidden layer is fed as the input to the FC layer. In this layer, the sigmoid activation function $f_{\text{Sigmoid}}(x) = 1/(1 + e^{-x})$ is deployed to map each element of the output vector into the interval $(0, 1)$. The output vector $\hat{\mathbf{b}}_g$ of the FC layer can be represented as
\begin{equation*} \hat{\mathbf{b}}_g = f_{\text{Sigmoid}}(\mathbf{W}_s \mathbf{H}_t + \mathbf{b}_s), \tag{10}\end{equation*}
where $\mathbf{W}_s$ represents the weight matrix and $\mathbf{b}_s$ the bias vector of the FC layer. Finally, we obtain the detected bit vector $\hat{\mathbf{b}}_g$ at the output.
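Putting the pieces together, a minimal Keras sketch of the LSTM-IM architecture described above (one LSTM layer followed by an FC sigmoid layer), with Q = 128 and a 0.01 learning rate as in Table 1; this is our reconstruction under stated assumptions, not the authors' released code.

```python
import tensorflow as tf

N, Q, p = 4, 128, 4                          # subcarriers per group, hidden units, bits per group

model = tf.keras.Sequential([
    tf.keras.Input(shape=(N, 3)),            # reshaped pre-processed input, eq. (3)
    tf.keras.layers.LSTM(Q),                 # single LSTM layer with tanh cell activation
    tf.keras.layers.Dense(p, activation='sigmoid'),   # FC layer with sigmoid, eq. (10)
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss='mse')                    # MSE training loss, eq. (11)
```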

B. Offline Training Process

Before employing the proposed LSTM-IM detector, it is necessary to train the model in the offline phase using simulation data. Specifically, multiple sequences of $p$ bits $\mathbf{b}_g$ are generated, and the corresponding set of transmit vectors is produced. Each vector is then sent through the Rayleigh fading channel with AWGN. Depending on the statistical models, the noise and channels are also generated randomly and vary between different bit sequences. We pre-process the channel vector $\mathbf{H}$ and the received signal $\mathbf{y}$ to prepare the training dataset of the model, whose labels correspond to the bit sequence $\mathbf{b}_g$, as described in the earlier section. A large number of training samples is used to prevent overfitting during training.

The parameters used to train the proposed model are shown in Table 1. In all experimental setups under consideration, the proposed LSTM-IM model is trained for 50 epochs. Each epoch contains 20 batches with a batch size of 5000, i.e., 100,000 data samples per epoch and a total of 5,000,000 sample presentations over the 50 epochs. We employ the adaptive moment estimation (Adam) optimizer, which is readily available on many commercial DL platforms, including TensorFlow and Keras. LSTM-IM training requires careful selection of the training SNR level, because the model's performance is highly dependent on it. It is likewise essential to select a suitable learning rate so that the model performs well over different ranges of SNR. We apply different training SNRs for different training sequences, and we train our model with different learning rates and compare them to find the best performance. As discussed earlier, the total set of subcarriers is divided into several groups, each containing N = 4 subcarriers, of which only 1, 2, or 3 subcarriers are activated (i.e., K = 1, 2, or 3).

TABLE 1. The Simulation Parameters for the Proposed System

The LSTM-IM model is trained with the collected data to minimize the disparity between the true bits and the predicted bits, and thereby the BER. In this paper, we apply the mean square error (MSE) function to calculate the training loss as follows:
\begin{equation*} \mathcal{L}(\mathbf{b}_g, \hat{\mathbf{b}}_g; \theta) = \frac{1}{P} \left\| \mathbf{b}_g - \hat{\mathbf{b}}_g \right\|^2, \tag{11}\end{equation*}
where $\theta$ represents the biases and weights of the model. The SGD algorithm can be used to update the model parameters $\theta$ for randomly selected batches from the data samples as follows:
\begin{equation*} \theta^{+} := \theta - \eta \nabla \mathcal{L}\left({\mathbf{b}_g, \hat{\mathbf{b}}_g; \theta}\right), \tag{12}\end{equation*}
where $\eta$ represents the learning rate, i.e., the SGD step size.
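A hedged sketch of one training step for the model defined above: the loss is the MSE of (11), and the gradient step follows the spirit of (12) but uses the Adam optimizer, as the paper does in practice. The batch tensors `D_batch` and `b_batch` are placeholders for the simulated training data, and `model` is the Keras sketch from Section III-A.

```python
import tensorflow as tf

loss_fn = tf.keras.losses.MeanSquaredError()             # MSE loss, eq. (11)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

@tf.function
def train_step(D_batch, b_batch):
    """One parameter update: Adam applied to the gradient of the MSE loss (cf. eq. (12))."""
    with tf.GradientTape() as tape:
        b_hat = model(D_batch, training=True)            # forward pass of the LSTM-IM model
        loss = loss_fn(b_batch, b_hat)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```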

The training performance of the LSTM-IM model is shown in Fig. 5 and Fig. 6 using the MSE loss. The size of the LSTM hidden layer is an important factor in capturing temporal relationships, and it affects the model's capacity to reduce the MSE loss during training. We calculate the training loss for different numbers of LSTM-IM hidden units with a 10 dB training SNR and one active subcarrier. From Fig. 5, we can see that the estimated loss decreases as the number of LSTM-IM hidden units increases. In Fig. 6, we compare our model's loss with the CNN-IM and DeepIM models with three active subcarriers. To reduce the discrepancy between the model's predictions and the actual target values, as measured by the MSE loss function, the LSTM hidden-layer weights are adjusted by backpropagation during the training process [42]. Optimizing the model's performance in minimizing this loss involves iteratively updating the activations and weights of the hidden layer. According to Fig. 6, at epoch 45, the DeepIM model has a loss of around 0.06, while the CNN-IM model exhibits a loss of approximately 0.045. In contrast, our proposed model achieves a lower loss of nearly 0.035. The loss of the proposed model is comparatively low and stabilizes after 40 epochs.

FIGURE 4. Overview of the proposed LSTM-IM model training and testing process.

FIGURE 5. The proposed LSTM-IM training loss for different numbers of Q with SNR = 10 dB and K = 1.

FIGURE 6. Comparison of training loss for the proposed LSTM-IM with the CNN-IM and DeepIM models for SNR = 10 dB and K = 3.

C. Online Testing Process

After the offline training is completed, the model is employed for online OFDM-IM signal detection with the optimized parameters $\theta$ over the channel of interest. More precisely, without additional training of $\theta$, the proposed scheme can be applied to estimate the data bits under different channel fading scenarios. We test our model's performance with 100,000 data samples under perfect and imperfect CSI conditions.
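A brief sketch of the online detection step under the same assumptions as the earlier snippets: `preprocess` and `model` are the helpers sketched in Section III-A, the per-subcarrier feature ordering in the reshape is an assumption, and the sigmoid outputs are thresholded at 0.5 to obtain hard bit decisions.

```python
import numpy as np

def detect_bits(model, y, H):
    """Online LSTM-IM detection: pre-process, run the trained model, threshold at 0.5."""
    feats = preprocess(y, H)                   # length-3N vector [Re(y_bar), Im(y_bar), y_e]
    D = feats.reshape(3, -1).T[np.newaxis]     # shape (1, N, 3): one feature triplet per subcarrier
    b_hat = model.predict(D, verbose=0)[0]     # soft sigmoid outputs in (0, 1)
    return (b_hat > 0.5).astype(int)           # hard bit decisions
```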

SECTION IV.

Simulation Results

The learning rate is an important hyperparameter that determines how much the model weights are adjusted based on the estimated error at each update. The batch size is another important hyperparameter that has a direct impact on training efficiency. Fig. 7(a) compares the performance of the LSTM-IM model for different batch sizes, and Fig. 7(b) compares the performance for different learning rates. The LSTM-IM model performs well at a batch size of 5000; increasing or decreasing the batch size can degrade performance. In Fig. 7(b), we evaluate the model performance for learning rates of 0.001, 0.01, and 0.02. The LSTM-IM model demonstrates good BER performance with a learning rate of 0.02 at lower SNR levels, but its performance deteriorates significantly at higher SNR levels. Specifically, the model exhibits nearly 0.9 dB worse performance with a learning rate of 0.001 compared to a learning rate of 0.01 at an SNR of 25 dB. Overall, the LSTM-IM model performs best when trained with a learning rate of 0.01.

FIGURE 7. Comparative results of the LSTM-IM model for different batch sizes in (a) and different learning rates in (b) with Q = 128 and K = 1.

We estimate the BER for different values of K and different training SNRs, as shown in Fig. 8(a) and Fig. 8(b), respectively. From Fig. 8(a), we can see that the LSTM-IM detector performs very well when trained for K = 1. In this study, K = 1 means that only one subcarrier is activated out of the 4 subcarriers. When a small number of subcarriers is activated, transmission occurs at lower data rates; as the number of active subcarriers increases, the data transmission rate also increases. Higher data rates correspond to a larger p or, equivalently, a larger number of classes involved in LSTM-IM. Consequently, the BER performance degrades as K and the data rate increase, as observed in the figure. From Fig. 8(a), it is evident that the BER performance for K = 2 and K = 3 is much poorer than for K = 1. Thus, the proposed model performs better with K = 1, i.e., at lower data transmission rates. In Fig. 8(b), the model is trained with K = 1. It is evident that the model demonstrates good performance when trained with a 10 dB SNR, whereas increasing or decreasing the training SNR can lead to a decrease in performance. Specifically, with a 5 dB training SNR, the model performs better at lower SNR levels but exhibits very poor performance at higher SNR levels. For a training SNR of 20 dB, the LSTM-IM model exhibits approximately 0.8 dB worse performance compared to a training SNR of 10 dB at an SNR of 20 dB. Similarly, for a training SNR of 25 dB, the model demonstrates nearly 2 dB worse performance than with a training SNR of 10 dB. Hence, the LSTM-IM model shows the best performance at a 10 dB training SNR and K = 1, i.e., for the (N, K, M) = (4, 1, 4) combination.

FIGURE 8. Comparative BER of the LSTM-IM detector for different K in (a) and different training SNRs in (b).

We investigate the proposed model's BER performance with respect to SNR for different numbers of hidden units Q, as shown in Fig. 9. In this case, we train the model with a 10 dB SNR and the (N, K, M) = (4, 1, 4) setup. From the estimated results, it is clear that the LSTM-IM detector performs well even with a very small number of hidden units, although it exhibits slightly different performance for different values of Q. The proposed model demonstrates strong performance with 16 hidden units and improves consistently as Q increases; notably, its performance surpasses all previous results when Q reaches 128. When Q is further increased to 256, the model performs better than with Q = 128 at lower SNR levels, but its performance declines as the SNR increases. Specifically, the proposed model performs better with Q = 256 up to 15 dB SNR; beyond this point, its performance gradually diminishes and becomes significantly weaker than with Q = 128. Consequently, the findings depicted in Fig. 9 lead to the conclusion that the model exhibits superior performance at Q = 128.

FIGURE 9. Simulation results of the proposed model for different numbers of Q with SNR = 10 dB and K = 1.

In Fig. 10, we compare the performance of the LSTM-IM model with the DeepIM (ReLU and Tanh) [27], Y-BLSTM [28], GD, and ML detectors under perfect CSI conditions with the same parameter settings. This comparison is specifically designed for the (N, K, M) = (4, 1, 4) combination with a training SNR of 10 dB. It is important to note that all schemes are configured with the same parameters, i.e., DeepIM, Y-BLSTM, GD, and ML also use the (N, K, M) = (4, 1, 4) combination. Moreover, the results for the LSTM-IM model in Fig. 10 are obtained with one active subcarrier (K = 1), a batch size of 5000, a learning rate of 0.01, and Q = 128. From the figure, it is clear that the LSTM-IM detector outperforms the DeepIM, Y-BLSTM, and GD detectors. Although the LSTM-IM detector performs close to the ML detector at lower SNR, it outperforms ML at higher SNR, as shown in Fig. 10.

FIGURE 10. BER comparison of the proposed detector with reference detectors under the perfect CSI condition at K = 1.

Figure 11 compares the performance of the LSTM-IM, CNN-IM [29], DeepIM [27], and Y-BLSTM [28] detectors under perfect CSI. In this comparison, the LSTM-IM model is trained with (N, K, M) = (4, 3, 4); the CNN-IM, DeepIM, and Y-BLSTM detectors are trained with the same (N, K, M) = (4, 3, 4) combination. It should be noted that we calculate the BER at 15 dB SNR. From Fig. 11, we can see that LSTM-IM performs better than the CNN-IM, Y-BLSTM, and DeepIM models even at this higher data rate.

FIGURE 11. Comparison of BER performance of the LSTM-IM model with reference models under the perfect CSI condition at K = 3.

We compare the BER of LSTM-IM with other competing schemes in Fig. 12 under imperfect CSI. Specifically, we use the variable imperfect CSI model based on MMSE as described in [45], in which the CSI error variance $\epsilon^2$ varies with the average SNR. In this case, we train our model with a 10 dB SNR and Q = 128. From Fig. 12, it is clear that the LSTM-IM detector outperforms the DeepIM, GD, and ML detectors under both perfect and imperfect channel conditions, which indicates that the LSTM-IM detector can learn and memorize the characteristics of the true channel very effectively.

FIGURE 12. Evaluation of BER performance of the LSTM-IM detector with other detectors for the imperfect channel at K = 1.

SECTION V.

Computational Complexity

We calculate the computational complexity of the proposed model and compare it with other models, as shown in Table 2. Additions, multiplications, and other real-number operations are included in the total number of real floating-point operations (flops) counted [48]. The first layer of our proposed model is the LSTM layer, which requires $16 \times \mathbf{D} \times Q^2$ flops. The second layer is the FC layer with a sigmoid activation function, which requires $5 \times (\mathbf{H}_t \times \mathbf{b}_g)$ flops. Note that $\mathbf{D} = 12$, $\mathbf{H}_t = 128$, and $\mathbf{b}_g = 4$, and we calculate the flops for $Q = 128$. The LSTM-IM model therefore needs a total of $3.148 \times 10^6$ flops for a single batch, while the ML model needs $1.3 \times 10^7$ flops. This validates that the proposed model reduces complexity.
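A short check of the flop counts stated above, reproducing the paper's totals; the counting convention is the paper's, not a general-purpose FLOP formula.

```python
D, H_t, b_g, Q = 12, 128, 4, 128       # input width, hidden output size, output bits, hidden units

lstm_flops = 16 * D * Q * Q            # LSTM layer:      3,145,728 flops
fc_flops = 5 * H_t * b_g               # FC + sigmoid:        2,560 flops
total = lstm_flops + fc_flops          # LSTM-IM total:   3,148,288 ≈ 3.148e6 flops
ml_flops = 1.3e7                       # ML detector, for comparison
print(total, total / ml_flops)         # roughly a 4x reduction versus ML
```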

TABLE 2. Computational Complexity Comparison

SECTION VI.

Conclusion

This paper outlines our effort to develop an LSTM-based detection scheme to detect the OFDM-IM symbols in wireless communication systems. In order to effectively extract features from the OFDM-IM symbols, the LSTM layer is employed. In this paper, pre-processing is done to improve the detection accuracy based on the structure of the OFDM-IM symbol. The LSTM-IM detector can recover data bits very efficiently by utilizing the pre-processed received signal and channel vector based on domain knowledge. The simulation results validate that the proposed detector outperforms the CNN-IM, Y-BLSTM, DeepIM, GD, and ML detectors under both perfect and imperfect CSI conditions. We believe that the proposed framework will play an important role in future wireless communication systems.

References
1.
S. B. Weinstein, "The history of orthogonal frequency-division multiplexing [History of communications]", IEEE Commun. Mag., vol. 47, no. 11, pp. 26-35, Nov. 2009.
2.
L. Cimini, "Analysis and simulation of digital mobile channel using orthogonal frequency division multiplexing", IEEE Trans. Commun., vol. 42, no. 2, pp. 2908-2914, Jul. 1994.
3.
E. Basar, Ü. Aygölü, E. Panayirci and H. V. Poor, "Orthogonal frequency division multiplexing with index modulation", IEEE Trans. Signal Process., vol. 61, no. 22, pp. 5536-5549, Nov. 2013.
4.
H. Senol, E. Panayirci and H. V. Poor, "Nondata-aided joint channel estimation and equalization for OFDM systems in very rapidly varying mobile channels", IEEE Trans. Signal Process., vol. 60, no. 8, pp. 4236-4253, Aug. 2012.
5.
E. Panayirci, H. Senol and H. V. Poor, "Joint channel estimation equalization and data detection for OFDM systems in the presence of very high mobility", IEEE Trans. Signal Process., vol. 58, no. 8, pp. 4225-4238, Aug. 2010.
6.
S. Venkatesan and R. A. Valenzuela, "OFDM for 5G: Cyclic prefix versus zero postfix and filtering versus windowing", Proc. IEEE Int. Conf. Commun. (ICC), pp. 1-5, May 2016.
7.
E. Basar, M. Wen, R. Mesleh, M. Di Renzo, Y. Xiao and H. Haas, "Index modulation techniques for next-generation wireless networks", IEEE Access, vol. 5, pp. 16693-16746, 2017.
8.
T. Mao, Z. Wang, Q. Wang, S. Chen and L. Hanzo, "Dual-mode index modulation aided OFDM", IEEE Access, vol. 5, pp. 50-60, 2017.
9.
K.-H. Kim and H. Park, "New design of constellation and bit mapping for dual mode OFDM-IM", IEEE Access, vol. 7, pp. 52573-52580, 2019.
10.
E. Basar, "Index modulation techniques for 5G wireless networks", IEEE Commun. Mag., vol. 54, no. 7, pp. 168-175, Jul. 2016.
11.
T. Mao, Q. Wang, Z. Wang and S. Chen, "Novel index modulation techniques: A survey", IEEE Commun. Surveys Tuts., vol. 21, no. 1, pp. 315-348, 1st Quart. 2019.
12.
R. Fan, Y. J. Yu and Y. L. Guan, "Generalization of orthogonal frequency division multiplexing with index modulation", IEEE Trans. Wireless Commun., vol. 14, no. 10, pp. 5350-5359, Oct. 2015.
13.
J. Crawford and Y. Ko, "Low complexity greedy detection method with generalized multicarrier index keying OFDM", Proc. IEEE 26th Annu. Int. Symp. Pers. Indoor Mobile Radio Commun. (PIMRC), pp. 688-693, Aug. 2015.
14.
E. Basar, Ü. Aygölü and E. Panaylrcl, "Orthogonal frequency division multiplexing with index modulation in the presence of high mobility", Proc. 1st Int. Black Sea Conf. Commun. Netw., pp. 147-151, Jul. 2013.
15.
M. Wen, X. Cheng, M. Ma, B. Jiao and H. V. Poor, "On the achievable rate of OFDM with index modulation", IEEE Trans. Signal Process., vol. 64, no. 8, pp. 1919-1932, Apr. 2016.
16.
R. Abu-alhiga and H. Haas, "Subcarrier-index modulation OFDM", Proc. IEEE 20th Int. Symp. Pers. Indoor Mobile Radio Commun., pp. 177-181, Sep. 2009.
17.
Y. Xiao, S. Wang, L. Dan, X. Lei, P. Yang and W. Xiang, "OFDM with interleaved subcarrier-index modulation", IEEE Commun. Lett., vol. 18, no. 8, pp. 1447-1450, Aug. 2014.
18.
H. Zhang, L.-L. Yang and L. Hanzo, "Compressed sensing improves the performance of subcarrier index-modulation-assisted OFDM", IEEE Access, vol. 4, pp. 7859-7873, 2016.
19.
H. Zhang, C. Jiang, L.-L. Yang, E. Basar and L. Hanzo, "Linear precoded index modulation", IEEE Trans. Commun., vol. 67, no. 1, pp. 350-363, Jan. 2019.
20.
T. Erpek, T. J. O'Shea, Y. E. Sagduyu, Y. Shi and T. C. Clancy, "Deep learning for wireless communications", Develop. Anal. Deep Learn. Archit., vol. 867, pp. 223-266, 2020.
21.
L. Dai, R. Jiao, F. Adachi, H. V. Poor and L. Hanzo, "Deep learning for wireless communications: An emerging interdisciplinary paradigm", IEEE Wireless Commun., vol. 27, no. 4, pp. 133-139, Aug. 2020.
22.
M. A. Aziz, M. H. Rahman, M. A. S. Sejan, J.-I. Baik, D.-S. Kim and H.-K. Song, "Spectral efficiency improvement using bi-deep learning model for IRS-assisted MU-MISO communication system", Sensors, vol. 23, no. 18, pp. 7793, Sep. 2023.
23.
M. H. Rahman, M. A. S. Sejan, M. A. Aziz, J.-I. Baik, D.-S. Kim and H.-K. Song, "Deep learning based improved cascaded channel estimation and signal detection for reconfigurable intelligent surfaces-assisted MU-MISO systems", IEEE Trans. Green Commun. Netw., vol. 7, no. 3, pp. 1515-1527, Jan. 2023.
24.
M. H. Rahman, M. A. S. Sejan, M. A. Aziz, D.-S. Kim, Y.-H. You and H.-K. Song, "Deep convolutional and recurrent neural-network-based optimal decoding for RIS-assisted MIMO communication", Mathematics, vol. 11, no. 15, pp. 3397, Aug. 2023.
25.
J. Jiao, X. Sun, L. Fang and J. Lyu, "An overview of wireless communication technology using deep learning", China Commun., vol. 18, no. 12, pp. 1-36, Dec. 2021.
26.
M. H. Rahman, M. A. S. Sejan, M. A. Aziz, R. Tabassum, J.-I. Baik and H.-K. Song, "Deep learning based one bit-ADCs efficient channel estimation using fewer pilots overhead for massive MIMO system", IEEE Access, vol. 12, pp. 1-14, 2024.
27.
T. V. Luong, Y. Ko, N. A. Vien, D. H. N. Nguyen and M. Matthaiou, "Deep learning-based detector for OFDM-IM", IEEE Wireless Commun. Lett., vol. 8, no. 4, pp. 1159-1162, Aug. 2019.
28.
Y. Zhu, B. Wang, J. Li, Y. Zhang and F. Xie, "Y-shaped net-based signal detection for OFDM-IM systems", IEEE Commun. Lett., vol. 26, no. 11, pp. 2661-2664, Nov. 2022.
29.
T. Wang, F. Yang, J. Song and Z. Han, "Deep convolutional neural network-based detector for index modulation", IEEE Wireless Commun. Lett., vol. 9, no. 10, pp. 1705-1709, Oct. 2020.
30.
J. Kim, H. Ro and H. Park, "Deep learning-based detector for dual mode OFDM with index modulation", IEEE Wireless Commun. Lett., vol. 10, no. 7, pp. 1562-1566, Jul. 2021.
