Abstract:
In the digital era, the rapid dissemination of false information has emerged as a formidable challenge, undermining the credibility of online platforms and posing a threat to informed public discourse. Addressing this issue, our research introduces innovative enhancements to Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) architectures, aimed at bolstering the efficiency and accuracy of false information detection mechanisms. Leveraging the inherent strengths of GRU and LSTM models in capturing temporal dependencies within data, our proposed modifications incorporate adaptive learning rates and a novel hybrid attention mechanism. These enhancements facilitate a more nuanced understanding of the contextual and stylistic cues characteristic of false information, thereby significantly improving the models’ predictive performance. Through extensive experimentation on a diverse dataset comprising various forms of digital content, our findings reveal that the enhanced GRU and LSTM architectures outperform existing models in detection accuracy while simultaneously reducing computational overhead. This advancement in deep learning techniques represents a pivotal step towards more reliable and efficient false information detection, with potential applications spanning social media platforms, news outlets, and beyond. Our research not only contributes to the academic discourse on natural language processing and misinformation studies but also offers practical solutions for safeguarding the integrity of information in the digital landscape.
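The abstract does not detail the hybrid attention mechanism or the adaptive learning-rate scheme, so the following is only a minimal illustrative sketch of the general idea it describes: parallel GRU and LSTM branches over a token sequence, an additive attention layer that weights their combined hidden states, and an adaptive optimizer. All class names, dimensions, and hyperparameters are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the paper's exact "hybrid attention mechanism" and
# adaptive learning-rate scheme are not specified in the abstract. This combines
# a GRU branch and an LSTM branch with additive attention and uses Adam (an
# adaptive optimizer) as a stand-in. All names and sizes are assumptions.
import torch
import torch.nn as nn

class HybridAttentionClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Two recurrent branches capture temporal dependencies in the token sequence.
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Additive attention scores each time step of the concatenated branch outputs.
        self.attn = nn.Linear(4 * hidden_dim, 1)
        self.classifier = nn.Linear(4 * hidden_dim, num_classes)

    def forward(self, token_ids):
        x = self.embedding(token_ids)                        # (batch, seq, embed_dim)
        gru_out, _ = self.gru(x)                             # (batch, seq, 2*hidden)
        lstm_out, _ = self.lstm(x)                           # (batch, seq, 2*hidden)
        combined = torch.cat([gru_out, lstm_out], dim=-1)    # (batch, seq, 4*hidden)
        weights = torch.softmax(self.attn(combined), dim=1)  # (batch, seq, 1)
        context = (weights * combined).sum(dim=1)            # attention-weighted summary
        return self.classifier(context)                      # (batch, num_classes)

model = HybridAttentionClassifier()
# Adam adapts per-parameter learning rates; a scheduler could adjust them further.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
logits = model(torch.randint(1, 20000, (8, 50)))             # dummy batch of 8 sequences
print(logits.shape)                                          # torch.Size([8, 2])
```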
Published in: 2024 International Conference on Knowledge Engineering and Communication Systems (ICKECS)
Date of Conference: 18-19 April 2024
Date Added to IEEE Xplore: 07 August 2024