I. Introduction
Deep Learning (DL) consistently delivers groundbreaking results and systems in critical fields such as autonomy, where multi-class classification models are required; detecting and rectifying bugs in DL techniques is therefore crucial [1]. With the advent of new digital power systems, the scale, variety, and use of power data have grown substantially, posing serious challenges and risks to power data security [2]. However, even when a power system is designed with security in mind, it often remains misaligned with its target objectives, and directly optimizing context and label information can lead to formatting issues that hinder language-model-based classification [3]. A BERT-based language model, combined with a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network, can meet the security requirements of data in multiple scenarios, thereby enhancing the accuracy of the classification algorithm [4]. This approach aims to automate the classification of power system data, improving the analysis of private data, while a decentralized DL approach addresses the high cost of transmitting raw data to a central entity [5]. The main contributions of this paper are outlined below:
- BERT's ability to capture contextual information aids in making more accurate predictions; the BERT model is enhanced by incorporating the context of specific words from sentences or documents.
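To make the BERT-plus-CNN/LSTM pipeline concrete, the following is a minimal illustrative sketch in PyTorch, not the authors' implementation: the `ContextualClassifier` name, all layer sizes, and the use of a plain embedding layer as a stand-in for a pretrained BERT encoder are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ContextualClassifier(nn.Module):
    """Hypothetical sketch: contextual embeddings -> CNN -> LSTM -> classes."""
    def __init__(self, vocab_size=30522, embed_dim=128, num_classes=4):
        super().__init__()
        # Stand-in for BERT: in practice this would be a pretrained
        # transformer encoder producing contextual token embeddings.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # CNN extracts local n-gram features from the embedded sequence.
        self.conv = nn.Conv1d(embed_dim, 64, kernel_size=3, padding=1)
        # LSTM captures longer-range dependencies across the sequence.
        self.lstm = nn.LSTM(64, 32, batch_first=True)
        self.fc = nn.Linear(32, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)             # (batch, seq, embed_dim)
        x = self.conv(x.transpose(1, 2))      # (batch, 64, seq)
        x = torch.relu(x).transpose(1, 2)     # (batch, seq, 64)
        _, (h, _) = self.lstm(x)              # final hidden state
        return self.fc(h[-1])                 # (batch, num_classes)

model = ContextualClassifier()
logits = model(torch.randint(0, 30522, (2, 16)))  # batch of 2 sequences
```

In a full system, the embedding layer would be replaced by a fine-tuned BERT encoder so that each token's representation reflects its sentence-level context, which is the contribution the paper highlights.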