A Machine Learning-Based Model for Stability Prediction of Decentralized Power Grid Linked with Renewable Energy Resources

In a decentralized grid, consumers regulate their electricity demand autonomously based on the grid frequency, which can be measured easily anywhere with cheap equipment. Electrical grids need to be stable, balancing electricity supply and demand, to ensure economically and dynamically viable grid operation. The amount of electricity consumed/produced (p) by each grid participant, the cost-sensitivity (g), and the grid participants' response times (tau) affect the stability of the grid. Renewable energy resources are volatile on varying time scales, and due to this volatility, frequency fluctuations are more frequent in decentralized grids integrating renewable energy resources. The decentralized grid is designed by linking real-time electricity rates to the grid frequency over a few seconds to provide demand-side control. In this study, a model has been proposed to predict the stability of a decentralized power grid. Simulated data obtained from an online machine learning repository has been employed. Data normalization has been employed to reduce biased behavior among attributes, and various data-level resampling techniques have been used to address the issue of data imbalance. The results showed that balanced datasets outperformed the imbalanced dataset regarding classifiers' performance, and that oversampling techniques proved better than both undersampling techniques and the imbalanced dataset. Overall, the XGBoost algorithm outperformed all other machine learning algorithms: XGBoost achieved an accuracy of 94.7%, and combining XGBoost with random oversampling improved its accuracy to 96.8%. This model can better predict frequency fluctuations in decentralized power grids arising from the volatile nature of renewable energy resources, resulting in better utilization. This prediction may contribute to the stability of a decentralized power grid for better distribution and management of electricity.


Introduction
The majority of the global energy supplies for electricity generation are nonrenewables (oil, gas, coal, etc.) [1]. These nonrenewable energy resources are depleting quickly. They are polluting the environment and causing global warming due to the emission of various greenhouse gases [2]. Due to these limitations of nonrenewable resources, the world's energy policies are shifting towards renewable resources for clean and sustainable energy [3]. Globally, the current share of renewable energy in electricity generation is 24%, which is expected to grow to 44% by 2030 [4, 5]. Pakistan has enormous potential for renewable energy generation due to long sunshine hours, and its coastal belt has promising wind speeds. According to the Alternate and Renewable Energy Policy 2019 approved by Pakistan, the country has planned to grow its share of renewable energy in electricity generation from 4% to 30% by 2030, excluding hydropower [5]. However, the volatile nature of many renewable energy sources is a well-known challenge [6,7]. A more flexible approach is required to balance energy demand and supply linked with renewable energy resources, since renewable energy resources are more susceptible to fluctuations than nonrenewable energy resources [8,9]. Various approaches to managing supply and demand have been presented for such a fluctuating power grid. The core idea of the various smart grid concepts is to manage consumer demand, which is a significant paradigm shift from current grid operating schemes [7].
A decentralized approach means a resource is self-dispatched with rubrics that can be defined in isolation from other resources or in coordination with them. In decentralization, the consumers regulate their electricity demand autonomously based on the grid frequency [6,10]. The decentralized grid approach was first suggested a few years ago but has only recently received much attention, because it implements demand response without major infrastructural changes. Electrical grids need to be stable, balancing energy supply and demand, to ensure economically and dynamically viable grid operation [11]. During periods of excess power, the frequency increases, but it decreases during periods of underproduction. Grid frequency monitoring is a low-cost and easy way to determine grid stability: the grid's frequency changes when there is an undersupply or oversupply of electricity in the grid [12]. With cheap equipment (i.e., smart meters), consumers can easily measure the grid frequency anywhere.
Resampling techniques have been used in this study in combination with ML algorithms to predict the stability of a decentralized electricity grid. This study tests the hypothesis that ML algorithms combined with resampling techniques can provide highly accurate predictions of the stability of decentralized electricity grids. ML algorithms can detect trends and anomalies in datasets and thus help grid system operators make real-time decisions for better distribution of available electricity [38]. Different approaches have been used for the stability prediction of power grids, but effective results were not achieved [11, 31-37]. To the best of our knowledge, resampling techniques have not previously been used in the literature to balance the data for grid stability prediction. Furthermore, only accuracy was used as an evaluation metric in previous studies, while other important metrics like precision, recall, and the receiver operating characteristic (ROC) curve were not evaluated.
This research makes the following contributions to decentralized power grid stability prediction:

(1) The latest dataset from the University of California Irvine (UCI) Machine Learning repository has been used to build an ML model for decentralized power grid stability prediction.

(2) The data imbalance issue has been explored by comparing different resampling techniques and evaluating which resampling technique gives efficient results with an ML classifier.

(3) Lastly, our proposed model may help better predict the stability of a decentralized electricity grid, which may ultimately help in better distribution. Frequency fluctuations in the decentralized grid due to renewable energy resources can be predicted with the proposed model for better utilization of these renewable energy resources. To test the effectiveness of our proposed technique, it has been verified on the Electricity Grid Dataset; it can be applied to any real-time dataset related to decentralized grid frequency.

The rest of the paper is structured as follows: The existing techniques for grid stability prediction are analyzed in Section 2. Section 3 includes the proposed solution. The evaluation metrics are described in Section 4. The results of our study are discussed and analyzed in Section 5. The research is summed up, and the problem definition is restated, in Section 6. The study's conclusion, challenges, and limitations are described, and suggestions for future improvements are discussed, in Section 7.

Literature Review
Previous studies have mostly worked on conventional centralized grids with few frequency fluctuations [33]. However, decentralized power grids connected with renewable energy resources involve strong fluctuations on varying time scales, including seasonal, intraday, and short-time fluctuations [7]. Previous studies also used imbalanced data to predict grid stability [11, 31-37]. Abu Al-Haija et al. [39] proposed a system using various ML models to classify stability records in smart grid networks. Seven machine learning architectures are specifically examined, including SVM, DT, LR, NBC, LDC, and GBDT. A recent and substantial dataset for the stability of smart grid networks (SGN Stab2018) was used to test the system's performance, and it received high marks for classification. Breviglieri et al. [40] studied deep learning models to solve fixed-input and equality issues in the decentralized smart grid control (DSGC) system. By removing those constrictive assumptions on input values, they examined the DSGC system using several optimized deep learning models to forecast smart grid stability. Massaoudi et al. [41] proposed an accurate stacking ensemble classifier (SEC) for decentralized smart grid control stability prediction. Using a supervised learning approach, the presented method showed a strong ability to categorize grid instabilities accurately, and numerical findings validate the effectiveness of the suggested model. Arzamasov et al. [11] predicted decentralized grid stability using a DT algorithm. To determine the stability/instability of the grid, they solved a numerical optimization problem called the characteristic roots equation: positive real roots indicated instability, and negative real roots indicated a stable grid state. Yin et al.
[31] developed a KRR-XGBoost model to forecast the stability of distributed power systems and provide effective design guidelines and cost optimization for these systems. The data inputs covered the grid stability index, the grid stability predictor (stable/unstable), and the factors affecting the grid's stable state. Ali et al. [8] proposed an optimization-based method to smooth voltage. To extend the lifespan of the electric vehicle (EV) battery, EV power fluctuations and the battery's minimum preset state of charge (SOC) are considered in the proposed optimization model. Theocharides et al. [32] applied different ML models, SVR, ANN, and RT, each with its own hyperparameters and features, to test their ability to forecast PV power output. Each model's output power prediction performance was evaluated on one year of real-world PV generation data and compared to a developed persistence model. The basic purpose was to build an association between the input features and their output. Bano et al. [33] utilized ML techniques, i.e., enhanced MLP, enhanced SVM, and enhanced LR, to forecast the electricity load. To forecast New York City's load and price, they used hourly data from 2016 to 2017. Classification and regression trees and recursive feature elimination were used for feature selection, and singular value decomposition was used to extract the features. Moldovan and Salomie [34] presented a feature extraction-based ML approach to predict the stability of a smart grid using the Python tsfresh package. They used ML and statistical methods to detect sources of instability and performed feature selection before applying classifiers. Their study used four classifiers: LR, GBDT, RF, and MLP. Ali et al. [29] proposed an optimization approach to determine the optimal locations and sizes of photovoltaic and wind generation systems in microgrids. They created a bilevel metaheuristic-based method to solve the planning model.
Various simulations and study cases are run to evaluate the viability of the proposed model.
Malbasa et al. [35] predicted voltage stability in transmission systems using active machine learning. Their key contribution is applying pool-based active learning techniques to power system measurements such as synchrophasor data, a tool for determining voltage stability. Experiments on synthetic data obtained from a complex power system simulation model were used to test their method; the experiments focused on voltage stability margin prediction in a transmission network using ML techniques. Abuella et al. [36] predicted solar power ramp events to handle high renewable generation ramp rates, energy storage systems, energy management, and voltage regulator settings on distributed generation feeders. They used LDA, ANN, and NB to forecast solar power ramp events. Baltas et al. [37] proposed a response-based model to forecast a benchmark system's stability following a serious disturbance. They used ensemble-based multiple classifiers on simulated data generated through the Spyder IDE. For the ensemble's final output, three separate approaches were considered: the first used a majority voting scheme; the second considered a variant of the boosting technique that uses a weight factor and a constant; and the third considered an all-or-nothing technique. Ali et al. [30] presented an interval optimization method to schedule EVs optimally. The goal was to reduce network active power losses and overall voltage magnitude deviation while considering system-wide restrictions. The best day-ahead scheduling of EVs was performed using the proposed method on a 33-bus distribution system, and various case studies were conducted to evaluate the viability of the suggested approach. A summary of the techniques of the related articles is presented in Table 1.

Methodology
The proposed framework for decentralized grid stability prediction is presented in Figure 1. Our methodology comprises the following major steps: (1) dataset selection, (2) data normalization, (3) data resampling, (4) modeling, and (5) evaluation. The data has been preprocessed to obtain effective results, and various resampling techniques have been used to tackle the data imbalance issue. After data preprocessing, different ML models were applied, and their results were compared.
3.1. Dataset. In this study, the Electricity Grid Simulated Dataset was employed, obtained from the UCI repository. The dataset includes 10,000 instances and 14 features, of which 3,620 instances were stable and 6,380 were unstable. The dataset therefore exhibits a class imbalance, as unstable instances far outnumber stable instances. The dataset contains the reaction times of the participants (tau1, tau2, tau3, tau4), nominal power produced/consumed (p1, p2, p3, p4), and gamma coefficients, i.e., price elasticity features (g1, g2, g3, g4), as shown in Table 2. Price elasticity is the measurement of the change in consumption of electricity in response to a change in its price, expressed mathematically as

Price elasticity = (% change in electricity quantity demanded) / (% change in price). (1)

Figure 1: Proposed methodology.
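Equation (1) is a simple ratio and can be sketched directly; the function name and the example percentages below are ours, chosen purely for illustration and not taken from the dataset:

```python
def price_elasticity(pct_change_quantity: float, pct_change_price: float) -> float:
    """Price elasticity of electricity demand, as in equation (1):
    % change in quantity demanded divided by % change in price."""
    return pct_change_quantity / pct_change_price

# Example: a 10% price increase that reduces demand by 5% gives elasticity -0.5.
print(price_elasticity(-5.0, 10.0))  # -0.5
```

A negative value, as here, is the usual case: demand falls when price rises, and the magnitude indicates how price-sensitive the participant is.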

Wireless Communications and Mobile Computing
A numerical optimization problem known as the characteristic roots equation determines the target value (stab). Positive real roots indicate instability, and negative real roots indicate the grid's stability. The stability/instability of the system is labeled as stabf (categorical).

3.2. Data Normalization.
The major issue with the various features is that each numerical feature/attribute is represented on a different scale. Data normalization is therefore an effective preprocessing technique for tabular data that makes comparisons between measurements more accessible while constructing a model; it rescales feature values to conform to the standard normal distribution, forming new inputs. The maximum and minimum values of the features vary significantly: the reaction times of the participants range from 0.5 to 0.99, power values of producers range from 1.58 to 5.86, power values of consumers range from -1.99 to -0.5, and the gamma values, i.e., the price elasticity of demand of all participants, range from 0.05 to 0.99. The target variable values, resulting from optimizing the characteristic roots equation, range from -0.08 to 0.10. The Z-score normalization technique has been employed to bring all these features into a specified range; all numerical values have been scaled within the range -1.73 to +1.73. The formula of the Z-score technique is given in equation (2).
z = (X − X̄) / S, (2)

where z is the standard score, S is the standard deviation of the sample, X represents each value in the dataset, and X̄ is the mean of all values in the dataset. It has been observed that Z-score normalization outperforms other data normalization techniques [42-44].
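Z-score normalization as in equation (2) can be sketched in a few lines; this minimal version (function name and sample values are ours) uses the population standard deviation for simplicity, whereas a library routine such as scikit-learn's StandardScaler applies the same transformation column-wise:

```python
import numpy as np

def z_score(x: np.ndarray) -> np.ndarray:
    """Z-score normalization, equation (2): z = (X - mean) / std."""
    return (x - x.mean()) / x.std()

# Hypothetical feature column; after scaling it has zero mean and unit variance.
values = np.array([0.58, 3.2, 7.1, 9.4])
z = z_score(values)
print(bool(np.isclose(z.mean(), 0)) and bool(np.isclose(z.std(), 1)))  # True
```

Applied to every feature, this is what maps the differently scaled tau, p, and g columns onto a common range.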

3.3. Undersampling Techniques.
Undersampling is one of the simplest methods for dealing with imbalanced data. This technique undersamples the majority class to balance it with the minority class [45]. The undersampling method can be applied if a sufficient amount of data has been collected. This study used three undersampling techniques: near miss, cluster centroid, and random undersampling.

Table 2: Description of the dataset features.
tau1: Producer's reaction time in response to a price change
tau2: First consumer's reaction time in response to a price change
tau3: Second consumer's reaction time in response to a price change
tau4: Third consumer's reaction time in response to a price change
p1: Nominal power (positive real) produced by the producer (amount of electricity produced)
p2: Nominal power (negative real) consumed by the first consumer
p3: Nominal power (negative real) consumed by the second consumer
p4: Nominal power (negative real) consumed by the third consumer
g1: Gamma coefficient proportional to the price elasticity of the producer
g2: Gamma coefficient proportional to the price elasticity of the first consumer
g3: Gamma coefficient proportional to the price elasticity of the second consumer
g4: Gamma coefficient proportional to the price elasticity of the third consumer
stab: Target value (real); positive indicates instability, negative indicates stability
stabf: Target class label (categorical): stable or unstable

3.3.2. Near Miss. When instances of two distinct classes are too close to one another, majority class instances are removed to increase the space between the two classes, which aids the classification process. The first variant, "NearMiss-1," picks majority class samples with the smallest average distances to the three nearest minority class samples. The second variant, "NearMiss-2," picks majority class samples with the smallest average distances to the three farthest minority class samples. For each minority class sample, the third variant, "NearMiss-3," takes a fixed number of the nearest majority class samples. Finally, the fourth, "most distant," approach chooses the majority class samples with the highest average distances to the three nearest minority class samples.
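The NearMiss-1 selection rule described above can be sketched in plain NumPy; the function name and toy points below are ours, and in practice one would use a library implementation such as imbalanced-learn's NearMiss:

```python
import numpy as np

def near_miss_1(X_maj, X_min, n_keep, k=3):
    """NearMiss-1 sketch: keep the majority samples whose average distance
    to their k nearest minority samples is smallest."""
    # Pairwise Euclidean distances: one row per majority sample,
    # one column per minority sample.
    d = np.linalg.norm(X_maj[:, None, :] - X_min[None, :, :], axis=2)
    # Average distance to the k nearest minority samples, per majority sample.
    avg_nearest = np.sort(d, axis=1)[:, :k].mean(axis=1)
    keep = np.argsort(avg_nearest)[:n_keep]
    return X_maj[keep]

# Toy example: the majority point far from the minority cluster is dropped.
X_maj = np.array([[0.0, 0.0], [1.0, 1.0], [9.0, 9.0]])
X_min = np.array([[0.5, 0.5], [1.5, 1.5]])
selected = near_miss_1(X_maj, X_min, n_keep=2, k=2)
print(selected.shape)  # (2, 2)
```

Here the two majority points near the minority cluster survive, while the distant point at (9, 9) is removed, shrinking the majority class toward the class boundary.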
3.3.3. Cluster Centroid. One of the main disadvantages of undersampling is that valuable information from the majority class may be lost, resulting in misclassified samples after classification; this cannot be afforded when building a solid model. The cluster centroid method was therefore proposed by Yen and Lee [47] to solve this problem. This technique undersamples the majority class by replacing clusters of majority class instances with their cluster centroids, considering the ratio of majority class samples to minority class samples. Undersampling is accomplished by generating centroids with the k-means clustering method: the data are grouped by similarity, a k-means model is fitted to the data, and the desired level of undersampling determines the number of clusters (k). The set of cluster centroids from k-means then entirely substitutes for the majority class samples, so the most representative combinations of the majority class are located at the centers of the clusters. In [47], this problem was approached by underfitting and overfitting the data and their combination; when underfitting the dataset, only the cluster centroids were considered.
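The centroid-replacement step can be sketched with scikit-learn's KMeans; the function name and toy clusters are ours, and imbalanced-learn's ClusterCentroids wraps the same idea for real pipelines:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_centroid_undersample(X_maj, n_minority, random_state=0):
    """Cluster-centroid undersampling sketch: replace the majority class with
    the centroids of k-means clusters, with k set to the minority class size."""
    km = KMeans(n_clusters=n_minority, n_init=10, random_state=random_state)
    km.fit(X_maj)
    return km.cluster_centers_

# Toy example: 6 majority points around two centers reduced to 2 centroids.
rng = np.random.default_rng(0)
X_maj = np.vstack([rng.normal(0, 0.1, (3, 2)), rng.normal(5, 0.1, (3, 2))])
centroids = cluster_centroid_undersample(X_maj, n_minority=2)
print(centroids.shape)  # (2, 2)
```

The two returned centroids sit near 0 and near 5, so each summarizes one group of majority samples instead of discarding samples at random.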

3.4. Oversampling Techniques.
Any dataset can be called imbalanced when the number of instances in each class is not equal. Resampling methods typically add a bias to make the dataset balanced; while classifiers may learn from imbalanced datasets, balanced datasets yield more efficient results. All resampling techniques resample the data until it reaches the required ratio, which also helps compare various resampling methods on a final training set with a given proportion of majority and minority class data points. Data-level resampling methods (oversampling and undersampling) are an effective solution for dealing with class imbalance problems, and various resampling techniques have been employed in this study. Oversampling increases the minority class weight by replicating or creating new minority class samples; different oversampling methods are available in the literature [48]. Four oversampling techniques have been applied in this study: random oversampling (ROS), adaptive synthetic sampling (ADASYN), the synthetic minority oversampling technique (SMOTE), and borderline-SMOTE.
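Random oversampling, the simplest of the four, just duplicates randomly chosen minority samples until the classes match. A minimal sketch (the function name and toy labels are ours; imbalanced-learn's RandomOverSampler provides the production version):

```python
import numpy as np

def random_oversample(X, y, random_state=0):
    """ROS sketch: duplicate randomly chosen minority samples until
    both classes have the same number of instances."""
    rng = np.random.default_rng(random_state)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[counts.argmin()]
    n_needed = counts.max() - counts.min()
    idx = np.flatnonzero(y == minority)
    extra = rng.choice(idx, size=n_needed, replace=True)
    return np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])

# Toy example mirroring the dataset's imbalance (more unstable than stable).
X = np.arange(10, dtype=float).reshape(5, 2)
y = np.array([0, 0, 0, 1, 1])  # 0 = unstable (majority), 1 = stable
X_bal, y_bal = random_oversample(X, y)
print(np.bincount(y_bal))  # [3 3]
```

Because ROS only copies existing rows, it adds no synthetic points; SMOTE and its variants below go further by interpolating new ones.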

Synthetic Minority Oversampling Technique.
Compared to ROS, SMOTE is a more advanced approach. Chawla et al. [48] state that it oversamples the data by generating synthetic examples: new minority instances are synthesized between existing minority instances. SMOTE selects a minority class instance at random and calculates the K-nearest neighbors for that specific point; it then adds synthetic points between the chosen point and its neighbors. The instance x_i of the minority class is chosen as the foundation for creating new synthetic data points, and several nearest neighbors of the same class are selected from the training set based on a distance metric. Finally, a randomized interpolation is carried out to obtain new instances. An integer value N for the total amount of oversampling is determined, which can be set to achieve a 1:1 class distribution or discovered using a wrapper method [49]. After that, a multistep iterative process is carried out as follows: first, a minority class instance is chosen randomly from the training set; next, its K-nearest neighbors are collected; finally, N of these K instances are randomly selected to calculate new instances via interpolation. To complete this task, the difference between the feature vector (sample) under consideration and each of the selected neighbors is taken. That difference is multiplied by a random number between 0 and 1 and added to the original feature vector; as a result, a random point along the "line segment" between the features is selected. In the case of nominal attributes, one of the two values is chosen randomly.
Consider the sample (6, 4) and its nearest neighbor (4, 3). The sample for which the K-nearest neighbors are being identified is (6, 4), and one of its K-nearest neighbors is (4, 3). Let

a11 = 6, a21 = 4, a21 − a11 = −2,
a12 = 4, a22 = 3, a22 − a12 = −1.

The newly generated samples will be as given in equation (4):

(x1', x2') = (6, 4) + rand(0, 1) · (−2, −1), (4)

where rand(0, 1) creates a vector of two random numbers ranging from 0 to 1. The value of W varies between 0 and 1, and Δ is the number of examples among a minority class sample's K-nearest neighbors that are members of the majority class.
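The worked example above can be run directly; this sketch (the function name is ours) implements exactly the interpolation step of equation (4), so every generated point lies on the segment between the sample and its neighbor:

```python
import numpy as np

def smote_interpolate(sample, neighbor, rng):
    """SMOTE interpolation sketch: new point = sample + r * (neighbor - sample),
    with r drawn per feature from [0, 1), as described in the text."""
    sample = np.asarray(sample, dtype=float)
    neighbor = np.asarray(neighbor, dtype=float)
    r = rng.random(len(sample))
    return sample + r * (neighbor - sample)

# The example from the text: sample (6, 4), nearest neighbor (4, 3),
# feature differences (-2, -1).
rng = np.random.default_rng(0)
new = smote_interpolate([6.0, 4.0], [4.0, 3.0], rng)
print((4 <= new[0] <= 6) and (3 <= new[1] <= 4))  # True
```

Whatever random factors are drawn, the synthetic point's first feature stays between 4 and 6 and its second between 3 and 4, which is the defining property of SMOTE's line-segment interpolation.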

Borderline-SMOTE.
Borderline-SMOTE is a more advanced variant of SMOTE that aims to generate synthetic samples by interpolating the k-nearest neighbors of the minority instances close to the class border [51]. Since these border instances are more relevant for classification, this technique extracts synthetic instances only for minority samples near the boundary between the two classes, whereas SMOTE produces new instances for every minority sample. Potentially misclassified minority class instances thus receive more training in borderline-SMOTE [52]. It first identifies borderline minority instances and then uses them to generate synthetic instances with their chosen k-nearest neighbors. Rather than simply replicating the existing samples, SMOTE creates new synthetic samples along the line between a minority sample and its selected nearest neighbors; however, this increases overlap between classes, since synthetic samples are generated without considering neighboring samples. Many modified techniques have been proposed to address this constraint, with borderline-SMOTE proving the most effective in most cases. Since samples close to or on the borderline are more likely to be misclassified than those farther away, borderline-SMOTE oversamples and enhances only these difficult-to-learn samples.
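The first step, identifying which minority samples are borderline, can be sketched as follows; the function name, the "more than half but not all majority neighbors" threshold, and the toy points are ours, following the usual description of the algorithm (imbalanced-learn's BorderlineSMOTE implements the full method):

```python
import numpy as np

def danger_set(X_min, X_maj, m=5):
    """Borderline-SMOTE sketch: a minority sample is in 'DANGER' when at least
    half (but not all) of its m nearest neighbors belong to the majority class."""
    X_all = np.vstack([X_min, X_maj])
    labels = np.array([0] * len(X_min) + [1] * len(X_maj))  # 1 = majority
    danger = []
    for i, x in enumerate(X_min):
        d = np.linalg.norm(X_all - x, axis=1)
        d[i] = np.inf                      # exclude the sample itself
        nn = np.argsort(d)[:m]
        n_maj = labels[nn].sum()
        if m / 2 <= n_maj < m:             # borderline, but not pure noise
            danger.append(i)
    return danger

# Toy example: only the minority point sitting next to the majority cluster
# (index 2) is flagged as borderline.
X_min = np.array([[0.0, 0.0], [0.2, 0.1], [2.0, 2.0]])
X_maj = np.array([[2.2, 2.1], [1.8, 2.2], [5.0, 5.0], [5.2, 5.1]])
flagged = danger_set(X_min, X_maj, m=3)
print(flagged)  # [2]
```

Synthetic samples are then generated, via the same interpolation as standard SMOTE, only from the flagged points, concentrating the new data along the decision boundary.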
Only those instances in the "DANGER" set, i.e., those having more majority class neighbors than minority class neighbors, are selected in equation (6). This means they represent borderline minority class samples, which are the most likely to be misclassified. It is worth noting that the other x_i are not used in the next step. Finally, for each sample x_j in the "DANGER" set, one of the K-nearest neighbors of x_j with the smallest Euclidean distance to it is selected at random, the corresponding feature vector difference is multiplied by a random number in [0, 1], and this vector difference is added to x_j, where x_j is the selected minority sample in the "DANGER" set, x̂ ∈ D_min is one of x_j's K-nearest neighbors, and δ ∈ [0, 1] is a random number. The resulting synthetic sample is therefore a point on the line segment between x_j and x̂, according to equation (7). The newly generated samples are appended to the original set and used to train the classifier. The dataset class distribution before and after applying the resampling techniques is presented in Table 3.

XGBoost. XGBoost is a GBDT extension introduced by Chen and Guestrin [53]. It is a boosting algorithm and belongs to the supervised learning algorithms. Boosting is an ensemble technique of sequential learning in which different models are trained one after another. XGBoost first creates a base model; the average is taken as the first prediction of the base model (also called model zero, M0). Next, model M1 is fitted to minimize the errors (the differences between the actual and predicted values). Up to this point, the procedure is the same as gradient boosting. XGBoost uses regularization parameters to avoid overfitting, uses automatic pruning to prevent trees from growing beyond a certain level, and handles missing values. It has been used to solve classification problems in many fields. The XGBoost algorithm assigns various levels of importance to features before deciding the weighted distance for the K-means algorithm. It combines predictions from "weak" classifiers (tree models) to get a "strong" classifier (tree model). It speeds up the learning process, allowing quicker modeling using distributed and parallel computing.
A new tree is generated along the direction of the negative gradient of the loss function. As the number of tree models increases, the loss becomes smaller and smaller. The XGBoost computational process starts from equation (8):

ŷ_i^(t) = ŷ_i^(t−1) + f_t(x_i), (8)

where ŷ_i^(t−1) is the previously generated tree model, f_t(x_i) is the newly generated tree model, ŷ_i^(t) is the final tree model, and t is the total number of base tree models. Both the depth of the trees and the number of trees are essential parameters for the XGBoost algorithm. The problem of determining the best algorithm was changed to finding a new classifier capable of reducing the loss function, with the target loss function given in equation (9).
L^(t) = Σ_i L(y_i, ŷ_i^(t)) + Σ_i Ω(f_i), (9)

where y_i is the actual value, ŷ_i^(t) is the predicted value, L(y_i, ŷ_i^(t)) is the loss function, and Ω(f_i) is the regularization term. Equation (10) can be obtained by substituting equation (8) into equation (9) and then following some deduction steps.

After that, the final target loss function was transformed into equation (11), which is used to train the model, where g_i and h_i are the loss function's first- and second-order gradient statistics. Equation (12) gives the regularization term Ω(f_t), which reduces the model's complexity and increases its applicability to other datasets:

Ω(f_t) = γT + (1/2) λ ‖ω‖², (12)
where λ and γ are coefficients with default values set as λ = 1 and γ = 0, ω is the weight of the leaves, and T is the number of leaves. Both continuous and discrete variables can be used as inputs to the XGBoost algorithm, but for classification the output variable must be discrete, such as the binary stable/unstable label used here.

Evaluation Metrics
In every predictive modeling task, model evaluation is critical. It becomes even more critical in predictive ensemble modeling, where diversity and the models' relative performance must be evaluated thoroughly. Each of the evaluation metrics is based on the four classification outcomes: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). With the aid of the confusion matrix, accuracy is typically used to measure the efficiency of a model [54, 55]. Equation (13) has been used to calculate the model's accuracy.

Accuracy = (TP + TN) / (TP + TN + FP + FN). (13)

Precision measures, out of the total predicted positives, how many are true positives; that is, it measures how many of the instances the classifier labeled positive are actually positive. The model's precision has been calculated using equation (14):

Precision = TP / (TP + FP). (14)
Recall measures, out of the total actual positives, how many are true positives. Equation (15) has been used to calculate the model's recall:

Recall = TP / (TP + FN). (15)

F-measure is the harmonic mean of precision and recall. Precision and recall typically trade off against each other: lower recall is usually associated with higher precision. Equation (16) has been used to calculate the model's F-measure:

F-measure = 2 × Precision × Recall / (Precision + Recall). (16)
ROC curve plot is another commonly used metric for evaluating a classifier's efficiency. ROC graph shows the performance of a classification model at all classification thresholds. It plots the false positive rate (x-axis) versus the true positive rate (y-axis) for different candidate threshold values between 0.0 and 1.0. It is used to interpret the prediction of probabilities for binary classification problems.
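Equations (13)-(16) can be computed directly from the four confusion-matrix counts; the function name and the example counts below are ours, for illustration only:

```python
def classification_metrics(tp, tn, fp, fn):
    """Equations (13)-(16): accuracy, precision, recall, and F-measure
    computed from the four confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_measure

# Hypothetical confusion counts, for illustration only.
acc, prec, rec, f1 = classification_metrics(tp=80, tn=90, fp=10, fn=20)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
# 0.85 0.889 0.8 0.842
```

Note how the F-measure (0.842) sits between precision (0.889) and recall (0.8), as a harmonic mean must; scikit-learn's precision_score, recall_score, and f1_score compute the same quantities from label arrays.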

Results
Various classifiers have been employed to evaluate performance on the imbalanced Electricity Grid Simulated Dataset, as shown in Table 4. The train-test split technique has been used to assess the performance of the ML algorithms. The dataset has been split into two subsets, with 70% for training and 30% for testing. The first subset is the training dataset used to fit the model. The second subset is not used to train the model; instead, the model is provided with this dataset's input elements, predictions are made, and the results are compared to the expected values. This second subset is called the test dataset. The performance of the XGBoost method has also been compared with other ML models, and XGBoost outperformed them on the imbalanced dataset. The XGBoost algorithm was run in a Jupyter Notebook. Table 4 shows the results, with the XGBoost model achieving the highest accuracy of 94.7% on the imbalanced Electricity Grid Simulated Dataset.

Tuning parameters is a critical step in improving the efficiency of any ML algorithm. It involves defining a grid of all candidate parameter values and checking them to find the values that maximize classification performance. Default values are used for the parameters whose values are not listed. The tuned parameters are as follows:

(i) Eta: the learning rate of the model. The feature weights are shrunk by eta to make the boosting procedure more prudent. Its range is from 0 to 1, and its default value is 0.3; the optimal value used in our experiment is 0.4.

(ii) Subsample: controls the number of samples (observations) supplied to a tree. Its range is from 0 to 1, and its default value is 1; the optimal value used in our experiment is 0.8.

(iii) colsample_bytree: controls the number of features (variables) supplied to a tree. Its range is from 0 to 1, and its default value is 1; the optimal value used in our experiment is 0.9.

(iv) n_estimators: the number of trees (or boosting rounds) in an XGBoost model. Its range is from 1 to infinity, and its default value is 100; the optimal value used in our experiment is 200.

To assess the performance of the ML algorithms on undersampled data, various classifiers were applied to the three undersampled datasets, as shown in Table 5. ANN proved to be the best algorithm: random undersampling in combination with ANN achieved the best accuracy of 94.5%. The random undersampling technique outperformed all other undersampling techniques based on accuracy, and other models also showed effective results on the undersampled datasets. Further, the performance of the classifiers was better on the cluster centroid-based undersampled dataset than on the near miss one.
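The 70/30 evaluation protocol above can be sketched as follows. This is a sketch under stated assumptions: the data are synthetic placeholders for the 14-feature grid dataset, and scikit-learn's GradientBoostingClassifier stands in for XGBoost, borrowing the tuned learning rate (eta = 0.4), subsample (0.8), and tree count (200) reported above:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic placeholder for the 14-feature, two-class grid dataset.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 14))
y = (X[:, :4].sum(axis=1) > 0).astype(int)  # 1 = stable (illustrative rule)

# 70/30 train-test split, as used in the study.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# Boosted trees with the tuned values discussed above (learning_rate plays
# the role of eta; colsample_bytree has no direct equivalent here).
clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.4,
                                 subsample=0.8, random_state=42)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test) > 0.8)  # True
```

Stratifying the split preserves the stable/unstable ratio in both subsets, which matters for an imbalanced dataset like this one.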
Oversampling techniques (ADASYN, borderline-SMOTE, ROS, and SMOTE) have also been employed to improve the performance of the classifiers. Various models were applied to the oversampled datasets, with the results presented in Table 6: the ROS method in combination with XGBoost achieved the best accuracy of 96.8%. XGBoost in combination with borderline-SMOTE and SMOTE also showed promising results, with accuracies of 96.5% and 96.1%, respectively, while ADASYN with GBDT achieved 95.9%. The results showed that XGBoost outperformed the other ML models on both the imbalanced and the oversampled datasets, and it was tuned further by adjusting a few parameter values to improve the results; however, ANN proved best on the undersampled datasets. The results on the imbalanced dataset have also been compared with those of the undersampling and oversampling techniques. ROS outperformed all other oversampling and undersampling techniques used, improving the performance of the XGBoost model along with the other models. The proposed model has significantly outperformed previous studies, as shown in Table 7.
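Random oversampling (ROS), the best-performing resampling technique here, simply duplicates minority-class rows until the classes are balanced; the imbalanced-learn library provides `RandomOverSampler` for this. A minimal NumPy sketch of the same idea, on toy data rather than the paper's dataset, is:

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Duplicate minority-class rows (sampling with replacement)
    until every class has as many rows as the majority class."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    keep = [np.arange(len(y))]           # keep every original row
    for cls, count in zip(classes, counts):
        if count < target:
            idx = np.flatnonzero(y == cls)
            extra = rng.choice(idx, size=target - count, replace=True)
            keep.append(extra)           # duplicated minority rows
    order = np.concatenate(keep)
    return X[order], y[order]

# Toy imbalanced data: 6 'unstable' rows (1) vs. 3 'stable' rows (0).
X = np.arange(18).reshape(9, 2)
y = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0])
X_bal, y_bal = random_oversample(X, y)
print(np.bincount(y_bal))  # prints [6 6]: both classes now have 6 rows
```

SMOTE, borderline-SMOTE, and ADASYN differ in that they synthesize new minority points by interpolating between neighbors instead of duplicating existing rows.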
The XGBoost in combination with ROS outperformed all other models, predicting the best accuracy of 96.8%, whereas the accuracy on the imbalanced dataset was 94.7%, as shown in Figure 2. The F-measure and ROC also improved with ROS, to 96.7% and 99.6%, respectively, compared with 95.6% and 98.9% on the imbalanced dataset.

Discussion
Effective demand response management and control in decentralized power grids are complex because grid participants' consumption and production behaviors are influenced by price signals issued and responded to on a scale of seconds. The key variables influencing the grid's stability are the volume of electricity consumed/produced (p) by each grid participant, the cost-sensitivity (g), i.e., the price elasticity of the grid participants, and their reaction time (tau) to price signals. A simulation of a decentralized grid was applied to a four-node star grid.
In our study, ML techniques in combination with resampling techniques have been used to improve decentralized grid stability prediction. The simulated dataset used in this study reflects a simple configuration of a decentralized grid: there are four grid participants, and the work has been performed on a four-node star grid architecture. Constraints of maximum and minimum values have been imposed on the twelve independent variables, and absolute values of power production and consumption have been taken for the simulation. As designed, this method successfully explores the feasible solution space with an evaluation of 10,000 cases; however, the correlations between grid participants are likely stronger in real decentralized grids. It has been found that classifier accuracy is related to class balance: accuracy increases as the number of minority samples in the dataset increases, because with more instances the classifier has a greater chance of learning the patterns that differentiate the binary classes. The XGBoost model predicted the best accuracy with the ROS method (96.8%), followed by XGBoost with borderline-SMOTE (96.5%). Other combinations predicted slightly lower accuracy: XGBoost with SMOTE oversampling gave 96.1%, GBDT with ADASYN oversampling 95.9%, and XGBoost on the imbalanced dataset 94.7%. As shown in Table 7, our study outperformed previous studies, which can help avoid power outages and improve grid performance significantly. The different classification techniques applied to the Electricity Grid Dataset showed different performance in identifying improvements in grid stability. XGBoost alone has an accuracy of 94.7%, but combining it with ROS improved its accuracy to 96.8%. LightGBM has an accuracy of 94.6%, improved to 95.7% with ROS; similarly, ANN has an accuracy of 93.4%, improved to 95.7% with ROS. The fourth algorithm, GBDT, gave an accuracy of 94.1%, but combining it with SMOTE and ADASYN oversampling improved its results to 95.9%, as shown in Table 6.
On the contrary, the undersampling techniques yielded lower accuracy than both the oversampling techniques and the imbalanced dataset. ANN with random undersampling gave 94.5%, ANN with cluster centroid undersampling gave 94.4%, and XGBoost with near miss undersampling gave 92.6%, as shown in Table 5. This model may help better predict the stability of a decentralized electricity grid, which in turn may help better distribute and manage electricity. To test the effectiveness of the proposed technique, it has been verified on the Electricity Grid Dataset; it can also be used to evaluate any real-time dataset. To our knowledge, LightGBM has never before been used to predict grid stability. A limitation of this work is that only data-level resampling techniques, rather than algorithm-level resampling techniques, have been employed. A detailed comparison of different studies on grid stability prediction is shown in Table 7 below.
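Random undersampling, the best-performing undersampling technique here when paired with ANN, discards randomly chosen majority-class rows instead of duplicating minority ones (imbalanced-learn's `RandomUnderSampler` implements this). A minimal NumPy sketch on toy data, not the paper's dataset:

```python
import numpy as np

def random_undersample(X, y, seed=0):
    """Drop randomly chosen majority-class rows until every class
    is reduced to the size of the smallest class."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.min()
    keep = []
    for cls in classes:
        idx = np.flatnonzero(y == cls)
        # Sample without replacement so no row is kept twice.
        keep.append(rng.choice(idx, size=target, replace=False))
    order = np.concatenate(keep)
    return X[order], y[order]

# Toy imbalanced data: 6 'unstable' rows (1) vs. 3 'stable' rows (0).
X = np.arange(18).reshape(9, 2)
y = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0])
X_bal, y_bal = random_undersample(X, y)
print(np.bincount(y_bal))  # prints [3 3]: both classes reduced to 3 rows
```

The discarded majority rows are one plausible reason undersampling scored below oversampling in these experiments: the classifier sees fewer examples from which to learn the class-separating patterns.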

Conclusion
A stable power grid is necessary to avoid power outages and maintain a constant electricity supply; for a grid to remain stable, power supply and demand must be balanced. Due to the volatile nature of renewable energy resources, the decentralized grid often destabilizes. Various ML algorithms were applied in this study to predict the stability of a decentralized power grid. The key input variables influencing grid stability were the power production or consumption of the grid participants (p), the price elasticity, i.e., the cost-sensitivity of the grid participants (g), and the participants' reaction time to price changes (tau). Simulated data from the UCI machine learning repository has been used to predict the decentralized grid's stability. Balanced data yields better results than imbalanced data, so different resampling techniques have been used to address the class imbalance issue and obtain better results. Four oversampling techniques (ROS, ADASYN, SMOTE, and borderline-SMOTE) and three undersampling techniques (random undersampling, near miss, and cluster centroid) were used to balance the class distribution in the dataset. After preprocessing the data, different ML models were used for prediction, and their results were compared. The oversampling techniques produced the best results in our experiments, whereas the undersampling techniques were less accurate than both the oversampling techniques and the imbalanced dataset. This may imply that as the number of instances increases, the classifier has a greater chance of learning the patterns that differentiate the binary classes. The XGBoost algorithm outperformed all other ML algorithms in predicting the stability of the decentralized electricity grid: XGBoost alone achieved an accuracy of 94.7%, and combining it with ROS improved the accuracy to 96.8%. Four parameters were tuned to boost the model's accuracy.
Future work can explore more complex decentralized grids with more than four grid participants, involving multiple prosumers and different grid architectures, i.e., circular and multibranched configurations, to further evaluate the proposed model's performance.

Data Availability
The data used to support the findings of this study are available from the corresponding authors upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.