Nonintrusive load monitoring in smart microgrids aims to obtain the energy consumption of individual appliances from aggregated energy data, a task generally hampered by misidentification of load types during energy disaggregation in the microgrid energy management system (EMS). This paper proposes a classification strategy for nonintrusive load identification based on the bidirectional long short-term memory (Bi-LSTM) network. The sliding window algorithm is used to extract the detected load event features and obtain the load features of data samples. To identify these load features accurately, the steady state information is combined as the input of the Bi-LSTM model during training. Built on the long short-term memory (LSTM) network, a variant of the recurrent neural network (RNN), the Bi-LSTM offers stronger recognition ability. Finally, precision (P), recall (R), accuracy (A), and F1 values are used as the evaluation metrics for nonintrusive load identification. The experimental results show the accuracy of the Bi-LSTM identification method for load start and stop state feature matching; moreover, the method can identify relatively low-power and multistate appliances.
1. Introduction
The advancement of nonintrusive load monitoring (NILM) is hastened by the ever-increasing requirements of smart microgrid power utilization and demand-side management [1]. NILM disaggregates real-time energy consumption information with a load detection module installed at the microgrid EMS power input, which is one of the demand-side management strategies in the smart grid [2]. Compared with traditional intrusive load monitoring, it has several significant advantages, such as low cost, excellent data integrity, easy installation, and good practicability [3, 4]. It has therefore become a significant research direction for the microgrid energy management system (EMS).
The NILM process mainly includes data acquisition and processing, event detection, feature extraction, and load identification. There are various methods for feature extraction [5] and load identification [6], including supervised classification methods [7], unsupervised clustering methods [8], and optimization methods [9]. Among them, deep learning algorithms are widely used in the field of load identification [10, 11]. In particular, a low-complexity unsupervised NILM algorithm was presented in [12], which outperforms recent work on unsupervised NILM in event detection under common metrics. A practical solution for nonintrusive type II load monitoring based on deep convolutional neural networks was provided in [13]. Further, three new graph-based semisupervised multilevel load monitoring algorithms were studied in [14, 15], which only need a small sample of observed power signals annotated with active appliances. They tackled NILM by applying novel graph signal processing (GSP) at both the physical signal level and the data level. Additionally, a deep learning framework based on a combination of a convolutional neural network (CNN) and LSTM was proposed in [16], in which the hybrid CNN-LSTM model used CNN layers for feature extraction from the input data and LSTM layers for sequence learning. Although current identification algorithms for multistate and low-power loads have achieved a certain recognition performance, their identification of load power and their performance still need further research.
Alternatively, feature construction is also considered a key step in load monitoring. For example, a method based on the dynamic time warping (DTW) algorithm and a template library of waveforms was proposed [17] to cope with the existing problems of the steady load characteristic value method and the superposition of steady-state waveforms of domestic loads. In references [18, 19], researchers employed short-time Fourier transform feature extraction and LSTM autoencoder neural network-based classification and fault detection for DC pulsed load monitoring to demonstrate the effectiveness of nonintrusive load monitoring. In reference [20], a neural network model was developed to recognize steady-state parameters extracted from low-frequency sampling data, such as real power, current, impedance, and admittance variables. In reference [21], a new algorithm was proposed to classify appliance state events based on a modification of the cross entropy (CE) method, which relies on low-rate sampling of the active power. Nevertheless, this method cannot cope with the identification of low-power appliances.
In this paper, a nonintrusive load monitoring scheme based on the Bi-LSTM algorithm is proposed to improve the performance of nonintrusive load identification. The steady state features are constructed from the active power, reactive power, and current harmonic features. They are applied to load identification by combining the steady state information at the beginning and the end of the load state to construct the feature data. Then, a nonlinear mapping is carried out to the output layer for load identification. Besides, to address the problem of multiload feature identification, a load start state feature matching method based on the Bi-LSTM model is adopted, and the output of the optimal matching item in the Bi-LSTM model is taken as the final identification result.
The rest of the paper is organized as follows. Section 2 introduces load feature construction, which covers both load power signatures and time domain features, followed by an extraction method for load features based on the sliding window algorithm. Then, an overview of the Bi-LSTM algorithm model and the load identification scheme based on the Bi-LSTM model is given in Section 3. In Section 4, the load features of the microgrid EMS are trained and tested with RNN, LSTM, and Bi-LSTM networks for comparison, and a verification experiment for the identification method based on load start and stop state feature matching is carried out. The conclusions are drawn in Section 5.
2. Load Features Construction
In load feature construction, the power features and time domain features are mainly considered, which will be discussed in detail.
2.1. Power Feature Construction
Among the existing load features, the active power P is considered the most widely used, since it changes markedly when the load switches on or off [22]. Similarly, the reactive power Q also plays an important role in determining the appliance type, that is, whether a device is inductive, capacitive, or resistive. To obtain these features at the electrical entrance, P and Q are calculated from the transient voltage and current.
Let $v(t)$ and $i(t)$ be the transient voltage and current at time $t$; $P$ and $Q$ can then be defined as follows:

$$P=\frac{1}{T}\sum_{t=0}^{T-1} v(t)\,i(t),\qquad Q=\frac{1}{T}\sum_{t=0}^{T-1} v\!\left(t+\tfrac{T}{4}\right) i(t), \tag{1}$$

where $T$ represents one period of the voltage wave.
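As a concrete illustration, equation (1) can be computed directly from sampled waveforms. The sketch below is a minimal pure-Python version; the sampling rate, amplitudes, and the 60° phase lag in the example are illustrative assumptions, not values from the paper.

```python
import math

def active_reactive_power(v, i, samples_per_cycle):
    """Estimate P and Q over one voltage period from sampled v(t), i(t).

    Q is obtained by shifting the voltage by a quarter period (T/4),
    following equation (1)."""
    T = samples_per_cycle
    quarter = T // 4
    P = sum(v[t] * i[t] for t in range(T)) / T
    # v(t + T/4): shift the voltage samples by a quarter period
    Q = sum(v[(t + quarter) % T] * i[t] for t in range(T)) / T
    return P, Q

# Example: a 230 V RMS sine voltage and a 5 A RMS current lagging by 60 degrees
T = 400
v = [math.sqrt(2) * 230 * math.sin(2 * math.pi * t / T) for t in range(T)]
i = [math.sqrt(2) * 5 * math.sin(2 * math.pi * t / T - math.pi / 3) for t in range(T)]
P, Q = active_reactive_power(v, i, T)
```

For this example, |P| should equal V·I·cos 60° = 575 W and |Q| should equal V·I·sin 60° ≈ 996 var.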
To achieve better identification performance, further types of features are needed. In this paper, the series of the obtained active power and reactive power are listed as follows:

$$P=\{P_1,P_2,\ldots,P_{n_1}\},\qquad Q=\{Q_1,Q_2,\ldots,Q_{n_2}\}, \tag{2}$$

where $n_1$ and $n_2$ are the numbers of ranks belonging to active power and reactive power, respectively; the data are ranked in a plain sequence. Furthermore, to distinguish different loads effectively, especially loads with small active power (usually below 100 W), unequal interval segmentation is adopted. The power variation of these loads is so subtle that it complicates load identification. Thus, statistics of the load signatures in the load database are used to obtain a more accurate power interval [23]. The P and Q distributions from the dataset are illustrated as rectangles with different colors in Figure 1, where loads 1 to 11, respectively, represent bedroom appliances, basement plugs, clothes dryer, clothes washer, dining room plugs, dishwasher, network equipment, fridge, heat pump, TV/PVR/AMP, and wall oven. As seen, several loads are concentrated where P lies in 0–100 and Q lies in 0–20, which indicates that loads in this range need to be segmented more densely. Hence, the segmentation point $p_i$ is set as follows:

$$p_i=\left\{\,p_i \;\middle|\; r(p_i)\le\varepsilon \;\&\; r(p_{i\pm 1})>\varepsilon\right\},\qquad r(p_i)=\frac{1}{nq}\sum_{i=1}^{n}K\!\left(\frac{p-p_i}{q}\right), \tag{3}$$

where $r(p_i)$ is the kernel density at $p_i$, $\{p_i\}\,(i=1\sim n)$ is the set of power points, $q$ is the window width of the kernel density estimation function $K(\cdot)$, and $\varepsilon$ is the threshold. According to equation (3), the segmentation point is the point at which the kernel density divides the power signatures into two groups.
Figure 1: Statistic diagram of load power features.
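The kernel density segmentation of equation (3) can be sketched as follows. This is a hedged illustration: a Gaussian kernel is assumed for K(·), the threshold condition is interpreted as "the deepest point of a density valley below ε", and the power points, window width q, threshold ε, and scan step are made-up example values.

```python
import math

def kernel_density(p, points, q):
    """Gaussian kernel density estimate r(p) with window width q, as in eq. (3)."""
    n = len(points)
    K = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(K((p - pi) / q) for pi in points) / (n * q)

def segmentation_points(points, q, eps, step=1.0):
    """Scan the power axis and return the bottoms of density valleys that
    fall below the threshold eps: the points that split the power
    signatures into separate groups."""
    lo, hi = min(points), max(points)
    grid = [lo + k * step for k in range(int((hi - lo) / step) + 1)]
    dens = [kernel_density(p, points, q) for p in grid]
    return [grid[j] for j in range(1, len(grid) - 1)
            if dens[j] <= eps and dens[j - 1] >= dens[j] <= dens[j + 1]]
```

With two well-separated clusters of power points, the function should return a split point somewhere in the gap between them.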
However, in complicated situations with various loads, the features P and Q and their variations are not sufficient to distinguish loads with similar electrical features. Therefore, the following subsection selects time domain features as dependable nonelectrical features for load identification.
2.2. Time Domain Feature Construction
Time domain features are often treated as potential load features [24]. To get all-round information about load features in the time domain, this paper introduces the features of working time length, working time interval, working periodicity, and holiday character.
Working time length: For common microgrid loads, the working time length is fairly regular; i.e., it is an intrinsic characteristic for distinguishing different loads [25]. Figure 2 illustrates the working time length counted for the eleven loads shown in Figure 1. It can be seen that most of the microgrid loads work for no more than 100 minutes in a day, while a few loads work for over 1400 minutes. In addition, some loads have different working time lengths. Therefore, to construct this type of feature, the segmentation method is used as follows:

$$L=\{L_1,L_2,\ldots,L_{m_3}\}, \tag{4}$$

where $L$ denotes the length of working time and $m_3$ is the number of ranks.
Working time interval: The working time interval of loads often reflects the users' energy behavior. Similarly, Figure 3 illustrates the counted time intervals of the same loads, with different colors indicating different loads. It can be seen that most loads are used in the daytime from 6:30 to 21:30, while some other loads work at night during 21:30–6:30. Thus, the time interval can generally be divided into 9 segments for quantization, as shown in Table 1. During 6:30–21:30, roughly every two hours form one part, and the span from 21:30 to 6:30 on the next day is divided into only 2 segments: the late-night period (21:30–24:00) and the period before dawn (0:00–6:30).
Working periodicity: Generally speaking, some loads work periodically, such as a refrigerator. On the contrary, the working time of loads without periodicity is almost uncertain. Thus, to distinguish the two types of loads easily, the mark is simply defined as T = {T1, T2}, with T1 for nonperiodicity and T2 for periodicity.
Holiday character: Different user routines lead to different frequencies of load usage. For example, the probability of using traditional household electronics increases on holidays, while the usage of entertainment loads declines on weekdays when users go to work. Thus, this is a nonnegligible character in the time domain. For convenience, this feature is defined as F = {F1, F2, F3}, where F1, F2, and F3 denote loads that work every day, every weekday, and occasionally, respectively.
Figure 2: Statistic diagram of load working time length.
Figure 3: Statistical diagram of working time interval.
Table 1: Segments of working time interval.

Segment   Time interval
t1        0:00–6:30
t2        6:30–8:30
t3        8:30–11:30
t4        11:30–13:30
t5        13:30–15:30
t6        15:30–17:30
t7        17:30–18:30
t8        18:30–21:30
t9        21:30–24:00
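Table 1 can be encoded as a simple quantizer; the sketch below is an illustrative mapping from clock time to segment label (the names `SEGMENT_ENDS` and `time_segment` are ours, not from the paper).

```python
# Boundaries of the nine working-time-interval segments from Table 1,
# expressed in minutes since midnight.
SEGMENT_ENDS = [
    (6 * 60 + 30, "t1"),   # 0:00-6:30
    (8 * 60 + 30, "t2"),   # 6:30-8:30
    (11 * 60 + 30, "t3"),  # 8:30-11:30
    (13 * 60 + 30, "t4"),  # 11:30-13:30
    (15 * 60 + 30, "t5"),  # 13:30-15:30
    (17 * 60 + 30, "t6"),  # 15:30-17:30
    (18 * 60 + 30, "t7"),  # 17:30-18:30
    (21 * 60 + 30, "t8"),  # 18:30-21:30
    (24 * 60, "t9"),       # 21:30-24:00
]

def time_segment(hour, minute):
    """Quantize a clock time into one of the nine segments t1..t9."""
    m = hour * 60 + minute
    for end, label in SEGMENT_ENDS:
        if m < end:
            return label
    return "t9"
```

For instance, 12:00 falls into t4 and 22:00 into t9.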
On these bases, load features can be detected and extracted to build the load feature database, which provides training samples. To this end, the load event detection and feature extraction method is presented as follows.
2.3. Load Event Detection and Feature Extraction
A load event occurs when a load is switched on or off, causing its electrical features to change. Generally, the active power is the most significant indicator when a load event happens [26]: its evolution curve shows a step-like jump, indicating that the load event occurs. To obtain this feature, the sliding window algorithm is used to detect these changes and extract the feature.
For time domain features, the load features are defined as follows:

$$X=\{x_k\},\quad k=1,2,\ldots \tag{5}$$

When a load event occurs, the distribution of the observations changes. The following hypothesis test is made to determine whether a change exists:

$$H_0:\; X\sim N(\mu_0,\sigma_0^2),\; i\ge 1 \quad\leftrightarrow\quad H_1:\; X\sim \begin{cases} N(\mu_0,\sigma_0^2), & 1\le i\le\tau,\\ N(\mu_1,\sigma_1^2), & i>\tau, \end{cases} \tag{6}$$

where $\mu_0$ and $\mu_1$ are the mean values of the observed quantities, $\sigma_0$ and $\sigma_1$ are their standard deviations, and $\tau$ is a positive integer recording the time of the load event. Specifically, the variation of the mean and variance increases when the sliding window contains the event occurrence point. Thus, a load event is declared if the variation of the mean and standard deviation exceeds a certain threshold.
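A minimal sketch of the sliding-window detector described above: each window is split into two halves, and a load event is flagged when the mean or the standard deviation of the second half deviates from the first half by more than a threshold. The window length and thresholds are illustrative assumptions, not values from the paper.

```python
import statistics

def detect_events(power, win=10, mean_th=30.0, std_th=15.0):
    """Slide a window over the active-power series and flag a load event
    when the mean or standard deviation of the second half of the window
    deviates from the first half by more than the thresholds."""
    events = []
    half = win // 2
    for k in range(0, len(power) - win + 1):
        a = power[k:k + half]
        b = power[k + half:k + win]
        dmean = abs(statistics.mean(b) - statistics.mean(a))
        dstd = abs(statistics.pstdev(b) - statistics.pstdev(a))
        if dmean > mean_th or dstd > std_th:
            # Record the candidate change point, skipping duplicates from
            # overlapping windows that straddle the same jump.
            if not events or k + half - events[-1] > win:
                events.append(k + half)
    return events

# A 60 W load switches on at sample 40 and off at sample 80
signal = [5.0] * 40 + [65.0] * 40 + [5.0] * 40
events = detect_events(signal)
```

On this toy signal the detector reports two events, one near each power step.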
The load steady state features have the advantages of easy acquisition and excellent repeatability [27]. However, the features of a load running to steady state generally show volatility due to fluctuations of voltage and current, which leads to different steady state feature extractions [28]. To fully reflect the load operation process, this paper selects the steady state features near the occurrence point of the load event. When a load event is detected, the two steady states around the load input and cut-out are differentiated. Let feature vectors Vo and Vc denote time series of a certain length around the load input and cut-out events, respectively; together they form a feature data sample Voc = [Vo, Vc].
The sliding window algorithm is thus used to detect load events from changes in the active power, and the detected load events are then used to extract the load steady state features, improving load identification performance.
3. Load Identification Model

3.1. Bi-LSTM Algorithm Model
The load data collected by nonintrusive equipment can be regarded as time series signals. A recurrent neural network (RNN) is an artificial neural network with connected nodes and a memory function, which can recognize serialized information effectively [29]. However, its state is transmitted one way in chronological order, which only guarantees the forward propagation of information. The improved Bi-RNN scheme adds a hidden layer to the existing RNN model: it runs each training sequence forward and backward through two separate RNNs that are connected to the same output layer.
To avoid the problems of gradient vanishing and long-term dependence, we further improve the Bi-RNN model into the Bi-LSTM model. In this model, multiple activated neurons are used as hidden layers to selectively save or forget long-term data, satisfying the long-term data dependence requirements. The Bi-LSTM consists of two recurrent models, one trained on the data in the forward direction and the other in the backward direction, both connected to the same output layer. Figure 4 shows the structure of the Bi-LSTM model, which is a combination of two unidirectional LSTMs.
Figure 4: Bi-LSTM algorithm model.
As seen in Figure 4, the Bi-LSTM hidden layer output vectors are marked as $A$ and $A'$, participating in the forward and reverse calculations, respectively. In the forward calculation, the output $A_t$ of the hidden layer is affected by $A_{t-1}$; in the reverse calculation, the value $A'_t$ of the hidden layer is affected by $A'_{t+1}$. The activation functions of the Bi-LSTM model in Figure 4 are given as follows:

$$y_t=g(VA_t+V'A'_t),\qquad A_t=f(WA_{t-1}+UX_t),\qquad A'_t=f(W'A'_{t+1}+U'X_t), \tag{7}$$

where $g(\cdot)$ and $f(\cdot)$ represent the output layer and hidden layer neuron activation functions, respectively. In the forward calculation, $V$ is the hidden-to-output weight matrix, $W$ is the hidden-to-hidden weight matrix, and $U$ is the input-to-hidden weight matrix. Correspondingly, $V'$, $W'$, and $U'$ are the hidden-to-output, hidden-to-hidden, and input-to-hidden weight matrices in the reverse calculation.
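Equation (7) describes the bidirectional recurrent structure underlying the Bi-LSTM. A minimal pure-Python forward pass of exactly these equations (with simple tanh RNN cells standing in for full LSTM gates, and g taken as the identity) might look like:

```python
import math

def mat_vec(M, x):
    """Matrix-vector product for plain nested lists."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def vec_add(*vs):
    return [sum(t) for t in zip(*vs)]

def bidirectional_forward(X, W, U, Wr, Ur, V, Vr, f=math.tanh):
    """Forward pass of equation (7): a forward recurrence over X, a backward
    recurrence over reversed X, and an output combining both hidden states."""
    H = len(W)                       # hidden size
    A = [0.0] * H                    # forward hidden state A_t
    Ar = [0.0] * H                   # backward hidden state A'_t
    fwd, bwd = [], []
    for x in X:                      # A_t = f(W A_{t-1} + U X_t)
        A = [f(a) for a in vec_add(mat_vec(W, A), mat_vec(U, x))]
        fwd.append(A)
    for x in reversed(X):            # A'_t = f(W' A'_{t+1} + U' X_t)
        Ar = [f(a) for a in vec_add(mat_vec(Wr, Ar), mat_vec(Ur, x))]
        bwd.append(Ar)
    bwd.reverse()
    # y_t = g(V A_t + V' A'_t); g is taken as the identity here
    return [vec_add(mat_vec(V, a), mat_vec(Vr, ar)) for a, ar in zip(fwd, bwd)]
```

With one-dimensional states and an input that is nonzero only at the first step, the output at that step collects contributions from both directions.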
3.2. Load Identification Scheme Based on Bi-LSTM Model
The load identification scheme of the Bi-LSTM neural network consists of training and identification stages. A significant step is to use the SoftMax function to map the outputs into the range [0, 1], which can be regarded as the probabilities that the sample data belong to a certain class [30]. In classification, the cross entropy represents the degree of proximity between the actual output and the expected output and is used to calculate the loss. In the training step, the neural network realizes the nonlinear fitting from input to output by adjusting the parameters of the weight matrices. The backpropagation through time (BPTT) method adjusts the weight matrix parameters along the gradient direction of the error so that the outputs of the neural network steadily approach the actual results [31]. In this paper, the SoftMax function and the cross entropy function [32] are used as the activation function and the loss function of the neural network, respectively, and BPTT is taken as the training algorithm to establish the identification model.
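The SoftMax mapping and cross-entropy loss mentioned above can be sketched in a few lines (an illustrative stand-alone version, not the authors' implementation):

```python
import math

def softmax(z):
    """Map raw network outputs into [0, 1] probabilities that sum to one."""
    m = max(z)                       # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(p, target):
    """Loss between a predicted distribution p and a one-hot target."""
    return -sum(t * math.log(max(q, 1e-12)) for q, t in zip(p, target))

p = softmax([2.0, 1.0, 0.1])
loss = cross_entropy(p, [1, 0, 0])
```

For a one-hot target, the loss reduces to the negative log-probability assigned to the true class.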
As shown in Figure 5, the load identification scheme detects load events and extracts the input and cut-out load features according to the times of the load events.
Figure 5: Flowchart of the identification scheme based on the Bi-LSTM model.
In the microgrid EMS, many electrical appliances operate at the same time, and the collected steady-state data contain the feature vectors of multiple appliances, which is not conducive to identification. In this paper, a load start state feature matching method based on the Bi-LSTM model is therefore adopted, as shown in Figure 6.
Figure 6: Load start and stop feature matching principle.
As seen in Figure 6, the time of a load event is first determined by load event detection, and the load characteristics of all input and cut-out events are extracted. Then, the input and cut-out features of the same appliance are matched as the input of the neural network model: a cut-out (input) feature combined with each of the other input (cut-out) features is fed into the Bi-LSTM model for identification, and the combination with the highest output probability is taken as the best match.
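The matching step can be sketched as follows, where `model` stands in for the trained Bi-LSTM returning a match probability; the callable and the feature values in the example below are hypothetical placeholders.

```python
def best_match(cut_feature, input_features, model):
    """Pair one cut-out feature with each candidate input feature, run the
    combined sample Voc = [Vo, Vc] through the identification model, and
    keep the combination with the highest output probability."""
    best = None
    for name, in_feature in input_features.items():
        prob = model(in_feature + cut_feature)   # Voc = [Vo, Vc]
        if best is None or prob > best[1]:
            best = (name, prob)
    return best

# Toy stand-in model: the closer the on/off powers cancel, the higher the score
model = lambda v: 1.0 / (1.0 + abs(sum(v)))
name, prob = best_match([-59.0], {"Load1 on": [60.0], "Load2 on": [450.0]}, model)
```

Here the 59 W cut-out event matches the 60 W input event, mirroring the pairing logic of Figure 6.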
4. Test Results and Discussion
This work selects eleven representative loads from the microgrid EMS for the test, including traditional household loads and commonly used modern electronic loads. Without loss of generality, loads with small power consumption and loads with multiple statuses are included to demonstrate the performance of our method. Then, based on these eleven representative loads, five typical loads are randomly selected to experimentally verify the load start and stop feature matching principle.
4.1. Load Features Extraction of the Microgrid EMS
The sliding window algorithm is used to extract the detected load event features and obtain the load feature data samples from the microgrid EMS. The features are constructed from the active power, the reactive power, and fifteen odd and even current harmonic features, which are selected as the input for the neural networks. The harmonic components of the current are obtained by the fast Fourier transform.
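The harmonic features can be obtained from one cycle of current samples; the sketch below uses a plain DFT in place of an FFT for clarity, and the example waveform (a fundamental plus a 3rd harmonic) is invented for illustration.

```python
import cmath
import math

def harmonic_magnitudes(i_samples, n_harmonics):
    """Amplitudes of the first n_harmonics components of one current cycle,
    computed by a discrete Fourier transform (a plain DFT stands in for
    the FFT here)."""
    N = len(i_samples)
    mags = []
    for k in range(1, n_harmonics + 1):
        c = sum(i_samples[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N))
        mags.append(2 * abs(c) / N)  # amplitude of the k-th harmonic
    return mags

# A current with a fundamental of amplitude 5 and a 3rd harmonic of amplitude 1
N = 256
i = [5 * math.sin(2 * math.pi * n / N) + 1 * math.sin(6 * math.pi * n / N)
     for n in range(N)]
mags = harmonic_magnitudes(i, 5)
```

The recovered spectrum has amplitude 5 at the fundamental, 1 at the 3rd harmonic, and essentially zero elsewhere.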
In the load identification scheme for eleven different operating states in the microgrid EMS, each output can be considered a match of a load against the trained set, so the neural network outputs are defined as listed in Table 2.
Table 2: Neural network model outputs.

Output vector   Power consumption equipment
                Load1   Load2   ...   Load10   Load11
V1              1       0       ...   0        0
V2              0       1       ...   0        0
...             ...     ...     ...   ...      ...
V10             0       0       ...   1        0
V11             0       0       ...   0        1
Each output is specified as a probability value representing the probability that the input data belong to a certain state. The value ranges between 0 and 1, where 0 and 1 mean that the input data cannot and must belong to the state, respectively. The state with the highest output probability is regarded as the identification result of the input data.
4.2. Training Scenario I
In this test, five days of load feature data from the microgrid EMS are picked for training. In each case, the loads used in the microgrid EMS are not directly illustrated. For example, the active and reactive power curves of one day are drawn in Figure 7.
Figure 7: Load power event curves of an example.
The labeled data are processed by the sliding window algorithm to obtain each load event, and the extraction results are shown in Figure 8. It can be seen that there are 3 loads with powers in the interval 0–30 W in Figure 8(a), 5 loads in 30–100 W in Figure 8(b), 2 loads in 100–1000 W in Figure 8(c), and only 1 load in 1000–2000 W in Figure 8(d).
Figure 8: Load extracting result.
To verify the proposed scheme, the RNN and LSTM networks are compared with the Bi-LSTM using the 5-day data for the test. The precision (P), recall (R), accuracy (A), and F1 values are used as the evaluation indexes to train and test the three networks. The average precision (P), average accuracy (A), average recall (R), and average F1 of the three neural networks are listed in Table 3.
Table 3: Average accuracy, precision, recall, and F1 of the neural networks.

Neural network   Average accuracy (A) (%)   Average precision (P) (%)   Average recall (R) (%)   Average F1
RNN              98.70                      94.11                       91.28                    0.93
LSTM             99.29                      96.65                       95.38                    0.96
Bi-LSTM          99.61                      97.81                       97.44                    0.98
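The four evaluation indexes can be computed per load class as in the following sketch (our own helper for illustration, not the paper's code):

```python
def evaluation_metrics(y_true, y_pred, positive):
    """Per-class precision, recall, accuracy, and F1 for load identification."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = correct / len(y_true)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, accuracy, f1
```

For example, with labels ["L1", "L1", "L2", "L2", "L1"] and predictions ["L1", "L2", "L2", "L2", "L1"], class L1 has precision 1.0, recall 2/3, accuracy 0.8, and F1 0.8.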
The four evaluation indexes in Table 3, average precision (P), average accuracy (A), average recall (R), and average F1, show that the Bi-LSTM is better than the RNN and LSTM. Moreover, the Bi-LSTM shows higher computational efficiency and better performance in load identification. The training and testing errors of the three neural networks are shown in Figures 9 and 10. As seen, the training error drops fastest for the Bi-LSTM scheme. At the end of the training period, all three neural networks tend to stabilize, but the Bi-LSTM does not fluctuate since its error drops smoothly, and stability is reached faster.
Figure 9: Training error results of the loads.
Figure 10: Testing error results of the loads.
Additionally, according to Figure 10, the error ranges of the test samples for the RNN, LSTM, and Bi-LSTM are 0.02–0.08, 0.01–0.06, and 0.01–0.03, respectively. The Bi-LSTM test error is thus smaller and more concentrated.
Figure 11 shows the result of load identification. As time increases, the accuracy of the Bi-LSTM and LSTM identification models increases rapidly, while the accuracy of the RNN model fluctuates greatly. By comparison, the accuracy of the Bi-LSTM model is about 10% higher than that of the LSTM. Therefore, the Bi-LSTM training error converges faster, and the identification error is smaller for the same number of training iterations. Overall, the proposed scheme combining the sliding window algorithm and the Bi-LSTM shows satisfactory load identification performance in these cases.
Figure 11: Test results of load identification.
4.3. Training Scenario II
In this test, five typical loads are randomly selected from the above eleven representative loads for training. Each load is turned on and off in turn, with the interval between two starts and stops kept greater than 50 sampling points; each sampling point has a time interval of 0.02 seconds. Figure 12 shows the power curves of the five typical loads. As seen, the proposed load identification scheme can detect the load starting and ending states and has good robustness to small disturbances. The results of the on and off event detection are shown in Tables 4 and 5.
Figure 12: Results of event detection.
Table 4: Test results of input events.

On event      Transition point   Partial extraction of characteristic
a: Load1 on   (180–183)          (63.7532, 63.4903, 62.2736, ...)
b: Load2 on   (325–327)          (449.4157, 449.4989, 449.5203, ...)
c: Load3 on   (598–600)          (725.9656, 725.9343, 726.2512, ...)
d: Load4 on   (998–1000)         (673.6530, 673.8846, 674.1954, ...)
e: Load5 on   (2283–2287)        (602.8433, 603.1875, 601.6605, ...)
Table 5: Test results of cut-out events.

Off event      Transition point   Partial extraction of characteristic
A: Load4 off   (1527–1568)        (642.3375, 647.3315, 647.4980, ...)
B: Load2 off   (3430–3432)        (447.7275, 448.5026, 442.4409, ...)
C: Load5 off   (3532–3534)        (591.7536, 591.6710, 591.6920, ...)
D: Load3 off   (3742–3744)        (720.0392, 720.0419, 720.1326, ...)
E: Load1 off   (4135–4137)        (74.0710, 74.0809, 74.0907, ...)
Five groups of load samples were obtained by combining the load features and fed into the Bi-LSTM model, and the combination with the highest output probability is taken as the best match. The load matching test results are shown in Table 6. As seen, the identification results are correct, and each matching probability is greater than 99%. To a certain extent, the proposed load start and stop feature matching scheme can effectively match and identify load events.
Table 6: Load matching results.

Off event   Optimal matching item   Standard identification item   Best identification   Probability
A           d                       Load4                          Load4                 0.9999604
B           b                       Load2                          Load2                 0.9999851
C           e                       Load5                          Load5                 0.9999831
D           c                       Load3                          Load3                 0.9999558
E           a                       Load1                          Load1                 0.9999988
5. Conclusions
Owing to its cost-effectiveness, nonintrusive load monitoring provides intelligent demand-side management and power utilization for the microgrid EMS. This paper proposes a Bi-LSTM-based nonintrusive load monitoring method that considers both power features and time features. To obtain load features from load events, the sliding window algorithm is adopted. The load features, constructed from the active power, reactive power, and fifteen odd and even current harmonics, are selected as the input variables of the neural network. The precision (P), recall (R), accuracy (A), and F1 values are used as the evaluation indexes to train and test the RNN, LSTM, and Bi-LSTM neural networks. Experiments on a five-day microgrid EMS load feature dataset verify the identification performance of the proposed method, and five typical loads are randomly selected in the experiments to verify the start and stop feature matching results.
Data Availability
The training data results used to support the findings of this study are included within the article. The raw single-device load data and training data used to support the findings of this study are included within the supplementary information files. All data types used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Supplementary Materials
The raw single-device load data and training data used to support the findings of this study.
References

[1] Y. Lin and M. Tsai, "An advanced home energy management system facilitated by nonintrusive load monitoring with automated multiobjective power scheduling," vol. 6, no. 4, pp. 1839–1851, 2015, doi: 10.1109/TCYB.2020.3035587.
[2] J.-X. Chin, T. Tinoco De Rubira, and G. Hug, "Privacy-protecting energy management unit through model-distribution predictive control," vol. 8, no. 6, pp. 3084–3093, 2017, doi: 10.1109/tsg.2017.2703158.
[3] X. Lu, J. Lai, X. Yu, Y. Wang, and J. M. Guerrero, "Distributed coordination of islanded microgrid clusters using a two-layer intermittent communication network," vol. 14, no. 9, pp. 3956–3969, 2018, doi: 10.1109/tii.2017.2783334.
[4] X. Lu, J. Lai, X. Yu, Y. Wang, and J. M. Guerrero, "A novel secondary power management strategy for multiple AC microgrids with cluster-oriented two-layer cooperative framework," vol. 17, no. 2, pp. 1483–1495, 2021, doi: 10.1109/tii.2020.2985905.
[5] F. Luo, G. Ranzi, W. Kong, Z. Y. Dong, S. Wang, and J. Zhao, "Non-intrusive energy saving appliance recommender system for smart grid residential users," vol. 11, no. 7, pp. 1786–1793, 2017.
[6] Y. Xiao, Y. Hu, H. He, D. Zhou, Y. Zhao, and W. Hu, "Non-intrusive load identification method based on improved KM algorithm," vol. 7, pp. 151368–151377, 2019, doi: 10.1109/access.2019.2948079.
[7] H. Liu, Q. Zou, and Z. Zhang, "Energy disaggregation of appliances consumptions using HAM approach," vol. 7, pp. 185977–185990, 2019, doi: 10.1109/access.2019.2960465.
[8] D. Saha, A. Bhattacharjee, D. Chowdhury, E. Hossain, and M. M. Islam, "Comprehensive NILM framework: device type classification and device activity status monitoring using capsule network," vol. 8, pp. 179995–180009, 2020, doi: 10.1109/access.2020.3027664.
[9] C. E. Kement, H. Gultekin, B. Tavli, T. Girici, and S. Uludag, "Comparative analysis of load-shaping-based privacy preservation strategies in a smart grid," vol. 13, no. 6, pp. 3226–3235, 2017, doi: 10.1109/tii.2017.2718666.
[10] Y.-H. Lin and M.-S. Tsai, "Non-intrusive load monitoring by novel neuro-fuzzy classification considering uncertainties," vol. 5, no. 5, pp. 2376–2384, 2014, doi: 10.1109/tsg.2014.2314738.
[11] P. R. Z. Taveira, C. H. V. De Moraes, and G. Lambert-Torres, "Non-intrusive identification of loads by random forest and fireworks optimization," vol. 8, pp. 75060–75072, 2020, doi: 10.1109/access.2020.2988366.
[12] Q. Liu, K. M. Kamoto, X. Liu, M. Sun, and N. Linge, "Low-complexity non-intrusive load monitoring using unsupervised learning and generalized appliance models," vol. 65, no. 1, pp. 28–37, 2019, doi: 10.1109/tce.2019.2891160.
[13] W. Kong, Z. Y. Dong, B. Wang, J. Zhao, and J. Huang, "A practical solution for non-intrusive type II load monitoring based on deep learning and post-processing," vol. 11, no. 1, pp. 148–160, 2020, doi: 10.1109/tsg.2019.2918330.
[14] D. Li and S. Dick, "Residential household non-intrusive load monitoring via graph-based multi-label semi-supervised learning," vol. 10, no. 4, pp. 4615–4627, 2019, doi: 10.1109/tsg.2018.2865702.
[15] B. Zhao, K. He, L. Stankovic, and V. Stankovic, "Improving event-based non-intrusive load monitoring using graph signal processing," vol. 6, pp. 53944–53959, 2018, doi: 10.1109/access.2018.2871343.
[16] M. Alhussein, K. Aurangzeb, and S. I. Haider, "Hybrid CNN-LSTM model for short-term individual household load forecasting," vol. 8, pp. 180544–180557, 2020, doi: 10.1109/access.2020.3028281.
[17] J. M. Gillis and W. G. Morsi, "Non-intrusive load monitoring using semi-supervised machine learning and wavelet design," vol. 8, no. 6, pp. 2648–2655, 2017, doi: 10.1109/tsg.2016.2532885.
[18] Y. Ma, A. Maqsood, K. Corzine, and D. Oslebo, "Long short-term memory autoencoder neural networks based DC pulsed load monitoring using short-time Fourier transform feature extraction," in Proceedings of the 2020 IEEE 29th International Symposium on Industrial Electronics (ISIE), Delft, Netherlands, June 2020, pp. 912–917.
[19] D. Oslebo, K. Corzine, T. Weatherford, A. Maqsood, and M. Norton, "DC pulsed load transient classification using long short-term memory recurrent neural networks," in Proceedings of the 2019 13th International Conference on Signal Processing and Communication Systems (ICSPCS), Gold Coast, Australia, December 2019, pp. 1–6.
[20] K. Basu, V. Debusschere, A. Douzal-Chouakria, and S. Bacha, "Time series distance-based methods for non-intrusive load monitoring in residential buildings," vol. 96, pp. 109–117, 2015, doi: 10.1016/j.enbuild.2015.03.021.
[21] R. Machlev, Y. Levron, and Y. Beck, "Modified cross-entropy method for classification of events in NILM systems," vol. 10, no. 5, pp. 4962–4973, 2019, doi: 10.1109/tsg.2018.2871620.
[22] X. Wu, X. Han, L. Liu, and B. Qi, "A load identification algorithm of frequency domain filtering under current underdetermined separation," vol. 6, pp. 37094–37107, 2018, doi: 10.1109/access.2018.2851018.
[23] R. Bonfigli, E. Principi, M. Fagiani, M. Severini, S. Squartini, and F. Piazza, "Non-intrusive load monitoring by using active and reactive power in additive Factorial Hidden Markov Models," vol. 208, pp. 1590–1607, 2017, doi: 10.1016/j.apenergy.2017.08.203.
[24] D. F. Teshome, T. D. Huang, and K. Lian, "Distinctive load feature extraction based on Fryze's time-domain power theory," vol. 3, no. 2, pp. 60–70, 2016.
[25] K. Basu, V. Debusschere, S. Bacha, U. Maulik, and S. Bondyopadhyay, "Nonintrusive load monitoring: a temporal multilabel classification approach," vol. 11, no. 1, pp. 262–270, 2015, doi: 10.1109/tii.2014.2361288.
[26] X. Wu, X. Han, and K. X. Liang, "Event-based non-intrusive load identification algorithm for residential loads combined with underdetermined decomposition and characteristic filtering," vol. 13, no. 1, pp. 99–107, 2019.
[27] S. Ghosh, A. Chatterjee, and D. Chatterjee, "Improved non-intrusive identification technique of electrical appliances for a smart residential system," vol. 13, no. 5, pp. 695–702, 2019.
[28] F. Farokhi and H. Sandberg, "Fisher information as a measure of privacy: preserving privacy of households with smart meters using batteries," vol. 9, no. 5, pp. 4726–4734, 2018, doi: 10.1109/tsg.2017.2667702.
[29] J. Fei and C. Lu, "Adaptive sliding mode control of dynamic systems using double loop recurrent neural network structure," vol. 29, no. 4, pp. 1275–1286, 2018, doi: 10.1109/tnnls.2017.2672998.
[30] Y. Luo, Y. Wong, M. Kankanhalli, and Q. Zhao, "G-softmax: improving intraclass compactness and interclass separability of features," vol. 31, no. 2, pp. 685–699, 2020, doi: 10.1109/tnnls.2019.2909737.
[31] K. Chen and Q. Huo, "Training deep bidirectional LSTM acoustic model for LVCSR by a context-sensitive-chunk BPTT approach," vol. 24, no. 7, pp. 1185–1193, 2016, doi: 10.1109/taslp.2016.2539499.
[32] G. Cui, B. Liu, W. Luan, and Y. Yu, "Estimation of target appliance electricity consumption using background filtering," vol. 10, no. 6, pp. 5920–5929, 2019, doi: 10.1109/tsg.2019.2892841.