This research addresses the challenge of recognizing human daily activities using surface electromyography (sEMG) and wearable inertial sensors. Effective and efficient recognition in this context has emerged as a cornerstone of robust remote health monitoring systems, among other applications. We propose a novel pipeline that attains state-of-the-art recognition accuracies on a recent and standard dataset, the Human Gait Database (HuGaDB). Using wearable gyroscopes, accelerometers, and electromyography sensors placed on the thigh, shin, and foot, we developed an approach that performs sensor fusion and feature selection jointly. Because the two steps are done jointly, the proposed pipeline empowers the learned model to benefit from the interaction of features that might otherwise have been dropped. Using statistical and time-based features from the heterogeneous signals of the aforementioned sensor types, our approach attains a mean accuracy of 99.8%, which is the highest accuracy reported on HuGaDB in the literature. This research underlines the potential of incorporating EMG signals, especially when fusion and selection are done simultaneously. Moreover, this holds even with simple off-the-shelf feature selection methods such as the Sequential Feature Selection family of algorithms. Through extensive simulations, we also show that the left thigh is a key placement for attaining high accuracies; with one inertial sensor on that single placement alone, we were able to achieve a mean accuracy of 98.4%. The presented in-depth comparative analysis shows the influence that every sensor type, position, and placement can have on the attained recognition accuracies, a tool that can facilitate the development of robust systems, customized to specific scenarios and real-life applications.
Accurate and timely recognition of human daily activities, throughout the day, is required in remote health-monitoring and care-giving systems [
In this work, we employ the Human Gait Database (HuGaDB) [
Our own previous research [
The contributions of this research can be summarized as follows:
- Identifying, for each sensor (accelerometer, gyroscope, and EMG), the direction and the sensor position that achieve the highest accuracies for human activity recognition, in addition to the best classifier among the six different algorithms applied
- Demonstrating that using the best axis of a sensor, together with the best classifier and selected features, can lead to competitive performance that is close to the performance achieved when considering all three axes of that sensor
- Showing that feature selection methods yield a significant improvement in recognition accuracy, with approximately half of the original features retained
- Attaining the highest recognition accuracy on HuGaDB using sensor fusion (of accelerometer, gyroscope, and EMG) and an off-the-shelf feature selection technique
The rest of this article is structured as follows. Section
Wearable sensors, such as accelerometers and gyroscopes, are widely used for human activity recognition. Because accelerometers measure linear acceleration, they fail to identify actions that are characterized by (or involve) joint rotation. Combining gyroscopes, which measure rotational motion, with accelerometers can overcome this problem. These two sensors are normally integrated in one wearable inertial measurement unit (IMU).
Another type of wearable sensor is the surface electromyography (sEMG) sensor. The myoelectric signal measured by sEMG represents the electrical activity associated with muscular action; hence, it can play an important role in activity recognition. In this study, we consider wearable inertial sensors and sEMG sensors placed on different positions of the body. In addition, we consider other major aspects of comparison, including the number of subjects, features, activities, sensor placements and types, and finally the various types of classifiers (machine learning algorithms) used.
Among the previous work on human activity recognition is the scheme presented in [
A novel method was proposed for activity identification in [
In [
In the context of activity classification, the performance on two datasets was compared in [
Surface electromyography (sEMG) and accelerometer sensors were used for monitoring daily activities of stroke patients in [
The research in [
Recently, a new method was proposed in [
In [
In this research, we use three types of sensor data (accelerometer, gyroscope, and electromyography) and six classification techniques to classify the activities, in addition to four feature selection algorithms. Also, 12 activities and 7 different sensor placements, with 18 subjects, are considered. According to the literature survey presented earlier, and the summary in Table , the proposed work is distinguished as follows:
- It investigates the effectiveness of the electromyography sensor when it is combined with accelerometer and gyroscope sensors to recognize human activities
- It presents a thorough comparison between the different types of commonly used machine learning techniques and feature selection methods, while taking into consideration other aspects of comparison, such as the number of subjects and activities
- It achieves a significantly high accuracy (99.8%) on a recent and comprehensive dataset
Review of the different techniques from the literature that are most-related to the proposed research.
Study | No. of subjects | No. of activities | No. of features | No. of positions | Sensor position | Sensor type | Classifiers | Average of classification accuracy |
---|---|---|---|---|---|---|---|---|
[ | 10 | 7 | 11 | 2 | Wrist and ankle | Accelerometer | PNN and K-PNN | 96% |
[ | 10 | 7 | 5 | 3 | Hip, thigh, and ankle | Accelerometer | SVM, regularized LR, and Adaboost | 78.2% |
[ | 15 | 18 | 4 | 3 | Wrist, waist, and thigh | Accelerometer | Decision tree | 93.8% |
[ | 4 | 5 | 12 | 4 | Left thigh, right arm, ankle, and abdomen | Accelerometer | SVM, AMM, HNN | 81% avg. per subject |
[ | 30 | 6 | 24 | 1 | Waist | Accelerometer and gyroscope | RF, SVM, NB, J48, NN, K-NN, Rpart, JRip, Bagging, and Adaboost | 99.8% avg. per activity |
[ | 18 | 1 | 9 | 1 | Chest | Accelerometer | NB, SVM, RF, J48, NN, K-NN, Rpart, JRip, Bagging, and Adaboost | 99.9% avg. per activity |
[ | 10 | 11 | 8 | 8 | Arms, thigh, waist, and chest | Accelerometer and electromyography | ANN | 97.4% |
[ | 10 | 30 | 12 | 1 | Arm | Accelerometer, gyroscope, magnetometer, and electromyography | LDA and QDA | 71.6% |
[ | 19 | 13 | 19 | 4 | Chest, ankle, hip, and wrist | Accelerometer and gyroscope | k-NN | 99.13% |
[ | 10 | 12 | 14 | 1 | Wrist | Accelerometer | DT, SVM, k-NN, MLP, and NB | 96.87% |
[ | 30 | 6 | 17 | 1 | Waist | Accelerometer and gyroscope | SVM and RF | 99.22% |
[ | 31 | 6 | 17 | 1 | Waist | Accelerometer and gyroscope | SVM and RF | 95.33% |
[ | 30 | 6 | 5 | 1 | Waist | Accelerometer and gyroscope | Multiple HMMs, MOT, and k-NN | 92.6% |
[ | 4 | 4 | — | 14 | Upper body, leg, and hip | Inertial sensors and accelerometers | DL (NMF + SAE) | 99.9% |
[ | 10 | 12 | — | 3 | Chest, right wrist, and left ankle | Accelerometer, ECG, gyroscope and magnetometer | Hierarchical classification method HCM | 97.2% |
Ours | 18 | 12 | 14 | 7 | Right and left thighs, right and left shins, and right and left feet and an EMG on the thigh | Accelerometer, gyroscope and EMG | Neural networks, naive Bayes, random forest, (k-NN), SVM, and decision trees | 99.8% |
In Table
Review of the different performance metrics that were used with pertinent techniques in the literature.
Study | Accuracy (%) | Precision | Recall | F1-score | CV method | Sensitivity (%) | Specificity (%) |
---|---|---|---|---|---|---|---|
[ | 96 | — | — | — | Leave-one-out (LOOCV) | — | — |
[ | 78.2 | — | — | — | 10-fold CV | — | — |
[ | 93.8 | — | — | — | Leave-one-out (LOOCV) | — | — |
[ | 81 | — | — | — | Leave-one-out (LOOCV) | — | — |
[ | 99.8 | — | — | — | 5-fold CV | 100 | 100 |
[ | 97.4 | — | — | — | — | 95 | 99.7 |
[ | 71.6 | — | — | — | — | — | — |
[ | 99.13 | Avg. of all activities 98.86% | Avg. of all activities 98.77% | Avg. of all activities 98.95% | Leave-one-out (LOOCV) | — | — |
[ | 96.87 | 85.84% | — | — | 10-fold CV | 84.7 | 85.3 |
[ | 99.22 | Avg. of all activities 99.23% | Avg. of all activities 99.23% | Avg. of all activities 99.23% | 10-fold CV | — | — |
[ | 95.33 | Avg. of all activities 95.52% | Avg. of all activities 95.52% | Avg. of all activities 95.50% | 10-fold CV | — | — |
[ | 92.6 | — | — | — | — | — | — |
[ | 99.9 | 99.4% | 99.4% | 99.4% | Leave-one-out (LOOCV) | — | — |
[ | 97.2 | 97.2% | 97.2% | 97.2% | — | — | — |
Ours | 99.8 | 99.3% | 99.1% | 99.4% | 10-fold CV | 99.4 | 99.1 |
The Human Gait Database (HuGaDB) by Roman Chereshnev is used in this work [
Six wearable inertial sensors were placed on the left and right shins, thighs, and feet, for a total of six placements of accelerometers and gyroscopes. Electromyography sensors were placed on the vastus lateralis. The samples were collected from 3-axis accelerometers, 3-axis gyroscopes, and surface electromyography (sEMG) sensors, yielding a total of 38 signals: 36 signals from the inertial sensors and 2 signals from the sEMG sensors. In addition to the dataset being recent and well-documented, the choice of HuGaDB is motivated by the inertial nature of the sensors used to collect the signals. This enables us to study how the individual movements of the different parts of the two legs can help in predicting the human activity.
Our proposed pipeline starts with a preprocessing stage that involves signal normalization and segmentation. Due to the difference in units and ranges of the data collected from the three types of sensors, it was necessary to normalize it to zero mean and unit variance, as shown in the following equation: z = (x − μ)/σ, where x is a raw sample and μ and σ are the mean and the standard deviation of the signal, respectively.
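The normalization step above can be sketched as a per-channel z-score; this is a minimal illustration, with the understanding that the actual pipeline applies it to every sensor channel of the dataset:

```python
import numpy as np

def normalize(channel):
    """Zero-mean, unit-variance normalization: z = (x - mean) / std."""
    channel = np.asarray(channel, dtype=float)
    return (channel - channel.mean()) / channel.std()

z = normalize([2.0, 4.0, 6.0, 8.0])
print(z.mean(), z.std())  # ~0.0 and 1.0
```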
Following the literature, most of the activity classification methods use windowing techniques for dividing a time-series signal into smaller segments. In this work, we use time-based sliding windows with 50% data overlap [
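The segmentation step can be sketched as follows, assuming the window width of 100 samples and the 50% overlap used later in the pipeline:

```python
import numpy as np

def sliding_windows(signal, width=100, overlap=0.5):
    """Split a 1-D signal into fixed-width windows; with width=100 and
    overlap=0.5, consecutive windows share 50 samples."""
    step = int(width * (1 - overlap))
    return [signal[start:start + width]
            for start in range(0, len(signal) - width + 1, step)]

windows = sliding_windows(np.arange(1000))
print(len(windows), len(windows[0]))  # 19 100
```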
Accurate and efficient activity recognition requires the selection of the most relevant features and/or the removal of redundant information. The two steps of feature extraction and selection are meant to serve this purpose. In this research, we adopt the commonly used statistical and time domain-based features proposed in the literature for accurate human activity recognition (HAR) [
Definition of the features extracted in the proposed research.
Feature | Description |
---|---|
Standard deviation | Standard deviations of ( |
Standard deviation auto-correlation | Auto-correlation of the standard deviations of (
Standard deviation auto-covariance | Auto-covariance of the standard deviations of (
Variance | Variance of ( |
Mean | Mean of ( |
Mean auto-covariance | Auto-covariance of the mean values of (
Mean auto-correlation | Auto-correlation of the mean values of (
Minimum | Minimum value of ( |
Maximum | Maximum value of ( |
Skewness | Asymmetry of ( |
Kurtosis | Fourth central moment value divided by the variance square value of ( |
Root-mean squared | Square root of the mean square ( |
Mean crossing rate | Mean crossing rate of ( |
Jitter | Jitter of ( |
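A sketch of extracting several of the tabulated features from a single window is given below; the exact definitions of jitter, the mean crossing rate, and the auto-correlation/auto-covariance variants are not spelled out in the text, so the definitions used here are assumptions:

```python
import numpy as np

def window_features(w):
    """A subset of the statistical and time-domain features in the table.
    The definitions of jitter and the mean crossing rate are assumptions."""
    w = np.asarray(w, dtype=float)
    mu, sigma = w.mean(), w.std()
    return {
        "mean": mu,
        "std": sigma,
        "var": w.var(),
        "min": w.min(),
        "max": w.max(),
        # Third central moment over sigma^3
        "skewness": np.mean((w - mu) ** 3) / sigma ** 3,
        # Fourth central moment divided by the squared variance (as in the table)
        "kurtosis": np.mean((w - mu) ** 4) / sigma ** 4,
        "rms": np.sqrt(np.mean(w ** 2)),
        # Fraction of consecutive sample pairs that straddle the mean
        "mean_crossing_rate": np.mean((w[:-1] - mu) * (w[1:] - mu) < 0),
        # Assumed definition: mean absolute successive difference
        "jitter": np.mean(np.abs(np.diff(w))),
    }

feats = window_features(np.sin(np.linspace(0, 4 * np.pi, 100)))
print(len(feats))  # 10
```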
Feature selection is commonly used for dimensionality reduction. In this research, we aim to investigate the impact of sequential feature selection (SFS) algorithms on the recognition accuracy. Particularly, we adopt the following feature selection methods: (1) sequential backward selection, (2) sequential forward selection, (3) sequential backward floating selection (SBFS), and (4) sequential forward floating selection (SFFS). The notable merit of the SFS family of algorithms is their simple implementation. Basically, this group of greedy algorithms sequentially adds one feature at a time and retains that feature if it yields a better classification accuracy [
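As an illustration, scikit-learn ships a greedy sequential selector covering the forward and backward (non-floating) variants; the floating variants (SFFS/SBFS) are available in third-party packages such as mlxtend. A minimal sketch on a toy dataset:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Greedy forward selection: add one feature at a time and keep the
# addition that most improves cross-validated accuracy.
# direction="backward" gives sequential backward selection instead.
sfs = SequentialFeatureSelector(
    KNeighborsClassifier(n_neighbors=5),
    n_features_to_select=2,
    direction="forward",
    cv=5,
)
sfs.fit(X, y)
print(sfs.get_support())  # boolean mask over the four iris features
```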
In this work, we recognize activities in the HuGaDB dataset using a set of well-studied machine learning techniques, namely, multilayer perceptron (MLP), naive Bayes (NB), random forest (RF), k-nearest neighbors (k-NN), support vector machine (SVM), and decision tree (DT).
We elaborate below on the pipeline of the proposed system, which is composed of three stages. To justify each step, we show the impact it has on the ultimate recognition accuracies. Stage one aims to classify the activities from the accelerometer and gyroscope sensors in order to find the best classifier and the most revealing sensor placement. Stage two is meant to highlight the effect of using feature selection. Finally, the last stage investigates the fusion of the outputs of the three sensors, namely, the accelerometer, the gyroscope, and the electromyography sensor. Figure depicts the scheme of the proposed system.

The dataset used in our research is collected from the thigh, foot, and shin of both the left and the right legs. Accelerometer and gyroscope signals contain (x, y, and z) components. Each sensor provides twelve different signals that correspond to the different activities from eighteen subjects. Each signal is first normalized so that all sensors share the same range; then a fixed-length window of 100 samples with a 50% overlap is applied to it. Next, we extract 14 features from each window. The optimal number of features is then computed by applying the four types of sequential feature selection to retain the most important information; each signal requires a different number of features to achieve its highest accuracy. We divide the dataset into a 70–30 ratio for training and testing and apply the aforementioned six machine learning algorithms. Fusion is then performed between the accelerometer-and-gyroscope signals and the electromyography signal; the electromyography signal goes through the same steps above before being fused with the other sensors. The fusion between sensors is investigated according to the following three approaches: (1) accelerometer and electromyography signals, (2) gyroscope and electromyography signals, and (3) accelerometer, gyroscope, and electromyography signals.
Scheme of the proposed system adopted in this research.
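The overall flow described above (normalize, window, extract features, fuse by concatenating the per-sensor feature vectors, split 70–30, classify) can be sketched end to end; the arrays below are synthetic stand-ins for the per-window HuGaDB feature matrices, not the real data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the per-window feature matrices (HuGaDB itself
# is not loaded here): 500 windows, 14 features per sensor.
n = 500
y = rng.integers(0, 12, size=n)                  # 12 activity labels
acc = rng.normal(size=(n, 14)) + 0.1 * y[:, None]
gyro = rng.normal(size=(n, 14)) + 0.1 * y[:, None]
emg = rng.normal(size=(n, 14))

# Feature-level fusion: concatenate the per-sensor feature vectors,
# then split 70-30 for training and testing, as described above.
X = np.hstack([acc, gyro, emg])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=128, random_state=0)
clf.fit(X_train, y_train)
print(round(clf.score(X_test, y_test), 3))
```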
We dedicate this section to highlighting the results attained by applying the machine learning classifiers that were discussed in Section
Parameters of each of the adopted classifiers.
Classifiers | Parameters |
---|---|
Multilayer perceptron (MLP) | sklearn.neural_network.MLPClassifier(hidden_layer_sizes = (100,), activation = "relu", solver = "adam", alpha = 0.0001, batch_size = "auto", learning_rate = "constant", learning_rate_init = 0.001, power_t = 0.5, max_iter = 200, shuffle = True, random_state = None, tol = 0.001, verbose = False, warm_start = False, momentum = 0.9, nesterovs_momentum = True, early_stopping = False, validation_fraction = 0.1, beta_1 = 0.9, beta_2 = 0.999, epsilon = 1e-08) |
Decision tree (DT) | sklearn.tree.DecisionTreeClassifier(criterion = "gini", splitter = "best", max_depth = None, min_samples_split = 2, min_samples_leaf = 1, min_weight_fraction_leaf = 0.0, max_features = None, random_state = None, max_leaf_nodes = None, min_impurity_decrease = 0.0, min_impurity_split = None, class_weight = None, presort = "deprecated", ccp_alpha = 0.0) |
Random forest (RF) | sklearn.ensemble.RandomForestClassifier(n_estimators = 128, criterion = "gini", max_depth = None, min_samples_split = 2, min_samples_leaf = 1, min_weight_fraction_leaf = 0.0, max_features = "auto", max_leaf_nodes = None, min_impurity_decrease = 0.0, min_impurity_split = None, bootstrap = True, oob_score = False, n_jobs = None, random_state = None, verbose = 0, warm_start = False, class_weight = None, ccp_alpha = 0.0, max_samples = None) |
k-Nearest neighbors (k-NN) | sklearn.neighbors.KNeighborsClassifier(n_neighbors = 5, weights = "uniform", algorithm = "auto", leaf_size = 30, p = 2, metric = "minkowski", metric_params = None, n_jobs = None) |
Support vector machine (SVM) | sklearn.svm.SVC(C = 10, kernel = "linear", degree = 3, gamma = "auto", coef0 = 0.0, shrinking = True, probability = False, tol = 0.001, cache_size = 200, class_weight = None, verbose = False, max_iter = -1, decision_function_shape = "ovr", break_ties = False, random_state = None) |
Naive Bayes (NB) | sklearn.naive_bayes.GaussianNB(priors = None, var_smoothing = 1e-09) |
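The table above can be condensed into the following sketch, passing explicitly only the settings that differ from the scikit-learn defaults (the remaining arguments in the table are the library defaults spelled out):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Only the non-default settings from the parameter table are passed.
classifiers = {
    "MLP": MLPClassifier(tol=0.001),
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(n_estimators=128),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(C=10, kernel="linear", gamma="auto"),
    "NB": GaussianNB(),
}
print(len(classifiers))  # 6
```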
To evaluate the best sensor placement, we considered the accelerometer and gyroscope sensors only, because of their ability to recognize the activities accurately as mentioned in our previous work [
Boxplots of different classifiers’ accuracies for data from accelerometer signals (please see text for more details).
First, we investigated the classification accuracies of the accelerometer and gyroscope sensors on the left leg. We compared three different positions, namely, the foot (lf), the thigh (lt), and the shin (ls). The random forest (RF) consistently outperformed the other classifiers, while the remaining classifiers were noticeably less effective. The most notable misclassifications were the following:
- ID 11 (sitting in the car) was predicted as ID 6 (sitting) twice
- ID 7 (standing) was predicted as ID 8 (cycling)
- ID 6 (sitting) was predicted as ID 11 (sitting in the car)
Confusion matrix for the random forest classifier from the accelerometer and the gyroscope sensors placed on the left thigh [
The accuracy reached 98.6%. This result was obtained from both types of sensors (accelerometers and gyroscopes) and by setting the random forest's number of trees to 256 instead of 10 (the default number of trees in the Scikit-learn library).
This section presents a method for obtaining the optimal number of features. This is done by using cross-validation and the sequential feature selection family of algorithms to score different feature subsets, and then choosing the number of features that attains the highest accuracy. The curve in Figure
Gyroscope
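The subset-scoring procedure can be sketched as follows: for each candidate subset size k, run sequential selection and score the selected subset with cross-validation, then keep the k with the best mean accuracy (a toy dataset stands in for HuGaDB here):

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
est = KNeighborsClassifier(n_neighbors=5)

# Score feature subsets of increasing size and keep the size whose
# cross-validated accuracy is highest (the peak of the accuracy curve).
scores = {}
for k in range(1, 6):
    sfs = SequentialFeatureSelector(est, n_features_to_select=k, cv=5)
    Xk = sfs.fit_transform(X, y)
    scores[k] = cross_val_score(est, Xk, y, cv=5).mean()

best_k = max(scores, key=scores.get)
print(best_k, round(scores[best_k], 3))
```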
After selecting the optimal number of features for the best position (the thigh of the left leg), we compare the performance achieved using all proposed features with that obtained using the four types of sequential feature selection (backward, forward, backward floating, and forward floating). In this experiment, we included the classifiers that achieved the highest recognition accuracies in the previous experiments (support vector machine, random forest, and k-NN), as well as the one that attained the lowest accuracies (decision tree). The comparison highlights the best outputs we obtained in the previous sections:
- The best single axis
- The combination of the best axes
Table
Comparison between accuracies attained using single axis and triple axes before using feature selection.
Sensor type and position | DT | SVM | RF | k-NN | Number of features |
---|---|---|---|---|---|
A_lt_ | 88.10 | 89.40 | 90.70 | 79.10 | 14
G_lt_ | 85.80 | 83.90 | 86.80 | 66.90 | 14
A_lt_ | 90.00 | 89.40 | 94.70 | 71.50 | 28
A, G_lt_ | 95.10 | 95.40 | 96.80 | 84.80 | 112
As mentioned earlier [
Comparison between accuracies attained using single axis and triple axes after using feature selection.
Sensor type and position | DT | SVM | RF | k-NN | Number of features |
---|---|---|---|---|---|
A_lt_ | 89.80 | 91.50 | 91.67 | 82.00 | 7
G_lt_ | 88.10 | 85.20 | 88.70 | 87.80 | 8
A_lt_ | 91.60 | 91.20 | 96.90 | 75.00 | 15
A, G_lt_ | 96.40 | 97.00 | 98.40 | 86.30 | 37
To conclude this part of the results, we showed that using feature selection, higher recognition accuracies can be attained with an average 50% reduction in the total number of features as mentioned in [
A comparison between accelerometer, gyroscope, and electromyography sensor data for activity recognition has not received significant attention in the literature. Specifically, the use of EMG sensors placed at different positions of the body for daily activity recognition has not received much attention. Figure
Six classification algorithm accuracies when applied on the
Accelerometers measure the linear acceleration that is applied to a device on all three axes (x, y, and z).
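Besides the per-axis signals, the feature list later in the paper also includes axis-independent magnitude channels (e.g., Ac_lt_mag, G_lt_mag); a common definition, assumed here, is the Euclidean norm of the three axes:

```python
import numpy as np

def magnitude(x, y, z):
    """Axis-independent signal magnitude, sqrt(x^2 + y^2 + z^2),
    assumed for the *_mag entries (e.g., Ac_lt_mag, G_lt_mag)."""
    return np.sqrt(np.square(x) + np.square(y) + np.square(z))

m = magnitude(np.array([3.0]), np.array([4.0]), np.array([12.0]))
print(m)  # [13.]
```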
Based on the conclusion in the previous section, we add our feature selection technique to the pipeline, in order to observe the improvement in accuracy when redundant or noninformative features are eliminated. In Tables
Comparison between accuracies attained using single axis and triple axes for accelerometer and electromyography signals without using feature selection.
Sensor type and position | DT | SVM | RF | k-NN | Number of features |
---|---|---|---|---|---|
A_lt_ | 88.10 | 89.40 | 90.70 | 79.10 | 14
EMG | 76.90 | 78.90 | 79.20 | 66.20 | 14
A_lt_ | 87.45 | 85.19 | — | 67.17 | 28
A_lt_all | 92.03 | 93.60 | 95.00 | 83.90 | 56
A_lt_all + EMG | 93.65 | 92.87 | — | 84.00 | 70
Comparison between accuracies attained using single axis and triple axes for accelerometer and electromyography signals with sequential forward floating feature selection.
Sensor type and position | DT | SVM | RF | k-NN | Number of features |
---|---|---|---|---|---|
A_lt_ | 89.80 | 91.50 | 91.67 | 82.00 | 7
EMG | 78.00 | 80.30 | 84.30 | 70.10 | 11
A_lt_ | 88.91 | 87.65 | — | 70.22 | 16
A_lt_all | 94.20 | 95.10 | 96.20 | 85.10 | 23
A_lt_all + EMG | 95.65 | 95.01 | — | 85.83 | 30
A gyroscope measures the rate of rotation of a device, in rad/s, around each of the three axes (x, y, and z).
Comparison between accuracies attained using single axis and triple axes for gyroscope and electromyography signals without using feature selection.
Sensor type and position | DT | SVM | RF | k-NN | Number of features |
---|---|---|---|---|---|
G_lt_ | 85.80 | 83.90 | 86.80 | 66.90 | 14
EMG | 76.90 | 78.90 | 79.20 | 66.20 | 14
G_lt_ | 88.60 | 90.41 | — | 72.96 | 28
G_lt_all | 87.90 | 88.60 | 92.50 | 81.20 | 56
G_lt_all + EMG | 90.27 | 90.83 | — | 73.10 | 70
Comparison between accuracies attained using single axis and triple axes for gyroscope and electromyography signals with sequential forward floating feature selection.
Sensor type and position | DT | SVM | RF | k-NN | Number of features |
---|---|---|---|---|---|
G_lt_ | 88.10 | 85.20 | 88.70 | 87.80 | 8
EMG | 78.00 | 80.30 | 84.30 | 70.10 | 11
G_lt_ | 90.11 | 91.78 | — | 75.23 | 18
G_lt_all | 89.00 | 90.10 | 94.40 | 82.90 | 26
G_lt_all + EMG | 93.13 | 92.83 | — | 76.32 | 32
Finally, we study the fusion of the data gathered from the three sensor types, in order to investigate how the different features from these heterogeneous signals interact. The three major comparisons highlighted in our discussion are the performance corresponding to data from a single axis versus all three axes, the number of adopted features, and the optimal sensor fusion output. We present these comparisons in Tables
Comparison between single axis vs triple axes for accelerometer, gyroscope, and electromyography signals before using feature selection.
Sensor type and position | DT | SVM | RF | k-NN | No. of features |
---|---|---|---|---|---|
A_lt_ | 88.10 | 89.40 | 90.70 | 79.10 | 14
G_lt_ | 85.80 | 83.90 | 86.80 | 66.90 | 14
EMG | 76.90 | 78.90 | 79.20 | 66.20 | 14
A_lt_ | 90.00 | 89.40 | 94.70 | 71.50 | 28
A_lt_ | 91.40 | 90.70 | — | 72.80 | 42
A_lt_all | 92.03 | 93.60 | 95.00 | 83.90 | 56
G_lt_all | 87.90 | 88.60 | 92.50 | 81.20 | 56
A, G_lt_all | 95.10 | 95.40 | 96.80 | 84.80 | 112
A, G_lt_all + EMG | 96.80 | 96.70 | 98.50 | 87.30 | 126
Comparison between accuracies attained using single axis and triple axes for accelerometer, gyroscope, and electromyography signals with sequential forward floating feature selection.
Sensor type and position | DT | SVM | RF | k-NN | No. of features |
---|---|---|---|---|---|
A_lt_ | 89.80 | 91.50 | 91.67 | 82.00 | 7
G_lt_ | 88.10 | 85.20 | 88.70 | 87.80 | 8
EMG | 78.00 | 80.30 | 84.30 | 70.10 | 11
A_lt_ | 91.60 | 91.20 | 96.90 | 75.00 | 15
A_lt_ | 93.10 | 92.40 | — | 78.40 | 24
A_lt_all | 94.20 | 95.10 | 96.20 | 85.10 | 23
G_lt_all | 89.00 | 90.10 | 94.40 | 82.90 | 26
A, G_lt_all | 96.40 | 97.00 | 98.40 | 86.30 | 37
A, G_lt_all + EMG | 97.30 | 98.00 | 99.80 | 88.10 | 45
Comparison between 10-fold CV and LOPO CV for accelerometer, gyroscope, and EMG sensors before applying feature selection.
Classifier/validation protocol | 10-fold CV (%) | LOPO CV (%) |
---|---|---|
Random forest | 98.5 | 98.9 |
SVM | 96.7 | 96.5 |
Decision tree | 96.8 | 96.3 |
KNN | 87.3 | 86.8 |
Comparison between 10-fold CV and LOPO CV for accelerometer, gyroscope, and EMG sensors after applying feature selection.
Classifier/validation protocol | 10-fold CV (%) | LOPO CV (%) |
---|---|---|
Random forest | 99.8 | 99.4 |
SVM | 98 | 98.2 |
Decision tree | 97.3 | 97.1 |
KNN | 88.1 | 88.5 |
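The leave-one-participant-out (LOPO) protocol in the two tables above can be reproduced with scikit-learn's LeaveOneGroupOut, where the group labels are the subject IDs (synthetic data stands in for HuGaDB):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in: 360 windows from 18 subjects, 42 fused features.
X = rng.normal(size=(360, 42))
y = rng.integers(0, 12, size=360)
subjects = np.repeat(np.arange(18), 20)   # 20 windows per subject

# Leave-one-participant-out: each fold holds out every window of one
# subject, so training and testing never share a participant.
scores = cross_val_score(RandomForestClassifier(n_estimators=50, random_state=0),
                         X, y, cv=LeaveOneGroupOut(), groups=subjects)
print(len(scores))  # 18 (one score per held-out subject)
```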
The results presented in this section are summarized in Figures . The configurations considered are the following:
- The best single axis and position that yield the highest accuracy, for the accelerometer and the gyroscope
- Fusion between the electromyography sensor and the best single axes of the accelerometer and the gyroscope
- Accelerometer and gyroscope fusion with all parameters (x, y, and z)
- Fusion between the electromyography sensor and all accelerometer and gyroscope parameters
Difference in the number of features before and after using feature selection and sensor fusion.
Difference in accuracies attained with and without feature selection as shown in Figure
Significant results are obtained as shown in Figure
List of input sources and the corresponding features acquired from them.
Input source | Features |
---|---|
A_lt_ | Jitter, mean, standard deviation, maximum, standard deviation auto covariance, standard deviation auto-correlation, and root-mean square |
A_lt_ | Jitter, mean crossing rate, variance, minimum, maximum, standard deviation auto-correlation, kurtosis, and root-mean square |
A_lt_ | Jitter, mean, standard deviation, minimum, variance, standard deviation auto covariance, skewness, and kurtosis |
Ac_lt_mag | Mean and variance |
G_lt_ | Mean, jitter, standard deviation auto-correlation, standard deviation, minimum, variance, mean auto-correlation, root-mean square, and skewness
G_lt_ | Mean crossing rate, mean, mean auto covariance, and root-mean square |
G_lt_mag | Mean crossing rate, standard deviation, standard deviation auto covariance, and root-mean square |
EMG_l | Mean crossing rate, minimum, and kurtosis |
Towards accurate recognition of daily human activities, this paper proposed a novel approach that is based on sensor fusion and feature selection. Rather than performing feature selection and sensor fusion consecutively, our pipeline learns a model that selects indicative features by jointly considering heterogeneous signals. These signals are acquired from electromyography and wearable inertial sensors on the thigh, foot, and shin. We believe that this approach enables constructive interaction between features that would otherwise have been dropped during feature selection. We attained a mean recognition accuracy of 99.8% on HuGaDB (the highest on this dataset), which provides signals from different sensor types (gyroscopes, accelerometers, and sEMG sensors), placements, and positions. Using off-the-shelf feature selection methods and time-based and statistical features, the presented joint fusion-selection approach successfully realized the potential of sEMG sensors and incorporated them effectively to the benefit of the system's performance. Moreover, towards the development of less obtrusive systems, we highlighted the potential of the left thigh as a key sensor placement for attaining high recognition accuracies: a mean recognition accuracy of 98.4% was attained using only one inertial sensor on that single placement. Through extensive simulations and comparative analysis, we justified the impact of every stage in the proposed pipeline and showed the influence of various system parameters on the recognition accuracy. This research is envisaged to facilitate building robust systems that are tailored to specific scenarios and real-life applications.
Previously reported wearable inertial sensor and EMG sensor data (HuGaDB) were used to support this study and are available at DOI: 10.1007/978-3-319-73013-4_12. These prior studies (and datasets) are cited at relevant places within the text as references [
The authors declare no conflicts of interest.
A. Al-Kabbany and H. Shaban carried out conceptualization, provided resources, and were responsible for supervision and project administration. A. Badawi, A. Al-Kabbany, and H. Shaban performed methodology, validation, formal analysis, investigation, reviewing and editing, and visualization. A. Badawi provided software and performed data curation. A. Badawi and A. Al-Kabbany wrote the original draft.