Human activity recognition via triaxial accelerometers can provide valuable information for evaluating functional abilities. In this paper, we present an accelerometer-based approach to human activity recognition. The proposed method uses a hierarchical scheme in which the recognition of ten activity classes is divided into five distinct classification problems. Each classifier combines the Least Squares Support Vector Machine (LS-SVM) and the Naive Bayes (NB) algorithm to distinguish different activity classes, using the mean, variance, and entropy of the magnitude and angle of the triaxial accelerometer signal as features. The proposed method recognized the ten activities with an average accuracy of 95.6% using only a single triaxial accelerometer.
1. Introduction
Recently, activity recognition has become an emerging field of research and one of the challenges for pervasive computing. A typical application for activity recognition is in health care. Activity recognition is also an important research issue in building a pervasive and smart environment to provide personalized support.
Computer vision-based techniques and body-fixed accelerometers are the main methodologies used for activity recognition. Computer vision-based techniques must be conducted in a well-controlled environment and are subject to its limitations; they may fail badly in environments with clutter and variable lighting [1–3]. Body-fixed accelerometers offer a practical and relatively low-cost way to measure human motion.
The existing literature demonstrates many studies on activity recognition that use accelerometers. However, there are three primary challenges in these studies.
(1) Walking, running, sitting, and other activities are driven by the large muscles of the body. The glutes are the primary muscles driving lower-body movement because of their natural strength and leverage advantage on the legs; lower-body movement includes activities such as running, jumping, and walking. Sleeping, sitting, standing, walking, running, and jumping should therefore all be recognized as typical physical activities. The activity recognition algorithm in Khan et al. [4] did not consider jumping. Running and jumping were excluded from the experiments of Trabelsi et al. [5], Tang and Sazonov [6], Lee et al. [7], and Deng et al. [8]. Gupta and Dallas [9] did not report how to recognize standing and sleeping, and Tao et al. [10] did not describe tests for recognizing sitting and sleeping. Alshurafa et al. [11] studied only walking and running recognition. These studies were incomplete in recognizing typical physical activities [12].
(2) Some studies [6, 9, 13, 14] required a combination of multiple sensors to increase recognition performance. However, a user is unlikely to wear a complex arrangement of sensors at all times, and people may not feel comfortable wearing multiple sensors. Moreover, a multisensor system has no great advantage over a single-sensor system in recognition accuracy if the single-sensor system uses a sufficiently high sampling rate, suitable features, a sophisticated classifier, and the sensor position that performs best for recognizing activities. A single sensor mounted at the right position can also achieve good recognition performance; for typical physical activities, multiple sensors do not significantly improve it [15–17].
(3) A series of studies [18–20] has addressed recognizing so-called ADLs (activities of daily living), which is not physical-activity recognition. “Activities of daily living” is a healthcare term referring to daily self-care activities, such as cooking and hair drying, within an individual’s place of residence or in outdoor environments. Physical activity includes any body movement that works the muscles and requires more energy than resting; it simply means a movement of the body that uses energy, such as running or walking [21–23]. Physical-activity recognition is the topic of this paper.
Many researchers have used dedicated devices to collect raw accelerometer data for a set of movements and have applied various activity recognition algorithms, including Artificial Neural Networks (ANN) [4, 7, 13], k-Nearest Neighbor (KNN) [8, 10, 11, 19], Support Vector Machines (SVM) [6, 14, 18], and Hidden Markov Models (HMM) [5, 20]. In our study, we built the activity recognition algorithm on the SVM for three reasons.
(1) SVM and ANN have been broadly used in human activity recognition, although neither produces a set of rules understandable by humans [24]. As two different algorithms, SVM and ANN share the concept of using a linear learning model for pattern recognition and differ mainly in how nonlinear data are classified. SVMs have been demonstrated to achieve better classification accuracy than neural classifiers in many experiments. The generalization performance of a neural classifier depends on its structure size, and selecting an appropriate structure relies on cross-validation [25]. The performance of an SVM depends on the choice of kernel function type and parameters, but this dependence is weaker [26].
(2) KNN does not perform well as the dataset grows and is suitable only for small datasets. The SVM is a more sophisticated classifier; here, we implement it with a linear kernel function. Accuracy and the other performance criteria do not depend significantly on the dataset size; among all factors, they depend mainly on the number of training cycles, which makes a well-trained SVM a strong classifier for activity recognition [27].
(3) A continuous HMM approach models activities with sequential data, choosing the event-sequence length that gives the best predictions. An HMM models the sequential information in multiaspect target signatures. The parameter-learning task in HMMs is to determine the best set of state transition and emission probabilities given an output sequence or a set of such sequences, usually by deriving the maximum likelihood estimate of the HMM parameters for that set. Typical physical activities are nonsequential, so it is not easy to use an HMM to recognize a single physical activity [28].
The traditional SVM [29] is formulated for binary nonlinear classification problems, and how to effectively extend the SVM to multiclassification remains a hot topic. The Least Squares Support Vector Machine (LS-SVM) is an advanced version of the standard SVM: it defines a different cost function from the classical SVM and changes the inequality constraints to equality constraints. Relatively few studies have used the LS-SVM to recognize activities with a triaxial accelerometer. Nasiri et al. [30] proposed the Energy-Based Least Squares Twin Support Vector Machine (ELS-TSVM), an extended LS-SVM classifier that performs classification using two nonparallel hyperplanes instead of the single hyperplane used in the conventional SVM; however, ELS-TSVM was applied to activity recognition using computer vision rather than a triaxial accelerometer. Altun et al. [31] compared the performance of the least squares method (LSM) and the SVM but did not include the LS-SVM. For multiclassification, the LS-SVM is decomposed into multiple binary classification tasks; it reduces the computational complexity by using a small number of classifiers and effectively eliminates the unclassifiable regions that can degrade classification performance [32–34].
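For completeness, the equality-constrained LS-SVM formulation referred to above can be written out explicitly (this is the standard form from the LS-SVM literature, not an equation given in the original):

```latex
% Primal LS-SVM problem: squared-error cost with equality constraints
\min_{w,\,b,\,e}\; \frac{1}{2}\,w^{\top}w + \frac{\gamma}{2}\sum_{k=1}^{N} e_k^{2}
\quad \text{s.t.} \quad y_k\!\left[w^{\top}\varphi(x_k) + b\right] = 1 - e_k,
\qquad k = 1,\dots,N.

% The KKT conditions reduce training to one linear system instead of a QP:
\begin{bmatrix} 0 & y^{\top} \\ y & \Omega + \gamma^{-1} I \end{bmatrix}
\begin{bmatrix} b \\ \alpha \end{bmatrix}
=
\begin{bmatrix} 0 \\ 1_{N} \end{bmatrix},
\qquad \Omega_{kl} = y_k\, y_l\, K(x_k, x_l),

% with the resulting classifier
f(x) = \operatorname{sign}\!\left(\sum_{k=1}^{N} \alpha_k\, y_k\, K(x, x_k) + b\right).
```

Replacing the inequality constraints by equalities is what turns the quadratic program of the classical SVM into a single linear solve, which underlies the low running times reported in Section 5.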
In this paper, we aimed to overcome the limitations of existing physical-activity recognition systems by developing a new method that recognizes a set of typical physical activities using only a single triaxial accelerometer. The method consists of three parts: six features for activity recognition, a hierarchical recognition scheme, and an activity estimator based on the LS-SVM and NB algorithms. It recognizes ten physical activities with a high recognition rate.
The remainder of the paper is organized as follows. Section 2 describes the experimental dataset and the hierarchical classification framework. Section 3 covers feature extraction, which improves classification accuracy by using feature data instead of raw sensor data. Section 4 presents an activity estimator for multiclassification that estimates the human activity from the feature data. The experimental results and conclusion are presented in Sections 5 and 6, respectively.
2. Dataset and Classification Framework
2.1. Experimental Dataset
For this work, the dataset used was the University of Southern California Human Activity Dataset (USC-HAD). The USC-HAD was specifically designed to include the most basic and common human activities in daily life, collected from a large and diverse group of human subjects, and its activities are applicable to many scenarios. The activity data were captured using a high-performance inertial sensing device, MotionNode [35]. MotionNode integrates a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer; the measurement ranges of each accelerometer and gyroscope axis are ±6 g and ±500 dps, respectively. MotionNode was firmly attached to the participant’s right front hip. The sampling rate of this dataset for both the accelerometer and the gyroscope was 100 Hz. The dataset includes 10 activities: walking (forward, left, and right), walking (upstairs, downstairs), jumping, running, standing, sitting, and sleeping [36–38].
The main goal of this paper was to identify ten activities, which were divided into four groups: 2D walking (walking forward, left, and right), 3D walking (walking upstairs, downstairs), plane motion (jumping, running), and static activities (standing, sitting, and sleeping). The division was performed using a single triaxial accelerometer. The activities are listed in Table 1.
Table 1: Classified states and activities recognized in this study.

| State | Activities | Act_Label |
|---|---|---|
| Walking 2D | Walking forward | WF |
| | Walking left | WL |
| | Walking right | WR |
| Walking 3D | Walking upstairs | WU |
| | Walking downstairs | WD |
| Plane motion | Jumping | JU |
| | Running | RU |
| Static activity | Standing | ST |
| | Sitting | SI |
| | Sleeping | SL |
2.2. Hierarchical Classification Framework
To achieve higher scalability than a single-layer framework, a multilayer classification framework is presented. In the first layer, the walking-related activities (walking forward, walking left, walking right, walking upstairs, and walking downstairs), jumping, running, and the static activities are differentiated from one another: based on the selected features, the activities are classified into two subsets (walking and static activities) and two single activities (jumping and running). In the second layer, the walking-related subset is split into plane (2D) motion and 3D motion, and the static-activity subset is classified into standing, sitting, and sleeping. In the third layer, the detailed 2D and 3D walking activities are recognized [39, 40].
Figure 1 illustrates the structure of the hierarchical classification framework. The yellow boxes represent activity sets, the green boxes represent the ten types of activities to recognize, and the red boxes represent the classifiers. The problem of recognizing ten activity classes is thus broken down into n distinct classification problems. A preliminary investigation of the selection of n is reported in Table 2. The four-class classifier was the best choice for this hierarchical classification framework because of its small number of classifiers and the high average accuracy rate of each classifier, so it was used in this paper.
Table 2: A preliminary investigation of n selection.

| n-class classifier | Number of classifiers | Average accuracy rate of each classifier (%) |
|---|---|---|
| 2 | 9 | ≥90 |
| 3 | 6 | ≥90 |
| 4 | 5 | ≥90 |
| 5 | 3 | ≈80 |
Figure 1: Structure of the hierarchical classification framework (panels: two-class, three-class, four-class, and five-class classifier variants).
In the hierarchical classification framework of the four-class classifier, classifier 1, at the top layer, distinguishes walking-related activities, jumping, running, and static activities. Walking-related activities include walking forward, walking left, walking right, walking upstairs, and walking downstairs. Static activities include standing, sitting, and sleeping [37]. Classifier 2, at the second layer, distinguishes plane motions and 3D motions. Classifier 3 recognizes activities from plane motion, and classifier 4 distinguishes walking upstairs and downstairs from 3D motions. Finally, classifier 5 focuses on recognizing different static activities.
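The routing logic of the four-class hierarchy described above can be sketched as follows. This is a minimal Python sketch; the classifier callables and the coarse label strings are hypothetical stand-ins for the trained LS-SVM classifiers 1–5, not code from the paper:

```python
# Minimal sketch of the three-layer dispatch through classifiers 1-5.
# Each clf* is assumed to be a trained callable mapping one feature
# vector to a label; the label strings here are illustrative.

def recognize(features, clf1, clf2, clf3, clf4, clf5):
    """Route one feature vector through the hierarchical scheme."""
    coarse = clf1(features)            # layer 1: walking / running / jumping / static
    if coarse == "running":
        return "RU"
    if coarse == "jumping":
        return "JU"
    if coarse == "static":
        return clf5(features)          # layer 2: ST / SI / SL
    # coarse == "walking": layer 2 splits 2D vs. 3D walking
    if clf2(features) == "2D":
        return clf3(features)          # layer 3: WF / WL / WR
    return clf4(features)              # layer 3: WU / WD
```

Classifier 1's coarse decision determines which (if any) lower-layer classifier runs, so at most three classifier evaluations are needed per sample.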
3. Feature Design and Selection
Recent related work on feature selection has used a filter-based approach with Relief-F and a wrapper-based approach with a variant of sequential forward floating search. Because different features lie on different scales, all features were normalized to obtain the best results for the KNN and Naive Bayes classifiers used for error estimation and to give equal weight to all potential features [1–6, 8–10, 13, 18, 24, 29].
In our approach, following the elementary mechanics of walking, running, jumping, and sleeping, we used the means, variances, and entropies of the magnitudes and angles produced by the triaxial acceleration vector as activity features, for the following reasons. First, according to [41–43], the muscles produce different forces when people walk, run, jump, and sleep; normally, the forces increase in the order of sleeping, walking, running, and jumping, so by Newton’s second law the resultant accelerations of these activities increase in the same order. Second, as in [44], a model of persistent 2D random walks can be represented by drawing turning angles. Third, Shannon entropy in the time domain can measure the uncertainty of the acceleration signal and describe its information-related properties for an accurate representation of a given acceleration signal. The detailed features are described below.
The triaxial acceleration vector $\vec{A}(t)$ is
$$\vec{A}(t) = x_a(t)\,\vec{e}_x + y_a(t)\,\vec{e}_y + z_a(t)\,\vec{e}_z, \quad t = 1, \dots, n, \tag{1}$$
where $x_a(t)$, $y_a(t)$, and $z_a(t)$ represent the $t$th acceleration samples of the $x$, $y$, and $z$ axes. This vector is independent of the orientation of the sensing device and measures the instantaneous intensity of human movement at index $t$.
We computed the mean, variance, and entropy of the magnitude $\lVert\vec{A}(t)\rVert$ and of the angle $\theta(t)$ over each window of length $T$ and used them as six features: $M_{\mathrm{mag}}$, $V_{\mathrm{mag}}$, $E_{\mathrm{mag}}$, $M_{\mathrm{ang}}$, $V_{\mathrm{ang}}$, and $E_{\mathrm{ang}}$, where $\theta(t)$ is the angle between the vectors $\vec{A}(t-1)$ and $\vec{A}(t)$. Let $i = 1, 2, \dots, n/T$; then
$$
\begin{aligned}
M_{\mathrm{mag}} &= \left(M_{\mathrm{mag}}^1, M_{\mathrm{mag}}^2, \dots\right), & M_{\mathrm{mag}}^i &= \frac{1}{T} \sum_{t=(i-1)T+1}^{iT} \left\lVert\vec{A}(t)\right\rVert, \\
V_{\mathrm{mag}} &= \left(V_{\mathrm{mag}}^1, V_{\mathrm{mag}}^2, \dots\right), & V_{\mathrm{mag}}^i &= \frac{1}{T} \sum_{t=(i-1)T+1}^{iT} \left(\left\lVert\vec{A}(t)\right\rVert - M_{\mathrm{mag}}^i\right)^2, \\
E_{\mathrm{mag}} &= \left(E_{\mathrm{mag}}^1, E_{\mathrm{mag}}^2, \dots\right), & E_{\mathrm{mag}}^i &= -\sum_{t=(i-1)T+1}^{iT} \left\lVert\vec{A}(t)\right\rVert^2 \log_2 \left\lVert\vec{A}(t)\right\rVert^2, \\
M_{\mathrm{ang}} &= \left(M_{\mathrm{ang}}^1, M_{\mathrm{ang}}^2, \dots\right), & M_{\mathrm{ang}}^i &= \frac{1}{T-1} \sum_{t=(i-1)T+2}^{iT} \theta(t), \\
V_{\mathrm{ang}} &= \left(V_{\mathrm{ang}}^1, V_{\mathrm{ang}}^2, \dots\right), & V_{\mathrm{ang}}^i &= \frac{1}{T-1} \sum_{t=(i-1)T+2}^{iT} \left(\theta(t) - M_{\mathrm{ang}}^i\right)^2, \\
E_{\mathrm{ang}} &= \left(E_{\mathrm{ang}}^1, E_{\mathrm{ang}}^2, \dots\right), & E_{\mathrm{ang}}^i &= -\sum_{t=(i-1)T+2}^{iT} \theta(t)^2 \log_2 \theta(t)^2,
\end{aligned} \tag{2}
$$
where $\theta(t)$ satisfies $\vec{A}(t-1)\cdot\vec{A}(t) = \lVert\vec{A}(t-1)\rVert\,\lVert\vec{A}(t)\rVert\cos\theta(t)$.
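The six features of Eq. (2) can be computed directly from windowed samples. The sketch below is a minimal NumPy transcription; the function and variable names are ours, and a small epsilon guards the logarithms, which the paper does not specify:

```python
import numpy as np

def window_features(A, T):
    """Six features per non-overlapping window of length T, following Eq. (2).
    A: (n, 3) array of triaxial acceleration samples."""
    n = A.shape[0]
    mag = np.linalg.norm(A, axis=1)                        # |A(t)|
    # turning angle theta(t) between consecutive acceleration vectors
    dots = np.sum(A[:-1] * A[1:], axis=1)
    denom = np.maximum(mag[:-1] * mag[1:], 1e-12)
    theta = np.arccos(np.clip(dots / denom, -1.0, 1.0))    # length n-1
    feats = []
    for i in range(n // T):
        m = mag[i * T:(i + 1) * T]                         # T magnitudes
        a = theta[i * T:(i + 1) * T - 1]                   # T-1 angles per window
        m2, a2 = m ** 2, a ** 2
        feats.append([
            m.mean(),                                      # Mmag
            m.var(),                                       # Vmag
            -np.sum(m2 * np.log2(np.maximum(m2, 1e-12))),  # Emag
            a.mean(),                                      # Mang
            a.var(),                                       # Vang
            -np.sum(a2 * np.log2(np.maximum(a2, 1e-12))),  # Eang
        ])
    return np.array(feats)
```

For the USC-HAD sampling rate of 100 Hz, T = 50 (the value used in Figure 2) corresponds to half-second windows.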
To explore the performance of and correlation among these six features, a series of scatter plots in a 2D feature space is shown in Figure 2, where the horizontal and vertical axes represent two different features and points of different colors represent different activities. In Figure 2(a), the relationship between Mmag and Vmag is shown, and the running, jumping, walking, and static activities form clusters. In Figure 2(b), the clear separation between 2D walking (forward, left, and right) and 3D walking (upstairs and downstairs) implies that Mang is a usable feature. Figure 2(c) illustrates that the Emag and Mmag features partition the triaxial acceleration samples of walking forward, walking left, and walking right into three isolated clusters, each containing samples from roughly one activity class. Figure 2(d) demonstrates the discriminative power of the Eang and Mang features for differentiating walking upstairs from walking downstairs. Figure 2(e) shows that the triaxial acceleration signal can be classified into standing, sitting, and sleeping based on the Emag and Mmag features.
Figure 2: Scatter plots in the 2D feature space (T=50). Panels: (a) Mmag versus Vmag; (b) Mang versus Vang; (c) Emag versus Mmag; (d) Eang versus Mang; (e) Emag versus Mmag.
In this study, we used Mmag, Vmag, Emag, Mang, Vang, and Eang as the best features for the classifiers in each layer [45].
4. Activity Estimation for Multiclassification
We presented an activity estimator for multiclassification to estimate the human activity from the feature data. Each activity estimator for the multiclassification included one LS-SVM classifier and a maximum Act_Label frequency estimator (Figure 3).
Activity estimator for multiclassification.
We used the LS-SVM [34] method to classify the feature data. After loading the training data into Matlab, we built an activity recognition model from the data; once the model parameters were computed, we estimated the activity by feeding in test feature data [46]. The function trainlssvm() was used to train an LS-SVM for classification on the feature data, and the function simlssvm() was used to evaluate the LS-SVM on test feature data.
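Outside Matlab, the same training step reduces to one linear solve. The sketch below is a minimal NumPy analogue of what trainlssvm()/simlssvm() compute for a binary problem with the linear kernel used in this paper; the function names, the γ value, and any data are our illustrative choices, not from the paper:

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0):
    """Solve the LS-SVM dual linear system (linear kernel, labels in {-1,+1}).
    Returns the dual coefficients alpha and the bias b."""
    K = X @ X.T                          # linear kernel, as used in the paper
    Omega = np.outer(y, y) * K
    n = len(y)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = y                         # bordered system from the KKT conditions
    M[1:, 0] = y
    M[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(M, rhs)
    return sol[1:], sol[0]               # alpha, b

def lssvm_predict(Xtest, X, y, alpha, b):
    """simlssvm() analogue: sign of the decision function."""
    return np.sign((Xtest @ X.T) @ (alpha * y) + b)
```

Multiclassification is then obtained by decomposing the problem into binary tasks, as described in Section 1; training cost is dominated by a single (n+1)×(n+1) linear solve per binary classifier.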
Because $M_{\mathrm{mag}}$, $V_{\mathrm{mag}}$, $E_{\mathrm{mag}}$, $M_{\mathrm{ang}}$, $V_{\mathrm{ang}}$, and $E_{\mathrm{ang}}$ each have $n/T$ elements, the LS-SVM multiclassifier outputs an activity set of $n/T$ Act_Label elements. This set may contain different Act_Labels, so we must estimate the maximum-likelihood Act_Label in the set. We used the Naive Bayes algorithm to compute the likelihood of every Act_Label and took the human activity to be the one with maximum likelihood, computed as follows:
$$
\begin{aligned}
\text{Activity} &= \left(\dots, \mathrm{act}_i, \dots\right) = \mathrm{LS\_SVM}\left(M_{\mathrm{mag}}, V_{\mathrm{mag}}, E_{\mathrm{mag}}, M_{\mathrm{ang}}, V_{\mathrm{ang}}, E_{\mathrm{ang}}\right), \\
p\left(\mathrm{Act\_Label}_j \mid \mathrm{classifier}_k\right) &= \mathrm{N\_Bayes}\left(\text{Activity} \mid \mathrm{classifier}_k\right), \quad j = 1, \dots, 10,\; k = 1, \dots, 5, \\
\mathrm{result}_{\mathrm{rec}} &= \max_{\mathrm{act} \in \text{Activity}} p\left(\mathrm{act} = \mathrm{Act\_Label}_j \mid \mathrm{classifier}_k\right).
\end{aligned} \tag{3}
$$
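With equal priors over labels, the Naive Bayes step in Eq. (3) amounts to picking the most frequent Act_Label among the n/T window predictions. A minimal sketch (the function name is ours):

```python
from collections import Counter

def estimate_activity(window_labels):
    """Pick the Act_Label with maximum likelihood over the n/T window
    predictions; with equal priors this is the most frequent label.
    Returns the label and its empirical likelihood."""
    counts = Counter(window_labels)
    label, freq = counts.most_common(1)[0]
    return label, freq / len(window_labels)
```

For example, if four window predictions are WF, WF, WL, WF, the estimator returns WF with likelihood 0.75.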
Figure 4 shows the working process of the activity estimator, which includes a training stage and a testing stage (online activity recognition). In the training stage, the labeled triaxial acceleration data were normalized, the statistical features were extracted from the synthesized acceleration data, and the multiclassification estimator was used to build the classification model. In the testing stage, unlabeled raw triaxial accelerometer data were processed with the same method as in the training stage, the synthesized data were classified using the multiclassification estimator, and the recognized result was obtained [47, 48].
Activity estimator working process.
5. Experiment
The activity recognition dataset was the USC Human Activity Dataset, which includes ten activities with data collected from 14 subjects. To capture day-to-day activity variations, each subject was asked to perform 5 trials of each activity on different days at various indoor and outdoor locations. Although the duration of each trial varies across activities, it was sufficiently long to capture all information about each performed activity [37]. In this section, we evaluate the performance of the five activity classifiers in the recognition scheme. Table 3 shows the results: each classifier achieved over 95% accuracy, which is acceptable [24].
Table 3: Activity classifier accuracy test. Each classifier's per-class recognition accuracy rates (%) are listed in the order of the classes it distinguishes.

| Classifier | Classes distinguished | Recognition accuracy rate (%) | Classifier average accuracy (%) |
|---|---|---|---|
| Classifier 1 | walking / running / jumping / static | 98.3 / 97.1 / 97.1 / 100 | 98.2 |
| Classifier 2 | 2D walking / 3D walking | 98.1 / 99.1 | 98.6 |
| Classifier 3 | WF / WL / WR | 98.6 / 97.1 / 95.7 | 97.1 |
| Classifier 4 | WU / WD | 97.1 / 97.1 | 97.1 |
| Classifier 5 | ST / SI / SL | 98.6 / 97.1 / 98.6 | 98.1 |
The results of these cross-validation folds are summarized in Table 4. The average recognition accuracy of 95.6% indicates that the proposed human activity recognition scheme achieves high recognition rates for a specific subject. Because 2D and 3D walking are similar, the recognition accuracies of the five walking activities are relatively low. We will attempt to obtain higher recognition accuracy with an adequate amount of training data in future research.
Table 4: Confusion matrix for average recognition accuracy for all activities (values in %; overall accuracy rate: 95.6%).

| Input \ Output | WF | WL | WR | WU | WD | RU | JU | ST | SI | SL |
|---|---|---|---|---|---|---|---|---|---|---|
| WF | 95.7 | 1.2 | 3.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| WL | 2.8 | 92.9 | 4.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| WR | 3.4 | 5.2 | 91.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| WU | 0 | 0 | 0 | 92.9 | 7.1 | 0 | 0 | 0 | 0 | 0 |
| WD | 0 | 0 | 0 | 5.7 | 94.3 | 0 | 0 | 0 | 0 | 0 |
| RU | 0.8 | 0 | 0 | 0 | 0 | 97.1 | 2.1 | 0 | 0 | 0 |
| JU | 0 | 0 | 0 | 0 | 0 | 2.9 | 97.1 | 0 | 0 | 0 |
| ST | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 98.6 | 0.8 | 0.6 |
| SI | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2.1 | 97.1 | 0.8 |
| SL | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4 | 1.0 | 98.6 |
We compared the accuracy rates and running times of common multiclassification methods. All algorithms were run on a computer with an Intel Core i7-2670QM 2.2 GHz CPU and 8 GB of RAM under Matlab 2013a. The LS-SVM performed notably well in the tests: the average running time of the hierarchical classification framework with the LS-SVM was 0.021 seconds, less than that of the ANN (Artificial Neural Network), DT (Decision Tree), and KNN (k-Nearest Neighbor) algorithms, which were tested with Matlab's built-in functions. The LS-SVM was also better than ANN, DT, and KNN in average recognition accuracy over the ten activities. Table 5 shows the results.
Table 5: Accuracy rates and running times of the classification methods.

| Method | WF | WL | WR | WU | WD | JU | RU | ST | SI | SL | Average rate (%) | Running time (s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ANN | 96.1 | 91.4 | 90.2 | 90.5 | 85.4 | 77.5 | 98.6 | 96.7 | 95.2 | 99.1 | 92.1 | 0.085 |
| DT | 93.9 | 94.6 | 91.7 | 91.2 | 90.8 | 84.9 | 94.1 | 95.7 | 97.2 | 94.4 | 92.9 | 0.411 |
| KNN | 93.5 | 92.1 | 90.1 | 88.2 | 86.7 | 88.6 | 93.8 | 96.1 | 95.7 | 93.8 | 91.9 | 0.183 |
| LS-SVM | 95.7 | 92.9 | 91.4 | 92.9 | 94.3 | 97.1 | 97.1 | 98.6 | 97.1 | 98.6 | 95.6 | 0.021 |
6. Conclusion and Future Work
This paper aimed to provide an accurate and robust human activity recognition scheme. The scheme uses triaxial acceleration data, a hierarchical recognition scheme, and activity classifiers based on the LS-SVM and NB algorithms; the mean, variance, and entropy of the magnitude and angle of the triaxial acceleration data serve as the classifier features. The scheme effectively recognized a typical set of daily physical activities with an average accuracy of 95.6%, distinguishing walking (forward, left, right, upstairs, and downstairs), running, jumping, standing, sitting, and sleeping using only a single triaxial accelerometer. The experimental results of the hierarchical recognition scheme show significant potential for accurately differentiating activities using triaxial acceleration data. Although the scheme has so far been tested only on the USC-HAD dataset, its core does not depend on features specific to that dataset, so it should be applicable to other datasets.
The novelty of the proposed human activity recognition scheme is the introduction of the LS-SVM method as the classifier algorithm. The LS-SVM is an advanced version of the standard SVM, and relatively few recent studies have used the LS-SVM to recognize activities with only one triaxial accelerometer. The scheme with LS-SVM classifiers simplifies the construction of the hierarchical classification framework and has a lower running time than other common multiclassification algorithms. Accuracy is the basic requirement of any activity recognition system, and this scheme has a high success rate, recognizing ten different types of activities with an average accuracy of 95.6%.
The next stage of our research has two parts. First, we will improve the algorithms so that activities are recognized correctly regardless of where the user places the sensor. Second, we will consider an unsupervised approach to automatic activity recognition: an unsupervised learning framework will automatically cluster a large amount of unlabeled acceleration data into discrete groups of activities, so that human activity recognition can be performed naturally.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was partially supported by Appropriative Researching Fund for Professors and Doctors, Guangdong University of Education, under Grant 11ARF04, and Guangdong Provincial Department of Education under Grants 2013LYM_0063 and 2014GXJK161.
References
[1] J. K. Aggarwal and L. Xia, “Human activity recognition from 3D data: a review,” vol. 48, pp. 70–80, 2014. doi:10.1016/j.patrec.2014.04.011
[2] J. Hernández, R. Cabido, A. S. Montemayor, and J. J. Pantrigo, “Human activity recognition based on kinematic features,” vol. 31, no. 4, pp. 345–353, 2014. doi:10.1111/exsy.12013
[3] J. Yin, G. Tian, Z. Feng, and J. Li, “Human activity recognition based on multiple order temporal information,” vol. 40, no. 5, pp. 1538–1551, 2014. doi:10.1016/j.compeleceng.2014.04.006
[4] A. M. Khan, Y.-K. Lee, S. Y. Lee, and T.-S. Kim, “A triaxial accelerometer-based physical-activity recognition via augmented-signal features and a hierarchical recognizer,” vol. 14, no. 5, pp. 1166–1172, 2010. doi:10.1109/titb.2010.2051955
[5] D. Trabelsi, S. Mohammed, F. Chamroukhi, L. Oukhellou, and Y. Amirat, “An unsupervised approach for automatic activity recognition based on hidden Markov model regression,” vol. 10, no. 3, pp. 829–835, 2013. doi:10.1109/tase.2013.2256349
[6] W. L. Tang and E. S. Sazonov, “Highly accurate recognition of human postures and activities through classification with rejection,” vol. 18, no. 1, pp. 309–315, 2014. doi:10.1109/jbhi.2013.2287400
[7] M.-W. Lee, A. M. Khan, and T.-S. Kim, “A single tri-axial accelerometer-based real-time personal life log system capable of human activity recognition and exercise information generation,” vol. 15, no. 8, pp. 887–898, 2011. doi:10.1007/s00779-011-0403-3
[8] W.-Y. Deng, Q.-H. Zheng, and Z.-M. Wang, “Cross-person activity recognition using reduced kernel extreme learning machine,” vol. 53, pp. 1–7, 2014. doi:10.1016/j.neunet.2014.01.008
[9] P. Gupta and T. Dallas, “Feature selection and activity recognition system using a single triaxial accelerometer,” vol. 61, no. 6, pp. 1780–1786, 2014. doi:10.1109/TBME.2014.2307069
[10] D. P. Tao, L. Jin, Y. Wang, and X. Li, “Rank preserving discriminant analysis for human behavior recognition on wireless sensor networks,” vol. 10, no. 1, pp. 813–823, 2014. doi:10.1109/tii.2013.2255061
[11] N. Alshurafa, W. Xu, J. J. Liu, M. Huang, B. Mortazavi, C. K. Roberts, and M. Sarrafzadeh, “Designing a robust activity recognition framework for health and exergaming using wearable sensors,” vol. 18, no. 5, pp. 1636–1646, 2014. doi:10.1109/jbhi.2013.2287504
[12] A. Chang, S. Mota, and H. Lieberman, “GestureNet: a common sense approach to physical activity similarity,” in Proceedings of the Conference on Electronic Visualisation and the Arts, London, UK, July 2014.
[13] O. Banos, M. Damas, H. Pomares, F. Rojas, B. Delgado-Marquez, and O. Valenzuela, “Human activity recognition based on a sensor weighting hierarchical classifier,” vol. 17, no. 2, pp. 333–343, 2013. doi:10.1007/s00500-012-0896-3
[14] J. Cheng, X. Chen, and M. Shen, “A framework for daily activity monitoring and fall detection based on surface electromyography and accelerometer signals,” vol. 17, no. 1, pp. 38–45, 2013. doi:10.1109/titb.2012.2226905
[15] N. Kern, B. Schiele, and A. Schmidt, “Multi-sensor activity context detection for wearable computing,” Lecture Notes in Computer Science, vol. 2875, pp. 220–232, Springer, Berlin, Germany, 2003. doi:10.1007/978-3-540-39863-9_17
[16] L. Gao, A. K. Bourke, and J. Nelson, “Sensor positioning for activity recognition using multiple accelerometer-based sensors,” in Proceedings of the 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, pp. 425–430, April 2013.
[17] L. Gao, A. K. Bourke, and J. Nelson, “Evaluation of accelerometer based multi-sensor versus single-sensor activity recognition systems,” vol. 36, no. 6, pp. 779–785, 2014. doi:10.1016/j.medengphy.2014.02.012
[18] S. Liu, R. X. Gao, D. John, J. W. Staudenmayer, and P. S. Freedson, “Multisensor data fusion for physical activity assessment,” vol. 59, no. 3, pp. 687–696, 2012. doi:10.1109/tbme.2011.2178070
[19] J. Wan, M. J. O’Grady, and G. M. P. O’Hare, “Dynamic sensor event segmentation for real-time activity recognition in a smart home context,” vol. 19, no. 2, pp. 287–301, 2015. doi:10.1007/s00779-014-0824-x
[20] Y. Zhan and T. Kuroda, “Wearable sensor-based human activity recognition from environmental background sounds,” vol. 5, no. 1, pp. 77–89, 2014. doi:10.1007/s12652-012-0122-2
[21] http://en.wikipedia.org/wiki/Physical_exercise
[22] http://www.nhlbi.nih.gov/health/health-topics/topics/phys
[23] http://en.wikipedia.org/wiki/Activities_of_daily_living
[24] Ó. D. Lara and M. A. Labrador, “A survey on human activity recognition using wearable sensors,” vol. 15, no. 3, pp. 1192–1209, 2013. doi:10.1109/surv.2012.110112.00192
[25] S. Arora, D. Bhattacharjee, M. Nasipuri, L. Malik, M. Kundu, and D. K. Basu, “Performance comparison of SVM and ANN for handwritten devnagari character recognition,” 2010.
[26] J. Ren, “ANN vs. SVM: which one performs better in classification of MCCs in mammogram imaging,” vol. 26, pp. 144–153, 2012. doi:10.1016/j.knosys.2011.07.016
[27] J. S. Raikwal and K. Saxena, “Performance evaluation of SVM and k-nearest neighbor algorithm over medical data set,” vol. 50, no. 14, 2012. doi:10.5120/7842-1055
[28] M. Eastwood and B. Gabrys, “A non-sequential representation of sequential data for churn prediction,” pp. 209–218, Springer, Berlin, Germany, 2009.
[29] Y. Nam and J. W. Park, “Child activity recognition based on cooperative fusion model of a triaxial accelerometer and a barometric pressure sensor,” vol. 17, no. 2, pp. 420–426, 2013. doi:10.1109/JBHI.2012.2235075
[30] J. A. Nasiri, N. Moghadam Charkari, and K. Mozafari, “Energy-based model of least squares twin Support Vector Machines for human action recognition,” vol. 104, pp. 248–257, 2014. doi:10.1016/j.sigpro.2014.04.010
[31] K. Altun, B. Barshan, and O. Tunçel, “Comparative study on classifying human activities with miniature inertial and magnetic sensors,” vol. 43, no. 10, pp. 3605–3620, 2010. doi:10.1016/j.patcog.2010.04.019
[32] R. Wang, S. Kwong, D. Chen, and J. Cao, “A vector-valued support vector machine model for multiclass problem,” vol. 235, pp. 174–194, 2013. doi:10.1016/j.ins.2013.02.001
[33] N. Zhang and C. Williams, “Water quantity prediction using least squares support vector machines (LSSVM) method,” 2014.
[34] K. D. Brabanter and P. Karsmakers, LS-SVMlab Toolbox User's Guide, 2011, http://www.esat.kuleuven.be/sista/lssvmlab/downloads/tutorialv1_8.pdf
[35] http://www.motionnode.com/
[36] M. Zhang and A. A. Sawchuk, “A feature selection-based framework for human activity recognition using wearable multimodal sensors,” in Proceedings of the International Conference on Body Area Networks (BodyNets '11), Beijing, China, November 2011.
[37] M. Zhang and A. A. Sawchuk, “USC-HAD: a daily activity dataset for ubiquitous activity recognition using wearable sensors,” in Proceedings of the ACM International Conference on Ubiquitous Computing (UbiComp '12), International Workshop on Situation, Activity and Goal Awareness, Pittsburgh, Pa, USA, September 2012.
[38] http://sipi.usc.edu/HAD/
[39] O. D. Incel, M. Kose, and C. Ersoy, “A review and taxonomy of activity recognition on mobile phones,” vol. 3, no. 2, pp. 145–171, 2013. doi:10.1007/s12668-013-0088-3
[40] Y. Liang, X. Zhou, Z. Yu, and B. Guo, “Energy-efficient motion related activity recognition on mobile devices for pervasive healthcare,” vol. 19, no. 3, pp. 303–317, 2014. doi:10.1007/s11036-013-0448-9
[41] R. Cross, “Standing, walking, running, and jumping on a force plate,” vol. 67, no. 4, pp. 304–309, 1999. doi:10.1119/1.19253
[42] A. L. Hof, J. P. Van Zandwijk, and M. F. Bobbert, “Mechanics of human triceps surae muscle in walking, running and jumping,” vol. 174, no. 1, pp. 17–30, 2002. doi:10.1046/j.1365-201x.2002.00917.x
[43] G. Cola, A. Vecchio, and M. Avvenuti, “Improving the performance of fall detection systems through walk recognition,” vol. 5, no. 6, pp. 843–855, 2014. doi:10.1007/s12652-014-0235-x
[44] H.-I. Wu, B.-L. Li, T. A. Springer, and W. H. Neill, “Modelling animal movement as a persistent random walk in two dimensions: expected magnitude of net displacement,” vol. 132, no. 1-2, pp. 115–124, 2000. doi:10.1016/s0304-3800(00)00309-4
[45] C. Li, M. Lin, L. T. Yang, and C. Ding, “Integrating the enriched feature with machine learning algorithms for human movement and fall detection,” vol. 67, no. 3, pp. 854–865, 2014. doi:10.1007/s11227-013-1056-y
[46] N. Zhang, C. Williams, E. Ososanya, and W. Mahmoud, Streamflow Prediction Based on Least Squares Support Vector Machines, 2013, http://www.asee.org/documents/sections/middle-atlantic/fall-2013/11-ASEE2013_Final%20Zhang.pdf
[47] D. Rodriguez-Martin, A. Samà, C. Perez-Lopez, A. Català, J. Cabestany, and A. Rodriguez-Molinero, “SVM-based posture identification with a single waist-located triaxial accelerometer,” vol. 40, no. 18, pp. 7203–7211, 2013. doi:10.1016/j.eswa.2013.07.028
[48] J. P. Varkey, D. Pompili, and T. A. Walls, “Human motion recognition using a wireless sensor-based wearable system,” vol. 16, no. 7, pp. 897–910, 2012. doi:10.1007/s00779-011-0455-4