Noncontact Heart Rate Measurement Using a Webcam, Based on Joint Blind Source Separation and a Skin Reflection Model: For a Wide Range of Imaging Conditions

College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110819, China
Key Laboratory of Medical Image Computing, Ministry of Education, 110169, China
Electrical and Computer Engineering Department, Stevens Institute of Technology, Hoboken, NJ 07030, USA
Blizard Institute, Barts & The London School of Medicine & Dentistry, Queen Mary University of London, E1 2AD, London, UK


Introduction
The heart rate (HR) is a widely used indicator of health status [1,2]. At present, electrocardiography (ECG) and photoplethysmography (PPG) are well established as reference methods for HR measurement [3,4], but they require the application of electrodes or transducers to the skin during the measurement, which may cause discomfort in long-term monitoring, and they do not lend themselves well to personal use.
In recent years, as an alternative and less obtrusive method, remote photoplethysmography (rPPG) has offered a low-cost and contactless approach for measuring HR with a camera. In 2008, Verkruysse et al. first introduced ambient light photoplethysmography to implement noncontact HR measurement [5,6]. In contrast to contact PPG, which, in both reflection and transmission modes, uses a built-in light source, rPPG measures the variation with time of ambient light reflected from the skin, making it necessary to distinguish between the specular and diffuse reflections of light from the skin surface. The diffuse component is the reflected light that remains after absorption and scattering in the skin, subdermal tissue, and blood. Consequently, it varies with changes in blood volume and the movement of the blood vessel wall [7][8][9]. The reflected light can be recorded with a variety of optical sensors, including cameras, and the plethysmographic signal can be retrieved to infer the physiological information of interest. Thus, the rPPG approach to noncontact HR monitoring and telemedicine is receiving more and more attention, especially in recent months for its potential application in isolation units to reduce the risk of cross-infection.
However, many factors influence the accuracy of rPPG-based HR measurement. These include the subject's skin tone [10], head movement [10], rapid changes during recovery from exercise [11], variation in ambient illumination [12], and, when imaging the head, facial expression changes associated with talking, for example, during human-machine interaction [13]. To address these issues, many approaches have been proposed, the principal ones being signal decomposition, model-based, and data-driven methods.
1.1. Signal Decomposition Methods. In early studies, blind source separation (BSS) based on independent component analysis (ICA) [8] or principal component analysis (PCA) [14] was widely used to extract pulse information from the rPPG signal. The BSS methods usually assume that the pulse signal is independent of the other components in the mixed signal and can be extracted by analyzing the correlation among its different components. Recently, some new signal decomposition methods have been reported. Cheng et al. proposed a joint blind source separation and ensemble empirical mode decomposition (JBSS_EEMD) method, which was found to be a feasible way to suppress the effects of varying illumination, by extracting the underlying common light sources [15]. Wei et al. used a second-order BSS method based on two regions of interest (ROIs) and obtained plausible estimates of both the heart and respiratory rates, although the ROIs were not detected automatically [16]. Macwan et al. presented a semiblind source method using a multiobjective optimization approach with autocorrelation as a periodicity measure, which focused on the subject's skin tone and a human-machine interaction scenario, wherein subjects were engaged in a mathematical computer game [17]. Song et al. also proposed a new semiblind source approach using fast kernel density independent component analysis (KDICA), which is a nonparametric BSS method, with specific importance attached to different distances between the participant and the camera [18]. However, these signal decomposition methods only achieve high accuracy under a limited range of conditions, whereas the model-based methods outlined below are more robust across a more extensive variety of scenarios.

1.2. Model-Based Methods. Model-based methods are grounded in the physical principles that govern rPPG signal generation. There are two typical models: the skin reflection model [19] and the local invariance of HR model [20]. The skin reflection model-based methods use the color vector information of the different components of the rPPG signal to facilitate the separation of specular and diffuse reflections. De Haan and Jeanne presented a chrominance-based signal processing method, which used the chrominance subspace to extract the pulse signal, although they mainly focused on the exercise recovery scenario [11]. Wang et al. proposed a plane-orthogonal-to-skin (POS) method based on a skin reflection model, and they considered subjects' skin tone, recovery after exercise, head movement, and illumination by fluorescent lamps of different colors [19]. In a previous study from our laboratory, Qi et al.
employed a skin reflection model with ICA to present the Project_ICA method and considered the effects of skin tone, exercise recovery, head movement, and human-computer interaction [21]. The local invariance of the HR model, developed by Pilz et al., converts rPPG signals to certain specific spaces and uses the local group invariance of HR to estimate its value. They have successively proposed the diffusion process method [13], the local group invariance method [20], and the spherical method [22] and investigated the effect of head movement and person-to-person interaction.
1.3. Data-Driven Methods. More recently, to address the limitations of conventional heuristic man-made features or engineered models, some researchers have used data-driven methods to infer HR from collected signal datasets. For example, Monkaresi et al. proposed a machine learning-based K-nearest neighbor algorithm [23], using the rPPG signal features for regression analysis, considering both stationary and exercise scenarios. In addition, Chen et al. proposed a deep learning-based convolutional attention network, DeepPhys [24], and focused on the subject's skin tone, head movement, and person-to-person communication scenarios. However, data-driven methods require a large amount of training data to feed the network, so they have some limitations in real applications.
To verify the robustness of rPPG-based HR measurement, most of the previous studies have focused on various common scenarios, e.g., the subject's skin tone, exercise recovery, and head movement, but less attention has been paid to poor illumination and illumination variation, both of which can cause interference in facial rPPG signal acquisition. Therefore, we have addressed these two illumination problems by means of a novel rPPG-based HR measurement method using a webcam. The system also functions well under a variety of other conditions, including exercise recovery, head movement, and human-machine interaction, and is therefore applicable to more scenarios than existing methods.
The main contributions of this paper are twofold: (1) We use both the signal decomposition method and the projection process based on a skin reflection model. This approach can eliminate the common illumination component from the facial and background ROI signals and extract the signal which carries the pulse rate information. (2) After ICA processing, to determine the output signal which carries the pulse information, we use the green channel signal in the facial ROI as a reference to identify the pulse signal. To validate the proposed method under a range of real-life conditions, we introduced for testing three datasets which include six different scenarios. The first dataset is from our previous study [21], the second is the Public Benchmark Dataset [25], and the third is a new dataset assembled for this study. We then compare the proposed method with three recently developed methods, using three metrics: Pearson correlation coefficient (r), Mean Absolute Deviation (MAD) (bpm), and Root Mean Square Error (RMSE) (bpm).
The remainder of the paper is structured as follows. Section 2 describes the proposed method and the datasets. The experimental findings are presented in Section 3. The results are analyzed and discussed in Section 4, and conclusions are drawn in Section 5.

Methods and Materials
The proposed method includes the following steps: (1) The facial video is recorded with a standard RGB camera, and the facial and background ROI signals are extracted from each video frame. (2) The JBSS method is used to remove the common illumination component from the facial and background ROI signals, obtaining a new facial ROI signal.
(3) The pulse signal is extracted from the new facial ROI signal using the projection method. (4) The green channel signal from the facial ROI is used to identify the pulse signal, and the HR is calculated using the fast Fourier transform (FFT). This method not only removes the illumination variation to enhance the extraction of the pulse information but also solves the problem that the output components of ICA have no inherent ordering. The whole process is shown in Figure 1, and the specific details of the major steps are described in the following Sections 2.1 to 2.4.

2.1. Acquisition of Facial ROI and Background ROI Signals. In this step, the Viola-Jones face detector is used to detect a rectangular facial region in the first video frame, and the facial feature points in the rectangular facial region are then detected using the minimum eigenvalue algorithm [26]. Next, the Kanade-Lucas-Tomasi (KLT) tracking method [27] is utilized to produce the facial feature matching point pairs between the previous and current frames, and a rectangular facial region is generated in each frame, as Figure 2(a) shows. Then, the YCbCr [28] color space is used to detect the skin region, as shown in Figure 2(b). The skin region is treated as the facial ROI, and the nonskin background region is treated as the background ROI. Finally, we calculate the spatial RGB averages of all the pixels in each facial ROI and background ROI at time t to obtain the facial ROI signal and background ROI signal as

S(t) = (1/K) Σ_{(x,y) ∈ S_skin} M(x, y, t),  (1)

where S(t) is the facial ROI signal or the background ROI signal from the RGB channels, S_skin is the area of the facial ROI or the background ROI, M(x, y, t) is the value of pixel (x, y) in S_skin at time t, and K is the total number of pixels in S_skin.
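As an illustration, the spatial RGB averaging step can be sketched as follows (a minimal example of ours, assuming frames arrive as NumPy RGB arrays and the skin mask has already been computed; the function name `roi_signal` is not from the paper):

```python
import numpy as np

def roi_signal(frame, mask):
    """Spatial RGB average over an ROI.

    frame: H x W x 3 array holding one RGB video frame
    mask:  H x W boolean array marking the ROI pixels (S_skin)
    Returns S(t), the mean R, G and B values over the K masked pixels.
    """
    pixels = frame[mask]            # K x 3 array of ROI pixel values
    return pixels.mean(axis=0)      # length-3 vector of channel means
```

Applying this to every frame of a clip, once for the skin mask and once for the background mask, yields the two 3 x Q ROI signals used in the rest of the pipeline.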

2.2. Joint Blind Source Separation. rPPG-based HR measurement is often affected by environmental illumination, which appears as a common factor in the facial and background ROI signals. We employed JBSS to remove the interference caused by the illumination, separating the underlying common data sources across multiple datasets while maintaining a consistent ordering of the extracted sources. The independent vector analysis Gaussian (IVA-G) method [29] was selected to implement the JBSS framework, and the process of IVA-G is shown in Figure 3. Each matrix X^[m] is assumed to be composed of L underlying independent sources:

X^[m] = A^[m] S^[m],  (2)

where A^[m] ∈ ℝ^(P×L) is the mixing matrix and S^[m] ∈ ℝ^(L×Q) can be expressed as S^[m] = [s^[m](1)^T, …, s^[m](L)^T]^T, with s^[m](n) ∈ ℝ^(1×Q) the n-th underlying independent source, as shown in Figure 3(a).
Furthermore, L source component vectors (SCVs), as defined in [30], can be obtained by collecting the corresponding underlying independent sources from each matrix S^[m], and the n-th SCV can be expressed as SCV_n = [s^[1](n)^T, …, s^[M](n)^T]^T. Each SCV is independent of all the others, while the components within each SCV are correlated. To find the target SCV, which is regarded as the common component across the given M matrices, we compute for each SCV an M × M correlation matrix with M × (M − 1)/2 distinct correlation values. (The correlation values are the linear correlations between the components within the SCV.) Using these values, we can analyze how close the components within each SCV are to one another. We assume that all the correlation values for the target SCV are sufficiently high, and we then determine the target SCV [31] by (1) counting the number of correlation values greater than an empirically determined threshold, (2) calculating the ratio of this number to the total number of correlation values for each SCV, and (3) choosing the SCV with the maximum ratio. The procedure is shown diagrammatically in Figure 3(b).
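The threshold-and-ratio selection rule above can be sketched as follows (a hypothetical illustration: the function name and the 0.8 default threshold are our choices, since the paper only states that the threshold was determined empirically):

```python
import numpy as np

def target_scv_index(scvs, threshold=0.8):
    """Pick the target SCV by the correlation-ratio rule.

    scvs: list of SCVs, each an M x Q array (one row per dataset).
    For each SCV, form the M x M correlation matrix, take its
    M*(M-1)/2 distinct off-diagonal values, count how many exceed
    the threshold, and return the index of the SCV whose ratio of
    above-threshold values is largest.
    """
    ratios = []
    for scv in scvs:
        corr = np.corrcoef(scv)                    # M x M correlation matrix
        iu = np.triu_indices(corr.shape[0], k=1)   # distinct off-diagonal entries
        vals = np.abs(corr[iu])                    # the correlation values
        ratios.append(np.mean(vals > threshold))   # ratio above threshold
    return int(np.argmax(ratios))
```

With M = 2, as in this study, each SCV has a single correlation value, so the rule reduces to picking the SCV whose two components are most strongly correlated.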
In this study, the facial and background ROI signals are regarded as two matrices X ∈ ℝ^(P×Q) and Y ∈ ℝ^(P×Q), where M = 2 and P = 3 for the RGB channels, and Q is the number of video frames (see Figure 4). After IVA-G, the target SCV can be extracted and is assumed to be the common illumination interference. To remove this interference, each component in the target SCV is set to zero, and the resulting S^[1]_new is then substituted into equation (2) to obtain a new X_new, which is the new facial ROI signal [15]. As a preprocessing step, JBSS is applied to the whole facial ROI signal and background ROI signal extracted from each video clip.

2.3. Projection Process Based on a Skin Reflection Model. The new facial ROI signal X_new contains the specular reflection component, due to the direct reflection of illumination from the skin surface, and the diffuse reflection component, which remains after the reabsorption of the incident light by the skin and underlying tissue, including blood vessels. In order to extract the diffuse reflection component, which carries the pulse rate information, we use the projection method based on the simplified skin reflection model [19]:

X_new(t) = I_0 · C · (1 · (1 + i(t)) + u_s · s(t) + u_p · p(t)),  (3)

where X_new(t) is the skin pixel intensity from the RGB channels at time t, 1 · (1 + i(t)) represents the illumination variation component and 1 is the unit vector, u_s · s(t) denotes the specular component, u_p · p(t) denotes the diffuse component, I_0 denotes the stationary illumination component, and C ∈ ℝ^(3×3) is the diagonal matrix used to normalize the steady component.
In order to reduce the signal dimensionality and eliminate the illumination variation component i(t), the time-series signal X_new ∈ ℝ^(3×Q) is projected onto a specific orthogonal plane D ∈ ℝ^(2×3), selected empirically by studying how the projection axes relate to the components of the skin reflection model:

Pr(t) = D · X_new = [D_1^T, D_2^T]^T · X_new,  (4)

where D_1 ∈ ℝ^(1×3) and D_2 ∈ ℝ^(1×3) are orthogonal to each other and Pr(t) ∈ ℝ^(2×Q) is the projection signal with two components Pr_1(t) ∈ ℝ^(1×Q) and Pr_2(t) ∈ ℝ^(1×Q) (see the left-hand section of Figure 5). After the projection process, the three-dimensional signal X_new ∈ ℝ^(3×Q) has been reduced to the two-dimensional signal Pr(t), and i(t) is eliminated. Pr_1(t) and Pr_2(t) are assumed to be independent of each other; thus, Pr(t) can be decomposed using ICA to retrieve s(t) and p(t):

[s(t)^T, p(t)^T]^T = W · Pr(t),  (5)

where W ∈ ℝ^(2×2) is the unmixing matrix (see the right-hand (ICA decomposition) section of Figure 5).
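The projection step can be sketched as follows. Since the coefficients of the plane D used in the paper are not published here, the sketch uses two mutually orthogonal placeholder axes (they coincide with the POS projection axes of [19]) merely to illustrate the dimensionality reduction and the removal of i(t):

```python
import numpy as np

# Placeholder projection plane: two mutually orthogonal rows.
# These are the POS axes from [19]; the plane D selected empirically
# in this paper may differ.
D = np.array([[ 0.0, 1.0, -1.0],
              [-2.0, 1.0,  1.0]])

def project(x_new):
    """Map the 3 x Q RGB signal X_new onto the 2-D plane spanned by D.

    The illumination variation term i(t) is aligned with the unit
    vector 1 = (1, 1, 1), which lies in the null space of D, so it
    is removed by the projection: D @ [1, 1, 1]^T == 0.
    """
    return D @ x_new   # 2 x Q projected signal Pr(t)
```

The two rows of the resulting signal would then be passed to a 2 x 2 ICA unmixing step (e.g., FastICA) to recover s(t) and p(t).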

2.4. Calculation of Heart Rate. There is no ordering of s(t) and p(t) after ICA processing, so we have to determine which one is the pulse signal. In our previous study [21], we selected the POS signals [19] for linear correlation analysis with s(t) and p(t) and chose the one more closely correlated with the POS signal as p(t). In contrast, some previous studies have pointed out that the green channel signal in the facial ROI contains more pulse rate information [5,32]. Lin and Lin also determined that the green channel signal achieved the highest signal-to-noise ratio for an rPPG signal under a variety of different lighting conditions [33]. Consequently, we performed a linear correlation analysis between the green channel signal and the output signals after ICA and chose as the pulse signal p(t) the output with the highest correlation with the green channel signal. After p(t) was selected, the components unrelated to HR were removed by applying a band-pass filter with cut-off frequencies of 0.7 Hz and 4 Hz, covering the normal range of human HR from 42 beats per minute (bpm) to 240 bpm [21]. Finally, an FFT was applied to the filtered pulse signal, and the heart rate was taken as the frequency at which the spectral power was maximal. The process is shown in Figure 6.
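The pulse-selection and HR steps can be sketched as follows (a simplified illustration of ours: FFT bins outside the band are zeroed by masking rather than applying a true band-pass filter, and all names are assumptions):

```python
import numpy as np

FS = 30.0  # webcam frame rate used in this study (fps)

def select_pulse(c1, c2, green):
    """Return the ICA output more strongly correlated with the green channel."""
    r1 = abs(np.corrcoef(c1, green)[0, 1])
    r2 = abs(np.corrcoef(c2, green)[0, 1])
    return c1 if r1 >= r2 else c2

def heart_rate_bpm(pulse, fs=FS, lo=0.7, hi=4.0):
    """Estimate HR as the in-band FFT peak of the pulse signal (42-240 bpm)."""
    spec = np.abs(np.fft.rfft(pulse)) ** 2           # power spectrum
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)  # frequency axis in Hz
    band = (freqs >= lo) & (freqs <= hi)             # keep 0.7-4 Hz only
    return 60.0 * freqs[band][np.argmax(spec[band])]
```

For a clean 1.2 Hz sinusoid sampled at 30 fps, for example, the spectral peak falls at 1.2 Hz and the estimate is 72 bpm.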

2.5. Experiments and Datasets.
To evaluate the algorithm's performance, we selected three datasets for testing: data from a previous study in our laboratory [21], the Public Benchmark Dataset [25], and a dataset collected specifically for this study.

Dataset from a Previous Study in Our Laboratory. This dataset contains one-minute videos of 28 subjects with various skin colors (18 pale-skinned subjects and 10 dark-skinned subjects) [21]. The subjects were tested in four different scenarios: stationary, exercise recovery, swinging head, and interacting with a computer (playing video games) in a natural light environment. A standard RGB camera (C2070i, Logitech Inc., CA, USA) was used to record the subject's face at 30 frames per second (fps) with a pixel resolution of 640 × 480, and each video record was one minute long. A transmissive finger pulse oximeter (Yuwell™ YX303, Yuyue Medical Equipment and Supply Co., China) was used to acquire the reference HR value, averaged over successive 10-second periods. Details of the four scenarios are shown in Table 1. More details of the experimental setup are described in reference [21].

Table 1: Details of the four scenarios.

Stationary: All subjects were required to sit still with their faces unobstructed and to look directly at the camera without movement.

Exercise recovery: All subjects were asked to do 20 squats in approximately two minutes. Immediately after the exercise, the subjects were asked to sit still and look directly at the camera.

Swinging head: All subjects were required to swing their heads from side to side at a rate of about 10 times per minute.

Human-computer interaction: All subjects were required to play computer games with no constraints on their movement.

Some stills from the recordings of two subjects with different skin colors are shown in Figure 7. The conditions of the Public Benchmark Dataset [25] are listed in Table 2.
More details of the experiment can be found on the dataset website https://osf.io/rwsx6/. Some stills from recordings of the head movement scenario are shown in Figure 8. For the other conditions, the subjects were required to look at the camera without moving.

Dataset Recorded Specifically for This Study. The Public Benchmark Dataset considers different illumination intensities and color temperatures but does not consider variation in the background illumination. This variation can affect the accuracy of rPPG-based noncontact HR measurements [15]. For example, when a person is looking at a computer screen, the flickering light of the screen will be reflected on their face. Similarly, the illumination on a driver's face in a traveling car will be continuously varying in the face of oncoming traffic and other changes in external illumination. To further investigate the effect of illumination variation on rPPG-based noncontact HR measurements, we established a varying illumination dataset which, in contrast to the Public Benchmark Dataset, used variations in color temperature rather than intensity. It contains 12 videos from 12 subjects (8 pale-skinned and 4 dark-skinned), all of whom gave their informed consent. The experimental protocol was approved by the ethical review board of our institute. A standard RGB camera (C2070i, Logitech Inc., CA, USA) was used to record signals for one minute at 30 fps with 640 × 480-pixel resolution.
All subjects remained stationary, exposed to a natural light background upon which was superimposed the additional illumination, switched between two color temperatures. The additional illumination was provided by an LED lamp (LED308W, Godox Inc., China), placed in front of the subjects at a distance of 1 meter, giving a light intensity on the face of 430 lux (manufacturer's figures). The color temperature, set by a manual control unit, was switched between 3300 kelvin and 5600 kelvin every 10 seconds for a period of one minute, according to the schematic in Figure 9(a). A multiphysiological parameter monitor (NSC-M12P, Neusoft Inc., China) was used to acquire the reference HR signal using a pulse oximeter finger probe, and the reference HR value was averaged over successive 10-second intervals. The whole experimental setup is shown in Figure 9(b).

2.6. Comparison with Existing Methods.
In order to evaluate the performance of the proposed method, we compared it with three other recently developed ones: JBSS_EEMD [15], POS [19], and Project_ICA [21]. All these methods were applied to the three datasets. To ensure a fair comparison between the different methods in the preprocessing stage, we used the method in Section 2.1 to extract the original facial and background ROI signals from the video. The facial ROI signal alone was then processed by POS and Project_ICA, while both signals were processed by JBSS_EEMD and the proposed method to estimate the HR. A 30 s time window was used to process each video. Adjacent time windows were overlapped by 29 s, so that each one-minute video yielded 31 results. The procedure is illustrated in Figure 10. It is worth noting that in the preprocessing stage, the JBSS method was applied to the whole facial ROI signal (1 min long) and background ROI signal (1 min long), and the projection process was then applied to each time window (30 s long).
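The windowing scheme (30 s windows sliding in 1 s steps over a one-minute video) can be sketched with a small helper of our own:

```python
def sliding_windows(n_frames, fs=30, win_s=30, step_s=1):
    """Start/end frame indices of overlapping analysis windows.

    With a one-minute video at 30 fps (1800 frames), 30 s windows
    stepped by 1 s (i.e., 29 s of overlap) give 31 windows,
    matching the 31 HR estimates per video.
    """
    win, step = win_s * fs, step_s * fs
    return [(s, s + win) for s in range(0, n_frames - win + 1, step)]
```

Each (start, end) pair indexes the frames fed to the projection and HR-estimation stages.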

Results
To assess and compare the performance of the four methods on the three datasets, three evaluation metrics were adopted: Pearson correlation coefficient (r) between the estimated and ground-truth HR series, Mean Absolute Deviation (MAD) (bpm), and Root Mean Square Error (RMSE) (bpm):

MAD = (1/N) Σ_{i=1}^{N} |HR^(i)_rppg − HR^(i)_gt|,  (6)

RMSE = sqrt((1/N) Σ_{i=1}^{N} (HR^(i)_rppg − HR^(i)_gt)^2),  (7)

where HR^(i)_rppg and HR^(i)_gt represent the subject's estimated and ground-truth HR values, respectively, at the i-th time window, and N is the number of time windows. In order to remove outliers in the estimated HR values, the Boxplot method was employed [34], by choosing 0.8 and 1.2 times the median value as the lower and upper limits, respectively, and removing values outside this range.
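For reference, the three metrics and the Boxplot-style outlier trimming can be computed as follows (function names are ours):

```python
import numpy as np

def metrics(hr_est, hr_gt):
    """Pearson r, MAD (bpm) and RMSE (bpm) between estimates and ground truth."""
    hr_est = np.asarray(hr_est, float)
    hr_gt = np.asarray(hr_gt, float)
    err = hr_est - hr_gt
    r = np.corrcoef(hr_est, hr_gt)[0, 1]   # Pearson correlation coefficient
    mad = np.mean(np.abs(err))             # mean absolute deviation
    rmse = np.sqrt(np.mean(err ** 2))      # root mean square error
    return r, mad, rmse

def trim_outliers(hr):
    """Keep only HR values within 0.8x and 1.2x of the median, as in [34]."""
    hr = np.asarray(hr, float)
    med = np.median(hr)
    return hr[(hr >= 0.8 * med) & (hr <= 1.2 * med)]
```

The trimming is applied to the per-window HR estimates before the metrics are computed.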

3.1. Results from the Dataset Used in a Previous Study from Our Laboratory. This dataset tests the performance of the algorithm under the conditions of no movement, exercise recovery, swinging head, and human-computer interaction. All the measurements were performed on pale- and dark-skinned subjects, and the results are summarized in Tables 3 and 4. The best and second-best agreements are marked with "*" and "**," respectively. Tables 3 and 4 indicate that, for most of the scenarios, the results of the proposed method and those of POS and Project_ICA are in good agreement, and these three methods outperform JBSS_EEMD, as shown by its higher values of MAD and RMSE and lower values of r. The proposed method had the best performance in two scenarios (exercise recovery and swinging head for dark-skinned subjects) and the second-best in one scenario (swinging head for pale-skinned subjects). Specifically, for the pale-skinned subjects, the POS and Project_ICA methods performed slightly better than the approach proposed here under the stationary and human-computer interaction scenarios, while the results of the proposed method are comparable with POS and Project_ICA under the exercise recovery and swinging head scenarios. For the dark-skinned subjects, the proposed method is more accurate than POS and Project_ICA under the swinging head scenario, whereas the results of POS and Project_ICA are a little better under the stationary and human-computer interaction scenarios. Moreover, the proposed method performs as well as the POS and Project_ICA methods under the exercise recovery conditions.
To evaluate the consistency between the measured HR value and the ground-truth, we constructed Bland-Altman plots comparing HR rppg obtained from the proposed method with HR gt for each of the four scenarios (see Figure 11), considering dark-and pale-skinned subjects separately. Figure 11 demonstrates that, for the proposed method, the agreement between HR rppg and HR gt under the stationary and human-computer interaction scenarios is better than that under the exercise recovery and swinging head scenarios. The proposed method confirms the superior performance obtained with the pale-skinned subjects under the stationary scenario, with narrower 95% limits of agreement (-6.03 to 4.73 bpm), and with the mean bias close to zero (-0.83 bpm). The results for the pale-skinned subjects under the human-computer interaction scenario show that the 95% limits of agreement are also narrower, ranging from -6.20 to 6.09 bpm with the mean bias close to zero (-0.05 bpm). For all scenarios, the agreement between the proposed method and the ground-truth was closer for the pale-skinned subjects than for the dark-skinned subjects.

3.2. Results from the Public Benchmark Dataset. This dataset has been used mainly to test the performance of the algorithm under various inadequate lighting conditions. In this study, a single light-skinned subject was tested under exercise recovery conditions and with their head moving at different frequencies. The two dark-skinned subjects were only tested under conditions LC1 to LC5, none of which involved exercise or head movement. The results for the light-skinned subject are shown in Table 5, and those for the dark-skinned subjects are shown in Table 6. The best and the second-best results are marked with "*" and "**," respectively. Tables 5 and 6 indicate that, for the 12 scenarios which involve the different illumination conditions (P1LC1-LC7, P2LC1-LC5, and P3LC1-LC5), the proposed method and JBSS_EEMD were more accurate than POS and Project_ICA, and that the proposed method has the best performance in six scenarios and the second-best in five. It is worth noting that under some low illumination scenarios, such as LC1, LC2, LC3, and LC7, the errors of POS and Project_ICA are much larger than those of JBSS_EEMD and the proposed method. The results of the proposed method are comparable with POS and Project_ICA under the exercise recovery (P1H1) and swinging head (P1M1) scenarios, and these three methods are superior to JBSS_EEMD. It should be noted that the results of all four methods were less accurate when the subject nodded their head at a rate of 90 times per minute (P1M3) than at 60 times per minute (P1M2). For the proposed method, we constructed Bland-Altman plots to show the agreement of HR_rppg and HR_gt (see Figure 12). Since the two skin-tone groups were both recorded under the LC1 to LC5 light intensities, the results of light-skinned and dark-skinned subjects under these five conditions are plotted alongside each other for ease of comparison.
Figure 12 compares the agreement between HR rppg and HR gt for the subjects with different skin tones and shows that this method performs better for the light-skinned subject than the dark-skinned ones under all conditions.

3.3. Results from the Dataset Specific to This Study. This dataset allows evaluation of the performance of the four methods when the subjects are stationary and the background illumination is varied by step changes in color temperature. Experiments were performed on 8 pale-skinned and 4 dark-skinned subjects, and the results are listed in Table 7. As before, the best and the second-best results are marked with "*" and "**," respectively. Table 7 shows that when the background color temperature is repeatedly varied between two levels, the proposed method yields the best results for both the pale- and dark-skinned subjects. Figure 13 further demonstrates the consistency of the proposed method for the pale-skinned subjects, with the 95% limits of agreement from -7.74 to 10.06 bpm and a mean bias of 1.16 bpm, which is marginally better than for the dark-skinned subjects, where the corresponding values are -12.82 to 9.45 bpm and -1.68 bpm.

Discussion
The results from the data obtained in our earlier study [21] show that, with sufficient illumination, POS, Project_ICA, and the proposed method, all of which are based on the skin reflection model, have better outcomes than the JBSS_EEMD approach. This implies that the projection process of the skin reflection model weakens the contribution of the specular component of the light reflected from the skin, so that the processed signal better retains the pulse rate information [19,21]. We also note that, under adequate illumination, the performance of the POS and Project_ICA methods is slightly better than that of the proposed method.
The results from the Public Benchmark Dataset show that POS and Project_ICA performed badly when the illumination was poor. In contrast, JBSS_EEMD and the new method proposed here had relatively better outcomes, with the new method superior to JBSS_EEMD. Not unexpectedly, the proposed method was more accurate under adequate illumination than under low illumination. For POS and Project_ICA, the projection process is applied directly to eliminate the component of the signal due to variations in background illumination, giving rise to more efficient separation of the diffuse and direct reflection components [19]. However, when the background illumination is inadequate, the camera cannot capture enough light and the projection process does not work well. At the same time, the JBSS preprocessing can effectively reduce some of the undesirable effects of variation in the environmental illumination and therefore enhance the performance of the subsequent projection process. We also found the same pattern, namely, good agreement between the methods on each metric, in the exercise recovery and head swinging scenarios as in the dataset from our earlier study [21]. In the results from the dataset compiled for this study, when the background illumination was repeatedly switched between two color temperatures, the proposed method gave the optimum results. The main reason is that it first uses JBSS to weaken the adverse effects of changes in environmental illumination and then uses the skin reflection model to more consistently obtain the signal component which contains the pulse rate information.
Under the head movement and exercise recovery scenarios, the results from the Public Benchmark Dataset were similar to those from the dataset used in our previous report [21], and the methods based on the skin reflection model were generally better than JBSS_EEMD. We also observed that the HR errors under the head movement and exercise recovery scenarios were greater than those of the stationary subjects, and that when the frequency of head movement is higher, the accuracy of all four methods is lower. The reason for the poor performance in the head movement scenario is that when the subjects' heads are moving rapidly, the detected facial ROI becomes small or is even lost entirely, which leads to unsatisfactory acquisition of the rPPG signal. The reason for the comparatively poor performance in the exercise recovery scenario is that the subject's HR changes markedly during the measurement period, perturbing the HR estimation over the sampling period used in this and our previous study [21]. This variation cannot be accurately captured, and the average over the period represents neither the exercise HR nor the resting value to which it returns. We also observed that POS and Project_ICA performed slightly better than the proposed method under the stationary scenario because, under conditions of sufficient and invariant illumination, the JBSS preprocessing contributes little. From the results on the three datasets, it is clear that the dark-skinned subjects yield less accurate HR values than those with lighter skins. The reason is that the absorption spectra of melanin and hemoglobin coincide to some extent, so the components of the light reflected from the skin which have interacted with the hemoglobin are affected by the melanin in the skin [35]. The more melanin, the more significant its impact on the HR signal.
Overall, the proposed method combines the advantages of the signal decomposition approach and the skin reflection model approach: the JBSS is used to remove the common illumination component from the facial ROI and background ROI signals, and the projection process is then used to obtain the signal that retains the pulse rate information. This technique can not only lessen the effect of poor illumination but also provide an accurate HR measurement under conditions of inadequate and/or repeatedly changing illumination. However, the facial ROI processing of the proposed method may be inadequate in the rapid head movement cases and for subjects with more melanin in the skin, resulting in poorer accuracy. Although we have exploited three datasets that contain a variety of subjects and scenarios, there are still some limitations. The number of subjects in the Public Benchmark Dataset and the number of scenarios in the dataset compiled for this study are too small to robustly validate the accuracy of noncontact video HR measurements under real-world conditions. In addition, the dataset compiled for this study involves a simple, repeated change in color temperature and is thus only a basic simulation of the more diverse changes in ambient illumination expected in the field. In future studies, we will investigate more realistic changes in illumination and pay further attention to the application of novel sensor technologies, such as fiber optic sensing, perhaps combining them with devices designed to detect force or biomolecules of interest, such as blood glucose or ascorbic acid [36][37][38][39], although achieving this in a noncontact device would be a challenging problem.

Conclusions
In this study, we have proposed a new method for measuring HR using a webcam, which combines the JBSS and projection methods based on a skin reflection model. The JBSS method can effectively reduce the effect of low-intensity ambient illumination and illumination variations, while the projection method based on the skin reflection model requires adequate illumination intensity to effectively extract the pulse rate information. We validated the proposed method on three datasets and compared the results with those of three other recently developed methods. Although the proposed method is not optimal under the stationary and human-computer interaction scenarios, differing from the best by only a small margin, it provides better estimates under most conditions, especially under inadequate and varying illumination. Thus, the approach proposed here has the potential to become the method of choice for the noncontact measurement of HR, because it is reasonably accurate and robust over a wide range of imaging conditions.