Research on the Application of Wireless Wearable Sensing Devices in Interactive Music

Wireless wearable devices can greatly assist and promote the artistic presentation of interactive music and have attracted the attention of a growing number of composers, musicians, dancers, and visual artists. Such devices can pick up data in real time, integrate with performers, and provide an immersive performance experience. They not only build a bridge between the subjective feelings and spiritual perception of performers and audience but also enable the audience to observe artistic information more directly. This paper mainly introduces the development of a wearable sensor system designed for monitoring interactive music movement. Firstly, an interactive music motion model is established according to the principles of human kinematics, and an experimental scheme for measuring the swaying angle of interactive music with a single sensor device is standardized. A multisensor fusion algorithm is then proposed to estimate the swing angle of interactive music. Based on the "cost-incentive" emotional model, the wireless wearable device and interactive music model are regarded as continuous variables determined by the emotional effect value and the incentive value. Energy, rhythm, harmony, time-domain, and spectral features of interactive music are extracted from the wireless wearable devices, and the dimensionality of the music feature space is reduced through principal component analysis, spatial projection, and Relief feature selection. Finally, the practicability of the system and the accuracy of the algorithm are verified by experiments. The recognition rate of wireless wearable devices and interactive music realized with this algorithm was improved.


Introduction
In the field of digital media, wireless wearable devices are applied in multimedia playback control, health-parameter monitoring, multimedia equipment data control, new media art and design, VR game and entertainment design, and smart fibers; common armbands, various kinds of gloves, VR/AR glasses, and head-mounted equipment are the most typical wearable devices [1, 2]. The development of brainwave sensors, DIY sensors, interaction systems designed for different functional purposes, and other technologies has provided a richer range of functions for wearable devices. The application of wearable devices is very active in the field of artistic creation. Artists try to find new ways to interpret their artistic ideas and pursuits through innovative technologies, reflecting the integration of art and technology. This paper focuses on the application of wearable devices in the field of interactive music, especially artistic creation using the brainwave sensors developed in recent years.
The "wearable" feature of wireless wearable devices can provide users with an immersive experience [3]. For interactive music performance, input data on human movement and gesture are very important. The data application principles of wireless wearable devices in interactive music include motion capture, motion tracking, and data mapping. On the one hand, sensors are widely used in interactive music creation and performance. Human movements, acceleration, and other information are tracked in real time and transmitted to programming environments capable of real-time data processing, such as Max/MSP, Pure Data (Pd), or Processing [4], where these data are output in the form of sound or images. A series of actions by performers on the stage provides the audience with a better interactive experience, so that the audience can better understand the correspondence between the performers' actions and their artistic effects. Some artists even invite audiences to use wearable devices provided by the composer to participate in the creation and presentation of their works, which is another form of interactive performance. Because the sensor integrates so closely with the user, its presence is almost invisible, allowing the user to focus on the performance itself. On the other hand, a WiFi or Bluetooth connection not only realizes wireless data transmission but also ensures the freedom of performers on the stage, increases the flexibility of movement, and can integrate performers' body movements with music, visual effects, and other media into a more coherent whole.
Wireless wearable devices based on glove designs and Myo armbands are widely used in the creation and performance of interactive music, and EEG sensors in particular can better convey and represent the inner world of performers. The hands are the organs through which people most readily touch and perceive the natural world in daily life. More than 40 years have passed since the birth of wearable devices designed around gloves. Such a device feeds the interactive system by detecting the curvature of the fingers, the pressure between the fingers, the relative distance between the fingers, the direction of movement of the palm, and the acceleration, and it is possible to play other instruments while wearing it.

Related Work
Wearable devices often use multiple sensors to detect different kinds of data, which involves multisensor data fusion technology. Multisensor data fusion technology originated mainly in the military field, but with the rapid development of sensor technology, sensors applied in many other fields have emerged [3]. In order to make monitoring results more accurate, multiple sensors are usually integrated, using the different characteristics of the various sensors to provide observations. In the past two decades, multisensor data fusion technology has been widely used in both military and nonmilitary fields. Military applications include maritime and air surveillance and defense, battlefield intelligence acquisition, and early warning [5]. Nonmilitary applications include remote sensing, device monitoring, medical care, and robotics [6]. Wearables need to exchange data with computers and phones. However, such devices have limited resources (battery, processing power, storage capacity, etc.), short data collection cycles, and strong real-time requirements. Therefore, secure communication mechanisms such as TLS [4] over TCP/IP cannot use a digital authentication mechanism built on a certificate authority [7]. In addition, the network communication links of wearable devices are relatively fragile, so in an open environment they are more vulnerable to attacks that threaten device authentication and message reliability and integrity. Among all kinds of attacks, identity-based attacks [8] (such as identity spoofing and MAC address spoofing) are the easiest to implement and the most destructive to the system. If identity-based attacks succeed, attackers can launch man-in-the-middle attacks, denial-of-service attacks, data tampering, session hijacking, and other attacks [9, 10].
In addition, various physiological signs of the human body are collected and transmitted to a remote cloud server through the wearable device node, and the data are provided to doctors, nursing staff, and researchers. However, the cloud environment has problems such as unreliable data services, illegal access, and data tampering. Such data involves a high degree of user privacy, so it is necessary to design an efficient data access and management mechanism to ensure the security of user data management and access [11].
At present, there are few reports on psychophysical experiments of interactive music perception based on hearing, sight, and touch. An experimental system of "vision-touch" interactive music perception has been established [12, 13]. Tactile music is generated through a "vibrating tactile chair," and "visual music" is displayed on a computer monitor. Forty-three people with hearing disabilities were selected to participate in the interactive music perception experiment combining hearing, sight, and touch [14]. A questionnaire survey and statistical analysis found that all the subjects liked "visual music" and "tactile music," but 54% preferred to perceive music only through the "vibrating tactile chair," while 46% preferred synchronous interactive perception through the "vibrating tactile chair" and "visual music" together. Participants who had not completely lost their hearing were also given earphones so that they could interact with music through the auditory, visual, and tactile modes. Participants rated "vibratory tactile music" and "visual music" highly [15], expressing a strong desire for musical experience. Experiments on "vision-vibration-touch" interactive music perception have thus been conducted, but the subjects were limited to people with hearing disabilities, the experimental and statistical methods were too simple, and the research was not conducted systematically according to psychophysical experimental methods. The experimental process and data statistics need further optimization [16], and these studies do not discuss how "vision-vibration-touch" interaction can enhance the sense of immersion in perceiving music.
Many wearable devices already on the market serve specialized fields such as medicine, technology, and communication, but they capture body movements and gestures and even a range of biological information such as cardiac, brain, and muscle signals. This information can be read, saved, and processed by a computer, becoming important data and material for artistic creation and enabling interactive performance [17, 18]. The Myo armband is a wearable motion capture and data control device designed by Thalmic Labs; it communicates wirelessly via Bluetooth and exchanges data with a computer, smartphone, or other electronic device [19]. The Myo armband combines a nine-axis inertial measurement unit (IMU), containing an acceleration sensor, a gyroscope, and a magnetometer, with EMG sensors. A large amount of data can be obtained by recognizing arm movement direction, muscle tension, palm and finger movements, acceleration, and orientation [20, 21]. It can transfer the acquired data to a variety of applications through the OSC protocol, giving artists the possibility to work in a variety of programming environments [22]. As it is convenient to wear and does not restrict the performers' range of motion, the Myo armband greatly improves the immersive experience of performers, which is convenient not only for instrumentalists but also for artists who need to make large movements on the stage, such as dancers and theater actors [23, 24]. Its advantage is also reflected in the fact that the signals it uses for human motion tracking are not limited by dimensions, and data can be picked up and transmitted stably as long as the device is within the coverage of the Bluetooth signal [25]. To date, many companies, such as Emotiv, NeuroSky, iWinks, and Muse, have developed biosensor devices based on electroencephalography (EEG) to obtain biological information from the human body.
By acquiring brain waves from brain activity, we can obtain the brainwave data of performers in different emotional states such as relaxation, meditation, or thinking.

Research on the Interactive Music Data Fusion Algorithm for a Wireless Wearable Sensor
3.1. Design Requirements. The workflow of the system is shown in Figure 1. The original data is transmitted wirelessly from the sending node to the receiving node. The receiving node is an adapter supporting the ZigBee protocol, and it forwards the received original data through the USB port. After the PC completes the processing, calculation, and storage operations, it can choose whether to send back a feedback command according to the user's needs.
(1) Two-way data transmission with the receiving node: through the UART interface, the original data is received from the receiving end and decoded correctly. After data processing is completed, the user can send preset instructions as needed.
(2) Real-time calculation of the attitude angle: according to the decoded accelerometer, gyroscope, and magnetometer data, the inclination of the sensor in each direction is calculated by the built-in kinematic algorithm.
(3) Saving data: according to user requirements, the original data and the computed angle data are saved as text for offline analysis.
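As a sketch of requirement (2) above, static tilt angles can be computed from the accelerometer reading, and a tilt-compensated heading from the magnetometer. This is a minimal illustration under assumed axis conventions, not the paper's built-in kinematic algorithm; the function names are hypothetical:

```python
import math

def tilt_from_accel(ax, ay, az):
    # Roll and pitch (radians) from a static accelerometer reading (any units).
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return roll, pitch

def yaw_from_mag(mx, my, mz, roll, pitch):
    # Tilt-compensated heading from a magnetometer reading, using the
    # roll and pitch obtained from the accelerometer.
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return math.atan2(-yh, xh)
```

Sign conventions for roll, pitch, and yaw differ between IMU vendors, so these formulas would need to be matched to the actual sensor frame.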

3.2. Research Flow of Interactive Music for Wireless Wearable Sensor Devices. This section introduces how each function of the software is implemented, one by one, according to the previous design requirements. The overall flowchart of the software is shown in Figure 2. The software system is responsible for processing, saving, and displaying data.
The physical meaning of the decoded raw sensor data is a voltage value, so it needs to be converted into the corresponding physical quantity; the conversion coefficients are given in the sensor datasheet. The data is then passed to the algorithm function to calculate the instantaneous attitude angle. According to the algorithm, both quaternion and Euler-angle representations of the attitude can be obtained. The quaternion representation can be used directly to draw three-dimensional graphics, and the Euler-angle representation can be used to draw two-dimensional curves. The angle data is stored in the corresponding global variables.
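The conversion from the quaternion representation to the Euler-angle representation mentioned above can be sketched as follows. This uses the standard ZYX (yaw-pitch-roll) convention; the paper does not specify which convention its algorithm uses, so this is an assumption:

```python
import math

def quat_to_euler(w, x, y, z):
    # Unit quaternion -> (roll, pitch, yaw) in radians, ZYX rotation order.
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    sinp = 2 * (w * y - z * x)
    # Clamp to handle numerical error at the gimbal-lock poles.
    pitch = math.copysign(math.pi / 2, sinp) if abs(sinp) >= 1 else math.asin(sinp)
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw
```

For example, the identity quaternion (1, 0, 0, 0) maps to all-zero angles, and a rotation of 90 degrees about the z-axis maps to a yaw of pi/2.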
3.3. Interactive Music Data Fusion Algorithm. In data fusion theory, fusion can be divided into three levels according to the degree of data abstraction, namely, data-layer fusion, feature-layer fusion, and decision-layer fusion. The data fusion in this paper belongs to feature-layer fusion. A weighting-based multisensor data fusion algorithm is proposed that uses three different sensors to measure attitude. One set of attitude measurement data, denoted β(x), is obtained by combining the acceleration sensor and the magnetic sensor, and another set, denoted δ(x), is obtained by using the angular velocity (gyroscope) sensor alone. The data fusion formula is α(x) = γβ(x) + (1 − γ)δ(x), where γ represents the allocation weight and α(x) represents the attitude measurement data after fusion. To obtain the weight, the posterior reliability of each single sensor is integrated to assign the weight.
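A minimal sketch of the weighted fusion step, choosing the weight γ from the relative reliabilities of the two estimates. Here inverse-variance weighting stands in for the paper's posterior-reliability scheme, which is an illustrative assumption:

```python
def fuse(beta, delta, var_beta, var_delta):
    # beta:  accelerometer + magnetometer angle estimate
    # delta: gyroscope angle estimate
    # gamma gives more weight to the estimate with the lower variance.
    gamma = var_delta / (var_beta + var_delta)
    return gamma * beta + (1 - gamma) * delta
```

With equal variances the two estimates are simply averaged; as one sensor's variance grows, its contribution shrinks accordingly.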
A sensor measurement can be gated by a threshold using the nearest-neighbor method. Its main idea is to associate with the tracked target the single observation that falls within the gate around the target's predicted position and is closest to it, generally in the sense of the smallest statistical distance. The nearest-neighbor method is in essence a "greedy" algorithm; it is easy to implement and is suitable for scenarios with a high signal-to-noise ratio and a low target density.
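The nearest-neighbor association step can be sketched as picking, among the candidate observations, the one closest to the predicted position. Squared Euclidean distance is used here as a simplification of the statistical distance mentioned above:

```python
def nearest_neighbor(prediction, observations):
    # Associate the predicted target position with the closest observation
    # (smallest squared Euclidean distance stands in for statistical distance).
    return min(observations,
               key=lambda o: sum((p - q) ** 2 for p, q in zip(prediction, o)))
```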
It is known that the initial state of the Kalman filtering algorithm has a Gaussian distribution, with expectation and covariance matrix E[x0] = μ0 and Cov(x0) = P0, i.e., x0 ~ N(μ0, P0). In order to test the effectiveness of the Kalman filter in the attitude estimation algorithm, MATLAB is used as the data acquisition interface in the simulation experiment and the algorithm program is run there. The Kalman filter is applied in the attitude sensor data calculation through simulation experiments. Figure 3 shows the simulation comparison before and after filtering for the three attitude angles. The solid line shows the unfiltered attitude angle curve, the dotted line shows the attitude angle curve after Kalman filtering, the horizontal axis is the sample number, and the vertical axis is the attitude angle. The validity of the Kalman filtering algorithm can be clearly seen from the figure. The unprocessed data produces large errors due to influences such as noise, leading to severe oscillation in the simulation results, whereas the simulation curve after Kalman filtering is smooth and has good continuity and stability.
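A scalar Kalman filter illustrates the smoothing effect described above. This is a minimal one-dimensional sketch with assumed process and measurement noise values (q, r), not the paper's full attitude filter:

```python
def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    # Scalar Kalman filter for a constant-state model:
    #   x_k = x_{k-1} + w_k (process noise q), z_k = x_k + v_k (measurement noise r)
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q              # predict: state unchanged, uncertainty grows
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # update toward the measurement
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

Fed a noisy but constant angle, the estimate converges smoothly to the true value instead of jittering with the raw measurements.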

Interactive Music Regression and Classification for Wireless Wearable Devices
Wireless wearable devices and interactive music are evaluated by listeners based on their subjective feelings and comprehensive understanding of the music. The most direct method is to estimate the emotional type of music through audience ratings. In order to study the different feelings each independent listener has toward wireless wearable devices and interactive music, the subjective rating method is often used to estimate the VA (valence-arousal) value of each song. Subjective rating of wireless wearable devices and interactive music usually requires multiple audiences with similar backgrounds to rate the test songs under the same listening conditions. In model training, the music signals of wireless wearable sensor devices and interactive music are transformed, after signal preprocessing and feature extraction, into a training data set (X, Y) of one-to-one corresponding features X and emotional labels Y. For the input training data set, the machine learning algorithm trains the classification or regression model with the minimum classification error or minimum mean square error as the objective function. In the prediction stage for wireless wearable sensing devices and interactive music of unknown type, the emotional attributes of the music are predicted. Music signals of unknown emotion type undergo signal preprocessing and feature extraction similar to model training, generating a music feature vector x that forms the test data set. This vector is then input to the regression or classification model generated in the training stage, which predicts the output for the wireless wearable sensor device and interactive music.
By analyzing the recognition model framework for wireless wearable sensor devices and interactive music, it can be seen that the interactive music attributes of the training data need to be labeled before model training, so the subjective scoring method is often used in the preparation of training data sets. At the same time, due to the uniqueness and subjectivity of wireless wearable sensor devices and interactive music, the "training data set" obtained by subjective evaluation can be used to train the recognition model, which can effectively generate a music recognition system with individual preference. A music signal is a continuous sequence of nonstationary signals, so the signal needs to be processed frame by frame. The MediaEval interactive music database dynamically annotates the VA mean and standard deviation at 2 Hz; i.e., an annotation is given every 0.5 s starting from 15 s into the music file. The corresponding frame length is adopted in this paper, with a sliding step of 0.5 s; rectangular windows with 50% overlap are used to frame each music file in the database. Figure 4 shows the framing result for music signal No. 3 in the MediaEval database. Feature extraction and statistics on the music signal in each frame yield a local feature data set corresponding to the dynamic VA annotations in the database. By further statistical processing of the 60 frames of data of each file, a global feature data set corresponding to the static V and A annotations of the whole music file can be obtained.
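The framing scheme described above (fixed-length frames with 50% overlap, i.e., a sliding step of half a frame) can be sketched as:

```python
def frame_signal(x, sr, frame_s=1.0, overlap=0.5):
    # Split signal x (sample rate sr, in Hz) into frames of frame_s seconds
    # with the given fractional overlap between consecutive frames.
    n = int(sr * frame_s)          # samples per frame
    hop = int(n * (1 - overlap))   # sliding step; 0.5 s for 1 s frames at 50% overlap
    return [x[i:i + n] for i in range(0, len(x) - n + 1, hop)]
```

With a 1 s frame and 50% overlap, the hop is 0.5 s, matching one frame per dynamic annotation.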
Because of the nonstationarity and short-time variation of the music signal, it is not appropriate to process the 1 s signal in each frame directly, especially for time-spectrum analysis. Studies have shown that the spectral characteristics and some physical characteristic parameters of music are basically unchanged over sufficiently short intervals, so music signal processing is often considered short-time stationary. Under the short-time stationarity assumption, the processing methods and theory of stationary processes can be introduced into music signal processing, and the signal in each frame is further divided into multiple short-time frames to avoid losing information between frames. For each short-time frame, overlapping sliding frames are also used, so that a smooth transition can be made between frames.
Windowing the music signal means taking the pointwise product of the window function with the music time series: s_w(n) = s(n)w(n), where s(n) is the framed signal and w(n) is the window function.
In wireless wearable device and interactive music recognition, regression and classification are the two main methods. Different from classification, which only needs to distinguish emotions such as happiness, anger, sadness, and calmness, the goal of interactive music regression is to identify emotion more precisely, regarding the emotional plane as a continuous space: a regression model identifies the emotional state represented by each point in the VA emotional plane, i.e., it finds the mapping between music and a specific coordinate position in the VA plane. The training and prediction framework of the regression model for wireless wearable sensor devices and interactive music is shown in Figure 5. The framework uses machine learning methods to train the regression model, which can then predict the VA values of wireless wearable devices and interactive music. Specifically, the model is trained and evaluated on the training data under the rules of the regression algorithm, and the parameters of the trained regression model are used to predict on the test data. The predicted V and A values are compared with the manually calibrated V and A values to verify the accuracy of the regression.
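The pointwise windowing described above can be sketched with a Hamming window applied to one frame. The choice of a Hamming window is an assumption for illustration; the paper mentions rectangular windows for framing:

```python
import math

def hamming(n):
    # Hamming window of length n.
    return [0.54 - 0.46 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def window_frame(frame):
    # Pointwise product s_w(n) = s(n) * w(n) of a frame with the window.
    w = hamming(len(frame))
    return [s * wi for s, wi in zip(frame, w)]
```

The taper toward the frame edges is what allows overlapping frames to transition smoothly into one another.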

Example Verification.
In different feature spaces, the optimal recognition results differ, and the regression methods corresponding to the optimal results also differ, which indicates that music emotion recognition is case-dependent; it is therefore necessary to select the regression algorithm that achieves optimal emotion recognition in each case. Analysis of the experimental results shows that the multivariate adaptive regression spline method, radial basis function regression, random forest regression, and the support vector regression algorithm all achieve a good regression effect on music emotion recognition.
The support vector regression (SVR) method still has stable and good prediction results in the feature space after dimensionality reduction. For the fitting of V and A values, the SVR parameter selection results are shown in Figure 6. From the regression results, it can be seen that the grid parameter search method adopted in this paper plays an important role in optimizing the SVR parameters.
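The grid parameter search can be sketched as an exhaustive search over (C, gamma) pairs that keeps the pair with the lowest validation mean square error. The `fit_predict` callback is a hypothetical stand-in for training an SVR with those parameters and predicting on the validation set:

```python
import itertools

def grid_search(train_x, train_y, val_x, val_y, cs, gammas, fit_predict):
    # Exhaustive search over (C, gamma) pairs; keep the pair whose predictions
    # on the validation set have the lowest mean square error.
    best = None
    for c, g in itertools.product(cs, gammas):
        pred = fit_predict(c, g, train_x, train_y, val_x)
        mse = sum((p - t) ** 2 for p, t in zip(pred, val_y)) / len(val_y)
        if best is None or mse < best[0]:
            best = (mse, c, g)
    return best[1], best[2]
```

In practice the grid is usually laid out on a logarithmic scale for both C and gamma, and cross-validation replaces the single held-out validation set.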
The above experimental results show that, although the musical emotion regression result achieved in this paper is superior to the reference benchmark and the best result reported in the literature, the accuracy of musical emotion recognition still needs further improvement; in particular, misrecognition between two adjacent quadrants is prone to occur. The main reason is the influence of subjective factors in the subjective scoring of music emotion: the uncertainty of personal subjective factors in the manual marking of V and A values gives the regression identification of music emotion a large error.
The classification results are shown in Figure 7. As can be seen, the hybrid classifier model proposed in this paper achieves the best music emotion recognition in the full feature space, the PCA feature space, and the Relief feature space, with recognition accuracies of 84.4%, 83.1%, and 80.1%, respectively. The experimental results show that the proposed hybrid classifier can improve the accuracy of music emotion classification and further reduce the risk of misclassification; the proposed hybrid classifier method is effective. It can be seen from the comparison of the experimental curves that the angle curve without algorithm processing jitters severely due to data errors, especially at the beginning and end of the experiment. The curve processed by the algorithm is smoother and coincides better with the reference, indicating that the algorithm performs well in reducing errors and improving data accuracy. The real-time performance of the algorithm can be analyzed from the correspondence of the simulation curves at the peak points and zero crossings, as shown in Figure 8. After processing by the data fusion algorithm, the curve has a delay relative to the original curve at the starting point, peaks, and zero crossings, but the delay is relatively small. Within the allowable range of the attitude measurement system, the algorithm has good real-time performance.
For the data fusion algorithm, the optimal key frames are extracted in real time from the orientation quaternion sequence, and only the key frames are transmitted, with the motion reconstructed in real time on the PC, which reduces the sending frequency of the wireless communication module. This can further reduce the energy consumption of wireless wearable devices and extend the service life of wireless sensor devices. Validation using the theoretical index TRR and the practical indices of endurance time and battery capacity shows that the proposed fusion algorithm can effectively reduce the amount of transmitted data and improve the energy efficiency of the equipment while maintaining the accuracy of the motion data.
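The key-frame idea can be sketched as transmitting a quaternion sample only when it has rotated far enough from the last transmitted one. The angular threshold and the selection rule are illustrative assumptions, not the paper's optimal key-frame criterion:

```python
import math

def quat_angle(q1, q2):
    # Rotation angle (radians) between two unit quaternions (w, x, y, z).
    dot = min(1.0, abs(sum(a * b for a, b in zip(q1, q2))))
    return 2 * math.acos(dot)

def select_key_frames(quats, thresh=0.1):
    # Keep a sample only when it has rotated more than thresh radians
    # since the last kept (transmitted) sample.
    keys = [quats[0]]
    for q in quats[1:]:
        if quat_angle(keys[-1], q) > thresh:
            keys.append(q)
    return keys
```

When the wearer is still, almost no frames are sent; during fast movement, frames are sent often enough to keep the reconstruction error below the threshold.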

Conclusion
In recent years, interactive music has developed continuously against the background of technological progress, and wireless wearable sensing devices have been widely applied in different fields. The flexibility and concealment of wireless wearable sensor devices provide greater free space for performers. In this highly immersive form of performance, performers can perceive themselves more clearly in the performance space and integrate the movements and gestures of interactive music performance with the final musical expression. The application of wireless wearable sensing devices in interactive music reduces the distance between performers and audience as well as between performers and themselves, enhances their sense of participation and immersion, and greatly broadens the range of artists' creative means and tools. At present, this kind of interdisciplinary new media art creation and multimedia cooperation is also developing rapidly. As described above, these wearable wireless sensing devices allow artists to explore various ways of fusing elements such as sound, image, text, and structure and are widely used in interactive electronic music, art, dance, drama, and other fields; real-time body signals and biological information from performers and audience are converted into corresponding output information, greatly enhancing their sense of participation and expanding the expressive means of art. A wearable sensor network requires the cooperation of multiple sensor nodes. The algorithm in this paper handles the information data of a single node, and problems may occur when multiple nodes cooperate to process data; therefore, multinode cooperation remains a point for future study.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.