Fractal-Type Dynamical Behaviors of Complex Systems

Physics Department, Faculty of Applied Sciences, Polytechnica University of Bucharest, Bucharest 060042, Romania
Department of Physics, Technical “Gh. Asachi” University, 700050 Iasi, Romania
Department of Electronic Engineering, City University of Hong Kong, Hong Kong
Université de Lille, CNRS, UMR 8523, PhLAM-Physique des Lasers, Atomes et Molécules, CERLA-Centre d’Etudes et de Recherches Lasers et Applications, F-59000 Lille, France

This special issue of Complexity initially aimed at gathering leading-edge, up-to-date studies showcasing the occurrence of fractal features in the dynamics of highly nonlinear complex systems. More than thirty years after the term was coined by Mandelbrot [1], fractals continue to fascinate the scientific community and the general public with their wonderful propensity to repeat the same patterns infinitely at various (spatial and/or temporal) scales. On more specialized ground, these continuous but nondifferentiable mathematical objects have become increasingly useful tools for developing original approaches that efficiently model natural or physical phenomena escaping the traditional horizon of Euclidean metrics. Their range of applications is tremendously wide, from the well-known initial quest of measuring the length of the coastline of Britain [2] to the most recent investigations on digital image processing for materials science [3], chaotic circuits [4], information technology [5], and medicine [6]. Initially developed at macro/mesoscopic scales, these objects now penetrate the nanoworld [7] with the development of ultrahigh-resolution probes. At the temporal level, the modeling of ultrafast phenomena (such as the peculiar oscillations evidenced in transient laser-generated plasmas [8]) can benefit from innovative scale-relativity and fractal-based theoretical developments [9].
Although appealing, the initial fractal-oriented scope of this special issue proved to be quite narrow, and the decision was taken, in agreement with the journal's editorial board, to broaden its range to the more general field of nonlinear dynamics. In this context, 17 papers were received for review, covering a wide variety of topics, from "pure" mathematics and theoretical physics to applications in materials science, plasma physics, medicine, and financial markets. Despite the high quality of all submitted papers, the very rigorous review process retained only seven of them for publication in this special issue. A brief presentation of each accepted paper is given in the following.
In "Analysis of Implicit Type Nonlinear Dynamical Problem of Impulsive Fractional Differential Equations," N. Ahmad et al. use a powerful combination of Caputo fractional derivatives and classical fixed point theorems (Banach and Krasnoselskii) to investigate the existence, uniqueness, and various kinds of Ulam-Hyers stability [10] of the solutions to impulsive fractional differential equations with nonlocal boundary conditions. A "pure" mathematical object, the Ulam concept of stability [10], finds quite significant applications in real-world problems like the formation of rogue waves (tsunami) in the oceans or the design of ultrahigh precision optical clocks. A very recent example is offered by the observation in optical fibers of the broken symmetry of Fermi-Pasta-Ulam recurrence [11]. In another "mathematical" paper of this special issue ("Moderate Deviations for Stochastic Fractional Heat Equation Driven by Fractional Noise"), X. Sun et al. use a fractional Laplacian operator to study a class of stochastic fractional heat equations driven by fractional noise. A central limit theorem is given, and a moderate deviation principle is established.
Two papers in this special issue focus on plasma physics subjects. Plasmas are typical examples of strongly nonlinear dynamic systems, with many degrees of freedom, favorable for the development of (self-)ordered structures, instabilities, and (reversible) transitions from ordered to chaotic states. Complex space charge structures such as fireballs, multiple double layers, plasma bubbles, and solitons offer exciting study grounds for both experimentalists and theoreticians. C. Ursu et al., in "Fractal Method for Modeling the Peculiar Dynamics of Transient Carbon Plasma Generated by Excimer Laser Ablation in Vacuum," propose a fractal approach to simulate the formation of (surprising) V-shaped radiating plasma structures (consisting of two lateral arms of high optical emissivity and a fast-expanding central part of low emissivity), which have been observed previously [12]. In their model, the complexity of the interactions between the transient plasma particles (in Euclidean space) is replaced by the nondifferentiability (fractality) of the motion curves of the same particles in a fractal space. For plane symmetry and particular boundary conditions, stationary geodesic equations at a fractal-scale resolution give a fractal velocity field with components expressed by means of nonlinear solutions (soliton-type, kink-type, etc.). The theoretical model successfully reproduces the formation of the experimentally observed V-shaped radiating plasma structures. Another example of the usefulness of fractional-order modeling is offered by M. Guo et al. in "Study of Ion-Acoustic Solitary Waves in a Magnetized Plasma Using the Three-Dimensional Time-Space Fractional Schamel-KdV Equation." Most of the quantum hydrodynamic models (such as the Korteweg-de Vries equation [13]) used to date to describe the propagation of these waves in plasmas are integer-order models, while fractional calculus has rarely been considered, most probably because of its nonlocal character.
In the present paper, the authors take advantage of multiscale analysis to obtain a new model, the time-space fractional Schamel-KdV (TSF-Schamel-KdV) equation. Other advanced tools, such as Lie symmetry analysis, the Riemann-Liouville fractional derivative, and multisoliton solutions derived through the Hirota bilinear method, are used to fully explore the characteristics of the ion-acoustic solitary waves.
Shape memory alloys (SMAs), currently attracting great interest, exhibit temperature- and stress-induced phase transformations which promote their use as both sensors and actuators, with prodigious practical applications in biomedical devices, robotics, automotive, and aerospace engineering. Stress-strain hysteretic effects in these advanced functional materials lead to nonlinearity. Experimental dynamic analyses of SMAs have found that nonlinearity in the phase transformation leads to complicated dynamic behavior of the system. In this context, because of its nonlocal property (i.e., the next state of a system depends not only on its current state but also on all of its historical states), fractional calculus can provide better (or more realistic) results, for instance, in revealing chaotic behavior (sometimes even for orders lower than 3). Using a fractional-order approach, K. Rajagopal et al. reveal in the paper "Bifurcation and Chaos in Integer and Fractional Order Two-Degree-of-Freedom Shape Memory Alloy Oscillators" periodic, quasiperiodic, chaotic, and hyperchaotic oscillations of an SMA-based oscillator for selected values of the excitation parameters and operating temperatures. The Grünwald-Letnikov derivative is used to derive the fractional-order model of the oscillator, and bifurcation plots are used to study its dynamical behavior. Compared to the integer-order model, the fractional-order model shows more complex chaotic behavior, with more extended regions of chaos.
In recent years, fractal and multifractal analysis of biomedical data has seen growing interest. Using a 2D technique of multifractal denoising, Y. Karaca et al. analyzed data from 2204 individuals kept under observation at the University of Massachusetts Medical School, Worcester, Massachusetts, USA, in their paper "Stroke Subtype Clustering by Multifractal Bayesian Denoising with Fuzzy C Means and K-Means Algorithms." Such a high-dimensional database may contain subtle details not easily detectable by human observation. For stroke subtype diagnosis, such details are important, including hidden information concerning medical history, laboratory results, and treatments. In the present paper, 2D multifractal denoising techniques and the K-means and FCM algorithms are applied to numeric data obtained from the attributes of patients with seven different stroke subtypes. The newly proposed 2D multifractal Bayesian denoising technique (2D mBd) is revealed to be the most successful feature descriptor for each stroke subtype dataset, when compared to the existing literature.
The only paper of this special issue not making use of fractional calculus is "Information Feedback in Temporal Networks as a Predictor of Market Crashes" by S. Begušić et al. They use information theory (nonparametric measures) to study cross-sectional time-lagged dependencies and estimate directed temporal dependency networks in financial markets and real estate data. By analyzing the emergence of strongly connected feedback components in the estimated networks, the authors hypothesize that the existence of information feedback in financial networks induces strong spatiotemporal spillover effects and thus indicates a systemic risk. A systemic risk indicator is built and is shown to be useful in obtaining early warning signals for crashes in financial markets.
The variety of application fields and the richness of the submitted papers highlight, once more, the universality of nonlinear dynamics conceptual approaches at various levels, from nano- to galactic scales and from attoseconds to astronomical time units. From this perspective, complexity remains a challenging notion for theoretical modeling, technical analysis, and numerical simulation, in physics and mathematics (among others), because highly correlated nonlinear phenomena evolving over a wide range of timescales and length scales control the underlying systems and processes in their spatiotemporal evolution. The papers gathered here are just a few examples of this complexity, illustrating current efforts of different research groups toward a better understanding of fascinating manifestations of nonlinearity.
Last but not least, the guest editors would like to express their gratitude to the highly committed editorial office and to the voluntary reviewers, who ensured the success of this special issue.

Conflicts of Interest
The editors declare that they have no conflicts of interest regarding the publication of this Special Issue.

Introduction
Connectivity patterns in complex systems and their dynamic properties have been the focus of extensive research in physical, biological, neurological, and social systems [1-4]. In many studies, identification of strong dependencies between interconnected components has been linked to systemic risk-the risk associated with the collapse of the entire system [5-8]. This line of research is especially prominent in modeling risk in financial systems [9,10], where dependent components within the system (e.g., banks, companies, and financial assets) are more likely to fail simultaneously and influence other connected components (a phenomenon known also as the spillover effect), thus inducing a potential cascade of failures in the entire system [11]. However, due to the dynamic nature of financial systems and often complex dependency relationships, the identification and quantification of these effects is generally not a trivial task [12,13]. In addition, financial variables and time series (e.g., prices, returns, and volumes) are known to exhibit strongly non-Gaussian characteristics, heavy tails, and long-range dependence, which calls into question standard parametric approaches [14]. The complexity of economic and financial systems has been the focus of research from various perspectives, employing Ising models [15,16], agent-based models [17-19], and game theory [20]. The instabilities therein, observed as crises and crashes [21,22], are especially elusive and hard to model and predict [23,24]. Generally, connectivity patterns in financial markets are modeled by estimating and analyzing graphs of financial assets [25], which have been found to capture the structural properties of these complex systems, such as the hierarchical structures captured by spanning trees in financial markets [26,27].
Empirical research on connectedness in financial systems and the relation to systemic risk has intensified in the aftermath of the 2008 subprime crisis, focusing either on contagion within the banking sector or the comovement in financial markets [28]. Longin and Solnik [29] first provided formal evidence of increased correlations during bear markets, and recent studies report that volatility cross-correlations exhibit long memory meaning that once high volatility (risk) is spread across the entire market, it could last for a long time [30]. Adrian and Brunnermeier's ΔCoVaR and the systemic expected shortfall (SES) by Acharya et al. [31] quantify the potential distress of financial institutions conditional on other institutions' poor performance, thus measuring the spillover of losses within the financial sector. Kritzman and Li [32] quantify the divergence of the cross section of financial returns from their historical behavior expressed as the Mahalanobis distance, and find that it can be used similarly to implied volatility in sets of assets without liquid option markets, while accounting for their interactions. To measure the total contribution of assets to the systemic risk of the entire market, Kritzman et al. [33] propose the absorption ratio, based on the principal component analysis of the cross section of financial returns. A more detailed look into the connectivity patterns in financial systems was proposed by Billio et al. [34], who analyzed the dynamic causality patterns in networks of hedge funds, brokers, insurance companies, and banks. They report a highly asymmetric relationship during the subprime crisis of 2008 and find that the proportion of significant causal relationships in the network increases with the financial distress and crises in the market. Recently, Curme et al. 
[35] introduced a numerical method for validating time-lagged correlation networks of assets and found a growing number of statistically validated links and the rise of instability in financial networks. Although the state-of-the-art approaches employ correlation-based measures and Granger causality tests for inference of dependency relationships in financial networks, the assumptions of linearity and Gaussianity of these methods are often violated with financial data. In addition, the conclusions drawn from such analyses require further long-range historical backtests of the relationship between specific network patterns and systemic risk in the markets which would include more systemic events than the single 2008 subprime crisis.
In this paper, we investigate the dynamic relationships within a network of financial assets, their evolution through time, and their relation to the systemic risk in the market. We take a nonparametric approach to identify and validate time-lagged cross-sectional links between pairs of assets in a financial market. Based on the information-theoretic concept of entropy, which quantifies uncertainty, we measure the dependence between and within time series as a reduction in uncertainty, as quantified by Schreiber's transfer entropy [36]. The concept of entropy has been used to measure sequential irregularity in many time series applications, with notable results in finance. Moreover, transfer entropy-based methods yield state-of-the-art results in detecting information flows in computational neuroscience, bioinformatics, and financial economics [37]. Specific financial applications include the estimation of serial irregularities and risk in time series of returns and the inference of global dependence networks of financial indices [20, 38-42]. We expand on previous results by investigating the evolution of dynamic causal networks (sometimes referred to as information flow networks) through time and analyzing their association with systemic risk in the market. Moreover, we focus on the emergence of information feedback within these networks and hypothesize that strongly connected feedback components may indicate future distress in the system. The concept of feedback in financial systems was previously linked to systemic risk in various studies [22,43]; most notably, the DebtRank methodology proposed by Battiston et al. [44] uses interbank lending networks to assess the risk within the financial sector. Our approach moves beyond interbank lending networks and relies on time series of returns for any set of financial assets to infer directed dependency networks and study the information feedback within them.
In addition, previous approaches often depend on fundamental firm-level financial data, available only in quarterly time intervals and often heterogeneous in nature, while in this paper we estimate dependency networks using asset prices from financial markets, allowing for wider areas of application.
Based on the proposed methodology for estimating dependency networks of financial assets from time series of asset returns, we examine the general levels of predictability in the market and the topologies of the estimated dependency networks. Furthermore, we study the emergence of information feedback in the networks and introduce a network-based systemic risk indicator to test our hypothesis. We apply the proposed approach to 9 U.S. sector indices on a period from 1999 to 2016 and a selection of S&P 500 constituent company stocks from 1980 to 2016. In addition, we consider the U.S. House Price Index [45] data and apply our approach to the real estate market as well. Our results suggest that the dynamic dependency networks exhibit strongly connected feedback components, particularly around periods of financial crises. The proposed systemic risk indicator is shown to yield predictive power for future market distress, both for the two stock market datasets (sector indices and individual stocks), and the real estate market. These results demonstrate the validity of our approach and indicate that the proposed methodology can be used to construct early-warning signals for crashes in financial markets.

Methods and Data
2.1. Information Theoretic Causality Measures. In thermodynamics and statistical mechanics, entropy is a measure of disorder in a system. In information theory, it is a quantification of the uncertainty of a process, based on the Shannon entropy. Following Schreiber [36], the transfer entropy from X^j to X^i measures the reduction in the uncertainty of X^i_t obtained by additionally conditioning on the past of X^j:

T_{j→i} = H(X^i_t | X^i_{t−1}) − H(X^i_t | X^i_{t−1}, X^j_{t−1}). (4)

This difference corresponds to a Kullback-Leibler (KL) divergence; if X^i_t were functionally (deterministically) dependent on X^j_{t−1}, the KL divergence would reach its maximum value, equal to the entropy H(X^i_t | X^i_{t−1}). In the Granger-Wiener causality sense, by using expression (4) we measure the improvement in predicting X^i when knowing X^j, conditional on the past of X^i itself. It has been shown that for Gaussian variables, transfer entropy is equal to Granger causality [50], which implies that the standard linear and VAR model-based methods are a special case of the proposed approach.
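As an illustration, expression (4) can be estimated by plug-in counting once the series are discretized. The following minimal Python sketch (the helper names are ours, and it assumes binary 0/1 sequences with entropies measured in bits) recovers the expected behavior on a toy example where one series simply copies the lagged other:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution, skipping zero bins."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def joint_dist(*seqs):
    """Empirical joint distribution of several aligned binary (0/1) sequences."""
    counts = np.zeros((2,) * len(seqs))
    for idx in zip(*seqs):
        counts[idx] += 1.0
    return counts / counts.sum()

def transfer_entropy(target, source):
    """Plug-in estimate of expression (4):
    T_{source->target} = H(x_t | x_{t-1}) - H(x_t | x_{t-1}, y_{t-1})."""
    x, y = np.asarray(target), np.asarray(source)
    xt, xp, yp = x[1:], x[:-1], y[:-1]
    p_xx = joint_dist(xt, xp)
    p_xxy = joint_dist(xt, xp, yp)
    # conditional entropy via H(A | B) = H(A, B) - H(B)
    h_own = entropy(p_xx) - entropy(p_xx.sum(axis=0))
    h_both = entropy(p_xxy) - entropy(p_xxy.sum(axis=0))
    return h_own - h_both

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
x = np.roll(y, 1)               # x_t copies y_{t-1}: y "causes" x
print(transfer_entropy(x, y))   # close to 1 bit
print(transfer_entropy(y, x))   # close to 0 (small positive estimation bias)
```

The small positive value in the noncausal direction is exactly the finite-sample bias that the effective transfer entropy correction discussed next is designed to remove.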
Due to a positive bias in the estimates T̂_{j→i}, Marschinski and Kantz [38] propose the effective transfer entropy for financial time series, calculated by subtracting the mean of the surrogate measurements T̂^s_{j→i}, obtained using random permutations of the source time series X^j. In order to measure the fraction of the maximum possible value, we employ the concept of normalized transfer entropy, used in neurophysiology and computational neuroscience [51, 52]. It is defined as the effective transfer entropy divided by the entropy of X^i given its own past, which is the maximum theoretic value of T_{j→i} (attained in the case when H(X^i_t | X^i_{t−1}, X^j_{t−1}) = 0):

T̂^n_{j→i} = (T̂_{j→i} − ⟨T̂^s_{j→i}⟩) / H(X^i_t | X^i_{t−1}). (5)

This measure represents the fraction of the information in X^i not explained by its own past which is explained by including the past of X^j.
To include potential serial dependency of each time series X^i_t on its own past X^i_{t−1}, we also estimate the mutual information I_i = H(X^i_t) − H(X^i_t | X^i_{t−1}). In the discrete case, the expression can be reformulated as

I_i = Σ p(X^i_t, X^i_{t−1}) log [ p(X^i_t, X^i_{t−1}) / ( p(X^i_t) p(X^i_{t−1}) ) ], (6)

where the sum runs over the discrete realizations. This expression again corresponds to the KL divergence [47] between the joint distribution p(X^i_t, X^i_{t−1}) and the product of marginal distributions p(X^i_t) p(X^i_{t−1}). If subsequent returns were independent, the joint distribution would be equal to the product of marginals and the KL divergence would be equal to 0. On the other hand, if subsequent returns were functionally (deterministically) dependent, the KL divergence would reach its maximum value, equal to the entropy H(X^i_t). In addition, for Gaussian distributions the mutual information is determined by the Pearson correlation coefficient, I = −0.5 log(1 − ρ²), again implying that the proposed approach is a generalization of correlation-based methods. In analogy with (5), we estimate the effective mutual information Î_i − E[Î^s_i] by subtracting the mean over surrogate series. To evaluate these measures in practice, the discrete distributions p(X^i_t, X^i_{t−1}), p(X^i_t), and p(X^i_{t−1}) need to be estimated, commonly not a simple task due to scarcity of data. Various methods for discretizing asset returns for transfer entropy estimation have been utilized [40, 41], mainly based on binning. Specifically, due to the dynamic nature of financial systems, dependencies are estimated using time windows rather than the entire history, which makes the studied time series considerably short. Since a large number of bins may significantly deteriorate the estimation of multidimensional distributions, we reduce the number of variables by limiting returns to just two classes based on their sign: positive and negative.
However, rather than simply taking the signs of returns and counting their occurrences in a given time window, we employ concepts from fuzzy set theory [53] and define sigmoid membership functions which encode a realized return r_t into the two classes (positive and negative) with memberships μ⁺_t and μ⁻_t:

μ⁺_t = 1 / (1 + e^(−α r_t)),   μ⁻_t = 1 / (1 + e^(α r_t)) = 1 − μ⁺_t,

where α is a parameter defining the steepness of the sigmoid function. Note that for all r_t, μ⁺_t + μ⁻_t = 1. The sigmoid functions therefore define the membership of r_t in the positive and negative classes depending on its magnitude: very positive returns have a much higher μ⁺_t (and correspondingly smaller μ⁻_t), and vice versa, as shown in Figure 1.
To better understand the procedure of estimating discrete distributions of returns with just two discrete realizations based on sigmoid membership functions, we give an illustrative example: let r = (0.03, −0.01, 0.04, −0.02) be a time series of asset returns at n = 4 discrete points in time. By simply counting the occurrences of positive and negative returns, sgn(r) = (1, −1, 1, −1), one estimates the discrete distribution of the two return classes as p⁺ = 0.5 and p⁻ = 0.5. However, when the sigmoid membership functions are applied, one obtains μ⁺ = (0.99, 0.18, 0.997, 0.05) and μ⁻ = (0.01, 0.82, 0.003, 0.95). From these memberships, the probabilities are estimated as p⁺ = (1/n)Σμ⁺ (and analogously for p⁻): p⁺ = 2.22/4 = 0.55 and p⁻ = 1.78/4 = 0.45. The procedure for multidimensional returns is analogous: returns are discretized into two classes using the proposed sigmoid membership functions, and discrete distributions are estimated by summing the memberships of the samples. In this fashion, we do not treat all signs equally but assign, through the memberships, more weight to returns of larger magnitude; very small returns, whether positive or negative, are not as significant as larger ones. By doing so, we keep the dimensionalities of the discrete distributions in (6) and (4) low while accounting for the magnitude of returns.
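The worked example above can be reproduced in a few lines. The steepness parameter is not stated in the text, so the sketch below assumes α ≈ 150, a value consistent with the reported memberships:

```python
import numpy as np

def memberships(r, alpha=150.0):
    """Fuzzy class memberships of returns via a sigmoid of steepness alpha;
    mu_minus = 1 - mu_plus by construction."""
    mu_plus = 1.0 / (1.0 + np.exp(-alpha * np.asarray(r, dtype=float)))
    return mu_plus, 1.0 - mu_plus

r = np.array([0.03, -0.01, 0.04, -0.02])
mu_plus, mu_minus = memberships(r)
p_plus = mu_plus.mean()    # fuzzy estimate of the positive-class probability
p_minus = mu_minus.mean()
print(np.round(mu_plus, 2))                  # approximately (0.99, 0.18, 1.0, 0.05)
print(round(p_plus, 2), round(p_minus, 2))   # 0.55 0.45
```

Compare with the crisp sign-counting estimate p⁺ = p⁻ = 0.5: the fuzzy weighting shifts the estimate toward the class with the larger-magnitude returns.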
To obtain directed causality networks from multiple time series (which represent components of a dynamical complex system), we estimate T̂^n_{j→i} for all pairs of time series (i, j). From these estimates we infer directed networks with link weights w_{j→i} = T̂^n_{j→i}, while self-loop weights correspond to the estimated normalized mutual information Î^n_i and indicate serial dependency. We therefore estimate a temporal network in which links represent causality relationships between individual financial assets (including self-loops).
In this contribution, we provide empirical results by performing our analysis on three distinct datasets: (i) daily prices of 9 U.S. sector indices over the period from 1990 to 2017, (ii) daily stock prices of 47 companies which are long-established constituents of the S&P 500 index, from 1980 to 2017, and (iii) quarterly house price indices of 51 U.S. states over the period from 1975 to 2017 [45]. Since prices generally have a positive drift (often modeled as geometric Brownian motion), to measure causality between financial time series we use logarithmic returns, calculated as the change in the logarithm of the price S_t: r_t = log S_t − log S_{t−1}. Thus, in our analysis the time series X^i, i = 1, …, N correspond to the N individual components of the considered financial systems (specifically: 9 sector indices, 47 companies, and 51 house price indices).
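As a minimal sketch, the logarithmic returns used above can be computed directly from a price series:

```python
import numpy as np

def log_returns(prices):
    """r_t = log(S_t) - log(S_{t-1}) for a 1-D sequence of prices."""
    s = np.asarray(prices, dtype=float)
    return np.diff(np.log(s))

# hypothetical price path for illustration
prices = [100.0, 103.0, 101.0, 105.0]
r = log_returns(prices)
print(np.round(r, 4))  # first increment is log(103/100), roughly 0.0296
```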

Information Feedback in Directed Dependency Networks.
The proposed methodology is based on detecting pairs of time series where knowing the past of one helps predict the future of the other, relative to knowing only its own past. By estimating pairwise links in this way, we obtain networks where the nodes represent individual time series and the directed links between them denote the estimated dependencies. Even though pairwise methods may not capture a common third driving factor, accounting for all such effects is virtually impossible in real-world datasets. On a network level, the goal is not to measure the entire dependency of the system (for instance, as measured by the total correlation); this would require estimation in high-dimensional settings, which is especially problematic in systems such as financial markets, where long time windows may not be appropriate due to their dynamic properties. Rather, we are interested in the patterns created by the estimated dependency networks, inferred from pairwise links. Note that these are not instantaneous correlations (which would be measured by the correlation/covariance matrices), but time-lagged effects which may not exist even in the presence of high correlations, and which represent a nonlinear extension of the well-known Granger causality [54]. We hypothesize that feedback effects in the dependency structure of financial assets are symptomatic of severe inefficiencies in the system and are thus related to the overall level of systemic risk. We define information feedback not only as a bidirectional dependency (or information transfer, as measured by transfer entropy) estimated for pairs of assets, but as a loop of any size in the network, forming a pattern of cyclic dependency. This is closely related to strongly connected components (SCCs) from network theory, defined as subgraphs consisting of nodes which are all reachable from every other node within the same subgraph, as shown in Figure 2.
Evidently, the emergence of SCCs indicates feedback in the networks, not only between pairs of nodes but also in cycles through multiple nodes and edges. We measure this by employing Tarjan's procedure [56] for identifying SCCs in each network. The procedure is a depth-first search algorithm which traverses all nodes and their respective neighbors and partitions the original directed graph into subgraphs corresponding to the SCCs. The algorithm passes once through each node, building a forest of trees and subtrees containing the nodes reachable from each traversed node, while keeping track of the lowest-indexed node reachable from each traversed node in the graph (since these links are not necessarily preserved in the trees). Nodes which link back to an ancestor in their tree, or to another node which may link to a common ancestor, form a strongly connected component together with all other such nodes in their subtree. The algorithm pseudocode for a set of vertices V and edges E is given below.
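The original pseudocode listing is not reproduced here; as a substitute, the following is a compact Python sketch of Tarjan's procedure, written recursively for clarity (an explicit stack would be preferable for very large graphs):

```python
def tarjan_scc(vertices, edges):
    """Tarjan's strongly connected components for a directed graph,
    given as a vertex list and a list of (u, v) directed edges."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
    index, lowlink = {}, {}      # discovery order / lowest reachable index
    stack, on_stack, sccs = [], set(), []
    counter = [0]

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in adj[v]:
            if w not in index:               # tree edge: recurse
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:              # back edge into the current SCC
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:           # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in vertices:
        if v not in index:
            strongconnect(v)
    return sccs

# toy graph: a directed 3-cycle plus a dangling node -> SCC sizes 3 and 1
comps = tarjan_scc(['a', 'b', 'c', 'd'],
                   [('a', 'b'), ('b', 'c'), ('c', 'a'), ('c', 'd')])
print(sorted(len(c) for c in comps))  # [1, 3]
```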
The algorithm does not depend on the ordering of the nodes or the choice of the first root node. Moreover, since the depth-first search traverses each node only once, its computational complexity is O(|V| + |E|). We employ Tarjan's algorithm to detect SCCs in the estimated directed dependency networks obtained from financial time series, and study the properties and emergence of information feedback in the system.

U.S. Stock Market

First, we analyze the dependency networks of the U.S. financial markets, represented by the 9 sector indices and the 47 stocks of S&P 500 constituent companies. Before estimating temporal networks, to measure the general amount of predictability we estimate the entropies H(X^i_t) and conditional entropies H(X^i_t | X^i_{t−1}) for all stocks i, and the conditional entropies H(X^i_t | X^i_{t−1}, X^j_{t−1}) for all stock pairs (i, j), on a rolling time window of T = 1 year with a step of 1 day. Figure 3 shows the estimated entropies for the 47 stocks of S&P 500 constituent companies, averaged over all companies i and pairs (i, j) and subtracted from the theoretical maximum entropy, thus quantifying the amount of predictability in the time series of returns through time. The evidence in Figure 3 suggests that the predictability of stock returns from their own past, H(X^i_t | X^i_{t−1}), and from the past of other stock returns, H(X^i_t | X^i_{t−1}, X^j_{t−1}), generally diminishes through time, as indicated by the distinct negative trends in the conditional entropies and in the differences between the conditional entropies and the entropies of each stock i. This may be interpreted through the implications of the efficient market hypothesis [57, 58] and suggests that over the last four decades frictions in the U.S. stock market have decreased; this is in line with the reported increase of liquidity and reduction of trading costs in the U.S. stock market over the observed period from 1980 to 2017 [59].
We estimate the normalized transfer entropies T̂^n_{j→i} for all pairs of stocks (i, j) and analyze snapshots of the inferred causality networks at different points in time, shown in Figure 4. The causality networks exhibit considerable differences across periods: (a) During the period of stable market growth from 1994 to 1996, the estimated network is sparse, with relatively low values of the normalized transfer entropy links.
(b) During the Dot-com bubble and crash between 1999 and 2001, the network is much more dense with higher link weights.
(c) Again during a period of market recovery from 2004 to 2006, the network is considerably lighter in terms of link weights.
(d) During the subprime bubble and crisis from 2007 to 2009, the estimated network is almost fully connected, with considerably heavier weights.
These results suggest that cross-sectional causal relationships between stocks rise during turbulent market periods and are substantially lower when the entire system is stable.
To analyze this relationship, we measure the total number of links as a percentage of the maximum possible number of links in the network, and the dynamics of this quantity through time. In addition, we study the emergence of feedback relationships in the estimated networks: here we define an elementary feedback pair as a situation where both T̂^n_{j→i} and T̂^n_{i→j} are nonnegative for a pair of time series (i, j). We count the number of pairs (i, j) for which such relationships are found and again express it as a percentage of the total possible number of such pairs in a network. In addition, we inspect the average link weight (equal to the average T̂^n_{i→j} over identified links), and display these quantities for the networks of the 47 long-established S&P 500 constituents in Figure 5.
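As an illustrative sketch (the function and variable names are ours), the link density, feedback-pair density, and average link weight described above can be computed from a matrix of estimated normalized transfer entropies; here links are taken where the estimate exceeds zero and the diagonal (self-loops) is ignored:

```python
import numpy as np

def network_stats(T):
    """Summary statistics for a matrix T of estimated normalized transfer
    entropies, with T[j, i] interpreted as the link weight j -> i."""
    T = np.asarray(T, dtype=float)
    n = T.shape[0]
    off_diag = ~np.eye(n, dtype=bool)
    links = (T > 0) & off_diag
    density = links.sum() / (n * (n - 1))
    # elementary feedback pair: links present in both directions
    feedback = links & links.T
    pair_density = feedback.sum() / (n * (n - 1))
    mean_weight = T[links].mean() if links.any() else 0.0
    return density, pair_density, mean_weight

# toy 3-node example: one feedback pair (0 <-> 1) and one one-way link (1 -> 2)
T = np.array([[0.0, 0.02, 0.0],
              [0.01, 0.0, 0.03],
              [0.0, 0.0, 0.0]])
print(network_stats(T))  # density 0.5, feedback-pair density 1/3, mean weight 0.02
```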
The average link weight is shown to gradually decrease over time, which is in line with the predictability levels in Figure 3. However, due to the bias correction in (5), the number of nonnegative estimated normalized transfer entropies (i.e., the number of links) does not have this drift.
We estimate the SCCs in each network through the observed period and analyze the individual SCC sizes (number of nodes within the SCC), demonstrated in Figure 6.
These results reveal the nature of the feedback in temporal causality networks, suggesting that it mainly concentrates within one large SCC, rather than in multiple components of comparable size. This finding is especially interesting when considering the fact that for correlation matrices of contemporaneous asset returns, the first eigenvalue in the spectral decomposition (also known as the market mode) is found to account for the majority of the variance in the market [33,34]. It suggests that the common risk component in interconnected financial markets is not only contemporaneous, but also spills over through temporal dependencies between assets, and that this effect seems to persist through time.
Moreover, the notable rise in the SCC size around 1998, which remains relatively high until after the 2008 subprime crash (with some fluctuations between the 2000 Dot-com bubble and the 2008 crash), might be evidence of a phase transition in the complex network of financial assets [2,60]. The emergence of such a large SCC in the network might be a consequence of investors forming patterns of feedback trading across almost the entire market, something reported for both individual and institutional investors [61]. A possible line of further research might try to find which behavioral properties of agents at a microstructural level cause such dynamics to be observed in the system [17,62,63].
In this paper, we hypothesize that the information feedback observed in directed dependency networks indicates inefficiencies in the financial market and may be used as a measure of systemic risk. To verify this hypothesis, we measure the amount of feedback in each network by considering only the largest SCC (motivated by the fact that it is significantly larger than the other components and comprises up to 100% of the nodes in the network) and propose a measure based on the outdegrees d_i^+ of all nodes i within the largest SCC:

S_t = (1 / (N(N − 1))) Σ_{i ∈ SCC} d_i^+ .   (10)

We normalize by 1/(N(N − 1)), where N(N − 1) is the maximum possible number of directed links in a fully connected giant SCC containing all nodes in the network. We call the quantity in (10) the SCC index and calculate S_t from the networks estimated at each point in time. Since the existence of feedback within the network does not necessarily mean that a negative shock is about to occur, we combine the estimated SCC index with the VIX index, calculated from implied volatilities on the S&P 500 index and also known as the fear index. The goal of including the VIX in the early-warning indicator is to measure both the feedback and the fear within the market. However, it is not within the scope of this paper to detail the construction of a comprehensive early-warning indicator for financial crises: this is left for further analyses, which would include exogenous effects and data. In the following results, we only use the observed market data to demonstrate the predictive power of the proposed approach. The results for the SCC index and the VIX · S indicator for both the stock and sector index datasets are shown in Figure 7.
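A compact sketch of the SCC index computation (Tarjan's algorithm for the strongly connected components, followed by normalization by N(N − 1)) might look as follows; counting only intra-SCC outdegree links is our reading of the definition above, and the variable names are ours, not the authors' code:

```python
def tarjan_scc(adj):
    """Strongly connected components of a directed graph given as an
    adjacency list {node: [neighbors]}; recursive Tarjan, adequate for
    the small networks considered here."""
    index, lowlink, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def build(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for u in adj[v]:
            if u not in index:              # u has not been visited
                build(u)
                lowlink[v] = min(lowlink[v], lowlink[u])
            elif u in on_stack:             # u is in the current component
                lowlink[v] = min(lowlink[v], index[u])
        if lowlink[v] == index[v]:          # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in adj:
        if v not in index:
            build(v)
    return sccs

def scc_index(adj):
    """S_t: sum of outdegrees inside the largest SCC, normalized by the
    maximum possible number of directed links N(N - 1)."""
    N = len(adj)
    giant = set(max(tarjan_scc(adj), key=len))
    d_plus = sum(1 for v in giant for u in adj[v] if u in giant)
    return d_plus / (N * (N - 1))

# toy network: a 3-cycle {0, 1, 2} plus a dangling node 3
adj = {0: [1], 1: [2], 2: [0, 3], 3: []}
S = scc_index(adj)   # 3 intra-SCC links / (4 * 3) = 0.25
```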
The estimated SCC index builds up in the bull market prior to the Dot-com bubble of 2000, as well as just prior to the 2008 subprime mortgage crisis, but in both cases deflates in the aftermath of the crashes. The shift in the behavior of the system seen in 2003 has also been reported in a mutual information-based analysis by Harré and Bossomaier [64]. It is important to note that events such as the September 11 attacks are exogenous to the system and thus cannot be expected to be found in the data and predicted by such endogenous approaches; such considerations are important, especially in the presence of so-called "twin crises" [65]. In the case of the subprime mortgage crisis, although some econometric evidence exists on liquidity and return dependence prior to the crisis [66,67], the question remains to what extent the housing bubble was priced into the stock market. Our SCC index rises at the onset of the 2007 instabilities, but prior to the large drawdown in the S&P 500 index of 2008; a similar increase in risk was observed in other systemic risk measures [32,33].

Algorithm: Tarjan's algorithm for finding strongly connected components.
1: procedure TARJAN(V, E)
2:   S ← new empty stack
3:   SCC ← ∅  // list of strongly connected components
4:   nodecounter ← 0
5:   for v ∈ V do
6:     if index(v) is not defined then  // v has not been visited
7:       BUILDTREE(v)
8: procedure BUILDTREE(v)
9:   nodecounter ← nodecounter + 1
10:  index(v) ← nodecounter  // index of a visited node is the current nodecounter value
11:  lowlink(v) ← nodecounter  // initialize the smallest index of an accessible node to its own index
12:  push v to S
13:  for neighbors u of v do
14:    if lowlink(u) is undefined then  // u has not been visited
15:      BUILDTREE(u)
16:      lowlink(v) ← min(lowlink(v), lowlink(u))
17:    else if u is on S then  // u has been visited but is in the current component
18:      lowlink(v) ← min(lowlink(v), index(u))
19:  if lowlink(v) = index(v) then  // v is the root of a strongly connected component
20:    pop nodes from S down to and including v
21:    add them as a new component to SCC
Generally, there are two requirements that each early-warning indicator should meet. First, it should warn prior to the arrival of large shocks, i.e., ex ante rather than ex post. Second, the peaks indicating these shocks should be substantially higher than the bulk of the indicator values quantifying the rest of the events. To verify that increased values of the SCC index S_t and the proposed systemic risk indicator VIX · S_t both rise prior to periods of increased volatility, we inspect the cross-correlation functions between the two and the volatility of the S&P 500 index, as measured by the past 1-year realized volatility σ_{t−1y}. The cross-correlation functions for both the stock and sector index data are shown in Figure 8, indicating that both considered signals increase prior to the increase in volatility, by ca. τ = 150 days for stocks and τ = 250 days for sector indices. Furthermore, to test the capacity of the SCC index to predict future distress in the market, we perform a regression on the future 1-year realized volatility σ_{t+1y} of the S&P 500 index, calculated as the sample standard deviation. We choose this period even though the results of the cross-correlation analysis suggest that the index calculated from U.S. stock market data may be predictive over a shorter time frame. Since different time windows for the estimation of the SCC index could well yield different time frames for the prediction of volatility, this is left to further research; we proceed with the 1-year time window both for the estimation of the SCC index and for the inference on forward volatility prediction. First, we define the reduced model, where the input variables are the VIX indicator and the past 1-year realized volatility σ_{t−1y}, to account for the autoregressive nature of market volatility:

σ_{t+1y}(t) = β_0 + β_1 VIX(t) + β_2 σ_{t−1y}(t).
The VIX is included in the null model in order to investigate how much the proposed SCC index alone improves volatility prediction compared with standard techniques and signals. It is also important to note that the past volatility is calculated on the period (t − 1y, t) and the future volatility on (t, t + 1y), so there is no overlap between the windows. In the expanded model, we include the SCC index S:

σ_{t+1y}(t) = β_0 + β_1 VIX(t) + β_2 σ_{t−1y}(t) + β_3 S(t).

To compare the models, we use the adjusted R² measure (which takes into account the number of parameters in the model) and the Akaike information criterion (AIC). Note that the AIC is observed on an additive scale, and only differences in AIC between models are interpretable, in such a way that a model with an AIC reduced by 10 or more is considered to have substantial support [68]. The results in Table 1 demonstrate a significant improvement in future volatility prediction when the SCC index is included, as opposed to the reduced models including only past volatility and the VIX index. This is implied both by the increase in the adjusted R² and by the AIC, which is strongly decreased in the expanded model. Although from a modelling perspective the adjusted R² of 0.45 and 0.5 for the expanded models may not seem like an exceptional fit, these values are not far from other results in financial research; for instance, the fit of Shiller's CAPE on 10-year forward returns is around 0.5-0.6 (depending on the period) [69]. However, to the best of our knowledge, there are no similar results for such short- to mid-term predictions of market volatility (i.e., the forward 1-year period used here). In addition, to see how the combination of the VIX and the SCC index fares against the linear model, we include the interaction term VIX · S in the regression, and observe that the model does not improve significantly, as suggested by the adjusted R² and AIC measures.
However, we obtain the best AIC results when estimating a linear model using only the interaction term,

σ_{t+1y}(t) = β_0 + β_1 VIX(t) · S(t),

for both the stock and sector index datasets, implying that this single variable captures the most important dynamics pertaining to future volatility prediction and allows for the simplest form of the model. Owing to the normalization of the SCC index by the maximum possible size of the network, the SCC index estimated from 47 stocks and the one estimated from 9 sector indices perform very consistently with respect to the magnitude of the estimated coefficients. In addition, the fact that the respective models fit very similarly indicates that the proposed methodology captures common effects in the market, observable from different datasets.
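The reduced/expanded model comparison can be reproduced generically with ordinary least squares, the adjusted R², and a Gaussian AIC. The sketch below uses synthetic data (the actual market series are not reproduced here), and the AIC convention, up to additive constants, is one common choice:

```python
import numpy as np

def ols_fit_stats(X, y):
    """OLS with adjusted R^2 and a Gaussian AIC (up to additive
    constants); a generic sketch of the model comparison, not the
    authors' estimation code."""
    n, k = X.shape
    A = np.column_stack([np.ones(n), X])          # intercept + regressors
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    rss = float(resid @ resid)
    tss = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - rss / tss
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
    p = k + 2                                     # slopes + intercept + error variance
    aic = n * np.log(rss / n) + 2 * p
    return beta, r2_adj, aic

# synthetic stand-ins for the VIX, past volatility, and the SCC index S
rng = np.random.default_rng(0)
n = 400
vix = rng.normal(20.0, 5.0, n)
past_vol = rng.normal(0.15, 0.03, n)
s = rng.uniform(0.0, 1.0, n)
future_vol = 0.01 * vix + 0.3 * past_vol + 0.1 * s + rng.normal(0.0, 0.02, n)

_, r2_reduced, aic_reduced = ols_fit_stats(np.column_stack([vix, past_vol]), future_vol)
_, r2_expanded, aic_expanded = ols_fit_stats(np.column_stack([vix, past_vol, s]), future_vol)
# with an informative S, the expanded model raises R^2_adj and lowers the AIC
```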
3.2. U.S. House Price Index. In addition to the financial market data, we also perform our analysis on the U.S. house price data, which consists of quarterly house price indices for 51 U.S. states from 1970 to 2017. We estimate the directed dependency networks on a time window of T = 20 quarters (5 years), and compare the calculated SCC index with the overall house price index for the entire U.S. real estate market in Figure 9.
It is evident from the results that the estimated SCC index is high in the midst of the 2007-2009 housing bubble, indicating that there were strong feedback components and dependencies between real estate prices in the U.S. states. We repeat the previous analysis and inspect the cross-correlation between the estimated SCC index and the volatility of the U.S. HPI index, estimated on a rolling window of 5 years (i.e., 20 quarters); the results are shown in Figure 10.

Figure 7: The estimated SCC index and the product of the VIX index and the SCC index as early-warning indicators for financial crises, shown with the S&P 500 market index as a benchmark of U.S. stock market performance. The sector index data is only available since 2000 (green lines), and the VIX data is available since 1990; thus, the VIX · S on the lower graph is only displayed from that point on. The time frames are nevertheless kept the same on both graphs for comparison.
Even though the SCC index was estimated on a rolling time window of length T = 20 quarters (i.e., 5 years), the cross-correlation in Figure 10 suggests that it is predictive of future volatility on a shorter time span, from 2 to 8 quarters (half a year to two years). To measure the extent to which volatility prediction is improved by including the SCC index, we employ regressions on the future volatility of the U.S. HPI index. To avoid overlapping windows in calculating the input and output variables (past and future volatility), we adopt a 2-year window (8 quarters) to estimate both the past volatility σ_{t−2y}(t) and the future volatility σ_{t+2y}(t), which is the dependent variable in the model. The reduced model reads

σ_{t+2y}(t) = β_0 + β_1 σ_{t−2y}(t),

and the expanded model includes the SCC index:

σ_{t+2y}(t) = β_0 + β_1 σ_{t−2y}(t) + β_2 S_HPI(t).

The results of the performed regressions are given in Table 2.
The results suggest that a significant improvement in the model is introduced by including the SCC index, indicating that our proposed indicator captures these effects and provides a timely indication of the systemic risk associated with the U.S. housing market. The strength of the relationship is similar to the one found in the financial market data in Section 3.1, although volatilities are generally lower in real estate prices. These results additionally support our hypothesis and extend the applicability of our approach from stock market data to economic complex systems such as the housing market.
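The lead-lag inspections behind the cross-correlation figures amount to evaluating the sample cross-correlation c(τ) = corr(x_t, y_{t+τ}) over a range of lags; a generic sketch (not the authors' code) is:

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Sample cross-correlation c(tau) = corr(x_t, y_{t+tau}); a positive
    peak at tau > 0 means x leads y by tau steps."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    taus = np.arange(-max_lag, max_lag + 1)
    c = np.empty(len(taus))
    for k, tau in enumerate(taus):
        if tau >= 0:
            c[k] = np.mean(x[:n - tau] * y[tau:])
        else:
            c[k] = np.mean(x[-tau:] * y[:n + tau])
    return taus, c

# toy series in which x leads y by 4 steps (e.g., quarters)
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = np.roll(x, 4) + 0.1 * rng.normal(size=300)
taus, c = cross_correlation(x, y, 10)
lead = taus[np.argmax(c)]   # the peak sits at a positive lag, tau = 4
```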

Summary
In this contribution, we have analyzed directed dependency networks of financial assets, estimated using the information-theoretic concepts of transfer entropy and mutual information. We employed a resampling technique to remove the estimation bias and obtain validated directed networks. Firstly, we found that the general predictability levels have been diminishing through the last decades in the U.S. stock market, which coincides with the reduction in bid-ask spreads and transaction costs, and may be interpreted as an indicator of rising market efficiency. In addition, we examined the estimated directed dependency networks for various periods in time, and report that the networks exhibit strong connections with feedback loops during periods of high volatility and market crashes, as opposed to the sparse networks estimated during periods of stable market growth. To test the hypothesis that information feedback in the financial network indicates elevated systemic risk levels, we estimated directed temporal networks on a rolling time window and identified the strongly connected component (SCC) for each time step.
To estimate the contribution of the SCC to the entire system, we defined the SCC index as the sum of all SCC node outdegrees, normalized by the maximum theoretical number of links. Using stock market and real estate data, we showed that the SCC index can provide a timely indication of systemic risk and predict future market volatility with remarkable precision, consistently across all the considered datasets. These results indicate that our methodology yields relevant information for evaluating systemic risk in financial networks, and may be useful for both academics and practitioners as a tool for developing early-warning signals for future market crashes.

Data Availability
The sector index and stock price data used in this paper were obtained from Yahoo! Finance, and are available from Stjepan Begusic (stjepan.begusic@fer.hr) upon request. The U.S. house price data was obtained from the Federal Reserve Bank of St. Louis database (FRED) [45], where it is openly available.

Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.

Introduction
Plasma structures are often treated as complex systems when considering their functionality and structure [1,2]. The models commonly used for studying their dynamics assume the differentiability of the physical quantities describing the space and time evolution, such as energy, momentum, and density, although such models are limited to large-enough plasma volumes, where differentiability and integrability can be applied [3-5]. Differential methods fail when the physical reality exhibits nonintegrable and nondifferentiable dynamics, for example, instabilities of complex systems that generate chaos or patterns. It is therefore necessary to define explicitly a scale resolution in the expression of the variables and, thus, implicitly in the equations governing the systems' evolution. Consequently, the dynamic variables remain space and time dependent, as in the classical description, but they also become dependent on the scale resolution. In other words, instead of working with nondifferentiable functions, one can work with various approximations of them, obtained by averaging at various scale resolutions. The dynamic variables then behave as the limit of a family of functions, which are nondifferentiable at zero scale resolution and differentiable at nonzero scale resolution [6-8].
This approximation is well suited to complex system dynamics, because real measurements are performed at a finite scale resolution. It implies building a physical theory on a new geometric structure, by introducing the scale laws (and thus the invariance to scale transformations) into the motion laws (which are already invariant to transformations of the space and time coordinates). These requirements are satisfied by the scale relativity theory (SRT) [6] and, more recently, by the SRT with an arbitrary constant fractal dimension [7], where the interaction complexity is replaced by nondifferentiability, and motions take place without constraints on continuous but nondifferentiable curves in a fractal space-time. Moreover, when the time scale resolution is large compared with the inverse of the highest Lyapunov exponent [9,10], deterministic trajectories are replaced by potential routes, and definite positions are replaced by positions characterized by probability densities. The motion curves are therefore both geodesics (in a fractal space) and fractal fluid stream lines, so that the geodesics are selected through the external constraints.
When the scale covariance principle holds, the transition from differentiable to fractal physics is achieved by replacing the usual time derivative with the fractal operator [7]

d̂/dt = ∂/∂t + V̂^l ∂/∂X^l − iλ(dt)^((2/D_F)−1) ∂^l ∂_l ,   (1)

where X^l is the fractal space coordinate, t is the nonfractal time coordinate playing the role of an affine parameter, and V̂^l = V^l_D − iV^l_F is the complex velocity field, whose real part V^l_D is independent of the scale resolution (dt) and whose imaginary part V^l_F depends on it. The diffusion coefficient λ describes the fractal-nonfractal transition. For the fractal dimension D_F of the motion curves, various definitions can be chosen (Kolmogorov, Hausdorff-Besicovitch, etc. [9,10]), but its value has to be kept constant throughout the analysis of the complex system dynamics.
The fractal operator (1) behaves as a scale-covariant derivative and allows us to obtain the fundamental equations of the dynamics, similarly to the differentiable case. Thus, applying the fractal operator (1) to the complex velocity field (4) gives the geodesic equation

∂V̂^i/∂t + V̂^l ∂_l V̂^i − iλ(dt)^((2/D_F)−1) ∂^l ∂_l V̂^i = 0,

while for an external scalar potential U it results in

∂V̂^i/∂t + V̂^l ∂_l V̂^i − iλ(dt)^((2/D_F)−1) ∂^l ∂_l V̂^i = −∂^i U.

In the present paper, by means of (4), new solutions for the fractal velocity fields are obtained under the hypotheses of constant density, null fractal force, and constant flux of momentum per unit length. The theoretical results are then applied in the field of laser-produced plasmas, to explain various expansion features of carbon plasma obtained by excimer laser ablation under various focusing conditions. Such transient plasmas play a significant role in the production of various nanostructured materials (carbon nanotubes, nanowires, graphene, etc.) [11-13], and controlling their properties (expansion velocities, density, and temperature) allows a significant enhancement of the deposition process [14-16].

Theoretical Aspects
Separating the dynamics of the complex system by resolution scale (the differentiable part through the real component V^l_D and the fractal part through the imaginary component V^l_F) leads to two coupled equations of motion. At the differentiable resolution scale, this separation implies a fractal force; setting it to zero, under the stated conditions, induces a particular velocity field whose explicit form is given in the following. Finding the solutions of these equations can be relatively difficult, since the system is nonlinear [4,5]. However, for the particular case of a stationary flow in an (x, y) symmetry plane, an analytical solution of this system exists. Under this assumption, (8) and (9) take a simplified form, and the flux of momentum per unit length is constant. These conditions imply that at the limit y = 0 the velocity is directed along the x-axis, while at large distances it lies along the y-axis; (12) is the conservation law of a specific kinetic-energy density along the x-axis.
Using the method from [4,5], the solutions (13) and (14) result. For y = 0, (13) yields the critical flow velocity (the maximum value along the x-axis) in the form (15), while (12), taking (15) into account, gives the critical cross section of the stream-line tube (the diameter of the flow tube corresponding to the critical flow velocity) of the complex system flow. Equations (13), (14), and (15) can be significantly simplified by introducing normalized quantities, where x_0 and y_0 are the specific lengths, V_c is the specific velocity, and μ is the fractalization degree (similar to the one defined in [17]); this leads to relation (20). Relation (20) suggests that, independent of the fractalization degree, the fractal velocity field is highly nonlinear: along the (Oξ) flow axis it is of soliton type (sech²), while along the (Oη) axis it is a "mixture" of soliton-type, kink-type (tanh), and related profiles [4,18]. The 3D dependences of the normalized components of the fractal velocity field on the nondimensional spatial coordinates are plotted in Figures 1(a) and 1(b), along with the field norm V(ξ, η) = [u(ξ, η)² + v(ξ, η)²]^(1/2) (Figure 1(c)), for the fractalization degree μ = 1. The result shows a central symmetry axis of the flow, Oξ, and a decrease of the velocity norm from the left boundary, while the local central maximum is adjacent to two symmetrical regions of minimum.
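To illustrate the nonlinear character noted above, the following minimal numerical sketch evaluates a toy velocity field on a normalized (ξ, η) grid. The sech² and sech²·tanh profiles below are stand-ins consistent with the qualitative soliton/kink description in the text, not the exact solutions of (20):

```python
import numpy as np

def sech(z):
    return 1.0 / np.cosh(z)

# toy sech^2 / tanh profiles illustrating the soliton- and kink-type
# behavior described in the text; NOT the exact field of relation (20)
mu = 1.0                                   # fractalization degree
xi = np.linspace(-3.0, 3.0, 201)           # normalized flow axis (O xi)
eta = np.linspace(-3.0, 3.0, 201)          # normalized transverse axis (O eta)
XI, ETA = np.meshgrid(xi, eta)

u = sech(XI) ** 2                          # soliton-type profile along O xi
v = mu * sech(XI) ** 2 * np.tanh(ETA)      # soliton-kink "mixture"
norm = np.sqrt(u ** 2 + v ** 2)            # velocity field norm V(xi, eta)
# u is maximal (u = 1) on the central axis xi = 0, and v vanishes on
# eta = 0, reproducing a central symmetry axis of the flow
```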
In Figures 2(a)-2(d), the flow velocity field of the fractal fluid is given for various values of the parameter μ. One can observe that for low values of μ, a V-shaped fluid flow exists, having two lateral branches characterized by a "quasi-stagnating" structure (decreased values of the velocity norm; see the arrow marks in Figure 2(a)). Meanwhile, in the central part, the speed reaches its local maximum value, regardless of the value of μ.
In the following, these theoretical results are applied to explain the expansion of laser-produced carbon plasma in various laser beam-focusing conditions.

Experimental Aspects
The experimental setup used for the measurements is described in detail in [19-21]. Briefly, carbon plasma was produced by the ablation of high-purity (99.99%) pyrolytic graphite targets placed in an evacuated chamber (0.0001 Pa) using a KrF excimer laser (248 nm, 20 ns, 5 Hz). The beam was directed onto the target surface at a 45° incidence angle and focused through a spherical lens of 500 mm focal length. The laser energy was typically set to E_L = 200 mJ/pulse. Before focusing, the beam profile had an approximately rectangular geometry, with a top-hat energy distribution along the long axis (24 mm characteristic size) and a Gaussian one along the short axis (10 mm full width at half maximum). The laser fluence was varied in a moderate regime, below 3 J/cm², by changing the distance between the focusing lens and the target surface, further denoted as the lens position (LP). Using the spot area on the target, we estimated fluence values of 1.6 J/cm² for LP = 46 cm and 0.9 J/cm² for LP = 41 cm (for simplicity, only the LP values will be given in the following). The global (i.e., not spectrally dispersed) plasma emission at different delays with respect to the laser pulse was recorded in side view by fast ICCD imaging (Andor iStar DH740-18U-03 camera). Each image corresponds to a single laser pulse and a 20 ns ICCD camera gate time. For the ion current measurements, a Faraday cup (FC) was placed on the axis of the ablation plume or circularly displaced at various angular positions with respect to this axis, preserving the same distance of 14 cm from the center of the laser impact spot on the target surface. A 5 mm diameter aperture was used at the entrance of the FC, to narrow the ion collecting area and achieve a better spatial resolution. A voltage of −30 V applied to the FC was determined to be sufficient for repelling the plasma electrons, in order to record the ion saturation current.
The electrical signals were recorded by using the 50 Ω input impedance of a 2 GHz oscilloscope (LeCroy), externally triggered by a fast-response signal photodiode (2 ns rise time). Each recorded time-of-flight (TOF) signal was obtained from a single laser pulse directed onto fresh surfaces of the target.

In Figure 3(a), the time evolution of the plasma images recorded for LP = 46 cm shows the existence of a V-like radiating shape, having two lateral arms (with an angle of about 30°) that originate from two "seeds" of plasma observed 300 ns after the laser pulse. The central part contains faster particles expanding along the plume axis with higher velocity, which are hardly discernible in a region of decreased contrast, for example, at 2000 ns (see the dotted red marks in Figure 3(a)). We note that for each image, relative intensities (with respect to the maximum recorded value) were used, to better observe the radiating features.
This peculiar plasma shape is significantly influenced by the focusing of the laser beam on the target. In Figure 3(b), at 500 ns after the laser pulse, that is, the time sequence considered most relevant for the subsequent plasma dynamics, one can observe that for LP = 41 cm and LP = 42 cm, the usual "ellipsoidal" plume shape was recorded, in agreement with the top-hat energy distribution along the long axis of the laser beam [22]. The V-like shape becomes observable at LP = 44 cm, while for LP = 46 cm, the two plasma "seeds" appear to be completely separated. In previous works [20,21], we observed some correlations with the ablation crater depth profile. Thus, for LP = 41 cm and LP = 42 cm, the crater shape mirrors the laser energy distribution, but for LP = 46 cm, two ablation depths were observed, with a deeper hole in the crater center and two lateral thresholds at distances similar to the long axis of the laser beam [20,21].
When the plasma expansion takes place in an ambient gas (Ar, 1 Pa pressure), the well-known drag effect [23-26] can be observed, as shown in Figure 3(c) for various focusing conditions at a 2000 ns delay. The lateral arms of the V-shape transform into two radiating "balls," flying almost parallel after 2000 ns.
Using the FC as an ion collector at various angular positions θ with respect to the central axis of the plasma expansion (i.e., the normal to the target surface at the laser impact spot) shows that the maximum number of charged particles is always recorded along the normal direction to the target, regardless of the laser beam focusing (Figures 4(a)-4(c)). Such a result is unexpected for LP = 46 cm, where most of the recorded optical radiation comes from the lateral arms. However, prompt and weakly emitting particles were previously distinguished in the central regions, which can explain the FC-recorded current.
Such fast particles are better evidenced in the presence of a deposition substrate located 50 mm in front of the target. In Figure 5, we extracted the emission intensities along the z-axis (normal to the target surface at the laser spot center) by cross-sectioning ICCD images (like the ones in Figure 3) at various delays with respect to the laser pulse (for LP = 46 cm and a fluence of 2.1 J/cm²). One can observe that fast particles accumulate in front of the substrate starting from 1500 ns, which roughly corresponds to a maximum expansion velocity v_fast = 3.1·10^4 m/s. They are followed by a slower structure, emitted from the target through a thermal mechanism [27-30]. For this part, the dependence of the maximum-emitting positions on the time delay (dashed line in Figure 5) was found to be linear, and the derived "slow" structure velocity (v_slow = 3.3·10^3 m/s) is one order of magnitude lower than that of the "fast" structure.
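As a quick consistency check of the velocities quoted above (assuming straight-line flight over the 50 mm target-substrate distance):

```python
# back-of-the-envelope check of the quoted expansion velocities,
# assuming straight-line flight from the target to the substrate
d = 50e-3                 # target-substrate distance (m)
t_fast = 1500e-9          # arrival time of the fast structure (s)
v_fast = d / t_fast       # ~3.3e4 m/s, same order as the quoted 3.1e4 m/s

v_slow = 3.3e3            # quoted "slow" structure velocity (m/s)
ratio = v_fast / v_slow   # roughly one order of magnitude, as stated
```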
Returning now to the theoretical results given by (20) and plotted in Figures 2(a)-2(d), we observe a good agreement with the plasma images given in Figure 3(b). The fractalization degree used in the model can be correlated with the collision rate in the ablation plasma, in the sense that increased collision rates correspond to low values of the fractalization degree μ. According to (19), the parameter μ is connected with the "fractal" contribution by means of the fractal-nonfractal transition coefficient λ, the scale resolution dt, and the fractal dimension D_F of the motion curves. Between two successive collisions in the ablation plasma, the trajectory of any particle is a straight line, that is, a continuous and differentiable curve, while at the impact point it becomes nondifferentiable. Then, assuming that all the collision impact points define an uncountable set of points, the trajectories become continuous and nondifferentiable curves. Under this conjecture, a more collisional plasma means more broken trajectories, that is, an increased fractal dimension D_F and, consequently, a decreased fractalization degree μ.
From an experimental point of view, tighter focusing of the laser beam (i.e., higher LP in Figure 3(b)) leads to a denser ablation plasma, that is, more collisions between particles, which explains the very good qualitative agreement between Figure 3(b) (experiment) and Figure 2 (theory). We note that in [17], a similar interpretation of the fractalization degree was able to explain the well-known fast (hot) and slow (cold) plasma structures ejected through electrostatic and thermal mechanisms, respectively. The V-shaped lateral arms of the expanding carbon plasma obtained for LP = 46 cm and LP = 45 cm correspond to "quasi-stagnating" regions of intense optical radiation emission. Our preliminary spectroscopic investigations show that these regions are dominated by vibrational transitions of the carbon dimer. It is known that the presence of the C_2 molecule in the plasma plume favors nanoparticle and cluster production [31,32]. Two mechanisms are assumed to be responsible for dimer formation: direct ejection from the target and three-body recombination [33,34]. In our opinion, the existence of the theoretically predicted and experimentally evidenced quasi-stagnating regions promotes the second mechanism in the enhancement of the C_2 molecule content. These investigations are underway and will be published in a separate paper.

Conclusions
A fractal approach is proposed to simulate the peculiar dynamics of carbon plasmas generated by excimer laser ablation, by substituting particle motions on continuous and differentiable curves in a Euclidean space with free motions on continuous and nondifferentiable (fractal) curves in a fractal space. The mathematical procedure imposes the use of a motion operator playing the role of a scale-covariant derivative, and then obtains the fractal space geodesics by postulating a principle of scale covariance. For plane symmetry, the stationary geodesic equations at a fractal scale resolution give a fractal velocity field with components expressed by means of nonlinear solutions (soliton type, kink type, etc.).
The theoretical results are applied in the field of laser-produced plasmas, more precisely, to explain various dynamics of carbon plasma obtained by excimer laser ablation under different focusing conditions. The occurrence of a V-shaped radiating plasma structure, consisting of two lateral stagnating arms of high emissivity and a fast-expanding central part of low emissivity, is evidenced both experimentally and theoretically. This peculiar plasma shape results for specific values of the fractalization degree, which are correlated with the focusing conditions of the ablation laser. A more collisional plasma means more broken trajectories, that is, an increased fractal dimension and, consequently, a decreased fractalization degree. From the experimental point of view, tighter focusing of the laser beam leads to a denser ablation plasma, that is, more collisions between particles, which explains the very good qualitative agreement between experiment and theory.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.

Introduction
Since the work of Freidlin and Wentzell [1], the large deviation principle (LDP) has been extensively developed for small noise systems and other types of models (such as interacting particle systems); see [2][3][4][5][6][7]. Cardon-Weber [2] proved an LDP for a Burgers-type SPDE driven by white noise. Marquez-Carreras and Sarra [3] proved an LDP for a stochastic heat equation with spatially correlated noise, and Mellali and Mellouk [4] extended Marquez-Carreras and Sarra's results [3] to a fractional operator. Jiang et al. [5] proved an LDP for a fourth-order stochastic heat equation driven by fractional noise. Budhiraja et al. [6] studied large deviation properties of systems of weakly interacting particles. Budhiraja et al. [7] proved a large deviation principle for Brownian particle systems with killing.
Similar to large deviation problems, moderate deviation problems also arise in the theory of statistical inference. Using the moderate deviation principle (MDP), one can obtain the rate of convergence and a method for constructing asymptotic confidence intervals; see, for example, Liming [8], Guillin and Liptser [9], Cattani and Ciancio [10], and other references therein. There are also many works on MDPs for stochastic (partial) differential equations; surveys and related literature can be found in Budhiraja et al. [11], Wang and Zhang [12], Li et al. [13], Yang and Jiang [14], and the references therein. On the other hand, fractional equations have attracted many physicists and mathematicians due to various applications in risk management, image analysis, and statistical mechanics (see Droniou and Imbert [15], Bakhoum and Toma [16], Levy and Pinchas [17], Mardani et al. [18], Niculescu et al. [19], Paun [20], and Pinchas [21] for a survey of applications). Stochastic partial differential equations involving a fractional Laplacian operator have been studied by many authors; see Mueller [22], Wu [23], Liu et al. [24], Wu [25], and the references therein.
Motivated by the above, we investigate moderate deviations for the stochastic fractional heat equation with fractional noise given below. Assume that the coefficients satisfy the following.
Under the conditions of Assumption 1, (1) possesses a unique solution in the sense of Walsh [26]. As the parameter ε → 0, the solution v^ε(t, x) of (1) tends to v^0(t, x), the solution of the corresponding deterministic equation below. This paper is mainly devoted to investigating the deviations of v^ε from the deterministic solution v^0 as ε → 0, that is, the asymptotic behavior of the trajectories.
where a_ε is the deviation scale, which strongly influences the asymptotic behavior. If a_ε = 1/√ε, we are in the domain of the large deviation estimate, which can be proved similarly to Jiang et al. [5].
The case a_ε ≡ 1 provides the central limit theorem: as ε ↓ 0, we prove in this paper that (v^ε − v^0)/√ε converges to a random field.
To fill the gap between the scale a_ε = 1 and the scale a_ε = 1/√ε, we focus on moderate deviations when the scale satisfies the conditions below. This paper is organized as follows. In Section 2, the definition of the fractional noise B^H(ds, dz) is given. In Section 3, the main result is given and proved. In the Appendix, some results about the Green kernel are given.
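For reference, the standard moderate-deviation scaling conditions (presumably what the missing display above intended) are:

```latex
a_\varepsilon \longrightarrow +\infty,
\qquad
\sqrt{\varepsilon}\, a_\varepsilon \longrightarrow 0,
\qquad \text{as } \varepsilon \to 0 ,
```

so that a_ε interpolates strictly between the central-limit scale a_ε ≡ 1 and the large-deviation scale a_ε = 1/√ε.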

Fractional Noise
Let H ∈ (1/2, 1), and let {B^H([0, t] × A) : t ∈ [0, T], A ∈ B(ℝ)} be a centered Gaussian family of random variables with covariance satisfying the relation below, where |A| denotes the Lebesgue measure of the set A ∈ B(ℝ) and B(ℝ) denotes the class of Borel sets in ℝ.
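For a noise that is fractional in time (with Hurst index H) and white in space, the covariance presumably intended by the missing display is the standard one (see Nualart and Ouknine [27]):

```latex
\mathbb{E}\bigl[B^H([0,t]\times A)\,B^H([0,s]\times B)\bigr]
  = \tfrac{1}{2}\bigl(t^{2H}+s^{2H}-|t-s|^{2H}\bigr)\,\lvert A\cap B\rvert .
```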
We denote by φ the set of step functions on [0, T] × ℝ. Let ℋ be the Hilbert space defined as the closure of φ with respect to the scalar product below.
According to Nualart and Ouknine [27], the mapping 1_{[0,t]×A} ↦ B^H(t, A) can be extended to an isometry between ℋ and the Gaussian space H₁(B^H) associated with B^H. Define the linear operator K*_H : φ → L²([0, T]) as below, where the kernel K_H is defined with C_H = (2αΓ(3/2 − α)/(Γ(α + 1/2)Γ(2 − 2α)))^{1/2}. Moreover, K_H satisfies the relation below for any step functions ϕ and ψ in φ. The operator K*_H therefore gives an isometry between the Hilbert space ℋ and L²([0, T]); the associated W is a space-time white noise, and B^H admits the representation below. Therefore, the mild formulation of (4) takes the form below, that is, the last term of (4) is equal to the expression shown. The following embedding proposition is given by Nualart and Ouknine [27].

Main Results and Their Proof
which is a Cameron-Martin space endowed with the norm below. Suppose e ∈ E, and let v^e be the solution of the following deterministic equation, which satisfies the estimate below. The bound (26) shows that it has a unique mild solution.
Similar to Jiang et al. [5], one can obtain the following.

Theorem 1. Let H ∈ (1/2, 1). Under Assumption 1, the law of the solution to (1) satisfies a large deviation principle on C^γ(D × [0, T]) with the good rate function given below. More precisely, for any Borel measurable subset B of that space, the bounds below hold, where B° and B̄ denote the interior and the closure of B, respectively.
We furthermore suppose that the coefficients satisfy the following.
Assumption 2. f is differentiable, and the derivative f′ of f is Lipschitz. That is to say, there exist positive constants m and m′ satisfying the conditions below. Together with the Lipschitz continuity of f, we conclude the bound below. We now give the central limit theorem.
Theorem 2. Let f and its derivative f′ satisfy Assumptions 1 and 2. Then, the convergence below holds. Let U^e be the solution to the following partial differential equation. Under Assumptions 1 and 2, by Theorem 1, U/a_ε satisfies a large deviation principle on C^γ([0, T] × D) with speed a²_ε and the good rate function given below. The second result is as follows: under Assumptions 1 and 2, the random field (1/(√ε a_ε))(v^ε − v^0) satisfies a large deviation principle on the space C^γ([0, T] × D) with speed a²_ε and the good rate function I(φ) defined by (36), where 0 < γ < 1.
3.3. The Proof of the Main Results. We first prove Theorem 1.
Our proof is based on the following proposition (see Doss and Priouret [28]).

Proposition 2.
Suppose {X^ε_j : ε > 0}, j = 1, 2, are two families of random variables, and (E_j, d_j), j = 1, 2, are two Polish spaces. Suppose that (1) there exists a map K : {I₁ < ∞} → E₁ whose restriction to each level set {I₁ ≤ a} is continuous for any a < ∞.
(3) For any R, δ, a > 0, there exist ρ > 0 and ε₀ > 0 such that, for all h ∈ E₁ with I₁(h) ≤ a and all ε ≤ ε₀, the estimate below holds. Then, {X^ε_2 : ε > 0} satisfies an LDP with the rate function below. To prove Theorem 1, one only needs to prove the following: (i) under a suitable topology, the map Z : {e : ∥e∥_E ≤ a} → C([0, T], L^p(D)) is continuous for any a > 0.
(ii) For the Freidlin-Wentzell inequality: for any R > 0, η > 0, and e ∈ E, there exists a δ > 0 satisfying the bound below.

Theorem 4. When the level set {∥e∥_E ≤ a} is endowed with the topology of uniform convergence on [0, T] × D, the associated solution map is continuous for any a ∈ (0, +∞).
Proof. One only needs to prove that, for fixed a > 0 and e, h ∈ {∥e∥_E ≤ a}, the estimate below holds. Note the decomposition that follows. We then deal with II(t, x), together with the accompanying bound. Recalling that K*_H is defined by (11), one can obtain the estimate below. Using Gronwall's inequality, we conclude. The proof of the theorem is completed.
We now prove the Freidlin-Wentzell inequality. Introduce the change of measure given by the Radon-Nikodym derivative below. One can then prove (36): note that, under the new measure, the estimates below hold, so, by Gronwall's lemma, one can obtain the required bound. One can thus reduce the proof of (34) to the following theorem.
Theorem 5. Suppose e ∈ E with ∥e∥_E ≤ a. For each R > 0 and η > 0, there exists a constant δ > 0 satisfying the bound below. In the following, we give a key lemma used to prove Theorem 1, which is similar to Cardon-Weber [2]; the proof is omitted.
We first recall the Garsia-Rodemich-Rumsey lemma as given in Bally et al. [29].

Lemma 4. Assume p > 1, and let {U^ε(t, x) : (t, x) ∈ [0, T] × D} be a family of real-valued stochastic processes. Suppose the following is true.
Then, for any r ∈ (1, p), the estimate below holds. Now we can prove Theorem 2.
To this end, we need to prove that Assumptions 3 and 4 are satisfied. Using Taylor's formula, there exists a ξ^ε(t, x) such that the expansion below holds. Note that f′ is Lipschitz continuous and ξ^ε(t, x) ∈ (0, 1); one can then obtain the bound below. Using Hölder's inequality, for p < 2 and 1/α < q < α with 1/p + 1/q = 1, one can obtain the estimate below, together with (5).
Proof of Theorem 3. By Theorem 1, U/a_ε obeys a large deviation principle on C^γ([0, T] × D) with the rate function I given by (30) and speed a²_ε. By Dembo and Zeitouni ([30], Theorem 4.2.13), it suffices to prove that U^ε/a_ε is exponentially equivalent to U/a_ε, that is, that (67) holds for any δ > 0. To prove (67), we only need to prove the estimate below; note the decomposition that follows. For any q ∈ (1, α), x, x′ ∈ (0, 1), 1/p + 1/q = 1, and 0 ≤ s ≤ t ≤ T, by Hölder's inequality, (32), and (A.9), we can obtain the bound below, where μ′ ∈ (0, (α + 1)/q − 1/α). Together with (99) and (100), we obtain the estimate below; thus, for t ∈ [0, T], the bound shown holds. By (64), to prove (67) we only need to prove the statement below for any fixed parameters. By the same method as in the proof of (59), and similarly to the proof of (53), together with (40) and (A.11), for any t ∈ [0, T] and any fixed θ > 0, one can obtain the required bound for any β.
The operator D^{δ,α} is a closed, densely defined operator on L²(ℝ), and it is the infinitesimal generator of a semigroup which is in general neither symmetric nor a contraction. This operator is a generalization of various well-known operators, such as the Laplacian operator (when α = 2), the inverse of the generalized Riesz-Feller potential (when α > 2), and the Riemann-Liouville differential operator (for particular values of δ). It is self-adjoint only when δ = 0, and in this case, it coincides with the fractional power of the Laplacian. We refer the readers to Debbi and Dozzi [31] for more details about this operator. According to Komatsu [32], D^{δ,α} can be represented, for 1 < α < 2 and for 0 < α < 1 respectively, by the integral expressions below, where κ^δ_− and κ^δ_+ are two nonnegative constants satisfying κ^δ_− + κ^δ_+ > 0, φ is a smooth function for which the integral exists, and φ′ is its derivative. This representation identifies D^{δ,α} as the infinitesimal generator of a nonsymmetric α-stable Lévy process. Suppose G^δ_α(t, x) is the fundamental solution to the following equation, where δ₀(·) is the Dirac distribution. Using the Fourier transform, one can obtain the expression for G^δ_α(t, x) given below. Let us list some known facts on G^δ_α(t, x) which will be used later on (see, e.g., Debbi and Dozzi [31]).
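As a numerical aside (our own sketch, not material from the paper), the self-adjoint case δ = 0 can be explored directly: the Green kernel then has Fourier transform exp(−t|ξ|^α), so it can be tabulated with an inverse FFT.

```python
import numpy as np

# Sketch for the δ = 0 case: G_alpha(t, x) solves du/dt = -(-Δ)^{α/2} u
# and has Fourier transform exp(-t |ξ|^α); evaluate it by inverse FFT.
def green_kernel(t, alpha, n=4096, L=40.0):
    """Return a grid x on [-L/2, L/2) and the kernel G_alpha(t, x) on it."""
    dx = L / n
    x = (np.arange(n) - n // 2) * dx
    xi = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # angular frequencies
    g_hat = np.exp(-t * np.abs(xi) ** alpha)     # Fourier transform of G
    g = np.fft.fftshift(np.fft.ifft(g_hat).real) / dx
    return x, g

# alpha = 2 should reproduce the Gaussian heat kernel exp(-x^2/4t)/sqrt(4*pi*t).
x, g2 = green_kernel(t=1.0, alpha=2.0)
gauss = np.exp(-x**2 / 4.0) / np.sqrt(4.0 * np.pi)
```

For α = 1 the same routine reproduces the Cauchy density t/(π(x² + t²)) up to a small periodization error, illustrating the α-stable character of the semigroup.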
Proof. For any x, y ∈ ℝ and t ∈ 0, T ,

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this article.

Introduction
Ion-acoustic solitary waves are well known to be an important example of nonlinear phenomena in modern plasma research [1][2][3]. Many researchers have studied ion-acoustic solitary waves in different plasma systems such as thermal, magnetized, and unmagnetized plasmas. Among the different plasma systems, magnetized plasma systems have attracted intense interest. Many authors have studied ion-acoustic solitary waves in magnetized plasma based on the quantum hydrodynamic (QHD) model [4,5]. The QHD model is derived from the basic system of equations of ion-acoustic solitary waves and is one of the macroscopic mathematical models used to describe the hydrodynamic behavior of quantum plasmas.
For simplicity, 1D and 2D nonlinear partial differential equations have been used to describe the evolution of nonlinear ion-acoustic solitary waves. For the simplest 1D geometry, where the ion-acoustic solitary waves become solitons, Washimi and Taniuti [6] derived the KdV equation by using the reductive perturbation method. Kako and Rowlands [7] derived the 2D KP equation based on the results of Washimi and Taniuti. However, in the real magnetized plasma environment, 1D and 2D models cannot solve some of the problems encountered in the motion of ion-acoustic solitary waves. Thus, it is necessary to introduce higher-dimensional theories for the nonlinear ion-acoustic solitary waves. Therefore, in this paper, we discuss a new 3D model for nonlinear ion-acoustic solitary waves. Most of the QHD models, such as the KdV model, mKdV model, and KP model, are integer-order models; fractional-order models have rarely been considered. Fractional calculus is a generalization of integer calculus. Many of the physical processes that have been explored to date are nonconservative, so it is important to be able to apply the power of fractional differentiation to them [8][9][10]. However, because of its nonlocal character, fractional calculus long saw limited use in physics and engineering. With the development of nonlinear science, fractional calculus theory has been continuously developed to date. Researchers have discovered that the derivatives and integrals of fractional-order models are suitable for describing various physical phenomena. In recent years, the application of fractional differential equations has attracted increasing attention in plasma physics [11]. Thus, research on fractional-order models is necessary.
The solution of integer-order equations is a research hot spot in the development of various models [12][13][14], and similarly, the solution of fractional models has been a focus of our research [15,16]. Thus, many solution methods have been found and used to solve fractional-order equations. For instance, the iterative method [17][18][19], the Hirota bilinear method [20,21], the trial function method [22], the homotopy perturbation method [23], and other methods have all been developed in recent decades. In the past, researchers solved integer-order models by using the Hirota bilinear method; recently, the Hirota bilinear method has also been used to solve fractional models. In this paper, using the Hirota bilinear method, we obtain soliton solutions for the new model. Various phenomena can be explained via the application of the solutions given by the above methods [24][25][26]. Additionally, the use of these methods enables a better understanding of various magnetized plasma phenomena. Therefore, based on the solutions derived by the abovementioned methods, we seek to determine the properties of ion-acoustic solitary waves. The properties of the model include conservation laws [27,28], boundary value problems [29,30], and integrable systems [31,32].
The research on conservation laws plays an important role in the study of the physical phenomena in nonlinear magnetized plasma. Conservation laws are a mathematical formulation indicating that the total amount of a certain physical quantity remains the same during the evolution of a physical system [33,34]. In 1918, Noether [35] proved that each conservation law is associated with an appropriate symmetry and can be derived from the Lagrangian function and the invariance principle. In 1996, Riewe [36] introduced the Lagrangian function for the fractional derivative. In the past two decades, many different types of fractional Euler-Lagrange equations have been generalized. Based on these conclusions, some fractional generalizations of Noether's theorem were proved [37], and many fractional conservation laws were obtained [38]. To study the conservation laws of fractional differential equations, we use Lie symmetry analysis to construct the conserved vectors [39,40]. In this paper, applying the basic system of equations of ion-acoustic solitary waves [41], we develop a new 3D model. Using the new model, we study the conservation laws and the solution of ion-acoustic solitary waves. The rest of the paper is structured as follows: In Section 2, based on the basic system of equations of ion-acoustic solitary waves, we obtain a new 3D Schamel-KdV equation by using multi-scale analysis and the perturbation method [42]. A new 3D TSF-Schamel-KdV equation is obtained in Section 3 according to the new integer-order model and by using the semi-inverse method and the fractional variational principle [43,44]. In Section 4, applying the Riemann-Liouville fractional derivative [39,40], we discuss the conservation laws of the new fractional model. In Section 5, according to the Hirota bilinear method, we obtain the soliton solutions of the 3D TSF-Schamel-KdV equation.
The propagation of solitary waves is important because it describes the characteristic nature of the interaction of the waves and the plasmas. Therefore, using soliton solutions [17,18], we study the characteristics of motion of ion-acoustic solitary waves.

Derivation of the 3D Schamel-KdV Equation
We use the basic system of equations of ion-acoustic solitary waves given below, where n is the ion number density and (u, v, w) are the ion fluid velocities in the x-, y-, and z-directions, respectively; φ is the electric field potential, n_e is the electron number density, and Ω is the uniform external magnetic field. The ion-acoustic solitary waves are assumed to propagate in the x-direction, and the direction of the external magnetic field is specified by a unit vector.
We consider the propagation of ion-acoustic solitary waves in 3D space (x, y, z) and introduce the independent stretched variables below, where ε is a small parameter characterizing the strength of the nonlinearity. The dependent variables are expanded in the form shown, with the boundary conditions given below. Substituting (3) and (4) into (1), we can obtain the approximate equations at each order of ε in the form below. According to (6) and (7), we can obtain (10). Substituting (10) into (8) and (9) and eliminating the second-order quantities, we can obtain the evolution equation below, where the coefficients satisfy a₂ = 1/2 and a₃ = a₄ = (1/2)(1 + 1/Ω²).

Derivation of the 3D TSF-Schamel-KdV Equation
In Section 2, we have obtained a new 3D integer-order Schamel-KdV equation. To learn more about ion-acoustic solitary waves, we seek to obtain the 3D TSF-Schamel-KdV equation by using the semi-inverse method and the fractional variational principle. First, we introduce some definitions as follows.
Definition 2 (see [44]). The left Riemann-Liouville fractional derivative of a function u(x, y, z, t) is defined as below.

Definition 3 (see [45]). The Riemann-Liouville fractional derivative of a function u(x, y, z, t) is defined as below.

According to the integer-order 3D Schamel-KdV equation, assuming u(x, y, z, t) = v_x(x, y, z, t), where v(x, y, z, t) is a potential function, the potential equation of the 3D Schamel-KdV equation can be written in the form below. Then, the functional of the potential equation (16) can be described as below, where λ_i, i = 1, 2, 3, 4, 5, are Lagrange multipliers to be obtained later. Using integration by parts for (17) and requiring the boundary terms to vanish, we obtain the reduced functional. Taking the variation of the above functional, integrating each term by parts, and applying the variation optimum condition, we obtain the variational equation. Comparing (19) with (16), we obtain the Lagrange multipliers. Therefore, the Lagrangian form of the integer-order 3D Schamel-KdV equation is given below. Similarly, the Lagrangian form of the 3D TSF-Schamel-KdV equation is given below. Thus, the functional of the 3D TSF-Schamel-KdV equation can be obtained. According to Agrawal's method [46,47], the variation of the functional (23) can be written as below. Using fractional integration by parts, we can obtain the corresponding Euler-Lagrange condition; optimizing the variation then yields the fractional equation.
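For reference, the standard left Riemann-Liouville fractional derivative of order α, which the missing display in Definition 2 presumably intended, reads:

```latex
{}_{0}D_t^{\alpha} f(t)
  = \frac{1}{\Gamma(n-\alpha)}\,\frac{d^n}{dt^n}
    \int_0^t \frac{f(s)}{(t-s)^{\alpha-n+1}}\,ds,
  \qquad n-1 < \alpha < n,\ n \in \mathbb{N}.
```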

Lie Symmetry Analysis.
In the previous section, we obtained the 3D TSF-Schamel-KdV equation. To learn about the properties of the new model, we study its conservation laws [48,49]. First, we convert (30) to the fractional partial differential equation form below. We assume that (31) is invariant under a one-parameter group of transformations with infinitesimal functions for the independent and dependent variables, whose prolongations are defined below, where the total derivative operators are given below. Applying the generalized Leibniz rule, together with the chain rule for a compound function defined below, we can obtain the following equation. For the chain rule (37), when the inner function equals 1, we obtain the simplification below; therefore, (38) can be rewritten accordingly. Similarly, using the generalized Leibniz rule and the chain rule for a compound function, we also obtain the analogous equation for the space-fractional terms. The infinitesimal generator can be defined as follows. Under the infinitesimal transformations, the invariance of the system (31) leads to the invariance condition Pr(Δ)|_{Δ=0} = 0. According to (42) and (43), we can obtain the invariance criterion below. Substituting (33), (34), (41), and (42) into (47) and equating the coefficients of like partial derivatives, fractional derivatives, and powers of the dependent variable, the set of determining equations can be obtained. By solving these equations, we obtain a series of Lie point symmetries, which can be written as below.
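For reference, the generalized Leibniz rule invoked above has the standard form for Riemann-Liouville derivatives:

```latex
D_t^{\alpha}\bigl(f(t)\,g(t)\bigr)
  = \sum_{n=0}^{\infty} \binom{\alpha}{n}\, D_t^{\alpha-n} f(t)\,
    \frac{d^n g(t)}{dt^n},
  \qquad
  \binom{\alpha}{n}
  = \frac{(-1)^{n-1}\,\alpha\,\Gamma(n-\alpha)}{\Gamma(1-\alpha)\,\Gamma(n+1)} .
```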

Conservation Laws.
We obtained the Lie symmetry generators in Section 4.2. In this section, we discuss the conservation laws of the 3D TSF-Schamel-KdV equation based on the obtained Lie symmetry generators. The conservation laws of (30) satisfy the continuity equation below, whose components are the conserved vectors. A formal Lagrangian for the 3D TSF-Schamel-KdV equation can be presented in the form below, where a new dependent variable is introduced. According to the formal Lagrangian, an action integral is defined below. Therefore, we can obtain the adjoint equation of (30) as the Euler-Lagrange equation, where the Euler-Lagrange operator is defined below in terms of the adjoint operators of the Riemann-Liouville fractional differential operators. These adjoints are given in terms of the right-sided fractional integral operators and the right-sided Caputo fractional differential operators of the corresponding orders. Therefore, the adjoint equation (54) can be rewritten as below. Based on Section 4.1, we obtain the infinitesimal symmetries of (30). We assume that the Lie characteristic function is given below; applying this to the fifth generator of the symmetry (50), we obtain the corresponding characteristic. Using the Riemann-Liouville fractional derivative, the components of the conserved vectors of (30) are defined below, where n = [α] + 1, and the auxiliary integral, given below, has the property stated. The conservation laws of the 3D TSF-Schamel-KdV equation are presented in detail in the appendix.

Multi-Soliton Solutions for the 3D TSF-Schamel-KdV Equation
The solution of a model is a relatively broad research area in science [50,51]. In this section, using the simplified Hirota bilinear method [24,52], we seek multiple-soliton solutions of the 3D TSF-Schamel-KdV equation. First, we introduce the fractional transforms below, with constant coefficients. Using these transformations and omitting the primes, we can convert the fractional derivatives into classical derivatives; then, (30) can be written in the form (65). We assume that the solution of (65) has the exponential form given below, with the phase variable defined accordingly.
Substituting (66) and (67) into the linear term of (65), we can obtain the dispersion relation below. Hence, the phase variable can be written accordingly.

Three-Soliton Solution.
To investigate the three-soliton solution of (65), we assume that the auxiliary function has the form given below. Substituting (77) and (81) into (65), we find the following pattern. According to the pattern obtained in Section 5.3, the N-soliton solutions for the 3D TSF-Schamel-KdV equation can be obtained, where N ≥ 1. Based on the single-soliton solution and the two-soliton solution, we can study the characteristics of the motion of the ion-acoustic solitary waves.
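As a sanity check of the Hirota-type construction invoked above (a hedged illustration on the classical KdV equation, not on the paper's TSF-Schamel-KdV model), one can verify symbolically that the substitution u = 2(ln f)_xx with f = 1 + exp(kx − k³t) produces a single-soliton solution:

```python
import sympy as sp

# Classic Hirota single-soliton check for the standard KdV equation
# u_t + 6 u u_x + u_xxx = 0 (an illustration, not the paper's model).
x, t, k = sp.symbols('x t k', real=True, positive=True)
f = 1 + sp.exp(k * x - k**3 * t)
u = 2 * sp.diff(sp.log(f), x, 2)
residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(residual))  # 0
```

The multi-soliton ansatz extends this by adding further exponentials and interaction terms to f, exactly as in the three-soliton auxiliary function above.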
In this section, we describe the interaction of two small ion-acoustic solitary waves with finite amplitude in a weakly relativistic 3D magnetized plasma. We can then study the characteristics of the motion of the solitary waves by changing the coefficients. Based on the single-soliton solution of ion-acoustic solitary waves, we obtain the evolution plots of the ion-acoustic solitary waves (see Figure 1). Figure 1 shows that the solitonic amplitude increases with an increase in the k₂/k₁ ratio and that the initially superimposed solitons travel different distances over a period of time for different choices of k₁ and k₂. Therefore, we conclude that the soliton moves along the positive x-axis with constant amplitude and velocity.
Examination of Figure 2(a) shows that the propagation trajectory of the soliton exhibits a periodic oscillation. Figure 2(b) shows the two-soliton interaction with constantly changing velocity. Near the origin, the trajectory is sinusoidal with periodic oscillation; far from the origin, the trajectory is parabolic-like. It can be seen from Figure 2(c) that the soliton generates a peak at the time of the interaction. Based on this, we conclude that, in addition to the periodic oscillation of the solitons in the local region, the large-scale propagation trajectories of such a structure show parabolic-type curves. Thus, if the variable coefficients are taken to have other forms, the corresponding characteristic curves will present different behaviors.
Remark 4. The present study describes the propagation and interaction of two small but finite-amplitude ion-acoustic solitary waves in a weakly relativistic 3D magnetized plasma. Our conclusions can be considered a generalization of the models suggested by Nejoh [53] and Pakira and Chowdhury [54], obtained by including the effect of different coefficients. It has been found that the initially superposed solitons travel different distances over a period of time for different choices of k₁ and k₂ and that the solitonic amplitude increases with an increase in the k₂/k₁ ratio. Moreover, the time-fractional order and the space-fractional orders play an important role in higher-order perturbation theory in the variation of the soliton amplitude. We believe that our research may be of basic interest for particle trapping experiments. Compared to other solitary waves, the unique features of the ion-acoustic solitary waves are the existence of an ultra-low frequency regime for wave propagation and the high charging of the grains, which can fluctuate because of the collection of plasma currents onto dust surfaces. It is difficult to carry out a reasonable comparison between previous studies and the present work. Nevertheless, due to the flexibility provided by the nonextensive method, we suggest that the quantitative discrepancies between theory and experiment can be reduced. Thus, the application of our model may be particularly interesting in some plasma environments, such as space plasmas, laser-plasma interactions, and the plasma sheet boundary layer of the Earth's magnetosphere.

Conclusions
In this paper, based on the basic system of equations of ion-acoustic solitary waves, we have obtained a new 3D Schamel-KdV equation by applying multi-scale analysis and the perturbation method. Then, based on the newly developed model and using the semi-inverse method and the fractional variational principle, a new 3D TSF-Schamel-KdV equation is obtained. We study the conservation laws and the soliton solutions of the new model. The main conclusions are as follows:
(1) Based on the basic system of equations and using multi-scale analysis and the perturbation method, we have obtained a new 3D Schamel-KdV equation. This equation is more suitable than other models for the study of ion-acoustic solitary waves. Furthermore, based on the new integer-order model and using the semi-inverse method and the fractional variational principle, we obtain the 3D TSF-Schamel-KdV equation. The fractional model opens the door to the study of ion-acoustic solitary waves.
(2) Using the Riemann-Liouville fractional derivative, we study the conservation laws of the 3D TSF-Schamel-KdV equation. Then, we discuss the soliton solutions of the new fractional model. Using the multi-soliton solutions, we study the characteristics of motion of ion-acoustic solitary waves.

Appendix
When the fractional order lies in (0, 1), the components of the conserved vectors corresponding to each of the symmetry generators are given by (A.1)-(A.6); when the fractional order lies in (1, 2), the corresponding components are given by (A.7)-(A.12).

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.

Introduction
Multifractal analysis is concerned with the study of the regularity structure of processes, from both a local and a global point of view. Multifractal Bayesian denoising is a regularity-based enhancement technique: it finds data close to the observations while conforming to a prescribed multifractal regularity. The method depends on the tuning of a small set of parameters capable of providing different improvements to the observed noisy data. It has been successfully utilized in many applications in which irregularity carries important information. Abundant natural phenomena in fields like physics, finance, construction, environment, medicine, and biology have been shown to display a fractal behavior [1][2][3].
Being the third most frequent cause of death after heart disease and cancer in developed countries, stroke is among the most common causes of cognitive impairment and vascular dementia [4]. Stroke can be described as the rapid loss of brain function owing to a disturbance in the blood supply to the brain [5]. As one of the foremost causes of death worldwide, stroke can generally be classified into two clinical course types: ischemic stroke and hemorrhagic stroke [6].
In this study, we worked on a dataset of individuals who had been diagnosed with ischemic stroke, covering the subtypes no stroke/TIA, large vessel, small vessel, cardioembolic, cryptogenic, dissection, and other (moyamoya, FMD, hereditary, coagulopathy, vasculitis, other rare). No stroke/TIA denotes transient ischemic attack, a mini stroke or minor stroke that happens when a blood clot blocks an artery for a brief period of time [7]. Large vessel stroke is infarction that occurs owing to artery-to-artery or low-flow embolism in the ipsilateral arterial tree (intracranial or extracranial), in segments of the carotid or vertebrobasilar arteries or the proximal middle cerebral artery [8]. Small vessel diseases of the cerebral vasculature contribute to varied forms of brain dysfunction, cell death, and injury; small vessel disease of the brain accounts for ≈25% to 30% of strokes, and it is a primary cause of cognitive decline and disability due to age-related and hypertension-related reasons [9]. Cardioembolic stroke is largely preventable, calling for efforts at primary prevention for major-risk cardioembolic sources; once stroke as a result of cardiac embolism has happened, the chances of recurrence are comparatively high for most cardioembolic sources. In addition, cryptogenic stroke is commonly seen in clinical practice: it is defined as a brain infarction not attributable to a source of definite cardioembolism, small artery disease, or large artery atherosclerosis, even after a standard cardiac, vascular, and serologic evaluation [10]. Dissection stroke refers to cervical arterial dissection, a major cause of stroke in young adults; however, its management remains uncertain despite the standard treatment administered through anticoagulants or antiplatelet drugs [11].
Demographic information, medical history, results of laboratory tests, treatments, and medications are among the most important data belonging to the patients, and they can be categorized into several major groups.
In recent years, fractal and multifractal analysis of biomedical data has attracted growing interest. On the relevant topics, Wang et al. [12], Yang et al. [13], Karaca and Cattani [14], Tsaneva [15], Doubal et al. [16], Shanmugavadivu et al. [17], and Ahammer et al. [18] underlined the significance of fractal and multifractal techniques for data analysis in medicine. It has also been acknowledged that multifractal techniques serve as successful feature descriptors in stroke applications [19][20][21]. Numerous studies report successful clustering of stroke subtypes with applications of k-means and FCM [22,23]. However, there is a shortage in the literature of studies combining numeric data [24,25], 2D multifractal denoising techniques, and machine learning approaches.
Our approach aims to be broader and more complete: this study is larger and more comprehensive than other studies of stroke datasets [22,23] in the literature, considering its dimension of 2204 patients (with 7 different stroke subtypes) and 23 attributes. The 7 stroke subtypes are as follows: no stroke/TIA, large vessel, small vessel, cardioembolic, cryptogenic, dissection, and other (moyamoya, FMD, hereditary, coagulopathy, vasculitis, other rare). The attributes include demographic information, medical history, results of laboratory tests, treatments, and medications. The clustering of the subtypes of stroke is a remarkable challenge in its own right. Although many different kinds of analysis have been performed on stroke datasets, no work has yet been reported that relates these attributes (demographic information, medical history, results of laboratory tests, ischemic stroke treatments, and medications) through 2D multifractal denoising techniques to fuzzy c-means and k-means algorithms applied for clustering purposes. For this reason, 2D multifractal denoising techniques (mBd, mNold, mPumpD) have been applied for the identification of significant and efficient attributes of the patients (among the 23) for the clustering of the 7 subtypes of stroke. This approach is well adapted to the case in which the data to be recovered are very irregular and nowhere differentiable, a property relevant to fractal or self-similar structures. We obtained regularity-based enhancement from the 2D multifractal denoising techniques (mBd, mNold, mPumpD), and the resulting datasets were clustered using the k-means and FCM algorithms. The 2D mBd stroke dataset yielded better clustering of the stroke subtypes than the original stroke dataset, the 2D mNold stroke dataset, and the 2D mPumpD stroke dataset.
When compared with the studies mentioned above, our study is comprehensive and comparative, since stroke datasets obtained from 2D multifractal denoising techniques are applied here for the first time in the literature to the k-means and FCM clustering algorithms.
The paper is organized as follows: Section 2 provides Materials and Methods. The methods of our approach are basic facts on Hölder regularity and multifractal analysis, multifractal Bayesian denoising in $S(g, \psi)$, numerical experiments (stroke dataset experiments with the multifractal Bayesian denoising technique), the k-means algorithm, and the fuzzy c-means algorithm. Results and discussion and conclusions are presented in Sections 3 and 4, respectively.

Patient Details. 2204 individuals from the University of Massachusetts Medical School, Worcester, Massachusetts, USA, were kept under observation in this study. Data were collected between March 9, 2007, and October 2, 2016. The individuals had an ischemic stroke diagnosis (see Figure 1). Ischemic strokes in the dominant hemisphere lead to more functional deficits than strokes in the nondominant hemisphere, as evaluated on the National Institutes of Health Stroke Scale (NIHSS).
A total of 2204 patients (414 males [labelled (1)] and 1790 females [labelled (0)]) were included in our experiments. The stroke patients are aged between 0 and 104, and seven subtypes of ischemic stroke were examined in this study. The demographic information, medical history, results of laboratory tests, treatments, and medications data pertaining to these 2204 patients can be seen in Table 1. Table 2 provides the main headings of the attributes used for the stroke subtypes.
Baseline characteristics of the patients involved, stratified by infarct side, are outlined in Table 3 for the stroke dataset.
Medications given to the patients are classified into broad categories: statin, antiHTN, antidiabetic, antiplatelet, and anticoagulation. Attributes of CT perfusion and neurointervention are utilized for the treatment. The modified Rankin Scale (mRS) score was evaluated at 90 days, in person or via phone interview, by a physician trained in strokes or by a stroke nurse with knowledge of strokes and certified in mRS. We comply with the Strengthening the Reporting of Observational Studies in Epidemiology guideline (https://www.strobe-statement.org/). In 0.09% of the individuals the disorder progressed to hemorrhagic stroke.

Methods.
In this study, we provide two potential contributions. We introduce 2D mBd, 2D mNold, and 2D mPumpD, which are relatively novel multifractal techniques computing the regular data from the stroke data. We propose the use of the stroke dataset and the regular, denoised stroke datasets (2D mBd, 2D mNold, 2D mPumpD) to be trained with the unsupervised k-means and FCM algorithms for clustering, with the aim of improving the clustering performance of the stroke subtypes. Our method relies on the steps specified below: (a) The 2D multifractal denoising techniques (2D mBd, 2D mNold, 2D mPumpD) were applied to the stroke dataset (which can be seen in Table 2) in order to identify significant regularity in the stroke dataset, which is the fundamental concept of multifractal denoising. The aim is to eliminate "insignificant" irregularities while retaining "meaningful" singularities, yielding a denoised dataset.
(b) The datasets obtained from the original stroke dataset and the 2D multifractal denoising techniques (mBd, mNold, and mPumpD) were clustered by applying the k-means and FCM algorithms.
(c) The datasets (stroke dataset, mBd stroke dataset, mNold stroke dataset, and mPumpD stroke dataset) were compared with respect to the clustering accuracies obtained with the k-means and FCM algorithms.
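Step (c) compares clustering accuracies. Since the cluster indices returned by k-means or FCM are arbitrary, accuracy is typically computed over the best matching of cluster labels to the true subtypes; a small sketch with a hypothetical helper (not taken from the paper):

```python
from itertools import permutations

import numpy as np

def clustering_accuracy(true, pred, K):
    """Best accuracy over all mappings of predicted cluster ids to true class
    ids (brute force is fine for K = 7; use the Hungarian method for large K)."""
    true = np.asarray(true)
    best = 0.0
    for perm in permutations(range(K)):
        mapped = np.array([perm[p] for p in pred])
        best = max(best, float(np.mean(mapped == true)))
    return best

true = [0, 0, 1, 1, 2, 2]
pred = [2, 2, 0, 0, 1, 1]      # same partition, different label names
acc = clustering_accuracy(true, pred, K=3)  # -> 1.0
```

For 7 subtypes the brute-force search visits 7! = 5040 permutations, which is still negligible.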

Basic Facts on Hölder Regularity and Multifractal Analysis.
We are concerned with the enhancement, or denoising, of complex data, namely the stroke dataset, relying on the analysis of the local Hölder regularity. We suppose that data enhancement is comparable to increasing the Hölder regularity at every point [27]. Such methods are well adapted to the case in which the data to be recovered are highly irregular, for instance, nowhere differentiable with rapidly varying local regularity. In this paper, our focus is on the pointwise Hölder exponent for simplicity of notation, and we assume that our data are not differentiable [28,29].
Let $\alpha \in (0,1)$ and $x_0 \in K \subset \mathbb{R}$. A function $f : K \to \mathbb{R}$ is in $C^{\alpha}_{x_0}$ if, for all $x$ in a neighborhood of $x_0$,
$$|f(x) - f(x_0)| \le C\,|x - x_0|^{\alpha}, \qquad (1)$$
where $C$ is a constant. The pointwise Hölder exponent of $f$ at $x_0$, denoted $\alpha_p(x_0)$, is the supremum of the $\alpha$ for which (1) is valid.
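For illustration, the pointwise Hölder exponent can be estimated numerically by regressing the log of the local oscillation against the log of the radius; a minimal sketch (a hypothetical helper, not the estimator used in the paper):

```python
import numpy as np

def holder_exponent(f, i, scales=range(1, 8)):
    """Estimate the pointwise Holder exponent alpha_p at sample i by regressing
    the log of the local oscillation max |f(x) - f(x_i)| against log radius."""
    logs_r, logs_osc = [], []
    for r in scales:
        window = f[max(0, i - r): i + r + 1]
        osc = np.max(np.abs(window - f[i]))      # oscillation within radius r
        if osc > 0:
            logs_r.append(np.log(r))
            logs_osc.append(np.log(osc))
    slope, _ = np.polyfit(logs_r, logs_osc, 1)   # log-log slope ~ alpha_p
    return slope

x = np.linspace(0, 1, 1025)                      # grid containing x = 0.5 exactly
a_smooth = holder_exponent(x ** 2, 512)          # differentiable: exponent ~ 1
a_cusp = holder_exponent(np.sqrt(np.abs(x - 0.5)), 512)  # cusp: exponent ~ 0.5
```

The smooth sample gives a slope near 1, while the square-root cusp gives a slope near 0.5, matching its Hölder exponent at the cusp.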
In this paper, we shall concentrate on the statistical approach that leads to a quantity named the large deviation multifractal spectrum [29,30]. This spectrum can be defined as follows: consider a stochastic process $X(t)$, $t \in K \subset \mathbb{R}$, on a probability space $(\Omega, \mathcal{F}, P)$. For notational convenience, we assume without loss of generality that $K = [0, 1]$.
Set $N_n^{\varepsilon}(\alpha) = \#\{k : \alpha - \varepsilon \le \alpha_k^n \le \alpha + \varepsilon\}$, where $\alpha_k^n$ is the coarse-grained Hölder exponent corresponding to the dyadic interval $I_k^n = [k2^{-n}, (k+1)2^{-n}]$; that is, $\alpha_k^n = \log|Y_k^n| / \log 2^{-n}$. Here $\#$ counts the number of nonempty boxes whose measure is portrayed by exponents from the interval $(\alpha - \varepsilon, \alpha + \varepsilon)$ [29], and $Y_k^n$ is some quantity measuring the variation of $X$ in the interval $I_k^n$. The choice $Y_k^n := X((k+1)2^{-n}) - X(k2^{-n})$ leads to the simplest analytical computations. Another possibility, which is the one used in this paper, is to take $Y_k^n$ to be the wavelet coefficient of $X$ at scale $n$ and location $k$ [29]. This definition is convenient in various respects, since it exploits the versatility of wavelet bases. However, it also has a drawback: the multifractal spectrum obtained in this way depends predominantly on the chosen wavelet $\psi$. Hence, if one sets $Y_k^n := c_{n,k}$, it does not make sense to speak of the spectrum of $X$ without reference to the chosen analyzing wavelet. Throughout the paper, the wavelet coefficient of the data is denoted by $c_{j,k}$, with $j$ being the scale and $k$ being the location [29,30].
The coarse-grained multifractal large deviation spectrum is then given by
$$f_g(\alpha) = \lim_{\varepsilon \to 0} \limsup_{n \to \infty} \frac{\log N_n^{\varepsilon}(\alpha)}{\log 2^{n}}.$$
The definition of $f_g(\alpha)$ is connected with the large deviation theorem, which provides the probabilistic interpretation of the multifractal spectrum [29].
The intuitive meaning of $f_g$ is as follows: for $n$ large enough, one has approximately $P_n(\alpha_k^n \simeq \alpha) \simeq 2^{-n(1 - f_g(\alpha))}$, in which $P_n$ denotes the uniform distribution over $\{0, 1, \ldots, 2^n - 1\}$. Hence, for all $\alpha$ such that $f_g(\alpha) < 1$, $1 - f_g(\alpha)$ measures the exponential rate of decay of the probability of finding an interval with coarse-grained exponent equal to $\alpha$ as $n$ tends to infinity. As a whole, $f_g$ is a random function. In applications, it is convenient to consider a deterministic version of $f_g$; see [29].
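As an illustration of these definitions, the coarse-grained exponents $\alpha_k^n$ and the counts $N_n^{\varepsilon}(\alpha)$ can be tabulated from increments (the simplest choice of $Y_k^n$ mentioned above); a sketch under these assumptions:

```python
import numpy as np

def coarse_grained_exponents(X, n):
    """alpha_k^n = log|Y_k^n| / log 2^(-n) over the 2^n dyadic intervals,
    with Y_k^n the increment of X over each interval (the simplest choice)."""
    step = len(X) // 2 ** n
    blocks = X[: 2 ** n * step].reshape(2 ** n, step)
    Y = np.abs(blocks[:, -1] - blocks[:, 0])
    Y = np.where(Y == 0, np.finfo(float).eps, Y)    # avoid log(0)
    return np.log(Y) / np.log(2.0 ** -n)

def spectrum_count(alphas, alpha, eps):
    """N_n^eps(alpha): number of intervals with exponent within eps of alpha."""
    return int(np.sum((alphas >= alpha - eps) & (alphas <= alpha + eps)))

# Brownian motion (H = 0.5): coarse-grained exponents concentrate near 0.5.
rng = np.random.default_rng(1)
B = np.cumsum(rng.standard_normal(2 ** 14)) / np.sqrt(2.0 ** 14)
a = coarse_grained_exponents(B, 8)
```

Histogramming `spectrum_count` over a grid of α values (and letting ε → 0, n → ∞) yields the empirical large deviation spectrum.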
Here, we present a technique based on the multifractal spectrum of the data instead of relying on the Hölder exponent merely. This approach generally ensures more robust estimates, because a higher-level description subsuming information on the entire data is used. Besides this, we also assume a semiparametric approach: more specifically, we put forth the assumption that the considered data belong to a given set of parameterized classes, described presently [31].
Let $G$ denote the set of lower semicontinuous functions from $\mathbb{R}^+$ to $\mathbb{R}^+ \cup \{-\infty\}$ [29]. We consider classes of random functions $X(t)$, $t \in [0,1]$, defined on $(\Omega, \mathcal{F}, P)$ and described according to Definition 1 below. Each class $S(g, \psi)$ is parameterized by the functional parameter $g \in G$ and a wavelet $\psi$, where the set $\{\psi_{j,k}\}_{j,k}$ constitutes a basis of $L^2$. For $j$ sufficiently large, the wavelet coefficients $c_{j,k}$ at scale $j$ are assumed to be identically distributed. Definition 1 then yields a plain interpretation in terms of multifractal analysis: for a given wavelet $\psi$, the normalized data have a deterministic multifractal spectrum equal to $1 + g$, with the further condition that the limit defining $g$, attained as a lim sup, is attained uniformly. This condition confirms that the rescaled statistics of the $c_{j,k}$ are sufficiently close to their limit for $j$ large enough, which allows a meaningful inference. The class $S(g, \psi)$ includes a wide variety of data. Denoising in $S(g, \psi)$ proceeds by the key steps of the classical Maximum A Posteriori (MAP) approach in a Bayesian frame, adjusted to our setting. The observed noisy data are $Y$, and it is assumed that $Y = X + B$, where $B$ is a noise independent of the original data $X$, with known law (note that we use orthonormal wavelets). Hence, we have $y_{j,k} = x_{j,k} + b_{j,k}$. The MAP estimate $\hat{x}_{j,k}$ of $x_{j,k}$ from the observation $y_{j,k}$ is defined to be an argument maximizing $P(x_{j,k} \mid y_{j,k})$. Since $P(y_{j,k})$ does not depend on $x_{j,k}$, by Bayes' rule, maximizing $P(x_{j,k} \mid y_{j,k})$ corresponds to maximizing the product $P(y_{j,k} \mid x_{j,k})\, P(x_{j,k})$ [29,30].

Multifractal Bayesian Denoising in $S(g, \psi)$.
The MAP estimate [30] is stated as $\hat{x}_{j,k} = \arg\max_x [P(y_{j,k} \mid x)\, P(x)]$. The term $P(y_{j,k} \mid x)$ can easily be calculated from the law of $B$ if $B$ is assumed centered, since (recall that we use orthonormal wavelets) the $b_{j,k}$ share the same law as $B$. The prior $P(x_{j,k})$ is inferred from our assumption that $X$ belongs to $S(g, \psi)$: for $x > 0$, set $\alpha_j(x) = \log_2(x)/(-j)$. This leads to the definition of an approximate Bayesian MAP estimate $\hat{x}_{j,k}$, where $\mathrm{sgn}(y_{j,k})$ is the sign of $y_{j,k}$ and $\hat{K} = (\sup_{j} \sup_{k} |y_{j,k}|)^{-1}$. The estimate can be justified heuristically as follows: $\log_2(\hat{K}|x_{j,k}|)/(-j) \simeq \alpha$ with $\alpha > 0$ suggests that $\hat{K}|x_{j,k}| < 1$ for all couples $(j,k)$; $\hat{K}$ is chosen as the smallest normalizing factor ensuring the latter inequality. In our experiments, we address the case in which the noise is centered Gaussian with variance $\sigma^2$; the resulting MAP estimate is proportional to $\mathrm{sgn}(y_{j,k})$, as given in (6).
Equation (6) provides an explicit formula for denoising, but it is often of limited practical use. In fact, in most applications one does not know the multifractal spectrum $g$ of $X$; without an evaluation of $g$, it is not possible to use (6) to obtain $\hat{x}_{j,k}$. Moreover, one should bear in mind that $g$ in general depends on the analyzing wavelet; hence it is necessary to understand the shape of the spectrum for a given wavelet. Furthermore, the main goal of our approach is to remove the characteristics of the noise from the denoised data $\hat{X}$. Regarding the use of the multifractal Bayesian approach in our study, a strong justification is to estimate $g$ in the following manner: (a) denoise $Y$, (b) evaluate the spectrum $\hat{g}$ of $\hat{X}$ numerically, and (c) set $g = \hat{g}$. With this approach, it is clear that prior knowledge of $g$ should not be required in the Bayesian approach. Hence, we present a "degenerated" version of (6), whose input is a single real parameter rather than the whole spectrum. The heuristic reads as follows: from the regularity perspective, a significant piece of information in the spectrum is its support, that is, the set of all the occurring Hölder exponents; $\alpha_0$ denotes the smallest regularity actually observed in the data [32,33]. The shapes of the spectra obtained through different analyzing wavelets depend on the wavelet, but their supports are included in $[\alpha_0, \infty)$. The "flat spectrum," therefore, encompasses the inherent information: it relies only on the positive real $\alpha_0$. Rewriting (6) with a flat spectrum gives an explicit simple expression. $\alpha_0$ is indeed a priori information, but it can be predicted from the noisy observations; in view of that, it is analogous to the threshold used in the classical soft or hard wavelet thresholding scheme. It is beneficial to regard $\alpha_0$ as a tuning parameter in applications.
Increasing $\alpha_0$ yields a smoother estimate (as it is assumed that the original data have a larger minimal exponent). It is instructive to make a comparison with the hard-thresholding policy on the wavelet coefficients (see more details in [29]).
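The classical hard-thresholding policy referred to here zeroes every wavelet coefficient whose magnitude falls below a threshold and reconstructs. A minimal sketch with a hand-rolled orthonormal Haar transform (illustrative only; the experiments in the study used FracLab):

```python
import numpy as np

def haar_fwd(x):
    """Orthonormal Haar transform; len(x) must be a power of two."""
    coeffs, approx = [], x.astype(float)
    while len(approx) > 1:
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2))      # detail coefficients
        approx = (even + odd) / np.sqrt(2)
    return approx, coeffs[::-1]                        # coarsest level first

def haar_inv(approx, coeffs):
    for d in coeffs:
        even, odd = (approx + d) / np.sqrt(2), (approx - d) / np.sqrt(2)
        approx = np.empty(2 * len(d))
        approx[0::2], approx[1::2] = even, odd
    return approx

def hard_threshold_denoise(y, thr):
    """Zero every detail coefficient with magnitude below thr, then invert."""
    approx, coeffs = haar_fwd(y)
    kept = [np.where(np.abs(d) >= thr, d, 0.0) for d in coeffs]
    return haar_inv(approx, kept)

rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(256)
denoised = hard_threshold_denoise(noisy, thr=0.6)
```

The denoised signal has a lower mean squared error than the noisy one; the Bayesian approach instead shrinks each coefficient according to the assumed regularity, which better preserves the local Hölder exponents.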

Numerical Experiments.
We present some results regarding the stroke dataset. In each case, the results of the Bayesian multifractal denoising and of the classical hard-thresholding technique are shown. For all procedures and the stroke dataset, the parameters (see Tables 2 and 3) are set in order to obtain the best fit to the known original data. On the whole, the following conclusions can be inferred from these experiments. For irregular data like the ones handled here, which belong to $S(g, \psi)$, the Bayesian method yields more satisfactory results than classical wavelet thresholding. In particular, this method preserves a roughly correct regularity along the path, whereas wavelet shrinkage yields data that have both too smooth and too irregular regions. In this study, the numerical experiments were carried out by applying the 2D mBd, 2D mNold, and 2D mPumpD techniques to the stroke dataset (see Table 2).
The steps of the multifractal Bayesian denoising technique applied to the stroke dataset to obtain the regular, denoised stroke dataset are provided below.
Step 1. We consider the stroke dataset $X = (x_i)_{i=1,\ldots,2204 \times 23}$.
Step 2. For each attribute in the stroke dataset, the MAP estimates $\hat{x}_{j,k}$ of the coefficients $x_{j,k}$ are calculated. Here, $K$ and $\alpha$ are positive constants and each $c_{j,k}$ is a random variable supported in $[0,1]$; all $c_{j,k}$ are independent, having the same law levelwise. For instance, $c_{j,k}$ and $c_{j,k'}$ are distributed identically for all $j, k, k'$. In addition, we assume that $g(0) < 1$ for infinitely many $j$.
In line with the law of the local regularity behavior of the functions in $S$, we consider the particular case of data with uniformly distributed wavelet coefficients.
Step 3. Our second type of stroke data involves one of the simplest fractal stochastic processes, the fractional Brownian motion (fBm) (for more information, see [29]). As is well known, fBm is the zero-mean Gaussian process $B_H(t)$ with covariance function
$$E\left[B_H(t)\,B_H(s)\right] = \frac{\sigma^2}{2}\left(|t|^{2H} + |s|^{2H} - |t-s|^{2H}\right),$$
where $H$ is a real number in $(0,1)$ and $\sigma$ is a real number as well.
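Given this covariance, fBm sample paths can be synthesized for illustration by Cholesky factorization of the covariance matrix (a standard but O(n³) method; σ = 1 assumed):

```python
import numpy as np

def fbm_cholesky(n, H, seed=0):
    """Sample fBm at t = 1/n, ..., 1 by Cholesky factorization of
    E[B_H(s)B_H(t)] = 0.5 * (s^(2H) + t^(2H) - |s - t|^(2H))  (sigma = 1)."""
    t = np.arange(1, n + 1) / n
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # jitter for stability
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal(n)

B = fbm_cholesky(256, H=0.6)   # H = 0.6, the value used later in the study
```

For large n, circulant-embedding methods are preferred over the direct factorization used here.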
Step 4. Hence, the result of our denoising procedure is in principle wavelet-dependent. The impact of the wavelet is controlled through the choice of the prior, that is, of the multifractal spectrum among all admissible ones. In practice, we have found that few variations are observed if one uses a Daubechies wavelet [29] of length 10 and a nonincreasing spectrum supported on $[\alpha_0, \infty)$ with $g(\alpha_0) = 1$.
The denoised stroke dataset obtained by applying Steps 1 to 4 to the stroke dataset is presented in Figure 2(b), the mesh plot display for the 2D mBd stroke dataset.
In this study, the 2D mBd technique is applied to the main headings of attributes (as can be seen in Table 2) pertaining to the stroke subtypes. The 2D mBd multifractal technique relies on the fact that stroke data enhancement is comparable to increasing the Hölder regularity at each point. The stroke dataset $X$ was processed in the FracLab [26] program to obtain the regularity-based enhancement $\hat{X}$ from the 2D multifractal denoising techniques. The regularity-based enhancement of the 2D mBd stroke dataset $\hat{X}$ was clustered with the k-means and FCM algorithms. Consequently, the most accurate clustering was attained for the subtypes of stroke.

k-Means Algorithm.
Let $X = (x_1, x_2, \ldots, x_n)$ be $d$-dimensional data to be grouped into a set of $K$ clusters, $C = \{c_k,\ k = 1, \ldots, K\}$. The k-means algorithm discovers a partition in which the squared error between the empirical mean of a cluster and the points in the cluster is minimized. With $\mu_k$ being the mean of cluster $c_k$, the squared error between $\mu_k$ and the points in cluster $c_k$ is defined by [34][35][36]
$$J(c_k) = \sum_{x_i \in c_k} \|x_i - \mu_k\|^2.$$
The aim of k-means is to minimize the sum of the squared error over all $K$ clusters,
$$J(C) = \sum_{k=1}^{K} \sum_{x_i \in c_k} \|x_i - \mu_k\|^2.$$
k-means is initiated with a primary partition with $K$ clusters and assigns patterns to clusters so as to reduce the squared error. The squared error always decreases as the number of clusters $K$ increases (with $J(C) = 0$ when $K = n$), so it can be minimized only for a fixed number of clusters [37].
The clustering of the training set with the k-means clustering algorithm can be seen in Algorithm 1.
The key steps of the k-means algorithm are as follows: Step 1. Choose an initial partition with $K$ clusters; repeat Steps 2 and 3 until the cluster membership stabilizes.
Steps 2-3. Form a new partition by assigning each pattern to its closest cluster center.
Step 4. Calculate the new cluster centers.
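The steps above can be sketched in a minimal implementation (illustrative only, not the Matlab routine used in the study):

```python
import numpy as np

def kmeans(X, K, iters=100, seed=0):
    """Plain k-means minimizing J(C) = sum_k sum_{x in c_k} ||x - mu_k||^2."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), K, replace=False)]   # Step 1: initial partition
    for _ in range(iters):
        # Steps 2-3: assign each pattern to its closest cluster center
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step 4: recompute centers (keep the old one if a cluster empties)
        new = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centroids[k] for k in range(K)])
        if np.allclose(new, centroids):                    # membership stabilized
            break
        centroids = new
    return labels, centroids

# Two well-separated blobs should be recovered exactly.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
labels, centers = kmeans(X, K=2)
```

For the stroke data, X would be the 2204 × 23 matrix and K = 7.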
The clustering of the stroke dataset with the k-means clustering algorithm is as follows: the stroke dataset $X = (x_1, x_2, \ldots, x_{2204 \times 23})$ was applied to the k-means algorithm. The main steps are as follows: Step 1. An initial partition with $K = 7$ clusters is selected for $X = (x_1, x_2, \ldots, x_{2204 \times 23})$; Steps 2 and 3 are repeated until the stroke subtype cluster membership stabilizes.
Steps 2-3. For 1000 iterations, the data of each patient in the stroke dataset is assigned to the closest cluster centroid.
Step 4. The new cluster centroids of the stroke subtypes are calculated. The results of the clustering procedure obtained from the application of the k-means algorithm (see Algorithm 1) to the stroke dataset are presented in Figure 4(a). The clustering of the 2D mBd stroke dataset with the k-means clustering algorithm can be described as follows: the stroke datasets (mBd, mNold, mPumpD) obtained from the 2D multifractal denoising techniques, $\hat{X} = (\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_{2204 \times 23})$, were applied to the k-means algorithm. The main steps are as follows: Step 1. An initial partition with $K = 7$ clusters is selected for $\hat{X} = (\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_{2204 \times 23})$; Steps 2 and 3 are repeated until the stroke subtype cluster membership stabilizes.
Steps 2-3. For 1000 iterations, the data of each patient in the stroke dataset is assigned to the closest cluster centroid.
Step 4. The new cluster centroids of the stroke subtypes are calculated accordingly. The results of the clustering procedure obtained from the application of the k-means algorithm (see Algorithm 1) to the 2D mBd stroke dataset are presented in Figure 4(b).

Fuzzy c-Means Algorithm.
The FCM algorithm assigns data to each category through the use of fuzzy memberships [38][39][40][41]. Let $X = (x_1, x_2, \ldots, x_n)$ denote $d$-dimensional data to be split into $K$ clusters. The algorithm iteratively minimizes the cost function
$$J = \sum_{i=1}^{n} \sum_{k=1}^{K} u_{ik}^{m}\, \|x_i - v_k\|^2,$$
where $u_{ik}$ denotes the membership of data $x_i$ in the $k$th cluster, $v_k$ is the $k$th cluster center, $\|\cdot\|$ is a norm metric, and $m$ is a constant greater than 1.
The cost function is minimized when high membership values are assigned to data close to the centroid of their cluster, and low membership values are assigned to data distant from the centroid. The membership function represents the probability that a data point belongs to a specific cluster.
As for the FCM algorithm, this probability depends merely on the distance between the data and the cluster center of each stroke patient in the feature domain. The cluster centers and membership functions are updated using
$$u_{ik} = \frac{1}{\sum_{j=1}^{K} \left( \|x_i - v_k\| / \|x_i - v_j\| \right)^{2/(m-1)}}, \qquad v_k = \frac{\sum_{i=1}^{n} u_{ik}^{m}\, x_i}{\sum_{i=1}^{n} u_{ik}^{m}}.$$
The clustering of the training set with the FCM clustering algorithm is stated in Algorithm 2.
Beginning with a preliminary guess for each cluster center, the FCM converges to a solution for $v_k$ representing a local minimum of the cost function. Convergence can be detected by comparing the changes in the membership function or in the cluster centers at two consecutive iteration steps.
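The update equations above can be sketched in a minimal fuzzy c-means implementation (with m = 2 as used later in the study; an illustrative sketch, not the Matlab routine the authors used):

```python
import numpy as np

def fcm(X, K, m=2.0, iters=300, tol=1e-3, seed=0):
    """Fuzzy c-means: alternate the membership update
    u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)) and the centroid update
    v_k = sum_i u_ik^m x_i / sum_i u_ik^m until memberships stabilize."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), K, replace=False)]        # initial cluster centers
    U = np.zeros((len(X), K))
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)) *
                       (d ** (-2 / (m - 1))).sum(axis=1, keepdims=True))
        Um = U_new ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]       # centroid update
        done = np.max(np.abs(U_new - U)) < tol         # memberships stabilized?
        U = U_new
        if done:
            break
    return U, V

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.2, (40, 2)), rng.normal(4, 0.2, (40, 2))])
U, V = fcm(X, K=2)
labels = U.argmax(axis=1)      # hard labels from soft memberships
```

Unlike k-means, each row of U sums to 1, so a point can carry partial membership in several clusters.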
The clustering of the stroke dataset with the FCM clustering algorithm can be depicted as follows: the stroke dataset $X = (x_1, x_2, \ldots, x_{2204 \times 23})$ was applied to the FCM algorithm (see (14)). Here, $u_{ik}$ represents the membership of data $x_i$ in the $k$th cluster, $v_k$ is the $k$th cluster center, $\|\cdot\|$ is a norm metric, and $m$ is chosen as 2.
Step 4. The cost function is minimized when high membership values are assigned to the stroke data close to the cluster centroids, over 1000 iterations. The results of the clustering procedure obtained from the application of the FCM algorithm (see Algorithm 2) to the stroke dataset are presented in Figure 5(a).
Algorithm 1: k-means algorithm for the stroke dataset and 2D mBd stroke datasets.
The clustering of the 2D mBd stroke dataset with the FCM clustering algorithm is as follows: $\hat{X} = (\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_{2204 \times 23})$, obtained from the 2D multifractal denoising techniques, was applied to the FCM algorithm. The main steps are stated below: Steps 1-3. Here, $u_{ik}$ shows the membership of data $\hat{x}_i$ in the $k$th cluster, $v_k$ is the $k$th cluster centroid, $\|\cdot\|$ is a norm metric, and $m$ is chosen as 2.
Step 4. The cost function is minimized when high membership values are assigned to the stroke data close to the cluster centroids, over 1000 iterations. The results of the clustering procedure obtained from the application of the FCM algorithm (see Algorithm 2) to the 2D mBd stroke dataset are presented in Figure 5(b).

Results and Discussion
In order to have a detailed view of the relationships between the variables in the stroke dataset and the 2D mBd stroke dataset, they are demonstrated as plots based on the mesh function. The following subsections cover the application of the stroke dataset and the regular, denoised 2D stroke datasets to the k-means algorithm, and the calculations and results pertaining to their application to the FCM algorithm.

Mesh Plot Display for Stroke Dataset and 2D mBd Stroke Dataset.
The stroke dataset in our study is a matrix with a dimension of 2204 × 23. Figure 2(a) displays the main attribute headings of the stroke dataset (demographic information, medical history, results of laboratory tests, treatments, and medications), as well as the relationships between the stroke patients, plotted with the mesh function; the wireframe grid lines intersect at the matrix entries, with the two axes indexing the columns and rows of $X$. The 2D mBd technique was then applied to the data of the stroke patients. Figure 2(b) shows the Bayesian regularity that matches each attribute in the stroke dataset (2204 × 23), plotted with the mesh function: it presents the 2D mBd stroke dataset as a wireframe mesh with a contour plot under it, the color being proportional to the surface height. The axis vectors have length 23 (the 2D mBd attributes) and length 2204 (the number of stroke patients), matching the size of the 2D mBd matrix. In the stroke dataset of our study, Daubechies wavelets with lengths ranging between 2 and 20 were applied. fBm with $H = 0.6$ and its denoised version with Gaussian noise were applied to the dataset. As Figure 2(b) shows, the prior spectrum is taken to be the theoretical spectrum obtained with increments, or $g(\alpha) = 1$ for $\alpha \ge \alpha_0$.
In this study, for the clustering procedure of stroke subtypes, the k-means and FCM algorithms were first applied to the stroke dataset (Figure 3(a)), and the clustering was obtained for the stroke subtypes (details can be seen in Figure 3). In the second stage, the multifractal denoising techniques (see Figure 4(b)) were applied to the same stroke dataset (Figure 4(a)), and the k-means and FCM algorithms were applied to the regular, denoised stroke dataset obtained (Figure 4(d)). As a result, the clustering was obtained for the stroke subtypes (details can be seen in Figure 4). Different iterations (200, 300, 400, 500, and 1000) were used for the clustering of the stroke dataset and the 2D mBd stroke datasets through the k-means algorithm. The most accurate results with the k-means algorithm were obtained for 1000 iterations. As shown in Figure 5, each epoch corresponds to 200 iterations for a more vivid display of the classification.

Application of k-Means Clustering Algorithm.
The parameters pertaining to the 1000 iterations are presented in Table 4, and Table 5 gives the results of k-means clustering for 1000 iterations (as obtained from Figure 5).
Figure 5(a) shows the best total sum of distances $J(C)$ for the centroid values of the stroke dataset; the value did not change afterwards. The clustering calculation was stopped at the 1000th iteration, since no change was recorded in the centroid values compared to the previous iteration.
In Figure 5(b), the best total sum of distances $J(C)$ of the 2D mBd stroke dataset with the k-means algorithm was obtained as 19.2979; the value did not change afterwards. The iteration was stopped after the 900th iteration, since no change occurred in the clustering calculation compared to the previous iteration. For both datasets, a 7-by-23 matrix contains the final centroid locations. k-means is used to calculate the distance from each centroid to points on a grid: the centroids and the grid points are passed to k-means, and 1000 iterations of the algorithm are implemented (Table 5).
The best total sums of distances calculated (see Figure 5) reveal that the 2D mBd stroke dataset has a better clustering accuracy than the stroke dataset.
These parameters are mentioned because they helped achieve the best cluster analysis in this study.
For the 1000 iterations of the k-means clustering algorithm, split into 200-iteration epochs, the calculation results are presented in Figure 5(a) for the stroke dataset (see Figure 2(a)) and in Figure 5(b) for the 2D mBd stroke dataset (see Figure 2(b)).

The Application of FCM Clustering Algorithm.
Different iterations (200, 300, 400, 500, and 1000) were used for the clustering of the stroke dataset and the 2D mBd stroke datasets through the fuzzy c-means algorithm. The most accurate results with the fuzzy c-means algorithm were obtained for 1000 iterations. As shown in Figure 6, each epoch corresponds to 200 iterations for a more vivid display of the classification.
For the 1000 iterations of the FCM clustering algorithm, split into 200-iteration epochs, the calculation results are presented in Figure 6(a) for the stroke dataset (see Figure 2(a)) and in Figure 6(b) for the 2D mBd stroke dataset (see Figure 2(b)).
The parameters for the 1000 iterations are displayed in Table 6.
These parameters are mentioned because they helped achieve the best cluster analysis in this study. The data in the stroke dataset (2204 × 23) and the 2D mBd stroke dataset (2204 × 23), as well as the distances between the cluster centroids and the data, are computed (the distance between the data and the cluster center of each stroke patient in the feature domain), and the computed results are stored in the membership matrix. The optimization stopped at the 1000th iteration for the stroke dataset (Figure 6(a)) and the 2D mBd stroke dataset (Figure 6(b)), as the objective function improved by less than $10^{-3}$ between the final two iterations. The clustering process ends when the maximum number of iterations is reached, or when the objective function improvement between two consecutive iterations is less than the specified minimum. In the matrix calculations for 1000 iterations (see Figure 6), the clustering accuracy of the 2D mBd stroke dataset proved better than that of the stroke dataset (Table 7).
Calculations given in Table 7 were performed using the Matlab, Mosek, and FracLab [26] environments. The clustering accuracy rates obtained from the k-means and FCM algorithms applied to the stroke dataset and the 2D mBd, 2D mNold, and 2D mPumpD stroke datasets are provided in Table 8.
Accurate clustering results were obtained with the k-means and FCM algorithms by applying the 2D multifractal denoising techniques to the stroke dataset, based on the clustering results of the stroke subtypes obtained in this study (Table 8). The clustering accuracy of the stroke subtypes with the k-means and FCM algorithms on the 2D mBd stroke dataset proved better than on the 2D mNold and 2D mPumpD stroke datasets. On the 2D mBd stroke dataset, the clustering accuracy of the k-means algorithm proved better for the no stroke/TIA and cardioembolic stroke subtypes, while the clustering results of the FCM algorithm proved more accurate for the large vessel, small vessel, cryptogenic, dissection, and other stroke subtypes.
In the literature, only a very limited number of papers exist on the mathematical modeling and clustering of stroke subtypes. In this study, seven subtypes (no stroke/TIA, large vessel, small vessel, cardioembolic, cryptogenic, dissection, and other) have been analyzed. All four datasets (stroke dataset, 2D mBd stroke dataset, 2D mNold stroke dataset, 2D mPumpD stroke dataset) are processed with our new approach using the k-means and FCM algorithms (Figure 6: objective function value versus iteration steps, for the stroke dataset in Figure 6(a) and for the 2D mBd stroke dataset in Figure 6(b)).

Conclusions
The main contribution of this paper is a novel approach to the main stroke categories and their attributes using 2D multifractal denoising techniques. The clustering performances of the 2D multifractal denoising techniques (mBd, mNold, mPumpD) on the stroke subtypes, for a dataset of 2204 stroke patients in total, have been provided in a comparative manner. Compared with other works [22,23], first, there is no attribute constraint for clustering the 7 stroke subtypes. Second, it is possible to select the efficient and significant attributes through the denoising techniques. Finally, the popular FCM and k-means algorithms are applied to the datasets composed of the efficient and significant attributes. In supervised learning, labeled datasets are provided and used to train the machine to produce the desired outputs, whereas in unsupervised learning no labels are provided; instead, the data are clustered into different classes. k-means is one of the simplest unsupervised learning algorithms that solves the well-known clustering problem. The procedure follows a simple way to classify a given dataset into a certain number of clusters (say k clusters) fixed a priori. k-means is a simple algorithm that has been adapted to many problem domains and, as we have seen, is a good candidate for extension to work with fuzzy feature vectors [42]. The fuzzy c-means algorithm [43,44] yields the best result for overlapping datasets and is comparatively better than k-means. Unlike k-means, where each data point must belong exclusively to one cluster center, here each data point is assigned a membership to every cluster center, so a data point may belong to more than one cluster.
For the first time in the literature, 2D multifractal denoising techniques together with the k-means and FCM algorithms have been applied to the numeric data obtained from the attributes of patients with seven different stroke subtypes. The results reveal that the 2D Bayesian denoising technique used in our study performs much better than the other techniques and methods.

Conflicts of Interest
The authors declare that they have no conflicts of interest.

Introduction
Over the past few decades, differential equations of fractional order have received considerable attention from researchers due to their significant applications in various disciplines of science and technology. Fractional derivatives provide a powerful instrument for describing general properties of different materials and processes. This is the primary advantage of fractional derivatives over classical integer order models, in which such effects are in fact ignored. The advantages of fractional derivatives become apparent in modeling the mechanical and electrical properties of real materials, as well as in describing the properties of gases, liquids, rocks, and many other media (see [1,2]). Since fractional order differential equations play important roles in modeling real world problems related to biology, viscoelasticity, physics, chemistry, control theory, economics, signal and image processing, bioengineering, and so forth (for details, see [3][4][5][6][7]), it has been found that fractional order differential equations model real world problems more accurately than differential equations of integer order. The existence and uniqueness of solutions to initial/boundary value problems for fractional order differential equations has been studied thoroughly, and plenty of papers are available in the literature; we refer the reader to [8][9][10][11][12][13][14] and the references therein. To model evolution processes and phenomena that experience sudden changes in their states, impulsive differential equations serve as a powerful mathematical tool.
In daily life, we observe several physical systems that exhibit impulsive behavior, such as the action of a pendulum clock, heart function, mechanical systems subject to impacts, the dynamics of systems with automatic regulation, the maintenance of a species through periodic stocking or harvesting, the thrust impulse maneuver of a spacecraft, control of a satellite orbit, disturbances in cellular neural networks, fluctuations of economic systems, vibrations of percussive systems, and relaxation oscillations of electromechanical systems. For details, see [15][16][17][18][19][20][21][22][23].
In some cases, nonlocal conditions are imposed instead of local conditions. It is sometimes better to impose nonlocal conditions, since the measurements required by a nonlocal condition may be more precise than those given by a local condition in dynamical problems. Nonlocal boundary value problems have also become a rapidly growing area of research. The study of such problems is driven not only by theoretical interest but also by the fact that several phenomena in physics, engineering, and the life sciences can be modeled in this manner, for example, problems with feedback controls such as the steady states of a thermostat, where a controller at one of its ends adds or removes heat. For more applications, see [24] and the references therein. Due to this significant behavior, we prefer nonlocal boundary conditions.
The definition of the fractional order derivative is not unique; several definitions exist, including the Grünwald-Letnikov [7], Riemann-Liouville [25], Weyl-Riesz [26], Erdélyi-Kober [27], and Caputo [28] representations. In the Caputo case, the derivative of a constant is zero, and the initial conditions for fractional differential equations can be defined properly, in analogy with the classical integer order case. For these reasons, we use the Caputo fractional derivative in this manuscript.
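For reference, the Riemann-Liouville and Caputo derivatives of order $\alpha$ (with $n-1 < \alpha < n$) have the following standard forms; the notation is assumed here, since the paper's displayed formulas were lost in extraction:

```latex
{}^{RL}D^{\alpha}_{0,t} f(t) = \frac{1}{\Gamma(n-\alpha)}\,\frac{d^{n}}{dt^{n}}
\int_{0}^{t} (t-s)^{n-\alpha-1} f(s)\,ds ,
\qquad
{}^{C}D^{\alpha}_{0,t} f(t) = \frac{1}{\Gamma(n-\alpha)}
\int_{0}^{t} (t-s)^{n-\alpha-1} f^{(n)}(s)\,ds .
```

In particular, ${}^{C}D^{\alpha} c = 0$ for a constant $c$, whereas ${}^{RL}D^{\alpha} c = c\, t^{-\alpha}/\Gamma(1-\alpha) \neq 0$, which is why Caputo initial conditions can be posed exactly as in the integer order case.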
Tian and Bai [29] studied the existence and uniqueness of solutions to a nonlinear impulsive boundary value problem. In 1940, Ulam posed the following problem about the stability of functional equations: "Under what conditions does there exist an additive mapping near an approximately additive mapping?" (see [30]). In the following year, Hyers gave an answer to Ulam's problem for additive functions defined on Banach spaces [31]: let $B_1$, $B_2$ be two real Banach spaces and $\varepsilon > 0$; then for each mapping $f : B_1 \to B_2$ satisfying $\|f(x+y) - f(x) - f(y)\| \le \varepsilon$ for all $x, y \in B_1$, there is a unique additive mapping $g : B_1 \to B_2$ such that $\|f(x) - g(x)\| \le \varepsilon$ for all $x \in B_1$. That is why this stability is called Ulam-Hyers stability. Later on, Hyers' results were extended by many mathematicians; for details, the reader may see [32][33][34][35][36][37][38][39] and the references therein. Such stability analysis is extremely helpful in numerous applications, for example, numerical analysis and optimization, where it is very difficult to find the exact solution of a nonlinear problem. We note that the Ulam-Hyers stability concept is quite significant in realistic problems in numerical analysis, biology, and economics. This stability has very recently attracted the attention of researchers; we refer the reader to [40][41][42][43][44][45]. Because a fractional order system may have additional attractive features over an integer order system, consider the following example to see which of the two (fractional order and integer order) systems is more stable.
Example (see [46]). Consider the following two systems with initial condition $x(0)$, for $v \in (0, 1)$. The analytical solutions of (5) and (6) are $t^{v} + x(0)$ and $v\,\Gamma(v)\,t^{v+\varepsilon-1}/\Gamma(v+\varepsilon) + x(0)$, respectively. Clearly, the integer order system (5) is unstable for any $0 < v < 1$, but the fractional dynamic system (6) is stable for each $0 < v \le 1-\varepsilon$. So a fractional order system may have better features than the corresponding integer order system. Motivated by the above-mentioned work, we study the existence, uniqueness, and different types of Ulam-Hyers stability of implicit impulsive fractional order differential equations with nonlocal boundary conditions involving the Caputo derivative. The manuscript is structured as follows. In Section 2, we give some definitions, theorems, lemmas, and remarks. In Section 3, we establish sufficient conditions for the existence and uniqueness of solutions to the considered problem (7), using fixed point theorems of Krasnoselskii and Banach contraction type. In Section 4, we establish applicable results under which the solution of the considered boundary value problem (7) satisfies different kinds of Ulam-Hyers stability. The established results are illustrated by an example in Section 5.

Background Materials and Auxiliary Results
This section is devoted to some basic definitions, theorems, lemmas, and remarks which are useful for the existence and stability results.
Lemma 4 (see [11]). Let $\alpha > 0$; then the solution of the differential equation takes the form given below, where $n = [\alpha] + 1$.
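The displayed formulas of Lemma 4 did not survive extraction; in its commonly cited form (notation assumed here), the lemma states that the general solution of the homogeneous Caputo equation of order $\alpha$ is a polynomial of degree $n-1$:

```latex
{}^{C}D^{\alpha} y(t) = 0
\;\Longrightarrow\;
y(t) = c_0 + c_1 t + c_2 t^{2} + \dots + c_{n-1} t^{\,n-1},
\qquad c_i \in \mathbb{R},\quad n = [\alpha] + 1 .
```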
Theorem 6 (Krasnoselskii's fixed point theorem; see [47]). Let U be a nonempty, closed, convex subset of a Banach space X, and let F * , G * be operators such that (i) $F^*x + G^*y \in U$ whenever $x, y \in U$; (ii) F * is compact and continuous; (iii) G * is a contraction mapping.
Then there exists $z \in U$ such that $z = F^*z + G^*z$.
Theorem 7 (Banach fixed point theorem; see [47]). Let S be a nonempty closed subset of a Banach space B. Then any contraction mapping D from S into itself has a unique fixed point.
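Theorem 7 is constructive: iterating a contraction from any starting point converges to the unique fixed point. A minimal numerical illustration (the mapping cos, which is a contraction near its unique fixed point, is an assumption chosen purely for demonstration):

```python
import math

def fixed_point(D, x0, tol=1e-12, max_iter=1000):
    """Banach iteration: apply the contraction D until successive iterates agree."""
    x = x0
    for _ in range(max_iter):
        x_new = D(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# cos is a contraction in a neighborhood of its unique fixed point (~0.739085)
root = fixed_point(math.cos, 1.0)
```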

Main Results
Proof. Apply Lemma 4 to the differential equation (24), using constants $c_0, c_1 \in \mathbb{R}$. Likewise, for $t \in (t_1, t_2]$, there are constants $d_0, d_1 \in \mathbb{R}$. Using the impulsive conditions, we obtain the corresponding relations and, proceeding in the same way, the expression for the solution $x(t)$ for $t \in (t_m, t_{m+1}]$. We define the operator $D : X \to X$ accordingly, where $g(t) = f(t, x(t), g(t))$. Suppose that the following hold.
( 1 ) For any , ∈ X, there exist constants , ℎ such that the stated growth condition holds. ( 2 ) For any , , , ∈ R and for each ∈ I, there exist > 0 and 0 < < 1 such that the stated Lipschitz condition holds. The following result is based on the Banach contraction theorem: under these assumptions the operator D satisfies a contraction estimate for the considered problem (7), which implies that D is a contraction. Hence, the considered problem (7) has a unique positive solution.

The following result is based on Krasnoselskii's fixed point theorem.
Therefore, using ( 4 ), ( 6 ), and ( 7 ) together with (55) in (52), we get |D ( )| ≤ R, which implies that D(S) ⊆ S. Now, for the contraction of G * , using ( 1 ), G * will be a contraction provided the stated assumption holds. Next, we show that F * is compact. The continuity of f implies that F * is continuous, and by using (55) we find that F * is uniformly bounded on S. Let us take a bounded subset U of S; then for t, t₀ ∈ I with 0 ≤ t ≤ t₀ ≤ 1, the right-hand side of the resulting estimate tends to zero as t → t₀. Thus F * is equicontinuous and, by the Arzelà-Ascoli theorem [49], F * is compact. By Theorem 6, the considered problem (7) has at least one positive solution.

Ulam Stability Results
In this section, we establish some sufficient conditions under which problem (7) satisfies various kinds of Ulam-Hyers stability.
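For orientation, the Ulam-Hyers notion used throughout can be stated as follows (the symbols are assumed here, since the original definitions of Section 2 were garbled in extraction):

```latex
% Problem {}^{C}D^{\alpha} x(t) = f(t, x(t)) is Ulam-Hyers stable if there is
% C_f > 0 such that, for every \varepsilon > 0 and every y satisfying
\bigl| {}^{C}D^{\alpha} y(t) - f\bigl(t, y(t)\bigr) \bigr| \le \varepsilon ,
\qquad t \in I,
% there exists a solution x of the problem with
\bigl| y(t) - x(t) \bigr| \le C_f\, \varepsilon , \qquad t \in I .
% Ulam-Hyers-Rassias stability replaces the constant bound \varepsilon
% by a comparison function \varphi(t).
```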

Lemma 18.
If $y \in X$ is a solution of inequality (17) and $1 < \alpha \le 2$, then $y$ is a solution of the following inequality. Proof. Since $y$ is a solution of inequality (17), the solution of (65) takes the stated form, where $g(t) = f(t, y(t), g(t))$.
For convenience, we denote the sum of the terms free of $g$ by $\nu(t)$; then (66) becomes the stated expression. By using (i) of Remark 13, we conclude that problem (7) is Ulam-Hyers stable.
( 8 ) Suppose $\varphi \in C(I, \mathbb{R}^+)$ is an increasing function. Then there is $\lambda_\varphi > 0$ such that, for every $t \in I$, the integral inequality $\int_0^t \varphi(s)\,ds \le \lambda_\varphi\, \varphi(t)$ holds.

Lemma 21.
Let assumption ( 8 ) hold and suppose $y \in X$ is a solution of inequality (18); then $y$ is a solution of the following integral inequality. Proof. From Lemma 18, the stated bound holds. Then the considered problem (7) will be Ulam-Hyers-Rassias stable.
Proof. Let $y \in X$ be any solution of inequality (17) and let $x$ be the unique solution of the considered problem (7); then from (73), using Lemma 21 in (89) and rearranging, we obtain the stated estimate. Hence, the considered problem (7) is Ulam-Hyers-Rassias stable.

Example
Consider the following implicit impulsive fractional differential equations with nonlocal boundary conditions.

Conclusion
We have successfully established appropriate conditions for the existence theory of implicit impulsive fractional order differential equations with nonlocal boundary conditions, using the fixed point theorems stated in Section 2. These theorems ensure the existence and uniqueness of the solution. Further, we settled adequate conditions for different kinds of Ulam-Hyers stability (see Theorems 19 and 22 with Remarks 20 and 23, respectively) by using Definitions 8-11. This type of stability, while very important, has rarely been investigated for implicit impulsive fractional differential equations. Finally, we illustrated the main results with a suitable example.

Conflicts of Interest
There are no conflicts of interest regarding this research work.

Introduction
A shape memory alloy (SMA) shows temperature and stress induced martensitic phase transformation, which drastically changes the material's mechanical properties. In other words, after an apparent plastic deformation, the material returns to its original shape when heated. In a certain temperature range, the same material can be strained up to approximately 10% and will still return to its original shape when unloaded. These unusual effects are called the thermal shape memory effect and superelasticity (elastic shape memory), respectively; they allow SMAs to be used as both sensors and actuators [1], with an increasing range of potential applications in biomedicine [2], robotics [3], and aerospace. Study of the stress-strain curve reveals the hysteretic behavior of SMAs. This hysteresis leads to nonlinearity and results in a high damping capacity [2], which can be utilized for amplitude attenuation in vibrating systems [4], so the application of these materials extends to control elements as well.
Various constitutive models of shape memory oscillators (SMOs) have been developed since 1990 in order to analyze their dynamical behavior [5][6][7][8][9]. These studies reveal that an SMO can enter a chaotic regime when treated as a single degree-of-freedom system, and can extend to hyperchaos when treated as a two-degree-of-freedom system [10,11]. Similar properties have been noted in other SMAs. In the Falk model, strain and temperature are the only state variables; the free energy is validated for various temperatures, shows unstable points, and does not consider twinned martensite. Brinson's model describes the reorientation process more clearly than other models, and numerical simulations reveal the internal subloops present in thermomechanical loading [12]. The Tanaka model considers the martensitic volume fraction, so it can describe compressive behavior accurately [13]. Transient response and multistability of attractors are interesting properties in nonlinear studies, and numerical analysis of nonlinear SMO models exhibits both significantly [10]. Dynamical jumps of SMOs have been studied, and it was found that the response depends not only on the forcing amplitude but also on the way the forcing frequency is modified [14].
Experimental dynamic analysis of SMAs has shown that nonlinearity in the phase transformation leads to complicated dynamic behavior of the system [15]. The nonlinear dynamics of SMAs have also been investigated experimentally, and it was noted that the actuator response shows unpredictability when low-to-moderate voltage is applied to the system, which could affect system design and cause control difficulties in precision applications [3].
Real systems are fractional in nature, so it is no surprise that fractional order treatments describe the behavior of a system more accurately than integer order methods. Because of their nonlocal property, that is, the next state of a system depends not only on its current state but also on all of its historical states, fractional calculus can provide more realistic results. Recently, fractional order calculus has been used to refine results in various fields such as thermodynamics, mechatronic systems, chaos theory, and biomedical systems. When a chaotic nonlinear system is treated as integer order, it demands a minimum order of 3 for chaos to appear [16]. Conversely, when the system is treated as fractional order, chaos can be identified at a lower order; for example, Chua's circuit of order 2.7 can produce a chaotic attractor [17]. Recently, the active and passive vibration damping capacity of SMAs has been studied using the Oberaigner model, and the fractional order results reveal the chaotic behaviors precisely [18]. The generalized theory of electrothermoelasticity with fractional order heat transfer describes the behavior of the particles of an elastic body more realistically than generalized thermoelasticity with integer order [19]. Yu et al. developed the theory of fractional order generalized electro-magneto-thermoelasticity for anisotropic, linearly electro-magneto-thermo-elastic media and showed that the fractional order has a great effect on the response when the material is subjected to sudden heating [20].

Dynamics of the Two-Degree-of-Freedom Shape Memory Alloy (2DOF SMA) Oscillator
Even though the authors of [11] have performed the [0-1] test on different SMA models, the complete dynamical analysis of the system remains less investigated. Hence, we are interested in analyzing the two-degree-of-freedom (2DOF) SMA [21] shown in Figure 1 because of its higher dimension and complex behaviors. The dimensionless mathematical model of the 2DOF SMA polynomial constitutive model can be derived from the equations of motion and is given by [11,21], where 1 , 2 , 1 , 2 are the excitation parameters, 1 , 2 , 3 are the dissipation parameters, is the mass relation parameter, and V 21 , V 32 , 21 , 32 , 1 , 2 , 3 , 1 , 2 , 3 are the SMA properties. The states and are the dimensionless displacements, and and are the dimensionless velocity components of the 2DOF SMA. For the given parameter values, and depending on the temperatures of the SMA given by the parameters 1 , 2 , 3 , the system changes its character among periodic, quasiperiodic, chaotic, and hyperchaotic behavior [11]. Table 1 shows the different types of behavior of the SMA for various values of 1 , 2 , 3 , with the respective finite time Lyapunov exponents (LEs) calculated using the Wolf algorithm [22] over 20,000 s. Figure 2 shows the 2D phase portraits of the SMA system with initial conditions [0, 0, 0, 1] for various parameter values.

Bifurcation. Bifurcation analysis is helpful for investigating qualitative changes in the states of a system with respect to variation of a parameter [23]. Recent techniques have shown that bifurcation analysis can be done with experiments rather than only mathematical methods, and both continuous and discrete models can be analyzed by bifurcations. To analyze the 2DOF SMA using bifurcation plots, we fix all the other parameters as in (2) and consider 2 as the bifurcation parameter. The initial conditions for the first iteration are taken as [0, 0, 0, 1] and are reinitialized to the end values of the state trajectories at each iteration.
Figure 3(a) shows the bifurcation diagram of system (1) with respect to the parameter 2 . The parameter values of the SMA system are taken as in (2) with 1 = 1.5, 3 = 1.5; the initial conditions for the first iteration are taken as [0, 0, 0, 1] and reinitialized to the end values of the state variables at the end of each iteration. It can be clearly observed from Figure 3(a) that the SMA shows periodic, quasiperiodic, chaotic, and hyperchaotic oscillations. For 2 < 0.45 the SMA system shows periodic limit cycles, and for 2.9 ≤ 2 ≤ 3.75 it shows a quasiperiodic/torus attractor. Figure 3(b) shows the bifurcation of the 2DOF SMA for parameter 1 with 2 = 1.5, 3 = 1.5, which exhibits periodicity, chaos, hyperchaos, and quasiperiodicity. Period 2 limit cycles are seen for 0 ≤ 1 ≤ 0.75, period 3 limit cycles for 2.2 < 1 ≤ 2.5 and 3 < 1 ≤ 3.25, and period 4 limit cycles for 4.3 ≤ 1 ≤ 4.5. Chaotic oscillations are seen for 0.75 < 1 ≤ 1.25, 1.8 < 1 ≤ 2.2, 3.25 < 1 ≤ 3.5, and 3.75 < 1 ≤ 4.3, and hyperchaos is seen for the small range 1.25 < 1 ≤ 1.8. Quasiperiodic oscillations appear for 3.5 < 1 ≤ 3.75. Figure 3(c) shows the bifurcation of the 2DOF SMA system with 1 and 1 = 1.5, 2 = 1.5, 3 = 1.5. Chaotic and hyperchaotic regions are seen for 0.048 < 1 ≤ 0.075 and a chaotic region for 0.035 < 1 ≤ 0.045. For 1 < 0.018 we observe period 1 limit cycles, and quasiperiodic oscillations for 0.018 < 1 ≤ 0.03.
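The bifurcation procedure used here (sweep one parameter, reinitialize each run from the previous end state, then sample the post-transient response) can be sketched as follows. Since the SMA model's coefficients were lost in extraction, a forced Duffing oscillator stands in for the polynomial constitutive model; all parameter values in this sketch are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

def duffing(t, s, gamma, delta=0.3, alpha=-1.0, beta=1.0, omega=1.2):
    """Duffing oscillator: a qualitative stand-in for the SMA polynomial model."""
    x, v = s
    return [v, -delta * v - alpha * x - beta * x**3 + gamma * np.cos(omega * t)]

def bifurcation(gammas, s0=(0.0, 1.0), omega=1.2, n_trans=50, n_samp=40):
    """For each forcing amplitude: discard transients, then sample x once per
    forcing period (stroboscopic / Poincare section). The state is carried
    over between parameter values, as described in the text."""
    T = 2 * np.pi / omega                        # forcing period
    pts, state = [], list(s0)
    for g in gammas:
        sol = solve_ivp(duffing, (0, n_trans * T), state, args=(g,), rtol=1e-8)
        state = list(sol.y[:, -1])               # reinitialize from end state
        t_eval = np.arange(1, n_samp + 1) * T
        sol = solve_ivp(duffing, (0, n_samp * T), state, args=(g,),
                        t_eval=t_eval, rtol=1e-8)
        state = list(sol.y[:, -1])
        pts.append((g, sol.y[0]))                # samples for one diagram column
    return pts

pts = bifurcation(np.linspace(0.2, 0.4, 5))
```

Plotting the sampled displacements against the swept parameter produces the bifurcation diagram; a single point per column indicates period 1, a few points indicate period-n cycles, and a filled band indicates quasiperiodicity or chaos.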

Fractional Order Two-Degree-of-Freedom Shape Memory Alloy (FO2DOFSMA) Oscillator
Fractional order calculus is as old as integer order calculus, as seen from a letter written by Leibniz to L'Hopital [24]. A common form of the general order derivative is given by the expression below, where q is the order. The three ways of simulating fractional order systems [25] are computational methods based on the mathematical equations, approximation through a rational system in discrete time, and approximation of the fractional system using a rational function in continuous time. Most researchers prefer approximation through a rational system in discrete time, in which the fractional order system is replaced by its discrete equivalent and approximated again by truncating the polynomial series, which reduces the requirement of infinite memory.
Many different approaches to the numerical simulation of fractional order systems have been extensively investigated [26][27][28]; however, there is always a tradeoff between computational efficiency, complexity, and the accuracy of the approximations. Recent researchers have been working on fast convolution quadrature algorithms relevant to fractional differential equations, because fractional calculus operators involve a continuous or discrete convolution of some form.
There are three commonly used definitions of the fractional order differential operator: Grünwald-Letnikov, Riemann-Liouville, and Caputo [23,29,30]. We use the Grünwald-Letnikov (GL) definition to derive the fractional order model of the 2DOF SMA. The Grünwald-Letnikov method proceeds iteratively, but the sum in the scheme grows longer at each step, which reflects the memory effect. The binomial coefficients are defined recursively and have very smooth properties. The Grünwald-Letnikov derivative is used in many numerical schemes for discretizing fractional diffusion equations from a continuous Riemann-Liouville approach [31][32][33] and involves a discrete convolution between a "weight" (binomial coefficient) function and the function of interest. The analytical model of the binomial coefficients is well established in the literature [33].
Most of the literature has used the Caputo method for numerical solutions of fractional order systems, but the GL method has benefits over the other methods due to the smoothness of the resulting approximations [34]. Hence, we use the GL method to derive the fractional order 2DOF SMA. The GL derivative can be defined as below, where a and t are the limits of the fractional order operator, the generalized difference is taken with step size h, and q is the fractional order of the differential equation.
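The GL definition referred to above (its displayed formula was lost in extraction) has the standard form:

```latex
{}_{a}D^{q}_{t} f(t) \;=\; \lim_{h \to 0} \frac{1}{h^{q}}
\sum_{j=0}^{\left[(t-a)/h\right]} (-1)^{j} \binom{q}{j}\, f(t - jh),
% with weights computed recursively:
% c_0^{(q)} = 1, \qquad c_j^{(q)} = \Bigl(1 - \tfrac{1+q}{j}\Bigr) c_{j-1}^{(q)} .
```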
For numerical calculations the above equation is modified accordingly, and we use the short memory principle to limit the memory required for the binomial coefficients, which are calculated recursively for the numerical simulation. Let the general form of the fractional order system be defined as in (8). To simulate system (8) using the GL method, we use the discretization discussed in [34,35], where the binomial coefficients are calculated using (11); the truncation window size limits the history, and the full history is used when all available memory elements are kept. We then define the FO2DOFSMA oscillator, and using (9) in (10), its discrete form follows. For the parameter values given by (2) and Table 2, the FO2DOFSMA oscillator shows the same variety of behaviors as the integer order SMA. For initial conditions [0, 0, 0, 1], commensurate order q = 0.99, and step size h = 0.001, the different behaviors of the FO2DOFSMA are given in Figure 4.
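A minimal sketch of the GL discretization with recursive binomial weights and the short memory truncation, applied to a scalar fractional relaxation test system rather than the SMA equations themselves (the step size, order, and test system are assumptions for illustration):

```python
import numpy as np

def gl_coeffs(q, n):
    """Recursive binomial weights: c_0 = 1, c_j = (1 - (1+q)/j) * c_{j-1}."""
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = (1.0 - (1.0 + q) / j) * c[j - 1]
    return c

def gl_solve(f, x0, q, h, n_steps, memory=None):
    """Simulate D^q x = f(x) with the Grunwald-Letnikov scheme:
    x_k = f(x_{k-1}) * h^q - sum_{j=1..k} c_j x_{k-j}.
    memory = truncation window (short memory principle); None = full history."""
    c = gl_coeffs(q, n_steps)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(1, n_steps + 1):
        hi = k if memory is None else min(k, memory)   # drop oldest terms
        hist = sum(c[j] * x[k - j] for j in range(1, hi + 1))
        x[k] = f(x[k - 1]) * h**q - hist
    return x

# Fractional relaxation D^q x = -x, x(0) = 1, with q = 0.9
x_frac = gl_solve(lambda v: -v, x0=1.0, q=0.9, h=0.01, n_steps=2000)
# Sanity check: for q = 1 the weights reduce to c_1 = -1, c_j = 0 (j >= 2),
# so the scheme collapses to the explicit Euler method
x_int = gl_solve(lambda v: -v, x0=1.0, q=1.0, h=0.01, n_steps=100)
```

The same update, applied componentwise to the four SMA states with the forcing and polynomial restoring terms in place of the test function, gives the discrete FO2DOFSMA used for Figures 4-6.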
We study the bifurcation of the FO2DOFSMA with the fractional order and with the parameter 2 . The bifurcation of the FO2DOFSMA oscillator with the fractional order is shown in Figure 5(a). The initial conditions are taken as [0, 0, 0, 1] and are reinitialized at the end of each iteration to the final values of the state variables. The FO2DOFSMA shows chaotic oscillations for q ≥ 0.968. Figure 5(b) shows the bifurcation of the oscillator with 2 : chaotic oscillations for 0.47 < 2 ≤ 0.8 and 1.1 < 2 ≤ 1.6, hyperchaotic oscillations for 0.8 < 2 ≤ 1.1, quasiperiodic oscillations for 1.6 < 2 ≤ 2.2, and period 4 oscillations for 2.2 < 2 ≤ 4. For Figure 5(b), we used fixed initial conditions without reinitialization. Figure 6 shows the bifurcation plots of the oscillator with reinitialization of the initial conditions; Figure 6(b) shows a zoomed-in portion of Figure 6(a) to show the period doubling route to chaos.

Conclusion
A two-degree-of-freedom polynomial constitutive shape memory alloy oscillator has been investigated, and it is shown that the oscillator exhibits periodic, quasiperiodic, chaotic, and hyperchaotic behaviors for various values of the excitation parameters and temperatures. Bifurcation plots are derived to show the existence of chaotic and periodic oscillations. The Grünwald-Letnikov derivative is used to derive the fractional order model of the oscillator, and bifurcation plots with respect to the fractional order and the parameters are derived to study its dynamical behavior. Compared to the integer order model, the fractional order model shows more complex chaotic behavior: the quasiperiodic region extends into chaotic regions, and the oscillator shows wider regions of chaos across the parameters. This shows that the fractional order model exhibits more complex behavior, and hence designing controllers to suppress chaos in fractional order systems is quite complicated.