Fractional-Order Boosted Jellyfish Search Optimizer with Gaussian Mutation for Income Forecast of Rural Residents

The disposable income of residents reflects the living standard of people in a region. For government departments, it is necessary to grasp the trend of rural resident income in order to formulate corresponding policies benefiting farmers. Thus, this paper proposes a grey model with an improved jellyfish search optimizer to predict rural resident income in Shaanxi Province. Firstly, by applying a fractional-order modified strategy and a Gaussian mutation mechanism to the original algorithm, the proposed algorithm shows better solution accuracy, stability, and convergence speed than various classical methods on the cec2017 and cec2019 test functions. Then, based on the fractional time-delayed grey model, a discrete fractional time-delayed grey model with triangular residual correction (TDFTDGM) is proposed by replacing the derivative with a first-order difference and introducing triangular residual correction functions. Finally, the improved jellyfish search optimizer is used to find the optimal order of the TDFTDGM model. The all-around performance of the forecast model surpasses that of other grey models on four evaluation criteria, which makes it a practical approach for long-term prediction with small samples. Moreover, the forecast data of rural resident income in Shaanxi Province from 2021 to 2025 are given for reference.


Introduction
Agriculture, rural areas, and farmers are important issues for the long-term stability of China [1]. The income of rural residents is a key index that reflects the living standard of people in rural areas and their economic development [2]. Only by understanding the trend of rural residents' income can the government formulate a series of policies to improve it [3]. However, there are only a few empirical studies on income prediction in the current literature, because it is highly difficult and time-consuming to obtain exact information about the disposable income of a region over a long period [4]. Meanwhile, since income is affected by policies, climate, and other uncertain factors, predicting it accurately is a challenging task [5].
Though forecasting models for resident income are scarce, there are many forecasting approaches in other areas. For example, Maaouane et al. used the multiple linear regression method to predict industrial energy demand in Morocco [6]. The radial basis function neural network is also a popular tool, used for energy consumption forecasting and wind speed forecasting in [7,8], respectively. The authors in [9,10] used the ARIMA model for the daily production prediction of wells in Sulige and to forecast the rural population of China from 1970 to 2015. Although these approaches can complete data prediction tasks for different features, they have some defects: a tremendous amount of sample data is required, which means the above methods are not suitable for problems with small samples [11]. As an alternative, the grey forecasting algorithm solves the prediction problem for small-sample data sets. The grey model (GM), proposed in 1982, is an effective forecasting approach for small samples [12]. It has the benefits of simple calculation, high precision, and wide applicability. As scholars gained a deeper understanding of the GM, several enhanced models were presented to improve its accuracy. By combining the classic grey model with a trigonometric residual modification strategy, Zhou et al. proposed a novel trigonometric grey prediction approach (TRGM) to forecast electricity demand and obtained effective results [13]. Then, in [14], a discrete grey forecasting model called the DGM was designed, which showed outstanding performance in predicting the long-term development tendency of a data series. Meanwhile, Wu et al. proposed a novel nonlinear grey Bernoulli model with fractional-order accumulation, shortened to the FANGBM model, in 2019 [15]; this model was used to predict the growth trend of China's future renewable energy consumption.
Though the introduction of fractional-order accumulation has made meaningful contributions to forecasting methods, problems may still arise because these models do not consider the time-delayed effect.
Thus, the authors in [16] introduced a new fractional grey model, called the fractional time-delayed grey model (FTDGM). Considering the significance of the discrete model and the trigonometric residual modification technique, we design a novel grey model to obtain better predicted results.
Moreover, there is a parameter to be determined in the fractional grey model: the fractional order. How to choose the most suitable parameter thus becomes another thorny problem. The authors in [16] provided a practical solution, applying a metaheuristic algorithm to select the parameters.
Metaheuristic methods have grown rapidly in recent years and show outstanding performance in solving continuous, discrete, and nonlinear optimization problems [17,18]. Generally, metaheuristic algorithms can be categorized into four varieties: swarm intelligence (SI) algorithms, evolutionary algorithms (EAs), physics-based algorithms (PhAs), and human-based algorithms [19]. The cooperative and hunting behaviors of social animals in nature inspire SI algorithms. Particle swarm optimization (PSO) is the most classical one and has been employed to solve many different problems [20]. With the exploration of animal habits in recent years, many SI algorithms have emerged. In 2015, Wang et al. proposed the monarch butterfly optimization (MBO) algorithm by simplifying and idealizing the migration of monarch butterflies [21]. After comparison with other algorithms on thirty-eight benchmark functions, the results showed that the MBO method significantly outperformed the other five algorithms [21]. In 2020, inspired by the mathematical model in which slime mould forms the optimal path for connecting food through the positive and negative feedback of its propagation wave, Li et al. proposed the slime mould algorithm (SMA) [22]. In 2021, by simulating the behaviors of African vultures and emperor penguins, respectively, the African vulture optimization algorithm (AVOA) [23] and Aptenodytes Forsteri optimization (AFO) [24] were designed and provided excellent performance. Similar algorithms include the moth search algorithm (MSA) [25], the colony predation algorithm (CPA) [26], and so on. EAs are inspired by the natural laws of population development and evolution. Among EAs for solving various optimization tasks, the genetic algorithm (GA) [27] and the differential evolution (DE) algorithm [28] are undoubtedly the best known. PhAs rely on physical laws to construct solutions to optimization problems.
For example, the multi-verse optimizer (MVO) was inspired by the multiverse theory in physics [29]. In addition, the Archimedes optimization algorithm (AOA) is a novel PhA motivated by an interesting law of physics, Archimedes' principle [30]. In 2021, based on the logic of slope variations computed by the Runge-Kutta method, the Runge-Kutta optimizer (RUN) was proposed and offered outstanding performance on 50 mathematical test functions and four real-world engineering problems [31]. The last set of nature-inspired methods simulates natural human behaviors, such as the teaching-based learning algorithm (TBLA) [32], the socio-evolution learning optimization algorithm (SELOA) [33], the preaching optimization algorithm (POA) [34], and hunger games search (HGS) [35].
The jellyfish search (JS) optimizer is a high-profile metaheuristic algorithm proposed in 2020, inspired by the behavior of jellyfish in the ocean [36]. After comparison with ten prominent metaheuristic algorithms on a comprehensive set of mathematical benchmark functions and application to a series of structural engineering problems, JS proved to be a highly competitive algorithm for solving optimization problems. Nevertheless, the original JS algorithm still suffers from limited solution accuracy and premature convergence.
Thus, this paper introduces the fractional-order modified strategy and Gaussian mutation mechanism into the original JS algorithm, yielding the jellyfish search algorithm based on the fractional-order modified strategy and Gaussian mutation mechanism (FOGJS). In addition, we apply the improved algorithm to the novel grey model to obtain the optimal order of the forecast model. The contributions of this paper can be outlined as follows. An enhanced version of the jellyfish search algorithm with the fractional-order modified strategy and Gaussian mutation mechanism is proposed, and the validity of the improved algorithm is demonstrated on the cec2017 and cec2019 test functions by comparison with the original JS and ten additional algorithms. Based on the fractional time-delayed grey model, we replace the derivative with a first-order difference and introduce the trigonometric residual modification technique to design a novel forecast model, a discrete fractional time-delayed grey model with triangular residual correction (TDFTDGM). Taking the rural income data of Shaanxi Province as an example, we apply FOGJS to TDFTDGM to search for the most suitable fractional order of the forecast model. Then, compared with other optimization algorithms and grey models, the fitting and prediction errors of the FOGJS + TDFTDGM approach are discussed.

2 Computational Intelligence and Neuroscience

The rest of the paper is organized as follows. In Section 2, we give the theory of the jellyfish search optimizer. The improved JS algorithm is proposed in Section 3. Section 4 examines the performance of the improved algorithm on miscellaneous test functions. Section 5 presents the novel grey model and predicts the income of rural residents by different models. Finally, the work of this paper is summarized in Section 6.

The Theory of Jellyfish Search Optimizer
The jellyfish search (JS) is a recent swarm intelligence method founded on jellyfish behavior in the ocean. The jellyfish's search behavior and movement mode in the ocean inspire the algorithm [36]. As Figure 1 shows, a jellyfish either moves with the ocean current or moves within the population. Firstly, it is affected by the ocean current: each jellyfish follows the ocean current to form a jellyfish swarm. Secondly, once the surrounding food changes, jellyfish move within the group. These motions are switched by a time control mechanism. The initial distribution of candidate solutions in the search space affects whether the search eventually falls into local solutions. After examining typical random initialization methods, it was found that the jellyfish search optimizer performs best under the logistic map. The mathematical description is as follows [36]:

Z_{i+1} = α Z_i (1 − Z_i),  (1)

where Z_i represents the i-th jellyfish logistic chaotic value, Z_0 is the initial value of the jellyfish population, Z_0 is set between 0 and 1 but cannot take certain particular values such as 0.0, 0.25, 0.5, 0.75, and 1.0, and α is set to 4.0. The ocean currents contain a lot of nutrients and make survival easier, attracting jellyfish. Therefore, the ocean current direction is usually specified by the average vector from all jellyfish in the group to the jellyfish currently in the optimal position. The mathematical expression of the ocean-current movement is as follows [36]:

Z_i(t + 1) = Z_i(t) + r_2 × (Z* − β × r_1 × μ),  (2)

where r_1 and r_2 are random numbers between 0 and 1, μ is the mean position of all jellyfish, Z* is the jellyfish in the best position at present, and β > 0 is a distribution coefficient, usually taken as 3. The movement within the jellyfish group can be divided into active and passive movements. Initially, when the jellyfish group has just formed, most jellyfish show passive movement. Passive motion is the movement of a jellyfish around its own position.
At this time, the updated position of an individual is

Z_i(t + 1) = Z_i(t) + γ × r_3 × (ub − lb),  (3)

where ub is the upper bound and lb is the lower bound of the search space, γ > 0 is a motion coefficient related to the movement length of the jellyfish around its own position (the original algorithm usually takes γ = 0.1), and r_3 is a random number in the range of 0 and 1. The active movement relies on comparing the food quantity of two jellyfish to judge whether there is relative movement: when the food quantity near the other jellyfish is higher, the current jellyfish moves towards it. The expression of active motion is as follows [36]:

Z_i(t + 1) = Z_i(t) + r_4 × Direction,  (4)

where r_4 is a random number between 0 and 1 and Direction determines the movement direction of the current jellyfish in the next iteration. This movement always points towards a better food position and is given by the following formula [36]:

Direction = Z_k − Z_i, if f(Z_i) ≥ f(Z_k); Direction = Z_i − Z_k, otherwise,  (5)

where k is the index of a randomly chosen jellyfish and f is the objective function. A time control mechanism is introduced to switch individuals between following the ocean current and moving within the group. Figure 2 shows the allocation of movement within the group. The time control mechanism contains a time control function c(t) and a constant c_0, where c(t) is a random quantity that fluctuates between 0 and 1 [36]:

c(t) = |(1 − t/t_max) × (2 × r_5 − 1)|,  (6)

where t is the current iteration, t_max is the total number of iterations, and r_5 is a random number between 0 and 1. When c(t) ≥ c_0, the jellyfish follows the ocean current; otherwise it moves within the group: when a randomly generated number is greater than (1 − c(t)), the current individual uses passive motion, and otherwise active motion is involved.
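To make the update rules above concrete, the following is a minimal Python sketch of the JS mechanics as commonly formulated in [36]; the function names and the vectorized form are our own illustration, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_init(n, dim, lb, ub, alpha=4.0):
    """Chaotic logistic-map initialization: Z_{i+1} = alpha * Z_i * (1 - Z_i)."""
    z = np.empty((n, dim))
    z[0] = rng.uniform(0.01, 0.99, dim)  # avoid the fixed points 0, 0.25, 0.5, 0.75, 1
    for i in range(1, n):
        z[i] = alpha * z[i - 1] * (1.0 - z[i - 1])
    return lb + z * (ub - lb)            # scale chaotic values into [lb, ub]

def ocean_current_step(Z, Z_best, beta=3.0):
    """Move with the ocean current: trend = Z* - beta * r1 * (mean position)."""
    trend = Z_best - beta * rng.random(Z.shape[1]) * Z.mean(axis=0)
    return Z + rng.random(Z.shape) * trend

def passive_step(Z, lb, ub, gamma=0.1):
    """Passive motion: a small random move around the current position."""
    return Z + gamma * rng.random(Z.shape) * (ub - lb)

def time_control(t, t_max):
    """Time control function fluctuating in [0, 1]."""
    return abs((1.0 - t / t_max) * (2.0 * rng.random() - 1.0))
```

In a full optimizer these pieces are wired together by the time control mechanism: `time_control` decides per iteration whether `ocean_current_step` or an in-swarm motion is applied.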

An Improved Jellyfish Search Optimizer
An improved JS algorithm based on the fractional-order modified strategy and Gaussian mutation mechanism (FOGJS) is proposed to solve the problem that the jellyfish search algorithm easily falls into local optima and to improve its accuracy.

Fractional-Order Modified Strategy.
Fractional calculus extends the order of differentiation and integration from integers to fractions, recursively deducing the limit through difference approximations of integer-order calculus. There are many definitions of fractional derivatives, such as the Grünwald-Letnikov and Caputo fractional derivatives [37], the Riesz potential [38], and so on. The most commonly used definition is the Grünwald-Letnikov (G-L) [39] definition:

D^α z(t) = lim_{h→0} (1/h^α) Σ_{k=0}^{T} (−1)^k (Γ(α + 1)/(Γ(k + 1) Γ(α − k + 1))) z(t − kh),  (7)

where α is the fractional order of a general signal z(t), Γ is the gamma function, and T is the truncation term. The discrete expression of G-L can be written as

D^α z(t) = Σ_{k=0}^{T} (−1)^k (Γ(α + 1)/(Γ(k + 1) Γ(α − k + 1))) z(t − k).  (8)

Algorithms that exploit fractional-order learning and training can more easily leap out of local extreme points. The jellyfish search algorithm is therefore integrated with the fractional-order modified strategy, which adjusts the updates of the jellyfish position under the ocean current and of the passive motion position. Letting T = 4 in equation (8) gives

D^α z(t) = z(t) − α z(t − 1) + (α(α − 1)/2) z(t − 2) − (α(α − 1)(α − 2)/6) z(t − 3) + (α(α − 1)(α − 2)(α − 3)/24) z(t − 4).  (9)

The fractional derivative result is related to the current term and previous state values, and the influence of past states decreases with time. The position update of a jellyfish moving under the effect of the ocean current and the position update of passive movement when the jellyfish group has just formed are equations (2) and (3), respectively. Figure 3 shows how the fractional-order correction affects the update.
Viewing the left-hand side of equations (2) and (3) as the G-L difference of order α = 1 with T = 1, the discrete form is

D^1 z(t + 1) = z(t + 1) − z(t).  (10)

Replacing this first-order difference with the fractional-order one of equation (9), the position update formula of a jellyfish affected by the ocean current after fractional-order modification becomes

Z_i(t + 1) = α Z_i(t) − (α(α − 1)/2) Z_i(t − 1) + (α(α − 1)(α − 2)/6) Z_i(t − 2) − (α(α − 1)(α − 2)(α − 3)/24) Z_i(t − 3) + r_2 × (Z* − β × r_1 × μ),  (11)

and the update formula of the passive motion position after fractional-order modification is

Z_i(t + 1) = α Z_i(t) − (α(α − 1)/2) Z_i(t − 1) + (α(α − 1)(α − 2)/6) Z_i(t − 2) − (α(α − 1)(α − 2)(α − 3)/24) Z_i(t − 3) + γ × r_3 × (ub − lb).  (12)

It is worth noting that when terms with coefficients of 1/120, 1/720, or beyond are multiplied by the remaining factors of equation (8), their values become negligible and hardly affect the position update. Therefore, the higher-order terms are discarded.
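To make the truncated G-L weights concrete, the sketch below (our own illustration, not code from the paper) computes the coefficients (−1)^k C(α, k) by a stable recurrence and applies the resulting memory-term update; with α = 1 and T = 1 it reduces to the ordinary first-order update.

```python
def gl_coeffs(alpha, T):
    """Truncated Gruenwald-Letnikov weights w_k = (-1)^k * C(alpha, k), k = 0..T."""
    w = [1.0]
    for k in range(1, T + 1):
        # recurrence: w_k = w_{k-1} * (k - 1 - alpha) / k
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def fractional_update(history, step, alpha, T=4):
    """Fractional-order position update: the new position depends on the
    current step plus a decaying memory of the last T positions.
    history[-1] is the most recent position."""
    w = gl_coeffs(alpha, T)
    depth = min(T, len(history))
    memory = sum(w[k] * history[-k] for k in range(1, depth + 1))
    return step - memory
```

Because |w_k| shrinks as k grows, older positions contribute less and less, which is the memory-decay property noted in the text.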

Gaussian Mutation Mechanism.
The Gaussian distribution, also known as the normal distribution, is an important probability distribution in mathematics, artificial intelligence, and other related fields [40]. The Gaussian mutation mechanism is employed to generate a further variant, which retains the better position after comparison with the objective value of the current optimal individual. This mechanism balances the algorithm's local and global search well [41]. The probability density function (PDF) of the Gaussian distribution is as follows:

f(x) = (1/(σ √(2π))) exp(−(x − μ)^2/(2σ^2)),  (13)

where μ is the expectation of the Gaussian distribution and σ is its standard deviation. The resulting new mutation location can be described as

Z_new(i, j) = Z_best(i, j) × (1 + θ × G(0, 1)),  (14)

where θ is a decreasing random number in the range of 0 and 1, G(0, 1) is the standard Gaussian distribution, and Z_best(i, j) is the best location of the current iteration.

Figure 2: The direction of movement of jellyfish.
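As an illustration, the following sketch applies a multiplicative Gaussian perturbation to the current best position and keeps the mutant only if it improves the objective; the multiplicative form and the greedy acceptance rule are our assumptions for illustration, since several variants of this mechanism exist in the literature.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_mutation(z_best, f, theta=0.5):
    """Perturb the best position with Gaussian noise (multiplicative form,
    assumed here) and keep the better of the two under objective f
    (minimization). theta scales the mutation strength."""
    mutant = z_best * (1.0 + theta * rng.standard_normal(z_best.shape))
    return mutant if f(mutant) < f(z_best) else z_best
```

Because the acceptance is greedy, the returned position is never worse than the input, which is what lets the mechanism balance exploration (the random perturbation) against exploitation (keeping the incumbent).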

Explicit Steps for the Improved Jellyfish Search Optimizer.
The fractional-order modified strategy and Gaussian mutation mechanism are incorporated into JS.
The convergence precision of the JS algorithm is effectively enhanced, and the search performance of the original JS algorithm is improved. The exact steps of FOGJS are as follows: Step 1. Set the parameters associated with FOGJS, such as the population size N, problem dimension Dim, upper bound ub, lower bound lb, maximum iterations M_iter, the distribution coefficient β, and the motion coefficient γ.
Step 2. Randomly initialize a population of size N according to the chaotic logistic map (equation (1)), and set time t = 1.
Step 3. Compute the fitness value of each individual; record the optimal fitness value and the corresponding optimal position Z*.
Step 4. While t < M_iter, compute the time control c(t) according to equation (6); if c(t) ≥ 0.5, the jellyfish follows the ocean current, and its position is updated by equation (11).
Step 5. If c(t) < 0.5, generate a random number r between 0 and 1. If r > (1 − c(t)), the jellyfish updates its position by the passive motion of equation (12); if r ≤ (1 − c(t)), the position is computed by the active motion of equation (4).
Step 6. Judge whether the updated position crosses the boundary; if so, the position is reset to the boundary. At the same time, the fitness value of each jellyfish is evaluated: if it is smaller than the current optimal value, it is accepted as the new optimal value, and the corresponding position becomes the new optimal position Z*.
Step 7. Through the Gaussian mutation mechanism of equation (14), the optimal position is mutated to produce a new solution, whose fitness is compared with that of the optimal solution to decide whether the mutated solution replaces it.
Step 8. Update the value of t (t = t + 1); if t < M_iter, return to Step 4.
Step 9. Otherwise, output the optimal value and the optimal position.
To present the enhanced algorithm more clearly, Algorithm 1 gives the pseudocode of FOGJS. Line 1 updates the candidate solution positions through the logistic chaotic map, lines 9 and 14 apply the fractional-order modified update formulas, and lines 22-26 perform the update after generating the mutated solution through the Gaussian mutation mechanism. Figure 4 shows the flowchart of the proposed FOGJS algorithm. It first defines the parameters listed above and then executes the JS update based on the fractional-order modification. The optimal solution is taken as the starting point of the Gaussian mutation mechanism. The loop continues until the termination conditions are met, and then the optimal solution found is output.
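The steps above can be condensed into the following self-contained sketch. This is our own simplified re-implementation for illustration only: it uses first-order position updates (omitting the fractional memory terms of equations (11)-(12)) and a fixed mutation scale, so it shows the control flow rather than the exact FOGJS update.

```python
import numpy as np

def fogjs_sketch(f, dim, lb, ub, n=30, m_iter=200, beta=3.0, gamma=0.1, seed=0):
    """Simplified FOGJS-style loop (Steps 1-9); fractional memory omitted."""
    rng = np.random.default_rng(seed)
    # Steps 1-2: logistic-map initialization scaled into [lb, ub]
    z = np.empty((n, dim))
    z[0] = rng.uniform(0.01, 0.99, dim)
    for i in range(1, n):
        z[i] = 4.0 * z[i - 1] * (1.0 - z[i - 1])
    Z = lb + z * (ub - lb)
    # Step 3: record the initial best
    fit = np.array([f(x) for x in Z])
    k = int(fit.argmin())
    best, best_f = Z[k].copy(), float(fit[k])
    for t in range(1, m_iter + 1):
        c = abs((1.0 - t / m_iter) * (2.0 * rng.random() - 1.0))  # time control
        for i in range(n):
            if c >= 0.5:                                 # Step 4: ocean current
                trend = best - beta * rng.random(dim) * Z.mean(axis=0)
                Z[i] = Z[i] + rng.random(dim) * trend
            elif rng.random() > 1.0 - c:                 # Step 5: passive motion
                Z[i] = Z[i] + gamma * rng.random(dim) * (ub - lb)
            else:                                        # Step 5: active motion
                j = int(rng.integers(n))
                d = Z[j] - Z[i] if f(Z[j]) < f(Z[i]) else Z[i] - Z[j]
                Z[i] = Z[i] + rng.random(dim) * d
            Z[i] = np.clip(Z[i], lb, ub)                 # Step 6: boundary check
            fi = float(f(Z[i]))
            if fi < best_f:
                best, best_f = Z[i].copy(), fi
        # Step 7: Gaussian mutation of the current best (greedy acceptance)
        mut = np.clip(best * (1.0 + 0.1 * rng.standard_normal(dim)), lb, ub)
        fm = float(f(mut))
        if fm < best_f:
            best, best_f = mut, fm
    return best, best_f                                  # Steps 8-9
```

A full implementation would replace the first-order moves in the ocean-current and passive branches with the fractional-memory updates and use a decaying mutation scale.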

The Complexity Analysis of FOGJS.
The computational complexity of FOGJS mainly depends on three aspects: the initialization stage, fitness function evaluation, and position update. The complexity of the initialization stage is O(N), where N is the number of candidate solutions. Different problems lead to different fitness-function complexities, so we do not consider them here. Finally, the complexity of the position update is O(N × M), where M represents the maximum number of iterations. Therefore, the computational complexity of the FOGJS algorithm is O(N × M). In the following sections, we use diverse benchmark functions and actual optimization problems to verify the performance of FOGJS in dealing with different optimization problems.

Numerical Examples and Analysis
This section evaluates the comprehensive performance of the improved FOGJS algorithm in solving challenging test functions. Two series of popular functions are used, including 29 cec2017 benchmarks and 10 cec2019 benchmarks [42]. The optimization results of FOGJS are analyzed and compared with other well-known optimization algorithms. Meanwhile, to ensure the consistency and reliability of the improved FOGJS, 20 independent tests were conducted on FOGJS and the other algorithms. The mean, standard deviation (STD), the worst solution to date, and the best solution to date are reported. To achieve a fair comparison, the other algorithms use the same number of iterations (1000) and population size (30) as FOGJS.

Evaluation Index.
The following subsections analyze the performance differences between the improved FOGJS algorithm and the comparison algorithms through the following evaluation indicators.

The Best Results.

best = min{best_1, best_2, . . . , best_runs},  (15)

where best_i is the best result of each run and runs is the number of runs of each algorithm on each benchmark function, which is taken as 20 in this paper.

Input: The parameters of JS such as the distribution coefficient β and the motion coefficient γ; population size N, dimension Dim, and maximum iterations M_iter.
Output: Optimal fitness value.
Construct the initial value for the population through the logistic chaotic map (Z_i).

While (t < M_iter) do
    Calculate the fitness of each individual and choose the best location Z*
    Compute the time control c(t) using equation (6)
    For i = 1 to N do
        If c(t) ≥ 0.5 then // jellyfish follows the ocean current
            Update the jellyfish position by equation (11) with fractional-order modification
        Else // jellyfish moves inside the swarm
            If rand(0, 1) ≥ (1 − c(t)) then // passive motion
                Update the jellyfish position by equation (12) with fractional-order modification
            Else // active motion
                Update the jellyfish position by equation (4)
            End if
        End if
        Check the boundary of the jellyfish location and calculate the new fitness
        Update the location of the jellyfish and the best location Z*
    End for
    Mutate Z* by the Gaussian mutation mechanism of equation (14) and keep the better solution
End while

Figure 4: Flowchart for the proposed FOGJS optimization algorithm.

Wilcoxon's Rank-Sum Test.
Wilcoxon's rank-sum test is a nonparametric test used to assess the statistical difference between two sample sets. It is an effective tool to check whether FOGJS is significantly better than the comparison algorithms in the overall distribution of the experimental results.
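For illustration, the statistic underlying the rank-sum test can be computed as in the following plain-Python sketch of the Mann-Whitney U statistic, with average ranks for ties (function name ours); in practice a library routine such as scipy.stats.ranksums would also supply the p-value used for the significance decision.

```python
def rank_sum_u(a, b):
    """Mann-Whitney U statistic for sample a vs. sample b: rank the pooled
    values (ties get average ranks), sum the ranks of a, and subtract the
    minimum possible rank sum."""
    pooled = [(v, 0) for v in a] + [(v, 1) for v in b]
    pooled.sort(key=lambda p: p[0])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1                      # extend the block of tied values
        avg = (i + j) / 2 + 1           # average of the 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    r_a = sum(r for r, (_, g) in zip(ranks, pooled) if g == 0)
    return r_a - len(a) * (len(a) + 1) / 2
```

U ranges from 0 (every value of a below every value of b) to len(a) × len(b) (the reverse), so extreme values indicate a systematic difference between the two result distributions.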

Sensitivity Analysis of Parameters.
Several parameters introduced with the improvements affect the performance of the JS algorithm, such as the distribution coefficient β, related to the trend length, and the motion coefficient γ, associated with the length of the position update. A sensitivity analysis is now performed to properly understand the effect of these parameters on FOGJS. The coefficient β was changed from 0.1 to 1 in steps of 0.1, and the coefficient γ was changed from 1 to 10 in steps of 1. The algorithm was run on the 29 cec2017 test functions, with the dimension set to 30 and the population set to 30. Figure 5 shows the average rank obtained for each setting of the two parameters (β, γ).
The optimal values are β = 0.4 and γ = 2. It is important to note that these parameters were tuned for the case where both the fractional-order and Gaussian mutation strategies are applied, so they differ from the parameters recommended for the original JS algorithm. Experiments show that the FOGJS algorithm is feasible and effective at β = 0.4 and γ = 2.

Exploration and Exploitation.
The distance between individuals in different dimensions and the overall trend determine whether the population tends to diverge or aggregate. When there is a trend of divergence, the differences between individuals across dimensions become more prominent, which means that each individual explores the space in a differentiated way. This trend lets the algorithm explore the solution space more comprehensively. In contrast, when there is a tendency towards aggregation, the population concentrates on a widely recognized partial region of the space, reducing the differentiation between individuals and performing a more detailed exploitation of that region. Maintaining a good balance between these two modes of exploration and exploitation is necessary to find the optimal solution; algorithms that balance them poorly usually fail to produce satisfactory results.
Studying convergence plots and the statistics (mean, best, worst, and standard deviation) over multiple runs alone does not reveal an algorithm's exploration process. Therefore, we resort to the dimension-wise diversity measure proposed by Hussain et al. in [58], which calculates the variability between individuals and the population in each dimension:

Div_j = (1/n) Σ_{i=1}^{n} |Median(z^j) − z_i^j|,  (16)

where Median(z^j) is the median of the j-th dimension over the whole group, z_i^j is the j-th dimension of individual i, and n is the population size. After calculating Div_j for each dimension, the average over the Dim dimensions gives the diversity index.
By recording the average diversity of the population after each iteration, we can calculate the exploration and exploitation percentages of each iteration according to the following formulas:

Exploration% = (Div/Div_max) × 100,  Exploitation% = (|Div − Div_max|/Div_max) × 100,  (17)

where Div is the average diversity of the population in each iteration and Div_max is the maximum of the average diversity over all iterations. Figure 6 shows the exploration and exploitation diagrams of FOGJS for some of the cec2017 test functions (cec03, cec06, cec09, cec10, cec11, cec12, cec13, cec14, cec16, cec17, cec19, cec20, cec22, cec24, and cec27). From cec03, cec11, cec20, and cec24, we can find that FOGJS has a high exploration rate at the beginning of the iterations, transitions rapidly to a high exploitation rate in the middle, and finishes the iterations with a high exploitation profile. This process indicates that the higher exploration rate in the early stage of FOGJS ensures the global search and prevents getting trapped in a local optimum, while the higher exploitation rate in the later stage ensures the higher accuracy of the optimal solution. In the cec10, cec14, cec17, and cec20 test functions, exploration and exploitation maintain almost the same ratio throughout the iterations, indicating that FOGJS can balance the two when handling these functions. In cec06, cec09, cec12, cec13, and cec19, the exploitation rate dominates throughout the iterative process, while in cec27 the opposite is true.
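The diversity measure and the two percentages can be computed as in the sketch below (our own implementation of the formulas above; function names are ours).

```python
import numpy as np

def dimension_diversity(Z):
    """Diversity of a population Z (rows = individuals, columns = dimensions):
    mean absolute deviation from the per-dimension median, averaged over
    individuals and dimensions."""
    med = np.median(Z, axis=0)
    return float(np.mean(np.abs(Z - med)))

def xpl_xpt(div_history):
    """Exploration% and Exploitation% per iteration from the recorded
    diversity history (one Div value per iteration)."""
    div = np.asarray(div_history, dtype=float)
    div_max = div.max()
    xpl = 100.0 * div / div_max
    xpt = 100.0 * np.abs(div - div_max) / div_max
    return xpl, xpt
```

Plotting `xpl` and `xpt` against the iteration index yields exactly the kind of exploration/exploitation diagram discussed for Figure 6: the two curves always sum to at most 200 and cross where diversity is half its maximum.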
In summary, FOGJS adopts different strategies when dealing with different test functions and therefore has a solid dynamic and adaptable nature. Figure 7 shows the histogram of the percentages of exploration and exploitation for the different test functions. As can be seen, there are 12 test functions with more than 50% exploitation, while cec10, cec20, and cec27 are more focused on exploration.

Comparison Results Using cec2017 Test Functions.
To further prove the performance of the improved JS algorithm, a challenging set of test functions named cec2017 is selected. It contains 29 functions, at least half of which are challenging hybrid and composition functions. To evaluate the stability of the improved FOGJS, these functions are used to test it over 20 independent runs with 1000 iterations and a population size of 30. The results are compared with other excellent algorithms, including JS [36], GSA [44], MVO [29], WOA [46], GOA [56], LSO [57], HHO [52], ASO [53], AOA [43], and AO [19]. The AVE (average value) and STD (standard deviation) values on the considered benchmarks are calculated for each algorithm. In addition, to make the data more convincing, the last few rows of the tables contain the statistical results of the Wilcoxon rank-sum test, where "+" indicates that the results of the other algorithm are better than FOGJS, "−" means the opposite, and "=" indicates that there is no significant difference between FOGJS and the comparison method. The best average obtained by the 12 algorithms is shown in bold in the tables. Table 2 provides the evaluation results of the best solution to date. The average rank of FOGJS is 2.28, ranking first. FOGJS provides the best results of all algorithms on 10 functions and also shows strong competitiveness on the others. Among the other algorithms, MVO ranks second, behind only FOGJS, achieving the best results on 10 functions but showing poor competitiveness elsewhere. JS and CSO achieved the best results on 6 and 3 functions, respectively, while GSA, WOA, GOA, LSO, HHO, ASO, AOA, and AO did not achieve the best performance on any function. In conclusion, FOGJS is superior to the other optimization algorithms in accuracy and occupies first place. In addition, the average rank of the original JS is 2.69, worse than that of FOGJS.
It can be seen that the fractional-order modified strategy and Gaussian mutation mechanism improve the exploration ability of JS in finding the optimal solution, and the FOGJS algorithm can search for better solutions with a faster convergence rate than the original algorithm. Figure 8 displays the average convergence curves on the 29 test functions in 30 dimensions. Analyzing the curves, the iterative curve of FOGJS escapes local solutions and approaches the near-optimal solution in the early phase of iteration, then refines the search near the optimal solution in the later development stage. Specifically, in the 30-dimensional convergence curves, the convergence advantages of FOGJS on cec01, cec05, cec16, cec21, and cec29 are most obvious. This observation shows that FOGJS can be regarded as one of the most reliable algorithms. Figure 9 shows the box plots of the different algorithms. In most test functions, the box of the FOGJS algorithm is more concentrated and its objective-value distribution is narrower than those of the other algorithms, which shows that the enhanced algorithm has good stability.
The results of the Wilcoxon rank-sum test are shown in Table 3. Table 4 shows the number of function evaluations and execution times for each comparison algorithm. FOGJS has a longer execution time than the original algorithm, which is due to the added improvement strategies. FOGJS has a shorter execution time than the GOA and ASO algorithms but still requires a relatively long running time. NFFEs, the number of function evaluations of an algorithm, are another expression of execution efficiency. JS, GSA, MVO, WOA, ASO, and AOA all have smaller NFFEs, while FOGJS performs 60030 NFFEs in one run.

Comparison Results Using cec2019 Test Functions.
This section analyzes the FOGJS results on the ten functions of the latest CEC benchmark suite (cec2019). All results are obtained from 20 independent runs with a population size of 30 and 1000 iterations, and are compared with JS [36], PSO [45], DE [28], LSA [49], GBO [48], SOA [55], HGS [35], SSA [47], HBO [54], WHOA [50], and AOA [43]. As shown in Table 5, the worst value, average value, best value, and STD of each considered algorithm are compared. In addition, the results of the Wilcoxon rank-sum test are shown in Table 6. According to the results in Table 5, the average rank of the FOGJS is 2.8, ranking first. FOGJS provides the best results on cec01, cec02, cec08, and cec10 and is also very competitive on the other test functions. WHOA ranks second, behind FOGJS; it achieves the best result on one function and shows good competitiveness on the others. Among the remaining algorithms, HBO achieved the best result on four functions, while SSA and HGS each achieved the best result on one function. The results show that the improved FOGJS has better convergence accuracy than the other algorithms. The last row of Table 5 gives the averages and the overall ranking.
The Wilcoxon rank-sum test results of the JS, PSO, DE, GBO, LSA, and SOA algorithms are 1/5/4, 0/2/8, 0/0/10, 2/3/5, 1/4/5, and 0/0/10, respectively. The results of SSA, HGS, HBO, WHOA, and AOA are 0/5/5, 0/6/4, 4/2/4, 1/5/4, and 0/0/10, respectively. Figure 10 shows the convergence curves on the 10 test functions. Analyzing the curves shows that FOGJS approaches the optimal solution faster than the other algorithms; in the middle development stage of iteration, it searches near the optimal solution and replaces the current solution with any newly found better one. Specifically, the convergence advantage of FOGJS on cec01, cec02, cec04, cec06, and cec08 is apparent in the convergence curves. Figure 11 shows the box plots of the 12 algorithms on the 10 test functions. The FOGJS box is short and has few outliers over the 20 runs, demonstrating the stability and balance of FOGJS. Table 7 lists the number of function evaluations and the execution time of each compared algorithm.

Income Forecast Model of Rural Resident Based on Improved Jellyfish Search Optimizer
Agriculture, rural areas, and farmers are important issues for the long-term stability of the country. To adopt a series of corresponding policies that benefit and support agriculture, forecasting the income trend of farmers becomes a necessary task. Disposable income represents the sum of final consumer spending and savings obtained by residents, and it plays an important role in estimating per capita consumption power and understanding the productivity of an area. Thus, this paper studies the per capita disposable income of rural residents in Shaanxi Province; for brevity, it is referred to simply as the income of rural residents. As Figure 12 shows, the income of rural residents in Shaanxi Province increases from 1989 to 2020. In this section, a novel discrete fractional time-delayed grey model is established to solve the income forecasting problem.

Establishment of the TDFTDGM Model.
The grey model (GM) is a popular prediction approach that builds a grey differential prediction model from a small amount of incomplete information and describes the development trend of the internal system; it has been widely applied to population, economic, and climate forecasting. In [16], a fractional time-delayed grey model was proposed. However, converting the discrete equation into a continuous equation introduces a conversion error, which decreases the prediction accuracy. Thus, this paper establishes a novel discrete fractional time-delayed grey model with triangular residual correction (TDFTDGM). Firstly, given a raw nonnegative data set Z^(0) = (z^(0)(1), z^(0)(2), ..., z^(0)(n)), the m-order accumulated sequence Z^(m) = (z^(m)(1), z^(m)(2), ..., z^(m)(n)) is calculated by z^(m)(k) = Σ_{i=1}^{k} C(k−i+m−1, k−i) z^(0)(i), where the generalized binomial coefficient is C(k−i+m−1, k−i) = Γ(m+k−i)/(Γ(k−i+1)Γ(m)). Differential equation (22) is the basic fractional grey model (FGM) with order m. For the derivative on the left side of equation (22), the first backward difference gives the approximation in equation (23) when t = k; thus, when t = k, the left side of (22) can be approximated accordingly. Substituting equation (24) into (22) yields equation (25), and the discrete formula follows by the substitution defined there. Equation (26) is the discrete fractional time-delayed grey model (FTDGM), which can be expressed in matrix form as equation (27). Once the fractional order m is determined, the parameters β1, β2, and β3 can be estimated by the least squares solution in equation (28). Then, the value of z^(r)(k) can be obtained from equation (29) after determining β1, β2, and β3. The data set used in this experiment is the income of rural residents in Shaanxi Province from 1989 to 2020. The data from 1989 to 2013 are used as training data, and the rest are used as test data.
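The accumulation and parameter estimation steps above can be sketched in code. This is an illustrative reading of the standard fractional accumulation formula and the least squares step, not the paper's exact implementation; the construction of the data matrices B and Y from the accumulated sequence is omitted, and the toy series is not the real income data.

```python
import numpy as np
from math import gamma

def frac_accumulate(z0, m):
    """m-order fractional accumulation:
    z^(m)(k) = sum_{i=1..k} C(k-i+m-1, k-i) * z^(0)(i),
    with C(k-i+m-1, k-i) = Gamma(m+k-i) / (Gamma(k-i+1) * Gamma(m))."""
    n = len(z0)
    zm = np.zeros(n)
    for k in range(1, n + 1):
        for i in range(1, k + 1):
            c = gamma(m + k - i) / (gamma(k - i + 1) * gamma(m))
            zm[k - 1] += c * z0[i - 1]
    return zm

def estimate_parameters(B, Y):
    """Least squares estimate beta = (B^T B)^(-1) B^T Y, as in the
    parameter estimation step of the model (equation (28))."""
    return np.linalg.lstsq(np.asarray(B, float), np.asarray(Y, float),
                           rcond=None)[0]

income = np.array([1.0, 2.0, 3.0, 4.0])  # toy data only
print(frac_accumulate(income, 1.0))      # order m = 1 -> cumulative sum
```

For m = 1 every coefficient reduces to 1, so the accumulation collapses to the ordinary cumulative sum of the classical GM, which is a quick sanity check on the implementation.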
Then, the process by which the FOGJS solves the rural residents' income forecasting model is displayed in Figure 13. The compared algorithms include the original JS [36], the sine cosine algorithm (SCA) [59], grey wolf optimizer (GWO) [60], rat swarm optimizer (RSO) [61], seagull optimization algorithm (SOA) [55], DE [28], and PSO [45]. Meanwhile, to reflect efficiency, the number of iterations is set to 30 to highlight each algorithm's ability to solve the problem in a short time. The parameters of the FOGJS algorithm are those obtained from the sensitivity analysis in Section 4.3, that is, β = 0.4 and c = 2. All algorithms are run 10 times independently with a population size of 50. Table 8 provides the results of the 10 runs, including the best value (Best), average value (Mean), worst value (Worst), and standard deviation (Std). According to the errors between the predicted and real data in Table 8, the FOGJS algorithm is the most suitable for solving the prediction model, because it performs best on all measure indexes. Though JS, DE, GWO, and PSO also perform well on the best values, they are much worse than the FOGJS algorithm in terms of average values. This illustrates that the other algorithms are highly susceptible to local optima, causing instability in their solutions.
Figure 14 also supports the above conclusion through boxplots. The lowest position and smallest height of the FOGJS box indicate that the FOGJS algorithm has strong robustness in solving the problem. Both the JS and DE algorithms have some outliers, showing that the quality of their solutions is easily affected by other factors, which needs to be avoided in practical applications. Moreover, Figure 15 shows the convergence curves of the different algorithms in solving the resident income prediction model.
The FOGJS algorithm has the fastest convergence speed in the early period of iteration, especially compared with the original JS algorithm. That is, the fractional-order modified mechanism improves the quality of the whole population, speeding up convergence to the optimal solution. In the later stages of the search, Gaussian mutation also plays a role in avoiding local optima, which can be observed in the enlarged subplot.
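The Gaussian mutation step credited above with escaping local optima can be sketched as follows. The mutation rate, noise scale, and scaling by the magnitude of the best solution are illustrative assumptions, not the paper's exact operator.

```python
import numpy as np

def gaussian_mutation(candidate, best, sigma=0.1, rate=0.2, rng=None):
    """Perturb a fraction `rate` of a candidate's dimensions with
    zero-mean Gaussian noise, scaled by sigma and by the magnitude of
    the current best solution, so late-stage search can still jump out
    of a local optimum. Parameter values are illustrative only."""
    rng = rng or np.random.default_rng()
    mutated = np.array(candidate, dtype=float, copy=True)
    mask = rng.random(mutated.shape) < rate        # dimensions to mutate
    noise = rng.normal(0.0, sigma, size=int(mask.sum()))
    mutated[mask] += noise * np.abs(np.asarray(best, float)[mask])
    return mutated

rng = np.random.default_rng(7)
x = np.zeros(10)                 # a stagnated candidate
best = np.full(10, 5.0)          # current best solution (toy values)
print(gaussian_mutation(x, best, rng=rng))  # a few entries perturbed
```

A common design choice, reflected here, is to apply the mutation only to a subset of dimensions so the candidate retains most of the structure it has already learned.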

The Comparison Results of Different Prediction Models.
According to the analysis in Section 5.2, the prediction model based on the FOGJS algorithm effectively solves the resident income prediction problem. The TDFTDGM + FOGJS approach also needs to be compared with other classical prediction models. In this section, six other prediction approaches are selected: GM [12], DGM [14], TRGM [13], FANGBM [15], FTDGM [16], and DFTDGM. Table 9 displays the measure indicators used in the prediction of resident income to illustrate the differences among the models. To compare the models fairly, two types of errors are calculated: the fitting error, between the data obtained by a model and the training data, and the prediction error, between the data obtained by a model and the test data. Table 10 displays the fitting results obtained by the different forecast models, and Table 11 shows the fitting errors with respect to the actual data. The TDFTDGM prediction model clearly performs better than the others on two evaluation indicators, which is marked in bold. Compared with GM, TRGM has a smaller fitting error. Meanwhile, with similar results on MAPE and RMSPE, TDFTDGM performs significantly better than DFTDGM on MAE and MSE, which illustrates that applying the triangular residual correction to the original model effectively improves prediction accuracy.
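The four measure indexes used in these comparisons can be computed as below. The definitions follow common usage (MAPE and RMSPE expressed as percentages); the toy data are illustrative and not the Shaanxi income series.

```python
import numpy as np

def forecast_errors(actual, predicted):
    """The four measure indexes used to compare the grey models:
    MAE, MSE, MAPE (in %), and RMSPE (in %)."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    err = a - p
    return {
        'MAE':   np.mean(np.abs(err)),
        'MSE':   np.mean(err ** 2),
        'MAPE':  100 * np.mean(np.abs(err / a)),
        'RMSPE': 100 * np.sqrt(np.mean((err / a) ** 2)),
    }

# Toy series: three observations and their model outputs
print(forecast_errors([100, 200, 400], [110, 190, 420]))
```

Computing these metrics separately on the 1989-2013 training segment and the held-out test segment yields the fitting errors and prediction errors reported in the tables.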

Figure 16 shows the fitting curves of the different models. The FTDGM and FANGBM models perform well until about 2004; after that, their fitted data differ greatly from the real income data. That is, these two models have serious defects when facing data with large fluctuations. After discretizing the FTDGM model, DFTDGM overcomes this disadvantage, as shown by the yellow curve in Figure 16. However, after 2010, the gap between the income data obtained by the DFTDGM model and the real income becomes larger. The red curve of the TDFTDGM model stays closer to the real data over time, which means the trigonometric correction function further improves the accuracy of the prediction model and makes it more suitable for long-term forecasting. The combined results show that the fitted data obtained by the TDFTDGM model agree best with the actual income changes. Table 12 shows the prediction results of all models, and the prediction errors are summarized in Table 13. For the prediction error, the advantages of TDFTDGM are even more obvious on all four evaluation indexes. In addition, the prediction error of the FANGBM model is large even though it fits the training data well: its mean absolute percentage error (MAPE) and root mean square percentage error (RMSPE) both exceed 10, indicating poor prediction ability.
Meanwhile, Figure 17 shows that the predicted curve of TRGM moves away from the real curve as the years increase. Thus, although the TRGM model also has small MAPE and RMSPE, its precision will decrease if predictions beyond 2020 are needed. In contrast, the data predicted by the TDFTDGM model fluctuate in the vicinity of the real data. Hence, the proposed model with triangular residual correction is superior to the others and provides more reliable and informative predictions. The income of rural residents over the next five years (2021-2025) is also predicted and shown in Table 14, and Figure 18 plots the predicted data. The curves of TRGM and FTDGM grow too fast and too slowly, respectively, which does not conform to the trend of income growth. The growth rate of the red TDFTDGM curve is more stable, making it more suitable for the long-term forecast of rural residents' income. Moreover, because the models perform differently on fitting and prediction errors, Figure 19 uses bar graphs to discuss their comprehensive performance. On the four measure indexes, the proposed TDFTDGM model has outstanding advantages over the other models. That is, by introducing the trigonometric correction function and the FOGJS into the discrete fractional time-delayed grey model, it becomes a competitive forecasting method for practical application.

Conclusion and Future Work
This paper predicts rural resident income by combining the TDFTDGM model and the FOGJS algorithm. Firstly, the fractional-order modified and Gaussian mutation mechanisms are introduced into the original JS algorithm. After analyzing the effect of different parameters, suitable parameters are selected for the FOGJS algorithm to improve its capacity. The exploration and exploitation diagrams show that the FOGJS algorithm keeps a balance between the two capacities. Meanwhile, compared with different kinds of algorithms on classical test functions, the FOGJS algorithm offers outstanding performance: for solution precision, it ranks first in mean rank on both cec2017 and cec2019. The convergence curves and box plots show that the FOGJS algorithm has advantages in convergence speed and stability, and the Wilcoxon rank-sum test results further support the conclusion that the improvement strategies form a special search mechanism with superior performance. Secondly, the discrete fractional time-delayed grey model with triangular residual correction is established, and the FOGJS algorithm is used to optimize the order of the model. The income forecasting experiment is divided into two parts. On the one hand, the original JS algorithm and seven other popular algorithms are used to solve the TDFTDGM model for comparison with the FOGJS algorithm; the results show that the FOGJS algorithm has outstanding precision and convergence speed, indicating that the FOGJS + TDFTDGM approach is an effective tool for predicting resident income. On the other hand, the FOGJS + TDFTDGM approach is compared with six other prediction models, and the experiments show that the TDFTDGM model produces predicted data closer to the real income data.
Thus, it can be concluded that the TDFTDGM model is more suitable for the long-term prediction of volatile data. The combination of the TDFTDGM and the FOGJS algorithm also provides an idea for determining the parameters in forecast models.
In future work, the fractional-order modified and Gaussian mutation mechanism may also be suitable choices for some metaheuristic algorithms (e.g., MRFO algorithm [62] and hybrid arithmetic optimization algorithm [63]) to improve their performance. Moreover, the TDFTDGM model can deal with forecast problems in other fields, such as population forecast, climate forecast, and resource forecast.

Data Availability
All data generated or analyzed during this study are included in this published article (and its supplementary information files).

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.