On the Multiwavelets Galerkin Solution of the Volterra–Fredholm Integral Equations by an Efficient Algorithm

We develop the multiwavelet Galerkin method to solve Volterra–Fredholm integral equations. To this end, we represent the Volterra and Fredholm operators in multiwavelet bases and thereby reduce the problem to a linear or nonlinear system of algebraic equations. The most interesting results arise in the linear case, where thresholding is employed to decrease the number of nonzero entries of the coefficient matrix, which in turn reduces the computational effort. The convergence analysis is carried out, and numerical experiments confirm it. To show the applicability of the method, we compare it with other methods; the comparison indicates that our results are more accurate.


Introduction
A mathematical model of the spatiotemporal development of an epidemic yields the following Volterra–Fredholm integral equation (VFIE):
$$u(x) = f(x) + \int_0^x k_1(x, s, u(s))\, ds + \int_\Omega p(x, s)\, h(u(s))\, ds, \qquad x \in \Omega, \qquad (1)$$
where the given functions $f: \Omega \to \mathbb{R}$ and $k_1: S \times \mathbb{R} \to \mathbb{R}$, with $S := \{(x, s): x, s \in \Omega\}$, are assumed to be continuous. Furthermore, we consider $k_2 := p(x, s)\, h(u(s))$ to be integrable, where $h$ is a nonlinear function and $p: S \to \mathbb{R}$ is a continuous function. The given functions are selected so that equation (1) has a unique solution.
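Before introducing the wavelet machinery, it may help to see (1) solved directly. The following minimal sketch (not the method proposed in this paper) applies Picard iteration with trapezoidal quadrature to a linear instance of (1); the kernels $k_1(x,s,u) = \tfrac12 xsu$, $p(x,s) = \tfrac12 xs$, $h(u)=u$, and the manufactured data $f$ making $u(x)=x$ the exact solution are all hypothetical illustration choices:

```python
import numpy as np

# Picard iteration for a linear instance of the VFIE (1):
#   u(x) = f(x) + int_0^x k1(x,s) u(s) ds + int_0^1 p(x,s) u(s) ds
# with k1(x,s) = p(x,s) = 0.5*x*s and f manufactured so that u(x) = x.
n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = x - x**4 / 6 - x / 6            # makes u(x) = x the exact solution

w = np.full(n, h)                   # trapezoidal weights on [0, 1]
w[0] = w[-1] = h / 2

u = np.zeros(n)
for _ in range(60):                 # the map is a contraction (constant 1/2)
    # Fredholm part: int_0^1 0.5*x*s*u(s) ds via trapezoidal rule
    fred = (0.5 * x[:, None] * x[None, :] * u[None, :]) @ w
    # Volterra part: 0.5*x * cumulative trapezoid of s*u(s)
    g = x * u
    cums = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) * h / 2)))
    u = f + 0.5 * x * cums + fred

err = np.max(np.abs(u - x))         # quadrature-limited error, O(h^2)
```

Since both kernels are bounded by $1/2$ on $[0,1]^2$, the fixed-point map contracts and the remaining error is set by the trapezoidal rule, not by the iteration.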
The parabolic boundary value problems lead to these types of integral equations, which arise widely in various physical and biological models. The VFIE also appears in the literature in the mixed form
$$u(x, t) = f(x, t) + \int_0^t \int_\Omega k(x, t, y, s, u(y, s))\, dy\, ds,$$
where $f$ and $k$ are analytic functions. Many authors have studied the mixed form of the VFIE numerically. Among these, we can mention the collocation method [1], projection method [2], spline collocation method [3], wavelet collocation method [4], Adomian decomposition method [5], and others [6–8].
Among these studies, we focus on a paper that uses the multiwavelet Galerkin method to solve the linear mixed VFIE [9]. In [9], the wavelet transform matrix and the operational matrix of integration are utilized to reduce the linear mixed VFIE to a sparse linear system of algebraic equations. A search of the literature turns up only a small number of papers on numerical and analytical solutions of problem (1). In [10], the Lagrange collocation method is employed to solve this problem. Wang and Wang [11] applied the Taylor collocation method to find the numerical solution of the equation and also investigated the convergence of their method. The Taylor collocation method was likewise applied by Karamete and Sezer [12] to solve this equation, as well as high-order linear Fredholm–Volterra integrodifferential equations [13]. The Bell polynomials have also been employed for solving this equation [14]. The motivation of our work is to develop the multiwavelet Galerkin method to solve (1). We split the problem into two configurations, linear and nonlinear; after applying the multiwavelet Galerkin method, the two types reduce to systems of linear and nonlinear algebraic equations, respectively.
The most interesting results arise in the linear type, where thresholding is employed to decrease the number of nonzero entries of the coefficient matrix. This gives a sparse system, a property that is very useful for reducing the computational cost. We use Alpert's multiwavelet bases, constructed in [15] following [16]; these bases have been used to solve PDEs, ODEs, and integral equations [9, 17–19]. They offer high vanishing moments, compact support, and properties such as orthogonality and interpolation [15]. These characteristics of Alpert's multiwavelets lead to sparse representations of differential and integral operators [16, 20].
This paper is organized as follows. In Section 2, we briefly introduce Alpert's multiwavelet bases. In Section 3, the multiwavelet Galerkin method is used to solve this problem, and the conditions for convergence of the proposed method are discussed. Section 4 contains numerical results that confirm the validity and efficiency of the proposed method, and Section 5 contains a few concluding remarks.

Alpert's Multiwavelet Bases
Let $V_J^r$ denote the space of piecewise polynomial bases of degree less than the multiplicity parameter $r$, spanned by
$$V_J^r = \operatorname{span}\{\phi_{b,k}^J := D_{2^J} T_b\, \phi_k:\ b = 0, \ldots, 2^J - 1,\ k \in R\}, \qquad R := \{0, 1, \ldots, r-1\},$$
where $D$ and $T$ are the dilation and translation operators, respectively, and $\{\phi_k\}_{k \in R}$ are the primal interpolating scaling bases introduced by Alpert et al. [16]. Given the nodes $\{\tau_k\}_{k \in R}$, the roots of the Legendre polynomial of degree $r$, the interpolating scaling bases are defined as
$$\phi_k(t) = \sqrt{1/\omega_k}\; L_k(t), \qquad t \in [0, 1],$$
where $\{L_k(t)\}_{k \in R}$ are the Lagrange interpolating polynomials at the points $\{\tau_k\}_{k \in R}$ and $\{\omega_k\}_{k \in R}$ are the Gauss–Legendre quadrature weights [16, 18]. These bases form an orthonormal basis on $\Omega$ with respect to the $L^2$-inner product. By definition, the space $V_J^r$ has dimension $N := 2^J r$, and the spaces $\{V_j^r\}_{j \in \mathbb{Z}^+ \cup \{0\}}$ are nested:
$$V_0^r \subset V_1^r \subset \cdots \subset V_J^r \subset \cdots. \qquad (6)$$
Hence, we consider the complement subspaces $W_j^r$ satisfying
$$V_{j+1}^r = V_j^r \oplus W_j^r,$$
where $\oplus$ denotes the orthogonal sum. According to (6), the space $V_J^r$ may be inductively decomposed as
$$V_J^r = V_0^r \oplus W_0^r \oplus W_1^r \oplus \cdots \oplus W_{J-1}^r.$$
The complement subspace $W_J^r$ has dimension $2^J r$ and is spanned by the multiwavelet bases $\{\psi_{b,k}^J\}$. Because Alpert's multiwavelets are introduced in full detail in [16], we omit their construction and refer the reader to [15, 16, 19].
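As a concrete check of these definitions, the following sketch builds the interpolating scaling functions $\phi_k = L_k/\sqrt{\omega_k}$ on $[0,1]$ (the normalization is our reading of [16]) and verifies their orthonormality numerically; since $L_j L_k$ has degree $2r-2$, the $r$-point Gauss–Legendre rule integrates it exactly, so orthonormality holds exactly:

```python
import numpy as np

def scaling_basis(r):
    # Gauss-Legendre nodes/weights on [-1, 1], mapped to [0, 1]
    x, w = np.polynomial.legendre.leggauss(r)
    tau = (x + 1) / 2
    omega = w / 2

    def phi(k, t):
        # Lagrange basis L_k at the nodes tau, normalized by 1/sqrt(omega_k)
        t = np.asarray(t, dtype=float)
        L = np.ones_like(t)
        for m in range(r):
            if m != k:
                L *= (t - tau[m]) / (tau[k] - tau[m])
        return L / np.sqrt(omega[k])

    return phi, tau, omega

phi, tau, omega = scaling_basis(4)

# verify orthonormality with a fine quadrature on [0, 1]
xq, wq = np.polynomial.legendre.leggauss(50)
xq = (xq + 1) / 2
wq = wq / 2
G = np.array([[np.sum(wq * phi(i, xq) * phi(j, xq)) for j in range(4)]
              for i in range(4)])
# G should equal the 4 x 4 identity (Gram matrix of an orthonormal set)
```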
Every function $p \in L^2(\Omega)$ can be represented in the form
$$p(x) \approx P_J^r\, p(x) = \sum_{b=0}^{2^J - 1} \sum_{k \in R} p_{J,b}^k\, \phi_{b,k}^J(x), \qquad (9)$$
where $P_J^r$ is the orthogonal projection that maps $L^2(\Omega)$ onto the subspace $V_J^r$. The coefficients $p_{J,b}^k$ are determined by $\langle p, \phi_{b,k}^J \rangle$, so in general we must compute integrals. To avoid these integrals, we apply the $r$-point Gauss–Legendre quadrature with a suitable choice of the weights $\omega_k$ and nodes $\tau_k$ for $k \in R$ [16, 19], via
$$p_{J,b}^k \approx 2^{-J/2} \sqrt{\omega_k}\; p\bigl(2^{-J}(b + \tau_k)\bigr). \qquad (10)$$
For an $r$-times continuously differentiable function $p \in C^r(\Omega)$, the projection error satisfies
$$\|p - P_J^r(p)\| \le 2^{-Jr}\, \frac{2}{4^r\, r!}\, \sup_{x \in \Omega} |p^{(r)}(x)|. \qquad (11)$$
For the full proof of this estimate and further details, we refer the reader to [15]. Thus, we can conclude that $P_J^r(p)$ converges to $p$ with rate of convergence $O(2^{-Jr})$.
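The rate $O(2^{-Jr})$ can be observed numerically. In the sketch below, the level-$J$ quadrature-based projection is realized as the piecewise Lagrange interpolant at the scaled Gauss–Legendre nodes (which is what the coefficient formula produces for interpolating scaling functions), and the measured $L^2$-error decay rate, $\log_2$ of the error ratio between consecutive levels, should approach $r$:

```python
import numpy as np

def legendre_nodes01(r):
    # Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1]
    x, w = np.polynomial.legendre.leggauss(r)
    return (x + 1) / 2, w / 2

def l2_projection_error(p, r, J):
    # On each cell [2^-J b, 2^-J (b+1)] the projection is the Lagrange
    # interpolant of p at the scaled Gauss-Legendre nodes; the L2 error
    # is measured with a finer quadrature per cell.
    tau, _ = legendre_nodes01(r)
    xq, wq = np.polynomial.legendre.leggauss(3 * r)
    xq = (xq + 1) / 2
    wq = wq / 2
    h = 2.0 ** (-J)
    err2 = 0.0
    for b in range(2 ** J):
        vals = p(h * (b + tau))          # samples at the cell's nodes
        xf = h * (b + xq)                # fine quadrature points in the cell
        interp = np.zeros_like(xf)
        for k in range(r):               # evaluate the Lagrange interpolant
            L = np.ones_like(xq)
            for m in range(r):
                if m != k:
                    L *= (xq - tau[m]) / (tau[k] - tau[m])
            interp += vals[k] * L
        err2 += h * np.sum(wq * (p(xf) - interp) ** 2)
    return np.sqrt(err2)

r = 3
errs = [l2_projection_error(np.exp, r, J) for J in (2, 3, 4)]
rates = [np.log2(errs[i] / errs[i + 1]) for i in range(2)]
# rates should be close to r = 3, since the error behaves like 2^(-J*r)
```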

Assume that the vector function
$$\Phi(x) := [\phi_{0,0}^J(x), \ldots, \phi_{2^J-1,\, r-1}^J(x)]^T$$
includes the scaling functions; it is called the multiscaling function. Approximation (9) may be rewritten using the vector $P$ with entries $P_{br+k+1} := p_{J,b}^k$ as
$$p(x) \approx P^T \Phi(x),$$
where $P$ is an $N$-dimensional vector. The building blocks of this basis construction can be applied to approximate a higher-dimensional function. To this end, one can introduce the two-dimensional subspace $V_J^{r,2} := V_J^r \times V_J^r$. Thus, an approximation of $p: \Omega^2 \to \mathbb{R}$ takes the form
$$p(x, y) \approx \Phi^T(x)\, P\, \Phi(y),$$
where the components of the square matrix $P$ of order $N$ are obtained by
$$P_{br+k+1,\, b'r+k'+1} \approx 2^{-J} \sqrt{\omega_k\, \omega_{k'}}\; p\bigl(2^{-J}(b + \hat{\tau}_k),\, 2^{-J}(b' + \hat{\tau}_{k'})\bigr),$$
where $\hat{\tau}_k = (\tau_k + 1)/2$. Consider the $2r$-th partial derivatives of $p: \Omega^2 \to \mathbb{R}$ to be continuous. Under this assumption, the error of this approximation can be bounded as
$$\|p - \Phi^T P\, \Phi\| \le M_{\max}\, 2^{-Jr},$$
where $M_{\max}$ is a constant depending on the partial derivatives of $p$. Given the orthogonal projection operator $Q_j^r = P_{j+1}^r - P_j^r$ that maps $L^2(\Omega)$ onto $W_j^r$, the multiscale projection operator $M_J^r$ can be represented as
$$M_J^r = P_0^r + \sum_{j=0}^{J-1} Q_j^r,$$
and consequently, any function $p \in L^2(\Omega)$ can be approximated as a linear combination of multiwavelets and single-scale interpolating scaling functions:
$$p(x) \approx M_J^r\, p(x) = \sum_{k \in R} p_{0,0}^k\, \phi_{0,k}^0(x) + \sum_{j=0}^{J-1} \sum_{b=0}^{2^j - 1} \sum_{k \in R} \tilde{p}_{j,b}^k\, \psi_{b,k}^j(x). \qquad (18)$$
Note that we can compute the coefficients $p_{0,0}^k$ by using (10), but in many cases the multiwavelet coefficients from level zero up to the higher level $J - 1$ must be evaluated numerically. To avoid this, we use the multiwavelet transform matrix $T_J$, introduced in [18, 19].
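A one-cell ($J = 0$) sketch of the two-dimensional construction above: sample $p$ on the tensor grid of Gauss–Legendre nodes and evaluate the resulting tensor-product Lagrange interpolant, whose error shrinks rapidly with $r$ (the test function $p(x,y) = e^{xy}$ is an arbitrary smooth choice for illustration):

```python
import numpy as np

# tensor-product interpolating-scaling sketch on one cell (J = 0):
# approximate p(x, y) by the 2D Lagrange interpolant at the
# Gauss-Legendre nodes tau_k on [0, 1] in each direction
r = 6
xg, _ = np.polynomial.legendre.leggauss(r)
tau = (xg + 1) / 2

def lagrange_matrix(t, tau):
    # L[k, i] = L_k(t_i): Lagrange basis at the nodes tau, evaluated at t
    L = np.ones((len(tau), len(t)))
    for k in range(len(tau)):
        for m in range(len(tau)):
            if m != k:
                L[k] *= (t - tau[m]) / (tau[k] - tau[m])
    return L

p = lambda x, y: np.exp(x * y)
P = p(tau[:, None], tau[None, :])     # samples at the node grid

t = np.linspace(0, 1, 50)
L = lagrange_matrix(t, tau)
approx = L.T @ P @ L                  # interpolant on the evaluation grid
exact = p(t[:, None], t[None, :])
err = np.max(np.abs(approx - exact))  # small already at r = 6
```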
This matrix connects the multiwavelet bases and the multiscaling functions. This representation helps to rewrite equation (18) in the form
$$p(x) \approx \tilde{P}_J^T\, \Psi(x),$$
where $\tilde{P}_J$ is the $N$-dimensional vector whose entries are $p_{0,0}^k$ and $\tilde{p}_{j,b}^k$, obtained by employing the multiwavelet transform matrix $T_J$ as $\tilde{P}_J = T_J P_J$. The multiwavelet coefficients (details) become small with increasing refinement level when the underlying function is (locally) smooth. If the multiwavelet bases have $N_r^\psi$ vanishing moments [19, 21], then the details decay at the rate $2^{-J N_r^\psi}$ [17]. Because the vanishing moments of Alpert's multiwavelets equal $r$, the details decay at the rate $2^{-Jr}$. This allows us to truncate the full wavelet transform while preserving most of the necessary data. Thus, we can set to zero all details below a certain threshold $\varepsilon$ using the thresholding operator $C_\varepsilon$:
$$C_\varepsilon\, \tilde{P}_J = \bar{P}_J, \qquad (22)$$
where the elements of $\bar{P}_J$ are determined by
$$\bar{p}_{j,b}^k = \begin{cases} \tilde{p}_{j,b}^k, & (j, b, k) \in D_\varepsilon, \\ 0, & \text{otherwise}, \end{cases}$$
with $D_\varepsilon := \{(j, b, k): |\tilde{p}_{j,b}^k| > \varepsilon\}$. Now, we can bound the approximation error after thresholding [17] via
$$\|p - P_{J,D_\varepsilon}^r(p)\| \le C_{\mathrm{thr}}\, \varepsilon,$$
where $P_{J,D_\varepsilon}^r$ is the projection operator after thresholding with the threshold $\varepsilon$ and $C_{\mathrm{thr}} > 0$ is a constant independent of $J$ and $\varepsilon$.
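The operator $C_\varepsilon$ amounts to a hard threshold on the coefficient vector. A minimal sketch (the coefficient vector here is synthetic, standing in for an actual multiwavelet transform):

```python
import numpy as np

def hard_threshold(coeffs, eps):
    # C_eps: zero every coefficient with |p| <= eps,
    # i.e. keep exactly the index set D_eps = { |p| > eps }
    out = coeffs.copy()
    out[np.abs(out) <= eps] = 0.0
    return out

rng = np.random.default_rng(0)
P = rng.normal(scale=1e-3, size=1000)   # many small "detail" coefficients
P[:10] = 1.0                            # a few significant coefficients
Pt = hard_threshold(P, 1e-2)
kept = np.count_nonzero(Pt)             # only the significant ones survive
```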

Multiwavelet Galerkin Method
In order to obtain the multiwavelet Galerkin solution of (1), assume that the solution can be approximated as an expansion in Alpert's multiwavelets, i.e.,
$$u(x) \approx u_N(x) = U^T \Psi(x),$$
where the $N$-dimensional vector $U$ of unknowns must be specified. This approximate solution is selected such that it satisfies (1) approximately. It is also obtained from the solution of the best-approximation (minimization) problem
$$\min_{v \in V_J^r} \|u - v\|_{L^2(\Omega)}.$$
Since the approximation space is a finite-dimensional subspace of the inner product space $L^2(\Omega)$, it can be shown that this minimization problem has a unique solution [22].
Let us rewrite (1) in the operator form
$$u = f + V(u) + F(u), \qquad V(u)(x) := \int_0^x k_1(x, s, u(s))\, ds, \qquad F(u)(x) := \int_\Omega p(x, s)\, h(u(s))\, ds,$$
where $F$ is assumed to be compact from $L^2(\Omega)$ to $L^2(\Omega)$ and $k_1$ is a given continuous function. Under these assumptions, $V(u)(x)$ is a continuous function. Due to (11), the right-hand side can be approximated at a rate of at least $2^{-Jr}$. Applying the Galerkin conditions, we obtain the system (35) of linear or nonlinear algebraic equations. Thanks to the high vanishing moments $r$ and increasing refinement level $J$, for the linear type of this equation we can discard coefficients by the hard thresholding introduced in the previous section. We can then reduce the computational effort by using methods suited to this type of system, such as the GMRES method. GMRES was introduced by Saad and Schultz [23] for solving large, sparse linear systems; it generates an approximate solution whose residual norm is minimal over a Krylov subspace. In this paper, we use the restarted GMRES algorithm (Algorithm 2) [24]. To use this method, we must first describe the Arnoldi procedure, an algorithm for building an orthogonal basis of the Krylov subspace $\kappa_m$. The $m$-th Krylov subspace is defined as
$$\kappa_m := \operatorname{span}\{r_0, \Lambda r_0, \Lambda^2 r_0, \ldots, \Lambda^{m-1} r_0\}.$$
Here, we assume that the linear form of system (35) is $\Lambda U = D$, that $W_m$ is an $N \times m$ matrix with column vectors $w_1, \ldots, w_m$, and that $H_m$ is an $(m+1) \times m$ Hessenberg matrix whose nonzero entries $h_{i,j}$ are defined by Algorithm 1.
Algorithm 2 (restarted GMRES):
(1) Compute $r_0 = D - \Lambda U_0$, $\beta = \|r_0\|_2$, and $w_1 = r_0/\beta$.
(2) Generate the Arnoldi basis and the matrix $H_m$ using the Arnoldi algorithm starting with $w_1$.
(3) Compute $y_m$, the minimizer of $\|\beta e_1 - H_m y\|_2$, and set $U_m = U_0 + W_m y_m$.
(4) If satisfied, stop; otherwise set $U_0 = U_m$ and go to step (1).
To investigate the convergence, one can prove that the approximate solution converges to the exact solution of (1) with rate $O(2^{-Jr})$.
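Steps (1)–(4) are available off the shelf; the sketch below uses SciPy's restarted GMRES. The matrix $\Lambda$ here is a synthetic sparse stand-in for the thresholded Galerkin coefficient matrix (the actual matrix comes from the multiwavelet discretization), and the restart length $m = 30$ is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.sparse import eye, random as sparse_random
from scipy.sparse.linalg import gmres

# synthetic sparse, well-conditioned stand-in for the thresholded
# coefficient matrix Lambda, and an arbitrary right-hand side D
N = 512
Lam = (eye(N) + 0.01 * sparse_random(N, N, density=0.02,
                                     random_state=1)).tocsr()
rng = np.random.default_rng(1)
D = rng.normal(size=N)

# restarted GMRES with restart length m = 30 (Algorithm 2);
# info == 0 signals convergence to the default tolerance
U, info = gmres(Lam, D, restart=30)
res = np.linalg.norm(Lam @ U - D)    # residual norm of the returned iterate
```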

Numerical Experiments
To verify the accuracy and efficiency of the proposed method, we consider a series of numerical examples. In the linear type of equation (1), we aim to generate a sparse matrix to reduce the computational costs. We report the rate of sparsity $S_\varepsilon$, defined by [21, 25]
$$S_\varepsilon = \frac{N_0 - N_\varepsilon}{N_0} \times 100\%,$$
where $\varepsilon$ is the threshold (a small positive number), and $N_\varepsilon$ and $N_0$ are the number of elements remaining after thresholding and the total number of elements, respectively.
For the first test problem, the exact solution is $u(x) = e^{-x}$ [11]. The effects of the multiplicity parameter $r$, the refinement level $J$, and thresholding with different thresholds are reported in Table 1 and Figure 1. The results confirm the theoretical claims and demonstrate the effectiveness of the method. Note that the $L^2$ error decreases as the parameters $r$ and $J$ increase, in accordance with the convergence rate $O(2^{-Jr})$. We compare the error of the proposed method with the Lagrange collocation method [10], the Taylor collocation method [11], and the Taylor polynomial method [26] in Table 2. According to Table 2, our method is more flexible than the other methods: without changing the multiplicity parameter $r$, we can improve the results. Figure 2 illustrates the effect of thresholding with different threshold parameters $\varepsilon$ on the coefficient matrix; the number of nonzero matrix elements decreases as the threshold parameter increases.
For the second test problem, the effect of the parameters $r$, $J$, and $\varepsilon$ on the sparsity and the $L^2$-error is shown in Table 3 and Figure 3. It is evident that increasing the parameters $r$ and $J$ reduces the error. A comparison of the proposed method with the Taylor method [26] and the Lagrange collocation method [10] is reported in Table 4. In Figure 4, we illustrate the effect of thresholding on the coefficient matrix for different thresholds $\varepsilon$ with $r = 8$ and $J = 2$.
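The sparsity rate can be computed directly from a thresholded matrix. This sketch uses the convention $S_\varepsilon = (N_0 - N_\varepsilon)/N_0 \times 100\%$ (our reading of the definition, with $N_\varepsilon$ the entries kept and $N_0$ the total) on a synthetic matrix:

```python
import numpy as np

def sparsity_rate(matrix, eps):
    # S_eps: percentage of entries removed by thresholding at eps
    # (N_eps = entries with |a_ij| > eps that are kept, N_0 = total)
    N0 = matrix.size
    Neps = np.count_nonzero(np.abs(matrix) > eps)
    return 100.0 * (N0 - Neps) / N0

# 10 significant diagonal entries plus tiny off-diagonal noise:
# thresholding at 1e-3 should remove the 90 off-diagonal entries
A = np.diag(np.ones(10)) + 1e-6 * np.ones((10, 10))
S = sparsity_rate(A, 1e-3)   # 90 percent of the entries are removed
```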
For the third test problem, the exact solution is $u(x) = \sin(x)$. In Figure 5, we illustrate the effect of the parameters $r$ and $J$ on the $L^2$-error, and the error of the approximation is plotted in Figure 6 for $r = 7$ and $J = 2$.

Journal of Mathematics
Algorithm 1 (Arnoldi):
(1) Choose a vector $w_1$ such that $\|w_1\|_2 = 1$.
(2) For $j = 1, 2, \ldots, m$ do:
(3) Compute $h_{i,j} = (\Lambda w_j, w_i)$ for $i = 1, 2, \ldots, j$.
(4) Set $\tilde{w}_{j+1} = \Lambda w_j - \sum_{i=1}^{j} h_{i,j} w_i$, $h_{j+1,j} = \|\tilde{w}_{j+1}\|_2$, and $w_{j+1} = \tilde{w}_{j+1}/h_{j+1,j}$.
For the fourth test problem, Table 5 shows the efficiency and accuracy of the proposed method. We observe that when the refinement level $J$ and the multiplicity parameter $r$ increase, the $L^2$-errors decrease. Figure 7 shows the error of the proposed method for $r = 6$ and $J = 2$.
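Algorithm 1 (the Arnoldi procedure) can be sketched as follows; the final check verifies the two defining properties of the resulting factorization, orthonormal columns of $W$ and $\Lambda W_m = W_{m+1} H_m$ (the small random test matrix is purely illustrative):

```python
import numpy as np

def arnoldi(A, w1, m):
    # build an orthonormal basis W_{m+1} of the Krylov subspace
    # span{w1, A w1, ..., A^m w1} and the (m+1) x m Hessenberg
    # matrix H_m satisfying A @ W_m = W_{m+1} @ H_m
    n = len(w1)
    W = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    W[:, 0] = w1 / np.linalg.norm(w1)
    for j in range(m):
        v = A @ W[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = v @ W[:, i]
            v -= H[i, j] * W[:, i]
        H[j + 1, j] = np.linalg.norm(v)  # assumes no breakdown (norm > 0)
        W[:, j + 1] = v / H[j + 1, j]
    return W, H

rng = np.random.default_rng(2)
A = rng.normal(size=(8, 8))
W, H = arnoldi(A, rng.normal(size=8), 4)
# orthonormality of W and the Arnoldi relation A W_m = W_{m+1} H_m
ok = np.allclose(W.T @ W, np.eye(5)) and np.allclose(A @ W[:, :4], W @ H)
```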

Conclusion
We have employed the multiwavelet Galerkin method to solve the Volterra–Fredholm integral equations. To this end, the Volterra and Fredholm operators are represented in multiwavelet bases. Applying this method leads to a linear or nonlinear system of algebraic equations. In the linear case, we obtain a sparse system using thresholding, owing to the decay of the wavelet coefficients. The convergence analysis is investigated, and one can show that the rate of convergence is $O(2^{-Jr})$. The numerical examples illustrate the efficiency and accuracy of the method.
Data Availability
The raw/processed data required to reproduce these findings cannot be shared at this time due to legal or ethical reasons.