Some Recent Developments in Applied Functional Analysis

From its early stages, the intensive development of functional analysis and the remarkable advances of its methods cannot be explained without its links with other areas of mathematics and, above all, its role as an essential framework for numerical analysis and computer simulation, PDEs, the modeling of real-world phenomena, variational inequalities, and optimization, to name just a few. In this special issue we highlight some aspects of functional analysis which are used in connection with other branches of mathematics or science, either as a direct application or as a theoretical result which is essential for such an application. Although it is not possible to collect here the huge output of research activity in this vast field of modern mathematics, the selected works gather together a range of topics which reflect some of the current research on applied functional analysis: bases in Banach spaces, wavelet transforms, fixed point theory, and applications to ODEs, electronic circuit simulation, and the numerical solution of PDEs, integral equations, and problems of option pricing in mathematical finance. In this way, we have achieved one of our purposes, which is the exchange of ideas among researchers working in both abstract and applied functional analysis.


Introduction
An option is a financial instrument that gives the holder the right, but not the obligation, to buy (call option) or to sell (put option) an agreed quantity of a specified asset at a fixed price (exercise or strike price) on (European option) or before (American option) a given date (expiry date). It was shown by Black and Scholes [1] that the value of a European option is governed by a second-order parabolic partial differential equation with respect to the time and the underlying asset price. The value of an American option is determined by a linear complementarity problem involving the Black-Scholes operator [2,3]. Since this complementarity problem is, in general, not analytically solvable, a numerical approximation to the solution is normally sought in practice.
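For reference, the European case admits a closed-form solution. The sketch below (illustrative helper names, not code from this paper) evaluates the standard Black-Scholes call and put prices; the American value discussed above has no such formula, which is why the linear complementarity problem must be solved numerically.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_european(S, K, r, sigma, T):
    """Black-Scholes prices of a European call and put.

    S: spot price, K: strike, r: risk-free rate,
    sigma: volatility, T: time to expiry (in years).
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    call = S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    put = K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)
    return call, put
```

The two prices are linked by put-call parity, C - P = S - K e^{-rT}, which is a convenient sanity check on any implementation.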
Various numerical methods have been proposed for the valuation of single-factor American options. Among them, the lattice method [4], the Monte Carlo method [5], the finite difference method [6][7][8], the finite element method [9,10], and the finite volume method [11][12][13] are the most popular ones in both practice and research.
Finite difference methods applied to the multifactor American option valuation have also been developed. S. O'Sullivan and C. O'Sullivan [14] presented explicit finite difference methods with an acceleration technique for option pricing. Clarke and Parrott [15] and Oosterlee [16] used finite difference schemes along with a projected full approximation scheme (PFAS) multigrid for pricing American options under stochastic volatility. Ikonen and Toivanen [17][18][19] proposed finite difference methods with componentwise splitting methods on nonuniform grids for pricing American options under stochastic volatility. Hout and Foulon [20] and Zhu and Chen [21] applied finite difference schemes based on the ADI method to price American options under stochastic volatility. Le et al. [22] presented an upwind difference scheme for the valuation of perpetual American put options under stochastic volatility. Yousuf [23] developed an exponential time differencing scheme with a splitting technique for pricing American options under stochastic volatility. Nielsen et al. [24] and Zhang et al. [25] analyzed finite difference schemes with penalty methods for pricing American two-asset options, but their difference methods are first-order convergent.
In part of the domain, the differential operator of the two-asset American option pricing model becomes a convection-dominated operator. The differential operator also contains a second-order mixed derivative term. Classical finite difference methods lead to positive off-diagonal elements in the coefficient matrix of the discrete operator due to the dominating first-order derivatives and the mixed derivative. These elements can lead to nonphysical oscillations in the computed solution [17,18]. In this paper, we present an accurate finite difference scheme for pricing two-asset American options. We use the central difference method for the space derivatives and the implicit Euler method for the time derivative. Under certain mesh step size limitations, we obtain a coefficient matrix with an M-matrix property, which ensures that the solutions are oscillation-free. We apply the maximum principle to the discrete linear complementarity problem in two mesh sets and derive the error estimates. We will show that the scheme is second-order convergent with respect to the spatial variables.
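The role of such a mesh restriction can be illustrated on a one-dimensional model operator (a sketch of the general phenomenon, not the two-dimensional scheme of this paper): central differencing of -εu'' + bu' produces nonpositive off-diagonal entries, and hence a candidate M-matrix, only when the step size satisfies h ≤ 2ε/b.

```python
import numpy as np

def central_difference_matrix(eps, b, n):
    """Central-difference matrix for -eps*u'' + b*u' on a uniform mesh
    with n interior points on (0, 1) (Dirichlet boundaries)."""
    h = 1.0 / (n + 1)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2.0 * eps / h**2
        if i > 0:
            A[i, i - 1] = -eps / h**2 - b / (2.0 * h)   # sub-diagonal
        if i < n - 1:
            A[i, i + 1] = -eps / h**2 + b / (2.0 * h)   # super-diagonal
    return A, h

def is_m_matrix_candidate(A):
    """Necessary sign pattern for an M-matrix: positive diagonal and
    nonpositive off-diagonal entries."""
    d = np.diag(A)
    off = A - np.diag(d)
    return bool(np.all(d > 0) and np.all(off <= 0))
```

For eps = 0.01 and b = 1 the threshold is h ≤ 0.02: a mesh with h = 0.01 passes the sign test, while h = 0.1 produces a positive super-diagonal and fails it.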
The rest of the paper is organized as follows. In the next section, we describe some theoretical results on the continuous complementarity problem for the two-asset American put option pricing model. In Section 3, the discretization method is described. In Section 4, we present a stability and error analysis for the finite difference scheme. In Section 5, numerical experiments are provided to support these theoretical results.
and g(s₁, s₂) is the final (payoff) condition. Here, u is the value of the option, sᵢ is the value of the i-th underlying asset, ρ ∈ [−1, 0) ∪ (0, 1] is the correlation of the two underlying assets, r is the risk-free interest rate, and g(⋅, ⋅) is a given function providing suitable boundary conditions. Typically, g(⋅, ⋅) is determined by solving the associated one-dimensional American put option problem, where ℓ denotes the one-dimensional Black-Scholes operator defined by
$$\ell v \equiv \frac{\partial v}{\partial t} + \frac{1}{2}\sigma^2 s^2 \frac{\partial^2 v}{\partial s^2} + r s \frac{\partial v}{\partial s} - r v.$$

Discretization
The operator contains a second-order mixed derivative term. Usual finite difference approximations lead to positive off-diagonal elements in the matrix associated with the discrete operator due to the mixed derivative, which may lead to nonphysical oscillations in the computed solution. Hence, it is not easy to construct a discretization with good properties and accuracy for problems with mixed derivatives. There are some works dealing with stable difference approximations of mixed derivatives [27,28]. In this paper, we present an accurate finite difference scheme to discretize the operator. We use the technique of [22] to derive the mesh step size limitation which guarantees that the coefficient matrix corresponding to the discrete operator is an M-matrix. The discretization is performed on a uniform mesh, on which we discretize the differential operator using the central difference scheme. Thus, we apply the central difference scheme on the uniform mesh to approximate the parabolic complementarity problem (8), obtaining the discrete scheme (22). From (8), we have u(s₁, s₂, t) − g(s₁, s₂) ≥ 0 for (s₁, s₂, t) ∈ Ω⁽¹⁾ and u(s₁, s₂, t) − g(s₁, s₂) = 0 for (s₁, s₂, t) ∈ Ω⁽²⁾.

Numerical Experiments
In this section, we verify experimentally the theoretical results obtained in the preceding section. Errors and convergence rates for the second-order finite difference scheme are presented for two test problems. To solve the linear inequality system (13), we use the projection scheme of [32, page 433]. Since the mesh steps need to satisfy conditions (14) and (15), the number of mesh steps in one spatial direction is chosen in terms of the number of mesh steps N in the other direction. The exact solutions of the test problems are not available. Therefore, we use the double mesh principle to estimate the errors and compute the experimental convergence rates of our computed solutions. We measure the accuracy in the discrete maximum norm,
$$e^{N} = \max_{i,j,k} \left| u^{N}_{i,j,k} - u^{2N}_{2i,2j,2k} \right|,$$
and the convergence rate
$$r^{N} = \log_2 \left( \frac{e^{N}}{e^{2N}} \right),$$
where u^N denotes the computed solution on the mesh with N steps.
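The double mesh principle can be sketched in a few lines. The example below uses a manufactured family of "computed solutions" with a known second-order error term so the recovered rate is predictable; the function names are illustrative, not from the paper.

```python
import numpy as np

def double_mesh_errors(solutions):
    """Estimate errors by the double mesh principle.

    solutions: dict mapping N -> array of N+1 nodal values on a uniform
    mesh over [0, 1]. Each solution is compared with the solution on the
    doubled mesh at the shared nodes x_i = i/N.
    """
    errors = {}
    for N in sorted(solutions):
        if 2 * N in solutions:
            coarse = solutions[N]
            fine = solutions[2 * N][::2]      # shared nodes
            errors[N] = np.max(np.abs(coarse - fine))
    return errors

def convergence_rates(errors):
    """Rates r_N = log2(e_N / e_{2N}) from successive error estimates."""
    return {N: np.log2(errors[N] / errors[2 * N])
            for N in sorted(errors) if 2 * N in errors}

def fake_solution(N):
    """Manufactured 'computed solution': exact profile + O(N^-2) term."""
    x = np.linspace(0.0, 1.0, N + 1)
    return np.sin(np.pi * x) + (1.0 / N**2) * np.cos(np.pi * x)
```

For a second-order scheme the estimated errors drop by a factor of about 4 per mesh doubling, so the computed rates cluster around 2, exactly the behavior reported in Tables 1 and 2.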
The error estimates and convergence rates of our computed solutions for Tests 1 and 2 are listed in Tables 1 and 2, respectively. From Tables 1 and 2, we see that the ratio e^N / e^{2N} is close to 4 for sufficiently large N, which supports the convergence estimate of Theorem 3. However, the numerical results of Nielsen et al. [24] and Zhang et al. [25] verify that their schemes are only first-order convergent. Hence, our scheme is more accurate.

Introduction
Numerical simulation plays an important role in electronics, helping engineers to verify correctness and debug circuits during their design, and so avoiding breadboarding and physical prototyping. The advantages of numerical simulation are especially significant in integrated circuit design, where manufacturing is expensive and probing internal nodes is difficult or prohibitive. Circuit simulation emerged in the early 1970s, and many numerical techniques have been developed and improved over the years. Radio frequency (RF) and microwave system design is a field that was an important driver for the development of numerical simulation, and continues to be so nowadays. Indeed, computing the solution of some current electronic circuits, as is the case of modern wireless communication systems, is still a hot topic today. Serious difficulties arise when these nonlinear systems are highly heterogeneous circuits operating on multiple time scales. Current examples are wireless RF integrated circuits (RFICs), or systems-on-a-chip (SoC), combining RF, baseband analog, and digital blocks in the same circuit.
Signals handled by wireless communication systems can usually be described as a high-frequency RF carrier modulated by some kind of slowly varying baseband information signal. Hence, the analysis of any statistically relevant information time frame requires the processing of thousands or millions of time points of the composite modulated signal, rendering any conventional numerical integration of the circuit's system of ordinary differential equations (ODEs) highly inefficient, or even impractical. However, if the waveforms produced by the circuit are not excessively demanding on the number of harmonics for a convenient frequency-domain representation, this class of problems can be efficiently simulated with hybrid time-frequency techniques. By handling the response to the slowly varying baseband information signal in the conventional time-step by time-step basis, while representing the reaction to the periodic RF carrier as a small set of Fourier components (a harmonic balance algorithm for computing the steady-state response to the carrier), new circuit simulators are taking enormous profit from functional analysis techniques. But, beyond overcoming the signals' time-scale disparity, one of the recently proposed hybrid time-frequency techniques is also able to deal with highly heterogeneous RF circuits in an efficient way, by applying different numerical strategies to state variables in different parts (blocks) of the circuits.

Mathematical Model of an Electronic Circuit.
The behavior of an electronic circuit can be described with a system of equations involving voltages, currents, charges, and fluxes. This system of equations can be constructed from a circuit description using, for example, nodal analysis, which involves applying the Kirchhoff current law to each node in the circuit and applying the constitutive or branch equations to each circuit element. Systems generated this way have, in general, the form
$$\mathbf{p}(\mathbf{y}(t)) + \frac{d}{dt}\,\mathbf{q}(\mathbf{y}(t)) = \mathbf{x}(t), \qquad (1)$$
where x(t) ∈ Rⁿ and y(t) ∈ Rⁿ stand for the excitation (independent voltage or current sources) and state variable (node voltages and branch currents) vectors, respectively. p : Rⁿ → Rⁿ stands for all memoryless linear or nonlinear elements, such as resistors and nonlinear voltage-controlled current sources, while q : Rⁿ → Rⁿ models dynamic linear or nonlinear elements, such as capacitors (represented as linear or nonlinear voltage-dependent electric charges) or inductors (represented as linear or nonlinear current-dependent magnetic fluxes).
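As a toy instance of this formulation (a sketch with made-up element values, not a general simulator), consider a parallel RC network driven by a step current source. Nodal analysis at the single node gives the scalar system q(v) = C·v, p(v) = v/R, x(t) = I, which can be time-step integrated with backward Euler:

```python
def simulate_rc_step(C=1e-6, R=1e3, I=1e-3, dt=1e-4, steps=100):
    """Backward-Euler integration of C*dv/dt + v/R = I, the nodal
    equation of a parallel RC network driven by a step current source."""
    v = 0.0
    history = [v]
    for _ in range(steps):
        # Backward Euler: C*(v_new - v)/dt + v_new/R = I, solved for v_new.
        v = (C / dt * v + I) / (C / dt + 1.0 / R)
        history.append(v)
    return history
```

The node voltage rises monotonically toward the DC steady state I·R, here 1 V, with time constant RC = 1 ms.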
The system of (1) is, in general, a differential algebraic equations' (DAE) system, which represents the general mathematical formulation of lumped problems. However, as reviewed in [1], this DAE circuit model formulation can even be extended to include linear distributed elements. For that, the distributed elements are substituted, one by one, by their lumped-element equivalent circuit models or, when larger distributed linear networks are dealt with, replaced as whole subcircuits by reduced-order models derived from their frequency-domain characteristics.
The substitution of distributed devices by lumped-equivalent models is especially reasonable when the size of the circuit elements is small in comparison to the wavelengths, as is the case of most emerging RF technologies (e.g., new systems-on-a-chip (SoCs), or systems-in-package (SiPs), integrating digital high-speed CMOS baseband processing and RF CMOS hardware).

Steady-State Simulation.
The most natural way of simulating an electronic circuit is to numerically time-step integrate, in the time domain, the ordinary differential system describing its operation. This straightforward technique was used in the first digital computer programs for circuit analysis and is still widely used nowadays. It is the core of SPICE (simulation program with integrated circuit emphasis) [2] and of all SPICE-like computer programs.
The dilemma is that these tools focus on transient analysis, whereas some electronics designers, as is the case of RF and microwave designers, are not interested in the circuits' transient response but, instead, in their steady-state regimes. This is because certain aspects of circuit performance are better characterized, or simply only defined, in steady state (e.g., distortion, noise, power, gain, and impedance). Time-step integration engines, such as linear multistep methods or Runge-Kutta methods, which were tailored for finding the circuit's transient response, are not adequate for computing the steady state because they have to pass through the lengthy process of integrating all transients and waiting for them to vanish. In circuits presenting extremely different time constants, or high-Q resonances, as is typically the case of RF and microwave circuits, time-step integration can be very inefficient. Indeed, in such cases, the frequencies in the steady-state response are much higher than the rate at which the circuit approaches steady state, or the ratio between the highest and the lowest frequency is very large. Thus, the number of discretization time steps used by the numerical integration scheme will be enormous, because the time interval over which the differential equations must be numerically integrated is set by the lowest frequency or by how long the circuit takes to achieve steady state, while the size of the time steps is constrained by the highest frequency component.
It must be noted that there are several different kinds of steady-state behavior that may be of interest. The first one is DC steady state, in which the solution does not vary with time. Stable linear circuits driven by sinusoidal sources may exhibit a sinusoidal steady-state regime, which is characterized as being purely sinusoidal except, possibly, for some DC offset. If the steady-state response of a circuit consists of generic waveforms presenting a common period, then the circuit is said to be in a periodic steady state. Directly computing the periodic steady-state response of an electronic circuit, without having to first integrate its transient response, involves finding the initial condition y(t₀) for the differential system that describes the circuit's operation such that the solution at the end of one period matches the initial condition, that is, y(t₀) = y(t₀ + T), where T is the period. Problems of this form, those of finding the solution to a system of ordinary differential equations that satisfies constraints at two or more distinct points in time, are referred to as boundary value problems. In this particular case, we have a periodic boundary value problem that can be formulated as
$$\mathbf{p}(\mathbf{y}(t)) + \frac{d}{dt}\,\mathbf{q}(\mathbf{y}(t)) = \mathbf{x}(t), \qquad \mathbf{y}(t_0) = \mathbf{y}(t_0 + T), \qquad (2)$$
where the condition y(t₀) = y(t₀ + T) is known as the periodic boundary condition.
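One classical way to attack such a periodic boundary value problem directly is the shooting method: treat y(t₀ + T) as a function of the initial condition and solve y(t₀ + T; y₀) = y₀. A minimal sketch for a linear RC circuit C·dv/dt + G·v = I·cos(ωt) follows (element values are illustrative; for a linear circuit the period map is affine, so two integrations determine its fixed point exactly):

```python
import numpy as np

def integrate_period(v0, C=1.0, G=1.0, I=1.0, w=1.0, steps=2000):
    """RK4 integration of C*dv/dt + G*v = I*cos(w*t) over one period."""
    T = 2.0 * np.pi / w
    dt = T / steps
    f = lambda t, v: (I * np.cos(w * t) - G * v) / C
    t, v = 0.0, v0
    for _ in range(steps):
        k1 = f(t, v)
        k2 = f(t + dt / 2, v + dt / 2 * k1)
        k3 = f(t + dt / 2, v + dt / 2 * k2)
        k4 = f(t + dt, v + dt * k3)
        v += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return v

def shooting_periodic():
    """Find v0 with v(T; v0) = v0. The period map is affine,
    v(T; v0) = a*v0 + b, so two integrations pin down a and b."""
    b = integrate_period(0.0)          # v(T; 0) = b
    a = integrate_period(1.0) - b      # v(T; 1) - v(T; 0) = a
    return b / (1.0 - a)               # fixed point of the affine map
```

With these values the analytic steady state is v(t) = Re[(1/(1 + j)) e^{jt}], so the periodic initial condition is v(0) = 0.5; for a nonlinear circuit the same idea applies with a Newton iteration on the period map.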
In the following, we will focus our attention to the most widely used technique for computing the periodic steadystate solution of RF and microwave electronic circuits: the harmonic balance method [3][4][5].

Harmonic Balance.
Harmonic balance (HB) is a mature computer steady-state simulation tool that operates in the frequency domain [3]. Frequency-domain methods differ from time-domain steady-state techniques in that, instead of representing waveforms as a collection of time samples, they represent them using coefficients of sinusoids in trigonometric series. The main advantage of the trigonometric series approach is that the steady-state solution can often be represented accurately with a small number of terms. For example, if the circuit is linear and its inputs are all sinusoidal of the same frequency, only two terms (magnitude and phase) of the trigonometric series represent the solution exactly, whereas an approximate time-domain solution would require a much larger number of sample points.
Another advantage of operating directly in the frequency domain is that linear dynamic operations, like differentiation or integration, are converted into simple algebraic operations, such as multiplying or dividing by frequency, respectively. For example, when analyzing linear time-invariant circuit devices, the coefficients of the response are easily evaluated by exploiting superposition within phasor analysis [6]. Computing the response of nonlinear devices is obviously more difficult than for linear devices, in part because superposition no longer applies, and also because, in general, the coefficients of the response cannot be computed directly from the coefficients of the stimulus. Nevertheless, in the case of moderate nonlinearities, the steady-state solution is typically achieved much more easily in frequency-domain than in time-domain simulators.
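The differentiation-becomes-multiplication property is easy to check numerically. A small sketch using the FFT (harmonic index k, fundamental ω₀ = 1; values chosen only for illustration):

```python
import numpy as np

# Sample one period of x(t) = sin(t) and differentiate in the frequency
# domain: each Fourier coefficient X_k is multiplied by j*k*w0.
N, w0 = 64, 1.0
t = np.arange(N) * 2.0 * np.pi / (N * w0)
x = np.sin(w0 * t)

k = np.fft.fftfreq(N, d=1.0 / N)        # harmonic indices 0..N/2-1, -N/2..-1
dx = np.fft.ifft(1j * k * w0 * np.fft.fft(x)).real
```

For band-limited periodic signals this spectral derivative agrees with the exact derivative, here cos(t), to machine precision.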
HB handles the circuit, its excitation and its state variables in the frequency domain, which is the format normally adopted by RF designers. Because of that, it also benefits from allowing the direct inclusion of distributed devices (like dispersive transmission lines) or other circuit elements described by frequency-domain measurement data, for which we cannot find an exact time-domain representation.
In order to provide a brief and illustrative explanation of the conventional HB theory, let us start by considering again the boundary value problem of (2), describing the periodic steady-state regime of an electronic circuit. For simplicity, let us momentarily suppose that we are dealing with a scalar problem, that is, that we have a simple circuit described by a unique state variable y(t), and that this circuit is driven by a single source x(t) verifying the periodic condition x(t) = x(t + T). Since the steady-state response of the circuit will also be periodic with period T, both the excitation and the steady-state solution can be expressed as the Fourier series
$$x(t) = \sum_{k} X_k\, e^{jk\omega_0 t}, \qquad y(t) = \sum_{k} Y_k\, e^{jk\omega_0 t}, \qquad (3)$$
where ω₀ = 2π/T is the fundamental frequency. By substituting (3) into (2) and adopting a convenient harmonic truncation at some order k = K, we obtain the truncated system (4). The HB method consists in converting this differential system into the frequency domain, so as to obtain an algebraic system of 2K + 1 equations in which the unknowns are the Fourier coefficients Y_k. It must be noted that, since p and q are, in general, nonlinear functions, it is not possible to compute the Fourier coefficients in this system directly. In fact, we only know a priori the trivial solution y(t) = 0 for x(t) = 0. So, we can guess an initial estimate of y(t) and then adopt an iterative procedure to compute the steady-state response of the circuit. For that, we use a first-order Taylor-series expansion in which each expansion point corresponds to the previous iterated solution y^[i](t). Indeed, we expand the left-hand side of the DAE system in (2) around y^[i](t) to obtain (5), which results in the linearized system (6), involving the derivatives g(t) = ∂p/∂y and c(t) = ∂q/∂y evaluated at y^[i](t). The difficulty now arising in solving (6) is that we want to transform this system entirely into the frequency domain, but we do not know how to compute the Fourier coefficients of p(⋅), q(⋅), g(⋅), and c(⋅) at each iteration i. One possible way around this consists of evaluating each of these nonlinear functions in the time domain and then calculating their Fourier coefficients.
Therefore, according to the properties of the Fourier transform, the time-domain products g(t)y(t) and c(t)y(t) become spectral convolutions, which can be represented as matrix-vector products using the conversion matrix formulation [5,7]. This way, and because of the orthogonality of the Fourier series, (6) can be expressed in the form (7), with
$$\Omega = \operatorname{diag}\left(-jK\omega_0, \ldots, 0, \ldots, jK\omega_0\right).$$
In (7), P and Q are vectors containing the Fourier coefficients of p(y(t)) and q(y(t)), respectively, and G and C denote the (2K + 1) × (2K + 1) conversion (Toeplitz) matrices [5,7] corresponding to g(t) and c(t). If we rewrite (7) in the compact form (9), we obtain the Newton update (10), in which
$$\mathbf{F}(\mathbf{Y}) = \mathbf{P}(\mathbf{Y}) + \Omega\,\mathbf{Q}(\mathbf{Y}) - \mathbf{X} = \mathbf{0} \qquad (11)$$
is known as the harmonic balance equation, and the (2K + 1) × (2K + 1) composite conversion matrix is known as the Jacobian matrix of the error function F(Y). The iterative procedure of (5)-(12) is the so-called harmonic-Newton algorithm. In order to reach the final solution, at each iteration i we have to (i) perform the inverse Fourier transform of the current estimate Y^[i] to obtain y^[i](t); (ii) evaluate the nonlinearities and their derivatives in the time domain; (iii) transform these back to the frequency domain; and (iv) solve the linear system of 2K + 1 algebraic equations of (10) to compute the next estimate Y^[i+1]. Consecutive iterations are conducted until a final solution Y^[i] satisfies the HB equation of (11) with a desired accuracy, that is, until
$$\left\| \mathbf{F}\left(\mathbf{Y}^{[i]}\right) \right\| < \text{tol},$$
where tol is an allowed error ceiling and ‖F(⋅)‖ stands for some norm of the error function F(⋅).
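A compact numerical sketch of this time-frequency iteration for a first-order example C·dv/dt + G·v + a·v³ = I₀·cos(ω₀t) (element values are illustrative): the derivative is applied in the frequency domain, the nonlinearity is evaluated in the time domain, and a Newton step with a numerically estimated Jacobian stands in for the analytic harmonic-Newton update.

```python
import numpy as np

def hb_solve(C=1.0, G=1.0, a=0.5, I0=1.0, w0=1.0, K=7,
             tol=1e-10, max_iter=50):
    """Periodic steady state of C*dv/dt + G*v + a*v**3 = I0*cos(w0*t)
    by harmonic balance with 2K+1 harmonics and Newton iteration."""
    N = 2 * K + 1
    t = np.arange(N) * 2.0 * np.pi / (w0 * N)   # one period, N samples
    k = np.fft.fftfreq(N, d=1.0 / N)            # harmonic indices
    x = I0 * np.cos(w0 * t)

    def residual(v):
        # HB splitting: differentiate in the frequency domain,
        # evaluate the nonlinearity in the time domain.
        dv = np.fft.ifft(1j * k * w0 * np.fft.fft(v)).real
        return C * dv + G * v + a * v**3 - x

    v = np.zeros(N)                             # trivial initial guess
    for _ in range(max_iter):
        r = residual(v)
        if np.linalg.norm(r) < tol:
            break
        # Numerically estimated Jacobian (fine for this small sketch).
        J = np.empty((N, N))
        eps = 1e-7
        for i in range(N):
            vp = v.copy()
            vp[i] += eps
            J[:, i] = (residual(vp) - r) / eps
        v = v - np.linalg.solve(J, r)
    return t, v
```

For a = 0 the circuit is linear and the result matches the phasor solution v(t) = Re[(I₀/(G + jω₀C)) e^{jω₀t}]; for a > 0 the cubic loss compresses the voltage swing.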
Since in a digital computer, both time and frequency domains are represented by discrete quantities, the mathematical tools used to perform Fourier and inverse Fourier transformations are, respectively, the discrete Fourier transform (DFT) and the inverse discrete Fourier transform (IDFT) or their fast algorithms, the fast Fourier transform (FFT) and the inverse fast Fourier transform (IFFT).
The system of (10) is typically a sparse linear system in the case of a generic circuit with many state variables. In general, several methods can be used to solve this system, such as direct solvers, sparse solvers, or iterative solvers. However, for very large systems, iterative solvers are usually preferred. Krylov subspace techniques [8] are a class of iterative methods for solving sparse linear systems of equations. An advantage of Krylov techniques is that (10) does not need to be fully solved in each iteration: the iterative process needs only to proceed until the update sufficiently decreases the error function. This approach to the solution, called inexact Newton, can provide significantly improved efficiency. Today, there is a general consensus that the generalized minimum residual method (GMRES) [9] is the preferred one among the many available Krylov subspace techniques for harmonic-balance analysis [10][11][12].
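As a self-contained illustration of the idea, here is a textbook full-memory GMRES (no restarts, no preconditioning; not the production variants used in HB simulators): an Arnoldi basis of the Krylov subspace is built and the residual is minimized over it at every step, so the iteration can be stopped as soon as the residual is small enough.

```python
import numpy as np

def gmres_solve(A, b, tol=1e-10):
    """Minimal full GMRES for A x = b, starting from x0 = 0."""
    n = len(b)
    x0 = np.zeros(n)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0
    Q = np.zeros((n, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = r0 / beta
    for j in range(n):
        w = A @ Q[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        # GMRES step: minimize || beta*e1 - H_j y || over the subspace.
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[: j + 2, : j + 1], e1, rcond=None)
        res = np.linalg.norm(H[: j + 2, : j + 1] @ y - e1)
        if H[j + 1, j] < 1e-14 or res < tol:    # breakdown or converged
            return x0 + Q[:, : j + 1] @ y
        Q[:, j + 1] = w / H[j + 1, j]
    return x0 + Q[:, :n] @ y
```

In an inexact-Newton HB solver the matrix-vector product A @ q would be the Jacobian applied to a vector, which can be formed without ever assembling the Jacobian explicitly.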
The generalization of the above-described harmonic-Newton algorithm to the case of a generic electronic circuit with n state variables is straightforward. Indeed, in such a case the vector of unknowns simply becomes
$$\mathbf{Y} = \left[\mathbf{Y}_1^T, \ldots, \mathbf{Y}_n^T\right]^T,$$
where each Y_m, m = 1, …, n, is a (2K + 1) × 1 vector containing the Fourier coefficients of the corresponding state variable y_m(t). The Ω matrix becomes block diagonal, with n copies of diag(−jKω₀, …, 0, …, jKω₀) along its diagonal, and the Jacobian matrix J(Y) = ∂F(Y)/∂Y has a block structure, consisting of an n × n matrix of square submatrices (blocks), each of them of dimension (2K + 1) × (2K + 1). Each block contains information on the sensitivity of a component of the error function F(Y) to changes in a component of Y. The general block of row m and column l can be expressed as
$$\mathbf{J}_{m,l}(\mathbf{Y}) = \frac{\partial \mathbf{P}_m(\mathbf{Y})}{\partial \mathbf{Y}_l} + \hat{\Omega}\, \frac{\partial \mathbf{Q}_m(\mathbf{Y})}{\partial \mathbf{Y}_l},$$
where Ω̂ denotes one (2K + 1) × (2K + 1) diagonal block of Ω, and ∂P_m(Y)/∂Y_l and ∂Q_m(Y)/∂Y_l denote, respectively, the Toeplitz conversion matrices [7] of the vectors containing the Fourier coefficients of ∂p_m(y(t))/∂y_l(t) and ∂q_m(y(t))/∂y_l(t).

Modulated Signals.
Signals containing components that vary at two or more widely separated rates are usually referred to as multirate signals and have a special incidence in RF and microwave applications, such as mixers (up/down converters), modulators, demodulators, power amplifiers, and so forth. Multirate signals can appear in RF systems due to the existence of excitation regimes of widely separated time scales (e.g., baseband stimuli and high-frequency local oscillators) or because the stimuli can be, themselves, multirate signals (e.g., circuits driven by modulated signals). The general form of an amplitude- and phase-modulated signal can be defined as
$$x(t) = e(t)\,\cos\!\left(\omega_c t + \phi(t)\right), \qquad (17)$$
where e(t) and φ(t) are, respectively, the amplitude (or envelope) and phase slowly varying baseband signals, modulating the fast-varying carrier cos(ω_c t). Circuits driven by this kind of signal, or themselves presenting state variables of this type, are common in RF and microwave applications.
Since the baseband signals have a spectral content of much lower frequency than the carrier, that is, because they are typically slowly varying signals while the carrier is a fast-varying entity, simulating nonlinear circuits containing signals of this kind is often a very challenging issue. Because the aperiodic nature of the signals obviates the use of any steady-state technique, one might think that conventional time-step integration would be the natural method for simulating such circuits. However, the large time constants of the bias networks determine long transient regimes and, as a result, the obligation of simulating a large number of carrier periods. In addition, computing the RF carrier oscillations long enough to obtain information about their envelope and phase properties is, itself, a colossal task. Time-step integration is thus inadequate for simulating these problems because it is computationally expensive or prohibitive.

Hybrid Time-Frequency ETHB Technique.
The envelope transient harmonic balance (ETHB) [13][14][15][16] is a hybrid time-frequency technique that was conceived to overcome the inefficiency revealed by SPICE-like engines (time-step integration schemes) when simulating circuits driven by modulated signals or presenting state variables of this type. It consists in calculating the response of the circuit to the baseband and the carrier by treating the envelope and phase in the time domain and the carrier in the frequency domain. For that, it assumes that the envelope and phase baseband signals are extremely slow when compared to the carrier, so that they can be considered as practically constant during many carrier periods. Taking this into account, ETHB samples the baseband signals at an appropriately slow time rate and assumes a staircase version of both amplitude and phase, which leads to a new modulated version of these signals. The steady-state response of the circuit to this new modulated version is then computed at each time step with the frequency-domain HB engine.
In order to provide a very brief theoretical description of the ETHB technique, let us suppose that we have a circuit driven by a single source of the form of x(t) in (17). If we rewrite x(t) as
$$x(t) = \operatorname{Re}\!\left[\tilde{x}(t)\, e^{j\omega_c t}\right], \qquad \tilde{x}(t) = e(t)\, e^{j\phi(t)}, \qquad (18)$$
and assume that the circuit is stable, then all its state variables can be expressed as time-varying Fourier series
$$y(t) = \sum_{k} Y_k(t)\, e^{jk\omega_c t}, \qquad (19)$$
where Y_k(t) represents the time-varying Fourier coefficients of y(t), which are slowly varying in the baseband time scale. Now, if we take into consideration the disparity between the baseband and the carrier time scales and assume that they are also uncorrelated, which is normally the case, then we can rewrite (17) and (19) as functions of a slow baseband time scale and a fast carrier time scale. Then, if we discretize the slow baseband time scale using a grid of successive time instants t_i and adopt a convenient harmonic truncation at some order k = K, we obtain, for each t_i, a periodic boundary value problem that can be solved in the frequency domain with HB. In order to compute the whole response of the circuit, a set of successive HB equations of the form (22) has to be solved, in which X(t_i) and Y(t_i) represent the vectors containing the time-varying Fourier coefficients of the excitation and the solution, respectively. Two different ways can be conceived to evidence the system's dynamics to the time-varying envelope, depending on whether the circuit elements' constitutive relations are described in the frequency domain or can be formulated in the time domain.
In one possibility, we rely on the frequency-domain description of each of the constitutive elements, and so of the entire system represented in (22). Assuming that the envelope time evolution is much slower than that of the carrier, we no longer consider that each harmonic component of the carrier occupies a single frequency (constant amplitude and phase carrier) but that it spreads through its vicinity (slowly varying amplitude and phase modulation). For example, any dynamic linear component with a frequency-domain representation H(jω) can be approximated by a Taylor series (or any other polynomial or rational function) in the vicinity of each of the carrier harmonics kω_c, that is, at ω = kω_c + δω, where δω is a slight frequency perturbation. Since each Taylor coefficient evaluated at kω_c is a constant, and (jδω)^m Ỹ(jδω) can be interpreted as the m-th order derivative of the time-domain low-pass equivalent ỹ(t) with respect to time t, the expansion can be rewritten in the time domain, which, substituted in (22), evidences the desired system's dynamics with respect to the amplitude and phase modulations. Therefore, the ETHB technique consists in the transient simulation, in an envelope time-step by time-step basis, t₀, t₁, …, t_i, t_{i+1}, …, of the harmonic balance equation of (22). This formulation of ETHB is, nowadays, a mature technique in the RF simulation community. However, its basic assumption constitutes also its major drawback. By requiring the envelope and phase to be extremely slowly varying signals when compared to the carrier frequency, this mixed frequency-time technique is restricted to circuits whose stimuli occupy only a small fraction of the available bandwidth.
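A toy quasi-static version of this idea for a linear RC low-pass driven by an AM signal (all element values illustrative): at each envelope time point the carrier response is computed by a single-harmonic "HB" (phasor) solve with the envelope frozen, and the result is compared against the exact steady state obtained by summing the three spectral lines of the AM excitation.

```python
import numpy as np

def ethb_quasi_static(C=1e-3, G=1.0, wc=1000.0, W=1.0, m=0.5, t=None):
    """Quasi-static envelope-transient solve of C*dv/dt + G*v = x(t),
    x(t) = (1 + m*cos(W*t))*cos(wc*t): freeze the envelope at each time
    point and solve one phasor (single-harmonic HB) problem."""
    if t is None:
        t = np.linspace(0.0, 2.0 * np.pi / W, 4001)
    H = lambda w: 1.0 / (G + 1j * w * C)       # transfer function
    env = 1.0 + m * np.cos(W * t)              # frozen envelope samples
    V = env * H(wc)                            # time-varying phasor
    return t, np.real(V * np.exp(1j * wc * t))

def exact_response(C=1e-3, G=1.0, wc=1000.0, W=1.0, m=0.5, t=None):
    """Exact steady state: the AM excitation is three spectral lines at
    wc and wc +/- W, each filtered by the exact transfer function."""
    if t is None:
        t = np.linspace(0.0, 2.0 * np.pi / W, 4001)
    H = lambda w: 1.0 / (G + 1j * w * C)
    v = np.real(H(wc) * np.exp(1j * wc * t))
    v += 0.5 * m * np.real(H(wc + W) * np.exp(1j * (wc + W) * t))
    v += 0.5 * m * np.real(H(wc - W) * np.exp(1j * (wc - W) * t))
    return t, v
```

With ω_c/Ω = 1000 the quasi-static error is of order Ω/ω_c, illustrating both why the approximation works for slow envelopes and why it degrades when the modulation bandwidth grows toward the carrier frequency.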
In an alternative ETHB formulation, we assume that every element can be described in the time domain. Hence, we can substitute the time-varying Fourier description of (21) into (1) and then treat the carrier time in the frequency domain, converting the DAE system into an algebraic one, while keeping the envelope time in the time domain. This way, we obtain another hybrid time-frequency description of the system that no longer suffers from the narrow bandwidth restriction just mentioned and whose formulation and solution will be discussed in more detail in Section 3.4.

Multivariate Formulation.
We will now introduce a powerful strategy for analyzing nonlinear circuits handling amplitude- and/or phase-modulated signals, as well as any other kind of multirate signals. This strategy consists in using multiple time variables to describe the multirate behavior, and it is based on the fact that multirate signals can be represented much more efficiently if they are defined as functions of two or more time variables, that is, as multivariate functions [17,18]. With this multivariate formulation, circuits are no longer described by ordinary differential algebraic equations in the one-dimensional time but, instead, by partial differential algebraic systems. Let us consider the amplitude- and phase-modulated signal of (17), and let us define its bivariate form x̂(t₁, t₂), where t₁ is the slow envelope time scale and t₂ is the fast carrier time scale. As can be seen, x̂(t₁, t₂) is a periodic function with respect to t₂ but not to t₁, and, in general, this bivariate form requires far fewer points to represent the original signal numerically, especially when the t₁ and t₂ time scales are widely separated [17,18]. Let us now consider the differential algebraic equation (DAE) system of (1), describing the behavior of a generic RF circuit driven by the envelope-modulated signal of (17). Taking the above considerations into account, we adopt the following procedure: in the slowly varying parts (envelope time scale) of the expressions of the vectors x(t) and y(t), t is replaced by t₁; in the fast-varying parts (RF carrier time scale), t is replaced by t₂. The application of this bivariate strategy to the DAE system of (1) converts it into the multirate partial differential algebraic equation (MPDAE) system of (29) [17,18]. The mathematical relation between (1) and (29) establishes that if x̂(t₁, t₂) and ŷ(t₁, t₂) satisfy (29), then the univariate forms x(t) = x̂(t, t) and y(t) = ŷ(t, t) satisfy (1) [18].
Therefore, the univariate solutions of (1) are available on the diagonal lines t₁ = t₂ = t of the bivariate solutions. To recover the univariate solution in a generic [0, t_Final] interval, due to the periodicity of the problem in the t₂ dimension, we will have x(t) = x̂(t, t mod T₂) on the rectangular domain [0, t_Final] × [0, T₂], where t mod T₂ represents the remainder of the division of t by T₂. The main advantage of this MPDAE approach is that it can result in significant improvements in simulation speed when compared to DAE-based alternatives [17-20].
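As a minimal numerical illustration of the diagonal relation x(t) = x̂(t, t mod T₂) (the signal, frequencies, and tolerance below are invented for this sketch, not taken from the paper):

```python
import math

# Hypothetical two-rate signal: a slow envelope (f_env = 1 Hz) on a fast
# carrier (f_c = 1e5 Hz), in the spirit of the bivariate form x^(t1, t2).
f_env, f_c = 1.0, 1.0e5
T2 = 1.0 / f_c                      # carrier period (fast t2 scale)

def x_univariate(t):
    # x(t) = e(t) * cos(2*pi*f_c*t) with envelope e(t) = sin(2*pi*f_env*t)
    return math.sin(2 * math.pi * f_env * t) * math.cos(2 * math.pi * f_c * t)

def x_bivariate(t1, t2):
    # x^(t1, t2): envelope depends only on t1, carrier only on t2
    return math.sin(2 * math.pi * f_env * t1) * math.cos(2 * math.pi * f_c * t2)

# The univariate signal lives on the diagonal t1 = t2 = t; periodicity in t2
# lets us fold t onto [0, T2):  x(t) = x^(t, t mod T2).
for t in [0.0, 1.3e-4, 0.25, 0.7123]:
    assert abs(x_univariate(t) - x_bivariate(t, t % T2)) < 1e-6
```

The economy claimed in the text is visible in the sampling cost: resolving one envelope period univariately at, say, 16 points per carrier cycle takes on the order of 16·f_c/f_env samples, whereas a bivariate grid needs only (points per envelope period) × (points per carrier period).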
Envelope-modulated responses to excitations of the form of (17) correspond to a combination of initial and periodic boundary conditions for the MPDAE. This means that the bivariate forms of these solutions can be obtained by numerically solving the initial-boundary value problem of (31) [18].

Multitime Envelope Transient Harmonic Balance.
Multitime envelope transient harmonic balance is an improved version of the previously described ETHB technique, based on the multivariate formulation [21,22]. To achieve an intuitive explanation of multitime envelope transient harmonic balance, let us consider the initial-boundary value problem of (31), together with the semidiscretization of the rectangular domain [0, t_Final] × [0, T₂] in the t₁ slow time dimension defined by the grid of (32), where K₁ is the total number of steps in t₁. If we replace the derivatives of the MPDAE in t₁ with a finite-difference approximation (e.g., the backward Euler rule), then we obtain, for each slow time instant t₁,ᵢ, from i = 1 to i = K₁, the periodic boundary value problem defined by (33), where ŷᵢ(t₂) ≃ ŷ(t₁,ᵢ, t₂). This means that, once ŷᵢ₋₁(t₂) is known, the solution at the next slow time instant, ŷᵢ(t₂), is obtained by solving (33). Thus, to obtain the whole solution ŷ in the entire domain [0, t_Final] × [0, T₂], a total of K₁ boundary value problems have to be solved. With multitime ETHB, each one of these periodic boundary value problems is solved using the harmonic balance method. The corresponding HB system for each slow time instant t₁,ᵢ is the n × (2K + 1) algebraic equation set given by (34), where X̂(t₁,ᵢ) and Ŷ(t₁,ᵢ) are the vectors containing the Fourier coefficients of the excitation sources and of the solution (the state variables), respectively, at t₁ = t₁,ᵢ, P(⋅) and Q(⋅) are the frequency-domain counterparts of p(⋅) and q(⋅), Ω is the diagonal matrix of (15), and the Ŷ(t₁,ᵢ) vector can be expressed as in (35), with each of the state-variable frequency components arranged as in (36). As seen in Section 2.3, since p(⋅) and q(⋅) are in general nonlinear functions, one possible way to compute P(⋅) and Q(⋅) in (34) consists in evaluating p(⋅) and q(⋅) in the time domain and then calculating their Fourier coefficients. The HB system of (34) can be rewritten as (37) or, in its simplified form, as (38), in which F(Ŷ(t₁,ᵢ)) is the error function at t₁ = t₁,ᵢ.
In order to solve the nonlinear algebraic system of (38), a Newton-Raphson iterative solver is usually used. In this case, the Newton-Raphson algorithm leads us to

Journal of Function Spaces and Applications
which means that at each iteration r we have to solve a linear system of n × (2K + 1) equations to compute the new estimate Ŷ^[r+1](t₁,ᵢ). Consecutive Newton iterations are computed until the desired accuracy is achieved, that is, until ‖F(Ŷ(t₁,ᵢ))‖ < tol, where tol is the allowed error ceiling.
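The Newton iteration just described can be sketched on a small stand-in system (the 2×2 function F below is hypothetical; a real HB solve works on the n × (2K + 1) system of (38) with the Jacobian of (40)):

```python
import math

# Minimal Newton-Raphson sketch for a nonlinear algebraic system F(Y) = 0.
def F(y):
    y1, y2 = y
    return [y1 * y1 + y2 - 2.0, y1 + y2 * y2 - 2.0]   # has a root at (1, 1)

def J(y):
    # Analytic Jacobian of F (in HB this is the block matrix of (40))
    y1, y2 = y
    return [[2.0 * y1, 1.0], [1.0, 2.0 * y2]]

def newton(y, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        f = F(y)
        if math.hypot(*f) < tol:                      # stop when ||F(Y)|| < tol
            break
        a, b = J(y)
        det = a[0] * b[1] - a[1] * b[0]               # solve J * dy = -F (2x2 Cramer)
        dy1 = (-f[0] * b[1] + f[1] * a[1]) / det
        dy2 = (f[0] * b[0] - f[1] * a[0]) / det
        y = [y[0] + dy1, y[1] + dy2]
    return y

sol = newton([2.0, 0.5])
assert abs(sol[0] - 1.0) < 1e-9 and abs(sol[1] - 1.0) < 1e-9
```

The quadratic convergence of the method is what makes the per-time-step cost of multitime ETHB tolerable: only a handful of linear solves with the Jacobian are needed at each t₁,ᵢ.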
The system of (39) involves the derivative of the vector F(Ŷ(t₁,ᵢ)) with respect to the vector Ŷ(t₁,ᵢ). The result is a matrix, the so-called Jacobian of F(Ŷ(t₁,ᵢ)), given in (40). In the same way as in Section 2.3, this matrix has a block structure, consisting of an n × n matrix of square submatrices (blocks), each of dimension (2K + 1); the general block in row i and column j can be expressed as in (41). In summary, multitime ETHB handles the dependence of the solution on t₂ in the frequency domain, while treating the course of the solution in t₁ in the time domain. It is thus a hybrid time-frequency technique, similar to the ETHB engine previously reported in Section 3.2. However, an important advantage of multitime ETHB over conventional ETHB is that it does not suffer from bandwidth limitations [21]. For example, in circuits driven by envelope-modulated signals, the only restriction that has to be imposed is that the modulating signal and the carrier must not be correlated in time (which is typically the case).

Advanced Hybrid Time-Frequency Simulation
One limitation of the ETHB and multitime ETHB engines is that they do not make any distinction between nodes or blocks within the circuit; that is to say, they treat all the circuit's state variables in the same way. Thus, if the circuit exhibits some heterogeneity, as is the case of modern wireless architectures combining radio frequency, baseband analog, and digital blocks in the same circuit, these tools cannot benefit from such a feature. To overcome this difficulty, an innovative mixed-mode time-frequency technique was recently proposed by the authors [23,24]. This technique splits the circuit's state variables (node voltages and mesh currents) into fast and slowly varying subsets, treating the former with multitime ETHB and the latter with a SPICE-like engine (a time-step integration scheme). This way, the strong nonlinearities of the circuit are appropriately evaluated in the time domain, while the moderate ones are computed in the frequency domain [23,24].

Time-Domain Latency within the Multivariate Formulation.
In order to provide an illustrative explanation of the issues under discussion in this section, let us start by considering an RF circuit in which some of the state variables (node voltages and branch currents) are fast-carrier envelope-modulated waveforms, while the remaining state variables are slowly varying aperiodic signals. For concreteness, let us suppose that x₁(t) and x₂(t) are two such distinct state variables in different parts of the circuit: the Fourier coefficients of x₁(t) are slowly varying in the baseband time scale with respect to the carrier frequency, and x₂(t) is a slowly varying aperiodic baseband function. We will denote signals of the form of x₁(t) as active and signals of the form of x₂(t) as latent. The latency revealed by x₂(t) indicates that this variable belongs to a circuit block where there are no fluctuations dictated by the fast carrier. Consequently, due to its slowness, it can be represented efficiently with far fewer sample points than x₁(t). On the other hand, since it does not evidence any periodicity, it cannot be processed with harmonic balance. On the contrary, if the number of harmonics K is not too large, the fast carrier oscillation components of x₁(t) can be efficiently computed in the frequency domain. Therefore, it is straightforward to conclude that, if we want to simulate circuits having such signal format disparities in an efficient way, distinct numerical strategies will be required.

Let us now consider the bivariate forms of x₁(t) and x₂(t), denoted by x̂₁(t₁, t₂) and x̂₂(t₁, t₂), where t₁ and t₂ are, respectively, the slow envelope time dimension and the fast carrier time dimension. As we can see, x̂₂(t₁, t₂) has no dependence on t₂, so it has no fluctuations in the fast time axis. This is so because x₂(t) does not oscillate at the carrier frequency. Consequently, for each slow time instant t₁,ᵢ defined on the grid of (32), while x̂₁(t₁,ᵢ, t₂) is a waveform that has to be represented by a certain number of harmonic components k = −K, ..., K, x̂₂(t₁,ᵢ, t₂) is merely a constant (DC) signal that can be represented by the k = 0 component alone. Therefore, there is no need to perform the conversion between time and frequency domains for x̂₂(t₁,ᵢ, t₂), which means that this state variable can be processed in a purely time-domain scheme.

Mixed Mode Time-Frequency Technique.
In the above, we illustrated that the bivariate forms of latent state variables have no undulations in the t₂ fast time scale. So, while active state variables have to be represented by a set of (2K + 1) harmonic components arranged in vectors of the form of (36), latent state variables can be represented as scalar quantities, as in (44). By considering this, it is straightforward to conclude that the size of the Ŷ(t₁,ᵢ) vector defined by (35), and hence of the HB system of (37), can be significantly reduced. An additional and crucial detail is that there is no longer any obligation to perform the conversion between time and frequency domains for the latent state variables expressed in the form of (44), nor for the components of F(Ŷ(t₁,ᵢ)) corresponding to latent blocks of the circuit. Since the k = 0 order Fourier coefficient is exactly the same as the constant t₂-time value, the use of the discrete Fourier transform (DFT) and the inverse discrete Fourier transform (IDFT), or their fast algorithms, the fast Fourier transform (FFT) and the inverse fast Fourier transform (IFFT), will be required only for components in the HB system of (37) having dependence on active state variables. Significant Jacobian matrix size reductions will be achieved too. In effect, by taking this multirate characteristic (the subset circuit latency) into consideration, some of the blocks of (40) will be merely 1 × 1 scalar elements that contain DC information on the sensitivity of changes in components of F(Ŷ(t₁,ᵢ)) resulting from changes in latent components of Ŷ(t₁,ᵢ).

Figure 2: RF polar transmitter with a hybrid envelope amplifier [23].
With this strategy of partitioning the circuit into active and latent subcircuits (blocks), significant computation and memory savings can be achieved when finding the solution of (37). Indeed, with the reductions in the sizes of the state-variable vector Ŷ(t₁,ᵢ), the error-function vector F(Ŷ(t₁,ᵢ)), and the resulting Jacobian matrix J(Ŷ(t₁,ᵢ)), it is possible to avoid dealing with large linear systems in the iterations of (39). Thus, a less computationally expensive Newton-Raphson iterative solver is required.
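A back-of-the-envelope count illustrates the savings (the harmonic order and the active/latent split below are made-up numbers, not the paper's circuits):

```python
# Count HB unknowns per slow time step with and without exploiting latency.
K = 9                          # maximum harmonic order (illustrative)
n_active, n_latent = 4, 18     # hypothetical state-variable split

# Uniform treatment: every variable carries 2K+1 Fourier coefficients.
full = (n_active + n_latent) * (2 * K + 1)
# Mixed mode: latent variables keep only the k = 0 (DC) component.
mixed = n_active * (2 * K + 1) + n_latent * 1

assert full == 418 and mixed == 94
print(f"unknowns per slow time step: {full} -> {mixed}")
```

The same reduction propagates to the Jacobian, since the latent-latent blocks of (40) collapse to 1 × 1 scalars, which is where the quoted Newton-solver savings come from.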

Performance of the Methods
The performance and efficiency of the ETHB and multitime ETHB techniques have already been attested and recognized by the RF and microwave community. In the same way, the performance and efficiency of the advanced hybrid technique described in the previous section (the mixed-mode time-frequency simulation technique) have also been demonstrated through its application to several illustrative examples of practical relevance. Indeed, electronic circuits with distinct configurations and levels of complexity were specially selected to illustrate the significant gains in computational speed that can be achieved when simulating circuits with this method [23,24]. Nevertheless, in order to provide the reader with a realistic idea of the potential of this recently proposed technique, we include in this section a brief comparison between this method and the previous state-of-the-art multitime ETHB. For that, we considered two distinct circuits: the resistive FET mixer depicted in Figure 1 and the RF polar transmitter described in [23] and depicted in Figure 2.
The circuits were simulated in MATLAB with the mixed-mode time-frequency simulation technique versus the multitime ETHB. In our experiments, a dynamic step-size control tool was used in the t₁ slow time scale, and we considered K = 9 as the maximum harmonic order for the HB evaluations. Numerical computation times (in seconds) for simulations in the [0, 0.5 s] and [0, 5.0 s] intervals are presented in Tables 1 and 2. As we can see, speedups of approximately 2 times were obtained for the simulation of the resistive FET mixer, and speedups of more than one order of magnitude were obtained for the RF polar transmitter. These efficiency gains were achieved without compromising accuracy: for both cases, the maximum discrepancy between solutions (for all the circuits' state variables) was on the order of 10⁻⁸.
These two circuits, which have different levels of complexity, were chosen to illustrate how the computational efficiency gains become more evident as the ratio between the numbers of active and latent state variables increases. In the first example, this ratio is 1, whereas in the second it is 4.5.

Conclusion
Although significant advances have been made in RF and microwave circuit simulation over the years, the use of more elaborate functional analysis techniques has kept this subject a hot topic of scientific and practical engineering interest. Indeed, emerging wireless communication technologies continuously bring new challenges to this scientific field, as is now the case of heterogeneous RF circuits containing state variables of distinct formats and running on widely separated time scales. Taking into account the popularity of HB, and especially ETHB, in the RF and microwave community, in this paper we have briefly reviewed the use of some functional analysis methods to address numerical simulation challenges using hybrid time-frequency techniques. A comparison between two state-of-the-art hybrid techniques in terms of computational speed was also included to evidence the efficiency gains that can be achieved by partitioning heterogeneous circuits into blocks, treating latent blocks in a one-dimensional space and active ones in a bidimensional space.

Introduction
It is widely recognized that the assumption of log-normal stock diffusion with constant volatility in the standard Black-Scholes model [1] of option pricing is not ideally consistent with observed market price movements. In particular, the probability distribution of realized asset returns often exhibits features that are not taken into account by the standard Black-Scholes model: heavy tails, volatility clustering, and the volatility smile [2]. In order to explain these phenomena, extensions of the Black-Scholes model have been proposed. Generally speaking, two different classes of models have been studied in the finance literature: stochastic volatility models [3,4] and jump-diffusion models [2,5]. Contrary to the Black-Scholes model, jump-diffusion models allow for a more realistic representation of price dynamics and greater flexibility in modeling. During the last twenty years, research on models with jumps has become very active, and many such models have been proposed; see [2] and the references therein.
Here we focus on a jump-diffusion model with finite jump activity proposed by Kou in [6].
Unlike the standard Black-Scholes equation, the valuation of options under jump-diffusion models requires solving a partial integrodifferential equation. A fully implicit scheme would lead to full matrices due to the integral term, which makes many methods computationally too expensive. Several numerical methods based on the finite difference method have been proposed for pricing options under jump-diffusion models. Amin [7] gave a multinomial tree method for pricing options under jump-diffusion models, which is actually an explicit-type finite difference approach. Zhang [8] and Cont and Voltchkova [9] used implicit-explicit finite difference methods for pricing options under jump-diffusion models. Andersen and Andreasen [10] and Almendral and Oosterlee [11] proposed operator splitting methods coupled with a fast Fourier transform (FFT) technique for pricing options under jump-diffusion processes. d'Halluin et al. [12,13] developed a second-order accurate numerical method with a fixed-point iteration method and an implicit finite difference scheme along with a penalty method for pricing American options under jump-diffusion processes. Toivanen et al. [14-16] introduced a high-order front-fixing finite difference method and an artificial volatility scheme along with an iterative method for pricing American options under jump-diffusion models. Zhang and Wang [17,18] proposed fitted finite volume schemes coupled with the Crank-Nicolson time-stepping method for pricing options under jump-diffusion processes.
It is well known that the Black-Scholes partial differential operator is degenerate at S = 0 and becomes a convection-dominated operator when the volatility or the asset price is small. Hence, numerical difficulty can arise when standard methods such as the central difference and piecewise linear finite element methods are used to solve these problems. A common and widely used approach by many authors dealing with finite difference/volume/element methods for the Black-Scholes partial differential equation is to apply an Euler transformation to remove the singularity of the differential operator when the parameters of the Black-Scholes equation are constant or space-independent; see, for example, [2,19]. As a result of the Euler transformation, the transformed interval becomes (−∞, ∞). However, the truncation on the left-hand side of the domain to artificially remove the degeneracy may cause computational errors. Furthermore, the uniform mesh on the transformed interval causes the grid points on the original interval to concentrate inappropriately around S = 0. Moreover, when a problem is space-dependent, this transformation is impossible, and thus the Black-Scholes equation needs to be solved in its original form [20]. The same problem also appears in the partial integrodifferential equations resulting from jump-diffusion models [9,11,18]. Wang [21] and Angermann and Wang [22] applied a stable fitted finite volume method to deal with the degeneracy and singularity of the Black-Scholes operator. In this paper, we present a stable finite difference method with second-order convergence with respect to the spatial variable for solving the partial integrodifferential equation defined on (0, +∞) for arbitrary volatility and arbitrary interest rate.
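For reference, the Euler transformation mentioned above can be written out explicitly (standard textbook notation, with V the option value and x = ln S; this is a well-known identity, not a formula taken from the paper):

```latex
\frac{\partial V}{\partial t}
+ \frac{1}{2}\sigma^{2}S^{2}\frac{\partial^{2} V}{\partial S^{2}}
+ rS\,\frac{\partial V}{\partial S} - rV = 0
\quad\xrightarrow{\;x = \ln S,\ \ U(x,t) = V(e^{x},t)\;}\quad
\frac{\partial U}{\partial t}
+ \frac{1}{2}\sigma^{2}\frac{\partial^{2} U}{\partial x^{2}}
+ \Bigl(r - \tfrac{1}{2}\sigma^{2}\Bigr)\frac{\partial U}{\partial x} - rU = 0 .
```

The transformed operator has constant coefficients, which is why a uniform mesh in x is natural; but that uniform x-mesh corresponds to an S-mesh that accumulates points near S = 0, which is exactly the drawback discussed above.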
The penalty method was introduced by Zvan et al. [23] for pricing American options with stochastic volatility by adding a source term to the discrete equation. Nielsen et al. [24] presented a refinement of their work by adding a penalty term to the continuous equation and illustrated the performance of various numerical schemes. By adding a penalty term, the linear complementarity problem for pricing American options can be transformed into a nonlinear parabolic partial differential equation. As the solution approaches the pay-off function at expiry, the penalty term forces the solution to stay above it. When the solution is far from the barrier, the term is small, and thus the Black-Scholes equation is approximately satisfied in this region.
In [25] we presented a robust difference scheme for the penalized Black-Scholes equation governing American put option pricing. In this paper we present a stable finite difference scheme on a piecewise uniform mesh, along with a power penalty method, for pricing American put options under Kou's jump-diffusion model. By adding a penalty term, the partial integrodifferential complementarity problem arising from pricing American put options under Kou's jump-diffusion model is transformed into a nonlinear parabolic integrodifferential equation. Then a finite difference scheme is proposed to solve the penalized integrodifferential equation, which combines a central difference scheme on a piecewise uniform mesh with respect to the spatial variable with an implicit-explicit time-stepping technique. This leads to the solution of problems with a tridiagonal matrix. It is proved that the difference scheme satisfies the early exercise constraint. Furthermore, it is proved that the scheme is oscillation-free and second-order convergent with respect to the spatial variable. Numerical results support the theoretical results. The rest of the paper is organized as follows. In the next section, we describe some theoretical results on the continuous problem for pricing American put options under Kou's jump-diffusion model. The discretization method is described in Section 3. In Section 4 we prove that the difference scheme satisfies the early exercise constraint. In Section 5, we present a stability and error analysis for the finite difference scheme. In Section 6, numerical experiments are provided to support these theoretical results. Finally, a discussion is given in Section 7.

The Continuous Problem
Let V(S, t) denote the value of an American put option with strike price E on the underlying asset S at time t. It is known that under a jump-diffusion model the price satisfies the partial integrodifferential complementarity problem (1)-(6) [14-17], in which the partial integrodifferential operator is defined in terms of σ, the volatility of the underlying asset, r, the risk-free interest rate, t, the current time, T, the maturity date, and f(η), the probability density function of the jump amplitude, with the obvious properties that f(η) ≥ 0 for all η and ∫₀^∞ f(η) dη = 1; the constant ζ is the expected relative jump size. In Kou's model, f(η) is a log-double-exponential density with parameters η₁ > 1 and η₂ > 0 and weights p, q > 0 such that p + q = 1. It can be shown that, in this case, ζ = pη₁/(η₁ − 1) + qη₂/(η₂ + 1) − 1. When the jump rate is zero, the partial integrodifferential operator reduces to the standard Black-Scholes operator [1]. The above linear complementarity problem (1)-(6) can be solved by a penalty approach. Let 0 < ε ≪ 1 be a small regularization parameter, and consider the initial-boundary value problem (10), where C is a positive constant chosen sufficiently large. By adding a penalty term, the linear complementarity problem for pricing American options can be transformed into a nonlinear parabolic integrodifferential equation. Essentially, the penalty term is of order ε in regions where the solution exceeds the pay-off, and hence the partial integrodifferential equation is approximately satisfied there; when the solution approaches the pay-off, this term is approximately equal to C, assuring that the early exercise constraint is not violated. For the continuous case, the convergence and the positivity constraint of the penalty method have been proved in [26]. In this paper, we consider a second-order finite difference scheme to discretize the semilinear partial integrodifferential equation (10) and prove that the approximate option values generated by the scheme satisfy a discrete version of (2).
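Kou's density and the constant ζ can be checked numerically (the parameter values below are arbitrary illustrations; the closed form for ζ is the standard one for this model):

```python
import math

# Kou's log-double-exponential jump density (in log-jump coordinates):
#   f(y) = p*eta1*exp(-eta1*y) for y >= 0,  q*eta2*exp(eta2*y) for y < 0,
# with p + q = 1 and eta1 > 1 so that the mean relative jump size is finite.
p, eta1, eta2 = 0.6, 3.0, 2.0
q = 1.0 - p

def f(y):
    if y >= 0.0:
        return p * eta1 * math.exp(-eta1 * y)
    return q * eta2 * math.exp(eta2 * y)

# Closed form: zeta = E[e^Y] - 1 = p*eta1/(eta1-1) + q*eta2/(eta2+1) - 1.
zeta = p * eta1 / (eta1 - 1.0) + q * eta2 / (eta2 + 1.0) - 1.0

# Verify normalization and zeta by trapezoidal quadrature on [-20, 20].
def trapz(g, a, b, n):
    h = (b - a) / n
    return h * (sum(g(a + i * h) for i in range(n + 1)) - 0.5 * (g(a) + g(b)))

mass = trapz(f, -20.0, 20.0, 200000)
mean_jump = trapz(lambda y: f(y) * math.exp(y), -20.0, 20.0, 200000)
assert abs(mass - 1.0) < 1e-3           # density integrates to 1
assert abs(mean_jump - 1.0 - zeta) < 1e-3
```

With these parameters ζ = 0.9 + 0.8/3 − 1 = 1/6, a positive expected relative jump, consistent with the upward-biased choice p > q.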
For applying the numerical method, we truncate the domain (0, +∞) to (0, S_max). Based on the estimate of Wilmott et al. [19] that the upper bound of the asset price is typically three or four times the strike price, it is reasonable for us to set S_max = 4E. The boundary condition at S_max is chosen to be V(S_max, t) = 0. Normally, this truncation of the domain leads to a negligible error in the value of the option [27].
Therefore, in the remainder of this paper, we will consider the following nonlinear parabolic integrodifferential equation:

Discretization
We now consider the approximation of the solution to the semilinear partial integrodifferential equation (12). For the time discretization, we use a uniform mesh on [0, T] with M mesh elements. Then the piecewise uniform mesh on (0, S_max) × (0, T) is defined to be the tensor product of the spatial and temporal meshes; it is easy to see that the spatial mesh sizes hᵢ = Sᵢ − Sᵢ₋₁ and the time steps satisfy the usual bounds. The space derivatives of (12) are approximated with central differences on the above piecewise uniform mesh. The integral term (20) of (12) can be approximated by a fast method as in [14-16]: after a change of variable, linear interpolation yields an approximation of the integral at each mesh point. A fully implicit scheme would lead to full matrices due to the integral term, which makes many methods computationally too expensive. Our technique is similar in some respects to that of Zhang [8], though less constrained in terms of stability restrictions. The integral term is treated explicitly in time, while the differential terms are treated implicitly. This leads to the solution of problems with a tridiagonal matrix. We will prove that the resulting time-stepping method is unconditionally stable.
Our implicit-explicit scheme to discretize the integrodifferential equation (12)-(14) is given by (23)-(25), with coefficients defined in terms of the spatial mesh sizes hᵢ. Then, from the computed solution, we can obtain the optimal stopping price, which is the maximum asset price at which the option value equals the pay-off at each time level.
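Since the implicit part of the scheme produces one tridiagonal system per time step, each step can be solved in O(N) operations with the Thomas algorithm; a minimal sketch (the example matrix is illustrative, not the scheme's actual coefficient matrix):

```python
def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal system in O(n); inputs are not modified."""
    n = len(diag)
    c, d = upper[:], rhs[:]
    c[0] /= diag[0]
    d[0] /= diag[0]
    for i in range(1, n):                    # forward elimination
        m = diag[i] - lower[i] * c[i - 1]
        if i < n - 1:
            c[i] /= m
        d[i] = (d[i] - lower[i] * d[i - 1]) / m
    for i in range(n - 2, -1, -1):           # back substitution
        d[i] -= c[i] * d[i + 1]
    return d

# Example: A = tridiag(-1, 2, -1) (a discrete-Laplacian-like matrix) and
# b = A @ x_true, so that the solve should recover x_true = [1, 2, 3, 4].
lower = [0.0, -1.0, -1.0, -1.0]
diag  = [2.0, 2.0, 2.0, 2.0]
upper = [-1.0, -1.0, -1.0, 0.0]
b = [0.0, 0.0, 0.0, 5.0]
x = thomas(lower, diag, upper, b)
assert all(abs(xi - ti) < 1e-12 for xi, ti in zip(x, [1.0, 2.0, 3.0, 4.0]))
```

This O(N) cost per step is precisely what the implicit-explicit splitting buys: treating the integral term implicitly instead would replace the tridiagonal matrix with a full one.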
Remark. Our method can be extended to the more general case of an infinite-activity process, such as the Lévy-type models, for example, the VG model [28] or the CGMY model [29]. For instance, the American put option price under a generalized VG process satisfies a partial integrodifferential complementarity problem [30,31] whose integrodifferential operator involves a "compensation constant" and the Lévy density. It is noted in Cont and Tankov [2] and Cont and Voltchkova [9] that, for Lévy densities (in the usual case where the density decays exponentially for large jumps), an infinite-activity process can be arbitrarily well approximated by a finite-activity process and an adjusted volatility. Having done this, our numerical method for the finite-activity process can be used to value options under infinite-activity processes. However, the matrix associated with the discrete operator may then no longer be an M-matrix.

Positivity Constraint
In this section, we will prove that our scheme satisfies the early exercise constraint.

We proceed by defining an auxiliary minimum and letting k be an index at which it is attained. For the first time level, it follows from (35), where we have used (31), that the required bound holds. Assuming the bound at the previous time level, from (40) we obtain the corresponding estimate, where we have used the stated assumption on the mesh and model parameters. Hence, from (42) we conclude that the quantity in question is nonnegative.
Consequently, by induction on the time level, it follows from (34) that the bound holds at every mesh point. Next we prove the second nonnegativity claim: as above, we consider the minimum and let k be an index at which it is attained.

Error Estimates
To investigate the convergence of the method, note that the error functions (for 0 ≤ i ≤ N, 0 ≤ j ≤ M) are the solutions of a discrete problem of the same form. The discrete operator satisfies the following discrete maximum principle; hence, the difference scheme is stable and oscillation-free for arbitrary volatility and arbitrary interest rate.
Proof. e discrete operator , can be written as follows: where , , and are de�ned in Section 4. From (31) we have Clearly, Furthermore, we have Hence, we verify that the matrix associated with , is an -matrix; see for example [32]. us, by the same argument as [33, Lemma 3.1] the result follows.
Now we can obtain the following error estimates. Theorem 3. Let v be the solution of (12)-(14) and V the solution of the finite difference scheme (23)-(25). Then one has the following error estimates, where the constant is positive and independent of N and M.
where C₂ is a positive constant independent of N and M. Using this, we obtain intermediate bounds with constants C₃ and C₄ independent of N and M. Hence, we use a Taylor expansion to obtain, at the interior mesh points, estimates with constants C₅ and C₆ also independent of N and M. Therefore, using a barrier function of order N⁻² (with its constant chosen sufficiently large), Lemma 2 implies the required bound, which completes the proof.

Numerical Experiments
In this section, we verify experimentally the theoretical results obtained in the preceding sections. Errors and convergence rates for the finite difference scheme are presented for two test problems.
For Tests 1 and 2 we choose C = S_max. The computed option value and the early exercise constraint for Test 1 are depicted in Figures 1 and 2, respectively. The computed option value and the early exercise constraint for Test 2 are depicted in Figures 3 and 4, respectively.
The exact solutions of our test problems are not available, so we use the approximate solution computed on a very fine mesh as "the exact solution." We present the error estimates for different N and M at maturity. Because we only know "the exact solution" at mesh points, we use linear interpolation to obtain solutions at other points. The error estimates and the convergence rates of our computed solutions for Tests 1 and 2 are listed in Tables 1 and 2, respectively. From the figures, it is seen that the numerical solutions produced by our method are nonoscillatory. From Tables 1 and 2, we see that the ratio e_N / e_{2N} is close to 4, which supports the convergence estimate of Theorem 3. The numerical results of Zhang and Wang [17,18] verify that their schemes are also second-order convergent; however, their penalty term is nonsmooth, and a smoothing technique is needed for solving the nonlinear discretization system.
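The ratio test used in the tables can be reproduced on a stand-in problem (central differencing of u'' for u(x) = sin x; the mesh sizes here are arbitrary and unrelated to the tests above):

```python
import math

# Second-order convergence check in the spirit of Tables 1 and 2: halving
# the mesh size should divide the maximum error by about 4.
def max_error(n):
    h = math.pi / n
    err = 0.0
    for i in range(1, n):
        x = i * h
        # central difference approximation of u''(x) for u = sin
        d2 = (math.sin(x - h) - 2 * math.sin(x) + math.sin(x + h)) / (h * h)
        err = max(err, abs(d2 + math.sin(x)))   # exact u''(x) = -sin(x)
    return err

ratio = max_error(64) / max_error(128)
assert 3.8 < ratio < 4.2                         # consistent with O(h^2)
```

An observed ratio near 4 under mesh doubling is exactly the e_N / e_{2N} ≈ 4 behavior reported in the tables for a second-order scheme.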

The Crank-Nicolson Scheme.
For the time discretization, we can use the Crank-Nicolson scheme to improve the accuracy of the scheme. Then the implicit finite difference scheme on the above piecewise uniform mesh for the integrodifferential equation (12)-(14) follows. The implicit scheme would lead to full matrices due to the integral term, which makes many methods computationally expensive. Furthermore, in order to satisfy the positivity constraint and to avoid spurious oscillations in the Crank-Nicolson method [34-36], the time mesh size should satisfy the constraint condition (73) given in [24,35].

Power Penalty Methods.
Some papers [12,22,37] have applied power penalty methods to the linear complementarity problem arising from pricing American options. Wang et al. [37] prove that the solution to their penalized equation converges to that of the variational inequality problem with an arbitrary order. The power penalty methods of [12,22,37] can also be applied to value American options under Kou's jump-diffusion model; the discretization method can be the same as that of Section 3. Therefore, our difference scheme for the power penalty equation follows, with its associated penalty parameters. Since the power penalty term is nonsmooth, a smoothing technique is needed for solving the nonlinear discretization system.

Introduction
In this paper we consider the following nonlinear mixed Fredholm-Volterra-Hammerstein integral equation:

x(t) = y₀(t) + ∫_α^{α+β} k₁(t, s) g₁(s, x(s)) ds + ∫_α^t k₂(t, s) g₂(s, x(s)) ds,  t ∈ [α, α + β],  (1.1)

where y₀ : [α, α + β] → R, g₁, g₂ : [α, α + β] × R → R, and the kernels k₁, k₂ : [α, α + β]² → R are assumed to be known continuous functions, and x : [α, α + β] → R is the unknown function to be determined. Equation (1.1) arises in a variety of applications in many fields, including continuum mechanics, potential theory, electricity and magnetism, three-dimensional contact problems, and fluid mechanics; see, e.g., [1-4]. Several numerical methods for approximating the solution of integral and integrodifferential equations are known; see, e.g., [5-8]. For Fredholm-Volterra-Hammerstein integral equations, the classical method of successive approximations was introduced in [9]. An optimal control problem method was presented in [10], and collocation-type methods were developed in [11-13]. Computational methods based on Bernstein operational matrices and on the Chebyshev approximation method were presented in [14,15], respectively.
The use of fixed point techniques and Schauder bases in the numerical resolution of differential, integral, and integro-differential equations allows for the development of new methods providing significant improvements upon other known methods (see [16-23]). In this work we analyze the error committed in obtaining the approximate solution of the nonlinear Fredholm-Volterra-Hammerstein integral equation by means of the Banach fixed-point theorem and Schauder bases (see [21] for a detailed description of the numerical method in a more general equation).
In order to recall the aforementioned numerical method, let C[α, α + β] and C([α, α + β]²) be the Banach spaces of all continuous real-valued functions on [α, α + β] and [α, α + β]², endowed with their usual sup norms. Throughout this paper we make the following assumptions on k_i and g_i for i ∈ {1, 2}.
(i) Since k_i ∈ C([α, α + β]²), there exists M_{k_i} ≥ 0 such that |k_i(t, s)| ≤ M_{k_i} for all (t, s) ∈ [α, α + β]².
(ii) g_i : [α, α + β] × ℝ → ℝ are functions for which there exists L_{g_i} > 0 such that |g_i(s, y) − g_i(s, z)| ≤ L_{g_i} |y − z| for all s ∈ [α, α + β] and all y, z ∈ ℝ.
(iii) β (M_{k_1} L_{g_1} + M_{k_2} L_{g_2}) < 1.
We organize this paper as follows. In Section 2, we reformulate (1.1) in terms of a convenient integral operator T and we describe the numerical method used. The study of the error is presented in Section 3. Finally, in Section 4 we show some illustrative examples.

Analytical Preliminaries
In this section we briefly recall the concepts and results concerning the numerical method used in the error study that we carry out.
Let us start by observing that (1.1) is equivalent to the problem of finding fixed points of the operator T : C[α, α + β] → C[α, α + β] defined by

(Tx)(t) := y₀(t) + ∫_α^{α+β} k₁(t, s) g₁(s, x(s)) ds + ∫_α^t k₂(t, s) g₂(s, x(s)) ds,  t ∈ [α, α + β],  x ∈ C[α, α + β].  (2.1)

A direct calculation over T leads to

‖Ty₁ − Ty₂‖ ≤ M ‖y₁ − y₂‖  (2.2)

for all y₁, y₂ ∈ C[α, α + β], where we denote M := β (M_{k_1} L_{g_1} + M_{k_2} L_{g_2}). As the operator T defined in (2.1) satisfies (2.2), under condition (iii) the Banach fixed-point theorem yields a unique fixed point x ∈ C[α, α + β] of T, which is the unique solution of (1.1). In addition, for each x̄ ∈ C[α, α + β] we have the a priori estimate (2.3), and in particular x = lim_m T^m x̄. But it is not possible to calculate the sequence of iterations {T^m x̄}_{m≥1} explicitly, for which reason a numerical method is needed in order to approximate the fixed point of T. Now we recall the concrete Schauder bases in the spaces C[α, α + β] and C([α, α + β]²). Let {t_n}_{n≥1} be a dense sequence of distinct points in [α, α + β] such that t₁ = α and t₂ = α + β. We set b₁(t) := 1 for t ∈ [α, α + β], and for n ≥ 2 we let b_n be the piecewise linear continuous function on [α, α + β] with nodes at {t_j : 1 ≤ j ≤ n}, uniquely determined by the relations b_n(t_n) = 1 and b_n(t_k) = 0 for k < n. We denote by {P_n}_{n≥1} the sequence of associated projections and by {b*_n}_{n≥1} the coordinate functionals. It is easy to check that {b_n}_{n≥1} is a Schauder basis of C[α, α + β] (see [24]).
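Under conditions (i)-(iii) the operator T above is a contraction, so even plain Picard iteration x ↦ Tx converges geometrically; the following sketch iterates T on a uniform grid with trapezoidal quadrature. This is a stand-in illustration of the fixed-point principle, not the Schauder-basis method of the paper; every name below is an assumption.

```python
import numpy as np

def picard_solve(y0, k1, g1, k2, g2, alpha=0.0, beta=1.0, n=201, iters=30):
    """Approximate the fixed point of
       (Tx)(t) = y0(t) + int_a^{a+b} k1(t,s) g1(s,x(s)) ds
                       + int_a^t     k2(t,s) g2(s,x(s)) ds
    by direct Picard iteration with trapezoidal quadrature (sketch)."""
    t = np.linspace(alpha, alpha + beta, n)
    h = t[1] - t[0]
    w = np.full(n, h); w[0] = w[-1] = h / 2        # Fredholm weights
    # Row i of W: trapezoid weights for the Volterra integral over [alpha, t_i]
    W = np.tril(np.full((n, n), h))
    W[:, 0] = h / 2
    np.fill_diagonal(W, h / 2)
    W[0, 0] = 0.0
    x = y0(t)
    T, S = t[:, None], t[None, :]
    for _ in range(iters):
        fred = (k1(T, S) * g1(S, x[None, :])) @ w
        volt = (k2(T, S) * g2(S, x[None, :]) * W).sum(axis=1)
        x = y0(t) + fred + volt
    return t, x
```

With, e.g., k₁(t, s) = ts/4, k₂(t, s) = s/4 and g₁(s, y) = g₂(s, y) = y on [0, 1], condition (iii) holds with M = 1/2, so successive iterates contract at least by that factor per step.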
From the Schauder basis {b_n}_{n≥1} of C[α, α + β], we can build another Schauder basis {B_n}_{n≥1} of C([α, α + β]²) (see [25, 26]). It suffices to consider B_n(t, s) := b_i(t) b_j(s) for all (t, s) ∈ [α, α + β]², with τ(n) = (i, j), where for a real number p, [p] denotes its integer part and τ = (τ₁, τ₂) : ℕ → ℕ × ℕ is the bijective mapping defined by

2.4
We denote by {Q_n}_{n≥1} the sequence of associated projections and by {B*_n}_{n≥1} the coordinate functionals. The Schauder basis {B_n}_{n≥1} of C([α, α + β]²) has properties similar to those of the one-dimensional case; see Table 1. Under some weak regularity conditions (see the last row of Table 1, which is derived easily from the third row and the Mean Value Theorems in one and two variables), we can estimate the rate of convergence of the sequence of projections in the one- and two-dimensional cases, where we consider the dense subset {t_i}_{i≥1} of distinct points in [α, α + β], T_n the set {t₁, . . . , t_n} ordered increasingly for n ≥ 2, and ΔT_n the maximum distance between two consecutive points of T_n.
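Numerically, the projection P_n f is simply the piecewise-linear interpolant of f at the first n nodes, so its error is governed by ΔT_n exactly as the table indicates. A toy sketch with the dyadic node sequence used later in the numerical examples (illustrative code, not the authors' implementation):

```python
import numpy as np

def dyadic_nodes(m):
    """First m nodes of the dense dyadic sequence on [0, 1]:
    t1 = 0, t2 = 1, then midpoints (2k+1)/2^(n+1), level by level."""
    nodes = [0.0, 1.0]
    level = 0
    while len(nodes) < m:
        nodes += [(2 * k + 1) / 2 ** (level + 1) for k in range(2 ** level)]
        level += 1
    return nodes[:m]

def schauder_projection(f, n, t_eval):
    """P_n f: piecewise-linear interpolation of f at the first n nodes,
    i.e. the n-th partial sum of the Faber-Schauder expansion of f."""
    knots = np.sort(np.array(dyadic_nodes(n)))
    return np.interp(t_eval, knots, f(knots))
```

For a C² function the sup-norm error of P_n behaves like (ΔT_n)², in line with the last row of Table 1; the projection reproduces f exactly at the interpolation nodes.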

Journal of Function Spaces and Applications
The equality (2.5) enables us to determine, in an elementary way, the image of any continuous function under the operator T. However, it does not seem to be a usable expression, due to the two infinite sums appearing in it. For this reason, the aforementioned sums are truncated.

Study of the Error
In this section we carry out a new study of the error, obtaining a bound for it. Assuming regularity conditions on the data functions, we improve and complete the study carried out in [21].
Proposition 3.1. The sequence {x r } r≥1 is uniformly bounded.

By the monotonicity of the Schauder basis, we obtain the bound displayed in (3.6).

3.6
Therefore, applying this process recursively, we get the estimate (3.7).
In the result below we show that the sequence defined in (3.4) approximates the exact solution of (1.1), and we also give an upper bound for the committed error. Theorem 3.5. With the previous notation and the same hypotheses as in Proposition 3.3, let m ∈ ℕ, n_r ∈ ℕ with n_r ≥ 2, and let {ε₁, . . . , ε_m} be a set of positive numbers such that for all r ∈ {1, . . . , m} we have the condition

3.18
Moreover, if x is the exact solution of the integral equation (1.1), then the error ‖x − x_m‖ satisfies

3.20
To conclude the proof, we derive (3.19). From (2.3) we have the first bound; on the other hand, applying (2.2) and (3.18) recursively, we obtain

3.22
Then we use the triangle inequality (3.23), and the proof is complete in view of (3.21) and (3.22).
Remark 3.6. Under the hypotheses of Theorem 3.5, let us observe the bound given by inequality (3.19). The first summand on the right-hand side approaches zero as m increases; with respect to the second summand, since the points of the partition can be chosen in such a way that ΔT_{n_r} becomes as close to zero as we desire, the ε_r's can be made as small as we desire, arriving in this way at an explicit control of the committed error.
Therefore, given ε > 0, there exists m ≥ 1 such that ‖x − x_m‖ < ε when the ε_r are chosen sufficiently small.

Numerical Examples
In this last section we illustrate the results previously developed, stressing the significance of inequality (3.19) in Theorem 3.5, as mentioned in Remark 3.6. First of all, we show how the numerical method works, because we use it later in the estimation of the error. For solving the numerical examples, Mathematica 7 is used, and to construct the Schauder basis in C([0, 1]²) we considered the particular choice t₁ = 0, t₂ = 1 and, for n ∈ ℕ ∪ {0}, t_{i+1} = (2k + 1)/2^{n+1} if i = 2^n + k, where 0 ≤ k < 2^n are integers. To define the sequence {x_r}_{r≥1}, we take x₀(t) = y₀(t) and n_r = j for all r ≥ 1. In Tables 2 and 3 we exhibit, for j = 9, 17, and 33, the absolute errors committed at eight representative points of [0, 1] when we approximate the exact solution x by the iteration x₄. The corresponding numerical results are also shown in Figures 1 and 2, respectively. Now we observe that the choice of a particular j, determining the dyadic partition of the interval [0, 1] from the first 2^j + 1 nodes in such a way that the error is less than a fixed positive ε, that is, ‖x − x_m‖ < ε, can easily be made in practice: it suffices to compute, once again by means of Mathematica 7, the error. To this end, since the error is measured in the sup norm, we consider the nodes 0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1 and take the maximum of the absolute values of the differences between the values of the exact solution and the approximation obtained for the third iteration (m = 3). The numerical tests are given in Table 4 and correspond to the nonlinear mixed Fredholm-Volterra-Hammerstein equations considered in Examples 4.1 and 4.2, respectively.

Introduction
In this paper, we consider the telegraph equation of the following form (1.1), over the region Ω = {(x, t) : 0 < x < 1 and 0 < t < T}, where α and β are known constant coefficients, with initial conditions

where u(x, t) can be the voltage or the current through the wire at position x and time t. In (1.1), the coefficients are expressed in terms of the line parameters, where G is the conductance, R the resistance, L the inductance, and C the capacitance of the transmission line; u(x, t) is a function of distance x and time t, the constants depend on the given problem, and f, φ, and ψ are known continuous functions. Hyperbolic partial differential equations model the vibrations of structures (e.g., buildings, beams, and machines) and are the basis for fundamental equations of atomic physics. Equation (1.1), referred to as the second-order telegraph equation with constant coefficients, models a mixture of diffusion and wave propagation by introducing a term that accounts for effects of finite velocity into the standard heat or mass transport equation [1]. Moreover, (1.1) is commonly used in signal analysis for the transmission and propagation of electrical signals [2, 3].
In recent years, much attention has been given in the literature to the development, analysis, and implementation of stable methods for the numerical solution of second-order hyperbolic equations; see, for example, [4-11]. These methods are conditionally stable. In [12], Mohanty introduced a new technique to solve (1.1) which is unconditionally stable and of second-order accuracy in both the time and space components. Mohebbi and Dehghan [13] presented a high-order accurate method for solving one-space-dimensional linear hyperbolic equations and proved its high-order accuracy, due to the fourth-order discretization of the spatial derivative, and its unconditional stability. A compact finite difference approximation was presented in [14], using a fourth-order discretization of the spatial derivatives of the linear hyperbolic equation and a collocation method for the time component. Another solution was approximated by a polynomial at each grid point whose coefficients were determined by solving a linear system of equations [15]. A method using collocation points and approximating the solution by thin plate spline radial basis functions was presented in [16]. In this paper, the RKHSM [25-47] will be used to investigate the telegraph equation (1.1). Several studies have been devoted to the application of the RKHSM to a wide class of stochastic and deterministic problems involving fractional differential equations, nonlinear oscillators with discontinuity, singular nonlinear two-point periodic boundary value problems, integral equations, and nonlinear partial differential equations [27-41]. The method is well suited to physical problems.
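For contrast with the meshless RKHSM discussed below, a classical conditionally stable finite-difference scheme for the telegraph equation can be sketched in a few lines. The sketch assumes the common form u_tt + 2αu_t + β²u = u_xx + f(x, t) with zero Dirichlet boundary data; all names and the explicit central-difference discretization are illustrative, not one of the cited methods.

```python
import numpy as np

def telegraph_fd(alpha, beta, f, phi, psi, nx=50, nt=400, T=1.0):
    """Explicit central differences for
        u_tt + 2*alpha*u_t + beta^2*u = u_xx + f(x, t)
    on (0,1) x (0,T], with u(x,0)=phi(x), u_t(x,0)=psi(x), u=0 on the
    boundary.  Conditionally stable (needs dt <~ dx); sketch only."""
    dx, dt = 1.0 / nx, T / nt
    x = np.linspace(0.0, 1.0, nx + 1)
    u_prev = phi(x)
    # first time level from a Taylor expansion in t
    uxx = np.zeros_like(x)
    uxx[1:-1] = (u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2]) / dx**2
    utt0 = uxx + f(x, 0.0) - 2 * alpha * psi(x) - beta**2 * u_prev
    u = u_prev + dt * psi(x) + 0.5 * dt**2 * utt0
    u[0] = u[-1] = 0.0
    for n in range(1, nt):
        t = n * dt
        uxx = np.zeros_like(x)
        uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        rhs = (uxx + f(x, t) - beta**2 * u) + 2 * u / dt**2 \
              - (1 / dt**2 - alpha / dt) * u_prev
        u_next = rhs / (1 / dt**2 + alpha / dt)
        u_next[0] = u_next[-1] = 0.0
        u_prev, u = u, u_next
    return x, u
```

With α = β = π and forcing f = π² sin πx (sin πt + 2 cos πt), the exact solution is u = sin πx sin πt, the manufactured solution used in the experiments of Section 4.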

The efficiency of the method has been demonstrated by many authors in several scientific applications. Geng and Cui [27] applied the RKHSM to handle second-order boundary value problems. For more details about the RKHSM, its modified forms, and their effectiveness, see [25-47]. In the present work, we use the following transformation: homogenizing the initial conditions of (1.1) and (1.2), we obtain (1.5). The paper is organized as follows. Section 2 is devoted to several reproducing kernel spaces, and a linear operator is introduced. The solution representation in W(Ω) is presented in Section 3, where we prove that the approximate solution converges uniformly to the exact solution. Some numerical examples are illustrated in Section 4. We provide some conclusions in the last section.

Preliminaries
Hilbert spaces can be completely classified: there is a unique Hilbert space, up to isomorphism, for every cardinality of an orthonormal basis. Since finite-dimensional Hilbert spaces are fully understood in linear algebra, and since morphisms of Hilbert spaces can always be reduced to morphisms of spaces of dimension ℵ₀, functional analysis of Hilbert spaces mostly deals with the unique Hilbert space of dimension ℵ₀ and its morphisms. One of the open problems in functional analysis is to prove that every bounded linear operator on a Hilbert space has a proper invariant subspace. Many special cases of this invariant subspace problem have already been proved [48].

Reproducing Kernel Spaces
In this section, we define some useful reproducing kernel spaces. The last condition is called "the reproducing property": the value of the function φ at the point t is reproduced by the inner product of φ with K(·, t). We then fix some notation used in the development of the paper, and define several spaces together with their inner products. The space W³₂[0, 1] defined below is a Hilbert space; its inner product and norm are denoted ⟨v, g⟩_{W³₂} and ‖·‖_{W³₂}, respectively. Thus the space W³₂[0, 1] is a reproducing kernel space; that is, for each fixed y ∈ [0, 1] and any v ∈ W³₂[0, 1], there exists a function R_y such that v(y) = ⟨v(x), R_y(x)⟩_{W³₂},

2.3
and similarly we define the space T³₂[0, 1], with inner product ⟨v, g⟩_{T³₂} and the corresponding norm. The space T³₂[0, 1] is a reproducing kernel Hilbert space, and its reproducing kernel function r_s is given in [26]. The space G¹₂[0, 1] is a Hilbert space, with its inner product ⟨v, g⟩_{G¹₂} and the corresponding norm; it is a reproducing kernel space, and its reproducing kernel function Q_y is given in [26]. Similarly, the space H¹₂[0, 1] is a Hilbert space, with inner product ⟨v, g⟩_{H¹₂} and the corresponding norm; it is a reproducing kernel space, and its reproducing kernel function q_s is given in [26].

Now we have the following theorem.
Theorem 2.2. The space W 3 2 0, 1 is a complete reproducing kernel space whose reproducing kernel R y is given by

2.16
Note the property of the reproducing kernel as

Now we note that the space given in [26] is a binary reproducing kernel Hilbert space. Its inner product and norm are defined by

2.28
Similarly, the space

2.29
is a binary reproducing kernel Hilbert space. The inner product and the norm in W(Ω) are defined as in [26]. W(Ω) is a reproducing kernel space, and its reproducing kernel function G_{y,s} is G_{y,s} = Q_y q_s. (2.31)

Solution Representation in W(Ω)
In this section, the solution of (1.1) is given in the reproducing kernel space W(Ω). We define the linear operator L, so that the model problem (1.1) changes into the following problem: Lv(x, t) = M(x, t), (x, t) ∈ [0, 1] × [0, 1], where M(x, t) is built from f(x, t) after the homogenizing transformation, with v(x, 0) = 0 and ∂v(x, 0)/∂t = 0.

3.2
Lemma 3.1. The operator L is a bounded linear operator.
Proof. Since

3.6
Therefore we conclude that L is bounded. Now, if we choose a countable dense subset {(x₁, t₁), (x₂, t₂), . . .} in Ω = [0, 1] × [0, 1] and define the corresponding functions in terms of the adjoint operator L* of L, then orthonormalizing them yields an orthonormal system. Then we have the following theorem.

3.11
That is clearly

3.13
Note that {(x_i, t_i)}^∞_{i=1} is dense in Ω; hence Lv(x, t) = 0. It follows that v = 0 from the existence of L⁻¹. So the proof is complete.

3.15
Now the approximate solution v_n(x, t) can be obtained as the n-term truncation of the exact solution v(x, t), and it is also easy to show that ‖v_n(x, t) − v(x, t)‖ → 0 as n → ∞. (3.17)

Convergence Analysis
We assume that {(x_i, t_i)}^∞_{i=1} is dense in Ω = [0, 1] × [0, 1], and we discuss the convergence of the approximate solutions constructed in Section 3. Let v be the exact solution of (1.1) and v_n the n-term approximate solution. Then we have the following theorem.
Moreover, the sequence ‖v − v_n‖_{W(Ω)} is monotonically decreasing in n.
Proof. From 3.14 and 3.16 , it follows that

3.19
Thus, in addition,

3.21
Then, clearly, ‖v − v_n‖_{W(Ω)} is monotonically decreasing in n.

Experimental Results for the Telegraph Equation
In this section, three numerical examples are provided to show the accuracy of the present method. All the computations were performed with Maple 13. Since the RKHSM does not require discretization of the variables (time and space), it is not affected by discretization round-off errors, and there is no need for large computer memory and time. The accuracy of the RKHSM for problem (1.1) is controllable, and the absolute errors are small with the present choice of x and t (see Tables 1, 2, 3, 4, 5, and 6). Thus the numerical results we obtain justify the advantage of this methodology. Note that the solutions converge very rapidly when the RKHSM is used. Further, the series solution methodology can be applied to various types of linear or nonlinear systems of partial differential equations and to single partial differential equations; see, for example, [25-30]. Using our method, we choose 100 points in [0, 1] × [0, 1]. In Tables 2, 4, and 6 we compute the absolute errors |u(x, t) − u_n(x, t)| at the listed points, and in Table 7 we compute relative errors. The first example is

∂²u/∂t² + 2π ∂u/∂t + π² u = ∂²u/∂x² + π² sin πx (sin πt + 2 cos πt),

whose exact solution is u(x, t) = sin πx sin πt.

4.5
Then we have the estimates in Table 1. The exact solution of the second example is u(x, t) = e^{−πt} sin πx. If we apply v(x, t) = u(x, t) − sin πx + tπ sin πx to (4.6), then we obtain (4.7); similarly, in Table 3 we give the exact and approximate solutions and the error terms.
The comparison is given in Table 4. Figures 4, 5, and 6 correspond to this example.

4.8
The exact solution of the third example is u(x, t) = e^{2t} x⁴ (x − …); applying the analogous transformation to (4.8), (4.10) is obtained as

4.10
Figures 7, 8, 9, 10, and 11 correspond to this example. One may consult Tables 1, 2, 3, 4, 5, and 6 for the reliability of the method and the comparison with other methods. In Table 7, the computing time together with the relative error is also given for each example.

Conclusion
In this paper, the RKHSM was used for the telegraph equation with initial conditions. The approximate solutions to the equations have been calculated by using the RKHSM without any need to transformation techniques and linearization or perturbation of the equations. In closing, the RKHSM avoids the difficulties and massive computational work by determining the analytic solutions. We compare our solutions with the exact solutions and the results of 19 .

A clear conclusion can be drawn from the numerical results: the RKHSM algorithm provides highly accurate numerical solutions without spatial discretization for nonlinear partial differential equations. It is also worth noting that this methodology displays fast convergence of the solutions. The illustrations show that the speed of convergence depends on the character and behavior of the solutions, just as for closed-form solutions.

Introduction
Optical integral transforms have been studied in several works; see, for example, [1-8]. Among them, the Fresnel transform is of great importance [5, 9]; its kernel takes the form of a complex exponential function exp[(i/2c)(a x₁² + b x₂²)] for some constants a, b, and c. The generalization of the Fresnel transform, called the linear canonical transform, was introduced in [10] and has recently attracted considerable attention in optics; see [4, 11]. One of the most well-known linear transforms is the wavelet transform (see [12, 13]), for which we have

where ψ(x) is the mother wavelet, satisfying ∫_ℝ ψ(x) dx = 0, (1.2), μ ∈ ℝ and λ ∈ ℝ are the dilation and translation parameters of the wavelet ψ, and ψ* is the complex conjugate of ψ. The optical diffraction transform is described by the Fresnel integral as in [5, 9]. The parameters α₁, γ₁, γ₂, and α₂ are elements of the ray transfer matrix M describing optical systems, with α₁α₂ − γ₁γ₂ = 1. For details on Fresnel integrals, see [14, 15]. Note that many familiar transforms can be considered as special cases of the diffraction Fresnel transform. For example, if the parameters α₁, γ₁, γ₂, and α₂ are written in a suitable matrix form, then the diffraction Fresnel transform (the generalized Fresnel transform) becomes a fractional Fourier transform; see [11, 16, 17]. In the present work, we consider a combined optical transform of the Fresnel and wavelet transforms, namely, the optical Fresnel-wavelet transform defined in [9] by (1.5). The parameters α₁, γ₁, γ₂, and α₂ appearing in (1.5) are elements of a 2 × 2 matrix with unit determinant.
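A single coefficient of the wavelet transform in (1.1) can be approximated by quadrature; the sketch below uses the Mexican-hat mother wavelet, which satisfies the zero-mean condition (1.2), so constant signals are mapped to zero. The normalization |μ|^{−1/2} is one common convention among several; names and conventions here are illustrative assumptions.

```python
import numpy as np

def cwt_point(signal_fn, psi, mu, lam, a=-8.0, b=8.0, n=4001):
    """One wavelet-transform coefficient
        W(mu, lam) = |mu|^(-1/2) * int f(x) conj(psi((x - lam)/mu)) dx
    by composite trapezoidal quadrature on [a, b] (sketch)."""
    x = np.linspace(a, b, n)
    y = signal_fn(x) * np.conj(psi((x - lam) / mu)) / np.sqrt(abs(mu))
    h = x[1] - x[0]
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

# Mexican-hat mother wavelet: smooth, rapidly decaying, zero mean
mexican_hat = lambda x: (1 - x**2) * np.exp(-x**2 / 2)
```

The zero-mean condition (1.2) is exactly what makes the transform blind to additive constants, while a bump matched in position and scale produces a large coefficient.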
As the general single-mode squeezing operator represents the generalized Fresnel transform in wave optics, and its applications have a faithful representation in the optical Fresnel-wavelet transform (see [9]), the combined optical Fresnel-wavelet transform can be conveniently studied by means of the general single-mode squeezing operation.
However, our discussion is somewhat different and more interesting, since the theory of the optical Fresnel-wavelet transform of generalized functions has not been reported in the literature. Thus, we extend the optical Fresnel-wavelet transform to a specific space of generalized functions, known as a Boehmian space. In Section 2, we observe that the kernel function of the Fresnel-wavelet transform is a smooth function, and therefore

Introduction
In this paper, we discuss the existence of positive ω-periodic solutions of the second-order ordinary differential equation with a first-order derivative term in the nonlinearity, u″(t) = f(t, u(t), u′(t)), t ∈ ℝ, (1.1), where the nonlinearity f : ℝ × (0, ∞) × ℝ → ℝ is a continuous function, ω-periodic in t, and f(t, u, v) may be singular at u = 0. The existence of periodic solutions for nonlinear second-order ordinary differential equations has attracted many authors' attention, and most works are on the special equation (1.2). In recent years, the fixed point theorems of cone mappings, especially the fixed point theorem of Krasnoselskii's cone expansion or compression type, have been extensively applied to two-point boundary value problems of second-order ordinary differential equations, and results on the existence and multiplicity of positive solutions have been obtained; see [11-15]. Lately, the authors of [16-18] have also applied Krasnoselskii's fixed point theorem to periodic problems of second-order nonlinear ordinary differential equations and obtained existence results for positive periodic solutions. In these works, the newly discovered positivity of the Green function of the corresponding linear second-order periodic boundary value problems plays an important role. The positivity guarantees that the integral operators of the second-order periodic problems are cone-preserving in the cone K₀ in the Banach space C[0, ω] (defined with a constant σ > 0). Hence the fixed point theorems of cone mappings can be applied to second-order periodic problems. For more precise results using the theory of the fixed point index in cones to discuss the existence of positive periodic solutions of second-order ordinary differential equations, see [19-22].
However, all of these works concern the special second-order equation (1.2), and few authors have considered the existence of positive periodic solutions for the general second-order equation (1.1), which explicitly contains the first-order derivative term. The purpose of this paper is to extend the results of [16-22] to the general second-order equation (1.1). We will use the theory of the fixed point index in cones to discuss the existence of positive periodic solutions of (1.1). For the periodic problem of (1.1), since the corresponding integral operator is not defined on the cone K₀ in C[0, ω], the argument methods used in [16-22] are not applicable, and we will use a completely different method to treat (1.1). Our main results are given in Section 3; some preliminaries for the discussion of (1.1) are presented in Section 2.

Preliminaries
Let C_ω(ℝ) denote the Banach space of all continuous ω-periodic functions u(t) with the norm ‖u‖_C = max_{0≤t≤ω} |u(t)|. Let C¹_ω(ℝ) be the Banach space of all continuously differentiable ω-periodic functions u(t) with the corresponding norm. Generally, Cⁿ_ω(ℝ) denotes the space of nth-order continuously differentiable ω-periodic functions for n ∈ ℕ. Let C⁺_ω(ℝ) be the cone of all nonnegative functions in C_ω(ℝ).
Let M ∈ (0, π²/ω²) be a constant. For h ∈ C_ω(ℝ), we consider the linear second-order differential equation (2.2). The ω-periodic solutions of (2.2) are closely related to the linear second-order boundary value problem (2.3). Proof. Taking the derivative in (2.5) and using the boundary conditions for U(t), we obtain that

2.6
Therefore, u(t) satisfies (2.2). Setting τ = s + ω, it follows from (2.5) that

2.7
Hence, u(t) is an ω-periodic solution of (2.2). From the maximum principle for second-order periodic boundary value problems [4], it is easy to see that u(t) is the unique ω-periodic solution of (2.2). From (2.5) and (2.6), we easily see that S : C_ω(ℝ) → C²_ω(ℝ) is a bounded linear operator. By the compactness of the embedding C²_ω(ℝ) ↪ C¹_ω(ℝ), S : C_ω(ℝ) → C¹_ω(ℝ) is a completely continuous operator.
Since U(t) > 0 for every t ∈ [0, ω], by (2.5), if h ∈ C⁺_ω(ℝ) and h(t) ≢ 0, then the ω-periodic solution u(t) of (2.2) satisfies u(t) > 0 for every t ∈ ℝ, and we call it a positive ω-periodic solution. Let

2.8
Define the cone K in C¹_ω(ℝ) by (2.9). We have the following lemma.
Proof. Let h ∈ C⁺_ω(ℝ) and u = Sh. For every t ∈ ℝ, from (2.5) it follows that
Now we consider the nonlinear equation (1.1). Hereafter, we assume that the nonlinearity f satisfies the following condition.

2.15
Let f₁(t, x, y) = f(t, x, y) + Mx; then f₁(t, x, y) ≥ 0 for x > 0 and t, y ∈ ℝ, and (1.1) can be rewritten as u″(t) + Mu(t) = f₁(t, u(t), u′(t)), t ∈ ℝ.

2.16
For u ∈ K, if u ≠ 0, then ‖u‖_C > 0 and, by the definition of K, u(t) ≥ σ‖u‖_C > 0 for every t ∈ ℝ. Hence the operator A is well defined on K \ {0}. We will find a nonzero fixed point of A by using the fixed point index theory in cones. Since the singularity of f at x = 0 implies that A is not defined at u = 0, the fixed point index theory in the cone K cannot be applied to A directly; we need some preliminaries.
We recall some concepts and conclusions on the fixed point index from [23, 24]. Let E be a Banach space and K ⊂ E a closed convex cone in E. Assume Ω is a bounded open subset of E with boundary ∂Ω and K ∩ Ω ≠ ∅. Let A : K ∩ Ω̄ → K be a completely continuous mapping. If Au ≠ u for every u ∈ K ∩ ∂Ω, then the fixed point index i(A, K ∩ Ω, K) is defined. One important fact is that if i(A, K ∩ Ω, K) ≠ 0, then A has a fixed point in K ∩ Ω. The following two lemmas are needed in our argument. … ≥ f(t, u₁(t), u₁′(t)) ≥ ε₁ u₁(t) − C₂, t ∈ ℝ.

3.18
Integrating this inequality over [0, ω] and using the periodicity of u₁, we get ∫₀^ω u₁(t) dt ≤ C₂ω/ε₁.

3.19
Since u₁ ∈ K ∩ ∂Ω₂, by the definition of K we have (3.20). By the first inequality of (3.20), we have ∫₀^ω u₁(t) dt ≥ ωσ‖u₁‖_C.

3.21
From this and 3.19 , it follows that

3.22
By this and the second inequality of (3.20), we obtain (3.23). Therefore, choosing R sufficiently large, A satisfies Condition 2 of Theorem 2.6. Now, by the first part of Theorem 2.6, A has a fixed point in K ∩ (Ω₂ \ Ω̄₁), which is a positive ω-periodic solution of (1.1).

Proof of Theorem 3.2.
Let Ω₁, Ω₂ ⊂ C¹_ω(ℝ) be defined by (3.4). We use Theorem 2.6 to prove that the operator A has a fixed point in K ∩ (Ω₂ \ Ω̄₁) if r is small enough and R is large enough.

3.26
By this, (3.25), and the definition of f₁, we have u₀″(t) + Mu₀(t) = f₁(t, u₀(t), u₀′(t)) + Mτ₀ ≥ (M + ε) u₀(t), t ∈ ℝ.

3.27
Integrating this inequality over [0, ω] and using the periodicity of u₀(t), we obtain M∫₀^ω u₀(t) dt ≥ (M + ε)∫₀^ω u₀(t) dt. Since ∫₀^ω u₀(t) dt ≥ ωσ‖u₀‖_C > 0, it follows that M ≥ M + ε, which is a contradiction. Hence A satisfies Condition 3 of Theorem 2.6.

3.30
Since u₁ ∈ K ∩ ∂Ω₂, by the definition of K, u₁ satisfies (3.20). By the second inequality of (3.20), we obtain (3.31); consequently, (3.32) holds. By (3.32) and the first inequality of (3.20), we have

3.33
From this, the second inequality of (3.20), and (3.29), it follows that f(t, u₁(t), u₁′(t)) ≤ −ε₁ u₁(t), t ∈ ℝ.

3.34
By this and 3.30 , we have

3.35
Integrating this inequality over [0, ω] and using the periodicity of u₁(t), we obtain M∫₀^ω u₁(t) dt ≤ (M − ε₁)∫₀^ω u₁(t) dt. Since ∫₀^ω u₁(t) dt ≥ ωσ‖u₁‖_C > 0, it follows that M ≤ M − ε₁, which is a contradiction. This means that A satisfies Condition 4 of Theorem 2.6.
By the second part of Theorem 2.6, A has a fixed point in K ∩ (Ω₂ \ Ω̄₁), which is a positive ω-periodic solution of (1.1).

Example 3.3.
Consider the second-order differential equation u″ = a₁(t)u + a₂(t)u² + a₃(t)u′²u, t ∈ ℝ, (3.37), where a_i ∈ C_ω(ℝ), i = 1, 2, 3. If −π²/ω² < a₁(t) < 0 and a₂(t), a₃(t) > 0 for t ∈ [0, ω], then f(t, x, y) = a₁(t)x + a₂(t)x² + a₃(t)xy² satisfies conditions (F0) and (F1). By Theorem 3.1, (3.37) has at least one positive ω-periodic solution. Consider also equation (3.38), where a, b, c ∈ C_ω(ℝ). If −π²/ω² < a(t) < 0 and b(t), c(t) > 0 for t ∈ [0, ω], then f(t, x, y) = a(t)x + b(t)/x + c(t)y²/x² satisfies conditions (F0) and (F2). By Theorem 3.2, equation (3.38) has a positive ω-periodic solution.

Remarks
Our discussion on the existence of positive ω-periodic solutions of (1.1) applies equally to the following ordinary differential equation: −u″(t) = f(t, u(t), u′(t)), t ∈ ℝ, (4.1), where the nonlinearity f : ℝ × (0, ∞) × ℝ → ℝ is continuous and f(t, x, y) is ω-periodic in t. For (4.1), we need the following assumption.

4.2
Similarly to Lemma 2.1, we have the following conclusion, where σ and C₀ are redefined accordingly. Now, using arguments similar to those of Theorems 3.1 and 3.2, we can obtain the following results.

Introduction
Basis properties of the classical system of exponents {e^{int}}_{n∈ℤ} (ℤ is the set of all integers) in Lebesgue spaces L_p(−π, π), 1 ≤ p < ∞, are well studied in the literature (see [1-4]). Bari, in her fundamental work [5], raised the question of the existence of a normalized basis in L₂ which is not a Riesz basis. The first example was given by Babenko [6], who proved that the degenerate system of exponents {|t|^α e^{int}}_{n∈ℤ} with |α| < 1/2 forms a basis for L₂(−π, π) but is not a Riesz basis when α ≠ 0. This result was extended by Gaposhkin [7]. In [8], a condition on the weight ρ was found which makes the system {e^{int}}_{n∈ℤ} a basis for the weighted space L_{p,ρ}(−π, π) with the norm ‖f‖_{p,ρ} = (∫_{−π}^{π} |f(t)|^p ρ(t) dt)^{1/p}. Basis properties of a degenerate system of exponents are closely related to the similar properties of an ordinary system of exponents in the corresponding weighted space. In all the mentioned works, the authors consider the cases where the weight or the degenerate coefficient satisfies the Muckenhoupt condition (see, e.g., [9]). The above also holds for systems of sines and cosines. Basis properties of systems of exponents and sines with linear phase in weighted Lebesgue spaces have been studied in [10-12]; those of systems of exponents with degenerate coefficients have been studied in [13, 14]. Similar questions have previously been considered in [15-18]. In this work, we study the frame properties of the system of sines with a degenerate coefficient in Lebesgue spaces, when the degenerate coefficient, generally speaking, does not satisfy the Muckenhoupt condition.

Needful Information
To obtain our main results, we will use some concepts and facts from the theory of bases.
We will use standard notation: ℕ is the set of all positive integers; ∃ means "there exist(s)"; ⇒ means "it follows that"; ⇔ means "if and only if"; ∃! means "there exists a unique"; 𝕂 ≡ ℝ or 𝕂 ≡ ℂ stands for the field of real or complex numbers, respectively; δ_{nk} is the Kronecker symbol, and δ_k = {δ_{kn}}_{n∈ℕ}.
Let X be a Banach space with norm ‖·‖_X, and let X* denote its dual with norm ‖·‖_{X*}. By L(M) we denote the linear span of the set M ⊂ X, and M̄ stands for the closure of M.
A system {x_n}_{n∈ℕ} ⊂ X is said to be uniformly minimal in X if there exists δ > 0 such that, for all k ∈ ℕ and all u ∈ L({x_n}_{n≠k}),

‖x_k − u‖_X ≥ δ ‖x_k‖_X.

2.1
A system {x_n}_{n∈ℕ} ⊂ X is said to be complete in X if the closure of L({x_n}_{n∈ℕ}) equals X. It is called minimal in X if x_k does not belong to the closure of L({x_n}_{n≠k}) for all k ∈ ℕ. The following criteria of completeness and minimality are available.
Criterion 1 (Hahn-Banach theorem). A system {x_n}_{n∈ℕ} ⊂ X is complete in X if (f(x_n) = 0 for all n ∈ ℕ, f ∈ X*) ⇒ f = 0.
Criterion 2 (see [19]). A system {x_n}_{n∈ℕ} ⊂ X is minimal in X ⇔ it has a biorthogonal system {f_n}_{n∈ℕ} ⊂ X*, that is, f_n(x_k) = δ_{nk} for all n, k ∈ ℕ.
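Criterion 2 can be illustrated in finite dimensions, where a minimal (that is, linearly independent) system always has a biorthogonal system, given by the rows of the inverse of its coordinate matrix. The toy sketch below verifies the relation f_n(x_k) = δ_{nk}; the chosen vectors are an arbitrary illustrative example.

```python
import numpy as np

# Columns are the system {x_1, x_2, x_3} in R^3 (linearly independent,
# hence minimal).  The biorthogonal functionals are the rows of X^{-1}.
X = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
F = np.linalg.inv(X)          # row n is the functional f_n
gram = F @ X                  # (n,k) entry is f_n(x_k): should be delta_nk
```

The genuine content of Criterion 2 is, of course, infinite-dimensional, where minimality is strictly weaker than being a basis; the finite-dimensional computation only shows why biorthogonality characterizes minimality.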
Criterion 3. A complete system {x_n}_{n∈ℕ} ⊂ X is uniformly minimal in X ⇔ sup_n ‖x_n‖_X ‖y_n‖_{X*} < ∞, where {y_n}_{n∈ℕ} ⊂ X* is the system biorthogonal to it. A system {x_n}_{n∈ℕ} ⊂ X is said to be a basis for X if for every x ∈ X there exists a unique {λ_n}_{n∈ℕ} ⊂ 𝕂 such that x = ∑_{n=1}^{∞} λ_n x_n. If a system {x_n}_{n∈ℕ} ⊂ X forms a basis for X, then it is uniformly minimal. (ii) ∃A, B > 0: A‖f‖_X ≤ ‖{g_k(f)}_{k∈ℕ}‖_𝕂 ≤ B‖f‖_X, for all f ∈ X; (2.2) (iii) f = ∑_{k=1}^{∞} g_k(f) f_k, for all f ∈ X.