SENSITIVITY ANALYSIS FOR RELAXED COCOERCIVE NONLINEAR QUASIVARIATIONAL INCLUSIONS

Abstract

Variational inequality methods, whether based on the many newly available algorithms or otherwise, have been applied vigorously, especially to model equilibrium problems in economics, optimization and control theory, operations research, transportation network modeling, and mathematical programming, and considerable progress has been made toward general methods for the sensitivity analysis of variational inequalities. Tobin [7] presented a sensitivity analysis for variational inequalities that allows the calculation of derivatives of solution variables with respect to perturbation parameters, where both the variational inequality function and the feasible region are perturbed. Kyparisis [5] showed, under appropriate second-order and regularity conditions, that the perturbed solution to a parametric variational inequality problem is continuous and directionally differentiable with respect to the perturbation parameter. Recently, Agarwal et al. [1] studied the sensitivity analysis of quasivariational inclusions involving strongly monotone mappings by applying the resolvent operator technique, without differentiability assumptions on solution variables with respect to perturbation parameters. The aim of this paper is to present a sensitivity analysis for relaxed cocoercive quasivariational inclusions based on the resolvent operator technique. The results obtained generalize results on the sensitivity analysis of strongly monotone quasivariational inclusions [1, 2, 6] and others, since the class of relaxed cocoercive mappings is more general than the class of strongly monotone mappings and is, moreover, less explored. Some suitable examples of relaxed cocoercive mappings are also included.


Introduction and preliminaries
Variational inequality methods, whether based on the many newly available algorithms or otherwise, have been applied vigorously, especially to model equilibrium problems in economics, optimization and control theory, operations research, transportation network modeling, and mathematical programming, and considerable progress has been made toward general methods for the sensitivity analysis of variational inequalities. Tobin [7] presented a sensitivity analysis for variational inequalities that allows the calculation of derivatives of solution variables with respect to perturbation parameters, where both the variational inequality function and the feasible region are perturbed. Kyparisis [5] showed, under appropriate second-order and regularity conditions, that the perturbed solution to a parametric variational inequality problem is continuous and directionally differentiable with respect to the perturbation parameter. Recently, Agarwal et al. [1] studied the sensitivity analysis of quasivariational inclusions involving strongly monotone mappings by applying the resolvent operator technique, without differentiability assumptions on solution variables with respect to perturbation parameters. The aim of this paper is to present a sensitivity analysis for relaxed cocoercive quasivariational inclusions based on the resolvent operator technique. The results obtained generalize results on the sensitivity analysis of strongly monotone quasivariational inclusions [1, 2, 6] and others, since the class of relaxed cocoercive mappings is more general than the class of strongly monotone mappings and is, moreover, less explored. Some suitable examples of relaxed cocoercive mappings are also included. For more details, we recommend [1–13].
Let H be a real Hilbert space with the norm ‖•‖ and inner product ⟨•,•⟩. Let N : H × H × L → H be a nonlinear mapping and let M : H × H × L → 2^H be a maximal monotone mapping with respect to the first variable, where L is a nonempty open subset of H. Then the problem of finding an element u ∈ H such that

0 ∈ N(u,u,λ) + M(u,u,λ), (1.1)

where λ ∈ L is the perturbation parameter, is called a class of generalized relaxed cocoercive mixed quasivariational inclusion (abbreviated RCMQVI) problems.
Next, a special case of the RCMQVI problem (1.1) is: determine an element u ∈ H such that

0 ∈ S(u,λ) + T(u,λ) + M(u,u,λ), (1.2)

where N(u,v,λ) = S(u,λ) + T(v,λ) for all u,v ∈ H, and S,T : H × L → H are nonlinear mappings. The solvability of the RCMQVI problem (1.1) depends on the equivalence between (1.1) and the problem of finding a fixed point of the associated resolvent operator.
Note that if M is maximal monotone, then the corresponding resolvent operator J_ρ^{M(•,u,λ)} in the first argument is defined by

J_ρ^{M(•,u,λ)}(w) = (I + ρM(•,u,λ))^{-1}(w) for all w ∈ H,

where ρ > 0 is a constant and I is the identity mapping.
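As a quick numerical illustration (ours, not from the paper), take the linear operator M(u) = Au on ℝ² with A symmetric positive definite, which is maximal monotone; the resolvent is then an explicit linear solve, and its single-valuedness and nonexpansiveness can be checked directly. The matrix and the parameter ρ below are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative maximal monotone operator: M(u) = A u with A symmetric
# positive definite, so M is maximal monotone on R^2.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
rho = 0.7  # resolvent parameter rho > 0

def resolvent(w):
    """J_rho^M(w) = (I + rho*M)^{-1}(w): here the linear solve (I + rho*A) u = w."""
    return np.linalg.solve(np.eye(2) + rho * A, w)

# Resolvents of maximal monotone operators are single-valued and
# nonexpansive: ||J(w1) - J(w2)|| <= ||w1 - w2||.
rng = np.random.default_rng(0)
w1, w2 = rng.standard_normal(2), rng.standard_normal(2)
assert np.linalg.norm(resolvent(w1) - resolvent(w2)) <= np.linalg.norm(w1 - w2)
```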

Cocoercivity and relaxed cocoercivity
This section deals with the notions of cocoercive and relaxed cocoercive mappings and their connections to other mappings. The class of relaxed cocoercive mappings is more general than the class of strongly monotone mappings and, furthermore, is less explored in the context of applications.
A mapping N : H × H × L → H is said to be:
(ii) (s)-cocoercive in the first argument if there exists a positive constant s such that

⟨N(x,w,λ) − N(y,w,λ), x − y⟩ ≥ s‖N(x,w,λ) − N(y,w,λ)‖² for all x, y, w ∈ H and λ ∈ L; (2.2)

(iii) (m)-relaxed cocoercive in the first argument if there exists a positive constant m such that

⟨N(x,w,λ) − N(y,w,λ), x − y⟩ ≥ (−m)‖N(x,w,λ) − N(y,w,λ)‖² for all x, y, w ∈ H and λ ∈ L; (2.3)

(iv) (γ,m)-relaxed cocoercive in the first argument if there exist positive constants γ and m such that

⟨N(x,w,λ) − N(y,w,λ), x − y⟩ ≥ (−γ)‖N(x,w,λ) − N(y,w,λ)‖² + m‖x − y‖² for all x, y, w ∈ H and λ ∈ L. (2.4)

Clearly, every (m)-cocoercive mapping is (m)-relaxed cocoercive, while each (r)-strongly monotone mapping is (1, r + r²)-relaxed cocoercive.

Definition 2.5. A mapping T : H × H × L → H is said to be (μ)-Lipschitz continuous in the first argument if there exists a positive constant μ such that

‖T(x,w,λ) − T(y,w,λ)‖ ≤ μ‖x − y‖ for all x, y, w ∈ H and λ ∈ L. (2.6)
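The word "clearly" above hides a short computation; the following derivation (ours) verifies that an (r)-strongly monotone T is (1, r + r²)-relaxed cocoercive. Strong monotonicity combined with the Cauchy–Schwarz inequality gives ‖Tx − Ty‖ ≥ r‖x − y‖, hence

```latex
-\|Tx - Ty\|^2 + (r + r^2)\|x - y\|^2
  \;\le\; -r^2\|x - y\|^2 + (r + r^2)\|x - y\|^2
  \;=\; r\|x - y\|^2
  \;\le\; \langle Tx - Ty,\; x - y\rangle,
```

which is exactly the (γ,m)-relaxed cocoercivity inequality with γ = 1 and m = r + r².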

Nonlinear variational inclusions
Theorem 3.1. Let H be a real Hilbert space, let N : H × H × L → H be (γ,r)-relaxed cocoercive and (β)-Lipschitz continuous in the first variable, and let N be (μ)-Lipschitz continuous in the second variable. If ρ > 0 is chosen so that the constant θ defined in (3.2) satisfies θ < 1, then for each λ ∈ L the mapping G(u,λ), in light of (3.2), has a unique fixed point z(λ), and hence z(λ) is the unique solution to (1.1). Thus,

G(z(λ),λ) = z(λ). (3.4)

Proof. The (γ,r)-relaxed cocoercivity and (β)-Lipschitz continuity of N in the first argument imply that

‖u − v − ρ[N(u,w,λ) − N(v,w,λ)]‖² ≤ (1 − 2ρr + 2ργβ² + ρ²β²)‖u − v‖². (3.7)

On the other hand, the (μ)-Lipschitz continuity of N in the second argument results in

‖N(w,u,λ) − N(w,v,λ)‖ ≤ μ‖u − v‖. (3.8)

In light of the above arguments, we infer that G(•,λ) is a contraction with constant θ. Since θ < 1, this concludes the proof.
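To make the fixed-point mechanism concrete, here is a minimal numerical sketch (ours, with illustrative linear choices: N(u,u,λ) = Au + λb, which is strongly monotone and hence relaxed cocoercive and Lipschitz, and M(u) = Bu with B symmetric positive semidefinite). Picard iteration of the map G(u,λ) = J_ρ^M(u − ρN(u,u,λ)) converges to the unique solution of the inclusion.

```python
import numpy as np

# Illustrative linear data (assumptions, not from the paper):
# N(u,u,lam) = A u + lam*b,  M(u) = B u.
# The inclusion 0 in N(u,u,lam) + M(u) then reads (A + B) u = -lam*b.
A = np.array([[3.0, 0.2],
              [0.2, 2.0]])
B = np.array([[1.0, 0.0],
              [0.0, 0.5]])
b = np.array([1.0, -2.0])
rho, lam = 0.2, 1.5

def G(u):
    """Fixed-point map G(u) = J_rho^M(u - rho*N(u,u,lam))."""
    return np.linalg.solve(np.eye(2) + rho * B, u - rho * (A @ u + lam * b))

u = np.zeros(2)
for _ in range(200):      # Picard iteration of the contraction G
    u = G(u)

u_star = np.linalg.solve(A + B, -lam * b)   # exact solution of the inclusion
assert np.allclose(u, u_star, atol=1e-8)
```

For this choice of ρ the iteration map is a contraction (its linear part has norm below 1), which is exactly the role the condition θ < 1 plays in Theorem 3.1.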
Theorem 3.2. Let H be a real Hilbert space, let N : H × H × L → H be (γ,r)-relaxed cocoercive and (β)-Lipschitz continuous in the first variable, let N be (μ)-Lipschitz continuous in the second variable, and let ρ > 0 satisfy condition (3.12). If the mappings λ → N(u,v,λ) and λ → J_ρ^{M(•,u,λ)}(w) are both continuous (or Lipschitz continuous) from L to H, then the solution z(λ) of (1.1) is continuous (or Lipschitz continuous) from L to H.

Sensitivity analysis
Proof. From the hypotheses of the theorem, for any λ,λ* ∈ L, write z(λ) = G(z(λ),λ) and z(λ*) = G(z(λ*),λ*). The triangle inequality and the contraction estimate of Theorem 3.1 yield

‖z(λ) − z(λ*)‖ ≤ θ‖z(λ) − z(λ*)‖ + ‖G(z(λ*),λ) − G(z(λ*),λ*)‖,

and hence

‖z(λ) − z(λ*)‖ ≤ (1 − θ)^{-1}‖G(z(λ*),λ) − G(z(λ*),λ*)‖,

where the right-hand side inherits the continuity (or Lipschitz continuity) of λ → N(u,v,λ) and λ → J_ρ^{M(•,u,λ)}(w). This completes the proof.
Corollary 3.3 [1]. Let H be a real Hilbert space, let N : H × H × L → H be (r)-strongly monotone and (β)-Lipschitz continuous in the first variable, and let N be (μ)-Lipschitz continuous in the second variable. If ρ > 0 is chosen so that the constant θ defined in (3.18) satisfies θ < 1, then for each λ ∈ L problem (1.1) has a unique solution z(λ).

Corollary 3.4 [1]. Let H be a real Hilbert space, let N : H × H × L → H be (r)-strongly monotone and (β)-Lipschitz continuous in the first variable, let N be (μ)-Lipschitz continuous in the second variable, and let ρ > 0 satisfy condition (3.20). If the mappings λ → N(u,v,λ) and λ → J_ρ^{M(•,u,λ)}(w) are both continuous (or Lipschitz continuous) from L to H, then the solution z(λ) of (1.1) is continuous (or Lipschitz continuous) from L to H.
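The Lipschitz-continuity assertion can be observed numerically in a small linear model (ours, with illustrative choices N(u,u,λ) = Au + λb and M(u) = Bu): the solution map is then z(λ) = −(A + B)^{-1}(λb), which is affine and hence Lipschitz in the perturbation parameter λ, exactly as the theory predicts.

```python
import numpy as np

# Self-contained linear instance (illustrative data): the inclusion
# 0 in A u + lam*b + B u has the affine solution map z(lam) below,
# which is Lipschitz in lam with constant L = ||(A+B)^{-1} b||.
A = np.array([[3.0, 0.2],
              [0.2, 2.0]])
B = np.array([[1.0, 0.0],
              [0.0, 0.5]])
b = np.array([1.0, -2.0])

def z(lam):
    return np.linalg.solve(A + B, -lam * b)

L = np.linalg.norm(np.linalg.solve(A + B, b))  # Lipschitz constant of z
for lam1, lam2 in [(0.0, 1.0), (-2.0, 3.5), (0.1, 0.11)]:
    assert np.linalg.norm(z(lam1) - z(lam2)) <= L * abs(lam1 - lam2) + 1e-12
```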

Concluding remark
The present results on the sensitivity analysis based on the maximal monotonicity of M can be further generalized to the case of A-monotonicity [9] and H-monotonicity [3]. Recently, the author [9] introduced a new class of mappings, A-monotone mappings, which have a wide range of applications. The class of A-monotone mappings generalizes the well-known class of maximal monotone mappings and, on the other hand, it generalizes the recently introduced and studied notion of H-monotone mappings of Fang and Huang [3] to a higher level.
Let X denote a real Hilbert space with the norm ‖•‖ and inner product ⟨•,•⟩ on X.
Definition 4.1 [9]. Let A : X → X be a nonlinear single-valued mapping on X and let M : X → 2^X be a multivalued mapping on X. The map M is said to be A-monotone if:
(i) M is (m)-relaxed monotone;
(ii) A + ρM is maximal monotone for ρ > 0.
This is equivalent to stating that M is A-monotone if M is (m)-relaxed monotone and R(A + ρM) = X for ρ > 0.

Definition 4.2 [3]. Let H : X → X be a nonlinear single-valued mapping on X and let M : X → 2^X be a multivalued mapping on X. The map M is said to be H-monotone if:
(i) M is monotone;
(ii) (H + ρM)(X) = X for ρ > 0.
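A minimal scalar example (ours, not from the paper) separates the two notions: M(x) = −x is (1)-relaxed monotone but not monotone, so it cannot be H-monotone, while with the (3)-strongly monotone A(x) = 3x the sum (A + ρM)(x) = (3 − ρ)x is onto ℝ for every ρ ∈ (0, 3), so M fits the A-monotonicity pattern on that parameter range.

```python
import numpy as np

# Scalar illustration (ours): M(x) = -x is (1)-relaxed monotone but not
# monotone, while A(x) = 3x is (3)-strongly monotone; then
# (A + rho*M)(x) = (3 - rho)*x is onto R for rho in (0, 3).
M = lambda x: -x
A = lambda x: 3.0 * x

rng = np.random.default_rng(1)
for _ in range(100):
    x, y = rng.standard_normal(2)
    # (1)-relaxed monotonicity: <M(x) - M(y), x - y> >= -1 * |x - y|^2
    assert (M(x) - M(y)) * (x - y) >= -1.0 * (x - y) ** 2 - 1e-12
    # (3)-strong monotonicity of A: <A(x) - A(y), x - y> >= 3 * |x - y|^2
    assert (A(x) - A(y)) * (x - y) >= 3.0 * (x - y) ** 2 - 1e-12
```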
Proposition 4.3. Let A : X → X be an (r)-strongly monotone single-valued mapping and let M : X → 2^X be an A-monotone mapping. Then M is maximal monotone.
Proof. Since M is (m)-relaxed monotone, it suffices to show that

⟨u − u0, x − x0⟩ ≥ (−m)‖x − x0‖² for all (x,u) ∈ graph(M) implies (x0,u0) ∈ graph(M). (4.1)

Assume, on the contrary, that there exists (x0,u0) ∉ graph(M) such that

⟨u − u0, x − x0⟩ ≥ (−m)‖x − x0‖² for all (x,u) ∈ graph(M). (4.2)

Since M is A-monotone, (A + ρM)(X) = X for all ρ > 0. This implies that there exists an element (x1,u1) ∈ graph(M) such that

A(x1) + ρu1 = A(x0) + ρu0. (4.3)

It follows from (4.2) and (4.3) that

ρ⟨u0 − u1, x0 − x1⟩ = −⟨A(x0) − A(x1), x0 − x1⟩ ≥ (−ρm)‖x0 − x1‖². (4.4)

Since A is (r)-strongly monotone, this implies x0 = x1 for m < r. As a result, we have u0 = u1 from (4.3), that is, (x0,u0) ∈ graph(M), a contradiction. Hence, M is maximal monotone.
Theorem 4.4. Let A : X → X be an (r)-strongly monotone mapping and let M : X → 2^X be an A-monotone mapping. Then the operator (A + ρM)^{-1} is single-valued.
This leads to the generalized definition of the resolvent operator.
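Following the A-monotonicity framework of [9], the generalized resolvent takes the form

```latex
J^{M}_{\rho, A}(u) \;:=\; (A + \rho M)^{-1}(u), \qquad \rho > 0,
```

which is well defined since (A + ρM)^{-1} is single-valued by Theorem 4.4; taking A = I recovers the classical resolvent J_ρ^M = (I + ρM)^{-1}.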
