1. Introduction
In recent years, there has been increasing interest in the development of optimality conditions for nondifferentiable multiobjective programming problems. Many authors have established and employed Kuhn–Tucker type necessary conditions, among others, to study optimal solutions; see [1–27] and the references therein. In [7], Lai and Ho used Pareto optimality conditions to investigate multiobjective programming problems for semipreinvex functions. Lai [6] obtained necessary and sufficient optimality conditions for programming problems under semipreinvexity assumptions. Some Pareto optimality conditions were established by Lai and Lin in [8]. Lai and Szilágyi [9] studied programming with convex set functions: they proved that the alternative theorem is valid for convex set functions defined on a convex subfamily S of measurable subsets in X, and showed that if the system
(1)f(Ω)≪θ,g(Ω)<θ
has no solution, where θ stands for the zero vector in a topological vector space, then there exists a nonzero continuous linear functional (y*,z*)∈C*×D* such that
(2)〈f(Ω),y*〉+〈g(Ω),z*〉≥0 ∀Ω∈S.
In this paper, we study the following optimization problem:
(P)min f(x)/g(x) subject to x∈K⊆X, hi(x)≤0, i=1,2,…,m,
where K is a semiconnected subset of a locally convex topological vector space X, and f:K→ℝ, g:K→ℝ+, and hi:K→(-∞,0], i=1,2,…,m, are functions satisfying suitable conditions. The purpose of this study is to deal with such a constrained fractional semipreinvex programming problem. Finally, we establish Fritz John type necessary and sufficient conditions for the optimality of a fractional semipreinvex programming problem.

2. Preliminaries
Throughout this paper, let X be a locally convex topological vector space over the real field ℝ. Let L1(X) denote the space of all linear operators from X into ℝ.

Let W be a nonempty convex subset of X. Let f:W→ℝ be differentiable at x0∈W. Then there is a linear operator A=f′(x0)∈L1(X) such that
(3)lim α→0 f((1-α)x0+αx)-f(x0)α=f′(x0)(x-x0).
Recall that a function f:W→ℝ is called convex on W if, for all x,x0∈W and α∈[0,1],
(4)f((1-α)x0+αx)≤(1-α)f(x0)+αf(x)
or, equivalently, for α∈(0,1],
(5)f((1-α)x0+αx)-f(x0)α≤f(x)-f(x0).
If f:W→ℝ is convex and differentiable at x0∈W, then by (3) and (5), we have
(6)f′(x0)(x-x0)≤f(x)-f(x0).
In 1981, Hanson [13, 14] introduced a generalized convexity on X, the so-called invexity; that is, x-x0 in (6) is replaced by a vector τ(x0,x)∈X, so that
(7)f′(x0)τ(x0,x)≤f(x)-f(x0).
So an invex function is indeed a generalization of a convex differentiable function.
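As a quick numerical sketch (a hypothetical example, not from the paper), consider the non-convex function f(x)=x³+x: since f′(x)=3x²+1>0 everywhere, the choice τ(x0,x)=(f(x)-f(x0))/f′(x0) makes (7) hold with equality, so f is invex although not convex.

```python
# Hypothetical illustration (not from the paper): the non-convex function
# f(x) = x^3 + x is invex, because f'(x) = 3x^2 + 1 > 0 everywhere and the
# choice tau(x0, x) = (f(x) - f(x0)) / f'(x0) makes (7) hold with equality.
f = lambda x: x ** 3 + x
df = lambda x: 3 * x * x + 1.0       # derivative, strictly positive

def tau(x0, x):
    # The kernel witnessing invexity of f.
    return (f(x) - f(x0)) / df(x0)

pts = [-2.0, -0.5, 0.0, 1.0, 2.5]
ok = all(df(x0) * tau(x0, x) <= f(x) - f(x0) + 1e-9
         for x0 in pts for x in pts)
print(ok)  # (7) holds at all sampled pairs
```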

Definition 1 (see [<xref ref-type="bibr" rid="B7">6</xref>]).
(1) A set K⊆X is said to be semiconnected with respect to a given map τ:X×X×[0,1]→X if
(8)x,y∈K, 0≤α≤1⇒y+ατ(x,y,α)∈K.

(2) A map f:X→ℝ is said to be semipreinvex on a semiconnected subset K⊂X if to each (x,y,α)∈K×K×[0,1] there corresponds a vector τ(x,y,α)∈X such that
(9)f(y+ατ(x,y,α))≤αf(x)+(1-α)f(y),lim α↓0 ατ(x,y,α)=θ,
where θ stands for the zero vector of X.

The following is an example of a bounded semiconnected set in ℝ, which is semiconnected with respect to a nontrivial τ.

Example 2.
Let A:=[4,8], B:=[-8,-4], and let K:=A∪B, a bounded set. Let τ:K×K×[0,1]→ℝ be defined by
(10)τ(x,y,α)=(x-y)/(1-α) for (x,y,α)∈A×A×[0,1/2],τ(x,y,α)=(x-y)/(1-α) for (x,y,α)∈B×B×[0,1/2],τ(x,y,α)=(-8-y)/(1-α) for (x,y,α)∈A×B×[0,1/2],τ(x,y,α)=(4-y)/(1-α) for (x,y,α)∈B×A×[0,1/2],τ(x,y,α)=(x-y)/α for (x,y,α)∈A×A×[1/2,1],τ(x,y,α)=(x-y)/α for (x,y,α)∈B×B×[1/2,1],τ(x,y,α)=(-8-y)/α for (x,y,α)∈A×B×[1/2,1],τ(x,y,α)=(4-y)/α for (x,y,α)∈B×A×[1/2,1].
Then K is a bounded semiconnected set with respect to τ.
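The semiconnectedness in Example 2 can be checked numerically; the sketch below (illustrative only; the sample points and tolerance are ours) samples points of K and values of α and verifies that y+ατ(x,y,α) stays in K.

```python
# Numerical sanity check of Example 2 (illustrative sketch; the sample points
# and tolerance are ours): verify that y + alpha * tau(x, y, alpha) stays in
# K = [4, 8] U [-8, -4] for the piecewise tau defined in (10).
import itertools

def in_K(p, eps=1e-9):
    return (4 - eps <= p <= 8 + eps) or (-8 - eps <= p <= -4 + eps)

def tau(x, y, a):
    same = (x >= 4) == (y >= 4)        # are x and y in the same component?
    target = -8 if y <= -4 else 4      # cross-component target endpoint
    num = (x - y) if same else (target - y)
    return num / (1 - a) if a <= 0.5 else num / a

points = [-8.0, -6.5, -4.0, 4.0, 5.5, 8.0]
alphas = [0.0, 0.1, 0.5, 0.9, 1.0]
ok = all(in_K(y + a * tau(x, y, a))
         for x, y in itertools.product(points, points) for a in alphas)
print(ok)  # every sampled move stays inside K
```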

Theorem 3 (see [<xref ref-type="bibr" rid="B7">6</xref>, Theorem 2.2]).
Let K⊂X be a semiconnected subset and f:K→ℝ a semipreinvex map. Then any local minimum of f is also a global minimum of f over K.

From the assumptions on problem (P), there exists a positive number λ such that
(11)f(y)/g(y)≥λ ∀y∈X, that is, f(y)-λg(y)≥0.
Consequently, we can reduce problem (P) to the equivalent nonfractional parametric problem
(Pλ)v(λ):=min y∈X (f(y)-λg(y)),
where λ∈[0,∞) is a parameter.

We will prove that the problem (P) is equivalent to the problem (Pλ*) for the optimal value λ*. The following result is our main technique to derive the necessary and sufficient optimality conditions for problem (P).

Theorem 4.
Problem (P) has an optimal solution y0 with optimal value λ* if and only if v(λ*)=0 and y0 is an optimal solution of (Pλ*).

Proof.
Suppose that y0 is an optimal solution of (P) with optimal value λ*; that is,
(12)λ*:=f(y0)/g(y0)=min z∈X f(z)/g(z)≤f(z)/g(z) ∀z∈X.
It follows from (12) that
(13)f(z)-λ*g(z)≥0 ∀z∈X,f(y0)-λ*g(y0)=0.
Thus, we have
(14)0≤min z∈X (f(z)-λ*g(z))≤f(y0)-λ*g(y0)=0.
Then, by (14), we get
(15)v(λ*)=min z∈X (f(z)-λ*g(z))=f(y0)-λ*g(y0)=0.
Therefore, y0 is an optimal solution of (Pλ*) and v(λ*)=0.

Conversely, if y0 is an optimal solution of (Pλ*) with optimal value v(λ*)=0, then
(16)f(y0)-λ*g(y0)=min z∈X (f(z)-λ*g(z))=0.
So
(17)f(z)-λ*g(z)≥0=f(y0)-λ*g(y0) ∀z∈X.
It follows from (17) that
(18)f(z)/g(z)≥λ* ∀z∈X,f(y0)/g(y0)=λ*,
and hence
(19)min z∈X f(z)/g(z)≥λ*,min z∈X f(z)/g(z)≤f(y0)/g(y0)=λ*.
Therefore,
(20)min z∈X f(z)/g(z)=λ*=f(y0)/g(y0),
and hence y0 is an optimal solution of (P) with optimal value λ*.
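The equivalence in Theorem 4 can be illustrated numerically; the sketch below uses a hypothetical one-dimensional instance (not from the paper), f(x)=x²+1 and g(x)=x+2 on K=[0,4], and checks that v(λ*) vanishes at the fractional optimum.

```python
# Illustration of Theorem 4 on a hypothetical one-dimensional instance (not
# from the paper): f(x) = x^2 + 1, g(x) = x + 2 > 0 on K = [0, 4]. Minimizing
# f/g over a grid gives lambda*, and v(lambda*) = min (f - lambda* g) is 0.
f = lambda x: x * x + 1.0
g = lambda x: x + 2.0

grid = [i / 1000.0 for i in range(4001)]     # dense grid on K = [0, 4]
x0 = min(grid, key=lambda x: f(x) / g(x))    # optimal solution of (P)
lam = f(x0) / g(x0)                          # optimal value lambda*
v = min(f(x) - lam * g(x) for x in grid)     # value of (P_lambda*)
print(abs(v) < 1e-6)  # v(lambda*) = 0, as Theorem 4 predicts
```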

3. The Existence of the Necessary and Sufficient Conditions for Semipreinvex Functions
Definition 5 (see [<xref ref-type="bibr" rid="B7">6</xref>]).
A mapping f:K⊂X→ℝ is said to be arcwise directionally (in short, arc-directionally) differentiable at x0∈K with respect to a continuous arc β:[0,1]→X satisfying x0+β(t)∈K for t∈[0,1] with
(21)β(0)=θ, β′(0+)=u (in X),
that is, the continuous function β is differentiable from right at 0, and the limit
(22)lim t↓0 (f(x0+β(t))-f(x0))/t=:f′(x0;u) exists.

Note that the arc-directional derivative f′(x0;·) is a mapping from X into ℝ. Moreover, how can K be made a semiconnected set? Indeed, we can construct a function τ associated with β as follows.

For any x,y∈K and t∈[0,1], we choose a vector
(23)τ(x,y,t):=β(t)/t=(β(t)-β(0))/(t-0),
then
(24)lim t↓0 τ(x,y,t)=β′(0+)=u,ddt[tτ(x,y,t)]|t=0+=β′(0+)=u.
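The construction above can be sketched numerically; the following example (with a hypothetical f, point x0, and arc β, none of them from the paper) approximates the arc-directional derivative f′(x0;u) by the difference quotient in (22).

```python
# Numeric sketch of the arc-directional derivative in (22), with a
# hypothetical smooth f, point x0, and arc beta (none of them from the paper):
# beta(0) = 0 and beta'(0+) = u, so the difference quotient tends to f'(x0)*u.
f = lambda x: x * x                 # hypothetical map f : R -> R
x0, u = 1.0, 3.0
beta = lambda t: u * t + t * t      # continuous arc, beta(0)=0, beta'(0+)=u

t = 1e-6
approx = (f(x0 + beta(t)) - f(x0)) / t    # difference quotient of (22)
print(abs(approx - 2.0 * x0 * u) < 1e-3)  # close to f'(x0)*u = 6
```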
Let f:X→ℝ, -g:X→ℝ-, and hi:X→ℝ-, i=1,2,…,m, be semipreinvex maps on a semiconnected subset K of X. Consider the constrained programming problem (P).

The following Fritz John type theorem is essential in this section for programming problem (P).

Theorem 6 (Necessary Optimality Condition).
Suppose that f, -g, and hi, i=1,2,…,m, are arc-directionally differentiable at x0∈K and semipreinvex on K with respect to a continuous arc β defined as in Definition 5. If x0 is a local minimizer for the semipreinvex programming problem (P), then there exist λ*∈(0,∞) and {γi}i=1m⊆[0,∞) such that
(25)f′(x0;u)-λ*g′(x0;u)+∑i=1mγihi′(x0;u)≥0,
where u=β′(0+) and
(26)∑i=1mγihi(x0)=0.

Proof.
By Theorem 4, a minimal solution of (P) is also a minimal solution of (Pλ*). Then x0 is a local minimal solution of (Pλ*), and by Theorem 3, x0 is a global minimal solution of (Pλ*). It follows that the system
(27)[f(x)-λ*g(x)]-[f(x0)-λ*g(x0)]<0,hi(x)≤0, i=1,2,…,m
has no solution in K. Hence there exist {γi}i=1m⊆[0,∞) such that
(28)[f(x)-λ*g(x)]-[f(x0)-λ*g(x0)]+∑i=1mγihi(x)<0
has no solution in K; that is, for any x∈K,
(29)[f(x)-λ*g(x)]-[f(x0)-λ*g(x0)]+∑i=1mγihi(x)≥0
for these {γi}i=1m⊆[0,∞). Putting x=x0 in (29), we get
(30)∑i=1mγihi(x0)≥0.
Since γi≥0 and hi(x0)≤0, it follows that
(31)∑i=1mγihi(x0)=0.
So (26) is proved.

As K is a semiconnected set, for any x∈K and t∈[0,1], we have
(32)x0+tτ(x0,x,t)∈K.
For t≠0, the point x~=x0+tτ(x0,x,t)≠x0 lies in K. So substituting x~ into (29) and using (26), we obtain
(33)[f(x0+tτ(x0,x,t))-f(x0)] -λ*[g(x0+tτ(x0,x,t))-g(x0)] +∑i=1mγi(hi(x0+tτ(x0,x,t))-hi(x0))≥0.
Since f, g, and hi, i=1,2,…,m, are arc-directionally differentiable with respect to β, choose the vector τ(x0,x,t) as in (23), so that (24) holds. Dividing (33) by t≠0 and letting t↓0, we obtain
(34)f′(x0;u)-λ*g′(x0;u)+∑i=1mγihi′(x0;u)≥0,
which proves (25) and completes the proof of the theorem.

Theorem 7 (Sufficient Optimality Condition).
Let f, -g, and hi, i=1,2,…,m, be arc-directionally differentiable at x0∈K and semipreinvex on K with respect to a continuous arc β defined as in Definition 5. If there exist λ∈(0,∞) and {γi}i=1m⊆[0,∞) satisfying
(35)f′(x0;u)-λg′(x0;u)+∑i=1mγihi′(x0;u)≥0,
with u=β′(0+) and
(36)∑i=1mγihi(x0)=0,
then x0 is an optimal solution for problem (P).

Proof.
Suppose to the contrary that x0 is not optimal for problem (P), and set λ:=f(x0)/g(x0). Then f(x0)-λg(x0)=0. Therefore,
(37)0≤min x∈X (f(x)-λg(x))≤f(x0)-λg(x0)=0,
thus v(λ)=min x∈X(f(x)-λg(x))=0.

By Theorem 4, x0 is not optimal for problem (Pλ). Hence there is an x∈X such that
(38)f(x)-λg(x)<f(x0)-λg(x0),hi(x)≤0
for i=1,2,…,m. Moreover, we have
(39)[f(x)-λg(x)]-[f(x0)-λg(x0)]<0,(40)∑i=1mγi[hi(x)-hi(x0)]≤0 (since ∑i=1mγihi(x0)=0 and γihi(x)≤0)
for the multipliers {γi}i=1m⊆[0,∞) in (36). Thus
(41)[f(x)-λg(x)]-[f(x0)-λg(x0)]+∑i=1mγi[hi(x)-hi(x0)]<0.
Since the maps f, -g, and hi, i=1,2,…,m, are semipreinvex on K, for each (x,x0,t)∈K×K×[0,1] there corresponds a vector τ(x,x0,t)∈X such that
(42)f(x0+tτ(x,x0,t))≤(1-t)f(x0)+tf(x),-g(x0+tτ(x,x0,t))≤(1-t)(-g)(x0)+t(-g)(x),hi(x0+tτ(x,x0,t))≤(1-t)hi(x0)+thi(x),
and so
(43)f(x0+tτ(x,x0,t))-f(x0)t≤f(x)-f(x0),(-g)(x0+tτ(x,x0,t))+g(x0)t≤(-g)(x)+g(x0),hi(x0+tτ(x,x0,t))-hi(x0)t≤hi(x)-hi(x0).
Letting t↓0, we have lim t↓0 τ(x,x0,t)=β′(0+)=u and the last inequalities imply
(44)f′(x0;u)≤f(x)-f(x0),-g′(x0;u)≤-[g(x)-g(x0)],hi′(x0;u)≤hi(x)-hi(x0).
Consequently, from (41) and (44), we obtain
(45)f′(x0;u)-λg′(x0;u)+∑i=1mγihi′(x0;u)<0,
which contradicts (35). Therefore, x0 is an optimal solution of problem (P).

Since any global minimum is also a local minimum, applying Theorems 6 and 7 we obtain the following necessary and sufficient conditions for problem (P).

Theorem 8.
Suppose that f, -g, and hi, i=1,2,…,m, are arc-directionally differentiable at x0∈K and semipreinvex on K with respect to a continuous arc β defined as in Definition 5. Then x0 is a global minimizer for the semipreinvex programming problem (P) if and only if there exist (λ,γi)∈ℝ+×(ℝ+∪{0}), i=1,2,…,m, such that
(46)f′(x0;u)-λg′(x0;u)+∑i=1mγihi′(x0;u)≥0,
where u=β′(0+) and
(47)∑i=1mγihi(x0)=0.
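Theorem 8 can be checked on a small instance; the sketch below uses a hypothetical convex problem (convex maps are semipreinvex with τ(x,y,α)=x-y, so the arc-directional derivatives reduce to ordinary derivatives times u): minimize (x²+1)/(x+2) subject to h1(x)=x-1≤0.

```python
# Checking Theorem 8 on a hypothetical smooth convex instance (convex maps
# are semipreinvex with tau(x, y, alpha) = x - y, and f'(x0; u) = f'(x0)*u):
#   minimize (x^2 + 1)/(x + 2)  subject to  h1(x) = x - 1 <= 0.
import math

x0 = math.sqrt(5.0) - 2.0           # stationary point of f/g; feasible, x0 < 1
lam = (x0 * x0 + 1.0) / (x0 + 2.0)  # lambda = f(x0)/g(x0) > 0
gamma1 = 0.0                        # h1(x0) < 0, so (47) forces gamma1 = 0

u = 1.0                             # a direction u = beta'(0+)
lhs = 2.0 * x0 * u - lam * 1.0 * u + gamma1 * 1.0 * u   # left side of (46)
print(abs(lhs) < 1e-12)             # (46) holds (with equality) at x0
```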

Remark 9.
Our results also hold for preinvex functions.