OPTIMALITY CRITERIA FOR DETERMINISTIC DISCRETE-TIME INFINITE HORIZON OPTIMIZATION

We consider the problem of selecting an optimality criterion, when total costs diverge, in deterministic infinite horizon optimization over discrete time. Our formulation allows for both discrete and continuous state and action spaces, as well as time-varying, that is, nonstationary, data. The task is to choose a criterion that is neither too overselective, so that no policy is optimal, nor too underselective, so that most policies are optimal. We contrast and compare the following optimality criteria: strong, overtaking, weakly overtaking, efficient, and average. However, our focus is on the optimality criterion of efficiency. (A solution is efficient if it is optimal to each of the states through which it passes.) Under mild regularity conditions, we show that efficient solutions always exist and thus are not overselective. As to underselectivity, we provide weak state reachability conditions which assure that every efficient solution is also average optimal, thus providing a sufficient condition for average optima to exist. Our main result concerns the case where the discounted per-period costs converge to zero, while the discounted total costs diverge to infinity. Under the assumption that we can reach from any feasible state any feasible sequence of states in bounded time, we show that every efficient solution is also overtaking, thus providing a sufficient condition for overtaking optima to exist.


Introduction
The problem of optimally selecting a sequence of decisions over an infinite horizon is complicated by the criterion issue of imposing preferences over the collection of associated cost streams. Even in the case where the infinite stream of cost flows is discounted, the resulting discounted total costs may all be infinite. Failure of an optimality criterion to distinguish among different policies is a problem of underselectivity of the criterion. At the other extreme is a notion of optimality so strong that none of the feasible policies satisfies its conditions, a problem of overselectivity. In a recent paper, Schochetman and Smith [18] considered the notion of optimality termed efficiency (see [16]) or sometimes finite optimality (Halkin [9]). A solution is termed efficient if, roughly speaking, it is optimal to each of the states through which it passes. Efficient solutions avoid being overselective in that their existence is assured by mild topological conditions. Nor are they particularly underselective, in that the requirement that they be optimal to each state constrains prior states to lie along optimal paths to those states. In this paper, we compare and contrast the selectivity of efficiency with more traditional notions of optimality, namely, strong, overtaking, weakly overtaking, and average optimality. In particular, we develop a state reachability condition which, in the presence of discounting, assures us that efficient solutions are overtaking optimal. Since efficient solutions always exist, the latter condition provides a new sufficient condition for the existence of overtaking optimal solutions. In the discrete control setting of Schochetman and Smith [18], it was shown that, under a state reachability condition, every efficient solution is average optimal. Here, we weaken this reachability condition and extend this result to the continuous control case.
The discrete-time, deterministic framework within which we work, and the very general nature of the underlying optimization problem, represent significant departures from the traditional context for the comparison of optimality criteria. We consider an extremely general deterministic infinite horizon optimization problem, formulated as a dynamic programming problem. Essentially, the only restriction in this work, apart from the model being deterministic, is the requirement that the set of feasible decision alternatives be compact at each decision epoch. In particular, we do not assume that the data are stationary. Moreover, we do not assume complete reachability, that is, the ability of the system to transition from any state to any other in the very next period, although this is not an uncommon assumption in the literature. Also, since we have imposed no linear space structure, we do not make any convexity assumptions. In general, our model framework includes production planning under nonstationary demand, parallel and serial equipment replacement under technological change, capacity planning under nonlinear demand, and optimal search in a time-varying environment.
In this paper, we compare and contrast the selectivity of efficiency with the more traditional notions of optimality, including strong, overtaking, weakly overtaking, and average optimality. Strong optimality is conferred on any strategy that attains minimum total cost. Of course, it can happen (Example 3.13) that the total costs of all strategies over the infinite horizon diverge, thus necessitating alternate notions of optimality. Overtaking optimality was introduced in the economics literature by Gale (1965) and von Weizsäcker (1967), and later adopted by optimal control theorists. Shortly thereafter, the notion of weakly overtaking optimality was introduced by Brock [4] for economic growth models, followed by Halkin [9] for optimal control problems. In the latter, Halkin also implicitly defined the notion of finite optimality, which we refer to here as efficiency. Finally, average optimality was extensively studied by Veinott [19]. See also Bertsekas [2, 3].
We will see that the efficiency criterion is not overselective, since the existence of efficient solutions is assured by relatively mild topological conditions. (We give a reasonable sufficient condition for efficient solutions to exist in our discrete-time, nonstationary, continuous state and control framework.) Nor is it particularly underselective, since such a strategy must be optimal to every state attained along its path. In the discrete action setting of Schochetman and Smith [18], it was shown that, under a (rather strong) state-reachability condition, every efficient solution is average optimal. Here, we weaken this state-reachability condition and extend this result to the case of continuous states and controls. Consequently, this provides a sufficient condition for average optimal solutions to exist. Moreover, we give a stronger state-reachability condition which, in the presence of discounting, assures us that efficient solutions are overtaking optimal. Since (as we have noted) efficient solutions commonly exist, this state-reachability condition provides a new sufficient condition for the existence of overtaking optimal solutions. Analogously, we show that a weaker reachability condition is sufficient for the existence of average optima.
In Section 2, we formulate the state-transition and cost structures of our discrete-time, infinite horizon, deterministic, nonstationary, continuous state and control problem. In Section 3, we introduce the optimality criteria of interest (with and without discounting), and compare them in the absence of any additional assumptions. In particular, we present a mild condition which is sufficient to guarantee the existence of efficient solutions (Theorem 3.4). It is also known (Halkin [9]) that weakly overtaking optima are efficient for continuous time and vector states. We give a discrete-time proof of the fact that overtaking optima are average optimal (Theorem 3.9). We also show by counterexamples that, in general, the following hold: (i) the optimal average value may or may not be attained (Examples 3.12, 3.14); (ii) overtaking optima need not be strong optima (Example 3.13); (iii) weakly overtaking optima need not be overtaking optima (Example 3.15); (iv) average optima need not be overtaking optima (Example 3.13); (v) efficient optima need not be weakly overtaking optima (Example 3.13); (vi) efficiency and average optimality are not comparable criteria in general (Examples 3.12, 3.15); (vii) weakly overtaking optimality and average optimality are not comparable in general (Examples 3.13, 3.15). In Section 4, we introduce various state reachability conditions which are considerably weaker than complete reachability. In the presence of average cost reachability, we show that efficient solutions are average optimal (Theorem 4.3). In the presence of total cost reachability, we show that the overtaking solutions are precisely the efficient solutions (Theorem 4.4). Finally, as a consequence of this fact, we obtain an easily verified sufficient condition involving bounded time reachability which guarantees the existence of overtaking optimal solutions (Theorem 4.7).
Some of the results contained herein are known for either the continuous-time setting or the discrete-time setting. In some instances, we give simpler, discrete-time proofs of certain instances of the continuous-time results. In addition to the references already cited, we recommend Brock and Haurie [5], Zaslavski [22], Haurie [10], Leizarowitz [14, 15], Lasserre [13], and Carlson et al. [6]. Finally, in [22, Section 5.3], the authors give a discrete-time version of their continuous-time model. However, implicit in this model are stationarity and complete reachability. In addition, states are required to belong to R^n. We make no such assumptions here. Moreover, they do not consider the average optimality criterion at all there.

Problem formulation
We formulate a deterministic infinite horizon optimization problem within a discrete-time framework. Otherwise, our problem is quite general. In particular, the problem is nonstationary, allows for compact state and action spaces, is discounted or not, and assumes no reachability properties (as part of the problem definition). Moreover, by a familiar device, stochastic infinite horizon problems can be modelled within our framework (see below).
Consider a sequence of decisions, where each decision is made at the beginning of each of a series of equal time periods, indexed by j = 1, 2, .... The set of all possible decisions available in period j (irrespective of the period's beginning state) is denoted by Y_j. For convenience, we assume that Y_j is a compactum, that is, a compact, nonempty metric space with metric ρ_j, for all j = 1, 2, .... Without loss of generality, we may assume that ρ_j(x_j, y_j) ≤ 1, for all x_j, y_j ∈ Y_j, for all j = 1, 2, .... We consider a dynamic system governed by the state equation

s_j = f_j(s_{j-1}, y_j), for all j = 1, 2, ...,

where s_0 is the fixed and given initial state of the system (beginning period 1); s_j is the state of the system at the end of period j, that is, beginning period j + 1; y_j is the control (or action) selected in period j with knowledge of the state s_{j-1}; S_j is the compact metric space of feasible states ending period j (with S_0 = {s_0}), so that s_j ∈ S_j, for all j = 1, 2, ...; Y_j(s_{j-1}) is the given closed, nonempty subset of Y_j consisting of the feasible controls available in period j when the beginning state is s_{j-1} ∈ S_{j-1}, so that y_j ∈ Y_j(s_{j-1}) ⊆ Y_j; and f_j is the given continuous state transition function in period j, where f_j : F_j → S_j, with F_j = {(s_{j-1}, y_j) : s_{j-1} ∈ S_{j-1}, y_j ∈ Y_j(s_{j-1})}. (Note that the nonemptiness of Y_j(s_{j-1}), for s_{j-1} ∈ S_{j-1}, is equivalent to the assumption that all finite horizon feasible solutions can be feasibly continued from state s_{j-1} in period j.) We assume that the set-valued mapping s_{j-1} ↦ Y_j(s_{j-1}) of S_{j-1} into Y_j has the following closed-graph property.
Continuity property. For each j, if s^n_{j-1} → s_{j-1} in S_{j-1} and y^n_j → y_j in Y_j as n → ∞, where y^n_j ∈ Y_j(s^n_{j-1}) for all n, then y_j ∈ Y_j(s_{j-1}). In this event, each F_j is the closed (hence, compact) graph of the set-valued mapping s_{j-1} ↦ Y_j(s_{j-1}) in the compact space S_{j-1} × Y_j. We require that S_j = f_j(F_j), for all j = 1, 2, ..., so that, in particular, S_1 = f_1(F_1), where F_1 = {s_0} × Y_1(s_0). Thus, each S_j consists of the set of feasible, that is, attainable, states in period j.
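The transition structure just described can be sketched in code. The instance below is hypothetical and purely illustrative (binary controls, cumulative-sum states, every control feasible in every state); it is not taken from the paper.

```python
# A minimal sketch of the state-transition structure of Section 2, on a
# hypothetical instance: decisions y_j in {0, 1}, state s_j = number of 1's
# chosen so far, and every decision feasible in every state.

def f(j, s, y):
    """State transition s_j = f_j(s_{j-1}, y_j); here simply a cumulative sum."""
    return s + y

def feasible_controls(j, s):
    """The feasible-control set Y_j(s_{j-1}); here all of Y_j = {0, 1}."""
    return {0, 1}

def trajectory(s0, y):
    """Return the states s_1(y), ..., s_N(y) reached by a strategy prefix y,
    raising ValueError if y is not feasible through period N (y not in X_N)."""
    states, s = [], s0
    for j, yj in enumerate(y, start=1):
        if yj not in feasible_controls(j, s):
            raise ValueError(f"control {yj} infeasible in period {j}")
        s = f(j, s, yj)
        states.append(s)
    return states

print(trajectory(0, [1, 0, 1, 1]))  # [1, 1, 2, 3]
```

In this toy instance the mapping s ↦ Y_j(s) is constant, so the closed-graph (continuity) property above holds trivially.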
Remarks 2.1. Before proceeding, it is worth noting that continuous-time optimization problems can be adapted to our model. For the sake of simplicity, assume that strategies are the same as state trajectories, that is, decisions are system states. Then proceed as in [22]. Moreover, stochastic optimization problems can also be adapted to our model. Once again, for simplicity, assume decisions are finite in number, so that policies correspond to probability mass functions over the underlying stochastic states. Then proceed as in [14]. We leave it to the interested reader to pursue those cases where decisions are not system states and probability distributions are more general.

The product set Y = ∏_{j=1}^∞ Y_j of all potential decision sequences, or strategies, is then a compact topological space relative to the product topology, that is, the topology of componentwise convergence. The product topology on Y is metrizable with metric d given by

d(x, y) = Σ_{j=1}^∞ β^j ρ_j(x_j, y_j), for x, y ∈ Y,

where β is chosen arbitrarily so that 0 < β < 1. Now let y ∈ Y and fix a positive integer N. Then y is feasible through period N if y_j ∈ Y_j(s_{j-1}), where s_j = f_j(s_{j-1}, y_j), for all j = 1, 2, ..., N. Denote the set of all such strategies by X_N, which is thus a closed, nonempty subset of Y. Note that if y is feasible through period N and M < N, then y is feasible through period M, that is, X_N ⊆ X_M. Moreover, y is a feasible strategy if y is feasible through period N, for each N = 1, 2, .... We define the feasible region X to be the subset of Y consisting of all those y which are feasible through each period N, so that X = ∩_{N=1}^∞ X_N. This set is closed in Y, and it is nonempty since Y_j(s_{j-1}) is nonempty, for all j and all s_{j-1} ∈ S_{j-1}. In fact, as a consequence of this assumption, if y is feasible through a given period N, then it may be feasibly extended over all remaining periods.
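The product metric can be evaluated numerically. The sketch below assumes the series form d(x, y) = Σ_j β^j ρ_j(x_j, y_j) reconstructed above, with ρ_j taken to be the discrete metric (which is bounded by 1, as required); truncating the series at J incurs an error of at most β^{J+1}/(1 − β).

```python
# Numerical sketch of the product metric d on Y, assuming the series form
# d(x, y) = sum_j beta**j * rho_j(x_j, y_j) with rho_j the discrete metric.

def d(x, y, beta=0.5, J=50):
    # rho: discrete metric on each factor, bounded by 1 as the text requires
    rho = lambda a, b: 0.0 if a == b else 1.0
    return sum(beta**j * rho(xj, yj)
               for j, (xj, yj) in enumerate(zip(x[:J], y[:J]), start=1))

x = [0, 0, 0, 0]
y = [0, 1, 0, 1]          # differs from x in periods 2 and 4
print(d(x, y))            # 0.5**2 + 0.5**4 = 0.3125
```

Because each ρ_j is bounded by 1 and β < 1, convergence of d is immediate, and convergence in d is exactly componentwise convergence, as stated.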
If y is feasible through period N, then we may define s_N(y) to be the state s_N determined recursively by s_j = f_j(s_{j-1}, y_j), j = 1, 2, ..., N, so that s_N(y) ∈ S_N, and y ∈ X_N if and only if y_j ∈ Y_j(s_{j-1}(y)), for all j = 1, 2, ..., N. We will refer to each such s_N(y) as the state through which y passes at the end of period N. Thus, for each N, we obtain a mapping s_N : X_N → S_N, which is onto since S_N consists of feasible states. If y ∈ Y, z ∈ X_N, and y_j = z_j, for all j = 1, 2, ..., N, then y ∈ X_N and s_N(y) = s_N(z). Moreover, if x ∈ X, then s_N(x) is defined for each period N, and s ∈ S_N implies that there exists x ∈ X for which s_N(x) = s. Finally, if x ∈ X, then (s_{j-1}(x), x_j) ∈ F_j, for all j = 1, 2, ....

Lemma 2.2. For each N, the mapping s_N : X_N → S_N is continuous.
Proof. This follows from the continuity of the f_j.
For convenience, we introduce the following notation. If N is a positive integer and x, y ∈ Y, then we define (x, N, y) to be the strategy in Y which agrees with x through period N and with y thereafter. The following is then immediate.

Lemma 2.3. Let x, y ∈ X with s_N(x) = s_N(y), and let z = (x, N, y). Then z ∈ X, s_M(z) = s_M(x), for all M ≤ N, and s_M(z) = s_M(y), for all M > N.
Turning to the objective function, we allow the cost of a decision made in period j to depend also (indirectly) on the sequence of previous decisions or, more directly, on the state resulting from these decisions. Specifically, we let c_j(s_{j-1}, y_j) be the (undiscounted) nonnegative cost of decision y_j in period j, when s_{j-1} is the state beginning period j. We thus obtain cost functions c_j : F_j → [0, ∞), which we require to be continuous. Thus, each c_j attains its maximum value, which we denote by c̄_j, for all j. We say that the period costs c_j are exponentially bounded if there exist B > 0 and γ ≥ 1 such that c̄_j ≤ Bγ^j, that is,

c_j(s_{j-1}, y_j) ≤ Bγ^j, for all (s_{j-1}, y_j) ∈ F_j, j = 1, 2, ....

Of course, if γ = 1, then the period costs are actually uniformly bounded by B.
Throughout the following, let α be a discount factor, 0 < α ≤ 1. For each strategy x ∈ X and positive integer N, we define the associated total N-horizon cost C_N(x|α) by

C_N(x|α) = Σ_{j=1}^N α^{j-1} c_j(s_{j-1}(x), x_j),

for all N, x, and α. Note that each C_N(·|α) is a continuous real-valued function on X.
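The discounted N-horizon cost can be sketched directly from this definition. The transition rule and period-cost function below are hypothetical placeholders, not part of the paper's model.

```python
# Sketch of the discounted N-horizon cost
#     C_N(x | alpha) = sum_{j=1}^{N} alpha**(j-1) * c_j(s_{j-1}(x), x_j),
# on a hypothetical instance with additive state s_j = s_{j-1} + y_j and
# illustrative period cost c_j(s, y) = s + y.

def C_N(x, alpha, s0=0,
        c=lambda j, s, y: s + y,      # hypothetical period cost c_j
        f=lambda j, s, y: s + y):     # hypothetical transition f_j
    total, s = 0.0, s0
    for j, yj in enumerate(x, start=1):
        total += alpha**(j - 1) * c(j, s, yj)
        s = f(j, s, yj)               # advance to s_j(x)
    return total

print(C_N([1, 1, 1], alpha=1.0))   # period costs 1, 2, 3 -> 6.0
print(C_N([1, 1, 1], alpha=0.5))   # 1 + 0.5*2 + 0.25*3 -> 2.75
```

Continuity of C_N(·|α) in the product topology follows because only the first N components of x enter the sum.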
Our general problem is to find an infinite horizon feasible strategy x ∈ X which, in some suitable sense, is optimal, that is, minimal. The fundamental question is: what does "optimal" mean? There is no guarantee that the total cost of any strategy over the infinite horizon will be finite, even if it is discounted. In the next section, we compare and contrast five more-or-less familiar optimality criteria, each of which responds to this question.

Optimality criteria
There are many optimality criteria in the literature, the most popular being strong optimality. Others include overtaking optimality, weakly overtaking optimality, finite optimality (also known as efficiency), and average optimality. In this paper, we contrast and compare these optimality criteria for our discrete-time problem, with and without discounting. We begin with strong optimality.
For each x ∈ X and discount factor α, define the infinite horizon total cost C(x|α) by

C(x|α) = lim_{N→∞} C_N(x|α) = Σ_{j=1}^∞ α^{j-1} c_j(s_{j-1}(x), x_j) ≤ ∞.

Thus, the function C(·|α) : X → [0, ∞] is both the pointwise limit and the supremum of the continuous functions C_N(·|α). Hence, C(·|α) is lower semicontinuous on X (Hewitt and Stromberg [11, page 89]), for each α. As above, we will write C(x) for C(x|1). Consequently, for a given x, whether C(x|α) is finite or infinite depends on the behavior of c_j(s_{j-1}(x), x_j) versus that of α^{j-1}, with respect to j. Accordingly, for each x ∈ X for which C_N(x) > 0 eventually, that is, C(x) > 0, the finiteness of C(x|α) is governed by the growth rate of the undiscounted partial costs C_N(x) relative to the damping rate σ = −ln α. Our total cost optimization problem is then formulated as follows: minimize C(x|α) over x ∈ X, and set C*(α) = inf_{x∈X} C(x|α). A strategy x is strongly optimal if C(x|α) = C*(α) < ∞. For each 0 < α ≤ 1, we will denote the set of such strongly optimal solutions to our problem by X_s(α).
If C(x|α) = ∞ for every x ∈ X, then C*(α) = ∞ and X_s(α) = ∅ by our definition. (For our purposes here, this is the interesting case.) At the other extreme, if the period costs c_j are exponentially bounded by Bγ^j, then, for α < 1/γ, we have

C(x|α) ≤ Σ_{j=1}^∞ α^{j-1} B γ^j = Bγ/(1 − αγ) < ∞, for all x ∈ X,

and C(·|α) is the uniform limit of the C_N(·|α), that is, it is continuous on the compact set X. Hence, it attains its minimum value, so that X_s(α) ≠ ∅, in particular.
Lemma 3.2. For each 0 < α ≤ 1, the set X_s(α) is a closed (possibly empty) subset of X.

Proof. For a fixed α, this set is the inverse image of the point C*(α) under the lower semicontinuous mapping C(·|α). Hence, it is necessarily closed (Hewitt and Stromberg [11, 7.21(d)]).
The following well-studied optimality criteria are particularly useful if C*(α) = ∞, in which case there does not exist a strongly optimal strategy. We recall the familiar notions of overtaking and weakly overtaking optimality.
Let x, y ∈ X. As in the continuous-time case, we will say that x overtakes y (relative to α) if

liminf_{N→∞} (C_N(y|α) − C_N(x|α)) ≥ 0,

and x weakly overtakes y (relative to α) if

limsup_{N→∞} (C_N(y|α) − C_N(x|α)) ≥ 0.

Overtaking and weakly overtaking optimalities. Let x ∈ X. Then x is overtaking optimal if x overtakes y, for all y ∈ X, and weakly overtaking optimal if x weakly overtakes y, for all y ∈ X. Clearly, overtaking optimality implies weakly overtaking optimality. Overtaking optimality was originally introduced by von Weizsäcker [20], who called it catching up optimality, while weakly overtaking optimality, also called sporadically catching up optimality, first appeared in Halkin [9]. Denote the sets of such optimal strategies in X by X_o(α) and X_w(α), respectively, so that X_o(α) ⊆ X_w(α) in general. Of course, the sets X_o(α) and X_w(α) are different in general (Example 3.12). Both overtaking and weakly overtaking optimality have received considerable attention in the economics and optimal control literature, primarily for continuous-time problems.
The following can be found in Halkin [9] for the continuous-time case.
Theorem 3.3. Suppose 0 < α ≤ 1. Then, in general, strong optimality implies overtaking optimality. Moreover, if C*(α) < ∞, that is, there exists x ∈ X for which C(x|α) < ∞, then strong optimality and weakly overtaking optimality are equivalent for such α.

Next we turn to the much less well-known finite-optimality notion, which we call efficiency. The state-space construction introduced above associated a unique state at the end of each time period with every infinite horizon feasible strategy. Strategies that have the property of optimally reaching each of the states through which they pass have been called efficient strategies; see [12, 16, 17, 18] for early introductions of a similar concept. This efficiency of movement through the state space suggests efficient solutions as candidates for optimality.
Efficiency (finite optimality). Let x ∈ X. Then x is efficient (relative to α) if, for each y ∈ X and for each N such that s_N(y) = s_N(x), we have C_N(x|α) ≤ C_N(y|α). Also known as finite optimality, this criterion was originally introduced in a special case by Halkin [9], who called it finite horizon clamped endpoint optimality.
Let X_e(α) denote the subset of X consisting of efficient strategies. It was shown in [18, Lemma 3.5] that efficient strategies exist in our context, that is, ∅ ≠ X_e(α) ⊆ X, provided each of the spaces Y_j and S_{j-1} is discrete. (Although in Schochetman and Smith [18] we assumed that the period costs were uniformly bounded, while here we do not, this has no effect on the definition of an efficient strategy.) Before continuing with our comparisons of optimality criteria, we give a sufficient condition for efficient solutions to exist in the case of nondiscrete Y_j and S_{j-1}. Fix N, and for each s ∈ S_N, let X_N(s) denote the set of N-horizon feasible strategies which attain state s at the end of period N, that is,

X_N(s) = {y ∈ X_N : s_N(y) = s},

and consider the problem of minimizing C_N(·|α) over X_N(s). If we let X*_N(s|α) denote the set of optimal solutions to this problem, then this set is a closed, nonempty subset of X_N. We thus obtain another compact-valued set mapping of S_N into X_N given by s ↦ X*_N(s|α). If we define

𝒳*_N(α) = {x ∈ X : C_M(x|α) ≤ C_M(y|α), for all y ∈ X with s_M(y) = s_M(x), for all M = 1, ..., N},

so that the 𝒳*_N(α) are nonempty and nested downward, and

𝒳*(α) = ∩_{N=1}^∞ 𝒳*_N(α),

then it is not difficult to see that the efficient solutions are precisely the elements of 𝒳*(α), that is, X_e(α) = 𝒳*(α).
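The per-state problems behind efficiency can be solved by a forward dynamic-programming pass over the attainable states. The sketch below computes the optimal N-horizon cost to each state s ∈ S_N on the same hypothetical instance used earlier (binary controls, cumulative-sum states, illustrative cost s + y); it is an illustration of the construction, not the paper's algorithm.

```python
# Forward DP sketch of the per-state problems behind efficiency: for each
# horizon N and each attainable state s in S_N, compute the minimal
# N-horizon cost  V_N(s) = min { C_N(y|alpha) : y in X_N(s) }.
# Hypothetical instance: y_j in {0, 1}, s_j = s_{j-1} + y_j, c_j(s, y) = s + y.

def value_to_states(N, alpha=1.0, s0=0):
    V = {s0: 0.0}                        # V_0: only the initial state, cost 0
    for j in range(1, N + 1):
        V_next = {}
        for s, v in V.items():
            for y in (0, 1):             # Y_j(s) = {0, 1} in this toy instance
                s_new = s + y            # f_j(s, y)
                cost = v + alpha**(j - 1) * (s + y)
                if cost < V_next.get(s_new, float("inf")):
                    V_next[s_new] = cost
        V = V_next
    return V                             # {state in S_N : optimal cost to it}

print(value_to_states(3))                # {0: 0.0, 1: 1.0, 2: 3.0, 3: 6.0}
```

A strategy is efficient precisely when its N-horizon prefix attains V_N(s_N(x)) for every N, which is the membership condition defining 𝒳*_N(α) above.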
The following gives a sufficient condition for the existence of efficient solutions in the continuous action/state case.

Theorem 3.4. If, for each N, the set-valued mapping s ↦ X_N(s) is continuous in the sense of [8, page 116], then efficient solutions exist, that is, X_e(α) ≠ ∅, and X_e(α) is compact, for all 0 < α ≤ 1.
Proof. It follows from our hypothesis and [8] that the set-valued mapping s ↦ X*_N(s|α) is upper semicontinuous in the sense of [8, page 109]. Consequently, the space 𝒳*_N(α) is compact (Berge [8, page 110]), for each N. Hence, 𝒳*(α) is the intersection of a descending sequence of compact, nonempty sets, and is thus compact and nonempty.
The previous result generalizes the following existence result for efficient solutions established in Schochetman and Smith [18, Lemma 3.5] for the discrete action/state case.

Corollary 3.5. If the S_N are discrete, then efficient solutions exist.
Proof.As is the case for single-valued functions, set-valued functions defined on discrete spaces are continuous.
The following is the discrete-time analogue of a result of Halkin.

Theorem 3.6. Weakly overtaking optima are efficient, that is, X_w(α) ⊆ X_e(α), for all 0 < α ≤ 1.

Proof. The proof given in [9, Theorem 4.1] for continuous time may be adapted here for discrete time. We leave the details to the interested reader.

Combining Theorems 3.3 and 3.6, we obtain the following.

Corollary 3.7. For all 0 < α ≤ 1, X_s(α) ⊆ X_o(α) ⊆ X_w(α) ⊆ X_e(α).
Finally, we consider the well-studied notion of average optimality. As is customary, we define the infinite horizon average cost (per period) of x ∈ X to be

A(x|α) = limsup_{N→∞} A_N(x|α), where A_N(x|α) = C_N(x|α)/N, for all N = 1, 2, ...,

so that 0 ≤ A_N(x|α) ≤ C_N(x|α) and A_N(x|α) ≤ A_N(x|1). Then A(x|α) ≤ C(x|α) and 0 ≤ A(x|α) ≤ A(x|1) ≤ ∞, in general. Note that A(·|α) = limsup_N A_N(·|α), where each A_N(·|α) is continuous; however, A(·|α) need not be lower semicontinuous, as was the case for C(·|α). Our average cost optimization problem is then to find x ∈ X such that A(x|α) ≤ A(y|α), for all y ∈ X, and we set A*(α) = inf_{x∈X} A(x|α). This optimality criterion has been studied by a number of authors; for example, see [1, 3], as well as the references therein. We will denote the set of average optimal solutions to our problem by X_a(α). As was the case for X_o(α) and X_w(α), the set X_a(α) need not be closed in X (Example 3.13). Of course, A*(α) need not be attained, in which case X_a(α) = ∅. It is also possible for X_s(α) to be empty while X_a(α) is not (Example 3.13).

Lemma 3.8. If the c_j are exponentially bounded by Bγ^j, and α < 1/γ, then A(x|α) = 0, for all x ∈ X, so that A*(α) = 0 and X_a(α) = X in this case.
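The behavior of A_N and its limit superior is easy to visualize numerically. The oscillating cost stream below is a hypothetical illustration (not one of the paper's examples): with undiscounted period costs alternating between 3 and 1, the running averages A_N converge to 2.

```python
# Sketch of the average cost A_N(x|alpha) = C_N(x|alpha) / N for alpha = 1,
# on a hypothetical cost stream c_j = 3 for odd j and c_j = 1 for even j,
# whose running averages tend to 2.

def running_averages(costs):
    """Return [A_1, A_2, ..., A_N] for the given list [c_1, ..., c_N]."""
    total, out = 0.0, []
    for N, c in enumerate(costs, start=1):
        total += c
        out.append(total / N)
    return out

avgs = running_averages([3 if j % 2 else 1 for j in range(1, 1001)])
print(avgs[-1])            # 2.0 at N = 1000
```

Note that even though C_N(x) diverges here, A(x|1) = 2 is finite, which is exactly the situation in which average optimality remains discriminating while strong optimality fails.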
Theorem 3.9. Suppose A*(α) < ∞. Then every overtaking optimal strategy is average optimal, that is, X_o(α) ⊆ X_a(α).

Proof. Suppose x ∈ X_o(α). Let y ∈ X and ε > 0. Then there exists M sufficiently large such that C_N(x|α) ≤ C_N(y|α) + ε, for all N ≥ M. Consequently, A_N(x|α) ≤ A_N(y|α) + ε/N, for all such N. Hence, A(x|α) ≤ A(y|α), so that x ∈ X_a(α), since A(x|α) is necessarily finite by hypothesis.

Combining this with Theorem 3.3, we obtain the following.

Corollary 3.10. If A*(α) < ∞, then X_s(α) ⊆ X_a(α).
In general, weakly overtaking solutions are not average optimal, that is, X_w(α) need not be contained in X_a(α) (Example 3.14).
We have shown that, for α such that A*(α) < ∞, and without any additional assumptions,

X_s(α) ⊆ X_o(α) ⊆ X_w(α) ⊆ X_e(α) and X_o(α) ⊆ X_a(α),

where the following hold: (i) X_s(α) is nonempty if and only if C*(α) < ∞ and C*(α) is attained; (ii) X_s(α) is always closed; (iii) A*(α) = 0, and is attained, whenever C*(α) < ∞; (iv) A*(α) may or may not be attained, in general; (v) it is always the case that X_e(α) ≠ ∅ if the set-valued mappings s ↦ X_N(s) are continuous, for all N (e.g., for discrete state spaces).

Moreover, we will see (by Examples 3.12-3.15) that (vi) X_s(α) may or may not be equal to X_o(α); (vii) X_o(α) may or may not be equal to X_w(α); (viii) X_w(α) may or may not be equal to X_e(α); (ix) X_o(α) may or may not be equal to X_a(α); (x) X_e(α) and X_a(α) are not comparable in general; (xi) X_w(α) and X_a(α) are also not comparable, in general.
Thus, the previous inclusions are the best possible, barring any additional assumptions.
(1) Observe that if there exists x ∈ X for which C(x|α) < ∞, that is, C*(α) < ∞, then X_s(α) is nonempty (the lower semicontinuous function C(·|α) attains its minimum on the compact set X) and is contained in both X_e(α) and X_a(α) (Corollaries 3.7 and 3.10). In this case, X_s(α) "dominates" all the other optimal sets in the sense that it is nonempty and contained in each of them. Thus, if C*(α) < ∞, then strong optimality is the optimality criterion of choice, because such optimal strategies exist and have all the other optimality properties. However, if C(x|α) = ∞, for all x ∈ X (i.e., C*(α) = ∞), then X_s(α) = ∅, and the remaining optimality criteria become important, particularly efficiency, since we have a reasonable sufficient condition for such optima to exist in our model (Theorem 3.4). Needless to say, the strong emphasis here is on the case C*(α) = ∞.
(2) Intuitively speaking, strong optimality is short-term biased, in that the earlier the decision, the greater its impact on the total cost. On the other hand, average optimality is long-term biased, because average cost is influenced only by the cost to go. However, efficiency appears to be neither short-term nor long-term biased. It is reasonable to expect that a suitable infinite horizon optimization criterion should not be short-term biased. The general concept of bias for optimality criteria has been studied formally by Chichilnisky [7]. We will not pursue this issue here.
We next describe four examples. Without loss of generality, it suffices to consider only the case α = 1. If α < 1, then replace each c_j(s_{j-1}, y_j) by c_j(s_{j-1}, y_j)/α^{j-1} to obtain the same conclusions.
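This rescaling device can be checked directly: dividing each period cost by α^{j-1} makes the α-discounted total of the rescaled stream equal the undiscounted total of the original stream. The cost stream below is hypothetical.

```python
# Check of the reduction used for the examples: replacing each period cost
# c_j by c_j / alpha**(j-1) turns the alpha-discounted total cost of a
# strategy into its undiscounted total cost, so conclusions verified at
# alpha = 1 transfer to alpha < 1.

def discounted_total(costs, alpha):
    return sum(alpha**(j - 1) * c for j, c in enumerate(costs, start=1))

alpha = 0.9
costs = [2.0, 5.0, 1.0, 4.0]                      # hypothetical c_j stream
rescaled = [c / alpha**(j - 1) for j, c in enumerate(costs, start=1)]

print(discounted_total(rescaled, alpha))          # equals sum(costs), i.e. 12
print(discounted_total(costs, 1.0))               # also 12
```

Since every criterion in this section is defined through the values C_N(·|α), identical finite-horizon costs yield identical optimal sets under the two parameterizations.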
Example 3.12. Let the data for j ≥ 1 and the cost structure be as in (3.34). Note that the period costs are uniformly bounded. We leave it to the reader to verify that this example has the following properties for α = 1, that is, the undiscounted case: X_e(1) = {θ}, X_o(1) = ∅, and X_a(1) = X, where θ = (0, 0, ...), so that X_a is not contained in X_w, in general. That is, there is exactly one efficient solution, no overtaking optimal solution, and all feasible solutions are average optimal.
Example 3.13. Let the data be as follows for j ≥ 1: Y_j = {0, 1} and S_j = {(j, 0), (j, 1), ..., (j, j)}, with the cost structure defined accordingly. Note that the period costs are uniformly bounded. We leave it to the reader to verify the following properties for the undiscounted case, where x^(j) denotes the strategy equal to 1 in the first j positions and zero thereafter: there is exactly one (weakly) overtaking optimal solution, all but one of the feasible strategies are average optimal, and all feasible solutions are efficient. Thus, X_w is properly contained in X_e, X_e is not contained in X_a, and X_a is not contained in X_w.

Example 3.14. Let the state-space structure be as in the previous example, but define the cost structure differently (see Figure 3.3). Note that the period costs are uniformly bounded. We leave it to the reader to verify that this example has the following properties for α = 1, that is, the undiscounted case: all feasible strategies are efficient, and no feasible strategy is optimal in any other sense. Thus, X_w is properly contained in X_e, and X_e is not contained in X_a. Moreover, A*(1) is not attained.

Example 3.15. Let the data be as follows for j ≥ 1: Y_j = {0, 1}, S_j = {(j, 0), (j, 1)}, s_0 = (0, 0), with period costs built from the sequence n_j, where n_j = 0 if j + 1 is not a power of 2, and n_j = 2^m if j + 1 = 2^m, for some integer m ≥ 1 (see Figure 3.4). Note that the period costs are not uniformly bounded, but they are exponentially bounded; specifically, 0 ≤ c_j(s_{j-1}, y_j) ≤ 2^j. Clearly, there are just two feasible solutions, x^0 and x^1, given by x^0 = (0, 0, 0, ...) and x^1 = (1, 0, 0, ...). Moreover, for each N ≥ 1, we have C_N(x^0) = N, so that C*(1) = ∞ and X_s = ∅. For each N_M = 2^{M+1} − 2, we have C_{N_M}(x^1) = Σ_{j≤N_M} n_j = 2^{M+1} − 2 = N_M, that is, C_N(x^0) and C_N(x^1) are each equal to N, for all such N. Next, suppose that N is strictly between two such indices, that is, 2^{M+1} − 2 < N < 2^{M+2} − 2. Then C_N(x^1) = 2^{M+2} − 2 > N = C_N(x^0) for such N. From these facts, it follows that

liminf_{N→∞} (C_N(x^1) − C_N(x^0)) = 0 and limsup_{N→∞} (C_N(x^0) − C_N(x^1)) = 0,

that is, x^0 overtakes x^1 (so that x^0 weakly overtakes x^1), x^1 does not overtake x^0, and x^1 weakly overtakes x^0. Hence, X_o(1) = {x^0} and X_w = X. Clearly, X_e(1) = X also, since, for each state, only one of the two strategies attains that state. As we have observed, for each N ≥ 2, there exists a unique M such that 2^M − 2 < N ≤ 2^{M+1} − 2, and C_N(x^1) = 2^{M+1} − 2. Consequently, A(x^1|1) = limsup_N C_N(x^1)/N = 2, while A(x^0|1) = 1, so that X_a(1) = {x^0}. We thus obtain the inclusions

X_s = ∅ ⊊ X_o = X_a = {x^0} ⊊ X_w = X_e = X,

so that, in particular, X_w and X_e are not contained in X_a. There is exactly one overtaking (average) optimal solution, no strongly optimal solution, and all feasible solutions are weakly overtaking and efficient.
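The cost accounting in Example 3.15, as reconstructed above, can be verified by machine: C_N(x^0) = N, C_N(x^1) is the partial sum of the n_j, their difference is always nonnegative, returns to 0 at each N = 2^{M+1} − 2, and grows without bound in between.

```python
# Numerical check of Example 3.15 (as read above): C_N(x0) = N, while
# C_N(x1) = sum_{j<=N} n_j, with n_j = 2**m if j + 1 = 2**m and 0 otherwise.
# The difference C_N(x1) - C_N(x0) has minimum 0 (so x0 overtakes x1 and x1
# weakly overtakes x0) but takes arbitrarily large values (so x1 does not
# overtake x0).

def n(j):
    k = j + 1
    return k if (k & (k - 1)) == 0 and k > 1 else 0   # k a power of 2 (>= 2)

N_MAX = 2**10
C0 = list(range(1, N_MAX + 1))                  # C_N(x0) = N
C1, tot = [], 0
for j in range(1, N_MAX + 1):
    tot += n(j)
    C1.append(tot)

diff = [c1 - c0 for c0, c1 in zip(C0, C1)]      # C_N(x1) - C_N(x0) >= 0
print(min(diff), max(diff))                     # 0 1023
```

The zeros of `diff` along the subsequence N = 2^{M+1} − 2 give liminf (C_N(x^1) − C_N(x^0)) = 0, while the peaks of size 2^M − 1 show that limsup (C_N(x^0) − C_N(x^1)) = 0 is attained only along that subsequence, exactly as claimed.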
Remark 3.16. Example 3.14 shows that there exist problems for which our five optimality criteria are indiscriminate. In such cases, other criteria are called for, of which there are many. See Carlson et al. [6], for example.

Reachability conditions
In this section, we consider certain additional state-reachability conditions for our problem which will prove useful for comparing our optimality criteria in the case C*(α) = ∞. These conditions are controllability notions. A very strong version of such a notion in the literature is complete reachability, which requires that the system be able to transition from any state in any period to any state in the very next period. This was assumed in Zaslavski [22], and most notably in [6, Section 5.3]. Another strong controllability notion (used in [14]) requires that transition from any state at any time to any future state be accomplished by a feasible stationary strategy. Our state-reachability conditions are considerably weaker than these. First, we recall (a slightly weaker version of) the bounded reachability condition introduced in [18].

Bounded time reachability (BTR).
There exists a positive integer R such that, for each 1 ≤ K < ∞ and each x, y ∈ X, there exist K ≤ L ≤ K + R and z ∈ X_L (depending on K, x, y) for which s_K(z) = s_K(y) and s_L(z) = s_L(x). If such an R exists, then our problem is said to satisfy bounded time reachability, that is, property (BTR). Roughly speaking, there exists a strategy z which steers the system from state s_K(y) at time K to state s_L(x) at some time L which is at most R periods after K.
Note that property (BTR) is independent of the cost structure and the discount factor. Consequently, we introduce two other notions of state reachability which do depend on these data.
Average cost reachability (ACR|α). Let x, y ∈ X and 0 < α ≤ 1. For each ε > 0, there exists a positive integer M (depending on ε) such that, for all N ≥ M, there exist 0 ≤ K ≤ N and z ∈ X (depending on N) such that s_K(z) = s_K(y), s_N(z) = s_N(x), and (C_N(z|α) − C_K(z|α))/N < ε. Thus, states along x can eventually be reached from states along y at asymptotically negligible average connecting cost.

Theorem 4.3. If α is such that property (ACR|α) is satisfied, then every efficient strategy is average optimal, that is, X_e(α) ⊆ X_a(α).

Total cost reachability (TCR|α). Let x, y ∈ X and 0 < α ≤ 1. For each ε > 0, there exists a positive integer M (depending on ε) such that, for all N ≥ M, there exist 0 ≤ K ≤ N and z ∈ X (depending on N) such that s_K(z) = s_K(y), s_N(z) = s_N(x), and C_N(z|α) − C_K(z|α) < ε. Thus, given ε > 0, for sufficiently large N, there exist an earlier period K and a strategy z which steers state s_K(y) at time K to state s_N(x) at time N with connecting cost less than ε. Clearly, (TCR|α) implies (ACR|α).

Theorem 4.4. If α is such that property (TCR|α) is satisfied, then every efficient strategy is overtaking optimal, that is, X_e(α) ⊆ X_o(α), so that

X_e(α) = X_o(α) = X_w(α) ⊆ X_a(α). (4.16)

Proof. By Corollary 3.7, it suffices to show the set inclusion in the first claim. Fix x ∈ X_e(α) and let y ∈ X. We show that x overtakes y. Let ε > 0. By property (TCR|α), there exists a positive integer M such that, for all N ≥ M, there exist 0 ≤ K ≤ N and z ∈ X with s_K(z) = s_K(y), s_N(z) = s_N(x), and C_N(z|α) − C_K(z|α) < ε. Let z' = (y, K, z), which lies in X (Lemma 2.3) and satisfies s_N(z') = s_N(x), C_K(z'|α) = C_K(y|α), and C_N(z'|α) − C_K(z'|α) = C_N(z|α) − C_K(z|α). Hence, by the efficiency of x at horizon N, we have

C_N(x|α) ≤ C_N(z'|α) = C_K(y|α) + (C_N(z|α) − C_K(z|α)) < C_K(y|α) + ε ≤ C_N(y|α) + ε,

for all N ≥ M. Since ε > 0 was arbitrary, liminf_N (C_N(y|α) − C_N(x|α)) ≥ 0, so that x ∈ X_o(α). To complete the proof, apply Theorem 4.3, together with the fact that (TCR|α) implies (ACR|α).
Example 4.5. Let the data be as in Example 3.12. We leave it to the reader to verify that this example has property (BTR) with R = 1, property (ACR|α), for all 0 < α ≤ 1, and property (TCR|α), for all 0 < α < 1; that is, it does not have property (TCR|1).
Examples 4.6. Let the data be as in Examples 3.13, 3.14, or 3.15. We leave it to the reader to verify that, for each example and for each 0 < α ≤ 1, all three reachability properties fail.
Before leaving this section, we summarize our main results. If C*(α) < ∞, then ∅ ≠ X_s(α) ⊆ X_o(α) ∩ X_a(α) ∩ X_e(α), so that there exists a strong optimum which is optimal in every sense. If C*(α) = ∞ and property (TCR|α) holds, then X_e(α) = X_o(α) = X_w(α) ⊆ X_a(α); if, in addition, the set-valued mappings s ↦ X_N(s) are continuous, for all N, then these overtaking optima exist (Theorem 3.4). In particular (Theorem 4.7), if property (BTR) holds and the discounted period cost bounds α^{j-1} c̄_j converge to zero as j → ∞, then property (TCR|α) holds, and the preceding conclusions apply.
