SHIELDS-HARARY NUMBERS OF GRAPHS WITH RESPECT TO CONTINUOUS CONCAVE COST FUNCTIONS

The Shields-Harary numbers are a class of graph parameters that 
measure a certain kind of robustness of a graph, thought of as a 
network of fortified reservoirs, with reference to a given cost 
function. We prove a result about the Shields-Harary numbers with 
respect to concave continuous cost functions which will simplify 
the calculation of these numbers for certain classes of graphs, 
including graphs formed by two intersecting cliques, and complete 
multipartite graphs.

1. Introduction.

Suppose we have a finite simple graph G and a "weighting" function g : V(G) → [0, ∞), which together constitute a weighted network. Think of the weights assigned to each vertex of G by g as representing some amount of harmful "stuff" stored there. Some enemy of this weighted network might wish to dismantle it by knocking out vertices until the sum of weights on each remaining connected component is no greater than some threshold, say 1. (We will call a set of vertices which, after being knocked out, satisfies this requirement a g-dismantling set.) The enemy does not get to knock out vertices for free: the enemy will pay f(g(v)) to knock out vertex v, where f is some particular nonincreasing, nonnegative function on the range of g. If S represents the set of vertices knocked out, then the enemy will pay Σ_{v∈S} f(g(v)). Assume that the enemy's intelligence is good, and so the enemy will always pay the least amount to dismantle the network for each particular weighting, say m_f(g, G). The Shields-Harary number of G with respect to the cost function f, denoted by SH(G, f), is sup_{g:V→[0,∞)} m_f(g, G).
Informally, SH(G, f) can be thought of as the most the enemy can be made to pay to dismantle the network. This is not quite accurate, since the "sup" in this definition is usually not a "max." Suppose the definition of dismantling is altered so that the network is dismantled when the sum of the weights on each remaining component after vertex removal is strictly less than 1. Let the minimum cost the enemy pays to dismantle the weighted network with this definition of dismantling be denoted by m̄_f(g, G). (Clearly, m̄_f(g, G) ≥ m_f(g, G).) We define S̄H(G, f) by S̄H(G, f) = sup_{g:V→[0,∞)} m̄_f(g, G). It turns out that when f is continuous, the "sup" in this case is always a "max." (We can actually make the enemy pay this amount.) A weighting g at which the max is achieved will be called optimal (for G and f). By the way S̄H(G, f) is defined, we may as well search for optimal weightings taking no value greater than 1. Therefore, S̄H(G, f) depends only on the behavior of f on [0, 1].
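For small graphs, the quantity m̄_f(g, G) can be computed directly by exhausting over all vertex subsets. The following sketch is our own illustration (the function names and the example graph are not from the paper); it computes the cheapest strict dismantling cost of a weighted network:

```python
from itertools import combinations

def min_dismantling_cost(adj, g, f):
    """Brute-force m̄_f(g, G): the cheapest cost of a set S whose removal
    leaves every connected component with total weight strictly below 1."""
    V = list(adj)
    best = float("inf")
    for k in range(len(V) + 1):
        for S in combinations(V, k):
            remaining = set(V) - set(S)
            seen, ok = set(), True
            for v in remaining:
                if v in seen:
                    continue
                comp, stack = set(), [v]          # flood-fill one component of G - S
                while stack:
                    u = stack.pop()
                    if u in comp or u not in remaining:
                        continue
                    comp.add(u)
                    stack.extend(adj[u])
                seen |= comp
                if sum(g[w] for w in comp) >= 1:  # strict dismantling fails here
                    ok = False
                    break
            if ok:
                best = min(best, sum(f(g[v]) for v in S))
    return best

# Path on three vertices, each of weight 0.6, cost function f(x) = 1 - x;
# the cheapest strict dismantling removes only the middle vertex, at cost 0.4.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
g = {0: 0.6, 1: 0.6, 2: 0.6}
print(min_dismantling_cost(adj, g, lambda x: 1 - x))
```

Removing an end vertex is useless here (the remaining path still carries weight 1.2 ≥ 1), which is why the middle vertex alone is the cheapest strict dismantling set.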
We define SH_0(G, f) and S̄H_0(G, f) as SH(G, f) and S̄H(G, f) were defined, with the weighting functions g confined to be constants.
Why is the cost function decreasing (or at least nonincreasing)? The situation we are presenting here is one in which the more stuff stored at each vertex, the harder it will be to defend that vertex, and thus the less it will cost the enemy to knock it out.
The Shields-Harary parameters arose from a conjecture posed in 1972 by the late Allen Shields, about which he consulted Frank Harary. They proved some initial results, which they did not publish, but which survived somehow. Their efforts were later added to by others, contributing to what we now know about these parameters. Some of what we know is presented next.
Initially, much of what was done with the Shields-Harary parameters dealt with the specific cost function f(x) = 1/x, which was the cost function involved in Shields' original conjecture. Johnson [3] presented everything known at the time about the SH parameters with that particular cost function. Much of what has been done more recently involves arbitrary cost functions.
Exact values of SH(G, f) are known for all continuous f when G = K_n − e [2], G = K_n, and G = K_{1,n−1} [1]. Harary and Johnson, in the same paper [1], also provided bounds for P_n (the path on n vertices) and C_n (the cycle on n vertices), as well as the following result: if f is continuous, then the sup defining S̄H(G, f) is a max and S̄H(G, f) = SH(G, f).
The following conjecture is posed by Harary and Johnson: if f is continuous and G is vertex-transitive, then there is a constant optimal weighting of V(G).
Here is a problem that is related to this conjecture: for which continuous f is it the case that, for every G, there is an optimal weighting of V(G) which is constant on each orbit of V(G) under Aut(G)?
In Section 2, we give the results of this paper. The main result of the paper has to do with cost functions which are concave on [0, 1]. We then end the paper with examples of how we apply these results to obtain the Shields-Harary numbers of some graphs with particular concave cost functions.

2. Results.
In what follows, G will be an arbitrary finite simple graph with vertex set V(G), of order n(G) = |V(G)|. For u ∈ V(G), deg(u) will denote the degree of u in G, and N_G(u) will denote the set of vertices adjacent to u in G. The complete graph (clique) on n vertices will be denoted K_n.
Proposition 2.1. Suppose T ⊆ V(G) is a set of vertices each of which is adjacent to every other vertex of G. Then, for any continuous f, there is an optimal weighting g of G satisfying g(u) ≤ g(v) for each u ∈ T and each v ∉ T.

For 0 < s < min(l, r), denote by G(l, r, s) the graph consisting of a K_l and a K_r intersecting in a K_s, as indicated in Figure 2.1.
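A concrete encoding of G(l, r, s) may help; in the sketch below (our own, with an arbitrary vertex labelling) the K_l sits on {0, ..., l−1} and the K_r on {l−s, ..., l+r−s−1}, so that the two cliques share exactly s vertices:

```python
from math import comb

def G(l, r, s):
    """Adjacency sets for G(l, r, s): a K_l and a K_r intersecting in a K_s."""
    left = set(range(l))                  # vertex set of the K_l
    right = set(range(l - s, l + r - s))  # vertex set of the K_r
    adj = {v: set() for v in left | right}
    for clique in (left, right):
        for u in clique:
            adj[u] |= clique - {u}
    return adj

graph = G(3, 3, 1)   # two triangles sharing a single vertex
assert len(graph) == 3 + 3 - 1
assert sum(len(nbrs) for nbrs in graph.values()) // 2 == comb(3, 2) + comb(3, 2) - comb(1, 2)
```

In general G(l, r, s) has l + r − s vertices and C(l, 2) + C(r, 2) − C(s, 2) edges, and each of the s shared vertices is adjacent to every other vertex of the graph.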
Corollary 2.2. For any continuous f, there is an optimal weighting g of G(l, r, s) with g(u) ≤ g(v) for every u in the K_s and every v not in the K_s.
Proposition 2.3. Suppose f is continuous, g is a weighting of G, and (g_n) is a sequence of optimal weightings of G with respect to f such that g_n(v) → g(v) for each v ∈ V(G). Then g is an optimal weighting of G with respect to f.
Proposition 2.5. Suppose f is continuous and concave on [0, 1] and S ⊆ V(G) satisfies N_G(u)\{v} = N_G(v)\{u} for each u, v ∈ S. Suppose either that f(1) = 0 or that S induces a clique in G. Then, for any optimal weighting g of G, there is another optimal weighting g̃ of G which is constant on S and agrees with g on V(G)\S, with min_{v∈S} g(v) ≤ g̃|_S ≤ max_{v∈S} g(v). Further, if S_1, S_2, ..., S_k are disjoint sets of vertices of G, each satisfying the suppositions above, then there is an optimal weighting ĝ of G which is constant on each S_i, i = 1, 2, ..., k, agrees with g at vertices not in any S_i, and, for each i, satisfies min_{v∈S_i} g(v) ≤ ĝ|_{S_i} ≤ max_{v∈S_i} g(v).

Corollary 2.6. If f is continuous and concave on [0, 1], there is an optimal weighting of G(l, r, s) satisfying the conclusion of Corollary 2.2 which is constant on each of K_l\K_s, K_r\K_s, and K_s.
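The neighborhood hypothesis of Proposition 2.5 is easy to test mechanically. As a sanity check (our own; the labelling of G(3, 3, 1) below is arbitrary), the three sets used in Corollary 2.6 satisfy it, and each also induces a clique:

```python
def prop_25_hypothesis(adj, S):
    """N_G(u)\\{v} == N_G(v)\\{u} for every pair u, v in S."""
    return all((adj[u] - {v}) == (adj[v] - {u}) for u in S for v in S)

# G(3, 3, 1): triangles {0, 1, 2} and {2, 3, 4} sharing the vertex 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3, 4}, 3: {2, 4}, 4: {2, 3}}

for S in ({0, 1}, {3, 4}, {2}):  # K_l\K_s, K_r\K_s, K_s
    assert prop_25_hypothesis(adj, S)
    assert all(v in adj[u] for u in S for v in S if u != v)  # S induces a clique
```

So for G(l, r, s) the proposition applies to all three sets regardless of whether f(1) = 0, since each set induces a clique.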

3. Proofs.
Proof of Proposition 2.1. Let g be an optimal weighting of G with respect to f, and suppose u ∈ T and v ∉ T with g(u) > g(v). Define ĝ by ĝ(u) = g(v), ĝ(v) = g(u), and ĝ = g on V(G)\{u, v}. We will show that ĝ is an optimal weighting of G with respect to f. This will prove the proposition since, even if ĝ does not satisfy the requirement of the conclusion, we can go on switching values until we arrive at a weighting that does satisfy that requirement, and this final weighting will be optimal.
Let S ⊆ V(G) be a strict ĝ-dismantling set such that Σ_{w∈S} f(ĝ(w)) = m̄_f(ĝ, G). If neither u nor v, or if both u and v, belong to S, then S is a strict g-dismantling set of the same cost, whence m̄_f(g, G) ≤ Σ_{w∈S} f(g(w)) = Σ_{w∈S} f(ĝ(w)) = m̄_f(ĝ, G) ≤ m̄_f(g, G), the last inequality because g is optimal, and it follows that ĝ is an optimal weighting of G with respect to f. This leaves two cases to consider.
Case I (u ∉ S, v ∈ S). Let S′ = (S\{v}) ∪ {u}. Then Σ_{w∈S′} f(g(w)) = Σ_{w∈S} f(ĝ(w)), since f(g(u)) = f(ĝ(v)), and S′ is a strict g-dismantling set, so m̄_f(g, G) ≤ Σ_{w∈S′} f(g(w)) = m̄_f(ĝ, G). The conclusion that ĝ is an optimal weighting follows as before.
Case II (v ∉ S, u ∈ S). In this case, S is a strict g-dismantling set (the only remaining vertex whose weight changes is v, and g(v) < ĝ(v)), and Σ_{w∈S} f(g(w)) ≤ Σ_{w∈S} f(ĝ(w)) = m̄_f(ĝ, G), since g(u) > ĝ(u) and f is nonincreasing. The conclusion that ĝ is optimal follows as before.

Proof of Corollary 2.2. The proof of this corollary follows immediately from Proposition 2.1 by taking T = V(K_s).
Proof of Proposition 2.3. Since each weighting g_n is an optimal weighting of G, m̄_f(g_n, G) = S̄H(G, f) for each n. Now, let S be a strict g-dismantling set of vertices of least cost, with Σ_{v∈S} f(g(v)) = m̄_f(g, G) ≤ S̄H(G, f). We now show that S is a strict g_n-dismantling set for all n sufficiently large, by showing that for such n and for each component H of G − S, Σ_{v∈V(H)} g_n(v) < 1. Since S is a strict g-dismantling set, for each component H of G − S we have Σ_{v∈V(H)} g(v) < 1. Thus, there exists some integer N_H such that n ≥ N_H implies that Σ_{v∈V(H)} g_n(v) < 1. There are only finitely many such H; take N = max_H N_H; then n ≥ N implies that, for each such H, Σ_{v∈V(H)} g_n(v) < 1. Now, for all n sufficiently large, we have S̄H(G, f) = m̄_f(g_n, G) ≤ Σ_{v∈S} f(g_n(v)), and, since f is continuous, Σ_{v∈S} f(g_n(v)) → Σ_{v∈S} f(g(v)) = m̄_f(g, G) ≤ S̄H(G, f). Therefore m̄_f(g, G) = S̄H(G, f), and g is optimal.

Proof of Proposition 2.5. Suppose we have an optimal weighting g of G with respect to f, so m̄_f(g, G) = S̄H(G, f). Let S ⊆ V(G) be such that for each u, v ∈ S with u ≠ v, N_G(u)\{v} = N_G(v)\{u}. We further suppose that either f(1) = 0 or S induces a clique in G. If g is constant on S, we can take g̃ = g, so assume that g is not constant on S. Let u_0, u_1 ∈ S be such that g(u_0) = min_{v∈S} g(v) < g(u_1) = max_{v∈S} g(v). Now, we define a weighting ĝ by ĝ = g except at u_0 and u_1, where ĝ(u_0) = ĝ(u_1) = (g(u_0) + g(u_1))/2. We will show that ĝ is an optimal weighting of G. Let T be a strict ĝ-dismantling set of least cost. We have four cases to consider.

Case 1 (u_0 ∉ T, u_1 ∈ T). In this case, for any connected component H of G − T, Σ_{v∈V(H)} g(v) ≤ Σ_{v∈V(H)} ĝ(v) < 1, since g(u_0) < ĝ(u_0); so T is a strict g-dismantling set, and Σ_{v∈T} f(g(v)) ≤ Σ_{v∈T} f(ĝ(v)), since g(u_1) > ĝ(u_1) and f is nonincreasing, whence m̄_f(g, G) ≤ m̄_f(ĝ, G).

Case 2 (u_0 ∈ T, u_1 ∉ T). If this occurs, we can find another set with a dismantling cost equal to Σ_{v∈T} f(ĝ(v)), because ĝ(u_0) = ĝ(u_1): let T_1 = (T\{u_0}) ∪ {u_1}. The set T_1 is a strict ĝ-dismantling set of vertices because T is, and u_0 and u_1 have the same neighbors other than themselves. Since, by assumption, T is a cheapest strict ĝ-dismantling set, T_1 must be one as well. Further, T_1 satisfies the requirement defining Case 1, so we are done in this case.

Case 3 (u_0 ∉ T, u_1 ∉ T). If u_0 and u_1 are adjacent, then they will be in the same component of G − T. Then, for every connected component H of G − T,

Σ_{v∈V(H)} g(v) = Σ_{v∈V(H)} ĝ(v) < 1, (3.12)

since g(u_0) + g(u_1) = ĝ(u_0) + ĝ(u_1), so T is a strict g-dismantling set of the same cost. Now, if u_0 and u_1 are not adjacent, then S does not induce a clique in G, so f(1) = 0.
Now, u_0 and u_1 may possibly not be in the same component of G − T. If they are in the same component, then T is a strict g-dismantling set as in (3.12) and we are done. If they are not, then u_0 and u_1 are isolated vertices in G − T, because they have the same neighbor sets in G. We know that Σ_{u∈V(H)} g(u) ≤ Σ_{u∈V(H)} ĝ(u) < 1 for every connected component H of G − T except possibly the H consisting of the vertex u_1. Now, if g(u_1) < 1, then T is a strict g-dismantling set and we are done. If g(u_1) = 1, then f(g(u_1)) = 0, and so T ∪ {u_1} is a strict g-dismantling set with the same cost as T. We have that

m̄_f(g, G) ≤ Σ_{v∈T∪{u_1}} f(g(v)) = Σ_{v∈T} f(ĝ(v)) = m̄_f(ĝ, G). (3.13)

Case 4 (u_0 ∈ T, u_1 ∈ T). In this case, it is clear that for every connected component H of G − T, Σ_{u∈V(H)} g(u) = Σ_{u∈V(H)} ĝ(u) < 1, and so T is a strict g-dismantling set. Now, Σ_{v∈T} f(g(v)) ≤ Σ_{v∈T} f(ĝ(v)) = m̄_f(ĝ, G), since f is concave. In every case, m̄_f(g, G) ≤ m̄_f(ĝ, G) ≤ m̄_f(g, G); this completes the proof that ĝ is optimal.

Now, for any weighting h of G, let d(h) = max_{u∈S} h(u) − min_{u∈S} h(u). We will show that for every optimal weighting h of G with d(h) > 0, there is another optimal weighting h̃ satisfying the following:

d(h̃) < d(h), h̃ agrees with h on V(G)\S, and min_{v∈S} h(v) ≤ h̃ ≤ max_{v∈S} h(v) on S.
Let h be any optimal weighting of G with d(h) > 0, and let h_1 = ĥ, obtained from h as ĝ was obtained from g above. By the definition of ĥ, h_1 is an optimal weighting agreeing with h on V(G)\S, and clearly, min_{v∈S} h(v) ≤ min_{v∈S} h_1(v) ≤ max_{v∈S} h_1(v) ≤ max_{v∈S} h(v). Therefore, d(h_1) ≤ d(h). If d(h_1) < d(h), take h̃ = h_1. Otherwise, we have d(h_1) = d(h) > 0, which implies that max_{v∈S} h_1(v) = max_{v∈S} h(v) and min_{v∈S} h_1(v) = min_{v∈S} h(v).
Note that the set of vertices in S where h_1 achieves its maximum is the set of vertices in S where h achieves its maximum, minus one vertex, and the same holds for the sets of points where h and h_1 achieve their minimum on S. Let h_2 = ĥ_1; if d(h_2) < d(h), take h̃ = h_2. Otherwise, continue, letting h_3 = ĥ_2, and so on. In going from h_{i−1} to h_i, one vertex of S at which h_{i−1} is maximal has its weight decreased, and one vertex at which h_{i−1} is minimal has its weight increased, and these are vertices at which h is maximal and minimal, respectively. Since there are only a finite number of such vertices, we must eventually have d(h_i) < d(h). It is straightforward to see that h̃ = h_i has the desired properties.
Suppose that g is an optimal weighting of G and suppose that W = {h : V(G) → [0, 1], h is an optimal weighting of G, h ≡ g on V(G)\S, and min_{v∈S} g(v) ≤ h ≤ max_{v∈S} g(v) on S} contains no weightings which are constant on S. Let d = inf{d(h) : h ∈ W}.

By the meaning of inf, for each positive integer k, there is a weighting h_k ∈ W with d(h_k) < d + 1/k; (h_k) is a sequence of optimal weightings. Since the h_k are bounded functions on the finite set V(G), the sequence (h_k) has a convergent subsequence; to avoid proliferation of subscripts, we suppose that, for each v ∈ V(G), (h_k(v)) converges to some value h(v).
By Proposition 2.3, the weighting h is optimal, and it clearly satisfies the other requirements for membership in W. We claim that d(h) = d. It is certainly clear by the definition of d that d(h) ≥ d; on the other hand, d(h) = lim_k d(h_k) ≤ lim_k (d + 1/k) = d. If d = 0, then h is an optimal weighting in W with d(h) = 0, that is, constant on S, contrary to supposition. If d(h) > 0, then, by previous remarks, there is another optimal weighting h̃ ∈ W with d(h̃) < d(h) = d. But this contradicts the definition of d, by which d is a lower bound of a collection of numbers of which d(h̃) is one. So there must be an optimal weighting of G which is constant on S satisfying all the conditions of the proposition after all.

Now, suppose that S_1, ..., S_k are pairwise disjoint sets of vertices, each satisfying the conditions of the proposition. We proceed by induction on k. By the induction hypothesis, there is an optimal weighting ĝ_{k−1} of G, with respect to f, which is constant on each of S_1, ..., S_{k−1}, agrees with g off ∪_{i=1}^{k−1} S_i, and whose constant value on each S_i is between the max and min values of g on S_i, for i = 1, ..., k−1. If we let S_k play the role of S and let ĝ_{k−1} replace g in the argument above, we get an optimal weighting ĝ that satisfies the conclusion of the proposition.
Proof of Corollary 2.6. Let g be an optimal weighting of G(l, r, s) satisfying the conclusion of Corollary 2.2, possibly not constant on K_l − K_s, K_r − K_s, and/or K_s. Now, let S_1 = V(K_l − K_s), S_2 = V(K_r − K_s), and S_3 = V(K_s). The sets S_1, S_2, and S_3 are disjoint sets of vertices which satisfy the conditions of Proposition 2.5. Applying Proposition 2.5 to G(l, r, s) with the weighting g of Corollary 2.2 then yields a new optimal weighting ĝ which satisfies the conclusion of the corollary.

4. Examples.
Example 4.1. If G = G(3, 3, 1) and f(x) = 1 − x, then SH(G, f) = 3/2. Corollary 2.6 tells us that there will be an optimal weighting of G which is constant on each of K_l\K_s, K_r\K_s, and K_s: say it takes the value a on the two outer vertices of one triangle, b on the two outer vertices of the other, and x on the shared center vertex. The simplification provided by Corollary 2.6 in the problem of determining SH(G(l, r, s), f) for any concave cost function f is mainly to reduce the number of variables involved from l + r − s (= 5, in this case) to 3. Even with this reduction, and even with a particular cost function f, the analysis necessary to determine SH(G(l, r, s), f) and (what may be more important) an optimal weighting of the vertices of G will be rather tedious and involve the inspection of numerous cases. We will give some indication of these below, in this case, but will spare the reader the details. In fact, supplying those details might be an interesting exercise.
An enemy of this weighted network would certainly be attracted to the removal of the center vertex of weight x based solely on the structure of the graph. However, that vertex will have the highest removal cost. The owner of this network will want to find a way to drive the minimum dismantling cost as high as possible. In light of all of this, we can break the analysis down into three cases: (1) 2a ≥ 1 and 2b ≥ 1; (2) 2a ≥ 1 and 2b < 1; (3) 2a < 1 and 2b < 1.
We will examine one subcase of one of these cases to give the reader the flavor of the game. In case (1) in the preceding paragraph, there are two subcases. In subcase (1i), assuming additionally that a, b, x < 1 (which is reasonable if f(x) = 1 − x), there are, for each a ≥ b ≥ x satisfying the requirements of the case and subcase, only two candidates (up to "equivalence") for a strict dismantling set of minimum cost; the cost of dismantling will be the minimum of the costs of these two. Clearly, we may as well make a, b, and x as small as possible, while not violating the subcase requirements nor omitting possible values. For each b ≥ 1/2, we make each of the two costs as large as possible by taking the smallest possible a and b within the requirements of the subcase. This turns out to produce an optimal weighting, by comparison with the results in the other cases. Analysis of the other cases discovers other optimal weightings; they are all of the form shown in Figure 4.2, with 0 ≤ x ≤ 1/2.

Example 4.2. If G = G(3, 3, 1) and f(x) = 2 − x, then SH(G, f) = 5. The analysis in this case is similar to that in Example 4.1, but is complicated by the possibility of using 1 as a weight. Any vertex with weight 1 will have to be removed in strict dismantling. When the cost function was 1 − x, the cost of this removal was zero, so there was no point in assigning 1 as a weight. But with f(x) = 2 − x, it turns out to be optimal to use 1 as a weight. There are two optimal weightings of the "a − b − x" type, given by a = b = x = 1 and a = 1, b = x = 1/2.
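Both examples can be checked by brute force under the a−b−x form guaranteed by Corollary 2.6. In the sketch below (our own illustration, not the paper's case analysis), a coarse grid search for f(x) = 1 − x never beats 3/2 and attains it at a = b = x = 1/2, while the two weightings stated for f(x) = 2 − x each force the enemy to pay 5:

```python
from itertools import combinations

# G(3, 3, 1): triangles {0, 1, 2} and {2, 3, 4} sharing the center vertex 2.
ADJ = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3, 4}, 3: {2, 4}, 4: {2, 3}}

def m_bar(g, f, adj=ADJ):
    """Cheapest strict dismantling cost, by exhausting over vertex subsets."""
    V = list(adj)
    best = float("inf")
    for k in range(len(V) + 1):
        for S in combinations(V, k):
            remaining = set(V) - set(S)
            seen, ok = set(), True
            for v in remaining:
                if v in seen:
                    continue
                comp, stack = set(), [v]
                while stack:
                    u = stack.pop()
                    if u in comp or u not in remaining:
                        continue
                    comp.add(u)
                    stack.extend(adj[u])
                seen |= comp
                if sum(g[w] for w in comp) >= 1:
                    ok = False
                    break
            if ok:
                best = min(best, sum(f(g[v]) for v in S))
    return best

def abx(a, b, x):
    """Weighting of the a-b-x form: a and b on the outer pairs, x in the middle."""
    return {0: a, 1: a, 2: x, 3: b, 4: b}

# Example 4.1, f(x) = 1 - x: no grid point beats 3/2; a = b = x = 1/2 attains it.
grid = [i / 4 for i in range(5)]
best = max(m_bar(abx(a, b, x), lambda t: 1 - t)
           for a in grid for b in grid for x in grid)
assert best == 1.5 == m_bar(abx(0.5, 0.5, 0.5), lambda t: 1 - t)

# Example 4.2, f(x) = 2 - x: both stated optimal weightings cost the enemy 5.
assert m_bar(abx(1, 1, 1), lambda t: 2 - t) == 5
assert m_bar(abx(1, 0.5, 0.5), lambda t: 2 - t) == 5
```

The grid search only gives a lower-bound certificate, of course; the case analysis in the text is what shows no weighting whatsoever can exceed these values.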
Example 4.3. If G = K_{2,3} and f(x) = 1 − x, then SH(G, f) = 3/2. In the case of a complete r-partite graph K_{n_1,...,n_r}, r ≥ 2, and a concave cost function f satisfying f(1) = 0, the application of Proposition 2.5 allows us to look for optimal weightings which are constant on each part of size n_i ≥ 2, and constant on the clique formed by the parts with only one vertex. Thus, the number of variables is reduced from n = Σ_{i=1}^r n_i to r − s + 1, where s = |{i : n_i = 1}|, or to r if s = 0.
Thus, for the complete bipartite graphs K_{m,n}, except for K_{1,1} = K_2, and such a cost function, there are only two variables to worry about: the constant weights on the two parts. We leave it as a recreation to see that SH(K_{2,3}, 1 − x) = 3/2, with optimal weightings 1/4 on the small part and 1/2 on the large part, or 1/4 on every vertex.
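The claimed value for K_{2,3} is small enough to verify exhaustively; this sketch (our own, with an arbitrary vertex labelling) checks that both optimal weightings described above force a minimum strict dismantling cost of exactly 3/2:

```python
from itertools import combinations

# K_{2,3}: part {0, 1} completely joined to part {2, 3, 4}.
ADJ = {0: {2, 3, 4}, 1: {2, 3, 4}, 2: {0, 1}, 3: {0, 1}, 4: {0, 1}}

def m_bar(g, f, adj=ADJ):
    """Cheapest strict dismantling cost, by exhausting over vertex subsets."""
    V = list(adj)
    best = float("inf")
    for k in range(len(V) + 1):
        for S in combinations(V, k):
            remaining = set(V) - set(S)
            seen, ok = set(), True
            for v in remaining:
                if v in seen:
                    continue
                comp, stack = set(), [v]
                while stack:
                    u = stack.pop()
                    if u in comp or u not in remaining:
                        continue
                    comp.add(u)
                    stack.extend(adj[u])
                seen |= comp
                if sum(g[w] for w in comp) >= 1:
                    ok = False
                    break
            if ok:
                best = min(best, sum(f(g[v]) for v in S))
    return best

f = lambda x: 1 - x
g1 = {0: 0.25, 1: 0.25, 2: 0.5, 3: 0.5, 4: 0.5}  # 1/4 on the small part, 1/2 on the large
g2 = {v: 0.25 for v in ADJ}                      # 1/4 on every vertex
assert m_bar(g1, f) == m_bar(g2, f) == 1.5
```

For g1, removing the two small-part vertices (cost 3/4 each) isolates the three large-part vertices, each of weight 1/2 < 1; no cheaper set works, because removing two large-part vertices leaves a component of weight exactly 1, which fails the strict requirement.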