Trade-Off Considerations in Designing Efficient VLSI Feasible Interconnection Networks

It is well known that the hypercube has a rich set of good properties, and consequently it has been recognized as an ideal structure for parallel computation. Nevertheless, under current VLSI technology, the implementation feasibility of the hypercube becomes questionable when the size of the hypercube grows large. Recent research efforts have concentrated on finding good alternatives to the hypercube. The star graph has been shown to have many of the hypercube's desirable properties, and in several respects it is better than the hypercube. However, we observe that the star graph as a network also has several disadvantages compared with the hypercube. In this paper, we propose a class of new networks, the star-hypercube hybrid networks (or SH networks). The SH network is a simple combination of the star graph and the hypercube, and this class of networks contains the star graph and the hypercube as subclasses. We show that the SH network is an efficient and versatile network for parallel computation: it shares properties of both the hypercube and the star graph, and it remedies several major disadvantages of each. This class of networks provides more flexibility in choosing the size, degree, number of vertices, degree of fault tolerance, etc., when designing massively parallel computing structures feasible for VLSI implementation.


INTRODUCTION
It is well known that the hypercube has a rich set of good properties, and consequently it has been recognized as an ideal structure for parallel computation [9]. Many efficient parallel algorithms for hypercube computers have been developed. Nevertheless, under current VLSI technology, its implementation feasibility remains questionable when the size of the hypercube becomes large. This is because the degree and the number of edges in the hypercube grow rapidly as the number of vertices increases. Recent research efforts have concentrated on finding good alternatives to the hypercube (e.g. [1, 2, 8]). The star graph has been shown to have many of the hypercube's desirable properties [1, 2, 3]. For example, like the hypercube, the star graph is a recursively defined regular graph, it is vertex and edge symmetric, and it admits high fault tolerance. The major advantage of the star graph over the hypercube is that its degree and diameter are sublogarithmic as functions of the number of vertices. More specifically, the n-star graph S_n contains n! vertices, and its degree and diameter are n-1 and ⌊3(n-1)/2⌋, respectively. By contrast, the n-dimensional hypercube has degree and diameter that are logarithmic as functions of the number of vertices. The investigation of the combinatorial, fault-tolerance, communication, and computational aspects of the star network is still in its early stage. However, the star graph as a network has several disadvantages compared with the hypercube. For example, its variable incremental factor prohibits its implementation when n becomes large, and its sparser connectivity and lesser regularity result in less efficient parallel algorithms and less efficiency in simulating many other useful network structures.
In this paper, we propose a class of new networks, the star-hypercube hybrid networks (or SH networks). The SH network is a simple combination of the star graph and the hypercube. This class of networks contains the star graph and the hypercube as subclasses. We show that the SH network is an efficient and versatile network for parallel computation, since it shares properties of both the hypercube and the star graph, and remedies several major disadvantages of both. This class of networks provides us more flexibility in choosing the size, degree, number of vertices, degree of fault tolerance, etc. in designing massively parallel computer systems.

THE HYPERCUBE AND THE STAR GRAPH
The n-dimensional hypercube, denoted by Q_n, is a graph of 2^n vertices, each labeled by an n-bit binary number. Two vertices u and v of Q_n are connected by an edge if and only if their binary labels differ in exactly one bit position. An alternative definition of hypercubes is as follows. Q_1 is a graph of two vertices, labeled 0 and 1 respectively, connected by an edge. For n > 1, Q_n consists of two copies of Q_{n-1}: Q^0_{n-1}, whose vertex labels are prefixed by a 0, and Q^1_{n-1}, whose vertex labels are prefixed by a 1. Two vertices u = 0u_{n-2}u_{n-3}...u_0 in Q^0_{n-1} and v = 1v_{n-2}v_{n-3}...v_0 in Q^1_{n-1} are connected if and only if u_{n-2}u_{n-3}...u_0 = v_{n-2}v_{n-3}...v_0.
A rich set of symmetry properties makes the hypercube an ideal structure for parallel computation [9, 14]. The n-dimensional hypercube is a regular graph of degree n. Used as an interconnection network, the hypercube has high bandwidth and high fault tolerance. It is vertex and edge symmetric. A graph is vertex symmetric if for any two of its vertices u and v there is an automorphism of the graph that maps u to v. Similarly, a graph is edge symmetric if for any two of its edges e_1 and e_2 there is an automorphism of the graph that maps e_1 to e_2. Such symmetries are very important for a graph used as an interconnection network. For example, vertex (edge) symmetry allows all processors (data links) to be treated as identical. By such symmetries, simple and efficient data routing schemes can be derived and congestion in data communication can be minimized. The diameter of Q_n is n, which is logarithmic with respect to the number of vertices. The recursive definition of Q_n indicates that it is perfectly suitable for divide-and-conquer parallel algorithms. Complementing its partitionability, the hypercube can easily be expanded by a factor of 2 in size without changing its substructures. Furthermore, the N-processor hypercube can simulate many O(N)-processor networks (e.g. ring, array, binary tree, and mesh-of-trees) with only a small constant-factor slowdown. The major drawback of the hypercube is that the number of its edges increases rapidly as its dimension grows, and it becomes difficult to implement the hypercube network when its dimension becomes large.
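The stated parameters of Q_n (2^n vertices, regular of degree n, diameter n) are easy to confirm computationally. The following is an illustrative sketch of ours, not code from the paper, using breadth-first search on a small hypercube:

```python
from collections import deque

def hypercube(n):
    """Adjacency lists of Q_n; vertices are n-bit integers, and two
    vertices are adjacent iff their labels differ in exactly one bit."""
    return {u: [u ^ (1 << i) for i in range(n)] for u in range(2 ** n)}

def eccentricity(adj, src):
    """Largest BFS distance from src; its maximum over all sources is
    the diameter (by vertex symmetry, one source already suffices)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

n = 4
Q = hypercube(n)
assert len(Q) == 2 ** n                           # 2^n vertices
assert all(len(nb) == n for nb in Q.values())     # regular of degree n
assert max(eccentricity(Q, u) for u in Q) == n    # diameter n
```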
Recently, the star graph was proposed as an alternative to the hypercube. The star graphs belong to a class of networks formed by Cayley graphs, a family of graphs with group-theoretic properties. The n-star, denoted by S_n, is a graph of n! vertices, each labeled by a unique permutation of {i_1, i_2, ..., i_n}, a set of n distinct symbols. In particular, we may assume that i_k = k. Two vertices u = (u_1, u_2, ..., u_n) and v = (v_1, v_2, ..., v_n) are connected by an edge if and only if, for some j > 1, u_1 = v_j, v_1 = u_j, and u_k = v_k for k ≠ 1, j. Like the hypercube, the star graph can also be defined recursively. We say a permutation p = (i_1, i_2, ..., i_n) is ended with k if i_n = k. Let S^k_{n-1} denote the (n-1)-star whose vertices are the permutations of {1, 2, ..., n} ended with k: a vertex is labeled (i_1, i_2, ..., i_{n-1}, k) in S^k_{n-1}, where (i_1, i_2, ..., i_{n-1}) is a permutation of {1, 2, ..., n} - {k}. S_1 is a graph with a single vertex labeled by a single symbol. For n > 1, S_n consists of n copies of (n-1)-stars, S^1_{n-1}, S^2_{n-1}, ..., S^n_{n-1}. Two vertices u = (u_1, u_2, ..., u_{n-1}, i) in S^i_{n-1} and v = (v_1, v_2, ..., v_{n-1}, j) in S^j_{n-1}, i ≠ j, are connected if and only if u_1 = j, v_1 = i, and u_k = v_k for 1 < k < n. Star graphs S_2, S_3 and S_4 are shown in Figure 2(a), (b) and (c), respectively. It has been shown that the star graph possesses many symmetry properties of the hypercube [1, 2, 3]. For example, like the hypercube, the star graph is vertex and edge symmetric, and admits high fault tolerance. The major advantage, among several others, of the star graph is that its degree and diameter are sublogarithmic with respect to the number of its vertices. More specifically, the n-star graph S_n contains n! vertices, but its degree and diameter are n-1 and ⌊3(n-1)/2⌋, respectively. That is, the star graph is sparser than the hypercube, yet it has a smaller diameter. The investigation of the communication, algorithmic, and fault-tolerance aspects of the star graph as an interconnection network is still in its early stage. However, we have observed several disadvantages of the star graph.
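The definition above translates directly into code. As a sketch of our own (not from the paper), the following builds S_n by swapping the first symbol with each later position, and exploits the vertex symmetry of S_n to read the diameter off a single breadth-first search:

```python
from collections import deque
from itertools import permutations
from math import factorial

def star_graph(n):
    """Adjacency lists of S_n: vertices are permutations of 1..n, and
    each neighbor swaps the first symbol with the symbol in some
    position j > 1."""
    adj = {}
    for p in permutations(range(1, n + 1)):
        adj[p] = [(p[j],) + p[1:j] + (p[0],) + p[j + 1:]
                  for j in range(1, n)]
    return adj

def eccentricity(adj, src):
    """Largest BFS distance from src; by vertex symmetry of S_n this
    already equals the diameter."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

n = 4
S = star_graph(n)
assert len(S) == factorial(n)                      # n! vertices
assert all(len(nb) == n - 1 for nb in S.values())  # degree n - 1
# diameter matches the formula floor(3(n-1)/2)
assert eccentricity(S, tuple(range(1, n + 1))) == 3 * (n - 1) // 2
```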
We informally define the incremental factor of a recursive network G as the minimum number of copies of smaller networks, with the same connectivity properties as G, that are used in constructing G.
Clearly, the incremental factor of the hypercube is 2, a constant, whereas the incremental factor of the n-star graph S_n is n. Because of its linear incremental factor, we claim that the star graph is less regular in structure than the hypercube. Such irregularity may lead to serious problems in the realization of star networks. The hardware implementation of multiprocessors involves issues in VLSI layout, chip packaging, printed-circuit board layout and connection, inter-board wire connection, etc. Suppose that the current state of electronic technology allows for the implementation of a multiprocessor system with the 10-star as the underlying network, which consists of 10! = 3,628,800 processors. To build any larger star-based multiprocessor system, the technology must advance to a stage at which about forty million processors (11! = 39,916,800) can be integrated into the system. When n is large, the construction of star-based systems may become far from reality.
Sur and Srimani proposed a new network structure, called the superstar graph [15]. This is a class of networks based on the star graph. They show that, given any N > 0, a superstar graph of N vertices can be constructed using copies of star graphs of different sizes. Like the star graph, the superstar graph is optimally fault tolerant, and its diameter is sublogarithmic in the number of vertices. While the superstar graph is a good network structure for distributed computing systems, it is not a good alternative for massively parallel computer systems, because it is difficult to design well-structured, efficient parallel algorithms for it.
Another major disadvantage of the star multiprocessor system is that parallel algorithms on the star structure may not be as efficient as those on the hypercube structure. We observed that all known algorithms on S_n are no more efficient, but more complicated, than their counterparts on Q_n (e.g. [4, 12]).
While the investigation of the computational aspects of the star graph has just begun, it is reasonable to believe that, in general, it is more difficult to use the star network for parallel computing, and that algorithms on the star network are less efficient than their counterparts on the hypercube. This is because, compared with the hypercube, the decompositions of the star network into subnetworks are less regular, and the bandwidth between the subnetworks of S_n is smaller.
All known results indicate that the ability of the star network to simulate other useful networks is not as good as that of the hypercube (e.g. [5][6][7]). This is another disadvantage of the star graph. Again, one can observe that this is because all these useful networks (including the hypercube) have constant incremental factors, whereas the star graph has a linear incremental factor; furthermore, the hypercube has more edges than the star graph.

STRUCTURE OF THE SH NETWORK
As discussed in the previous section, several advantageous features of the hypercube are disadvantageous features of the star graph, and vice versa. It is desirable to design a network that possesses the good properties of both and remedies the disadvantages of both. Finding trade-offs between these two networks is the simple idea behind our design of the star-hypercube hybrid network (or SH network).
The SH(x, y) network consists of x!·2^y vertices. Each vertex u is labeled by a pair (p(u), b(u)), where p(u) = (p_1(u), p_2(u), ..., p_x(u)) and b(u) = b_y(u)b_{y-1}(u)...b_1(u) are called the permutation label and the binary label of u, respectively. The label p(u) is a permutation of {1, 2, ..., x}, and the label b(u) is a y-bit binary number. Two vertices u and v in SH(x, y) are connected by an edge if and only if exactly one of the following two conditions holds: (i) b(u) = b(v) and p(v) can be obtained from p(u) by interchanging p_1(u) with some p_i(u), i ≠ 1; (ii) p(u) = p(v) and b(u) and b(v) differ in exactly one bit position. We call an edge that connects two vertices satisfying condition (i) an s-type edge, and an edge that connects two vertices satisfying condition (ii) an h-type edge. SH(3, 2) is shown in Figure 3 and Figure 4, in which each vertex is labeled by a pair (p, b), where p and b are the permutation label and the binary label of the vertex, respectively.
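The two edge conditions can be checked mechanically. The following sketch (our own illustration, not from the paper) builds SH(x, y) for small x and y and verifies the vertex count and the degree x + y - 1:

```python
from itertools import permutations
from math import factorial

def sh_graph(x, y):
    """Adjacency lists of SH(x, y).  A vertex is a pair (p, b) with p a
    permutation of 1..x and b a y-bit integer.  s-type edges swap p[0]
    with some p[i], i > 0; h-type edges flip one bit of b."""
    adj = {}
    for p in permutations(range(1, x + 1)):
        for b in range(2 ** y):
            s_type = [((p[i],) + p[1:i] + (p[0],) + p[i + 1:], b)
                      for i in range(1, x)]
            h_type = [(p, b ^ (1 << j)) for j in range(y)]
            adj[(p, b)] = s_type + h_type
    return adj

G = sh_graph(3, 2)
assert len(G) == factorial(3) * 2 ** 2                 # x! * 2^y vertices
assert all(len(nb) == 3 + 2 - 1 for nb in G.values())  # degree x + y - 1
```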
Conceptually, SH(x, y) can be viewed as a two-level network. Its edges can be partitioned into two sets. Edges in the first set connect vertices into clusters, each of which is a subnetwork of the same structure; these edges are used for intracluster data communication. The second set contains the edges connecting vertices in different clusters, used for intercluster data communication. SH(x, y) can be viewed as obtained by connecting x! copies of y-dimensional hypercubes by x-star graphs, in such a way that the set of all vertices with the same binary label in these x! y-dimensional hypercubes is connected by an x-star graph. In this sense, the SH network can be considered a star-connected-cube network; each cluster is then a y-dimensional hypercube. Alternatively, SH(x, y) can be viewed as obtained by connecting 2^y copies of x-star graphs by y-dimensional hypercubes, in such a way that the set of all vertices with the same permutation label in these 2^y x-star graphs is connected by a y-dimensional hypercube. In this sense, the SH network can be considered a cube-connected-star network; each cluster is then an x-star network. If x! is much greater than 2^y, then SH(x, y) is close to a star graph, and if 2^y is much greater than x!, the hypercube structure dominates. In Figure 3, SH(3, 2) is shown as a star-connected-cube; in this figure, a cycle of four vertices connected by thick edges is a Q_2. In Figure 4, SH(3, 2) is shown as a cube-connected-star.
The SH network can be defined recursively, like the hypercube and the star graph. Clearly, SH(1, y) is the y-dimensional hypercube, and SH(x, 0) is the x-star graph. Hence, the class of SH networks contains the hypercube and the star graph as subclasses. SH(x, y) can be constructed using two copies of SH(x, y-1): SH^0(x, y-1), whose binary labels are prefixed by a 0, and SH^1(x, y-1), whose binary labels are prefixed by a 1. The new binary bit for each node u is denoted by b_y(u). Two nodes u in SH^0(x, y-1) and v in SH^1(x, y-1) are connected by an edge if and only if p(u) = p(v) and b_i(u) = b_i(v) for 1 ≤ i ≤ y-1. Similarly, SH(x, y) can be constructed by combining x copies of SH(x-1, y). Therefore, SH(x, y) has an incremental factor of either 2 or x. The SH networks combine the features of the hypercube and the star graph, and they preserve many properties of both, as illustrated in the next section. This class of networks provides us more flexibility in choosing the size, degree, number of vertices, degree of fault tolerance, etc. in designing massively parallel computer systems.

SOME PROPERTIES OF THE SH NETWORK
The properties of the SH network can be derived from the properties of the hypercube and the star graph. In this section, we discuss several fundamental ones. It is straightforward to see that
Property 1: The connectivity of SH(x, y) can be derived from the succinct vertex labels.
This property leads to efficient data routing algorithms, since routing tables can be avoided. Routability with compact routing information is very important for massively parallel computers.
Property 2: SH(x, y) is a regular graph of degree x + y - 1. The number of vertices and the number of edges in SH(x, y) are x!·2^y and x!·2^{y-1}·(x + y - 1), respectively. The diameter of SH(x, y) is ⌊3(x-1)/2⌋ + y. The average distance between vertices in SH(x, y) is O(x + y).
The average distance of SH(x, y) can be verified using the results of [1, 2]. Let us verify the diameter of SH(x, y). By the definition of SH(x, y), for any two vertices u and v in SH(x, y) such that the distance between p(u) and p(v) in S_x is ⌊3(x-1)/2⌋ and the distance between b(u) and b(v) in Q_y is y, the distance between u and v is no less than ⌊3(x-1)/2⌋ + y. On the other hand, using the shortest-path algorithm for S_x, a path of length at most ⌊3(x-1)/2⌋ from u to a vertex w, where p(w) = p(v) and b(w) = b(u), can be found by the rules given in [1]. Then, a path of length at most y from w to v can be found. Therefore, the diameter of SH(x, y) is ⌊3(x-1)/2⌋ + y. Note that this proof also gives an algorithm for one-to-one data routing in SH(x, y) along a shortest path.
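For small instances this diameter formula can be confirmed by exhaustive search. The sketch below (our illustration under the definitions above, not code from the paper) checks SH(3, 1) and SH(3, 2) by breadth-first search:

```python
from collections import deque
from itertools import permutations

def sh_graph(x, y):
    """Adjacency lists of SH(x, y): s-type edges swap the first symbol
    of the permutation label; h-type edges flip one bit of the binary
    label."""
    adj = {}
    for p in permutations(range(1, x + 1)):
        for b in range(2 ** y):
            s_type = [((p[i],) + p[1:i] + (p[0],) + p[i + 1:], b)
                      for i in range(1, x)]
            h_type = [(p, b ^ (1 << j)) for j in range(y)]
            adj[(p, b)] = s_type + h_type
    return adj

def diameter(adj):
    """Maximum BFS distance over all source vertices."""
    best = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best

for x, y in [(3, 1), (3, 2)]:
    # diameter = floor(3(x-1)/2) + y
    assert diameter(sh_graph(x, y)) == 3 * (x - 1) // 2 + y
```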
For SH(x, y), if we choose x and y properly, say letting x = y = n, then the degree and diameter of SH(n, n) are sublogarithmic as functions of its number of vertices. In general, if y ≤ O(x log x), then the diameter of SH(x, y), the ratio between the number of edges and the number of vertices in SH(x, y), and the average distance between vertices in SH(x, y) are sublogarithmic as functions of the number of vertices and the number of edges in SH(x, y), as for the star graph. In contrast, these parameters for Q_n are logarithmic with respect to the number of vertices and the number of edges.
Property 3: The SH network is vertex symmetric, s-type edge symmetric, and h-type edge symmetric.
Here, by s-type (h-type) edge symmetry we mean that all s-type (h-type) edges can be viewed as identical. Consider an arbitrary pair of vertices u and v of SH(x, y). By the vertex symmetry of the star graph [1, 2], there is an automorphism of SH(x, y) that maps u to a vertex w such that p(w) = p(v) and b(w) = b(u). Then, by the vertex symmetry of the hypercube, there is an automorphism of SH(x, y) that maps w to v. Therefore, SH(x, y) is vertex symmetric.
Consider any given pair of s-type edges of SH(x, y): e_1, connecting u_{1,1} and u_{1,2}, and e_2, connecting v_{1,1} and v_{1,2}. By the definition of SH(x, y) and the vertex symmetry of the hypercube, there is an automorphism of SH(x, y) that maps e_1 to an edge e'_1 connecting u'_{1,1} and u'_{1,2} such that b(u'_{1,1}) = b(v_{1,1}) and b(u'_{1,2}) = b(v_{1,2}). Then, by the edge symmetry of the star graph, there is an automorphism of SH(x, y) that maps e'_1 to e_2. Similarly, we can show that there is an automorphism of SH(x, y) that maps any h-type edge to any other h-type edge in SH(x, y).
Property 4: The SH network is Hamiltonian and bipartite.
We know that both the hypercube and the star graph are Hamiltonian [7, 9], i.e. they contain Hamiltonian cycles. Also, both of them are bipartite. To show that SH(x, y) contains a Hamiltonian cycle, we view SH(x, y) as a star-connected-cube. Let (p_0, p_1, ..., p_{x!-1}) be a Hamiltonian cycle C_S in S_x, where each p_i is a permutation of {1, 2, ..., x}, and let b_1 and b_2 be the binary labels of two adjacent vertices in a Hamiltonian cycle C_H in the y-dimensional hypercube. Denote by H_i the subgraph of SH(x, y) induced by the vertices w such that p(w) = p_i; each H_i is a y-dimensional hypercube. Let E_{s1} = {(u, v) | p(u) = p_i, p(v) = p_{(i+1) mod x!}, and b(u) = b(v) = b_1, for even i}, E_{s2} = {(u, v) | p(u) = p_i, p(v) = p_{(i+1) mod x!}, and b(u) = b(v) = b_2, for odd i}, and E_h = {edges on a Hamiltonian path from the vertex labeled b_1 to the vertex labeled b_2 in H_i | 0 ≤ i ≤ x! - 1}.
Then, by the fact that x! is even for x > 1, the edges in E_{s1} ∪ E_{s2} ∪ E_h form a Hamiltonian cycle of SH(x, y). To prove that SH(x, y) is bipartite, we only need to show that there is no cycle in SH(x, y) that contains an odd number of edges. First, in any cycle C of SH(x, y) there must be an even number of s-type edges, since each s-type edge changes the parity of the permutation label, and the permutation label must return to its initial value around C. Mapping all h-type edges of C onto their binary labels yields a closed walk in a y-dimensional hypercube; since the hypercube is bipartite, every closed walk in it contains an even number of edges, so the total number of h-type edges in C is also even. Therefore, SH(x, y) is bipartite.
Now, let us consider the fault-tolerance features of the SH network. A graph is said to be f-fault tolerant if it always remains connected when no more than f vertices are removed. The fault tolerance of a graph G is defined as the maximum f for which G is f-fault tolerant. A graph whose fault tolerance is d - 1, where d is its degree, is said to be maximally fault tolerant. It is well known that Q_n and S_n are maximally fault tolerant [1, 9]. By the definition of SH(x, y), we know that SH(x, y) is (x + y - 2)-fault tolerant, and consequently, it is maximally fault tolerant. The fault diameter of a graph G with fault tolerance f is the maximum diameter of any subgraph obtained from G by deleting f vertices. A family of graphs G_n is said to be strongly resilient if the fault diameter of G_n is at most d(G_n) + c, where d(G_n) is the diameter of G_n and c is a constant. It is known that both Q_n and S_n are strongly resilient, with c = 1 and c = 3 for Q_n and S_n, respectively [1]. From these results and the definition of SH(x, y), we know that
Property 5: SH(x, y) is (x + y - 2)-fault tolerant, and consequently, it is maximally fault tolerant. Furthermore, SH(x, y) is strongly resilient, and the constant c in its fault diameter is no greater than 4.
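Both claims can be checked by brute force on a small instance. The sketch below (ours, not the paper's) two-colors SH(3, 2) to confirm bipartiteness, and verifies (x + y - 2)-fault tolerance by deleting every set of x + y - 2 = 3 vertices:

```python
from collections import deque
from itertools import combinations, permutations

def sh_graph(x, y):
    """Adjacency lists of SH(x, y), as defined in the text."""
    adj = {}
    for p in permutations(range(1, x + 1)):
        for b in range(2 ** y):
            s_type = [((p[i],) + p[1:i] + (p[0],) + p[i + 1:], b)
                      for i in range(1, x)]
            h_type = [(p, b ^ (1 << j)) for j in range(y)]
            adj[(p, b)] = s_type + h_type
    return adj

def is_bipartite(adj):
    """Greedy 2-coloring by BFS; succeeds iff the graph is bipartite."""
    color = {}
    for src in adj:
        if src in color:
            continue
        color[src] = 0
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False
    return True

def connected_after_removal(adj, removed):
    """BFS restricted to the surviving vertices."""
    alive = set(adj) - set(removed)
    start = next(iter(alive))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in alive and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == alive

G = sh_graph(3, 2)
assert is_bipartite(G)
# degree is x + y - 1 = 4; maximal fault tolerance means any 3 vertex
# failures leave the network connected
assert all(connected_after_removal(G, r) for r in combinations(G, 3))
```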
SH(x, y) is (x + y - 1)-connected. Hence, by Menger's theorem, we have the following property of SH(x, y).
Property 6: In SH(x, y), given any two vertices s and d, there exist (x + y - 1) vertex-disjoint paths between them; given a vertex s and a set of (x + y - 1) distinct vertices {d_1, d_2, ..., d_{x+y-1}}, there exist (x + y - 1) vertex-disjoint paths, one from s to each d_i, 1 ≤ i ≤ x + y - 1.
Since (x + y - 1) is the degree of SH(x, y), this property further indicates another optimal fault-tolerance feature of SH(x, y). In fact, all these paths can be computed efficiently using the existing algorithms for the star graph and the hypercube [6, 13, 14]. In similar ways, many other properties of the SH network can be derived from the properties of the star graph and the hypercube.

ALGORITHMS ON THE SH NETWORK
By the structure of the SH network, it is natural to believe that the algorithm design techniques for SH-network-based computer systems should combine those used for star-based and hypercube-based systems. To demonstrate this, let us consider two problems: the data broadcasting problem and the sorting problem. Broadcasting a message from any vertex u to all vertices of SH(x, y) can be carried out as follows. First, use the data broadcasting algorithm for the x-star to broadcast the message from u to all vertices in S = {v | b(v) = b(u)}. This requires O(x log x) time [1, 4]. Then, in parallel, use the data broadcasting algorithm for Q_y to broadcast from each v in S to all vertices in H_v = {w | p(w) = p(v)}. Clearly, in O(x log x + y) parallel steps all vertices in SH(x, y) receive the message broadcast from vertex u. Note that the diameter of SH(x, y) is ⌊3(x-1)/2⌋ + y. Let T_b(G) denote the time required to broadcast a message in network G, and d(G) denote the diameter of G. The ratio d(G)/T_b(G) can be used to measure the efficiency of data broadcasting in G. Clearly, the efficiency of data broadcasting in SH is no worse than that in the star graph, but no better than that in the hypercube.
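The two-phase scheme is easy to simulate. In the sketch below (our illustration; simple all-port flooding stands in for the single-port star broadcasting algorithm of [1, 4]), phase 1 informs the source's star cluster and phase 2 sweeps the y hypercube dimensions:

```python
from math import factorial

def star_neighbors(p):
    """s-type neighbors: swap the first symbol with position i > 0."""
    return [(p[i],) + p[1:i] + (p[0],) + p[i + 1:]
            for i in range(1, len(p))]

def broadcast_sh(x, y, source):
    """Return the set of informed vertices after the two phases."""
    informed = {source}
    # Phase 1: flood within the star cluster S = {v | b(v) = b(u)}.
    frontier = [source]
    while frontier:
        nxt = []
        for (p, b) in frontier:
            for q in star_neighbors(p):
                if (q, b) not in informed:
                    informed.add((q, b))
                    nxt.append((q, b))
        frontier = nxt
    # Phase 2: at step j, every informed vertex forwards the message
    # along hypercube dimension j, in parallel in every cluster
    # H_v = {w | p(w) = p(v)}.
    for j in range(y):
        informed |= {(p, b ^ (1 << j)) for (p, b) in informed}
    return informed

x, y = 3, 2
done = broadcast_sh(x, y, ((1, 2, 3), 0))
assert len(done) == factorial(x) * 2 ** y   # every vertex is reached
```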
The time complexity of the best known sorting algorithms on the S_n network, for sorting a set of elements with at most one element in each processor, is O(n^3 log n) [4, 12]. By slightly modifying these algorithms and using the hypercube bitonic sorting algorithm as a subalgorithm, one can obtain a sorting algorithm on SH(x, y) with time complexity O(x^3 log x + y^2). Clearly, such an algorithm is more efficient than the currently best known sorting algorithms on the star network.
Let E_H(P), E_S(P), and E_SH(P) be the efficiencies of the best parallel algorithms of the same type (such as SIMD or MIMD) for solving a problem P on the hypercube, the star, and the SH networks, respectively. Since the SH network is a combination of the hypercube and the star networks, for all problems P, min{E_H(P), E_S(P)} ≤ E_SH(P) ≤ max{E_H(P), E_S(P)}.

SIMULATING OTHER NETWORKS BY THE SH NETWORK
Let G and H be two networks, called the guest and the host, respectively. An embedding f of G into H is a mapping of the vertices of G into the vertices of H. The ratio |V(H)|/|V(G)| is called the expansion of f, where V(G) and V(H) denote the vertex sets of G and H, respectively (note: |V(G)| ≤ |V(H)|). The maximum distance between the images of adjacent vertices of G in H is called the dilation of f. Graph embedding is used to determine how to simulate one network, the guest, by another, the host. The efficiency of such a simulation is measured partly by the dilation and expansion of the embedding. In general, we want to find embeddings with small dilations, which result in small slowdown factors for the simulations, and small expansions, which yield better processor utilization in the simulations. Usually, there are trade-offs between these two parameters.
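These two measures are straightforward to compute for a concrete embedding. The sketch below (our illustration, not from the paper) embeds an 8-vertex ring into Q_3 by the Gray code and confirms dilation 1 and expansion 1:

```python
from collections import deque

def hypercube(n):
    """Adjacency lists of Q_n over n-bit integer labels."""
    return {u: [u ^ (1 << i) for i in range(n)] for u in range(2 ** n)}

def host_distance(adj, s, t):
    """BFS distance between two host vertices."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist[t]

def dilation(guest_edges, host_adj, f):
    """Maximum host distance between images of adjacent guest vertices."""
    return max(host_distance(host_adj, f[u], f[v]) for (u, v) in guest_edges)

n = 3
ring = [(i, (i + 1) % 2 ** n) for i in range(2 ** n)]  # guest: an 8-cycle
gray = {i: i ^ (i >> 1) for i in range(2 ** n)}        # embedding f
Q = hypercube(n)
assert dilation(ring, Q, gray) == 1   # consecutive Gray codes are adjacent
assert len(Q) / len(gray) == 1.0      # expansion 1
```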
We have shown that the SH network contains Hamiltonian cycles. Thus, the SH network can be used to optimally simulate linear arrays and rings (with dilation 1 and expansion 1). The O(n log n)-time data broadcasting algorithm given in [1] implies a dilation-1, one-to-one embedding of a symmetric binary tree of depth O(n log n) into S_n. By this result, it is easy to verify that there is a dilation-1, one-to-one embedding of a symmetric binary tree of depth O(x log x + y) into SH(x, y). Since the SH network contains the hypercube and the star networks as subnetworks, one can expect that, in general, the performance of simulations of other networks by the SH network is a trade-off between the performances of simulations of these networks by the hypercube and by the star network.

SIMULATING THE HYPERCUBE BY THE SH NETWORK
The star graph has been considered a good alternative to the hypercube as the underlying interconnection structure for parallel multiprocessor systems. Since many efficient hypercube algorithms have been developed, the problem of simulating the hypercube by the star network has been investigated (e.g. [10, 11]). Due to the drastic structural difference between these two networks, the embeddings of the hypercube into the star graph have not been good when both dilation and expansion parameters are considered. Results on trade-offs between dilations and expansions of embeddings of the hypercube into the star graph have been reported in [10, 11]. Since the SH network contains hypercube clusters as subgraphs, it is natural to expect the performance of simulating the hypercube by the SH network to be improved compared with simulating the hypercube by the star network. Indeed, using any previous result on embedding the hypercube into the star graph, we can obtain embeddings of the hypercube into the SH network with smaller dilations and/or smaller expansions. The following argument illustrates a simple and general method of obtaining hypercube-into-SH embeddings that preserve the dilation but achieve significant improvements in expansion, compared with the hypercube-into-star embeddings.
Theorem 1: If Q_n can be embedded into S_{f(n)} with dilation k, then Q_n can be embedded into SH(f(n-p), p), 1 ≤ p ≤ n-1, with dilation at most k.
The idea of the proof of this theorem is as follows.
Divide the n bits of the binary vertex labels of Q_n into two parts of p bits and n-p bits, respectively. The binary numbers of the p-bit part are mapped to the binary labels of vertices in SH(f(n-p), p), and the binary numbers of the (n-p)-bit part are mapped to the permutation labels of vertices in SH(f(n-p), p). Thus, each subgraph of Q_n induced by the vertices agreeing on the (n-p)-bit part is mapped to a p-dimensional hypercube cluster of SH(f(n-p), p), for which the dilation is 1. Similarly, the binary numbers of the (n-p)-bit part are mapped to the S_{f(n-p)} clusters of SH(f(n-p), p) with dilation k.
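The label split can be made concrete in the smallest nontrivial case. In the sketch below (our illustration; the star embedding g is a hypothetical dilation-1 example), n = 3 and p = 2: the high bit of each Q_3 label is mapped to a vertex of S_2 and the two low bits become the binary label, giving a dilation-1 embedding of Q_3 into SH(2, 2):

```python
def sh_adjacent(u, v):
    """Edge test for SH(x, y) vertices (p, b): either one s-type swap
    of p[0] with a later symbol, or a one-bit difference in b."""
    (pu, bu), (pv, bv) = u, v
    s_type = bu == bv and any(
        pv == (pu[i],) + pu[1:i] + (pu[0],) + pu[i + 1:]
        for i in range(1, len(pu)))
    h_type = pu == pv and bin(bu ^ bv).count("1") == 1
    return s_type or h_type

# hypothetical dilation-1 embedding g of Q_1 (the high bit) into S_2
g = {0: (1, 2), 1: (2, 1)}

def embed(u):
    """Map u in Q_3: high bit -> permutation label, low bits -> binary."""
    return (g[u >> 2], u & 0b11)

# every hypercube edge (a one-bit difference) maps to an SH(2, 2) edge,
# so the dilation of this embedding is 1
for u in range(8):
    for i in range(3):
        v = u ^ (1 << i)
        assert sh_adjacent(embed(u), embed(v))
```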

SIMULATING THE STAR NETWORK BY THE SH NETWORK
The problem of simulating the star network by the hypercube was considered in [5]. Compared with simulating the star network by the hypercube, the performance of simulating the star network by the SH network can be expected to be much better, simply because SH(x, y) contains 2^y copies of S_x as subgraphs. Indeed, this is true. However, proving it requires techniques different from those used for the performance of simulating the hypercube by the SH network. For brevity, we state some of our results and omit the proofs. First, we are particularly interested in constant-dilation embeddings of the star graph into the SH network.
Theorem 2: S_n can be embedded into SH(p, (n-p)(n-1)) with dilation at most 4. The expansion of this embedding is p!·2^{(n-p)(n-1)}/n!.
The best known constant-dilation embedding of the star graph into the hypercube has expansion 2^{(n-1)(n-1)}/n!. The improvement in expansion of the constant-dilation star-into-SH embedding over the constant-dilation star-into-hypercube embedding is significant. Now, consider embeddings of the star network into the SH network with small expansion. We have
Theorem 3: S_n can be embedded into SH(p, (n-1) + ⌈log (n-p)!⌉) with dilation at most ⌈log (n-p)!⌉ + 2.
When p = n/2, the dilation is ⌈log (n/2)!⌉ + 2, and the expansion is n/2. The smallest known dilation of an (n/2)-expansion embedding of the star graph into the hypercube is n(log n - 2) [5]. Therefore, by embedding S_n into the SH network we can significantly reduce the dilation without increasing the expansion. It is not difficult to show that the trade-offs between dilations and expansions of embeddings of the star graph into the SH networks are always better than those of the best known embeddings of the star graph into the hypercube, namely the ones given in [5].

CONCLUDING REMARKS
We proposed a class of new networks, the SH networks, which includes the star graph and the hypercube as subclasses. The SH network is a simple combination of the star graph and the hypercube, and it has the best features of both networks. By choosing x and y carefully, high-performance parallel computer systems using the SH(x, y) network as the underlying interconnection structure can be implemented.
When needed, the system can be upgraded easily.
The algorithms developed for star-based and hypercube-based computer systems can be adapted to the SH network with similar or improved performance. Due to space limits, we have discussed only several aspects of the SH network. The evidence provided is sufficient to conclude that the SH network is a versatile network structure for parallel computation.
Another class of graphs, called the pancake graphs, defined on the symmetric group, has also been shown to be an attractive alternative to the hypercube as an interconnection network [2]. The pancake graph has properties similar to those of the star graph. As a simple generalization of our approach, we can define a new class of networks, which may be named the pancake-hypercube hybrid networks (or PH networks), in a way similar to the definition of the SH networks. The PH network is a regular graph, and it shares properties of both the pancake graph and the hypercube. By an analysis similar to the one given in this paper, one can show that trade-offs among different aspects can be made in constructing multiprocessor systems using the PH network.

Hypercubes Q_1, Q_2 and Q_3 are shown in Figure 1(a), (b) and (c), respectively.