Spectral Interpretation and Applications of Decision Diagrams

Different decision diagrams (DDs) for the representation of discrete functions are discussed. DDs can be derived by applying reduction rules to decision trees (DTs), which in turn are graphical representations of functional expressions for discrete functions. Various DTs and their generalizations, based on the lesser-known AND-EXOR rather than AND-OR expressions, are surveyed. Finally, the concept of spectral interpretation of DDs, some of their applications, and methods for their calculation are presented.

INTRODUCTION

Manipulations and calculations with discrete functions are fundamental tasks in computer science and engineering. Many problems in digital system design and testing can be expressed as a sequence of operations on discrete functions. The performance of CAD systems used in solving various problems in this area strongly depends on the efficiency of the representation of discrete functions.
Decision diagrams (DDs) [1,5] have proved to be a very convenient data structure for discrete function representation, permitting manipulations and calculations with large discrete functions efficiently in terms of both time and space. In many applications, for example those involving large matrices, conventional algorithms are significantly improved by using DDs [11]. In logic design, such applications relate to the basic problems of the design, verification and testing of logic networks [13,72,79]. Therefore, DD-based packages are becoming, or have already become, a standard part of many CAD tools in logic design.
This paper surveys basic concepts in the theory and applications of DD representations. The presentation is based on the following principle which, at the same time, determined the organization of the paper. An easy way towards studying, understanding and discussing applications of DDs is reached by the realization that DDs are derived by the reduction of the corresponding decision trees (DTs), which are graphical representations of some functional expansions for discrete functions. Therefore, such expansions and related DTs are first briefly discussed (Sections 2 and 3). The spectral interpretation of DTs (Section 4) permits a uniform consideration of various DDs (Section 5) and explains their efficiency in solving various problems.

The Shannon decision tree (SDT) in Figure 1 illustrates this procedure. This SDT shows the complete disjunctive form of f:

f = \bar{x}_1\bar{x}_2\bar{x}_3 f_{000} \oplus \bar{x}_1\bar{x}_2 x_3 f_{001} \oplus \bar{x}_1 x_2\bar{x}_3 f_{010} \oplus \bar{x}_1 x_2 x_3 f_{011} \oplus x_1\bar{x}_2\bar{x}_3 f_{100} \oplus x_1\bar{x}_2 x_3 f_{101} \oplus x_1 x_2\bar{x}_3 f_{110} \oplus x_1 x_2 x_3 f_{111}.   (4)

Each node has the label S, since the Shannon expansion was used. The nodes corresponding to the i-th variable form the i-th level in the DD.

3.2. Spectral Interpretation of SDTs

In the matrix notation, (1) can be written as

f = [\bar{x}_i \; x_i] B_i(1) \begin{bmatrix} f_0 \\ f_1 \end{bmatrix},   (5)

where B_i(1) is the (2 x 2) identity matrix. Recursive application of the same decomposition rule to all variables in f can be expressed through the Kronecker product.
f = \left( \bigotimes_{i=1}^{n} [\bar{x}_i \; x_i] \right) B(n) F, \quad B(n) = \bigotimes_{i=1}^{n} B_i(1).   (6)

Relation (6) can be interpreted as a Fourier series-like expansion of f with respect to the so-called trivial basis consisting of functions defined by minterms, i.e., by the columns of B(n). The Shannon DT is the graphical representation of the decomposition of f with respect to this basis.
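The recursive Shannon expansion that generates the SDT can be sketched directly on a truth-vector. The following Python fragment (function and variable names are ours, not from the paper) applies f = x̄·f0 + x·f1 recursively until the constant nodes are reached, producing the complete disjunctive form:

```python
def shannon_expand(F, names):
    """Recursively apply the Shannon expansion f = ~x*f0 + x*f1 to the
    truth-vector F (length 2**n); returns the expanded expression as a
    string, with the constant nodes as the innermost values."""
    if len(F) == 1:
        return str(F[0])          # constant node of the SDT
    x = names[0]
    half = len(F) // 2
    f0 = shannon_expand(F[:half], names[1:])   # cofactor for x = 0
    f1 = shannon_expand(F[half:], names[1:])   # cofactor for x = 1
    return f"~{x}*({f0}) + {x}*({f1})"
```

For example, `shannon_expand([1, 0, 0, 1], ["x1", "x2"])` unfolds a two-variable truth-vector into its minterm expansion; each nesting level corresponds to one level of the SDT.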
Example 2  If we expand f(x_1, x_2, x_3) by using the pD-expansion with respect to x_1, we have f = f_0 \oplus x_1 f_2.
The expansion tree in Figure 2 illustrates this procedure and, therefore, is called the positive Davio decision tree (pDDT). Thus, it represents the PPRM of f. Each node has the label pD, which shows that the positive Davio expansion was used in drawing this tree.
The negative Davio decision tree (nDDT) can be defined in the same way by using relation (3).

3.4. Spectral Interpretation of DDTs

In the matrix notation, (2) can be written as

f = [1 \; x_i] R_i(1) \begin{bmatrix} f_0 \\ f_1 \end{bmatrix},   (9)

where

R_i(1) = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}.

After recursive application to all the variables,

f = \left( \bigotimes_{i=1}^{n} [1 \; x_i] \right) R(n) F, \quad R(n) = \bigotimes_{i=1}^{n} R_i(1).   (10)

The set of coefficients A_f in the PPRM, written as a vector A_f(n) = [A_f(0), ..., A_f(2^n - 1)]^T, is defined by A_f(n) = R(n)F over GF(2). Relation (10) is the matrix representation of the PPRM, since R(n) is the positive polarity Reed-Muller matrix. The columns of this matrix define the Reed-Muller functions, which form a basis in the space of n-variable switching functions. Therefore, the pDDT is the graphical representation of the decomposition of f with respect to the Reed-Muller basis. In a pDDT, each path from the root node up to a constant node corresponds to a Reed-Muller function. The values of the constant nodes are the Reed-Muller spectral coefficients of f. The same interpretation extends to nDDTs. The negative literal for x_i requires a permutation of the columns in R_i(1) [34,84]. In (10), if for each variable x_i, i = 1, ..., n, we use either a positive literal (x_i) throughout or a negative literal (\bar{x}_i) throughout, then we have a fixed polarity Reed-Muller expression (FPRM). FPRMs are represented by fixed polarity Reed-Muller DTs (FPRMDTs) [75]. In FPRMDTs, the same decomposition is used at all the nodes corresponding to the same variable x_i. If either the positive or the negative Davio expansion is chosen freely for each node, the corresponding tree represents a pseudo Reed-Muller expression (PSDRM) [75].
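The matrix relations behind (10) can be checked numerically. The sketch below (our illustration, not code from the paper) builds R(n) as an n-fold Kronecker product and computes the PPRM coefficient vector A_f = R(n)F over GF(2); since R(n) is self-inverse modulo 2, applying it twice recovers F:

```python
import numpy as np

R1 = np.array([[1, 0],
               [1, 1]])  # basic positive Davio / Reed-Muller matrix R_i(1)

def rm_matrix(n):
    """Positive polarity Reed-Muller matrix R(n) as an n-fold Kronecker
    product of the basic matrix R_i(1)."""
    R = np.array([[1]])
    for _ in range(n):
        R = np.kron(R, R1)
    return R

def pprm_spectrum(F):
    """PPRM coefficients A_f = R(n) F computed over GF(2)."""
    n = len(F).bit_length() - 1
    return rm_matrix(n).dot(F) % 2
```

Applying `rm_matrix(n)` to a spectrum modulo 2 reconstructs the truth-vector, which mirrors reading f back from the pDDT.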
Example 3  Figure 3 shows the PSDRM of a three-variable function where f, f_0, f_{02} and f_{21} use the positive Davio expansion, while the remaining subfunctions use the negative Davio expansion, as shown by the corresponding labels pD and nD of the nodes. Thus, the tree in Figure 3 represents a PSDRM.

Kronecker DTs (KDTs) [16] are defined by freely choosing among the Shannon, positive Davio, and negative Davio expansion rules for each variable. Pseudo-Kronecker DTs (PKDTs) are defined by freely choosing the decomposition rule for each node, irrespective of the other nodes in the DT. KDTs and PKDTs represent Kronecker and pseudo-Kronecker expansions for switching functions [75,79]. The DTs mentioned so far are usually denoted as bit-level DTs, since the values of their constant nodes are the logical values 0 and 1. The matrix notation and the spectral interpretation extend to these DTs as well.
3.6. Further Generalizations of Bit-Level DDs

In bit-level Kronecker DDs, at each node either the S, pD, or nD expansion is freely chosen. These expansions can be expressed by (2 x 2) matrices. Quaternary DDs (QDDs) [77] are introduced by allowing at each node an expansion described by an arbitrary (4 x 4) matrix. Thus, QDDs are used to represent functions of four or, more generally, an even number of variables.
In [60], a generalization is performed by allowing both (2 x 2) and (r x r) matrices, r < n, in the same DD. The case when all the matrices are of the same order r = 4 is included as a particular example. A restriction assumed in [60] is that the columns of the (r x r) matrices used must be described by products of either positive or negative literals of a subset of r switching variables. These DDs are denoted as Generalized Kronecker DDs (GKDDs). The function expressions derived from these DDs include Kronecker expressions and generalized Reed-Muller expressions as particular cases.
Expressed in terms of variables, these generalizations of AND-EXOR-related DDs can be described as follows. In QDDs, expansion rules are performed over four-valued variables X_i recoding pairs of switching variables (x_i, x_j). In [60], an extension to subsets of an arbitrary number of variables is allowed, including the singleton set consisting of a single variable.
In a group-theoretic approach to DDs, these methods can be uniformly considered as DDs based on different decompositions of the domain group of order 2^n. In QDDs, all the subgroups are of order four. In Generalized Kronecker DDs, the subgroups are of arbitrary orders. In these and other DDs, as for example lattice DDs [61], the permitted groups are restricted to Abelian groups.
The use of such decompositions assumes nodes with several outgoing edges. A bottleneck is that, in many cases, the reduction of the size increases the width of the DD. The extension to DDs on non-Abelian groups permits a simultaneous reduction of both the size and the width of the DDs [88,90].
Free BDDs [4] are a generalization of BDDs allowing a different order of variables in the paths from the root node to the constant nodes. A different, but not quite arbitrary, order of variables is allowed, since the canonicity of the representation should be kept and the recursive structure of a decision tree should be preserved. The same way of generalization extends to any DD. For example, we can define Free QDDs; Free Generalized Kronecker DDs and other related DDs are described in [61]. DDs defined with respect to arbitrary sets of linearly independent functions over GF(2) are introduced in [58] and further elaborated in a series of papers by this author and his associates ([62] and the references given there).

Word-Level DTs
Word-level DTs are an extension of DTs for switching functions permitting representations of integer-valued and complex-valued functions [79]. Thus, they extend the area of application of DD representations.
We denote the space of such functions by C(C_2^n) and consider the integer-valued functions as a subset of the complex-valued functions. Switching and multiple-valued functions are considered as integer-valued functions, their logical values being formally identified with the corresponding integers. Thus, these functions can also be represented by word-level DDs. In some applications, such representations may be more efficient than bit-level DDs. Further, word-level DDs permit the representation of multiple-output switching functions through their integer-valued equivalents f_Z.
For example, a multiple-output switching function (f_0, f_1, ..., f_{m-1}) is uniquely represented by the integer-valued function [41]

f_Z = \sum_{i=0}^{m-1} 2^i f_i.

In logic design and related areas, the main merit of word-level DDs is that for some classes of switching functions they provide more compact representations than bit-level DDs.
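The integer encoding can be sketched as follows. The 2^i weighting used here is one common convention; the exact weighting of the outputs in [41] may differ, and the function name is ours:

```python
def word_level(outputs):
    """Integer-valued equivalent of a multiple-output switching function:
    f_Z(a) = sum_i 2**i * f_i(a), where outputs[i] is the truth-vector of
    the i-th output f_i (all of equal length 2**n)."""
    m = len(outputs)
    length = len(outputs[0])
    return [sum(outputs[i][a] << i for i in range(m)) for a in range(length)]
```

A two-output example: `word_level([[0, 1, 1, 0], [1, 1, 0, 0]])` packs the two bit-level truth-vectors into a single integer-valued one, which can then be represented by a word-level DD.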
Some word-level DTs are defined by using integer counterparts of the basic functions used in the definition of the bit-level DTs. For example, multi-terminal binary DTs [9] are a generalization of BDTs defined through the decomposition of f with respect to the basis represented by B(n), but with the logical values 0 and 1 replaced by the integers 0 and 1, respectively. Binary moment trees (BMTs) [6] are DTs defined in terms of the integer Reed-Muller basis derived in the same way from R(n).
Therefore, the decomposition rule applied at the nodes of BMTs is described by the matrix

A_i(1) = \begin{bmatrix} 1 & 0 \\ -1 & 1 \end{bmatrix},

which is the inverse of R_i(1) over the complex field C. This observation shows a way to assign a word-level DD to each bit-level DD, as mentioned above.
In a bit-level DD, the values of the basic functions represented by the columns of the matrix Q are interpreted as integers. Then, the matrix Q^{-1} is calculated as the inverse of Q over C [91,94]. The matrix A_i(1) is the basic transform matrix in the arithmetic representations of switching functions. Arithmetic transform DDs (ACDDs) are defined with respect to the same expansion rule as BMDs, but differ from BMDs in the reduction rules [94].
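The inverse relation between R_i(1) and A_i(1), and the arithmetic spectrum it produces, can be verified directly with a small numerical sketch (our illustration; variable names are ours):

```python
import numpy as np

R1 = np.array([[1, 0], [1, 1]])    # basic Reed-Muller matrix R_i(1)
A1 = np.array([[1, 0], [-1, 1]])   # basic arithmetic transform matrix A_i(1)

def kron_n(M, n):
    """n-fold Kronecker product of a (2 x 2) basic matrix."""
    out = np.array([[1]])
    for _ in range(n):
        out = np.kron(out, M)
    return out

# A_i(1) is the inverse of R_i(1) over C
assert (A1 @ R1 == np.eye(2)).all()

F = np.array([1, 0, 0, 1, 0, 1, 1, 1])   # truth-vector taken as integers
arith = kron_n(A1, 3) @ F                # arithmetic spectrum of f
```

The coefficients in `arith` are the values of the constant nodes of the corresponding BMT/ACDD for this truth-vector.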
As in the case of bit-level DDs, different expansion rules produce different DDs. For example, Walsh DTs (WDTs) [94] are defined in terms of the decomposition rule described by the matrix

W_i(1) = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix},

which is derived from A_i(1) by the (0,1) \to (1,-1) coding of switching variables.
Besides integer counterparts of bit-level DDs, various word-level DDs can be introduced by using different spectral transforms. For example, Complex Hadamard DTs (CHDTs) [25,27,67] are defined by using decomposition rules described by a basic (2 x 2) matrix H_i(1) with complex entries, used in the definition of one of the possible complex Hadamard transforms (CHTs) [26,28,65].
Even though the family of CHTs and the corresponding DDs were introduced only recently [28,65], there have already been applications of CHTs in binary logic design, discussed below, which justify the usage of complex-valued transforms and the corresponding spectra formed from Gaussian integers rather than the standard integers. It should also be noticed that when a CHT matrix with the half-spectrum property is selected [26,28,65], only half of the spectral transform coefficients need to be calculated and operated on. In such a case, there is no overhead in calculating the half spectrum using Gaussian integer arithmetic versus calculating the full spectrum by integer-valued arithmetic for standard integer-valued transforms. For example, CHTs with the half-spectrum property were found very advantageous in the classification and synthesis/analysis of logic circuits [29,67,68], as well as in some signal processing problems [69].
To perform such a classification, only spectral summation and the examination of the real and imaginary parts of half of the complex coefficients are necessary. This classification method is more efficient than that based on the Walsh transform, since it does not require any computation once half of the complex Hadamard spectrum and its summation are obtained. The complex spectral coefficients are manipulated according to a lookup flow diagram to identify classes of linearly separable and NPN-equivalent functions. Some classes of functions can be identified extremely fast by checking the values of only a few selected spectral coefficients.
Another important application of CHTs developed recently is the identification of Boolean symmetries. The detection of symmetries has been an important issue, since this knowledge leads to a more efficient realization of the function. The problem of symmetries is important in many applications, for example in Boolean matching, cell library design, testing, and computing arithmetic and related functions using threshold circuits [13]. Different approaches have been tried for the identification of Boolean symmetries. They may be grouped into classical and spectral approaches. Classical approaches usually require decomposition charts, truth tables, and similar large data structures [13], while spectral approaches are based on arithmetic manipulations of some subsets of spectral coefficients [41,42,49]. Since there are efficient methods for calculating the Walsh spectrum of Boolean functions from decision diagrams or disjoint cubes [10,30], the second approach is, in general, more viable from the implementation point of view. When the CHT is used in the identification of symmetries in Boolean functions, only a few spectral coefficients are required (in the worst case, half of the spectrum is needed) [66]. These few complex coefficients may be calculated directly from representations of Boolean functions (such as DDs or disjoint cubes), and represented very efficiently by their own complex decision diagrams. It was shown in [65] that the CHT represents a mapping of 4-valued integers into the unit circle of the complex plane. As such, it is well suited to processing not only binary, but also MV functions. The analysis in [65] has shown that some CHTs may be considered as systems of complex Walsh functions [99], while others become q-valued Vilenkin-Chrestenson functions for q = 2 or 4 [41,99,104]. It is then obvious that CHTs can be used in the different applications of complex Walsh functions and Vilenkin-Chrestenson functions in the processing of MV functions, especially for the case of 4-valued functions. Much work has already been done for the Vilenkin-Chrestenson transform, for example the characterization of ternary threshold functions [53], the development of measures of the dependence of MV functions on linear logic functions [54], and the disjoint spectral translation that allows extending the possibility of low-complexity realization to a large class of MV functions [52]. Similar results can be obtained for those CHTs that differ from the Vilenkin-Chrestenson functions. The methods should be computationally more effective, since Vilenkin-Chrestenson functions do not possess the half-spectrum property.
Various DTs are defined by using different sets of basic functions. Further generalizations are achieved by introducing additive [45,102] or multiplicative coefficients [6], or both additive and multiplicative coefficients [38,46,102]. The use of nodes with a greater number of outgoing edges [77] permitted the extension of DD representations to MV functions [49,50,86] and other discrete functions on not necessarily Abelian groups [88]. See [96] for a unified interpretation and classification of various DDs.
A merit of introducing different DDs is that they may be useful in some applications where other DDs are inconvenient. In Section 4, an example and a brief discussion of that subject are given.

Spectral Interpretation of DTs
The spectral interpretation shown for BDTs (Subsection 3.2), PRMDTs (Subsection 3.4), and PKDTs (Subsection 3.5.1), and used in Subsection 3.6.1 to introduce examples of some particular DTs, extends to other DTs [12,17,86,94,95]. In this interpretation, a given function f is assigned to a DT through the decomposition with respect to a set Q of basic functions. Each path from the root node to a constant node corresponds to a basic function determined by the product of the labels at the edges (Example 4). As noted in this example, if these functions are represented as columns of a matrix Q, then the values of the constant nodes are the Q-spectrum S_{Q,f} of f defined by

S_{Q,f} = Q^{-1} F,   (11)

where Q^{-1} is the matrix inverse of Q and F is the truth-vector of f.
With this interpretation, two basic points about DT representations of discrete functions read as follows:
1. Given a function f. To represent it by a DT defined in terms of a basis Q, we calculate the Q-spectrum S_{Q,f} of f, i.e., we perform the direct Q-transform of f.
2. Given a DT defined in terms of a basis Q. To read f represented by this DT, we perform the inverse Q-transform, starting from the constant nodes in the DT.
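The two points above amount to one matrix-vector product each. A generic sketch, assuming Q is given explicitly as an invertible matrix whose columns are the basic functions (names are ours):

```python
import numpy as np

def q_spectrum(Q, F):
    """Direct Q-transform: the constant nodes of the DT carry
    S_Qf = Q^{-1} F (solved rather than explicitly inverting Q)."""
    return np.linalg.solve(np.asarray(Q, dtype=float), np.asarray(F, dtype=float))

def read_function(Q, S):
    """Inverse Q-transform: recover the truth-vector as F = Q S_Qf."""
    return np.asarray(Q, dtype=float) @ np.asarray(S, dtype=float)
```

For n = 1 and the Walsh basis Q = [[1, 1], [1, -1]] (columns 1 and 1 - 2x), `q_spectrum` yields the constant nodes of the WDT and `read_function` reads f back from them.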
The composition of any transform defined by a matrix Q with the identical transform defined by B(n) equals Q. Therefore, if we perform the identical transform over the DT, starting from the constant nodes, then we read at the root node the Q-spectrum of f. In DTs, this means that the nodes corresponding to the Q-transform are formally replaced by Shannon nodes and, consequently, the labels at the edges are replaced by those used in Shannon DTs. This consideration permits deriving the following remark, illustrated in Figures 4 and 5 for n = 3.
Remark 1  A DT defined in terms of a basis Q represents f and, at the same time, the Q-spectrum of f, depending on the interpretation of the nodes and the labels at the edges in the DT.
The following example illustrates Remark 1. For simplicity, the example is given for n = 2.
Example 5  Figure 6 shows the WDT for a two-variable switching function f(x_1, x_2) represented by the truth-vector F = [f(0), f(1), f(2), f(3)]^T. In this WDT, the values of the constant nodes are the Walsh spectral coefficients. In WDDs, the expansion rule used in the nodes is derived from the basic Walsh matrix W(1). This rule determines the labels at the edges as 1 and (1 - 2x_i). The basic Walsh matrix is self-inverse up to the constant 1/2. Thus, by following the labels at the edges starting from the constant nodes, which are the Walsh coefficients, we perform the inverse Walsh transform and read f at the root node of the WDT:

f = \frac{1}{4}\left( \left( S_f(00) + (1 - 2x_2)S_f(01) \right) + (1 - 2x_1)\left( S_f(10) + (1 - 2x_2)S_f(11) \right) \right).

Assume now that in the nodes of the WDT we perform the rule derived from the inverse Walsh transform. Then we perform the identical mapping, expressed as the composition of mappings W^{-1}(1)W(1) = I(1), since the constant nodes were determined by using the rule derived from the direct Walsh transform.
In the WDT, this means that the Walsh nodes are formally replaced by Shannon nodes. Correspondingly, the labels 1 and (1 - 2x_i) at the edges are replaced by the labels \bar{x}_i and x_i. Thus, we read the Walsh spectrum S_f at the root node of the WDT:

S_f = \bar{x}_1\bar{x}_2 S_f(0) + \bar{x}_1 x_2 S_f(1) + x_1\bar{x}_2 S_f(2) + x_1 x_2 S_f(3),

where, for example, S_f(0) = f(00) + f(01) + f(10) + f(11).

4. DECISION DIAGRAMS

Decision diagrams (DDs) are derived by the reduction of DTs. The reduction is performed by deleting or sharing redundant nodes in the DT. Depending on the decomposition rules used at the nodes (the choice of the matrices Q_i), the reduction is done by using BDD reduction rules [75], zero-suppressed BDD (ZBDD) reduction rules [51], or generalized BDD reduction rules [94]. A DD is reduced (RDD) if further reduction with the same reduction rules is impossible. A DD is ordered if the variables of f assigned to the levels in the DD appear in a fixed order.
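The sharing and deletion rules that reduce a DT into a DD can be sketched with a hash table that detects isomorphic subtrees (a minimal illustration for Shannon trees; the data layout and names are ours):

```python
def reduce_dd(F):
    """Reduce the complete Shannon tree of truth-vector F into an ordered DD
    by (a) sharing isomorphic subtrees and (b) deleting nodes whose two
    successors coincide. Returns (root id, node table); ids 0 and 1 are the
    constant nodes, other nodes map id -> (variable, low, high)."""
    table = {}      # (var, lo, hi) -> node id, implements the sharing rule
    nodes = {}      # node id -> (var, lo, hi)
    next_id = [2]

    def build(sub, var):
        if len(sub) == 1:
            return sub[0]                     # constant node 0 or 1
        half = len(sub) // 2
        lo = build(sub[:half], var + 1)
        hi = build(sub[half:], var + 1)
        if lo == hi:                          # deletion rule
            return lo
        key = (var, lo, hi)
        if key not in table:                  # sharing rule
            table[key] = next_id[0]
            nodes[table[key]] = key
            next_id[0] += 1
        return table[key]

    return build(list(F), 1), nodes
```

For F = [0, 0, 1, 1] (f = x_1) both x_2-nodes are deleted, leaving a single node; for F = [1, 0, 0, 1] no node is redundant and three non-terminal nodes remain.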
A DD is characterized by the number of levels (the depth of the DD), the maximal number of non-terminal nodes per level (the width of the DD), and the total number of nodes (the size of the DD).
The complexity of a DD expressed through these parameters determines the applicability of the DD and is often a limiting factor in practical applications within given hardware and software resources. This is at the same time a justification for the consideration of many different DDs.
For example, the representation of n-bit multipliers by BDDs is impossible for large n within a reasonable node limit of, for example, 100,000 nodes [15]. However, ACDDs and WDDs are efficient in that case.
The number of nodes needed to represent an n-bit multiplier approximates O(4^n) for MTBDDs, and O(n^2) for ACDDs and WDDs [94]. Some attempts to solve the problem of size reduction are made by introducing DDs with attributed edges. In particular, a further reduction of the size of DDs for multipliers is possible by using DDs with attributed edges, as for example *BMDs [6] and K*BMDs [15]. A BMT for f is reduced into a BMD for f if there are arithmetic spectral coefficients of f with equal values, since they are the values of constant nodes in the BMT. *BMTs are a generalization of this concept, using identical factors in the values of the arithmetic spectral coefficients of f to achieve more compact DDs. The concept is explained by the following example.
Example 7  The arithmetic spectrum for f given by the vector F = [8, 12, 10, 6, 20, 24, 37, 45]^T is A_f = [8, 20, 2, 4, 12, 24, 15, 0]^T. These values can be factored. To produce a *BMD for f, the first and second factors are moved to the first and the second level in the *BMD. The third factors are kept as the values of the constant nodes. The final *BMD is shown in Figure 9.
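The factoring step can be sketched with a gcd-based normalization that pulls common multiplicative factors upward level by level. This is a simplified illustration of the idea only; the actual *BMD normalization rules in [15] differ in detail, and the names are ours:

```python
from math import gcd

def extract_weights(vec):
    """Recursively pull common multiplicative factors out of an arithmetic
    spectrum, bottom-up: returns (weight, normalized vector) with
    vec == weight * normalized. Zero entries leave the weight determined
    by the remaining entries."""
    if len(vec) == 1:
        v = vec[0]
        return (abs(v) if v else 1), [1 if v > 0 else (-1 if v < 0 else 0)]
    half = len(vec) // 2
    w0, lo = extract_weights(vec[:half])     # weight of the low subtree
    w1, hi = extract_weights(vec[half:])     # weight of the high subtree
    w = gcd(w0, w1)                          # common factor moves up a level
    return w, [x * (w0 // w) for x in lo] + [x * (w1 // w) for x in hi]
```

For the pair [8, 20] the common factor 4 is extracted, leaving [2, 5]; multiplying the extracted weight back in always reconstructs the original spectrum.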
*BMDs are DDs with multiplicative attributes assigned to the edges. The decomposition rule applied to the nodes in *BMDs is defined by w_i A_i(1), where w_i is the multiplicative weight coefficient assigned to the nodes at the i-th level in the *BMD for f. Edge-valued binary DDs (EVBDDs) [45] are also DDs related to the arithmetic transform [87]. In EVBDDs, additive weight coefficients are used.
K*BMDs are a generalization of *BMDs, where the role of the arithmetic transform is taken over by various Kronecker transforms [15]. In K*BMDs, both additive and multiplicative weight coefficients at the edges are used. The same is done in Factored EVBDDs (FEVBDDs) [45]. We refer to [17] and [91] for further details and examples of these DDs.

FIGURE 9  *BMT for f in Example 7.
It is stated in [15] that the size of a K*BMD for n-bit multipliers is considerably reduced and approximates O(2n). However, in the analysis of the complexity of DDs with attributed edges, it is necessary to take into account the amount of other related information that has to be stored to describe a DD uniquely; for example, both the nodes and the attributes of the edges have to be stored. In DDs with different decomposition rules at the nodes, another problem is the determination of the optimal combination of the decomposition rules for the nodes.
Fourier DDs on finite non-Abelian groups (FNADDs) [88,90,93] use the same decomposition rule for all the nodes. The decomposition rule is derived from the Fourier transform on groups, and advantage is taken of the properties of group representations of non-Abelian groups. As in the case of other DDs with an increased number of outgoing edges, FNADDs permit a reduction of the number of levels in the DD, compared to DDs with two outgoing edges. In DDs on Abelian groups, the reduction of the number of levels by using nodes with several outgoing edges, as for example in QDDs and GKDDs, often increases the width of the DD. By taking advantage of the properties of the Fourier transform on non-Abelian groups, FNADDs permit the reduction of both the width and the size of the DD [88,90]. Table I compares the sizes and widths of shared BDDs [51] and FNADDs [93] for some

benchmark functions. For FNADDs, the numbers of non-terminal nodes and constant nodes are given separately. We also show the decomposition of the domain group of f, since the depth of a FNADD is equal to the number of subgroups. In all DDs, the size strongly depends on the order of the variables in the represented function f.
That means the size of a DD depends on the assignment of nodes to levels in the DD. Different orders of the variables in f produce DDs of different sizes. Thus, in all DDs, irrespective of the used sets of basic functions, optimization by changing the variable ordering may be performed.
Free BDDs are a generalization of the optimization of ordered DDs by variable ordering. The change in variable ordering can be described as the application of permutation matrices to the values of the constant nodes in the decision trees [91]. Thus, we perform a permutation over the truth-vectors of functions, or over spectra derived by the application of some transform matrices to the truth-vectors. Therefore, in the optimization of DDs by variable ordering, we have a composition of at least two mappings, described by the successive application of the spectral transform matrix and the permutation matrix. In Free DDs, the set of allowed permutation matrices is extended compared to that used in the optimization of DDs by variable ordering [91]. In some Free DDs, several permutation matrices may be applied successively to get the DD of the smallest size.
In QDDs, another way of optimizing DDs by manipulating the variable ordering is performed. It consists in pairing variables during the determination of the pairs of recoded variables. It can also be described by permutation matrices. However, the permutation matrices are now applied not to the function values or the related spectra, but to the variables. If we express this in terms of permutation matrices applied to function values, it would be a subset of the permutation matrices allowed in Free QDDs. See [91] for more details on the relationships among permutation matrices and DDs.
A problem is that we do not know how to predict the order of variables for a given f that yields the corresponding DD of the smallest size. In DDs based on the decomposition of the domain group into a product of subgroups of arbitrary orders, the solution of that problem is easy [80]. The subgroups of larger orders should have their variables at the bottom of the DD. If some subgroups are non-Abelian, they should also have their variables at the bottom levels of the DD. However, if the subgroups are all of the same order, we recognize this as the variable ordering problem in DDs.

DDs BASED LOGIC DESIGN
The design of architectures for the realization of f derived from DD representations of f was studied by several authors [48,71,79,103]; see also the discussions and references given there. These realization architectures are based upon multiplexers, Reed-Muller modules, or suitable FPGAs [77,81]. Some recent results in that area are given in [37]. These methods can be easily explained by recalling the spectral interpretation of DTs.
Remark 1 extends to DDs, since the reduction rules are formulated in such a way that they do not reduce the information content in a DD.
The impact of deleted nodes can be simply expressed and taken into account through the cross points [94].

DEFINITION
A cross point in an RDD is a point where a branch of length greater than 1 crosses a level in the DD.² (² A level in the DD consists of the nodes corresponding to a particular variable x_i, i = 1, ..., n. A branch connecting a node at the (i-1)-th level with a node at the (i+1)-th level is of length 2. Note that taking the cross points into account does not extend the RDD into a complete tree.)

With this definition, DD-based design methods can be shortly expressed by the following two rules. It is assumed that an n-variable function f is
represented by an RDD defined with respect to a basis Q. Thus, at the nodes of the DD we use the decompositions described by the sub-matrices Q_i(1). It is also assumed that modules realizing the operations defined by Q_i(1) and Q_i^{-1}(1) are provided.
Design from DDs:
1. To realize f, attach a corresponding Q^{-1}-module to each node and to each cross point in the quasi-reduced QDD.
2. To calculate the Q-spectrum of f, attach a multiplexer to each node in the quasi-reduced QDD.
Note  It is clear that the Shannon modules, i.e., variable-controlled multiplexers, attached to the cross points corresponding to the deleted Shannon nodes can also be deleted. The same applies to the Reed-Muller modules that have both inputs connected to the constant 0. This is possible since the same constant, whether 0 or 1, at both inputs of such a multiplexer produces that constant at the output. The same holds for the constant 0 at both inputs of a two-input Reed-Muller module.
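For Shannon nodes, rule 1 above simply replaces every node by a 2:1 multiplexer computing out = x̄·lo + x·hi. A behavioral sketch of the resulting network, simulated over a hand-built node table (the encoding of nodes is ours, matching no particular tool):

```python
from itertools import product

def mux_network(nodes, root, n):
    """Simulate the multiplexer network obtained by attaching a 2:1 MUX
    (out = ~x*lo + x*hi) to every node of a DD. `nodes` maps
    node id -> (variable index, low successor, high successor);
    ids 0 and 1 denote the constant nodes."""
    def out(node, assignment):
        if node in (0, 1):
            return node                       # constant input of the network
        var, lo, hi = nodes[node]
        return out(hi if assignment[var - 1] else lo, assignment)
    return [out(root, bits) for bits in product((0, 1), repeat=n)]

# BDD of f = x1 XOR x2: two x2-nodes (x2 and ~x2) under the root x1-node
xor_nodes = {2: (2, 0, 1), 3: (2, 1, 0), 4: (1, 2, 3)}
```

Simulating the network over all input assignments reproduces the truth-vector of the function realized by the multiplexer circuit.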
To do the optimization, quasi-reduced QDDs attached to different AND-EXOR expressions can be considered, and the expression requiring the smallest number of nodes can be chosen. The procedure can be applied to DDs for different orders of the variables.
Example 8  Figure 10 shows the reduced ordered BDD of the function f(x_1, x_2, x_3, x_4) given by the truth-vector F = [1001101001011100]^T considered in [77]. The multiplexer-based realization architecture for this function is shown in Figure 11. The procedure produces a network equal to that shown in [77].
Example 9  The realization of the Reed-Muller spectrum S_f of f can be done by the network shown in Figure 12, derived by attaching Reed-Muller modules to the nodes and cross points in the ROBDD.
Note that, if switching variables and their complements can be used directly as the data inputs of a module, considerable savings can be achieved, since the modules having constant inputs can be deleted, as shown in Figures 10 and 11 at the outputs of these modules. The main disadvantage of the presented DD-based design methods is the propagation delay, since the depth of the produced circuit is equal to the number of variables. Therefore, design methods for small-depth circuits have been proposed for BDDs [39] and KDDs [38]. Some other solutions use functional decomposition [7,74].
Lattice DDs [63] are a generalization of BDDs adapted to synthesis with regular layout networks [61]. Such networks usually consist of identical blocks with connections of equal length between blocks. The regular structure of a lattice is expressed through the regular structure of a lattice DD, which can be simply transferred into the regular layout network. Extensions to ternary and quaternary lattice DDs are presented in [63]. Their application to the synthesis of fuzzy logic and analog circuit design is also discussed in [63].

6. CALCULATION OF SPECTRAL TRANSFORMS OVER DDs

BDDs can be efficiently used for the calculation of spectral transforms on finite dyadic groups [8-10,19-22,24,47,65,92]. The method was derived first for Kronecker-product-representable transform matrices. In that case, it consists in the realization of FFT-like algorithms based on the Good-Thomas factorization [35], performed, however, over the BDD of f instead of the fast flow-graph of the FFT-like algorithm. For a given spectral transform, the calculations in an FFT-like algorithm are performed over a fast flow-graph of the same regular structure for all functions of a fixed number of variables. Therefore, the complexity of the algorithm is fixed and does not depend on properties of the processed function. For example, for n-variable switching functions the time complexity of FFT-like spectral transform methods approximates O(n2^n), while the space complexity is O(2^n), assuming that in-place computations are used [8,57,92].
The BDD-based procedures perform the basic "butterfly" operations of the corresponding FFT-like algorithm at the non-terminal nodes and cross points in the BDD of f [92]. In this way, when implementing the FFT over BDDs, the particular properties of f which permit the reduction of nodes in the BDD of f are exploited. Hence, savings in both time and space complexity are achieved, since
1. the calculation is transferred into vector operations at the nodes of the BDD, and
2. repeated calculations over identical parts of the truth-vector of f are avoided, since these parts are removed from the BDD.
Example 10  Figure 13 shows the BDD for a switching function f(x1, x2, x3) given by the truth-vector F = [1, 0, 0, 1, 0, 1, 1, 1]^T. To calculate the Walsh spectrum of f, we perform at each node and cross point of this BDD the operations described by the basic Walsh transform matrix W(1). In matrix notation, the calculation procedure can be described as follows.
The constant nodes are processed first, by applying W(1) at the nodes and cross points at the level corresponding to x3. Then, W(1) is applied at the nodes at the level corresponding to x2, i.e., over the subvectors to which the outgoing edges of these nodes point. Finally, applying W(1) at the root node yields the Walsh spectrum of f.
Sf(w) = W(3)F = [5, -1, -1, 1, -1, 1, 1, 3]^T.

As shown in Figure 14, the procedure generates the MTBDD for the Walsh spectrum of f.
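The matrix notation used in Example 10 can be checked directly: building W(3) as the threefold Kronecker product of W(1) and multiplying it with the truth-vector F must give the same spectrum. A small self-contained sketch (the Kronecker construction is standard; the code itself is illustrative):

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of rows."""
    return [[a * b for a in ra for b in rb]
            for ra in A for rb in B]

W1 = [[1, 1], [1, -1]]          # basic Walsh transform matrix W(1)
W3 = kron(kron(W1, W1), W1)     # W(3) = W(1) (x) W(1) (x) W(1)

F = [1, 0, 0, 1, 0, 1, 1, 1]    # truth-vector from Example 10
Sf = [sum(w * x for w, x in zip(row, F)) for row in W3]
```

Of course, building the full 2^n x 2^n matrix is exactly what the BDD-based procedure avoids; this direct product is practical only for small n.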
Thus formulated, the procedures permit processing of large functions. Tables II and III, taken from [11], show CPU-times for the Reed-Muller and Walsh transforms of some benchmark functions calculated through BDDs. Calculations were performed on a DEC-5000 workstation. It should be noted that DD-based spectral transform calculation procedures for functions with a number of variables sufficiently large for many applications may be performed with acceptable efficiency using simple, easily accessible hardware, such as Programmable Logic Devices or various FPGAs. At the same time, the DD-based methods may be extended to other spectral transforms with recursive, but not entirely Kronecker product representable, matrices [8, 18, 21, 22, 83].
In some cases, further savings may be achieved by exploiting particular properties of a transform. For example, the Haar transform is a local discrete transform and, thanks to that, the calculation of the Haar spectrum of f reduces to the processing of the first elements in the subvectors represented by the nodes of the BDD of f [83]. Therefore, the computational complexity of this procedure is directly proportional to the size of the BDD of f. Thus, in Table IV, showing the calculation times for the Haar spectrum of some benchmark functions, the number of nodes for the Haar spectrum is not given. In this respect, the procedure corresponds to the in-place computations in FFT. Thanks to the linearity of the Haar transform, the calculations may be performed over shared BDDs. The method assumes that integer-valued function values are represented by binary sequences. Thus, the integer-valued function is transferred into a multiple-output switching function and represented by a shared BDD. The nodes in this shared BDD are processed in the same way as BDDs for single-output functions are processed in the calculation of the Haar transform. The sum of the Haar spectra for the separate outputs, weighted with coefficients 2^l, 0 <= l < n, is the required Haar spectrum. In this way further savings may be achieved, resulting in a very fast procedure. Calculations reported in Table III have been done without any optimization of shared BDDs by variable ordering. Calculations were performed on a 133 MHz Pentium PC with 32 Mbytes of RAM. The time to build the BDD of f is not counted.
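The locality of the Haar transform mentioned above can be seen in its truth-vector form: each stage produces pairwise differences (local detail coefficients) and recurses only on the pairwise sums. The following sketch uses one common (non-normalized) coefficient ordering; the paper's ordering convention in [83] may differ:

```python
def haar_spectrum(vec):
    """Non-normalized Haar spectrum of a vector of length 2^n.

    Each stage keeps the pairwise differences as detail coefficients
    and recurses on the pairwise sums; the total is O(2^n) operations,
    i.e., proportional to the data size rather than n * 2^n.
    Coefficient ordering: [total sum, coarsest differences, ...,
    finest differences] -- one of several common conventions.
    """
    coeffs = []
    v = list(vec)
    while len(v) > 1:
        sums  = [v[i] + v[i + 1] for i in range(0, len(v), 2)]
        diffs = [v[i] - v[i + 1] for i in range(0, len(v), 2)]
        coeffs = diffs + coeffs   # finer scales appended after coarser ones
        v = sums                  # recurse on the averaged (summed) half
    return v + coeffs
```

Only the running "sums" vector is ever revisited, which mirrors the observation that the BDD-based Haar procedure needs only the first elements of the subvectors at the nodes.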
In a similar way, the spectral transforms for switching functions can be calculated over Edge-valued DDs [45] and their generalizations [46, 102]. Through Multiple-Place DDs [82] and their integer-valued counterparts, DD-based calculation methods can be extended to spectral transforms for MV and complex-valued functions on any finite, not necessarily Abelian, group [49, 97], and other related algebraic structures [98].

7. CONCLUSION

This paper presented a systematic approach to different decision diagrams used in many applications of Computer Science and Engineering. The spectral interpretation of arbitrary Decision Trees was also shown. Different methods to realize logical circuits directly from Decision Diagrams were discussed. Finally, the advantages of calculating spectral transforms over Decision Diagrams rather than with FFT-like algorithms have been underlined, and experimental data for calculating the three main discrete transforms used in logic design have been given. The presented classification and results are important not only for applications in logic design but also wherever processing of large discrete functions and matrices is necessary [11, 73, 79]. Some such applications are in functional verification [6, 19, 46], simulation [36], functional decomposition [7, 73], technology mapping [43, 73, 77, 79], integer programming [45, 73], and artificial intelligence [79].

FIGURE 4  Reading f from the DT.

FIGURE 5  Reading SQf from the DT.

FIGURE 12  Realization of SU of f from Example 8.

FIGURE 14  Generation of the MTBDD for the Walsh spectrum of f in Example 10.

TABLE  SBDDs and FNADDs for benchmark functions.

TABLE III  CPU-times for the Walsh transform: BDD of f, MTBDD of Sf, Time [sec].