Hardware/software partitioning is one of the most crucial steps in the design of embedded systems, and the performance of a system design depends strongly on the efficiency of the partitioning. In this paper, we construct a communication graph for an embedded system and describe the delay-related constraints and the cost-related objective based on the graph structure. We then propose a heuristic that combines a genetic algorithm with simulated annealing to solve the problem near optimally. We note that the genetic algorithm has a strong global search capability but can become trapped in local optima, whereas the Metropolis criterion of simulated annealing allows occasional uphill moves that escape such optima. Hence, we incorporate the simulated annealing algorithm into the genetic algorithm; the combined algorithm provides more accurate near-optimal solutions at faster speed. Experimental results show that the proposed algorithm produces more accurate partitions than the original genetic algorithm.
1. Introduction
Embedded systems [1–3] are becoming increasingly important because of their wide range of applications. They consist of hardware and software components: a hardware implementation yields faster execution at higher cost, while a software implementation is slower but cheaper. Critical components can therefore be implemented in hardware and noncritical components in software. This kind of hardware/software partitioning can find a good tradeoff between system performance [4] and power consumption [5]. Finding an efficient partition has become one of the key challenges in embedded system design. Traditionally, partitioning was carried out manually. The target system is usually given in the form of a task graph, a directed acyclic graph describing the dependencies among the components of the embedded system. Recently, many research efforts have been undertaken to automate this task; these efforts can be classified by the assumed target architecture and by the algorithmic approach.
On the target architecture side of the partitioning problem, some approaches assume a target consisting of a single software unit and a single hardware unit [6–9], or restrict parallelism among components, while others impose no such limitations.
The family of exact algorithms includes branch and bound [10–12], integer linear programming [6, 7, 13], and dynamic programming [14–16]. These algorithms successfully handle partitioning problems with small inputs, but they become quite slow on large inputs: most formulations of the partitioning problem are NP-hard [17], and exact algorithms have exponential runtime.
In contrast to the exact algorithms, heuristic algorithms are more flexible and efficient, and most current research focuses on them. Traditional heuristics are either hardware oriented or software oriented. Hardware oriented heuristics start with a complete hardware implementation and iteratively move components to software until the given constraints are satisfied [18, 19]; software oriented heuristics start with a complete software implementation and iteratively move components to hardware until the time constraints are met [20, 21]. Many general-purpose heuristics have also been applied to the partitioning problem: simulated annealing [22–24], genetic algorithms [8, 9, 25, 26], tabu search, and greedy algorithms [25, 27, 28] have all been used extensively.
In addition to general-purpose heuristics, some researchers have constructed heuristics that leverage problem-specific domain knowledge and find high-quality solutions rapidly. For example, the authors of [29] define two versions of the original partitioning problem and propose a corresponding algorithm for each: the first converts the problem to finding a minimum cut in an auxiliary graph, and the second runs the first with several different parameters and selects the best partition that fulfills the given limit. Another example is presented in [30], where the partitioning problem is reduced to a variation of the knapsack problem and solved by searching a one-dimensional solution space with three greedy-based algorithms, instead of the two-dimensional search of [29]; this strategy reduces time complexity without loss of accuracy. Other researchers address the fact that the cost and time of system components cannot be determined accurately at the design stage; modeling them as subjective probabilities, they apply this theory to system-level partitioning [31–33].
Most of these algorithms work well within their own codesign environments. In this paper, we construct a communication graph in which the implementation cost, execution time, and communication time are all taken into account. We build a mathematical model on this communication graph and solve it with an enhanced heuristic method that incorporates simulated annealing into a genetic algorithm to improve the accuracy and speed of the original genetic algorithm. Simulation results show that the new algorithm produces more accurate partitions, faster, than the original genetic algorithm.
This paper is organized as follows. Section 2 introduces background on the genetic algorithm and simulated annealing. Section 3 presents the constructed communication graph and the proposed mathematical model of the partitioning problem. Section 4 presents the method that incorporates simulated annealing into the genetic algorithm for the partitioning model. Section 5 gives experimental results comparing the original genetic method and the combined method. Finally, Section 6 concludes the paper.
2. Background
This section provides some detailed notations and definitions of genetic algorithm and simulated annealing algorithm.
2.1. Simulated Annealing
Simulated annealing is a generic probabilistic metaheuristic for global optimization, locating a good approximation to the global optimum of a given function. It was proposed by Kirkpatrick et al. [34], based on the analogy between solid annealing and combinatorial optimization. In condensed matter physics, annealing involves heating a material and then cooling it in a controlled way.
Before running the simulated annealing algorithm, we need to choose an initial temperature. After the initial state is generated, the two most important operations, Generation and Acceptation, can be performed.
Then, the algorithm reduces the temperature. The iteration process continues until a certain condition is met; for example, a good approximation to the global optimum of the given function has been found. The algorithm is shown in Algorithm 1.
Algorithm 1: Annealing algorithm.
(1) Initialize the parameters of the annealing algorithm;
(2) Randomly generate an initial state as the current_state;
(3) K≔ 1;
(4) while (system has not been frozen) do
(5) while (equilibrium at Tk has not been reached) do
(6) call generation strategy for the next_state_j;
(7) call acceptation strategy to decide whether to replace the current_state with next_state_j;
(8) end while
(9) reduce the temperature Tk; K ≔ K + 1;
(10) end while
(11) return current_state;
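The annealing loop above can be sketched in Python as follows; `energy` and `neighbor` are problem-specific placeholders, and the geometric cooling schedule with ratio 0.95 is an illustrative assumption rather than a choice fixed by the paper.

```python
import math
import random

def simulated_annealing(energy, neighbor, initial_state,
                        t0=100.0, alpha=0.95, t_min=1e-3, steps_per_temp=50):
    """Minimize `energy` starting from `initial_state` (a sketch of Algorithm 1).

    `neighbor(state)` plays the role of the Generation strategy; the
    Metropolis test below plays the role of the Acceptation strategy.
    """
    current, best, t = initial_state, initial_state, t0
    while t > t_min:                        # "system has not been frozen"
        for _ in range(steps_per_temp):     # equilibrium loop at temperature t
            candidate = neighbor(current)
            delta = energy(candidate) - energy(current)
            # Always accept improvements; accept degradations with prob exp(-delta/t)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if energy(current) < energy(best):
                    best = current
        t *= alpha                          # geometric cooling schedule
    return best
```

For instance, minimizing f(x) = x² over the integers with ±1 moves drives the state toward 0.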
2.2. Genetic Algorithm
A genetic algorithm is a search heuristic that mimics the process of natural evolution. The basic principles of genetic algorithms were laid down by Holland [35] and have proved useful in a variety of search and optimization problems. Genetic algorithms are based on the survival-of-the-fittest principle, which tries to retain more genetic information from generation to generation. A genetic algorithm is organized as a reproductive plan that provides a framework for representing the pool of genotypes of a generation. After successful genotypes are selected from the previous generation, genetic operators such as crossover, mutation, and inversion create the offspring of the next generation. Whenever individuals perform better than average, their genetic information is reproduced more often.
Before running the genetic algorithm, we need to generate an initial population and define a fitness function. Each individual of the initial population is a binary string corresponding to a dedicated encoding. The initial population is usually generated randomly. We evaluate each individual with the fitness function. The fitness of each individual is defined as fi/f̄, where fi is the evaluation of individual i and f̄ is the average evaluation of all individuals. Then, the three most important operators, Selection, Crossover, and Mutation, can be performed on the current generation.
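To make the fitness definition concrete, the following sketch computes the relative fitness fi/f̄ of a population from its raw evaluations, together with roulette-wheel selection probabilities (the proportional selection scheme is an assumed standard choice, not one specified by the paper):

```python
def relative_fitness(evals):
    """Relative fitness f_i / f_bar, where f_bar is the population average."""
    f_bar = sum(evals) / len(evals)
    return [f / f_bar for f in evals]

def selection_probabilities(evals):
    """Roulette-wheel selection: individual i is chosen with probability f_i / sum(f)."""
    total = sum(evals)
    return [f / total for f in evals]
```

An individual whose evaluation equals the population average thus has relative fitness exactly 1.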
Then, we evaluate the individuals of the next generation with the fitness function, deciding whether to stop or to continue performing the three operations. The evolution process continues until a certain condition is met; for example, the fitness of the best individual no longer improves. Finally, the algorithm returns the best individual of the latest generation as the solution. The algorithm is shown in Algorithm 2.
Algorithm 2: Genetic algorithm.
(1) Initialize the parameters of the genetic algorithm;
(2) Randomly generate the old_population;
(3) generation≔1;
(4) while (generation≤max_generation) do
(5) clear the new_population;
(6) compute fitness of individuals in the old_population;
(7) copy the individual with the highest fitness;
(8) while (individual_number<population_size) do
(9) Select two parents from the old_population;
(10) Perform the crossover to produce two offspring;
(11) Mutate each offspring based on mutation_rate;
(12) Place the offspring to new_population;
(13) end while
(14) Replace the old_population by the new_population;
(15) end while
(16) return new_solution with the best fitness;
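A minimal sketch of Algorithm 2 for binary-string individuals follows; the one-point crossover, per-bit mutation, and roulette-wheel selection are standard operators assumed here, `fitness` is problem-specific and assumed nonnegative, and the parameter values are illustrative.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=20, max_generation=100,
                      crossover_rate=0.9, mutation_rate=0.02):
    """Maximize `fitness` over binary strings of length n_bits, as in Algorithm 2."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)[:]
    for _ in range(max_generation):
        new_pop = [best[:]]                       # elitism: carry the best individual over
        while len(new_pop) < pop_size:
            # Fitness-proportional (roulette-wheel) selection of two parents
            weights = [fitness(g) for g in pop]
            p1, p2 = random.choices(pop, weights=weights, k=2)
            c1, c2 = p1[:], p2[:]
            if random.random() < crossover_rate:  # one-point crossover
                cut = random.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (c1, c2):                # independent per-bit mutation
                for i in range(n_bits):
                    if random.random() < mutation_rate:
                        child[i] ^= 1
                new_pop.append(child)
        pop = new_pop[:pop_size]
        cand = max(pop, key=fitness)
        if fitness(cand) > fitness(best):
            best = cand[:]
    return best
```

On the one-max problem (maximize the number of 1-bits), the sketch quickly evolves near-all-ones strings.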
3. Problem Formulation
This section provides the formal definition of the partitioning problem, including the constructed communication graph structure, formal notations, and mathematical model.
3.1. Problem Definition
While preserving the dependencies among the system task modules, we build a graph structure to represent the real-world system. The communication graph can be constructed through the following steps.
Step 1. Determine the boundary of the system to be partitioned, identify the main task modules within this boundary, and describe the data signal flow through these task modules. This can be accomplished by consulting the design documents and the designers, implementers, and deployers of the system. A simple example is shown in Figure 1.
Step 2. Construct the communication graph structure for the presented system. We map each basic task module to a node. The connections identified in step 1 are regarded as causal or dependency correlations caused by data communication, and an arc is added between two nodes if the corresponding basic task modules are connected. This can easily be done based on the model constructed in the previous step. The constructed communication graph structure for the system model is shown in Figure 2.
Constructed task module of a given system.
Constructed graph structure for the system to be partitioned.
Based on the communication graph structure, we can formalize the problem as follows. The communication graph is denoted G(V, E), where V is the set of nodes {v1, v2, …, vn} and E is the set of edges {eij | 1 ≤ i, j ≤ n}. We need to attach cost values and execution times to each node, as well as communication times to each edge. The following notations are defined on V and E.
hi denotes the cost of node i in hardware implementation, and si denotes the cost of node i in software implementation.
tih denotes the execution time of node i in hardware implementation, and tis denotes the execution time of node i in software implementation.
cij denotes the communication time between nodes i and j. The value of cij applies when the two nodes are implemented in different ways.
The partitioning problem is to find a bipartition P = (Vh, Vs) such that Vh ∪ Vs = V and Vh ∩ Vs = ∅. The partitioning problem can be represented by a decision vector x = (x1, x2, …, xn) describing the implementation of the n task modules, where xi = 1 if task module i is implemented in software and xi = 0 if it is implemented in hardware. Three kinds of optimization and decision problems are defined on hardware/software partitioning.
(Q1) H0 is the given hardware constraint. Find a HW/SW partition P such that H(x) ≤ H0 and T(x) is minimal.
(Q2) T0 is the given execution time constraint. Find a HW/SW partition P such that T(x) ≤ T0 and H(x) is minimal.
(Q3) H0 and T0 are the given hardware and execution time constraints, respectively. Find a HW/SW partition P such that H(x) ≤ H0 and T(x) ≤ T0.
It has been proved that Q1 and Q2 are NP-hard and Q3 is NP-complete [36]. In this paper, HW/SW partitioning is performed according to the Q2 type.
3.2. Mathematical Model
As described in Section 1, a partition is characterized by two metrics: cost and time. The cost comprises hardware cost and software cost and represents the resources consumed by the hardware or software implementation of each task module. The time comprises the execution time of each task module and the communication time between task modules.
Based on the definitions of the previous subsection, the hardware cost H(x) of the partition P(x) and the total time metric T(x) can be formalized as follows:
$$H(x)=\sum_{i=1}^{n} h_i\,(1-x_i),\qquad T(x)=\sum_{i=1}^{n}\bigl[t_i^{s}x_i+t_i^{h}(1-x_i)\bigr]+\sum_{i=1}^{n-1}\sum_{j=i+1}^{n} c_{ij}\,(x_i-x_j)^2. \tag{1}$$
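Equation (1) maps directly to code. The sketch below evaluates H(x) and T(x) for a decision vector x, with xi = 1 meaning software and xi = 0 meaning hardware (as the cost terms imply); the communication matrix c is assumed to be given in upper-triangular form.

```python
def hardware_cost(x, h):
    """H(x) = sum_i h_i * (1 - x_i): cost of the nodes left in hardware."""
    return sum(hi * (1 - xi) for hi, xi in zip(h, x))

def total_time(x, ts, th, c):
    """T(x): execution time of every node plus communication time between
    every pair of nodes implemented differently; (x_i - x_j)^2 equals 1
    exactly when nodes i and j are on different sides of the partition."""
    n = len(x)
    exec_time = sum(ts[i] * x[i] + th[i] * (1 - x[i]) for i in range(n))
    comm_time = sum(c[i][j] * (x[i] - x[j]) ** 2
                    for i in range(n - 1) for j in range(i + 1, n))
    return exec_time + comm_time
```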
Based on the formalization of the two metrics and the given constraint M on execution time, the partitioning problem can be modeled as the following optimization problem:
$$(\mathrm{P1})\qquad \min\ H(x)\quad \text{subject to}\quad T(x)\le M,\quad x\in\{0,1\}^n,$$
which can be simplified to the problem (P2) presented below:
$$(\mathrm{P2})\qquad \max\ \sum_{i=1}^{n} h_i x_i\quad \text{subject to}\quad \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} c_{ij}\,(x_i-x_j)^2+\sum_{i=1}^{n}\bigl(t_i^{s}-t_i^{h}\bigr)x_i\le M-\sum_{i=1}^{n} t_i^{h},\quad x\in\{0,1\}^n.$$
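The equivalence of (P1) and (P2) can be verified by brute force on a small instance: since H(x) = Σhi − Σhixi, minimizing H over the feasible set is the same as maximizing Σhixi, and the rewritten constraint only moves the constant term Σtih to the right-hand side. A small sanity check, with instance values chosen arbitrarily:

```python
from itertools import product

def check_p1_p2_equivalence(h, ts, th, c, M):
    """Enumerate x in {0,1}^n and check that (P1) and (P2) have the same
    feasible set and the same optimal decision vector."""
    n = len(h)
    feasible = []
    for x in product((0, 1), repeat=n):
        exec_time = sum(ts[i] * x[i] + th[i] * (1 - x[i]) for i in range(n))
        comm_time = sum(c[i][j] * (x[i] - x[j]) ** 2
                        for i in range(n - 1) for j in range(i + 1, n))
        p1_ok = exec_time + comm_time <= M                      # T(x) <= M
        p2_ok = (comm_time + sum((ts[i] - th[i]) * x[i] for i in range(n))
                 <= M - sum(th))                                # rewritten constraint
        assert p1_ok == p2_ok                                   # identical feasible sets
        if p1_ok:
            feasible.append(x)
    def H(x):
        return sum(h[i] * (1 - x[i]) for i in range(n))
    def obj2(x):
        return sum(h[i] * x[i] for i in range(n))
    # Minimizing H(x) is equivalent to maximizing sum(h_i x_i)
    return min(feasible, key=H) == max(feasible, key=obj2)
```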
4. Algorithm
In this section, we propose two algorithms, based on the genetic algorithm and the simulated annealing algorithm introduced in Section 2, to solve the partitioning problem (P2).
4.1. Initial Algorithm
We apply the genetic algorithm to the partitioning problem to find an approximate optimal solution of problem (P2). The pseudocode in Algorithm 3 describes the algorithm. Steps (1)–(4) initialize the parameters and the solution of the partitioning problem. Step (5) checks whether the termination condition of the propagation is met. Step (6) ensures that the number of individuals in the next generation is not reduced. The crossover and mutation operations are performed in the iteration block to produce the individuals of the next generation. The fitness function is defined on the objective function of problem (P2). We adopt the crossover and mutation strategies from [36].
Algorithm 3: Heuristic algorithm.
(1) Encode the parameters for the partitioning problem;
(2) Initialize the first generation P0;
(3) Calculate the fitness of each individual in P0;
(4) Copy the individual with the highest fitness to the solution;
(5) while (termination conditions) do
(6) while (number of individuals ≤ generation size) do
(7) Select two individuals (g1,g2);
(8) Perform crossover on (g1,g2)→(g1′,g2′);
(9) if (max{fitness(g1′), fitness(g2′)}≤ max{fitness(g1), fitness(g2)}) then
(10) Reject the crossover with g1′=g1, g2′=g2;
(11) else
(12) Accept the crossover;
(13) end if
(14) Perform mutation on g1′ to produce ng1;
(15) if (fitness(ng1) ≤ fitness(g1′)) then
(16) Reject the mutation, ng1=g1′;
(17) else
(18) Accept the mutation;
(19) end if
(20) Perform the above steps on g2′ to produce ng2;
(21) end while
(22) Calculate the fitness of each individual;
(23) if (the highest fitness ≥ fitness(solution)) then
(24) Copy the individual with the highest fitness to the solution;
(25) end if
(26) increase the generation number;
(27) end while
(28) return solution: x[i], i∈[1,n];
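The accept/reject rules in Algorithm 3 are purely greedy: a crossover result is kept only if the better offspring beats the better parent, and a mutant is kept only if it strictly improves fitness. A minimal sketch of these rules (`fitness` is problem-specific and assumed here):

```python
def greedy_crossover_accept(g1, g2, g1p, g2p, fitness):
    """Steps (9)-(13): keep the offspring pair only if its better member
    beats the better of the two parents; otherwise revert to the parents."""
    if max(fitness(g1p), fitness(g2p)) <= max(fitness(g1), fitness(g2)):
        return g1, g2          # reject the crossover
    return g1p, g2p            # accept the crossover

def greedy_mutation_accept(g, mutated, fitness):
    """Steps (15)-(19): keep the mutant only if it strictly improves fitness."""
    return mutated if fitness(mutated) > fitness(g) else g
```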
4.2. Improved Algorithm
We note that the genetic algorithm has a strong global search capability but can become trapped in local optima, whereas the Metropolis acceptance criterion of simulated annealing allows occasional uphill moves that escape such optima. Hence, we incorporate the simulated annealing algorithm into the genetic algorithm, expecting the combined algorithm to provide more accurate near-optimal solutions at faster speed. The pseudocode in Algorithm 4 shows the algorithm.
Algorithm 4: Combined heuristic algorithm.
(1) Encode the parameters and solution for the partitioning problem;
(2) Initialize the first generation P0, temperature T0, annealing ratio α;
(3) Calculate the fitness of each individual in P0;
(4) Copy the individual with the highest fitness to the solution;
(5) while (termination conditions) do
(6) while (number of individuals ≤ generation size) do
(7) Select two individuals (g1,g2) from the current generation;
(8) Perform crossover on (g1,g2) to produce two new individuals (g1′,g2′); /* start of annealing-crossover*/
(9) if (max{fitness(g1′), fitness(g2′)} ≤ max{fitness(g1), fitness(g2)}) then
(10) ΔC = (max{fitness(g1), fitness(g2)} − max{fitness(g1′), fitness(g2′)});
(11) if (min{1, exp(−ΔC/Tk)} ≥ random[0,1)) then
(12) Accept the crossover;
(13) else
(14) Reject the crossover with g1′=g1, g2′=g2;
(15) end if
(16) else
(17) Accept the crossover;
(18) end if /* end of annealing-crossover*/
(19) Perform mutation on g1′ to produce ng1; /* start of annealing-mutation*/
(20) if (fitness(ng1) ≤ fitness(g1′)) then
(21) ΔC = (fitness(g1′) − fitness(ng1));
(22) if (min{1, exp(−ΔC/Tk)} ≥ random[0,1)) then
(23) Accept the mutation;
(24) else
(25) Reject the mutation, ng1=g1′;
(26) end if
(27) else
(28) Accept the mutation;
(29) end if /* end of annealing-mutation*/
(30) Perform steps (19)–(29) on g2′ to produce ng2;
(31) end while
(32) Calculate the fitness of each individual in current generation;
(33) if (the highest fitness of the current generation ≥ fitness(solution)) then
(34) Copy the individual with the highest fitness to the solution;
(35) end if
(36) Reduce the temperature and increase the generation number;
(37) end while
(38) return solution: x[i], i∈[1,n];
Steps (8)–(18) are the original crossover operation incorporated with the Metropolis criterion of the annealing algorithm. The key idea is that when the crossover produces better individuals, it is accepted; otherwise, the new individuals may still be accepted as candidates for the next generation according to the Metropolis criterion. Steps (19)–(29) apply the same scheme to the mutation operation. The modified genetic operators ensure that the next generation improves on the current generation under acceptance rules based on fitness and the Metropolis criterion; these rules speed up the convergence of the solution process without loss of accuracy. Steps (32)–(36) update the solution, the generation number, and the temperature.
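The Metropolis rule shared by the annealing-crossover and annealing-mutation steps can be isolated in a single function; here ΔC ≥ 0 denotes the fitness degradation, so worse moves are accepted with probability exp(−ΔC/Tk), which shrinks as the temperature falls.

```python
import math
import random

def metropolis_accept(delta_c, temperature):
    """Accept a fitness degradation delta_c (>= 0) with probability exp(-delta_c / T).

    Improvements (delta_c <= 0) are always accepted. As the temperature falls,
    the rule approaches the purely greedy accept/reject of Algorithm 3.
    """
    if delta_c <= 0:
        return True
    return random.random() < math.exp(-delta_c / temperature)
```

At high temperatures nearly every move is accepted; at low temperatures only improvements survive, so in late generations the combined algorithm behaves like the plain genetic algorithm.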
5. Empirical Results
The proposed algorithms are heuristics and the model is constructed from the communication graph, so we must evaluate both the quality of the model and the quality of the solutions. We implemented the algorithms in C and tested them on an Intel i5 2.27 GHz PC. To demonstrate the effectiveness of the proposed algorithm, we compare it with the original genetic-algorithm-based partitioning [36]. For testing, several random instances with different numbers of nodes and metrics are used. The parameters of the partitioning problem are generated with the following rules.
hi is randomly generated in [1,100].
tih is randomly generated in [1,100], and tis is randomly generated in [tih,200+tih].
cij is randomly generated in [1,20].
M is a given time constraint and randomly generated in [∑1ntih,∑1ntis].
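The generation rules above can be reproduced as follows; returning the instance as (h, th, ts, c, M) in the notation of Section 3 is our own packaging choice.

```python
import random

def random_instance(n, seed=None):
    """Generate a random partitioning instance following the rules of Section 5."""
    rng = random.Random(seed)
    h = [rng.randint(1, 100) for _ in range(n)]               # hardware costs
    th = [rng.randint(1, 100) for _ in range(n)]              # hardware exec times
    ts = [rng.randint(th[i], 200 + th[i]) for i in range(n)]  # software exec times
    # Communication times for each unordered pair (i, j), i < j (upper triangular)
    c = [[rng.randint(1, 20) if j > i else 0 for j in range(n)] for i in range(n)]
    M = rng.randint(sum(th), sum(ts))                         # time constraint
    return h, th, ts, c, M
```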
The simulation results of the proposed algorithms and the original genetic algorithm are presented in Figures 3 and 4. Each instance is tested 100 times and the average values are reported. The first graph demonstrates the accuracy of the proposed algorithm and the second its efficiency. Furthermore, we collect the convergence track and the run time of the two algorithms.
Minimum cost comparison of the partition.
Runtime comparison of the two algorithms.
The cost values are shown in Figures 3 and 4 for different parameter configurations. For random graphs with a small number of nodes, the results of the enhanced genetic algorithm (EGA) and the original genetic algorithm (GA) are almost the same. For bigger random graphs, EGA outperforms GA and consistently finds smaller cost values. As the size increases, the deviation between the two algorithms grows. The improved algorithm maintains a better population, and its local search is more thorough and accurate.
We also record the convergence tracks of the two algorithms, as presented in Figure 5. At the beginning of the iteration procedure, GA drops faster than EGA, but EGA finds the near-optimal solution sooner in the convergence process. The iteration number grows with the number of nodes, which means more time is needed to reach the stable state. We also collect the minimum cost values of the two algorithms; the number of times each algorithm attains the minimum value shows that EGA performs better than GA, even for small numbers of nodes.
Convergence track when the number of nodes equals 1000.
As shown in the experimental results, the original genetic algorithm needs more time, that is, more iterations, to meet the termination conditions. Furthermore, the near-optimal solutions obtained by the combined algorithm are more accurate. From the experiments, it is reasonable to conclude that the proposed algorithm produces high-quality approximate solutions at faster speed.
6. Conclusion
In this paper, we construct a communication graph for the partitioning problem in which the implementation cost, execution time, and communication time are all taken into account. We then propose a heuristic based on a genetic algorithm and simulated annealing to solve the problem near optimally, even for quite large systems. The proposed method incorporates simulated annealing into the genetic algorithm; the acceptance rules based on fitness and the Metropolis criterion speed up the convergence of the solution process without loss of accuracy. Experimental results show that the proposed model and algorithm produce more accurate partitions at faster speed.
Acknowledgments
This work was supported by the National Medium- and Long-term Development Plan (Grant no. 2010ZX01045-002-3), the 973 Program of China (Grant no. 2010CB328000), the National Natural Science Foundation of China (Grant nos. 61073168, 61133016, and 61202010), and the National 863 Plan of China (Grant no. 2012AA040906).
References
[1] J. B. Wang, M. Chen, X. Wan, and C. Wei, "Ant-colony-optimization-based scheduling algorithm for uplink CDMA nonreal-time data," 2009. doi:10.1109/TVT.2008.924983
[2] Y. Jiang, H. Zhang, and X. Song, "Bayesian network based reliability analysis of PLC systems," 2012. doi:10.1109/TIE.2012.2225393
[3] J. B. Wang, M. Chen, and J. Wang, "Adaptive channel and power allocation of downlink multi-user MC-CDMA systems," 2009. doi:10.1016/j.compeleceng.2009.01.003
[4] D. D. Gajski, F. Vahid, S. Narayan, and J. Gong, "SpecSyn: an environment supporting the specify-explore-refine paradigm for hardware/software system design," 1998. doi:10.1109/92.661251
[5] J. Henkel, "Low power hardware/software partitioning approach for core-based embedded systems," in Proceedings of the 36th Annual Design Automation Conference (DAC), ACM, June 1999, pp. 122–127. doi:10.1109/92.661254
[6] R. Niemann and P. Marwedel, "An algorithm for hardware/software partitioning using mixed integer linear programming," 1997. doi:10.1023/A:1008832202436
[7] Z. Mann and A. Orbán, "Optimization problems in system-level synthesis," in Proceedings of the 3rd Hungarian-Japanese Symposium on Discrete Mathematics and Its Applications, Tokyo, Japan, 2003.
[8] R. P. Dick and N. K. Jha, "MOGAC: a multiobjective genetic algorithm for hardware-software cosynthesis of distributed embedded systems," 1998. doi:10.1109/43.728914
[9] J. I. Hidalgo and J. Lanchares, "Functional partitioning for hardware-software codesign using genetic algorithms," in Proceedings of the 23rd EUROMICRO Conference, September 1997, pp. 631–638. doi:10.1109/43.709402
[10] K. S. Chatha and R. Vemuri, "Hardware-software partitioning and pipelined scheduling of transformative applications," 2002. doi:10.1109/TVLSI.2002.1043323
[11] J. Wang and X. Xie, "Optimal odd-periodic complementary sequences for diffuse wireless optical communications," 2012.
[12] W. Jigang, B. Chang, and T. Srikanthan, "A hybrid branch-and-bound strategy for hardware/software partitioning," in Proceedings of the 8th IEEE/ACIS International Conference on Computer and Information Science (ICIS '09), June 2009, pp. 641–644. doi:10.1109/ICIS.2009.152
[13] S. Banerjee, E. Bozorgzadeh, and N. D. Dutt, "Integrating physical constraints in HW-SW partitioning for architectures with partial dynamic reconfiguration," 2006. doi:10.1109/TVLSI.2006.886411
[14] P. V. Knudsen and J. Madsen, "PACE: a dynamic programming algorithm for hardware/software partitioning," in Proceedings of the 4th International Workshop on Hardware/Software Co-Design (Codes/CASHE '96), IEEE Computer Society, March 1996, pp. 85–92.
[15] J. Madsen, J. Grode, P. V. Knudsen, M. E. Petersen, and A. Haxthausen, "LYCOS: the Lyngby co-synthesis system," 1997. doi:10.1023/A:1008884219274
[16] J. Wu and T. Srikanthan, "Low-complex dynamic programming algorithm for hardware/software partitioning," 2006. doi:10.1016/j.ipl.2005.12.008
[17] A. Kalavade, University of California, Berkeley, 1995.
[18] R. Gupta and G. De Micheli, "Hardware-software cosynthesis for digital systems," 1993. doi:10.1109/54.232470
[19] R. Niemann and P. Marwedel, "Hardware/software partitioning using integer programming," in Proceedings of the European Design & Test Conference, IEEE Computer Society, March 1996, pp. 473–479.
[20] F. Vahid and D. D. Gajski, "Clustering for improved system-level functional partitioning," in Proceedings of the 8th International Symposium on System Synthesis, September 1995, pp. 28–33.
[21] F. Vahid, J. Gong, and D. D. Gajski, "Binary-constraint search algorithm for minimizing hardware during hardware/software partitioning," in Proceedings of the European Design Automation Conference, IEEE Computer Society, September 1994, pp. 214–219.
[22] R. Ernst, J. Henkel, and T. Benner, "Hardware-software cosynthesis for microcontrollers," 1993.
[23] J. Henkel and R. Ernst, "An approach to automated hardware/software partitioning using a flexible granularity that is driven by high-level estimation techniques," 2001. doi:10.1109/92.924041
[24] L. Li, Y. Song, and M. Gao, "A new genetic simulated annealing algorithm for hardware-software partitioning," in Proceedings of the 2nd International Conference on Information Science and Engineering (ICISE '10), IEEE Computer Society, December 2010.
[25] L. Y. Li and M. Shi, "Software-hardware partitioning strategy using hybrid genetic and tabu search," in Proceedings of the International Conference on Computer Science and Software Engineering (CSSE '08), December 2008, pp. 83–86. doi:10.1109/CSSE.2008.488
[26] S. Zheng, Y. Zhang, and T. He, "The application of genetic algorithm in embedded system hardware-software partitioning," in Proceedings of the International Conference on Electronic Computer Technology (ICECT '09), February 2009, pp. 219–222. doi:10.1109/ICECT.2009.132
[27] P. Eles, Z. Peng, K. Kuchcinski, and A. Doboli, "System level hardware/software partitioning based on simulated annealing and tabu search," 1997. doi:10.1023/A:1008857008151
[28] T. Wiangtong, P. K. Cheung, and W. Luk, "Comparing three heuristic search methods for functional partitioning in hardware-software codesign," 2002. doi:10.1023/A:1016567828852
[29] P. Arató, Z. Mann, and A. Orbán, "Algorithmic aspects of hardware/software partitioning," 2005.
[30] W. Jigang, T. Srikanthan, and G. Chen, "Algorithmic aspects of hardware/software partitioning: 1D search algorithms," 2010. doi:10.1109/TC.2009.173
[31] J. Albuquerque, C. Coelho Jr., C. F. Cavalcanti, D. C. da Silva Jr., and A. O. Fernandes, "System-level partitioning with uncertainty," in Proceedings of the 7th International Conference on Hardware/Software Codesign (CODES '99), May 1999, pp. 198–202.
[32] J. Albuquerque, "Solving hw sw partitioning by stochastic linear programming with management of teams uncertainty."
[33] Y. Jiang, H. Zhang, and X. Jiao, "Uncertain model and algorithm for hardware/software partitioning," in Proceedings of the IEEE Computer Society Annual Symposium on VLSI (ISVLSI '12), August 2012, pp. 243–248.
[34] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," 1983. doi:10.1126/science.220.4598.671
[35] J. Holland, "Genetic algorithms," 1992.
[36] P. Arató, S. Juhász, Z. Mann, A. Orbán, and D. Papp, "Hardware-software partitioning in embedded system design," in Proceedings of the IEEE International Symposium on Intelligent Signal Processing, September 2003, pp. 197–202. doi:10.1109/ISP.2003.1275838
O.System-level partitioning with uncertaintyProceedings of the 7th International Conference on Hardware/Software Codesign (CODES '99)May 19991982022-s2.0-0032680016AlbuquerqueJ.Solving hw sw partitioning by stochastic linear programming with management of teams uncertaintyJiangY.ZhangH.JiaoX.Uncertain model and algorithm for hardware/software partitioningProceedings of the IEEE Computer Society Annual Symposium on VLSI (ISVLSI '12)August 2012243248KirkpatrickS.GelattC. D.VecchiM. P.Optimization by simulated annealing198322045986716802-s2.0-2644447977810.1126/science.220.4598.671MR702485ZBL1225.90162HollandJ.Genetic algorithms199226716672AratóP.JuhaszS.MannZ.OrbánA.PappD.Hardware-software partitioning in embedded system designProceedings of the IEEE International Symposium on Intelligent Signal ProcessingSeptember 200319720210.1109/ISP.2003.1275838