SOME REMARKS ON MODELLING AND SIMULATION OF PHYSICAL PHENOMENA

Mathematical modelling and computer simulation of physical phenomena is a rapidly growing field of work in all areas of pure and applied sciences. In principle, mathematical modelling of physical phenomena has been the field of theoretical physics from the very beginning, although the computer has increased the potential of this method by many orders of magnitude. Modelling and simulation are often used as synonyms. It may, however, be meaningful to distinguish the development of a mathematical model from its use in computer simulation. Also, a mathematical model in this sense must be distinguished from mathematical expressions interpolating experimental data. In the field of textures, models of texture formation, models of materials properties, as well as combinations of the two are being used. In this connection it is important whether a texture formation model is linear or non-linear. In the first case the texture formation operator can be reduced to the orientation space, whereas a non-linear operator operates in the full texture space.


INTRODUCTION
Computer modelling and simulation are becoming more and more important techniques in research and technology. The papers in this volume deal with modelling and simulation in the field of textures. This may be taken as an occasion to consider these methods under some general aspects.
A physical phenomenon cannot be considered as being understood unless it can be mathematically formulated. On the other hand, a mathematical formulation by itself does not necessarily mean that the phenomenon is really understood. Mathematical formulation of physical phenomena with the aim of understanding them, i.e. modelling them, is the very basic idea of physics. It dates back to the early days of physics. What is new, however, and has led to an explosive evolution in the use of mathematical modelling, is the availability of computers allowing one to calculate numerically the "output" of even sophisticated models and to compare it with experimental data to an extent and with an accuracy that was impossible without computers. It may thus be worthwhile to recapitulate the aims and principles of methods such as mathematical modelling, computer modelling, computer simulation and related terms.

MODELLING
Any physical phenomenon enters physics by a qualitative observation followed sooner or later by detailed quantitative measurements. The question of what causes the phenomenon usually arises with the first observation; a reasonable answer can, however, only be expected on the basis of quantitative measurements. As an example we may take primary recrystallization of deformed metals. In order to understand the phenomenon, a physical model must be conceived taking into account all that is known about the phenomenon itself as well as all relevant physical principles in general, Figure 1. In the chosen example of primary recrystallization this must include the structure of the deformed material and the observed fact that recrystallization starts with nucleation; the nuclei grow, which requires a driving force as well as a boundary mobility. These facts must be correlated with each other according to general physical principles, e.g. minimisation of the energy, the principles of diffusion, and others. In this way a physical model is obtained as a hypothesis which must be proved or rejected by comparing its "output" with the experimental facts.
In many cases this (comprehensive) model is much too complicated. On the one hand, its mathematical formulation may be too complicated, and on the other hand it may require too large a number of empirical input data which are not known. In the chosen example of recrystallization, the structure of the deformed state is often too complex to be taken completely into account. Hence, in a second step a simplified physical model is made. This step depends essentially on the intuition of the person who proposes the model. This includes the suggestion of which of the influencing parameters may have a strong influence and which ones may be neglected at this stage. Hence, many different models may be proposed by different persons. The physical model of this stage is then formulated mathematically. Thereby it is assumed that there is a correct one-to-one correspondence between the (simplified) physical model and the mathematical model. Strictly speaking, the mathematical model defines the exact features of the physical model. The mathematical model itself can always be expressed in terms of an analytical formula (using an appropriate symbolism). The same is not necessarily true for the "output" of the model, which is then to be compared with experimental data. That is where "computer modelling" comes in. If the "output" can be expressed in terms of an analytical formula, the "properties" of this formula, i.e. how it depends on the input data, can mostly be discussed with analytical methods. If no analytical expression is available, numerical methods must be used to calculate the outcome of the model. Thereby it must be kept in mind that the numerical model is not exactly identical with the original mathematical model. For instance, integrals usually must be expressed by sums, infinite series must be truncated after a finite number of terms, and so on. Finally, the numerical model must be executed by a computer, which requires an appropriate computer code. Usually one assumes that the computer code exactly corresponds to the numerical model (in practice, however, this is a problem which needs very careful and often cumbersome tests and "debugging"). One important step at this stage is to make sure in which range of the variables the numerical model and computer code are correct (taking the used approximations into account). It may happen, and often it does happen, that on the way from the mathematical model over the numerical model to the actual computer code "hidden conditions" were introduced, with the result that the computer code, strictly speaking, implements a different mathematical model. Sometimes the programmer himself is not aware of that, and most often it is not specified in the description of the model. This is the reason why comparison of different mathematical models of one and the same physical phenomenon, given in the literature, is often so difficult. After that, the output of the model must be compared with the known experimental data. This raises the very difficult problem of the required accuracy of the comparison. Thereby the errors introduced by the "simplified" physical model compared to the "comprehensive" one must be taken into account, but also the accuracy of the measurements must be considered.
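The difference between the mathematical model and its numerical counterpart can be made concrete with a minimal sketch (a toy integral chosen for illustration, not taken from the text above): replacing an integral by a finite sum introduces a discretization error whose size depends on the number of terms, which is one reason why the range of validity of the numerical model and computer code must be checked.

```python
import math

def integral_exact():
    # Mathematical model: the integral of sin(x) from 0 to pi equals 2.
    return 2.0

def integral_numerical(n):
    # Numerical model: the integral is replaced by a midpoint-rule sum,
    # so it is not exactly identical with the mathematical model.
    h = math.pi / n
    return sum(math.sin((i + 0.5) * h) for i in range(n)) * h

# The discretization error shrinks as the number of terms grows:
errors = [abs(integral_numerical(n) - integral_exact()) for n in (10, 100, 1000)]
```

Checking how the error behaves over the intended range of the variables is a small-scale analogue of the validation step described above.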
The comparison between model and measurement should be done for the whole range of the input parameters for which the model should be applicable. Very often it turns out that the model predicts phenomena which have not yet been measured. It is then necessary to carry out new measurements and to compare them with the predictions.
If any of these comparisons fails, then the model must be discarded or at least improved, i.e. some of the introduced simplifications must be abandoned.
If the model has passed all these examinations with a positive result, it can nevertheless not yet be assumed to be the correct model of the physical phenomenon. Rather, it must be compared with all possible alternative models. This is usually the most difficult step in modelling a physical phenomenon. This problem has two different aspects: On the one hand, it is virtually impossible to conceive all possible alternative models.
The second aspect is more of a psychological nature. Everyone is much more familiar with his own model than with those proposed by other persons. Hence, comparison of competing models is often done on a subjective basis rather than an objective one, if it is done at all.
Let us assume that the proposed model has also passed this test, i.e. it gives much better results than all other models. Then this model can be regarded as describing the physical phenomenon to the best of our present knowledge. Usually this situation will hold until a still better model comes up.
The different steps of modelling are shown schematically in Figure 1.

Estimation of empirical data by modelling
A mathematical model usually requires three types of input data, as is illustrated schematically in Figure 2. These are:
1) Universal constants
2) Empirical values of some input parameters
3) Choosable parameters for which the output of the model is to be considered.
The most satisfactory case is to have an "ab initio" model which does not use empirical values of any input parameter. Ab initio models are, however, the rare exception. In all other models the output depends on the correct knowledge of the values of the empirical data. Very often, however, these values are not correctly known. Then the model can be used to obtain the most probable values of the unknown input data, provided that the model itself can be trusted. In this case the output of the model is calculated with varying values of the input data within a reasonable range. The output data so obtained are compared with experimental measurements. The best fit is then assumed to correspond to the correct values of the input parameters.
It must, however, be mentioned that this procedure can only work if the model itself is good. This is, however, often not known beforehand. Rather, the validity of the model and the correct values of the input parameters can only be proved simultaneously. Hence, this method can only be used with great care. In the chosen example of primary recrystallization, the misorientation dependence of grain boundary mobility is needed, but it has not yet been measured thoroughly as a function of all misorientation angles. Hence, models are used to estimate the misorientation dependence of this quantity.
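The parameter-estimation procedure described above can be sketched as follows. The exponential-decay model, the unknown parameter k, and the "measured" data are all hypothetical stand-ins chosen only to illustrate the best-fit principle:

```python
import math

def model_output(k, times):
    # A toy "model": exponential decay with one unknown empirical parameter k.
    return [math.exp(-k * t) for t in times]

times = [0.0, 1.0, 2.0, 3.0, 4.0]
# Stand-in for experimental measurements, here generated with k = 0.5:
measured = [math.exp(-0.5 * t) for t in times]

# Vary the unknown input parameter within a reasonable range and keep
# the value giving the best (least-squares) agreement with the data.
candidates = [i * 0.01 for i in range(1, 201)]   # k from 0.01 to 2.00

def misfit(k):
    return sum((m - y) ** 2 for m, y in zip(measured, model_output(k, times)))

best_k = min(candidates, key=misfit)
```

The caveat in the text applies directly: the recovered value of k is only as trustworthy as the model itself.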

COMPUTER SIMULATION
Computer simulation is different from computer modelling, as is illustrated schematically in Figure 3. In this case the mathematical model of the physical phenomenon is completely known. It is also assumed that a correct computer code is available in which the necessary data, e.g. universal constants and correct empirical data, are already included. The output of the computer code then depends only on the actual choice of the choosable parameters. In this case it is presumed that the output of the computer code will always agree with the corresponding measurement (if we were to make such a measurement). The most prominent example of computer simulation is a flight simulator. Thereby it is assumed that the physics of flying is (sufficiently) known. As empirical data, the properties of the particular aircraft are given to the computer. The output of the simulator is then exactly what would happen with the airplane if it were operated with the same choosable parameters. In contrast to modelling, simulation does not give new insight into the considered physical phenomenon. Rather, it is a prerequisite of simulation that the physical phenomenon is fully understood. The advantages of simulation may be manifold:
1) It may avoid dangerous situations (flight simulator)
2) It may be cheaper (e.g. the simulation of crash-tests)
3) The physical process can be considered much faster or much slower than in reality (e.g. continental drift, weather forecast, or explosion phenomena)
4) Simulation is needed for machine control using "feed-back" models of the real process (e.g. automatic landing of an aircraft or the control of rolling mills)
In the field of textures, the simulation of r-values on the basis of on-line texture measurement is a well-known example.

INTERPOLATION
A third situation should be considered in this connection, namely the interpolation of empirical data by an empirical formula. Given are data points, Figure 4a, which are to be interpolated by a continuous curve, Figure 4b. The best-known interpolation functions are the straight line, an n-th order polynomial, and the cubic spline function.
These function types contain a certain number of parameters which are fitted to the experimental data points. The type of function is usually chosen intuitively by looking at the data points.
In texture analysis, spread functions of texture components are mainly chosen according to this principle; these are, among others, Gauss functions, standard functions, elliptical functions, and several others. Also the "log-normal" distribution, often used to interpolate grain size distribution functions, is chosen according to this principle.
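As a minimal illustration of this fitting principle, with the straight line as the intuitively chosen function type, the two parameters of y = a·x + b can be fitted to data points by least squares:

```python
def fit_line(xs, ys):
    # Least-squares fit of the two parameters a, b of y = a*x + b
    # to the given data points (closed-form normal equations).
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Data points lying exactly on y = 2x + 1 are reproduced by the fit:
a, b = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

A Gauss function or cubic spline would be fitted by the same principle, only with more parameters.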

Figure 4 Interpolation of empirical data by an empirical formula.
The principle of interpolation may, of course, be mixed with that of modelling. In this case some ideas for a physical model are given, but often only with the aim of justifying the chosen type of function. This applies, for instance, to the Gauss or normal distribution functions as "model functions" for texture components.

MODELLING IN THE FIELD OF TEXTURE
In the field of textures all three discussed principles are being used, i.e. modelling, simulation, and interpolation. Figure 5 is a scheme of the main areas which can be distinguished in this field. Each of these areas requires its own particular mathematical methods.

Texture representation
The central area is the texture function, i.e. the ODF, as well as several other orientation-related distribution functions such as the misorientation distribution function, MODF, of grain boundaries, and others. The arguments of these functions are orientations g and directions, both of which define non-Euclidean spaces; the higher-order functions even define higher-order spaces. This requires a particular texture representation mathematics. In this connection it is often desirable to have a rather simple representation of these functions, i.e. by splitting them into components which can be described by a low number of parameters. For this purpose Gauss or other "normal" distribution functions are often used. These functions are mainly chosen for mathematical reasons.
Hence, they must be considered as "interpolation functions" and not as "model functions".

A second area is texture measurement. This may be done either "grain-by-grain" or with "polycrystal methods". In both cases mathematical methods of data evaluation are needed in order to transform the experimental data, e.g. pole figures, into the required texture functions, e.g. the ODF. This, too, is strictly speaking a problem of interpolation, although one in a higher-dimensional space. The ODF interpolates (according to a least-squares principle) all available measured pole figure values.
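The least-squares principle behind such an interpolation can be sketched in miniature. The 3x2 system below is a hypothetical stand-in for the (much larger) pole-figure-to-ODF problem: three "measured" values are interpolated by two coefficients via the normal equations.

```python
def least_squares_2(A, p):
    # Least-squares solution of A c = p for two unknown coefficients,
    # via the normal equations (A^T A) c = A^T p, solved by Cramer's rule.
    m = len(A)
    ata = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(2)]
           for i in range(2)]
    atp = [sum(A[k][i] * p[k] for k in range(m)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    c0 = (atp[0] * ata[1][1] - atp[1] * ata[0][1]) / det
    c1 = (ata[0][0] * atp[1] - ata[1][0] * atp[0]) / det
    return c0, c1

# Three "measured" values, two coefficients: an overdetermined system
# whose least-squares solution interpolates all the measurements at once.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
p = [1.0, 2.0, 3.0]
c = least_squares_2(A, p)
```

In the real pole-figure problem the matrix relates hundreds of measured pole figure values to the coefficients of the ODF, but the principle is the same.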

Texture formation
The texture (together with other aggregate parameters) is formed in the material by the action (simultaneous or successive) of five main texture-forming physical processes:
1) Crystallization from any non-crystalline state
2) Plastic deformation, mainly by glide and twinning
3) All kinds of recrystallization processes
4) Phase transformation (into another crystalline phase)
5) Rigid rotation of particles (either loose particles or particles embedded in a softer medium)
It is of vital interest to understand these processes quantitatively. This requires the construction of physical, mathematical, and numerical models as well as computer codes for these. These models must comprehend all aspects of the particular process. The texture is then only one of the output parameters of the process model. In the example of recrystallization, the recrystallization kinetics, e.g. described by the Avrami equation, is another such output parameter. Since the texture (ODF) is, however, a multi-valued quantity, it allows a very strong comparison with experimental data. This is why texture analysis is a very sensitive method for the study of such processes (e.g. texture measurements step-by-step during plastic deformation, recrystallization, and all the other processes).
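The Avrami kinetics mentioned above can be evaluated in a few lines; the values of the rate constant k and the exponent n below are purely illustrative, not measured data:

```python
import math

def avrami_fraction(t, k=1.0, n=3.0):
    # Recrystallized volume fraction X(t) according to the Avrami equation
    # X(t) = 1 - exp(-k * t**n); k and n here are illustrative values.
    return 1.0 - math.exp(-k * t ** n)

# The fraction rises sigmoidally from 0 towards 1 with annealing time:
fractions = [avrami_fraction(t) for t in (0.0, 0.5, 1.0, 2.0)]
```

In a process model this kinetic curve would be one output parameter alongside the texture itself.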
Texture formation also has a simulation aspect, as is also illustrated in Figure 5. It is one of the aims in materials technology to produce materials with particular textures. The safest way to reach this goal is to measure the texture (e.g. "on-line") and to control the parameters of the production line on the basis of the measured texture data. This requires that the corresponding texture formation model is completely known; it can then be used for simulating this process in real time and (hopefully) for using the output of the simulation to control the process.

Physical properties of materials
In technological applications of materials it is not primarily the texture which is interesting but the materials properties influenced by the texture (and other aggregate parameters). Hence, physical, mathematical, and numerical models as well as appropriate computer codes are needed in order to express the overall properties of the material in terms of the single crystal properties and the texture (and other aggregate parameters). The influence of texture on the materials properties has its origin in the anisotropy of properties in single crystals. Two situations are then to be distinguished:
1) The crystallites of the aggregate do not interact with each other. Then the overall property is the linear mean value weighted with the texture function, ODF. The other aggregate parameters have no influence on the overall property. Strictly speaking, this situation will never be fulfilled exactly. It is, however, often assumed as a first-order model (in the sense of the simplified model in Figure 1).
2) The crystallites of the aggregate interact with each other. The overall properties then depend on size, shape, and mutual arrangement of the grains, in addition to the texture, i.e. the ODF. This is the situation for elastic, plastic, and ferromagnetic properties (as well as many others).
The calculation of linear mean values on the basis of the ODF is no longer a problem. Models of non-linear mean values, including grain interaction (and hence higher-order aggregate parameters), are presently one of the fields of active research using physical, mathematical, and numerical models.
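The non-interacting case 1) above reduces to a weighted average and can be sketched with a discretized texture; the weights and single-crystal property values below are invented for illustration only:

```python
# Discretized texture: weights f(g_i) for a small set of orientations,
# normalized so that they sum to one.
weights = [0.2, 0.5, 0.3]

# Single-crystal property value in each of those orientations
# (illustrative numbers, not measured data):
property_values = [100.0, 120.0, 90.0]

# Linear (non-interacting) model: the overall property is the
# mean value of the single-crystal property weighted with the ODF.
overall = sum(w * p for w, p in zip(weights, property_values))
```

Case 2), with grain interaction, has no such simple closed form, which is precisely why it remains a field of active research.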

Feed-back models
Finally, texture formation processes may be directly influenced by texture-dependent properties of the material. A prominent example of this is deep-drawing processes leading, for instance, to earing. In this case the particular form of the stress and strain fields in the deformed sheet depends on the (local) plastic anisotropy, and this, in turn, depends on the (local) textures. Hence, a comprehensive model of this process must necessarily contain a texture formation model and a property model simultaneously. This is shown in Figure 5 as feed-back models. These feed-back models are mainly needed for the purpose of simulation, e.g. of the deep-drawing process. In this case, simulation will be much cheaper and also faster than finding the production parameters for every new shape of a new part by experiments.

TEXTURE FORMATION MODELS
All texture formation models according to any of the physical processes shown in Figure 5 can be considered under a common aspect illustrated in Figure 6. They start with an input texture which is then modified by the process. The process model thus calculates an output texture based on the input texture. Both textures are "multi-valued" quantities which may be represented, for instance, by the functional values f(g), which are then said to form the "texture vector", or they may be expressed by the series expansion coefficients. Depending on the accuracy used, these may be somewhere between 100 and 1000 parameters.

Texture transformation operators
Texture transformation, as illustrated in Figure 6, may be expressed in terms of a texture transformation operator

f_output(g) = M ⊗ f_input(g)    (1)

where M depends on all relevant process variables. Two different situations may then be distinguished:

1) Linear operators. If the operator M is linear, i.e. distributive, eq. (1) takes on the form

f_output(g) = ∫ f_input(g') · [M ⊗ g'] dg'    (5)

which means that the operator need only be applied to orientations g rather than to textures f(g), as is illustrated schematically in Figure 7. The operator thus operates in the orientation space rather than in the texture space. The "properties" of the operator are completely known when it has been applied to all possible orientations. With the approximation assumed in Figure 6 these are I orientations with

100 < I < 1000    (6)

2) Non-linear operators. In the more general case the operator will not be linear, i.e. it is non-distributive:

M ⊗ [f1(g) + f2(g)] ≠ M ⊗ f1(g) + M ⊗ f2(g)    (7)

Superposition texture models
In these models each crystal orientation recrystallizes independently of all others.
Compromise texture models
In these models a growing nucleus must consume deformed grains of different orientations. Its overall growth rate is thus a compromise of the growth rates in the different orientations.
Superposition models are linear, whereas compromise models are non-linear. In the theory of deformation texture formation, the classical full-constraints Taylor model is linear. The same holds for the rate-sensitive model (in its classical form), whereas self-consistent models and all other models taking grain interaction into account are non-linear. For models of phase transformation, similar considerations hold as for recrystallization. Diffusive phase transformation can be treated in terms of superposition or compromise models. Martensitic texture transformation has (thus far) mainly been considered in terms of superposition models.
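The distinction between linear (distributive) and non-linear operators can be checked directly on a discretized texture vector. The 3x3 matrix and the squaring operation below are toy stand-ins, not actual texture formation operators:

```python
def apply_linear(M, f):
    # A linear texture transformation: a matrix acting on the texture
    # vector, fully specified by its action on single orientations.
    return [sum(M[i][j] * f[j] for j in range(len(f))) for i in range(len(M))]

def apply_nonlinear(f):
    # A toy non-linear (non-distributive) transformation.
    return [x * x for x in f]

M = [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]]
f1, f2 = [1.0, 0.0, 0.0], [1.0, 1.0, 0.0]
fsum = [a + b for a, b in zip(f1, f2)]

# Distributivity, eq. (7) with "=", holds for the linear operator ...
lhs = apply_linear(M, fsum)
rhs = [a + b for a, b in zip(apply_linear(M, f1), apply_linear(M, f2))]

# ... but fails for the non-linear one:
nl_lhs = apply_nonlinear(fsum)
nl_rhs = [a + b for a, b in zip(apply_nonlinear(f1), apply_nonlinear(f2))]
```

This is exactly why a linear operator is fully characterized by its action on the I single orientations of eq. (6), whereas a non-linear operator must be studied on whole textures.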

TOTAL TEXTURE FORMATION OPERATOR
In reality, a final texture usually is the result of the successive action of several (often many) of the processes given in the list of Figure 5. Then the final texture can be written

f_final(g) = M_n ⊗ ... ⊗ M_2 ⊗ M_1 ⊗ f_initial(g)    (10)

Thereby the same kind of process may take place repeatedly. For instance, the final texture of deep-drawing steel may comprise the following texture formation mechanisms:
1) Solidification texture
2) Rolling texture in the γ-state
3) Recrystallization in the γ-state
4) Phase transformation from γ to α
5) Rolling texture in the α-state
6) Recrystallization texture in the α-state
From the technological point of view a formula of the type of eq. (10) is most desirable. A computer code is wanted which implements eq. (10) such that all possible process parameters can be specified in the operators M_1...M_n, giving the correct final texture. Also in the geological sciences eq. (10) is wanted. In this case f_final(g) is being measured. Eq. (10) should then be used in the opposite direction in order to find out the operators M_1...M_n. The straightforward use of eq. (10) is (in principle) unique, whereas the inverse problem, i.e. the determination of the sequence of operators M_1...M_n, can in general not be unique.
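The asymmetry between the forward and inverse use of eq. (10) can be demonstrated with linear toy operators on a two-component texture vector (the matrices below are illustrative, not real process operators): the forward composition gives one unique final texture, while two different operator sequences can produce that same final texture.

```python
def matmul(A, B):
    # Compose two (2x2, purely illustrative) texture formation operators.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(M, f):
    # Apply an operator to a discretized texture vector.
    return [sum(M[i][j] * f[j] for j in range(2)) for i in range(2)]

M1 = [[0.0, 1.0], [1.0, 0.0]]        # toy "first process"
M2 = [[0.5, 0.5], [0.5, 0.5]]        # toy "second process"
f_initial = [1.0, 0.0]

# Forward direction: the successive application is unique.
f_final = apply(matmul(M2, M1), f_initial)

# Inverse direction: a different sequence (identity followed by the
# combined operator) yields the same final texture, so the
# decomposition into a sequence of operators is not unique.
N1 = [[1.0, 0.0], [0.0, 1.0]]        # identity: "no first process"
N2 = matmul(M2, M1)
f_final_alt = apply(matmul(N2, N1), f_initial)
```

This is a minimal instance of the general statement above: recovering the sequence M_1...M_n from a measured f_final(g) alone is an ill-posed problem.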

CONCLUSIONS
In the field of textures, as in any other branch of experimental and applied physics, modelling and computer simulation are methods which gain ever increasing importance. Physical, mathematical, and numerical models, as well as computer codes developed on this basis, are used in order to understand the considered physical phenomenon. Thereby the agreement between the output of the model and measured data is a necessary but not a sufficient condition for a model to be correct.
In texture analysis, texture formation models as well as property models and the combination of both are to be considered. Mathematical models can be used to estimate unknown empirical data by comparing the output of the model as a function of assumed values of the unknown parameters, provided the model itself has already reached a sufficient degree of reliability.
Computer simulation of a physical phenomenon requires the availability of a reliable mathematical model and an appropriate computer code.Simulation may be faster, slower, cheaper, or less dangerous than the real physical or technological process.It is also needed for machine control.
A third term in this connection is interpolation of experimental data, sometimes by sophisticated interpolation formulae (e.g. the ODF as interpolation of measured pole figure data). This is, however, not a model in the above sense for the understanding of the underlying physical phenomena.
Mathematical modelling is not altogether new. Rather, it is the basic principle of physics. What is new and has led to an explosive evolution of modelling is the availability of computers as excellent tools for modelling. It must, however, be kept in mind that the computer is nothing other than a tool, which is used to calculate the outcome of a physical model for all possible input data.

Figure 1
Figure 1 Schematic representation of the process of modelling a physical phenomenon.

Figure 2
Figure 2 Schematic representation of the estimation of unknown data by comparing modelled with experimental data.

Figure 3
Figure 3 Schematic representation of simulation of a physical phenomenon on the basis of an established mathematical model.

Figure 5
Figure 5 Three classes of mathematical models used in the field of textures, as well as mathematical methods of data evaluation and texture representation.

Figure 7
Figure 7 A linear texture formation model transforms the function values f_input(g) individually into function values f_output(g). It operates in the orientation space.