One More Tool for Understanding Resonance and the Way for a New Definition

We propose the application of graphical convolution to the analysis of the resonance phenomenon. This time-domain approach encompasses both the finally attained periodic oscillations and the initial transient period. It also opens an interesting discussion of the analysis of nonsinusoidal waves, based not on frequency analysis but on direct consideration of waveforms, thus presenting an introduction to Fourier series. Further developing the point of view of graphical convolution, we arrive at a new definition of resonance in the time domain.


Introduction
1.1. General. The following material fits well into an "Introduction to Linear Systems" or "Mechanics" course, and is relevant to a wide range of technical and physics courses, since the resonance phenomenon has long interested physicists, mathematicians, chemists, engineers, and, nowadays, also biologists.
The complete resonant response of an initially unexcited system has two different, distinguishable parts, and there are, respectively, two basic definitions of resonance, significantly distanced from each other.
In the widely adopted textbook [1] written for physicists, resonance is defined as a linear increase of the amplitude of oscillations in a lossless oscillatory system, obtained when the system is pumped with energy by a sinusoidal force at the correct frequency. Figure 1 schematically shows the "envelope" of the resonant oscillations being developed.
Thus, a lossless system under resonant excitation absorbs more and more energy, and a steady state is never reached. In other words, in the lossless system, the amplitude of the steady state and the "quality factor" Q (having a somewhat semantic meaning in such a system) are infinite at resonance.
However, the slope of the envelope is always finite; it depends on the amplitude of the input function, and not on Q. Though the steady-state response will never be reached in an ideal lossless system, the linear increase in amplitude is important by itself. When a realistic physical system absorbs energy resonantly, say in the form of photons of electromagnetic radiation, there indeed is some period (during which we can still ignore power losses, say, some back radiation) in which the system's energy increases linearly in time. The energy absorption is immediate upon the appearance of the influence, and the rate of the absorption directly measures the intensity of the input.
One notes that the energy pumping into the system at the initial stage of the resonance process readily suggests that a sinusoidal waveform of the input function is not necessary for resonance; it is obvious (think, e.g., about swinging a swing by kicking it) that the energy pumping can occur for other input waveforms as well. This is a heuristically important point of the definition of [1].
The physical importance of the initial increase in oscillatory amplitude is associated not only with the energy pumping; the informational meaning is also important. Assume, for instance, that we speak about the start of oscillations of the spatial positions of the atoms of a medium, caused by an incoming electromagnetic wave. Since this start is associated with the appearance of the wave, it can be also associated with the registration of a signal. Later on, the established steady-state oscillations (that are associated, because of the radiation of the atoms, with the refraction factor of the medium) influence the velocity of the electromagnetic wave in the medium. As [2] stresses, even if this velocity is larger than the velocity of light (for refraction factor n < 1, i.e., when the frequency of the incoming wave is slightly higher than that of the atomic oscillators), this does not contradict the theory of relativity, because there is already no signal. Registration of any signal and of its group velocity is associated with a (forced) transient process.

Figure 1: The definition of resonance [1] as linear increase of the amplitude. (The oscillations fill the angle of the envelope.) The infinite process of increase of the amplitude is obtained because of the assumption of losslessness of the system.
A more pragmatic argument for the importance of analysis of the initial transients is that for any application of a steady-state response, especially in modern electronics, we have to know how much time is needed for it to be attained, and this relates, in particular, to the resonant processes. This is relevant to the frequency range in which the device has to be operated.
Contrary to [1], in textbooks on the theory of electrical circuits (e.g., [3][4][5]) and mechanical systems, resonance is defined as the established sinusoidal response with a relatively high amplitude proportional to Q. Only this definition, directly associated with frequency-domain analysis, is widely accepted in the engineering sciences. According to this definition, the envelope of the resonant oscillations (Figure 2) looks even simpler than in Figure 1; it is given by two horizontal lines. This would be so for any steady-state oscillations, and the uniqueness lies just in the fact that the oscillation amplitude is proportional to Q.
After being attained, the steady-state oscillations continue "forever," and the parameters of the "frequency response" can thus be relatively easily measured. Nevertheless, the simplicity of Figure 2 is only seeming, because it is not known when the steady amplitude becomes established, and, certainly, the "frequency response" is not an immediate response to the input signal.

Figure 2: The envelope of resonant oscillations, according to the definition of resonance in [3, 4] and many other technical textbooks.
When is this steady state attained?
Figure 3: The illustration, for Q = 10, of the resonant response of a second-order circuit. Note that we show a case when the excitation is precisely at the resonant frequency; the notion of "purely resonant oscillations" applies here to the whole process, and not only to the final steady-state part.
Thus, we do not know via the definition of [1] when the slope will finish, and we do not know via the definition of [3][4][5] when the steady state is obtained.
We shall call the definition of [1] "the Q-t definition," since the value of Q can be revealed via the duration of the initial/transient process in a real system. The commonly used definition [3][4][5] of resonance in terms of the parameters of the sustained response will be called "the Q-a definition," where "a" stands for "amplitude." Figure 3 illustrates the actual development of resonance in a second-order circuit. The damping parameter γ will be defined in Section 3.
The Q-t and Q-a parts of the resonant oscillations are well seen. For such a not very high Q (i.e., 1/γ not much larger than the period of the oscillations), the period of fair initial linearity of the envelope includes only about half a period of the oscillations, but for a really high Q it can include many periods. The whole curve shown is the resonant response. This response can be obtained, when the external frequency is approaching the self-frequency of the system, from the beats of the oscillations (analytically explained by the formulae found in Section 3) shown in Figure 4.
Note that the usual interpretation is somewhat different. It just says that the linear increase of the envelope, shown in Figure 1, can be obtained from the first beat of the periodic beats observed in a lossless system. Contrary to that, we observe the beats in a system with losses, and after adjustment of the external frequency we obtain the whole resonant response shown in Figure 3.

Figure 4: Possible establishing of the situation shown in Figure 3 through beats while adjusting the frequency. We can interpret resonance as "filtration" of the beats when the resonant frequency is found.
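The actual development shown in Figure 3 (the Q-t growth followed by Q-a saturation) can be reproduced numerically. The following minimal sketch is ours, not part of the classical treatment; the values ω₀ = 1, γ = 0.05 (i.e., Q = 10) are assumed purely for illustration. It integrates the second-order equation with zero initial conditions and compares the early envelope with the saturated one:

```python
import math

def simulate(omega0=1.0, gamma=0.05, omega=1.0, t_end=200.0, dt=0.001):
    # x'' + 2*gamma*x' + omega0**2 * x = sin(omega*t), zero initial conditions
    x, v, t = 0.0, 0.0, 0.0
    out = []
    while t < t_end:
        a = math.sin(omega * t) - 2.0 * gamma * v - omega0**2 * x
        v += a * dt              # semi-implicit Euler step
        x += v * dt
        t += dt
        out.append((t, x))
    return out

resp = simulate()
early = max(abs(x) for t, x in resp if t < 5.0)    # Q-t part: envelope still rising
late = max(abs(x) for t, x in resp if t > 150.0)   # Q-a part: saturated steady state
print(early, late)
```

With these assumed values the saturated amplitude is several times the early one, and the crossover between the two regimes occurs on the time scale 1/γ.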
Our treatment of the topic of resonance for teaching purposes is composed of three main parts shown in Figure 5. The first part briefly recalls the traditional "phasor" material relevant only to the Q-a part, which is necessary for introduction of the notations. The next part includes some simple, though usually omitted, arguments showing why the phasor analysis is insufficient. Finally, the third part includes the new tool, which is complementary to the classical approach of [1], and leads to a nontrivial generalization of the concept of resonance.
Our notations need minor comments. As is customary in electrical engineering, the notation for √−1 is j. The small italic Latin "v", v, is a voltage in the time domain (i.e., a real value); V denotes a phasor, that is, a complex number in the frequency domain. λ is the dummy variable of integration in a definite integral of the convolution type. It is measured in seconds, and the difference t − λ, where t is time, often appears.

Some Advice to the Teacher
First, we deal here with a lot of pedagogical science: in principle the issues are not new, but they are often missed in the classroom; as far as we know, no such complete scheme of the necessary arguments for teaching resonance exists. Perhaps this is because some issues indeed require serious revisiting, while time is often limited due to overloaded teaching plans and schedules. That the results of this "economy" are not bright is seen first of all from the already mentioned fact that electrical engineering (EE) students often learn resonance only via phasors and are not concerned with the time needed for the very important steady state to be established. The resonance phenomenon is so physically important that it is taught to technical students many times: in mechanics, in EE, in optics, and so forth. However, all this repeated teaching is actually equivalent to the use of phasors, that is, relates only to the established steady state.
Furthermore, the teachers (almost all of them) miss the very interesting possibility to exhibit the power of the convolution-integral analysis for studying the development of a resonant state. In our opinion, this demonstration makes the convolution integral a more interesting tool; this really is one of the best applications of the "graphical convolution," which should not be missed in any program. The convolution outlook well unites the engineers' view of resonance as a steady state with the physicists' view of resonance as energy pumping into a system. The arguments of the graphical convolution also enable one to see easily (before knowing Fourier series) that a nonsinusoidal periodic input wave can cause resonance just as the sinusoidal one does. Thus, these arguments can be used also as an explanation of the physical meaning of the Fourier expansion. Our classroom experience shows that the average student can understand this material and finds it interesting.
Thus, regarding the use of the pedagogical material, we would advise the teacher of EE students to return to the topic of resonance (previously taught via phasors) when the students start with convolution.
Finally, the present work includes some new science, which can also be related to teaching, but perhaps at the graduate level, depending on the level of the students or the university. We mean the generalization of the concept of resonance considered in Section 5. It is logical that if the convolution integral can show resonance (or resonant conditions) directly, not via Fourier analysis, then this "showing" exposes a general definition of resonance. Furthermore, since mathematically the convolution integral can be seen (with a proper writing of the impulse response in the integrand) as a scalar product, it is natural to introduce into the consideration the outlook of Euclidean space.
The latter immediately suggests a geometric interpretation of resonance in functional terms, because it is clear what the condition (here, the resonant one) for optimization of the scalar product of two normed vectors is. As a whole, we simply replace the traditional requirement of equality of some frequencies by the condition of correlation of two time functions, which includes the classical sinusoidal (and simplest-oscillator) case as a particular one.
The geometrical consideration leads to a symmetry argument: since the impulse response h(t) is the only given "vector," any optimal input "vector" has to be similarly oriented; there simply is no other selected direction. The associated writing f_inp ∼ h, often used here just for brevity, precisely means the adjustment of the waveform of f_inp(t) to that of h(t).
It is relevant here that for the weak power losses typical of all resonant systems, the damping of h(t) during the first period can be ignored, which should be a simplifying circumstance for the creation of the periodic f_inp(t). The way of adjustment of f_inp(t) reflects the fact that the Euclidean space can relate to one period.
Both because of the somewhat higher level of the mathematical discussion and because of some connection with the theory of "matched filters," usually treated in special courses (and not discussed here), it seems that this final material should rather be given to graduate students. However, we also believe that a teacher will find here some pedagogical motivation and will be able to convey a more lucid treatment than we have succeeded in doing. Thus, the question regarding the possibility of teaching the generalized resonance to undergraduate students remains open. Some other nontrivial points, deserving pedagogical judgement or analytical treatment, appear already in the use of the convolution. This means the replacement of the weakly damped h(t) of an oscillatory system by the nondamped but cut function h_S(t), shown in Figure 11, and the problem of definition of the damping parameter γ for the tending-to-zero h(t) of a complicated oscillatory circuit. A possible way for the latter can be the observation (this is not yet worked out) of some averages, for example, how the integral of h², or of |h|, over the fixed-length interval (t, t + Δ) decreases with increasing t.

Elementary Approaches
3.1. The Second-Order Equation. The background formulae for both the Q-t and Q-a parts of the resonant response can be given by the Kirchhoff voltage equation for the electrical current i(t) in a series RLC (resistor-inductor-capacitor) circuit driven from a source of sinusoidal voltage with amplitude V_m:

L(di/dt) + Ri + (1/C) ∫₀^t i(λ) dλ = V_m sin ωt. (1)

Differentiating (1) and dividing by L ≠ 0, we obtain

d²i/dt² + 2γ(di/dt) + ω₀² i = (ωV_m/L) cos ωt, (2)

with the damping factor γ = R/2L and the resonant frequency ω₀ = 1/√(LC). For purely resonant excitation, the input sinusoidal function is at frequency ω = ω₀, or at the very close frequency ω_d, as defined below in (6).

The Time-Domain Argument.
The full solution of (2) can be explicitly composed of two terms: the first, denoted as i_h, originates from the homogeneous (h) equation, and the second, denoted as i_fs, represents the finally obtained (fs) periodic oscillations, that is, the simplest (but not the only possible!) particular solution of the forced equation:

i(t) = i_h(t) + i_fs(t). (3)

It is important that the zero initial conditions cannot be fitted by the second term in (3), i_fs(t), continued backward in time to t = 0. (Indeed, no sinusoidal function satisfies both the conditions i(t) = 0 and di/dt = 0 at any point.) Thus, it is obvious that a nonzero term i_h(t) is needed in (3). This term is

i_h(t) = e^(−γt) (K₁ cos ω_d t + K₂ sin ω_d t), (4)

where at least one of the constants K₁ and K₂ is nonzero. Furthermore, it is obvious from (4) that the time needed for i_h(t) to decay is of the order of 1/γ ∼ QT₀ (compare to (9); T₀ = 2π/ω₀ is the period of the oscillations). However, according to the two-term structure of (3), the time needed for i_fs(t) to be established, that is, for i(t) to become i_fs(t), is just the time needed for i_h(t) to decay. Thus, the established "frequency response" is attained only after the significant time of order 1/γ ∼ QT₀.
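The two-term structure of (3) and the decay time of order 1/γ can be checked numerically. In the sketch below (ours, with the assumed illustrative values ω = ω₀ = 1 and γ = 0.05), we simulate the full response with zero initial conditions, subtract the analytically known forced term, and watch the residual, i.e., the homogeneous term, shrink roughly as e^(−γt):

```python
import math

omega0, gamma, omega = 1.0, 0.05, 1.0
# forced (steady-state) term of x'' + 2*gamma*x' + omega0**2 * x = sin(omega*t)
A = 1.0 / math.sqrt((omega0**2 - omega**2)**2 + (2.0 * gamma * omega)**2)
ph = math.atan2(-2.0 * gamma * omega, omega0**2 - omega**2)
x_fs = lambda t: A * math.sin(omega * t + ph)

x, v, t, dt = 0.0, 0.0, 0.0, 0.001
resid = {}                       # max |x - x_fs| on windows around t = k/gamma
while t < 6.0 / gamma:
    acc = math.sin(omega * t) - 2.0 * gamma * v - omega0**2 * x
    v += acc * dt                # semi-implicit Euler step
    x += v * dt
    t += dt
    k = round(t * gamma)         # which multiple of the decay time 1/gamma
    resid[k] = max(resid.get(k, 0.0), abs(x - x_fs(t)))
print(resid[1], resid[5])        # homogeneous term decays roughly like e^(-gamma*t)
```

After about five decay times 1/γ the residual is negligible, i.e., the response has become the forced term alone.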
Unfortunately, this elementary logical argument following from (3) is missed in [3][4][5] and many other technical textbooks that ignore the Q-t part of the resonance and deal directly only with the Q-a part.
However, form (3) is also not optimal here, because it is not explicitly shown that for zero initial conditions not only i_fs(t) but also the decaying i_h(t) is directly proportional to the amplitude (or scaling factor) V_m of the input wave. That is, from the general form (3) alone it is not obvious that, when choosing zero initial conditions, we make the response as a whole (including the transient) proportional to V_m, appearing in (1), that is, a tool for studying the input function, at least in the scaling sense. It would be better to have one expression/term from which this feature of the response is clearly seen. Such a formula appears in Section 4.

The Phasor Analysis of the Q-a Part.
Let us now briefly recall the standard phasor (impedance) treatment of the final Q-a (steady-state) part of a system's response. We can focus here only on the results associated with the amplitude; the phase relations follow straightforwardly from the expression for the impedance [3, 4].
In order to characterize the Q-a part of the response, we use the common notations of [3, 4]: the damping factor of the response γ ≡ R/2L, the resonant frequency ω₀ = 1/√(LC), the quality factor

Q = ω₀/2γ, (5)

and the frequency at which the system self-oscillates:

ω_d = √(ω₀² − γ²) ≈ ω₀. (6)

Note that it is assumed that 4Q² ≫ 1, and thus ω₀ and ω_d are practically indistinguishable. Thus, although we never ignore γ per se, the much smaller value ω₀ − ω_d ∼ γ²/ω₀ can be ignored. When speaking about "precise resonant excitation," we shall mean setting ω with this degree of precision, but when writing ω ≠ ω₀, we shall mean that ω − ω₀ = O(γ), and not O(γ²/ω₀). Larger than O(γ) deviations of ω from ω₀ are irrelevant to resonance.
It is remarkable that however small γ is, it is easy, while working with the steady state, to detect differences of order γ between ω and ω₀, using the resonant curve/response described by (8).
Figure 6 illustrates the resonance curve. Though this figure is well known, it is usually not stressed that since each point of the curve corresponds to some steady state, a certain time is needed for the system to pass from one point of the curve to another, and the sharper the resonance, the more time is needed. The physical process is such that for a small γ the establishment of this response takes a (long) time of the order of 1/γ ∼ QT₀, which is not directly seen from the resonance curve. The relation 1/γ ∼ QT₀ for the transient period should be remembered regarding any application of the resonance curve in any technical device. A mistake caused by assuming a quicker performance for measuring input frequency by means of passing from one steady state to another is mentioned in [2]. This mistake is associated with using only the resonance curve, that is, thinking only in terms of the frequency response.
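The proportionality of the peak of the resonance curve to Q is easy to confirm from the steady-state amplitude formula alone. In this small sketch (ours; a unit resonant frequency is assumed, and the scan range and grid are arbitrary choices), halving γ, i.e., doubling Q, doubles the peak:

```python
import math

def peak_amplitude(omega0, gamma, n_grid=20000):
    # steady-state amplitude of x'' + 2*gamma*x' + omega0**2*x = sin(w*t),
    # scanned over the drive frequency w in (0, 2*omega0)
    best = 0.0
    for i in range(1, n_grid):
        w = 2.0 * omega0 * i / n_grid
        amp = 1.0 / math.sqrt((omega0**2 - w**2)**2 + (2.0 * gamma * w)**2)
        best = max(best, amp)
    return best

p1 = peak_amplitude(1.0, 0.05)    # Q = 10
p2 = peak_amplitude(1.0, 0.025)   # Q = 20: twice as tall (and sharper)
print(p1, p2)
```

The frequency response alone gives no hint that the taller, sharper peak also needs roughly twice the time 1/γ to be established.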

The Use of Graphical Convolution
We pass on to the constructive point, the convolution integral presenting the resonant response, and its graphical treatment. It is desirable for a good "system understanding" of the topic that the concepts of zero input response (ZIR) and zero state response (ZSR), especially the latter, be known to the reader.
Briefly, the ZSR is the partial response of the circuit that satisfies the zero initial conditions. As t → ∞ (and only then), it becomes the final steady-state response, that is, becomes the simplest partial response (whose waveform can often be guessed). The appendix illustrates the concepts of ZIR and ZSR in detail, using a first-order system and stressing the distinction between the forms ZIR + ZSR and (3) of the response.
Our system-theory tools are now the impulse (or shock) response h(t) (or Green's function) and the integral response to f_inp(t) for zero initial conditions:

f_out(t) = ∫₀^t h(λ) f_inp(t − λ) dλ. (10)

The convolution integral (10) is an example of a ZSR, and it is the most suitable tool for understanding the resonant excitation.
It is clear (contrary to (3)) that the total response (10) is directly proportional to the amplitude of the input function.
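This proportionality is worth a direct numerical check. The sketch below (ours) discretizes (10) with an assumed oscillatory impulse response of the form e^(−γt) sin ω₀t (illustrative values γ = 0.05, ω₀ = 1) and verifies that doubling the input amplitude doubles the whole output, transient included:

```python
import math

dt, N = 0.01, 5000
gamma, omega0 = 0.05, 1.0
# assumed impulse response of the form e^(-gamma*t) * sin(omega0*t)
h = [math.exp(-gamma * n * dt) * math.sin(omega0 * n * dt) for n in range(N)]

def zsr(amplitude):
    # discretized convolution (10): f_out(t) = sum_k h(k*dt)*f_inp(t - k*dt)*dt
    f_inp = lambda t: amplitude * math.sin(omega0 * t)
    return [sum(h[k] * f_inp((n - k) * dt) for k in range(n + 1)) * dt
            for n in range(0, N, 50)]

y1, y2 = zsr(1.0), zsr(2.0)
mismatch = max(abs(2.0 * a - b) for a, b in zip(y1, y2))
print(mismatch)   # the whole response, transient included, scales with the amplitude
```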
Figure 7 shows our schematic system. Of course, the system-theory outlook does not relate only to electrical systems; this "block diagram" can mean the influence of a mechanical force on the position of a mass, or of a pressure on a piston, or of temperature at a point inside a gas, and so forth.
Note that if the initial conditions are zero, they are simply not mentioned. If the input-output map is defined solely by h(t) (e.g., when one writes, in the domain of the Laplace variable s, F_out(s) = H(s)F_inp(s)), it is always the ZSR.
In order to treat the convolution integral, it is useful to briefly recall the simple example [5] of the first-order circuit influenced by a single square pulse. The involved physical functions are shown in Figure 8, and the associated "integrand situation" of (10) is shown in Figure 9.

Figure 9: The functions appearing in the integrand of the convolution integral (10). The "block" f_inp(t − λ) is riding (being moved) to the right on the λ-axis as time passes. We multiply the present curves in the interval 0 < λ < t and, according to (10), take the area under the result in this interval. When t < Δ, only the interval (0, t) is relevant to (10). When t > Δ, only the interval (t − Δ, t) is actually relevant, and because of the decay of h(t), f_out(t) becomes decaying.
It is graphically obvious from Figure 9 that the maximal value of f_out(t) is obtained for t = Δ, when the rectangular pulse already fully overlaps with h(λ), but still "catches" the initial (highest) part of h(λ). This simple observation shows the strength of the graphical convolution for qualitative analysis.
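This observation can be verified directly. The following sketch (ours; a first-order h(t) = e^(−t/τ) with assumed τ = 1 and pulse width Δ = 2) evaluates the discretized convolution (10) on a grid of times and locates the maximum of f_out(t):

```python
import math

tau, Delta, dt = 1.0, 2.0, 0.001
h = lambda t: math.exp(-t / tau)                    # first-order impulse response
pulse = lambda t: 1.0 if 0.0 <= t < Delta else 0.0  # single square pulse of width Delta

def f_out(t):
    # graphical convolution (10): area under h(lam)*pulse(t - lam) for 0 < lam < t
    n = int(t / dt)
    return sum(h(k * dt) * pulse(t - k * dt) for k in range(n)) * dt

ts = [i * 0.1 for i in range(1, 61)]
peak_t = max(ts, key=f_out)
print(peak_t)   # maximum where the pulse fully overlaps the highest part of h
```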

The (Resonant) Case of a Sinusoidal Input Function Acting on the Second-Order System.

For the second-order system with weak losses, we use in (10) an impulse response of the form h(t) ∼ e^(−γt) sin ω₀t and, as before, a sinusoidal input at the resonant frequency. Figure 10 builds the solution (10) step by step: first our h(λ) and f_inp(t − λ) (compare to Figure 9), then the product of these functions, and finally the integral, that is, f_out(t) itself.
On the upper graph, the "train" f_inp(t − λ) travels to the right, starting at t = 0; on the middle graph we have the integrand of (10). The area under the integrand's curve, that is, f_out(t), appears as the final result on the third graph.
In view of the basic role of the overlapping of f_inp(t − λ) with h(λ), it is worthwhile to look forward a little and compare Figure 10 to Figures 14 and 15, which relate to the case of an input square wave. For the upper border of integration in (10) equal to t = k(π/ω₀), and for very weak damping of h(t), the situations being compared are very similar. The distinction is that, in order to obtain the extremes of f_out(t), we integrate in Figure 15 the absolute values of several sinusoidal pieces (half-waves), while in Figure 10 we integrate the squared sinusoidal pieces. Since we integrate, in each case, k similar pieces (all positive, giving a maximum of f_out(t), or all negative, giving a minimum), the result of each such integration is directly proportional to k.
Thus, if γ = 0, when h(t) is strictly periodic, it follows from the periodicity of f_inp(t) that the extreme values obey |f_out(kπ/ω₀)| ∝ k for any integer k, which is a linear increase in the envelope for the two very different input waves, in the spirit of Figure 1.
For a small but finite γ, 0 < γ ≪ ω₀, the initial linear increase holds with high precision only for the first few k, when t ∼ kT₀ ≪ 1/γ, that is, γt ≪ 1, or e^(−γt) ≈ 1. (The damping of h(t) may be ignored for these t.) Observe that the finally obtained periodicity of f_out(t) follows only from that of f_inp(t), while the linear increase requires periodicity of both f_inp(t) and h(t).
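The linear growth of the extremes in the lossless case is easy to confirm by discretizing (10) with γ = 0 (an illustrative sketch of ours; ω₀ = 1 assumed):

```python
import math

omega0, dt = 1.0, 0.001

def f_out(t):
    # discretized (10) with gamma = 0: h and f_inp are both sin(omega0*t)
    n = int(round(t / dt))
    return sum(math.sin(omega0 * k * dt) * math.sin(omega0 * (t - k * dt))
               for k in range(n)) * dt

peaks = [abs(f_out(k * math.pi / omega0)) for k in (1, 2, 3, 4)]
ratios = [p / peaks[0] for p in peaks]
print(ratios)   # close to [1, 2, 3, 4]: linear growth of the envelope
```

(Analytically, the convolution of sin t with itself is (1/2)(sin t − t cos t), whose extremes at t = kπ are kπ/2, i.e., exactly proportional to k.)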
The above discussion suggests the following simplification of the impulse response of the circuit, useful for the analysis of resonant systems and a useful preparation for the rest of the analysis.

A Simplified h(t) and the Associated Envelope of the Oscillations.

Considering that the parameter 1/γ appears above (and in Figure 3) as a symbolic border for the linearity, let us take a constructive step by suggesting a geometrically clearer situation, in which this border is artificially made sharp by introducing an idealization/simplification of h(t), which will be denoted as h_S(t).

In this idealization, which seems no less reasonable and suitable for qualitative analysis than the usual use of the vague expression "somewhere at t of order 1/γ", we replace h(t) by a finite "piece" of nondamping oscillations of total length 1/γ.
We thus consider that however weak the damping of h(t) is, for sufficiently large t, when t ≫ 1/γ ∼ QT₀, we have e^(−γt) ≪ 1, that is, the oscillations become strongly damped with respect to the first oscillation. For t > 1/γ the further "movement" of the function f_inp(t − λ) to the right (see Figure 10 again) becomes less effective: the exponentially decreasing tail of the oscillating h(t) influences (10), via the overlap, more and more weakly, and as t → ∞, f_out(t) ceases to increase and becomes periodic.
We simplify this qualitative vision of the process by assuming that up to t = 1/γ there is no damping of h(t), but, starting from t = 1/γ, h(t) completely disappears. That is, we replace the function e^(−γt) sin ω₀t by the function h_S(t) = [u(t) − u(t − 1/γ)] sin ω₀t, where u(t) is the unit-step function. The factor u(t) − u(t − 1/γ) here is a "cutting window" for sin ω₀t. This is the formal writing of the "piece" of the nondamping self-oscillations of the oscillator. See Figure 11.
For h_S(t), it is obvious that when the "train" f_inp(t − λ) crosses, in Figure 10, the point t = 1/γ, the graphical construction of (10), that is, f_out(t), becomes a periodic procedure. Figuratively speaking, we can compare h_S(t) to a railway station near which the infinite train f_inp(t − λ) passes: some wagons go away, but similar new ones enter, and the total overlap is repeated periodically.
The same is also analytically obvious: setting, for t > 1/γ, the upper limit of integration in (10) as 1/γ, we have, because of the periodicity of f_inp(⋅),

f_out(t) = ∫₀^(1/γ) h_S(λ) f_inp(t − λ) dλ,

which is a periodic function of t.
As illustrated by Figure 12, which is an approximation to the envelope shown in Figure 3, the envelope of the output oscillations becomes completely saturated for t > 1/γ.
Figure 12 clearly shows that both the amplitude of the finally established steady-state oscillations and the time needed for establishing these oscillations are proportional to Q, while the initial slope is obviously independent of Q.
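Both statements can be checked with the cut response h_S(t). In the sketch below (ours; ω₀ = 1 assumed, and for t > 1/γ the upper integration limit is fixed at 1/γ, as in the text), halving γ, i.e., doubling Q, roughly doubles the saturated amplitude:

```python
import math

omega0 = 1.0

def steady_amp(gamma, dt=0.005):
    # h_S(t) = [u(t) - u(t - 1/gamma)] * sin(omega0*t): no damping, then cut
    L = 1.0 / gamma
    n = int(L / dt)
    def f_out(t):
        # for t > 1/gamma the upper limit of (10) stays fixed at 1/gamma
        return sum(math.sin(omega0 * k * dt) * math.sin(omega0 * (t - k * dt))
                   for k in range(n)) * dt
    t0 = 2.0 * L                 # well past the saturation border
    return max(abs(f_out(t0 + i * 0.1)) for i in range(64))

a10 = steady_amp(0.05)           # Q = 10
a20 = steady_amp(0.025)          # Q = 20
print(a10, a20)
```

The saturated amplitude is close to 1/(2γ), i.e., proportional to Q, while the initial slope of the envelope (not computed here) is set by the input alone.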
It is important that h_S(t) can also be constructed for more complicated functions h(t) (for which it may be, for instance, h(t + T/2) ≠ −h(t)), and also then the graphical convolution is more easily formulated in terms of h_S(t). As an example relevant to theoretical investigations, we note that, using the periodicity of f_inp(t), for any oscillatory h(t) (and h_S(t)) we can easily reduce the analysis of the interval (0, 1/γ) to that of a small interval, as was done for (0, π/ω₀) in Figure 10.

Figure 10: Graphically obtaining the resonant response for a second-order oscillatory system and a sinusoidal input, according to (10). The envelope (not shown) has to pass via the maxima and minima of f_out(t) appearing in the last graph.

Figure 11: The simplified h(t) (named h_S(t)): there is no damping at 0 < t < 1/γ, but for t > 1/γ it is identically zero; that is, we first ignore the damping of the real h(t) and then cut it completely. This idealization expresses the undoubted fact that the interval 0 < t < 1/γ is dominant, and it makes the treatment simpler. A small change in 1/γ that makes the oscillatory part more pleasing by including just the (closest) integer number of half-waves, as shown here, may be allowed; when using h_S(t) in the following, we shall assume for simplicity that this is the case.

Nonsinusoidal Input Waves.
The advantage of the graphical convolution is not so much in the calculational aspect. It is an easy procedure for the imagination (insight), and it is a flexible tool in the qualitative analysis of time processes. The graphical procedure makes it absolutely clear that the really basic point for a resonant response is not sinusoidality, but periodicity of the input function. Not being derived from the spectral (Fourier) approach, this observation heuristically completes that approach and may be used (see the following) in an introduction to Fourier analysis. Thus, let us now take f_inp(t) as the rectangular wave shown in Figure 13 and follow the way of Figures 9 and 10 in the sequential Figures 14 and 15.
Here too, the envelope of the resonant oscillations can be well outlined by considering f_out(t) at the instances t_k = kπ/ω₀, first of all π/ω₀, 2π/ω₀, and 3π/ω₀, for which we respectively have the first maximum, the first minimum, and the second maximum of f_out(t).
There are absolutely the same qualitative (geometric) reasons for resonance here, and Figure 15 explains that if the damping of h(t) is weak, that is, if the first sequential half-waves of f_inp(t − λ)h(λ) are similar, then the respective extreme values of f_out(t_k) form a linear increase in the envelope.
Figure 16 shows f_out(t) at these extreme points. Though it is not easy to find the precise f_out(t) everywhere, for the envelope of the oscillations, which passes through the extreme points, the resonant increase in the response amplitude is absolutely clear.
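That a square wave of the correct period resonates just as the sinusoid does can be confirmed by discretizing (10) (an illustrative sketch of ours; ω₀ = 1 assumed, and damping ignored over these first periods):

```python
import math

omega0, dt = 1.0, 0.001
h = lambda t: math.sin(omega0 * t)     # damping ignored over the first periods
def square(t):                         # square wave with the same period as h
    return 1.0 if math.sin(omega0 * t) >= 0.0 else -1.0

def f_out(t):                          # discretized convolution (10)
    n = int(round(t / dt))
    return sum(h(k * dt) * square(t - k * dt) for k in range(n)) * dt

peaks = [abs(f_out(k * math.pi / omega0)) for k in (1, 2, 3)]
print(peaks)   # growing extremes: the square wave resonates like the sinusoid
```

Each extreme adds the area of one more half-wave of |sin|, so the envelope again grows linearly, exactly as in the sinusoidal case.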
Figures 10, 14, 15, and 16 make it clear that many other waveforms with the correct period would likewise cause resonance in the circuit. Furthermore, for the overlapping to remain good, we can change not only f_inp(t) but also h(t). Making the form of the impulse response more complicated means making the system's structure more complicated, and thus graphical convolution is also a valuable starting point for studying resonance in complicated systems in terms of the waveforms. This point of view will be realized in Section 5, where we generalize the concept of resonance. Thus, using the algorithm of the graphical convolution, we make two more methodological steps: a pedagogical one in Section 4.4 and a constructive one in Section 5.

Let Us Try to "Discover" the Fourier Series in Order to Understand It Better.

The conclusion regarding the possibility of obtaining resonance using a nonsinusoidal input reasonably means that when pushing a swing with a child on it, it is unnecessary for the father to develop a sinusoidal force. Moreover, the nonsinusoidal input even has some obvious advantages. While the sinusoidal input wave leads to resonance only when its frequency has the correct value, exciting resonance by means of a nonsinusoidal wave can be done at very different frequencies (one need not kick the swing at every oscillation), which is, of course, associated with the Fourier expansion of the force.
Let us see how, using graphical convolution, we can reveal the harmonic structure of a function while still not knowing anything about Fourier series. For that, let us continue with the case of a square-wave input, but now take such a waveform with a period 3 times longer than the period of self-oscillations of the oscillator. Consider Figure 17.
This time, the more distant instances t = 3π/ω₀, 6π/ω₀, and 9π/ω₀ are obviously most suitable for understanding how the envelope of the oscillations looks. One sees that also for T = 3T₀ (T being the period of the input wave) the same geometric "resonant mechanism" exists, but the transfer from T = T₀ to T = 3T₀ makes the excitation significantly less intensive. Indeed, see Figure 18, comparing the present extreme case of t = 3π/ω₀ to the extreme case of t = π/ω₀ of Figure 15.
We see that each extreme overlap is now only one-third as effective as the respective maximum overlap was in the previous case. That is, at t = 3π/ω₀ we now have what we previously had at t = π/ω₀, which means a much slower increase of the amplitude in time.
Since f_out(t) now increases at a much slower rate, but 1/γ is the same (i.e., the transient lasts the same time), the amplitude of the final periodic oscillations is correspondingly smaller, which means weaker resonance in terms of frequency response.
Let us compare the two cases of the square wave thus studied to the initial case of the sinusoidal function. The case of the "nonstretched" square wave corresponds to the input sin ω₀t, while, according to the conclusions derived in Figure 18, the case of the "stretched" wave corresponds to the input (1/3) sin ω₀t. We thus simply (and roughly) reduce the change in period of the nonsinusoidal function to an equivalent change in amplitude of the sinusoidal function.
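The factor 1/3 can be confirmed numerically by comparing, at the same elapsed time, the extreme of the response to the stretched wave (period three times the oscillator period) with that for the matched wave. This is our illustrative sketch, with ω₀ = 1 assumed and damping ignored:

```python
import math

omega0, dt = 1.0, 0.001
T0 = 2.0 * math.pi / omega0                 # period of the self-oscillations
h = lambda t: math.sin(omega0 * t)          # damping ignored during the transient

def square(t, T):                           # square wave of period T
    return 1.0 if (t % T) < T / 2.0 else -1.0

def extreme(T, k):                          # |f_out| at t = k*pi/omega0, via (10)
    t = k * math.pi / omega0
    n = int(round(t / dt))
    return abs(sum(h(j * dt) * square(t - j * dt, T) for j in range(n)) * dt)

r = extreme(3.0 * T0, 9) / extreme(T0, 9)   # same elapsed time, stretched vs matched
print(r)
```

The ratio is close to 1/3: over each interval of length 3π/ω₀, the stretched wave overlaps usefully with only one of the three half-waves of h.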
Let us now try, as a tribute to Joseph Fourier, to speak not about the same circuit influenced by different waves, but about the same wave influencing different circuits. Instead of increasing T, we could decrease T₀, thus testing the ability of the same square wave to cause resonance in different oscillatory circuits. For the new circuit the graphical procedure obviously remains the same, and the ratio 1/3 of the resonant amplitudes in the compared cases of T/T₀ = 3 and T/T₀ = 1 remains.
In fact, we are thus testing the square wave using two simple oscillatory circuits of different self-frequencies. Namely, connecting in parallel to the source of the square-wave voltage two simple oscillatory circuits with self-frequencies ω₀ and 3ω₀, we reveal the action of the square wave for one of them as that of sin ω₀t and for the other as that of (1/3) sin 3ω₀t.
Let us check this result by using the arguments in the inverse order. The first sinusoidal term of series (17) roughly corresponds to the square wave with ω = ω₀ (i.e., T = T₀), and in order to make the second term resonant we have to change the self-frequency of the circuit to ω₀ = 3ω, that is, make ω = (1/3)ω₀, or T = 3T₀, which is our second "experiment," in which the intensity of the resonant oscillations, reduced to 1/3, is indeed obtained, in agreement with (17).
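The 1/3 ratio can also be seen in the Fourier coefficients themselves. A short numeric computation of b_n = (2/T) ∫_0^T f(t) sin(nωt) dt for a unit square wave (assuming that series (17) is the standard expansion (4/π)(sin ωt + (1/3) sin 3ωt + ⋯)) recovers b₁ ≈ 4/π, b₃ ≈ 4/(3π), and b₃/b₁ ≈ 1/3:

```python
import numpy as np

T = 2 * np.pi
n_pts = 200_000
t = (np.arange(n_pts) + 0.5) * (T / n_pts)  # midpoint grid over one period
sq = np.sign(np.sin(t))                     # unit square wave

def b(n):
    # Fourier sine coefficient b_n = (2/T) * ∫_0^T sq(t) sin(n*t) dt
    return (2.0 / T) * np.sum(sq * np.sin(n * t)) * (T / n_pts)

print(b(1), b(3), b(3) / b(1))  # ≈ 1.273 (= 4/pi), 0.424 (= 4/(3*pi)), 0.333
```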
It is possible to similarly graphically analyze a triangular wave at the input, or a sequence of periodic pulses of an arbitrary form (more suitable for the father kicking the swing) with a period that is an integer multiple of T₀.
One notes that figures such as Figure 18 are relevant to the standard integral form of the Fourier coefficients. However, on the way of graphical convolution this similarity arises only for the extremes (f_out(t))_max = |f_out(t_k)|, and this way is independent and visually very clear.

A Generalization of the Definition of Resonance in Terms of Mutual Adjustment of f_inp(t) and h(t). After working out the examples of the graphical convolution, we are now in a position to formulate a wider t-domain definition of resonance.
In terms of the graphical convolution, the analytical symmetry of (10),

∫_0^t f_inp(t − λ) h(λ) dλ = ∫_0^t h(t − λ) f_inp(λ) dλ, (18)

means that besides observing the overlapping of f_inp(t − λ) and h(λ), we can observe the overlapping of h(t − λ) and f_inp(λ).
In the latter case, the graph of h(−λ) starts to move to the right at t = 0, as was the case with f_inp(−λ).
Though equality (18) is a very simple mathematical fact, similar to the equalities ab = ba and (a, b) = (b, a), in the context of graphical convolution there is a nontriviality in the motivation given by (18), because the possibility of moving h(−λ) also suggests changing the form of h(⋅), that is, starting to deal with a complicated system (or structure) to be resonantly excited. We thus shall try to define resonance, that is, the optimization of the peaks of f_out(t) (or of its r.m.s. value), in terms of more arbitrary waveforms of h(λ), while the case of sinusoidal h(⋅), that is, of the simple oscillator, appears as a particular one.

The Optimization of the Overlapping of f(λ) ≡ f_inp(t − λ) and h(λ) in a Finite Interval, and Creation of the Optimal Periodic f_inp(t). Let us continue to assume that the losses in the system are small, that is, that h(λ) decays so slowly that we can speak about at least a few oscillatory spikes (labeled by k) through which the envelope of the oscillations passes during its linear increase.
In view of the examples studied, the extreme points of f_out(t) are obtained when the t_k are the zero-crossings of h(t), because only then can the overlapping of f_inp(t_k − λ) with h(λ) be made maximal.
Comment. Assuming that the parameters of the type γ/ω₀ of the different harmonic components of h(t) are different, one sees that for a nonsinusoidal decaying h(t) the distribution of the zero-crossings of h(t) can change as this function decays, and thus for a periodic f_inp(t) the condition considered for k ≫ 1 need not be satisfied in the whole interval of integration (0 < λ < t_k) related to the case of t_k ≫ T. However, since both the amplitude-type decay and the change in the intervals between the zeros are defined by the same very small damping parameters, the resulting effects of imprecision are of the same smallness. Neither problem is faced when we use the "generating interval" and employ h_T(λ) instead of the precise h(λ). The fact that any use of h_T(λ) is anyway associated with an error of order Q⁻¹ ∼ γT points to the expected good precision of the generalized definition of resonance.
Thus, {t_k}, measured with respect to the time origin, that is, with respect to the moment when f_inp(t) and h(t) arise, is assumed to be given by the known h(t). Of course, we assume the system to be an oscillatory one, for the parameters t_k and T of our graphical constructions to be meaningful.
Having the linearly increasing sequence |f_out(t_k)| = kD belonging to the envelope of the oscillations, and wishing to increase the finally established oscillations, we obviously have to increase the factor D.
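The linear growth |f_out(t_k)| = kD through the spikes can be illustrated numerically. In the sketch below (assumed parameters: ω₀ = 1, h(λ) = sin λ with the damping ignored, and a unit square wave of the matching period 2π at the input), the spikes at the zero-crossings t_k = kπ of h grow as kD with D = 2:

```python
import numpy as np

def f_out(t, n=200_000):
    # f_out(t) = ∫_0^t f_inp(t - lam) h(lam) dlam for the unit square wave of
    # period 2*pi and h(lam) = sin(lam) (omega0 = 1, damping ignored)
    lam = (np.arange(n) + 0.5) * (t / n)
    return np.sum(np.sign(np.sin(t - lam)) * np.sin(lam)) * (t / n)

D = 2.0  # one maximal overlap: ∫_0^pi sin(lam) dlam = 2
for k in (1, 2, 3, 4):
    print(k, abs(f_out(k * np.pi)))  # ≈ k * D: 2, 4, 6, 8
```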
However, since D and the whole intensity of f_out(t) can be increased not only by the proper waveform of f_inp(⋅) but also by an amplitude-type scaling factor, for the general discussion some norm for f_inp(⋅) has to be introduced.
For the definitions of the norm and the scalar products of the functions appearing during the adjustment of f_inp(t) to h(t), it is sufficient to consider a certain (for a fixed, not too large k) interval (t_k, t_{k+1}), the one in which we can calculate D. This interval can be simply (0, t_1), or (0, T).
Respectively, the norm of a function is taken as

‖f‖ = (∫_{t_k}^{t_{k+1}} f²(λ) dλ)^{1/2}, (22)

and the scalar product of two functions as

(f, h) = ∫_{t_k}^{t_{k+1}} f(λ) h(λ) dλ. (23)

With these definitions, the set of functions defined for the purpose of the optimization in the interval (t_k, t_{k+1}) forms an (infinite-dimensional) Euclidean space.
For the quantities that interest us, we have from (23) that D = (f, h). Not ascribing to "f(⋅)" the index "k" is justified by the fact that the particular interval (t_k, t_{k+1}) to be actually used is finally chosen very naturally.
The basic relation |f_out(t_k)| = kD means that any local extremum of f_out(t) is a sum of such scalar products as (23).
Observe that the physical dimensions of ‖ ⋅ ‖ and (⋅, ⋅) are different. Observe also from (22) and (23) that

(f, h) ≤ ‖f‖ ‖h‖, (24)

and that if we take f₂ ∼ f₁, that is, f₂ = Kf₁ with a constant K, then (f₂, h)/‖f₂‖ = (f₁, h)/‖f₁‖. Thus, we finally have the following two points.
(a) We find the proper interval (t_k, t_{k+1}) for creating the optimal periodic f_inp(t).
(b) The proportionality f ∼ h in this interval is the optimal case of the influence on an oscillatory circuit by f_inp(t).
Items (a) and (b) are our definition of the generalized resonance. The case of sinusoidal h(t) is obviously included, since the proportionality to h(λ) requires f(λ) also to be sinusoidal, of the same period.
This mathematical situation is the constructive point, but the discussion in Sections 5.3 and 5.5 of the optimization of D from a more physical point of view is useful, leading us to a very compact formulation of the extended resonance condition. However, let us first of all use the simple oscillator to check how essential the direct proportionality of f to h is, that is, what the quantitative loss may be when the waveform of f(λ) differs from that of h(λ) in the chosen interval (t_k, t_{k+1}).

An Example for a Simple Oscillator. Let us compare the cases of the square (Figures 13, 14, and 15) and sinusoidal (Figure 10) input waves of the same period, for D defined in the interval (0, T/2 = π/ω₀). Of course, the norms of the input functions have to be equal for the comparison of the respective responses. (Note that in the consideration of the above figures equality of the norms was not provided, and thus the following result cannot be derived from the previous discussions.) Let the height of the square wave be 1. Then f²_inp(t − λ) = 1 everywhere, and according to (22) the norm is obtained as √(π/ω₀). For obtaining the same norm for a sinusoidal input, we write it as A sin ω₀t and find A > 0 so that ‖A sin ω₀t‖ = √(π/ω₀), that is, A = √2. Because of the symmetry of the sinusoidal and square-wave inputs, in both cases f_inp(λ) = f_inp(t − λ) ≡ f(λ) in the interval (0, π/ω₀). For either of the input waveforms the norm of f_inp(λ) now equals √(π/ω₀), and for h(λ) = sin ω₀λ of the simple oscillator (the damping in this interval is ignored) we have, according to (24) and (32),

D ≤ ‖f‖ ‖h‖ = √(π/ω₀) ⋅ √(π/(2ω₀)) = π/(√2 ω₀)

as the upper bound. Thus, while for the response to the square wave we have only D = 2/ω₀, for the response to the input A sin ω₀t we have, for the A found, the upper bound itself (meaning "cos θ ≤ 1", where the equality is obtained only when the vectors are mutually proportional ("θ = 0"), that is, similarly (maybe oppositely) directed). The latter relation is obvious, in particular, because while rotating a usual vector (say, a pencil) to direct it in parallel to another vector (another pencil), the length of this vector is unchanged. This point is much more delicate regarding the norm of a function being adjusted to h(t), which is the "rotation" of the "vector" in the function space. Since the waveform of the function is being changed, its norm can also be changed.
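The numbers of this example are easy to verify numerically (with the assumed normalization ω₀ = 1, so the interval is (0, π)): the unit square wave and √2 sin λ indeed have the same norm √π; against h(λ) = sin λ the square wave gives D = 2, while the equal-norm sinusoid attains the Cauchy-Schwarz bound ‖f‖‖h‖ = π/√2 ≈ 2.22, so the square wave reaches about 90% of the optimum:

```python
import numpy as np

L = np.pi            # the interval (0, pi), i.e., (0, pi/omega0) with omega0 = 1
n = 200_000
lam = (np.arange(n) + 0.5) * (L / n)
dlam = L / n
h = np.sin(lam)      # h(lam) of the simple oscillator, damping ignored

def dot(f, g):       # scalar product (23) on the interval
    return np.sum(f * g) * dlam

def norm(f):         # norm (22)
    return np.sqrt(dot(f, f))

sq = np.ones_like(lam)           # square-wave input of height 1 on (0, pi)
sn = np.sqrt(2.0) * np.sin(lam)  # sinusoid of the same norm (A = sqrt(2))

print(norm(sq), norm(sn))        # both ≈ sqrt(pi) ≈ 1.7725
print(dot(sq, h))                # D for the square wave ≈ 2
print(dot(sn, h))                # D for the sinusoid ≈ pi/sqrt(2) ≈ 2.2214
print(norm(sq) * norm(h))        # the Cauchy-Schwarz bound, also ≈ 2.2214
```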
Since our "vectors" are the time functions, and the functional analog of (40) is (f, h) = ‖f‖ ‖h‖ cos θ (for simplicity, we sometimes write t instead of t − λ), we very simply obtain, by the mathematical equivalence of the function and vector spaces, condition (31); that is, only an f(λ) that is directly proportional to h(λ) can give an extreme value of D.

Comments.
One can consider f ∼ h to be both a generalization of and a direct analogy to the condition ω = ω₀ of the standard definitions of [1, 3-5]. Then both of the conditions, ω = ω₀ and f ∼ h, appear in the associated theories as sufficient conditions for obtaining resonance in a linear oscillatory system. The norms become important at the next step, namely, regarding the theoretical conditions of the system's linearity, which always include some limitations on the intensity of the function/process in any application. For applications, the real properties of the physical source of f_inp(t) (e.g., a voltage source), whose power will here be proportional to f²_inp, obviously require f_inp to be limited.
The requirement of preserving the norm ‖f(λ)‖ during the realization of f(λ) ∼ h(λ) also necessarily originates from the practically useful formulation of the resonance problem as an optimization problem that requires calculation of the optimized peaks (or r.m.s. value) of f_out(t).
If h(t + T/2) ≠ −h(t), then the interval in which the scalar products (i.e., the Euclidean functional space) are defined has to be taken over the whole period of h(t), that is, as D = ∫_0^T f_inp(t − λ) h(λ) dλ. (Periodicity of f_inp(t) with the period T of h(t) is a necessary condition.) The interval in which we define D can be named the "generating" interval.
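As a sketch of this point, take an h(t) that violates the half-wave antisymmetry, say h(t) = sin t + 0.5 sin 2t (an assumed illustrative choice, not from the text), and compare D over the whole period for several equal-norm inputs; the input proportional to h wins:

```python
import numpy as np

T = 2 * np.pi
n = 200_000
lam = (np.arange(n) + 0.5) * (T / n)
d = T / n
h = np.sin(lam) + 0.5 * np.sin(2 * lam)  # h(t + T/2) != -h(t): whole period needed

def dot(f, g):
    return np.sum(f * g) * d

def norm(f):
    return np.sqrt(dot(f, f))

# equal-norm (unit-norm) candidate inputs over the generating interval (0, T)
cands = {
    "f ~ h": h / norm(h),
    "sin": np.sin(lam) / norm(np.sin(lam)),
    "square": np.sign(np.sin(lam)) / norm(np.sign(np.sin(lam))),
}
D = {name: dot(f, h) for name, f in cands.items()}
print(D)  # the input proportional to h attains the largest D (= ||h||)
```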
We can finally write the optimal f_out(t) that results from the optimal f_inp(t) in terms of the function h_(0,T)periodic(t), which is h(t) in the generating interval, periodically continued for t > T.
We turn now to an informal "physical abstraction," suggested by the comparison of the two Euclidean spaces. This abstraction leads us to a very compact formulation of the generalized definition of resonance.

A Symmetry Argument for Formulation of the Generalized Definition of Resonance. For the usual vector space we have the well-developed vector analysis, in which symmetry arguments are widely employed. The mathematical equivalence of the two spaces under consideration suggests that such arguments, as far as they are related to the scalar products, are legitimate also in the functional space.
Recall the simple field problem in which a scalar field (e.g., an electrical potential) φ(r) is given by means of a constant vector A, and it is asked in what direction one should go in order to have the steepest change of φ(r).
As the methodological point, one need not know how to calculate the gradient. It is just obvious that only A, or a proportional vector, can show the direction of the gradient, since there is only one fixed vector given, and it is simply impossible to "construct" from the given data any other constant vector defining another direction for the gradient.

Figure 19: The generalized input includes both generator inputs and initial conditions. (This becomes trivial when the Laplace transform is used to transform a system scheme.)

Considering the superposition of the output function of a linear system for such an input, we obtain the associated structure of the solution of the linear system in a form that is different from (3). Respectively, the output function is written as ZIR(t) + ZSR(t).
For ZIR: the homogeneous equation (zero generator inputs) plus the needed initial conditions. For ZSR: the given equation with zero initial conditions.
Figure 19 schematically illustrates this presentation of the linear response.
Figure 7 reduces Figure 19 to what we actually need for processes with zero initial conditions. The logical advantage of the presentation ZIR + ZSR over (3) becomes clear in terms of the superposition.
The ZSR includes both the decaying transient, needed to satisfy the zero initial conditions, and the final steady state, given in its general form by integral (A.8). The oscillations shown in Figure 3 are examples of the ZSR.
The separation of the solution function into the ZIR and the ZSR is advantageous, for example, when the circuit is used to analyze the input signal, that is, when we wish to work only with the ZSR, and nonzero initial conditions would be just redundant inputs.
The convolution integral (10) is the ZSR. When speaking about a system with constant parameters having one input and one output, the Laplace transform of ZSR(t) equals H(s)F_inp(s), where H(s), the "transfer function" of the system, is the Laplace transform of h(t). Each time we speak about a transfer function, we speak about the ZSR, that is, about zero initial conditions.
It is easy to write f_out(s) for our problem. Using the known formula for the Laplace transform of a periodic function, and setting the optimal f_inp(t − λ) ≡ −h(λ), λ ∈ [0, T], that is, f_inp(λ) ≡ −h(t − λ), λ ∈ [0, T], where T is the period (in the sense of the generating interval) of h(t), we have, for the periodically continued f_inp(t), the Laplace transform of our f_out(t) as (see (43))

f_out(s) = H(s) F_inp(s) = H(s) (1/(1 − e^{−sT})) ∫_0^T f_inp(t) e^{−st} dt.
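The periodic-transform formula used here can be sanity-checked numerically. The sketch below (unit square wave, assumed values T = 2π and s = 0.5) compares the one-period formula F(s) = (1 − e^{−sT})⁻¹ ∫_0^T f(t)e^{−st} dt with a truncated direct integral over (0, ∞) and with the known closed form (1/s) tanh(sT/4) for the square wave:

```python
import numpy as np

s, T = 0.5, 2 * np.pi

def integral(t0, t1, n):
    # midpoint-rule ∫ f(t) e^{-st} dt for the unit square wave f(t) = sign(sin t)
    t = t0 + (np.arange(n) + 0.5) * ((t1 - t0) / n)
    return np.sum(np.sign(np.sin(t)) * np.exp(-s * t)) * ((t1 - t0) / n)

formula = integral(0.0, T, 20_000) / (1.0 - np.exp(-s * T))
direct = integral(0.0, 40 * T, 800_000)   # truncated ∫_0^∞ f(t) e^{-st} dt
closed = np.tanh(s * T / 4) / s           # known Laplace transform of the square wave
print(formula, direct, closed)            # all ≈ 1.3116
```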

Figure 2 :
Figure 2: The envelope of resonant oscillations, according to the definition of resonance in [3, 4] and many other technical textbooks. When is this steady state attained?

Figure 5 :
Figure 5: The methodological points regarding the study of resonance in the present work.

Figure 8 :
Figure 8: The functions for the simplest example of convolution. (A first-order circuit with an input block pulse.)

Figure 12 :
Figure 12: The envelope of f_out(t) obtained for the simplified h(t) shown in Figure 11.

Figure 13: Figure 14:
Figure 13: The rectangular wave at the input.

Figure 16: Figure 17:
Figure 16: Linear increase of the envelope (idealized in the lossless situation) for the square-wave input. Compare to Figures 1, 3, and 12.
Figure 18: Because of the mutual compensation of two half-waves of h(λ), only each third half-wave of h(λ) contributes to the extreme value of f_out(t), and the maximal overlaps between h(λ) and f_inp(λ) are now one-third as effective as before. The reader is asked (this will soon be needed!) to similarly consider the cases of T = 5T₀, and so forth.
Analogy with the Usual Vectors. In the mathematical sense, the set of functions that can be used for the optimization of D is analogous to the set of usual vectors.