In a previous work, we described a simulation tool (FOPS 3D) (Khankin et al., 2001) that models the full three-dimensional geometrical structure of a fiber and the propagation of a light beam sent through it. This paper focuses on three major points: first, improvements made to the simulation tool; second, optimizations of the calculations' efficiency; and third, the main research advance over our previous works, namely simulation results for the optimal absorbance value as a function of bending angle for a given uncladded-part diameter, which suggest that fiber bending may improve the efficiency of recording the relevant measurements. This is the third iteration of the FOPS development process (Mann et al., 2009), significantly optimized by decreasing memory usage and increasing CPU utilization.
The evanescent wave spectroscopy technique, generally used in the IR range, is useful for inspecting materials and examining their properties, as well as for establishing biomedical diagnoses [
Two primary methods can be used: the first involves tapering the uncladded part of the fiber and the second, bending the fiber about its uncladded part. Both actions make it more difficult for the light beam to propagate, causing more hits in the uncladded part and thus creating additional evanescent waves, which in turn increase the absorption intensity. The first method was investigated experimentally by [
In a previous work, the Monte Carlo simulation tool (FOPS 3D) was described [
The user may define several properties for the simulation system, including the simulated fiber’s length, radius, the radius of the uncladded part, reflection coefficients, and bending angle (see Figure
The simulated fiber.
The simulation history is defined by a light beam that hits the uncladded part and successfully travels through the fiber up to the light sensor. The number of successful histories is an estimator of the efficiency of the simulated fiber shape. The unbiased mean value of hits in successful histories is used for calculating the Fresnel transmission coefficient [
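The Fresnel transmission calculation mentioned above can be illustrated with a short sketch. The class and method names below are assumptions for illustration, not the actual FOPS 3D code; the formulas are the standard Fresnel power coefficients for unpolarized light at a planar interface.

```java
// Sketch: Fresnel power transmittance at a planar interface (unpolarized light).
// Class and method names are illustrative, not the FOPS 3D implementation.
public final class Fresnel {
    /** Power transmittance for unpolarized light crossing an interface
     *  n1 -> n2 at incidence angle thetaI (radians). Returns 0 beyond
     *  the critical angle (total internal reflection). */
    public static double transmittance(double n1, double n2, double thetaI) {
        double sinT = n1 / n2 * Math.sin(thetaI);
        if (Math.abs(sinT) >= 1.0) return 0.0;            // total internal reflection
        double thetaT = Math.asin(sinT);                   // Snell's law
        double cosI = Math.cos(thetaI), cosT = Math.cos(thetaT);
        double rs = (n1 * cosI - n2 * cosT) / (n1 * cosI + n2 * cosT); // s-polarized
        double rp = (n1 * cosT - n2 * cosI) / (n1 * cosT + n2 * cosI); // p-polarized
        double reflectance = 0.5 * (rs * rs + rp * rp);    // unpolarized average
        return 1.0 - reflectance;
    }
}
```

At normal incidence from air (n = 1.0) into glass (n = 1.5), this gives the familiar 4% reflection loss, i.e. a transmittance of 0.96.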
The simulation tool, as it is virtual, has the flexibility of freely bending the fiber, thus providing the possibility of creating a fiber folded in any possible curvature. In turn, folding the fiber increases the number of beam hits in the uncladded part by slowing down the beam’s propagation. A second feature of the simulation tool is the possibility of adjusting the radius of the uncladded part alone, specifically decreasing its radius relative to the rest of the fiber’s radius; this affects the number of beam hits in the uncladded section. In addition, the fiber may be deformed and molded like clay, providing the possibility of creating alternative geometrical shapes and inspecting their efficiency as ATR elements [
The Monte Carlo approach to simulating physical phenomena is based on generating a large sample of random occurrences, used to reconstruct the dynamics of a particular system. As a system output, the simulation tool can then report the efficiency of a given geometrical fiber configuration. The simulation results include estimates of the probability of rays successfully passing through a fiber in a particular, user-defined geometry.
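The estimation principle can be reduced to a minimal sketch: run many independent random histories and take the success fraction as an unbiased estimator of the underlying probability. The placeholder coin-flip below stands in for the fiber physics, which is not reproduced here.

```java
import java.util.Random;

// Minimal illustration of the Monte Carlo idea behind FOPS 3D (not the
// actual fiber physics): estimate a success probability from many histories.
public final class MonteCarloSketch {
    /** Estimate the probability that a random history "succeeds".
     *  A history here is a placeholder coin-flip with true probability p. */
    public static double estimate(double p, int histories, long seed) {
        Random rng = new Random(seed);
        int successes = 0;
        for (int i = 0; i < histories; i++) {
            if (rng.nextDouble() < p) successes++;    // one simulated history
        }
        return (double) successes / histories;        // unbiased estimator of p
    }
}
```

With 100,000 histories, the estimator's standard error for p = 0.3 is about 0.0014, so the estimate lands very close to the true value; the same scaling governs how many ray histories the fiber simulation needs for a target accuracy.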
Fiber evanescent wave spectroscopy, primarily used with an IR light source, consists of emitting rays into a flexible optical fiber. The emitted energy is passed on to the distal end of the fiber and into a Fourier transform infrared spectroscopy (FTIR) detector. In this section, we will briefly define the main physical phenomena modeled by the simulation algorithm.
The Gaussian distribution for a beam waist is given by [
The incidence angles of the rays must be less than
In the Monte Carlo approach applied in the simulation, Ruddy’s equation [
In the Monte Carlo simulation, rays are fired into the fiber according to the radial and angular distributions of (
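A hypothetical sketch of that ray-launch step is shown below: a transverse offset is drawn from a Gaussian beam-waist profile and a polar angle is drawn inside the acceptance cone. The class name, the uniform angular draw, and the use of the waist as the Gaussian standard deviation are all illustrative assumptions; the actual distributions follow Ruddy's equation, which is not reproduced here.

```java
import java.util.Random;

// Hypothetical sketch of one ray launch; names and distributions are
// illustrative assumptions, not the actual FOPS 3D sampling code.
public final class RayLauncher {
    /** Returns {x, y, theta, phi} for one launched ray. */
    public static double[] launch(double waist, double thetaMax, Random rng) {
        double x = waist * rng.nextGaussian();         // Gaussian beam-waist profile
        double y = waist * rng.nextGaussian();
        double theta = thetaMax * rng.nextDouble();    // polar angle inside acceptance cone
        double phi = 2.0 * Math.PI * rng.nextDouble(); // azimuth, uniform in [0, 2*pi)
        return new double[] { x, y, theta, phi };
    }
}
```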
The current simulation tool belongs to the family of scientific software [
Test-driven development is a software development approach based on “test-first development”, according to which tests are written before the code they exercise. Since the development process is broken into small units, it adapts easily to black-box testing. This approach tests the relevant combinations of end-user actions. Black-box testing requires no knowledge of the code and is intended to simulate the end-user experience of the final product. These tests verify that valid and invalid inputs each produce the appropriate output. The Java platform was chosen for development because its existing data structures shorten development time and its automatic memory management helps avoid memory-handling issues.
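A black-box test in the spirit described above exercises only a component's public contract, never its internals. The validator below is a hypothetical example written for illustration; `FiberConfigValidator` and its rule are assumptions, not part of the FOPS 3D API.

```java
// Hypothetical parameter validator used to illustrate black-box testing:
// the test knows only the contract (valid/invalid inputs), not the internals.
public final class FiberConfigValidator {
    /** A fiber configuration is accepted only if all dimensions are positive
     *  and the uncladded radius does not exceed the fiber radius. */
    public static boolean isValid(double lengthMm, double radiusMm, double uncladRadiusMm) {
        return lengthMm > 0 && radiusMm > 0
            && uncladRadiusMm > 0 && uncladRadiusMm <= radiusMm;
    }
}
```

A black-box suite would then enumerate boundary cases (zero length, uncladded radius larger than the fiber radius, and so on) and check only the returned verdicts.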
The simulation tool was initially modeled and designed in the Unified Modeling Language (UML). The simulation’s architectural blueprints were visualized by UML elements. Objects were described by object modeling techniques, and information flow by data flow diagrams and entity relationship diagrams. Entity relationship diagrams represent abstract and conceptual data elements; for example, the simulation tool uses the fiber structure as the conceptual data. In order to describe the data flow between object entities, as well as modifications to data structures, data flow diagrams were created. Data flow diagrams help visualize data processing in an information system. FOPS 3D is computation-intensive software, and for this reason, additional tests were conducted to ensure a high level of reliability. Finally, it was verified that the results were in accordance with previous experiments and simulations.
Extensive optimization of the tool was performed. The main aspect optimized was memory usage, which was especially high after the transfer to a 64-bit JVM: when the system was first ported to the new architecture, memory usage was extensive (around 600 MB). In previous versions, memory cleaning was mostly left to the JVM garbage collector. However, with the advances in CPU speed and architecture, the JVM’s garbage collector was too slow to keep memory usage at a consistent level. Data structure objects were not deleted quickly enough and memory was not reclaimed at a sufficient pace, while the simulation continued to run and demand new memory allocations. To solve this problem, an inner agent was introduced, with its own cleaning policy. The agent’s primary task was to clear data structures by removing all references to them and marking them for garbage collection. The agent also replaced existing data structures with new, smaller-sized ones, since in Java clearing a data structure’s contents alone does not reduce its memory footprint. Furthermore, the size of data objects was reduced as much as possible. In addition, some redundancy was found among data objects, especially in data structures used for animating recurring shapes such as rays (consisting of points). Such data structures were capped at a maximum number of data objects so that they would not hold too many objects.
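The capping strategy described above can be sketched as a bounded container that evicts its oldest entries instead of growing without limit. `BoundedTrail` and its fixed capacity are illustrative assumptions, not the actual FOPS 3D classes.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the capping strategy for animation data (e.g. ray points):
// a bounded container that drops its oldest entries so memory stays flat.
public final class BoundedTrail<T> {
    private final int capacity;
    private final Deque<T> points = new ArrayDeque<>();

    public BoundedTrail(int capacity) { this.capacity = capacity; }

    /** Add a point; once the cap is reached, evict the oldest entry. */
    public void add(T point) {
        if (points.size() == capacity) points.removeFirst();
        points.addLast(point);
    }

    public int size() { return points.size(); }
}
```

Old entries become unreachable as soon as they are evicted, so the garbage collector can reclaim them promptly rather than letting the structure grow with the simulation.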
After implementing the above-mentioned changes, memory usage declined to 230 MB. Of course, there is always a tradeoff between resource usage and accuracy of calculation. In this case, reducing memory usage allowed for an increase in calculation accuracy. The latter depends on the number of fiber parts constituting a single fiber, where a greater number of fiber parts results in increased accuracy. Thus, the reduction in memory usage allowed for a significant increase in the number of fiber parts represented. Moreover, simplifying the data objects led to a significant improvement in the overall simulation runtime. The simulation engine is now able to process both the fiber data structures and the ray beam structures more quickly. In other words, calculations of the ray beam’s advancement and its collisions with the fiber medium are more accurate, and the error tolerance of the calculations is of a much smaller order. The running time of the previous version, at 1,000,000 histories, was on the order of days, while the running time of the new version (at the same number of histories) was on the order of hours. Overall, there was an approximate 43% improvement in running time and a 62% improvement in memory usage.
The verification phase in the software development process of the new version of the FOPS 3D simulation tool consisted of confirming that its results were compatible with those of a previous work [
As in the previous study, in order to seek the optimal width of the uncladded section, the relative absorbance was simulated with respect to the following fiber properties: fiber length 100 mm, diameter 0.9 mm, and refractive indices
In Figure
Average relative absorbance for a flattened fiber with a narrowed midsection.
As seen in Figure
In this work, the goal was to explore the effect of fiber bending on fiber effectiveness. Accordingly, the bending angle of the uncladded midsection was varied, while the midsection itself was kept constant, in order to determine the optimal absorbance value. In other words, the relative absorbance was measured for different bending angles, while keeping the uncladded section at the constant diameter shown in Figure
The results of the simulation are shown in Figure
Average relative absorbance of bending angle for a given uncladded part diameter.
As predicted, it is indeed beneficial to bend the fiber so that the ray beam advances more slowly and hits the fiber medium a greater number of times, increasing the efficiency of spectroscopy with fiber optics. More hits on the fiber medium transfer additional energy into the tissue or sample tested, thus increasing the accuracy of the method. The most significant finding of this work is that two complementary parameters determine optimal fiber efficiency: the thickness of the midsection and the bending angle, which is optimal at around 45°. The relation between these two parameters is that the efficiency gained by bending at the optimal angle is much more significant when the midsection is at its optimal width.
The simulation tool still requires further optimization at the algorithmic level. With the recent advances of multicore processors and parallel programming languages, FOPS 3D may be adapted for parallel execution in order to increase the simulation processing speed. This, along with the use of grids or genetic algorithms, can further improve the simulation tool, enabling it to find the optimal bending level once the optimal width for given fiber properties is determined.