Metasurface-Based Solar Absorption Prediction System Using Artificial Intelligence

Solar energy is a significant, environment-friendly source of renewable energy. The solar absorber transforms solar radiation into heat energy as an effective green energy source. Therefore, increasing its absorbing capacity can improve a solar absorber's effectiveness. This paper proposes a tungsten-tantalum alloy with silicon dioxide (WTa-SiO2) ceramic layer-based solar absorber system with two different metasurfaces to enhance absorptivity and boost the solar absorber efficacy. The absorbance is also improved by adjusting the resonator thickness and material thickness, and the maximum visible light absorption is achieved by the suggested solar filter design. Moreover, a Golden Eagle Optimization (GE)-based deep AlexNet algorithm is proposed for predicting the parameter variations and their effect on absorbance. The optimization technique is used to increase the effectiveness of the solar absorber by optimizing the design parameters. The features from the WTa-SiO2 design are extracted by the proposed Principal Component-Autoencoder (PC-AE) method. Experimental results show that the proposed system can effectively predict absorptivity with reduced computational time. The proposed method demonstrates superior prediction performance, with an absorption prediction efficiency of 99.8% compared to the existing methods. Thus, the proposed WTa-SiO2 metasurface-based solar absorber can be used for photovoltaic applications.


Introduction
Due to rapid industrialization and rising home energy use, the energy demand is increasing endlessly. Renewable energy sources are naturally existing resources and do not pollute the environment [1]. New methods of using these naturally existing resources and turning them into valuable products are constantly being developed in recent studies. The world has abundant naturally regenerating renewable energy sources, including the sun, water, wind, and air [2]. Transforming one energy source into another using air, wind, or water requires many substantially developed locations. On Earth, there is an abundant supply of the sun's pure, natural, renewable energy [3]. One of the most excellent methods to deal with the rising energy demand is the use of solar energy [4]. There is still room to improve the efficiency of solar energy use. In addition, solar energy is readily transformable into various energy forms that can be used to develop photovoltaic systems, solar cells, and solar absorbers [5].
An ideal absorber is highly desirable for many applications, like solar energy harvesting, light control, and sensing [6]. By absorbing incoming solar energy, the coating layers on the solar cell aid in improving the absorption. Broadband absorbers that span the whole solar spectrum are also of significant interest for solar energy harvesting [7]. Examples of material structures used as high-performance solar absorbers include photonic crystals, dense nanorods, multilayer planar photonic frameworks, and nanotube films [8]. Maximum energy is present in the visible and infrared spectrum when light reaches the Earth. The design of broadband absorbers can utilize the metasurface absorber [9]. The metasurface's two-dimensional planar surface has recently attracted the attention of scientists because of its distinct electro-optical properties and small size as opposed to bulkier optical systems [10]. As a result, metasurfaces are used as polarizers, detectors, and absorbers that operate at a wide range of frequencies, including microwave, visible, infrared, ultraviolet, and terahertz [11].
Due to losses and a complex three-dimensional structure, the design of metamaterial-based solar absorbers is challenging [12]. To reduce mid- and far-IR emissivity, a solar absorber with enhanced absorption bandwidth is highly desired, encompassing the full visible and near-IR spectrum with a configurable cutoff wavelength [13]. Predicting absorption performance is an effective way to improve performance in future designs. The inverse design approach based on deep learning, one of the most recent and frequently used sophisticated computer algorithms, allows for quick design and exhibits excellent performance, particularly when applied to the global optimization problem [14]. To forecast the optical responses, however, gathering a massive quantity of data computationally is costly, and blind early simulations are needed. A quick, highly effective, and accurate synergetic approach for developing photonic applications is described here to address the drawback of the traditional deep-learning-based inverse parametric study [15]. To avoid blind computations, this method uses an implicit deep learning method that implicitly uses accurate approximation as the guideline [16,17]. The study of absorbers with unique and challenging spectral patterns serves as a case study for the presented synergetic paradigm. Our method resolves the tricky problems associated with batch models and the intricate and time-consuming procedures that plagued conventional techniques [18-20].
Based on the results from the WTa-SiO2 ceramic layer, we have engineered the design of a metasurface solar absorber. In addition, deep AlexNet with Golden Eagle Optimization (GE-deep AlexNet) is used to predict absorption values for varying metasurface thickness and resonator thickness at different wavelengths. A standardized optimal deep-learning model that concurrently considers all the essential dimensional factors (such as periodicity, height, width, and aspect ratio) is needed to forecast the optical response of metasurface-based solar absorbers. The design and modeling section discusses the design parameters for the proposed absorber. The proposed design's absorption is enhanced by adjusting the resonator and substrate thickness parameters. Furthermore, the Principal Component-Autoencoder (PC-AE) approach is used to extract the features from the preprocessed dataset for effective performance prediction. This study's implementation makes use of the MATLAB programming language. Comparisons between the simulation results and traditional approaches demonstrate the effectiveness of the proposed strategy. The rest of the paper is organized as follows: the recent literature related to this research is detailed in Section 2; the proposed methodology of the research design is explained in Section 3; the results and performance are analyzed in Section 4; finally, the work is concluded in Section 5.

Literature Review
A good number of solar absorbers and prediction systems have already been reported in the open literature. For example, in [21], Patel et al. propose a solar absorber that absorbs most of the energy from the available solar spectra, such as the visible and ultraviolet spectrum emissions. The use of a long short-term memory model to forecast absorption values for various changes in substrate thickness and resonator thickness for upcoming wavelengths is an innovative aspect of that research. The study's findings demonstrate that the prediction system can effectively estimate absorption levels using less time and fewer resources during simulation. Patel et al. put forward a solar absorber model employing metasurfaces and machine learning for polynomial regression analysis [22]. Here, a circular array-based metasurface layout and circular array metasurface architecture are examined in the visible, infrared, and ultraviolet regions with a wavelength range of 0.2 μm to 0.8 μm. The experimental findings demonstrate that the elevated-degree polynomial regression analysis can produce prediction efficiencies greater than 0.99 (R²), and the visible zone has the highest median absorption of 89%. A multilayer grating structure made of titanium and gallium arsenide was used by Zhang et al. to build a broadband absorber [23]. When simulating the specified model using the finite-difference time-domain approach, they discovered that the absorption effectiveness was 99.69% at 867 nm, which is quite close to absolute absorption. This optimal metamaterial grating absorber type is expected to be widely used in optical disciplines such as thermal electronics, optical monitoring, and infrared detection. In [24], Parmar et al. suggested a graphene-based solar absorber architecture with two distinct metasurfaces for better absorption and greater solar absorber efficiency. The metasurfaces' symmetry and asymmetry are considered while choosing them (L- and O-shaped).
A one-dimensional convolutional neural network (CNN) is developed to calculate intermediary wavelength absorption estimates for a range of values, and regression is utilized to build a machine learning algorithm. To boost the absorption in the infrared, visible, and ultraviolet spectra, a metasurface solar absorber based on a Ge2Sb2Te5 substrate was proposed by Patel et al. [28]. The absorber is also examined using a machine learning algorithm to forecast the absorptivity at various wavelengths. Experiments are used to evaluate the K-nearest neighbors (KNN) accuracy and the regressor algorithms for forecasting the absorption with missing wavelength estimates. The experimental results demonstrate that a smaller value of K in a KNN-regressor system can achieve good prediction accuracy (estimated R² greater than 0.9). A summary of some of the previous works is listed in Table 1.

Proposed Methodology
The proposed network design for the prediction of the metasurface's solar absorbance from its WTa-SiO2 ceramic basis is shown in Figure 1.
In the following sections, we go through several aspects of the proposed model and its justification, beginning with how to represent the metasurface, moving on to how to extract pertinent features using the PC-AE approach, and concluding with a discussion of utilizing GE-deep AlexNet to produce the optical properties.

WTa-SiO2 Experimental Design.
This part introduces the metasurface-based solar absorber layout that uses WTa-SiO2 as the phase-transition material. The tungsten's phase transition reduces heat loss during annealing; cermet changes significantly impact the effectiveness of spectrally selective absorbances. The deployment of the metasurface enhances the absorption of solar absorbers. Cermets were created by cosputtering the appropriate metal along with dielectric objects at WTa:SiO2 volume ratios of 2:3, 1:2, and 1:3, respectively. Acetone and pure ethyl alcohol were used to clean the substrates, and the primary chamber was then evacuated to a pressure below 4 × 10−4 Pa before sputtering. All absorber layers are deposited in an argon atmosphere at a pressure of 0.3 Pa. The complete design diagram is shown in Figure 2. We have taken an O-shaped metasurface element whose inner and outer lengths are illustrated in Figure 2(a). The 3D view is portrayed in Figure 2(b), while the 2D front view of the proposed metasurface-based solar absorber is shown in Figure 2(c). Figure 2(c) shows that W and Ta metal particles are preferred to form the WTa solid-solution alloy for the WTa-SiO2 ceramic. The annealing process drives the free energy of the entire system toward its lowest state through the diffusion and aggregation of metal atoms in the ceramic layer. Furthermore, in the case of the binary alloy structure, the atoms with low surface energy and larger atomic radii tend to precipitate from the binary alloy and create surface segregation. To ensure the greater stability of the absorbers at high working temperatures, SiO2 is generated, which prevents further migration and diffusion of alloy atoms.
In the case of the WTa bimetallic alloy, Ta's atomic radius (R_Ta, 146 pm) is greater than W's (R_W, 139 pm) [29]. Furthermore, as Ta metal has lower surface free energy than W, the Ta surface segregates on metal surfaces during the melting process. When oxygen is still present, it is unavoidable that the relocated Ta atoms, rather than the W atoms, will be oxidized. Because Ta2O5 has a lower Gibbs free energy than WO3 during the melting process, it is a durable oxide [30]. In order to guarantee the better stability of the absorbers at high working temperatures, the formed stable oxide layer (Ta2O5) acts as a protective layer by inhibiting the further migration and dispersion of alloy atoms. The experiment estimates the absorption from the measured reflectance and transmittance [31].

Input Design Data.
From the experimentally designed model, the parameters of metasurface height, width, aspect ratio, periodicity, resonator thickness, substrate thickness, scattering rate, and so on are collected and stored in the dataset for performance prediction.

Data and Preprocessing.
The designed setup parameters and values are collected and formed into a dataset for simulation validation using an optimization-based deep learning method. The developed dataset may contain some irregular and missing data, which are processed by the preprocessing filter methods to normalize the data for further use.
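The paper does not specify the filter methods in detail; as a sketch, assuming mean imputation for missing entries followed by z-score normalization (the function name `preprocess` and the toy data are illustrative only):

```python
import numpy as np

def preprocess(data):
    """Fill missing entries with the column mean, then z-score normalize.

    `data` is a 2-D array of design parameters (rows = samples, columns =
    e.g. metasurface thickness, resonator thickness). NaN marks a missing
    measurement.
    """
    data = np.asarray(data, dtype=float)
    # Replace missing values with each column's mean.
    col_mean = np.nanmean(data, axis=0)
    idx = np.where(np.isnan(data))
    data[idx] = np.take(col_mean, idx[1])
    # Normalize each column to zero mean and unit variance.
    mu = data.mean(axis=0)
    sigma = data.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard against constant columns
    return (data - mu) / sigma

X = np.array([[0.1, 0.6], [0.3, np.nan], [0.5, 1.0]])
Xn = preprocess(X)
```

Each normalized column then has zero mean and unit variance, which is the form the later feature-extraction stage expects.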

PC-AE Feature Extraction.
The preprocessing work has already been finished in preparation for the characteristics that may be extracted for classification. We extract certain features using a combination of an autoencoder and principal component analysis. Encoder and decoder submodels make up an autoencoder. The encoder compresses the input, and the decoder tries to reconstruct the input from the encoder's compressed form.
Following training, the decoder model is abandoned and the encoder model is saved. However, the autoencoder's performance deteriorates when key variables are misinterpreted. As a result, effective feature vector extraction is obtained by combining it with the PC algorithm. To prepare raw data for GE-deep AlexNet model training, the encoder may then be used to extract features from the data. In our study, combining these two algorithms can significantly improve the extraction and significance of variables. Both extraction strategies have their own strengths for detecting certain characteristics. The PC-AE combination algorithm is defined as follows. Initialize the preprocessed data into the PC-AE algorithm in the input layer function. Then, the dense layer function is applied for feature extraction by multiplying the weight matrix with the input features via the activation function:

a = f(mz + d), (1)

where m is the weight matrix, z is the input matrix considered for the observation count and feature count for covariance, and d is the bias. The activation function used for this layer is

f(z) = 1 / (1 + e^(−z)). (2)

The encoder and decoder functions are executed as

h = x^(j)(mz + d), ẑ = x^(j)(m′h + d′), (3)

where x^(j) is the sigmoid activation function, and m′ and d′ are the decoder's weight and bias. The covariance matrix is estimated as

G = (1/n) Σ_{i=1}^{n} (z_i − z̄)(z_i − z̄)^T. (4)

Estimate the eigenvalues α_j and the eigenvectors e_1, e_2, ..., e_n of the covariance G for j = 1, 2, 3, ..., n, ordering the eigenvalues in descending order.
Furthermore, the batch normalization function of the autoencoder is applied for sorting the eigenvalues. Data are transformed to have a mean of 0 and a standard deviation of 1 through the process of normalization. First, we must obtain the mean of the hidden activations, given the batch input from layer e.
The neuron count at layer e is denoted as n. The next step is to determine the standard deviation of the hidden activations once we have the activations.
Furthermore, the mean and standard deviation are now available. Using these values, we normalize the hidden activations: we subtract the mean from each input and divide by the standard deviation plus the smoothing factor (φ). By preventing division by zero, the smoothing term (φ) ensures numerical stability inside the operation.
The input is rescaled and offset during the final procedure. Here, the two autoencoder algorithm components, gamma (γ) and beta (β), enter. With the help of these parameters, the vector comprising the results of the preceding operations is rescaled and shifted.
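The four steps above (batch mean, standard deviation, normalization, rescale-and-shift) can be written compactly; this is the standard batch-normalization form, with φ as the smoothing term and γ, β the rescaling parameters:

```latex
\mu = \frac{1}{n}\sum_{i=1}^{n} e_i, \qquad
\sigma^2 = \frac{1}{n}\sum_{i=1}^{n} \left(e_i - \mu\right)^2, \qquad
\hat{e}_i = \frac{e_i - \mu}{\sqrt{\sigma^2 + \varphi}}, \qquad
y_i = \gamma\,\hat{e}_i + \beta .
```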
These two parameters may be learned, and during training, the autoencoder makes sure that the best values for γ and β are employed. This makes it possible to accurately normalize each batch. Then, select the first-largest eigenvectors. The contribution of the j-th feature component to the outcome of the feature extraction is determined as

f_j = Σ_{v=1}^{n} |e_vj|,

where e_vj denotes the j-th entry of e_v, j = 1, 2, 3, ..., n, v = 1, 2, 3, ..., n, and |e_vj| is the absolute value of e_vj. Furthermore, f_j is sorted in descending order and stored as g_j. The recommended study's objective is to gather more valuable characteristics that are ideal inputs for deep learning algorithms to boost accuracy. The method trains an autoencoder on a labeled dataset during its training phase. It looks for the variable coefficients that best capture the situation, assesses the error value, and attempts to keep it as low as possible for subsequent steps. Then, covariance is applied to the dataset, and PC analysis is used to predict the optimal features for the dependent variables with the eigenvectors and data matrices.
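A minimal sketch of the PC-AE pipeline described above, assuming a single pre-trained sigmoid encoder layer followed by eigen-decomposition of the encoded-feature covariance; the weights `m`, `d` and the contribution score are illustrative stand-ins for the trained model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pc_ae_features(Z, m, d, k):
    """Sketch of PC-AE: a (pre-trained) dense encoder layer followed by
    PCA-based component selection.

    Z : preprocessed data, shape (samples, features)
    m, d : encoder weight matrix and bias (assumed already trained)
    k : number of principal components to keep
    """
    # Encoder: a = f(Zm + d) with sigmoid activation (eqs. (1)-(3)).
    a = sigmoid(Z @ m + d)
    # Covariance matrix of the encoded features (eq. (4)).
    G = np.cov(a, rowvar=False)
    # Eigen-decomposition; sort eigenvalues in descending order.
    alpha, e = np.linalg.eigh(G)
    order = np.argsort(alpha)[::-1]
    alpha, e = alpha[order], e[:, order]
    # Contribution of each encoded feature: sum of |e_vj| over the
    # top-k eigenvectors; then project onto those components.
    f = np.abs(e[:, :k]).sum(axis=1)
    return a @ e[:, :k], f

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 8))       # 100 samples, 8 design parameters
m = rng.normal(size=(8, 6))         # stand-in trained encoder weights
d = np.zeros(6)
feats, contrib = pc_ae_features(Z, m, d, k=3)
```

The returned `feats` matrix is what would feed the GE-deep AlexNet stage; in the actual system the encoder weights come from autoencoder training rather than random initialization.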

GE-Deep AlexNet Architecture.
After extracting the features, the optimal absorbance is predicted using the proposed GE-deep AlexNet strategy. Through the convolutional layer, equation (11) is used to compute the convolution of the featured data:

A(a, b) = (z ∗ w)(a, b) = Σ_i Σ_j z(a + i, b + j) w(i, j), (11)

where A(a, b) is the output of the convolutional layer that passes the data on to the subsequent layer. The symbol ∗ denotes the convolution process, w is the kernel or filter matrix, and z denotes the input data, which is made up of a collection of data points. The element-by-element product of the input and kernel is computed, aggregated, and then expressed as the corresponding point in the next layer. The result of these mathematical operations is carried out through the convolutional layer and then passed on to the nonlinearity layer, which is the next layer. This layer can be utilized to modify or remove the output that was created; the output is saturated or limited by this layer. The convolutional layer has a nonlinearity layer permanently included in it. As the following expressions show, the rectified linear unit (ReLU) gives simple forms for both the function and its gradient:

f(y) = max(0, y), f′(y) = 1 for y > 0 and 0 otherwise.
For the prediction, it is first necessary that the output feature map from pooling have a fixed size. For instance, no matter how big the filters are, when max pooling is applied to each of the 256 filters, the output is 256-dimensional. In order to reduce the data dimensionality and the amount of time needed for data training in subsequent layers of the network, downsampling is a crucial step in the pooling layer. The fully connected layer follows the pooling layer and links and organizes every neuron in the neural network. As a result, every neuron in a fully connected layer is directly coupled to every neuron in the layer above it and the layer below it. The softmax layer, the final layer in the presented model, is used to calculate the probability distribution. The softmax function is described as

σ(y_j) = e^(y_j) / Σ_{n=1}^{N} e^(y_n),

where y_j is the output prior to the softmax function and N denotes the overall number of neuron outputs. Consequently, the performance of deep AlexNet is improved by parameter tuning using the GEO algorithm.
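The convolution, ReLU, and softmax operations described above can be sketched as follows (a toy NumPy illustration of the layer operations, not the actual deep AlexNet implementation):

```python
import numpy as np

def relu(y):
    # Rectified linear unit: max(0, y); gradient is 1 for y > 0, else 0.
    return np.maximum(0.0, y)

def softmax(y):
    # Probability over N neuron outputs: exp(y_j) / sum_n exp(y_n).
    y = y - y.max()          # shift for numerical stability
    e = np.exp(y)
    return e / e.sum()

def conv2d_valid(z, w):
    """2-D 'valid' convolution A(a, b) = sum_{i,j} z(a+i, b+j) * w(i, j)
    (cross-correlation form, as commonly used in CNN layers)."""
    H, W = z.shape
    h, ww = w.shape
    A = np.empty((H - h + 1, W - ww + 1))
    for a in range(A.shape[0]):
        for b in range(A.shape[1]):
            A[a, b] = np.sum(z[a:a + h, b:b + ww] * w)
    return A

z = np.arange(16.0).reshape(4, 4)   # toy input feature map
k = np.ones((2, 2)) / 4.0           # 2x2 averaging kernel
A = relu(conv2d_valid(z, k))        # conv layer + nonlinearity layer
p = softmax(np.array([2.0, 1.0, 0.1]))  # final probability distribution
```

A real AlexNet stacks several such convolution/ReLU/pooling stages before the fully connected and softmax layers; the sketch only shows the per-layer arithmetic.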

Golden Eagle Optimization.
In this research, we tuned the activation function and other hyperparameters, such as the number of layers in the system and the number of nodes in each layer. The learning rate value is one of the supplementary parameters that must be given to the GE technique. The recommended deep AlexNet method selects the optimal settings while considering the fitness of the GE algorithm's circling and hunting behaviors. Each search agent q starts by randomly selecting the traits of another agent h and then circles the best location that agent h has visited so far in each iteration. The personal recollections of the agents h ∈ {1, 2, ..., N} can be circled. Each agent must select a feature at each cycle to carry out the cruise and attack activities. The characteristics employed in this method are based on the best conclusion the flock has reached so far, and each agent can recall the most effective solution it has discovered. We propose a random one-to-one mapping approach, where each agent randomly selects its properties for the current iteration from the memories of the other flock members, to help the agents better navigate the terrain. It is critical to recognize that the traits preferred frequently diverge from those of the nearby or distant prey. This strategy assigns or maps each memory feature to a single, distinct agent. Each agent then carries out the attack and cruise operations on the chosen features. The attack may be depicted as a vector that begins at the agent's current location and ends at the location where the features are stored in the agent's memory. The attack vector of agent q may be identified using the following equation:

O_q = Z*_h − Z_q,

where O_q denotes the exploitation (attack) vector of agent q, Z*_h denotes the best features selected by agent h, and Z_q denotes the present location of agent q.
The exploitation vector directs the population to the most visited locations. The exploration vector is calculated based on the exploitation vector: while the exploration vector is perpendicular to the circle, the exploitation vector is parallel to it. Alternatively, the exploration may be seen as the linear pace of the agent in proportion to the features. Equation (15) is used to determine the hyperplane tangent to the u-dimensional space:

w_1 y_1 + w_2 y_2 + · · · + w_u y_u = g ⇒ Σ_{i=1}^{u} w_i y_i = g, (15)

where w_1, w_2, ..., w_u represent the normal vector and Y = [y_1, y_2, ..., y_u] is the variable vector of the i-th node. The destination location on the exploration hyperplane is then determined, and the exploration vector for agent q is computed iteratively at iteration t. The components of the obtained destination location are arbitrary numbers between zero and one. It is interesting to note that the exploration vector pushes the population outside the memory-stored regions. Both exploration and exploitation are involved in the migration process. The step vector for an agent in iteration T is

Δx_q = r_1 d_s^T (O_q / ‖O_q‖) + r_2 d_e^T (E_q / ‖E_q‖),

where d_s^T is the exploitation coefficient in iteration T, d_e^T is the exploration coefficient in iteration T, r_1 and r_2 are random vectors in the range [0, 1], and ‖O_q‖ and ‖E_q‖ are the Euclidean norms of the exploitation and exploration vectors. Beginning with the initial random value, loop through all iterations, lowering the random value in accordance with alpha. The conditions are terminated if the optimal solution is achieved. To find the position of an agent in iteration t + 1, the step vector in iteration t is simply added to the position in iteration t.
The memory of these features is updated to reflect the new location if the new location of the agent is more appropriate than the position previously recorded in its memory. Otherwise, the features move to the new place while the memory is unaffected. In the current iteration, each feature in the population revolves around a randomly chosen position. The step vector and the new position for the subsequent iteration are then decided, followed by the calculation of exploitation and exploration. This loop is executed until one or more of the termination requirements are satisfied.
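One attack/cruise update as described above might be sketched as follows (a simplified single-agent step under the stated equations; the random agent pairing, memory update, and the coefficient schedules over iterations are omitted):

```python
import numpy as np

def geo_step(x_q, prey, ds, de, rng):
    """One Golden Eagle Optimization step for a single search agent.

    x_q  : current position of agent q
    prey : best position remembered by the randomly paired agent h
    ds, de : exploitation (attack) and exploration (cruise) coefficients,
             which in the full algorithm shift from exploration toward
             exploitation over the iterations
    """
    # Attack vector: from the current position toward the remembered prey.
    O = prey - x_q
    norm_O = np.linalg.norm(O)
    if norm_O == 0:
        return x_q                      # already at the prey location
    # Cruise vector: component of a random vector perpendicular to the
    # attack vector (a point on the tangent hyperplane of eq. (15)).
    r = rng.normal(size=x_q.shape)
    E = r - (r @ O) / (norm_O ** 2) * O
    norm_E = np.linalg.norm(E)
    r1, r2 = rng.random(), rng.random()
    step = r1 * ds * O / norm_O
    if norm_E > 0:
        step = step + r2 * de * E / norm_E
    # New position = old position + step vector.
    return x_q + step

rng = np.random.default_rng(1)
x = np.array([1.0, 1.0])
best = np.array([0.0, 0.0])
x_new = geo_step(x, best, ds=0.5, de=0.5, rng=rng)
```

In the hyperparameter search, each candidate (learning rate, batch size, epoch count) plays the role of a position, and the fitness is the deep AlexNet validation performance.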

Results and Discussion
MATLAB 2019b on a Windows 7, Intel Core, 4 GB RAM, 64-bit operating system is used to implement and train the optimized, improved deep learning methods for metasurface prediction of solar absorbance in energy harvesting applications.

Evaluation
The performance of the suggested technique, which is trained using the remaining records of simulated data, is assessed using randomly chosen records. The prediction accuracy of the suggested models is assessed using the R² score as a criterion. Equation (19) is used to compute the R² score:

R² = 1 − SS_r / SS_t, (19)

Here, SS_r is the sum of squares of the residual errors, SS_t is the total sum of squares of the errors, and M is the number of testing records. Equation (21) shows the method for calculating the mean absolute percentage error (MAPE):

MAPE = (100/n) Σ_{t=1}^{n} |(A_t − F_t) / A_t|, (21)

with A_t the actual and F_t the forecast value.
where n is the number of values forecasted by the model after training. The predicted performance is validated via the value of the MAPE.
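Both metrics can be computed directly from their definitions (a sketch; the variable names and sample values are illustrative):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """R^2 = 1 - SS_r / SS_t, with SS_r the residual sum of squares and
    SS_t the total sum of squares about the mean (eq. (19))."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_r = np.sum((y_true - y_pred) ** 2)
    ss_t = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_r / ss_t

def mape(y_true, y_pred):
    """Mean absolute percentage error over the n forecasted values (eq. (21))."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

actual = np.array([0.90, 0.92, 0.95, 0.97])      # simulated absorbance
predicted = np.array([0.89, 0.93, 0.95, 0.96])   # model output
```

An R² close to 1 and a MAPE close to 0 indicate good agreement between predicted and simulated absorbance.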

Performance Analysis.
The input data of the developed WTa-SiO2 metasurface solar absorber are preprocessed to remove disturbances, and the PC-AE algorithm is applied. From there, the optimal features are selected for the prediction algorithm based on the eigenvalues and eigenvectors; the highest eigenvalue considered is 50. Consequently, the GE-deep AlexNet algorithm is applied for the prediction. The optimized hyperparameters, a learning rate of 0.01, a batch size of 128, and 15 epochs, are obtained using GE for the deep AlexNet algorithm. The workflow of the proposed prediction model is detailed in Table 2. By altering several physical factors, including the metasurface thickness, resonator thickness, and angle of incidence, a complete study of the high-performing architecture of the O-shaped perforated metamaterial-based solar absorber is conducted. Figures 4(a) and 4(b) display the dependence of metasurface thickness and absorbance on the corresponding wavelength. The change in the absorption response with regard to the variation in metasurface thickness is shown in Figure 4(a). The metasurface thickness ranges from 0.1 to 1.0 μm in steps of 0.2 μm. It is clear that this adjustment has little to no impact on the absorption response. Therefore, the metasurface thickness is fixed at 0.3 μm to keep the solar absorber affordable. Figure 5 displays the simulated and model-predicted results for a few random test specimens from the solar absorber dataset. As can be seen, the model's prediction of the optical absorption and the simulated response agree well. Several very effective absorber unit cells with more than 80% absorbance across the solar spectrum are shown graphically. Figure 5 reveals that a unit cell can exhibit excellent visible-only absorption efficiency, and such absorbers are quite helpful in numerous applications.
Therefore, we can use the model to determine the efficiency if the designed absorber structure covers the complete spectrum. For five test scenarios, the proposed method's predicted absorption value is compared to the actual absorption value, as portrayed in Figure 6. In the tests, the resonator thickness is 0.6 μm, 0.8 μm, or 1.0 μm, while the incidence angle takes the values 0°, 10°, and 20°. These findings clearly indicate a possible use for the enhancement of photovoltaic devices as well, where the concentration of solar radiation is high. Since the resonator structure of the shown device is symmetrical, we are able to get the same absorption response.
For the solar absorber to be widely used in nature, it must be insensitive to large incidence angles and polarization independent. The wide-angle and angle-insensitive characteristics of the suggested O-shaped perforated metamaterial solar absorber are demonstrated in Figures 6(d)-6(f). The absorption response is the same for most angles, except for the 10° angle of incidence, as shown in Figure 6(e). The mean absorption for the whole region, ranging from 0.6 to 1.0 μm, is still approximately 90% for 10°, and for the remaining incidence angles, the absorption response is around 93%. Therefore, we can say that the suggested solar absorber is wide-angle insensitive for 0° to 20°. In the simulations, the first 80% of the simulation data are used to train the GE-deep AlexNet-based prediction model, while the remaining 20% of records are used to evaluate how well the designs predict the future. To predict the absorption value for subsequent wavelengths, simulations are run utilizing various durations of previous inputs. However, a graph shows that the optimal deep learning model is trained using only half of the simulation data points.
The model can still predict the absorption values for the remaining half of the wavelength values with high accuracy (R² score > 0.9998). Table 3 thoroughly examines several designs, their measurements, and their efficiency based on the absorption area under the curve (AUC) percentage throughout the full spectrum.

Experimental Analysis.
A WTa-SiO2 ceramic-based absorber with transition wavelengths of around 0.6 μm, 0.8 μm, and 1.0 μm was experimentally proven. Using the appropriate metal and dielectric substrates, cosputtering was used to deposit cermets with various WTa and SiO2 volume ratios. High-purity W, Ta, and SiO2 are among the commercially accessible target materials. We compare the observed absorption coefficients to those predicted by our enhanced deep learning algorithm to evaluate our process in the assessment session. We chose to simulate an absorber with the GE-deep AlexNet algorithm working at 66 Hz, as the operational frequency bandwidth of our measurement equipment begins at about 50 Hz, to allow accurate comparison with the experimental data. At the tube's end, the attached structure allows us to detect the absorption spectra of the associated metasurface. Figure 10 displays the predicted and experimental absorption curves. Despite some discrepancies, the two results agree well, confirming the suitability of the proposed improved AI algorithm for predicting absorption qualities. The results of the experimental and simulation studies are illustrated in Table 4.
The absorption efficiency of the proposed system is estimated individually by varying the metasurface thickness over 0.6, 0.8, and 1.0 μm; each of these values is defined as an individual absorption. For instance, the absorption obtained when changing the metasurface thickness to 0.6 μm is listed in Table 4.
The overall absorption is defined as the average of the individually obtained absorptions. The absorption is calculated from the corresponding transmittance and reflectance rates, and the overall absorption is calculated as the average of these three absorption values. Our proposed method's average total absorption performance evaluation also estimates the overall absorption. Both the individual and overall absorption estimates are needed, because the absorption evaluates individual performance while the overall absorption shows the average of the system's total performance. In Table 5, we compare the performance of the suggested solar absorber with earlier research outcomes. The frequency-dependent resonant responses of metamaterial absorbers in the infrared, visible, and ultraviolet spectra make them a topic of current research. In [22], Patel et al. show that the visible regime has the highest median absorption of 89%, with better than 0.99 prediction efficiency (R²). Titanium and gallium arsenide were used to develop a broadband multilayer grating structure that attained an absorption of 99.69% at 867 nm [23]. Patel et al. present a SiO2 substrate-based DLMP that achieves more than 90% absorption. In [24], the authors offer a graphene-based solar absorber design with two distinctive L- and O-shaped metasurfaces. The experimental results show that polynomial regression analysis can accurately predict the absorption capacity (R² score) in [27]. The metasurface solar absorber based on a Ge2Sb2Te5 (GST) substrate can still achieve good performance even with a lower value of K in a KNN-regressor system, with excellent prediction accuracy (estimated R² greater than 0.9) [28]. The commercial software only examines the performance of individual parameters, because the absorber is made up of randomly arranged parameters, rather than computing the overall efficiency of the metamaterial.
We emphasize that every material in the WTa-SiO2 ceramic method is readily available and affordable compared to the precious materials often used for solar absorbers. Furthermore, the materials used in the proposed structure are thinner than 1 μm. Compared to other absorbers, our absorber's absorption efficiency is superior, at 99.8% for 0.2 μm and 95% for 2.5 μm. Moreover, compared to commercial software-based parametric analysis of metasurface absorbance, the proposed method achieved much lower execution time and higher prediction performance across varying cases. Our findings show that, compared to other current technologies, our absorber has very good average absorption efficiency. Considering everything, it is evident that solar absorbers, with their straightforward construction and superior performance, play a significant role in solar absorption. It is clear that the proposed deep learning model performs significantly faster and uses less memory than typical commercial simulation software once it has been trained. The proposed model's average prediction time is 15 seconds, compared to more than 17 seconds for the current approaches. The optical response is therefore predicted by the proposed model much faster than by simulations. This model can be scaled up by providing enough data and training structures to completely replace commercial software.

Conclusion
This paper presents the design of a WTa-SiO2 ceramic layer O-shaped metasurface solar absorber. The designed O-shaped absorber achieved an overall absorption rate of 99.8% in the light spectrum. A thorough analysis is also performed by adjusting the physical factors that influence the absorption rate, including resonator thickness, angle of incidence, and metasurface thickness. Experimental findings reveal that the proposed Golden Eagle Optimization (GE)-based deep AlexNet design and Principal Component-Autoencoder (PC-AE) technique can effectively develop the prediction model. It has good precision in predicting absorptivity at middle frequencies (R² score higher than 0.9998). It is also revealed that a high prediction accuracy is obtained when the model is designed using a greater exponential degree of features. The obtained absorption response is insensitive to incidence angles of 0° to 20°, and the absorber consistently exhibits excellent absorption performance even when the incident angle varies from 0° to 50°, which makes the proposed absorber more versatile with fewer limitations. The engineered absorber has a wider absorbing range and a simpler structural footprint compared to other absorbers of the same type.

Data Availability
The data will be made available by the authors upon reasonable request.