Distributional Censored and Uncensored Validation Testing under a Modified Test Statistic with Risk Analysis and Assessment



Introduction
A novel continuous distribution will be introduced and studied in this work, but we will approach it from new viewpoints that depart from those often covered by specialists. We will not focus on numerous theoretical findings and algebraic derivations, not because they are unimportant, but to give us the chance to highlight more practical features of risk analysis, distributional verification, and their associated applications to both complete and censored data. We will still discuss certain theoretical facets of the new distribution, but we will pay particular attention to features that are applicable and useful in the following areas: (1) A set of frequently used financial indicators, such as the value-at-risk (VAR), the tail-value-at-risk (TVAR) (also known as the conditional tail expectation), the conditional value-at-risk, the tail variance (TV), the tail mean-variance (TMV), and the mean excess loss (MEL) function, is studied when examining and evaluating the risks that insurance companies face. Maximum likelihood estimation (MLE), ordinary least squares estimation (OLSE), weighted least squares estimation (WLSE), and Anderson-Darling estimation (ADE) are all described as estimation strategies for the main key risk indicators (KRIs). These four methodologies are applied in two distinct ways for financial and actuarial evaluation, including simulation with three confidence levels (CLs) under different sample sizes, and applications to data from insurance claims. (2) We present a simulation experiment to compare the performance of the estimators of VAR based on insurance data in order to satisfy the requirements of the actuarial analysis of risks. (3) The well-known Nikulin-Rao-Robson (NRR) statistic (Y 2 α (r − 1)), which is based on the uncensored maximum likelihood estimators (UMLEs) computed on the initial non-grouped data, is considered under the new Rayleigh generalized gamma (RGG) model in the framework of distributional validation and statistical hypothesis tests for complete data.
Three real datasets and a simulation study are used to evaluate the statistic Y 2 α (r − 1). (4) The RGG model is considered under a modified NRR statistic (M 2 α (r)), which is based on the censored maximum likelihood estimators (CMLEs) computed on the original non-grouped data. This statistic is used in the framework of distributional validation and statistical hypothesis testing for censored data. The statistic M 2 α (r) is evaluated using three real datasets and a thorough simulation analysis. (5) It is worth noting that risk indicators can be applied in engineering, especially structural engineering, with the aim of developing mathematical measurement and statistical modelling processes in these fields. In engineering, maximum likelihood estimation can be used to estimate the parameters of a distribution for the failure time of a product or system, which can then be used to assess reliability and inform maintenance decisions. Moreover, right censored maximum likelihood estimation can be used to estimate the reliability function of a system, such as a machine or a bridge. For example, if the failure time of a machine follows a Weibull distribution, the parameters of the distribution can be estimated using right censored maximum likelihood estimation, and the reliability of the machine can be assessed based on the estimated distribution. Generally, it is important to monitor the system over time and review the risk assessment periodically to ensure that any changes or updates to the design or operating conditions are taken into account. This will help to ensure that the system remains safe and reliable over its lifetime. For more details, see Amini et al. [1] and El-Morshedy et al. [2].
The cumulative distribution function (CDF) of the generalized gamma model [3] can be expressed as in (1), where z ≥ 0, Φ = (λ, θ), and λ, θ > 0; this model is flexible enough to accommodate both monotonic and non-monotonic failure rates. Following Yousof et al. [4], the CDF of the RGG model has the form given in (2), where ξ(z) = 1 − (1 + λz)exp(−λz). The probability density function (PDF) corresponding to (2) is given in (3).
When comparing probability distributions for applications in insurance, several criteria are commonly considered. These criteria help insurers select the most appropriate distribution to model the claims data accurately. Here are some of the main criteria:
1. Goodness of fit: this criterion assesses how well a probability distribution fits the observed historical claims data. Insurers typically use statistical tests, such as the Kolmogorov-Smirnov test or the chi-square test, to evaluate the goodness of fit. A distribution that closely matches the data is preferred, as it provides more reliable estimates for future claims.
2. Skewness and kurtosis: skewness measures the asymmetry of a distribution, while kurtosis measures its tail heaviness. In insurance, it is important to consider the skewness and kurtosis of the claims data to capture any non-normal characteristics. Some distributions, such as the lognormal or Pareto distributions, are better suited for modeling the skewed and heavy-tailed data commonly observed in insurance claims.
3. Parameter estimation: probability distributions often have parameters that need to be estimated from the data. Insurers consider the ease and accuracy of parameter estimation methods for a given distribution. Maximum likelihood estimation is a common approach, but other methods like the method of moments or Bayesian estimation may also be applicable. A distribution with easily estimable parameters is desirable, as it simplifies the modeling process.
4. Interpretability: the interpretability of a probability distribution is crucial for insurance applications. Insurers and actuaries need to understand the underlying assumptions and characteristics of the selected distribution. Distributions like the normal (Gaussian) distribution or the gamma distribution are well known and have established interpretations, making them popular choices. However, it is important to balance interpretability with goodness of fit to ensure accurate modeling.
5. Tail behavior: the tail behavior of a distribution is important for capturing extreme events or catastrophic losses in insurance. Insurers need to assess whether a distribution adequately represents the tail risks inherent in the claims data. Heavy-tailed distributions, such as the Pareto or generalized Pareto distributions, are often considered for modeling extreme events.
6. Historical credibility: the historical credibility of a probability distribution is based on its track record and applicability to similar insurance portfolios or lines of business. Insurers often rely on industry experience, expert judgment, and historical data from similar risks to assess the suitability of a particular distribution. Distributions that have been successfully used in the past for similar insurance applications may carry more weight in the selection process.
It is important to note that the choice of probability distribution should be based on a comprehensive analysis of the specific insurance context and the characteristics of the claims data. Insurance companies often employ experienced actuaries and statistical experts to evaluate and compare various distributions against these criteria in order to make informed decisions about the most appropriate probability distribution for their insurance applications. The test statistic Y 2 α (r − 1) depends on the MLEs computed on the initial non-grouped real data, and this statistic is of particular importance among all goodness-of-fit tests.
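The goodness-of-fit criterion above can be illustrated with a short sketch. Since the RGG model is not implemented in standard libraries, a gamma distribution stands in here purely to show the workflow; the data and all parameter values are synthetic assumptions, not taken from the paper.

```python
# Sketch of criterion 1 (goodness of fit): fit a candidate distribution to
# claims-like data and test the fit with a Kolmogorov-Smirnov test.
# A gamma model is used only as a stand-in for the RGG distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
claims = rng.gamma(shape=2.0, scale=1500.0, size=500)  # synthetic claim sizes

# Fit the candidate model by maximum likelihood, then test the fit.
shape, loc, scale = stats.gamma.fit(claims, floc=0)
ks_stat, p_value = stats.kstest(claims, "gamma", args=(shape, loc, scale))

# A large p-value means the data are consistent with the fitted model.
print(f"KS statistic = {ks_stat:.4f}, p-value = {p_value:.4f}")
```

The same workflow applies to any candidate model for which the CDF can be evaluated numerically.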
The test statistic Y 2 α (r − 1), pioneered by Nikulin [5, 6] and Rao and Robson [7], has a chi-square distribution and recovers the information lost during data grouping. However, censoring renders the widely used goodness-of-fit tests inapplicable and leads to a variety of practical issues. As a result, several researchers have provided a wide range of improvements to the material that was already available. Bagdonavicius and Nikulin [8] developed a modified NRR statistic for statistical distributions with right censoring and unknown parameters. Since it recovers all the information lost during data regrouping, this variant of the NRR statistic may be used to fit data from fields where observations are frequently censored, such as survival analysis, reliability, and others. Following Nikulin [5, 6] and Rao and Robson [7], we will present modified NRR chi-square goodness-of-fit tests for adjusting the proposed model to complete and right censored data.
In the case of complete data, the NRR statistic is a well-known alternative to the conventional chi-square tests. It is founded on the differences between two estimates of the probability of falling within the grouping intervals. One estimate is based on the empirical distribution function, while the other is based on maximum likelihood calculations employing the ungrouped initial data to estimate the unknown parameters of the tested model. For further information, see Nikulin [5, 6] and Rao and Robson [7].
In general, the development of statistical methods for testing hypotheses and the validity of parametric distributions under censorship is accelerating, although the existence of censorship is considered a huge challenge. There have been numerous contributions to the applied side of the statistical literature on validation tests in the case of censored data.
The NRR test has been the subject of several research studies in the statistical literature. Because of their rarity, these studies can be counted; here, we will list the most recent ones. In this work, the RGG distribution is derived for risk analysis and distributional validation, and the uncensored and right censored scenarios are used to validate a modified chi-square goodness-of-fit test based on the NRR statistic (Y 2 α (r − 1)) and the modified NRR statistic (M 2 α (r)), respectively. The statistic Y 2 α (r − 1) is adopted for testing the null hypothesis H 0 according to which a certain complete sample follows the RGG model. The statistic Y 2 α (r − 1) is assessed via a comprehensive simulation study using the Barzilai-Borwein (BB) algorithm for complete data (see Ravi and Gilbert [9]). Then, the corresponding statistic is assessed via a comprehensive simulation study using the BB algorithm for censored data. To examine the performance of the tests as the sample size increases, we have relied on the standard mean square error (MSE) in all simulated experiments, accounting for varied sample sizes. Three uncensored real datasets (uncensored times between failures for repairable items, uncensored reliability data, and uncensored strength data) are used in statistical testing under the statistic Y 2 α (r − 1) for distributional validation. The uncensored real data included in the analysis are well known in the statistical literature and have been analyzed a great deal. In this work, we will focus on an important aspect of statistical analysis, namely hypothesis tests using these data. This material is motivated in the introduction, as most statistical works did not study and analyze this aspect despite the importance of hypothesis tests in statistical theory.
On the other hand, three right censored real datasets are used to assess the statistic M 2 α (r) for distributional validation under the RGG model. The censored real data that have been considered and included in the analysis are the censored bone marrow transplant data, the censored times to infection of kidney dialysis patients (censored times to infection data), and the censored strength of a certain type of braided cord.
The new NRR statistical test showed that using the new model when analyzing the right censored datasets is successful. In this context, we shall outline several recent studies that added to or changed the NRR statistic. It is important to note that a search of the statistical literature on this topic (the NRR goodness-of-fit test) will not find many new NRR goodness-of-fit extensions, and will find few research studies that applied this test, because the NRR goodness-of-fit test has specific requirements and strict procedures and demands censored data. As is generally known, obtaining novel censored data with which to apply and emphasize the significance of the new test is a difficult task. In the next few sections, we will discuss a few recent research studies that applied this test to real right censored datasets, along with a description of the conclusions of each study independently.
In this introduction, we do not fail to review some of the limitations of the current study; we present two basic limitations: (i) The datasets used must be strictly positive, because the new probability distribution is bound by this constraint. (ii) The new modified NRR test introduced in this work can only be applied to data subject to right censoring.
The main novelties of this work can be highlighted as follows: (1) Employing the new probability distribution in the analysis and evaluation of actuarial risks through a set of actuarial measures. These actuarial measures (indicators) have been carefully selected due to the quality of their results and their popularity in application. (2) The use of the new probability distribution in modelling processes and statistical hypothesis tests using the NRR test. (3) Developing the theory of statistical hypothesis testing for censored data by presenting a modified NRR test and applying it to real data. (4) Applying the modified NRR test in distributional validation using the new distribution. (5) Evaluating the performance of several different estimation methods in risk analysis processes. This approach to risk assessment using different estimation methods is a recent one, and few comparable studies are available.
This paper is distinguished from competing papers by the following points: (i) an actuarial application to insurance data; (ii) the use of different estimation methods in evaluating and analyzing risks; (iii) the combination of the original test and the modified test; and (iv) various applications of the original statistical test and the modified test.

Risk Indicators
The key risk indicators (KRIs) play a crucial role in the actuarial analysis of insurance risks. These indicators provide valuable insights into the level and nature of the risks associated with insurance portfolios, allowing actuaries to quantify, measure, and manage these risks effectively. Here are some key points highlighting the importance of risk indicators in actuarial analysis:
1. KRIs help actuaries quantify the level of risk in insurance portfolios. By using various metrics, such as loss ratios, claim frequencies, severity measures, or aggregate reserves, actuaries can assess the potential financial impact of risks. These indicators provide a numerical representation of the risk exposure and assist in decision-making processes related to pricing, reserving, and capital allocation.
2. KRIs enable actuaries to measure and assess the magnitude and likelihood of potential losses. Actuarial models and techniques, such as probability distributions, statistical methods, and simulation studies, are used in conjunction with risk indicators to estimate the probability and severity of adverse events. This helps insurers evaluate the financial implications and potential solvency risks associated with the insurance business.
3. KRIs serve as monitoring tools, allowing actuaries to track changes in risk levels over time. By regularly analyzing and comparing risk indicators against predefined thresholds or benchmarks, actuaries can identify emerging risks or deviations from expected risk levels. These indicators act as early warning signals, enabling insurers to take proactive measures to mitigate risks, adjust pricing, or revise underwriting strategies.
4. KRIs assist in segmenting insurance portfolios based on the level of risk. Actuaries can use risk indicators to identify high-risk segments or policyholders and allocate appropriate resources for risk mitigation. This segmentation helps in portfolio management, allowing insurers to optimize their risk exposure, diversify risks, and balance their overall risk portfolio.
5. KRIs play a crucial role in regulatory compliance and financial reporting for insurers. Regulators often require insurers to report and disclose risk-related information to ensure solvency and protect policyholders. Risk indicators, such as risk-based capital (RBC) ratios, economic capital models, or stress testing results, provide insights into the financial stability and resilience of insurance companies, ensuring compliance with regulatory requirements.
6. KRIs support informed decision-making and strategic planning for insurers. Actuaries rely on risk indicators to assess the profitability of insurance products, evaluate potential risks associated with new business ventures, and determine appropriate pricing strategies. These indicators help in setting the risk appetite, formulating risk management policies, and aligning business strategies with the risk tolerance and objectives of the insurance company.
The VAR indicator is frequently used to calculate the amount of capital needed to deal with probable adverse events. The VAR of the RGG distribution at the 100q% level, say VAR (Z) or π Z (q), is the 100q% quantile (or percentile); it equals Q(q), where the quantile function Q(U) can be derived by inverting (2). For a one-year horizon with q = 99%, the interpretation is that there is only a very small chance (0.01) that the insurance company will be bankrupted by an adverse outcome over the next year (see Wirch [10] for more details). Generally speaking, if the distribution of gains (or losses) is restricted to the normal distribution, it is acknowledged that the quantity VAR (Z) meets all coherence requirements. However, insurance datasets such as insurance claims and reinsurance revenues are typically skewed to the right or to the left.
The normal distribution is therefore not suitable to describe revenues from reinsurance and insurance claims. The TVAR of Z at the 100q% confidence level is the expected loss given that the loss exceeds the 100q% quantile of its distribution. The quantity TVAR (Z), which gives further details about the tail of the RGG distribution, is therefore the average of all the VAR values above the confidence level q. Moreover, TVAR (Z) can also be expressed as TVAR (Z) = e(Z) + VAR (Z), where e(Z) is the mean excess loss (MEL) function evaluated at the 100q% quantile (see Wirch [10]; Acerbi and Tasche [11]; and Tasche [12]). If e(Z) vanishes, then TVAR (Z) = VAR (Z), and for very small values of e(Z), the value of TVAR (Z) will be very close to VAR (Z). The TV risk indicator, which Furman and Landsman [13] developed, calculates the loss's deviation from the average along the tail; explicit expressions for the TV risk indicator under the multivariate normal distribution were also developed by Furman and Landsman [13]. As a statistic for optimal portfolio choice, Furman and Landsman [13] further developed the TMV risk indicator, which is based on the TV risk indicator. Then, for any random variable Z, TMV (Z) > TV (Z), and for π = 0, TMV (Z) reduces to TVAR (Z). We will use methodologies that provide numerical solutions to these complex functions, employing ready-made programs like R and MATHCAD to facilitate the numerical operations. The use of numerical methods has recently become popular for many reasons, the most important of which is the availability of ready-made statistical software; moreover, the quantile function of the RGG model is not known in closed form.
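For reference, the indicators discussed above admit the following standard forms (these are the conventional definitions from the risk literature, stated in the notation of the text; the paper's own displayed equations, which are not reproduced here, may differ in parameterization):

```latex
\begin{align}
  \mathrm{VAR}_q(Z) &= \pi_Z(q) = Q(q) = F^{-1}(q),\\
  \mathrm{TVAR}_q(Z) &= E\!\left[Z \mid Z > \mathrm{VAR}_q(Z)\right]
    = \frac{1}{1-q}\int_{\mathrm{VAR}_q(Z)}^{\infty} z\, f(z)\, dz,\\
  \mathrm{TV}_q(Z) &= E\!\left[\left(Z-\mathrm{TVAR}_q(Z)\right)^2 \mid Z > \mathrm{VAR}_q(Z)\right],\\
  \mathrm{TMV}_q(Z) &= \mathrm{TVAR}_q(Z) + \pi\,\mathrm{TV}_q(Z), \qquad 0 \le \pi \le 1,\\
  \mathrm{MEL}(Z) &= e(Z) = E\!\left[Z-\mathrm{VAR}_q(Z) \mid Z > \mathrm{VAR}_q(Z)\right]
    = \mathrm{TVAR}_q(Z) - \mathrm{VAR}_q(Z).
\end{align}
```

Note that with this form of TMV, setting π = 0 recovers TVAR, consistent with the relations stated in the text.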
Numerical approaches were applied in this paper's risk analysis and evaluation procedure (see Section 4), as well as in the problem of distributional validation under the NRR statistic and its modified version (see Section 7). For more details, see Alanzi et al. [14], Zhou and Gao [15], Hamed et al. [16], and Yousof et al. [17].

Risk Assessment under Artificial Data

(1) The measures VAR (Z), TVAR (Z), and TMV (Z) increase when q increases for all estimation methods. (2) The measures TV (Z) and MEL (Z) decrease when q increases for all estimation methods. OLSE for most values of q. (4) We can confirm from the numbers in the four tables that all functions are satisfactory and that no approach can be clearly recommended over another. Based on these findings, we are obligated to provide an application based on real data that can select one method over another, determining the best and most appropriate methods. In other words, the simulation study did not help to weight the four methods decisively, because they showed similar results in risk assessment. These convergent results assure that all methods have good and acceptable performance in modelling actuarial data and risk assessment.
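The behavior described above can be reproduced with empirical estimators of the KRIs computed from a simulated loss sample. Since the RGG quantile function has no closed form, a lognormal sample with hypothetical parameters stands in for illustration; only the estimator logic mirrors the text.

```python
# Empirical estimators of the KRIs at confidence level q, mirroring the
# numerical approach described above. The lognormal sample is a stand-in.
import numpy as np

def kri(sample, q=0.95, pi=0.5):
    """Return (VAR, TVAR, TV, TMV, MEL) estimated at level q."""
    var = np.quantile(sample, q)
    tail = sample[sample > var]           # losses beyond the 100q% quantile
    tvar = tail.mean()                    # expected loss given exceedance
    tv = np.mean((tail - tvar) ** 2)      # tail variance
    tmv = tvar + pi * tv                  # tail mean-variance
    mel = tvar - var                      # mean excess loss at the quantile
    return var, tvar, tv, tmv, mel

rng = np.random.default_rng(7)
losses = rng.lognormal(mean=8.0, sigma=1.2, size=100_000)
for q in (0.90, 0.95, 0.99):
    var, tvar, tv, tmv, mel = kri(losses, q)
    print(f"q={q:.2f}: VAR={var:,.0f}  TVAR={tvar:,.0f}  MEL={mel:,.0f}")
```

Running the loop over increasing q illustrates finding (1): the VAR and TVAR columns grow with the confidence level.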

Risk Assessment under the Insurance Payment Claims Data

Analyzing historical insurance claims data using probability distributions is essential for several reasons:
1. Probability distributions provide a mathematical framework to model and analyze the frequency and severity of insurance claims. By fitting historical claims data to appropriate probability distributions, insurers can estimate the likelihood of different claim amounts and frequencies occurring in the future. This information is crucial for assessing the overall risk exposure of the insurance company.
2. Probability distributions help insurers determine appropriate premiums for insurance policies. By understanding the distribution of claim amounts and frequencies, insurers can calculate the expected value of claims and incorporate it into the pricing structure. This ensures that the premiums charged to policyholders align with the potential risks faced by the insurer, maintaining a fair and sustainable pricing model.
3. Accurate estimation of potential claim costs is vital for insurers to set aside adequate reserves and plan their financial stability. By analyzing historical claims data using probability distributions, insurers can estimate the potential range of future claim payments and allocate sufficient funds to cover these liabilities. It enables insurers to make informed decisions about capital management, investment strategies, and financial reserves.
4. Probability distributions provide insights into the potential volatility and tail risks associated with insurance claims. Insurers can analyze the shape of the distribution and its parameters to identify extreme events and tail risks that may have a significant impact on the financial health of the company. This knowledge helps insurers manage risks effectively, develop appropriate underwriting guidelines, and implement risk mitigation strategies.
5. Probability distributions allow insurers to simulate various scenarios and evaluate the potential outcomes of different policy design choices.
By incorporating historical claims data into probabilistic models, insurers can assess the impact of different policy terms, coverage limits, deductibles, and other factors on claim frequencies and amounts. This information helps insurers make data-driven decisions when designing insurance products. Overall, analyzing historical insurance claims data using probability distributions enables insurers to gain insights into the nature of the risks they face, make informed decisions, and effectively manage their operations. It supports pricing, reserving, risk assessment, and underwriting activities, contributing to the financial stability and success of insurance companies. In this work, we will consider and analyze claims data collected from 2007 to 2013. This work proposes certain KRI quantities for the left-skewed insurance claims data under the EEC distribution, including VAR, TVAR, TV, and TMV [18]. One of the finest techniques for heavy-tailed distributions is based on the t-Hill approach, an upper order statistic modification of the testimator. Table 5 reports the KRIs under the insurance claims data and the MLE method for the RGG (Φ 1 ) and GG (Φ 2 ) models, where Φ 1 = (0.00015, 0.28234) and Φ 2 = (0.00082, 1.22414). Table 6 gives the KRIs under the insurance claims data and the OLSE method for the RGG (Φ 1 ) and GG (Φ 2 ) models, where Φ 1 = (0.000102, 0.22171) and Φ 2 = (0.00065, 0.91821). Table 7 provides the KRIs under the insurance claims data and the WLSE method for the RGG (Φ 1 ) and GG (Φ 2 ) models, where Φ 1 = (0.00013, 0.24966) and Φ 2 = (0.00015, 0.28234). Table 8 presents the KRIs under the insurance claims data and the ADE method for the RGG (Φ 1 ) and GG (Φ 2 ) models, where Φ 1 = (0.00013, 0.25097) and Φ 2 = (0.00073, 1.06384).
Based on these tables, the following results can be highlighted: (1) For all risk assessment methods: (2) For all risk assessment methods: (3) For all risk assessment methods: (4) For all risk assessment methods: (6) For the RGG model: nearly for all q values, the OLSE method is recommended since it provides the most acceptable risk exposure analysis; then, the MLE method is recommended as a second choice. However, the other two methods also perform well. For the GG model: nearly for all q values, the OLSE method is recommended since it provides the most acceptable risk exposure analysis; then, the MLE method is recommended as a second choice. However, the other two methods also perform well. (7) For all q values and under all risk methods: the RGG model is better than the GG model. It is worth noting that the two distributions have the same number of parameters, but the new distribution is the best at modelling the insurance claims reimbursement data and assessing the actuarial risk. We hope that the proposed distribution will gain a great deal of interest from actuaries and practitioners in future actuarial and applied studies.

Distributional Validity under the UMLE Method.
Here, the UMLE method is used to estimate the RGG distribution's parameters. Let z 1 , . . . , z n be a random sample distributed according to the RGG model. The uncensored likelihood function is L(Φ) = ∏_{i=1}^{n} f_Φ(z_i). Then, the uncensored log-likelihood function reduces to l(Φ) = n ln(2θ) + 2n ln(λ) + · · ·, where the remaining terms follow from the PDF in (3). The MLEs can be obtained by solving the corresponding score equations.
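The numerical maximization of an uncensored log-likelihood can be sketched as follows. The RGG density is not fully reproduced in the text, so a two-parameter Weibull model with rate lam and shape theta stands in here purely to illustrate the procedure; the data and starting values are assumptions.

```python
# Numerical maximization of an uncensored log-likelihood, as described above.
# A Weibull model (rate lam > 0, shape theta > 0) stands in for the RGG.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
z = rng.weibull(1.5, size=400) / 0.8          # synthetic sample: rate 0.8, shape 1.5

def neg_loglik(params):
    lam, theta = params
    if lam <= 0 or theta <= 0:
        return np.inf                          # enforce the positivity constraints
    # Weibull log-density: log(theta) + log(lam) + (theta-1) log(lam z) - (lam z)^theta
    u = lam * z
    return -np.sum(np.log(theta) + np.log(lam) + (theta - 1) * np.log(u) - u**theta)

res = minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead")
lam_hat, theta_hat = res.x
print(f"MLEs: lambda = {lam_hat:.3f}, theta = {theta_hat:.3f}")
```

In practice, the same pattern applies with the RGG log-density substituted into neg_loglik, or with the BB algorithm used in place of Nelder-Mead.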

Distributional Validity Utilizing the Right CMLE Method.
Let z = (z 1 , . . . , z n ) T be a right censored data sample with fixed censoring time τ from the RGG distribution with parameter vector Φ. Each z i can be written as z i = (z i , π i ), where π i is the censoring indicator. The right censored log-likelihood function L n (Φ) has the form given in (15), where S Φ is the survival function of the RGG model; based on (15), we use (16) to obtain the non-linear scoring equations. Similar to the complete data scenario, we employ numerical techniques such as the Newton-Raphson method, the Monte Carlo method, or the BB-solve package to compute the MLEs.
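The structure of the censored log-likelihood (uncensored observations contribute log f, censored ones contribute log S) can be sketched as follows. As above, a Weibull model stands in for the RGG, and the censoring time and parameters are illustrative assumptions.

```python
# Right censored maximum likelihood: failures contribute log f, censored
# observations contribute log S. A Weibull model stands in for the RGG.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
t = rng.weibull(1.5, size=500) / 0.8   # true failure times (rate 0.8, shape 1.5)
tau = 2.0                              # fixed censoring time
z = np.minimum(t, tau)                 # observed times
delta = (t <= tau).astype(float)       # 1 = failure observed, 0 = right censored

def neg_loglik(params):
    lam, theta = params
    if lam <= 0 or theta <= 0:
        return np.inf
    u = lam * z
    log_f = np.log(theta) + np.log(lam) + (theta - 1) * np.log(u) - u**theta
    log_S = -(u**theta)                # Weibull survival: S(z) = exp(-(lam z)^theta)
    return -np.sum(delta * log_f + (1 - delta) * log_S)

res = minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead")
lam_hat, theta_hat = res.x
print(f"CMLEs: lambda = {lam_hat:.3f}, theta = {theta_hat:.3f}")
```

The CMLEs stay close to the generating parameters even though roughly one observation in eight is censored at τ = 2.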

Testing Procedures for the Statistic Y 2 α (r − 1)

For testing the null hypothesis H 0 according to which a sample Z 1 , Z 2 , . . . , Z n belongs to (2), consider r equiprobable grouping intervals I 1 , I 2 , . . . , I r , where I j = [b j−1 , b j ], I i ∩ I j = ∅ for i ≠ j, ∪ r j=1 I j = R 1 , and b j = F −1 (j/r), j = 1, . . . , r. If v = (v 1 , . . . , v r ) T denotes the vector of the numbers of observations z i falling into these intervals I j , then the NRR statistic Y 2 α (r − 1) due to Nikulin [5] and Rao and Robson [7] is defined accordingly. Here, I(Φ) and J(Φ) are the estimated information matrices on non-grouped and grouped data, respectively, and Φ is the vector of the MLEs on the initial data. The elements of the vector l(Φ) = (l k (Φ)) T 1×b are given in (23), for k = 1, . . . , b and i = 1, . . . , r, where b is the number of model parameters. The distribution of Y 2 n (r − 1; Φ) is χ 2 r−1 . To construct the test statistic Y 2 α (r − 1) corresponding to the RGG distribution with parameter vector Φ = (λ, θ) T , we first calculate the MLEs Φ = (λ, θ) T and the interval limits b j . Secondly, the derivatives ∂p j (Φ)/∂Φ k are derived. Finally, we obtain the statistic Y 2 n (r − 1; Φ), which allows us to verify whether the data belong to the RGG distribution.
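The grouping step described above can be sketched in code: fit the model on the ungrouped data, compute the equiprobable interval limits b_j = F^{-1}(j/r) at the fitted parameters, and count the observations per cell. Note that only the Pearson component is computed here; the information-matrix correction term of the full NRR statistic is omitted, and a gamma model stands in for the RGG.

```python
# Sketch of the grouping behind Y^2: r equiprobable cells with limits
# b_j = F^{-1}(j/r), plus the Pearson component sum (v_j - n/r)^2 / (n/r).
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, r = 300, 5
z = rng.gamma(shape=2.0, scale=1.0, size=n)      # stand-in for RGG data

shape, loc, scale = stats.gamma.fit(z, floc=0)   # MLEs on ungrouped data
b = stats.gamma.ppf(np.arange(1, r) / r, shape, loc=loc, scale=scale)

v, _ = np.histogram(z, bins=np.concatenate(([0.0], b, [np.inf])))  # counts v_j
expected = n / r                                 # equiprobable cells
pearson = np.sum((v - expected) ** 2 / expected)

print("interval limits b_j:", np.round(b, 3))
print("observed counts v_j:", v, "Pearson part =", round(pearson, 3))
```

The full Y 2 statistic adds a quadratic correction built from I(Φ) and J(Φ), which restores the chi-square(r − 1) limit despite estimating Φ on ungrouped data.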

Testing Procedures for the M 2 α (r) Test Statistic with Right Censorship

The NRR statistic described above was adjusted by Bagdonavičius and Nikulin [8]. Generally, the modified statistic is established based on the vector Λ = (Λ 1 , . . . , Λ r ) T with Λ j = (1/√n)(O j (z) − e j (z)), j = 1, . . . , r and r > b, where O j (z) and e j (z) are the observed and expected numbers of failures falling into the grouping interval I j , and the statistic M 2 α (r) is defined by Λ T Σ − Λ, where Σ − refers to the generalized inverse of the covariance matrix Σ. To facilitate the calculation process, this novel NRR statistical test can be expressed via a quadratic form Ω involving the hazard rate function of the RGG model. Under the null hypothesis H 0 , the limit distribution of the statistic M 2 α (r) is a chi-square with r = rank(Σ) degrees of freedom. For more details on modified chi-square tests, one can see the book of Voinov et al. [19]. For testing the null hypothesis that a right censored sample is described by the RGG distribution, we develop M 2 α (r) corresponding to this distribution. To this end, we have to compute the MLEs Φ = (λ, θ) T on the initial data (see Section 3), the estimated information matrix i ll′ , which can be deduced from the score functions, and the estimated interval limits b j (z). To apply this test statistic, the expected numbers of failures e j (z) falling into the grouping intervals I j must be the same for every j, so the estimated interval limits b j (z) are obtained from the cumulative hazard rate function of the RGG model. The numbers e j (z) and O j (z) can then be obtained, and the components of the estimated matrix K can be derived (and then calculated); the estimated matrix W is in turn derived from the matrix K. Therefore, the test statistic can be obtained easily.
Assessing Y 2 α (r − 1) and M 2 α (r) with Some Applications

We performed a significant investigation using numerical simulations in this section to demonstrate the flexibility and effectiveness of the tests suggested in this work. We then used actual data from reliability and survival analysis to run these tests.
For simulating Y 2 α (r − 1) under the UMLE method, the data were simulated N = 10,000 times under the sample sizes n 1 = 25, n 2 = 50, n 3 = 130, n 4 = 350, n 5 = 500, and n 6 = 1000. Using the BB algorithm and the R software, the MLEs and their mean square errors (MSEs) are calculated and presented in Table 9. For testing the null hypothesis H 0 according to which the data follow the RGG distribution, we calculate the Y 2 α (r − 1) test statistic and reject the null hypothesis H 0 when Y 2 α (r − 1) > χ 2 α (r − 1), where the nominal rejection levels are α 1 = 0.01, α 2 = 0.05, and α 3 = 0.10. Table 10 gives the theoretical risk and the empirical risk for the complete case. The levels simulated for the statistic Y 2 α (r − 1) agree with the corresponding theoretical levels of the chi-square distribution with (r − 1) degrees of freedom, as can be seen after accounting for simulation errors. In light of this, we can state that the test suggested in this study is appropriate for data obtained from a RGG model.

Simulating M 2 α (r)

For simulating M 2 α (r) under the censored maximum likelihood method, the data were simulated N = 10,000 times under the sample sizes n 1 = 25, n 2 = 50, n 3 = 130, n 4 = 350, n 5 = 500, and n 6 = 1000. Using the BB algorithm and the R software, the MLEs and their mean square errors (MSEs) are calculated and presented in Table 11. For testing the null hypothesis H 0 according to which the data follow the RGG distribution, we calculate the M 2 α (r) test statistic and reject the null hypothesis H 0 when M 2 α (r) > χ 2 α (r), where the nominal rejection levels are α 1 = 0.01, α 2 = 0.05, and α 3 = 0.10. Table 12 gives the theoretical risk and the empirical risk for the censored case. The levels simulated for the statistic M 2 α (r) agree with the corresponding theoretical levels of the chi-square distribution with r degrees of freedom, as can be seen after accounting for simulation errors. In light of this, we can state that the test suggested in this study is appropriate for data obtained from a RGG model.
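The comparison of empirical and nominal rejection levels can be reproduced in miniature. A Pearson statistic with known parameters and r equiprobable cells stands in for the paper's statistics (its null law is exactly chi-square(r − 1)); the exponential model, N, n, and r are illustrative assumptions mirroring the simulation design above.

```python
# Empirical rejection levels of a chi-square goodness-of-fit statistic
# under H0, at nominal levels 0.01, 0.05, 0.10 over N replications.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)
N, n, r = 10_000, 130, 5
edges = stats.expon.ppf(np.arange(r + 1) / r)    # equiprobable cells for Exp(1)
edges[-1] = np.inf
stats_sim = np.empty(N)
for i in range(N):
    z = rng.exponential(size=n)                  # data simulated under H0
    v, _ = np.histogram(z, bins=edges)
    stats_sim[i] = np.sum((v - n / r) ** 2 / (n / r))

for alpha in (0.01, 0.05, 0.10):
    crit = stats.chi2.ppf(1 - alpha, r - 1)
    empirical = np.mean(stats_sim > crit)
    print(f"alpha = {alpha:.2f}: empirical level = {empirical:.4f}")
```

Up to Monte Carlo error, the printed empirical levels track the nominal ones, which is the agreement the tables above report for Y 2 α (r − 1) and M 2 α (r).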

Data Analysis
Three examples from various fields are used to demonstrate the applicability of the proposed model. We utilize M²_α(r) to fit the first one's censored data from a survival analysis to the predicted distributions. For the complete-data scenario, Y²_α(r − 1) is used to check whether the suggested model can accurately represent the two remaining datasets (for more relevant datasets, see Emam et al. [20]).

Strength of Glass Fiber Data.
The third dataset includes the strength of glass fiber data given by Smith and Naylor [24]. The strengths of glass fiber data have received considerable attention in statistical modelling, as many researchers and academics have modelled and examined them and drawn several inferences from them. Table 5 gives the values of a_j, e_j(z), O_j(z), K_1j,Φ, and K_2j,Φ under r = 5. Table 13 gives the values of b_j(z), e_j(z), O_j(z), K_1j,Φ, K_2j,Φ, and K_3j,Φ for the data of times to infection of kidney dialysis patients. Then, calculating the modified NRR test statistic gives M²_0.05(5) = 7.6324. Since M²_0.05(5) = 7.6324 < χ²_0.05(5) = 11.0705, we can accept the null hypothesis that these data follow the RGG distribution.

Concluding Remarks
A novel continuous probability distribution called the Rayleigh generalized gamma (RGG) distribution is introduced and studied in this work, approached from fresh angles that depart from those often covered by scholars. To highlight more practical aspects of risk assessment and analysis, distributional validation, and their related applications to complete and censored data, we chose to pass over many theoretical results and algebraic derivations; this is not to say that they are unimportant. However, we covered some theoretical aspects of the RGG distribution by presenting and discussing some novel characterizations based on related theories, such as characterizations based on two truncated moments, characterizations in terms of the hazard function, and characterizations based on the conditional expectation of a function of the random variable. By analyzing a collection of commonly used financial indicators, such as the value-at-risk (VAR), tail-value-at-risk (TVAR), tail variance (TV), tail mean-variance (TMV), and mean excess loss (MEL) function, it is possible to analyze and evaluate the risks that insurance firms face. The maximum likelihood estimation method, the ordinary least squares method, the weighted least squares estimation method, and the Anderson-Darling estimation method are all described as estimation strategies for the major key risk indicators. These four methods were applied for the actuarial evaluation, and a comparison is presented to determine the best method under a simulation study (for artificial assessment) and under an application to insurance claims data. The simulation is performed under three confidence levels, considering various sample sizes.
With regard to the application to insurance claims data, the following results can be highlighted. For the RGG model, for nearly all q values, the OLSE method is recommended since it provides the most acceptable risk exposure analysis; the MLE method is recommended as a second choice. For the GG model, for nearly all q values, the OLSE method is likewise recommended first, followed by the MLE method. In comparing the various q values and risk methods, it has been found that the RGG (Rayleigh generalized gamma) model outperforms the GG (generalized gamma) model. The RGG distribution demonstrates superior performance when used to model insurance claims reimbursement data and assess actuarial risk, even though both probability distributions have an equal number of parameters. Given this observation, it is expected that actuaries and practitioners will show significant interest in adopting the RGG distribution in future actuarial and applied research. In the framework of distributional validation and statistical hypothesis testing for complete data, the well-known Nikulin-Rao-Robson statistic (Y²_α(r − 1)), which is based on the uncensored maximum likelihood estimators on initial nongrouped data, is considered under the RGG model. The Y²_α(r − 1) statistic is assessed under a simulation study and under three real datasets as well, and the following results can be highlighted: (i) For the uncensored times between failures for repairable items data: Y²_0.05(5) = 7.6324 < χ²_0.05(5) = 11.0705; therefore, we can accept the null hypothesis that the times between failures for repairable items data follow the RGG distribution. (ii) For the uncensored reliability data: Y²_0.05(6) = 8.6325 < χ²_0.05(6) = 12.59159; therefore, we can accept the null hypothesis that the reliability data follow the RGG distribution.
(iii) For the uncensored strength data: Y²_0.05(6) = 7.0362 < χ²_0.05(6) = 12.59159; therefore, we can accept the null hypothesis that the strength data follow the RGG distribution.
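The OLSE approach recommended above minimizes the squared distance between the fitted CDF and the empirical plotting positions i/(n + 1) at the order statistics. A minimal sketch, with a gamma model as a hypothetical stand-in for the RGG distribution:

```python
import numpy as np
from scipy import stats, optimize

def olse_fit(x):
    """Ordinary least squares estimation for a distribution fit:
    minimize sum_i (F(x_(i); theta) - i/(n+1))^2 over theta.
    Gamma model used as a stand-in for RGG (illustration only)."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    p = np.arange(1, n + 1) / (n + 1)  # empirical plotting positions

    def obj(theta):
        k, scale = np.exp(theta)  # positivity via log-parameterization
        return np.sum((stats.gamma.cdf(x, k, scale=scale) - p) ** 2)

    res = optimize.minimize(obj, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    return np.exp(res.x)  # (shape, scale) estimates
```

The WLSE variant weights each squared term by the inverse variance of the corresponding order statistic; the comparison between methods is then carried out on the risk indicators computed from each fit.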
In the framework of distributional validation and statistical hypothesis testing for censored data, a modified NRR statistic (M²_α(r)), which is based on the censored maximum likelihood estimators on initial nongrouped data, is considered under the RGG model. The M²_α(r) statistic is assessed under a comprehensive simulation study and under three real datasets, and the following results can be highlighted: (i) For the censored times to infection of kidney dialysis patients data: M²_0.05(5) = 7.6324 < χ²_0.05(5) = 11.0705; therefore, we can accept the null hypothesis that the data of times to infection of kidney dialysis patients follow the RGG distribution.
(ii) For the censored bone marrow transplant data: M²_0.05(4) = 6.2345 < χ²_0.05(4) = 9.4877; therefore, we can accept the null hypothesis that the bone marrow transplant data follow the RGG distribution. (iii) For the censored strength of a certain type of braided cord data: M²_0.05(4) = 5.9637 < χ²_0.05(4) = 9.4877; therefore, we can accept the null hypothesis that the strength of a certain type of braided cord data follow the RGG distribution.
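All of the accept/reject decisions above reduce to comparing a test statistic with the 95% chi-square critical value at the relevant degrees of freedom; this can be verified directly (the helper name is hypothetical):

```python
from scipy.stats import chi2

def accept_h0(stat, df, alpha=0.05):
    """Accept H0 (data follow the fitted model) if the statistic is
    below the chi-square critical value at level alpha."""
    return stat < chi2.ppf(1 - alpha, df)

# 95% critical values quoted in the text: df=4 -> 9.4877,
# df=5 -> 11.0705, df=6 -> 12.59159
for df in (4, 5, 6):
    print(df, round(chi2.ppf(0.95, df), 5))

print(accept_h0(7.6324, 5))  # kidney dialysis data decision
```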