From Humoral Theory to Performant Risk Stratification in Kidney Transplantation

The purpose of the present review is to describe how the model for risk stratification of transplant outcomes in kidney transplantation can be improved by incorporating novel insights into donor-specific anti-HLA antibody (DSA) characteristics. The detection of anti-HLA DSA is widely used for the assessment of pre- and posttransplant risks of rejection and allograft loss; however, not all anti-HLA DSA carry the same risk for transplant outcomes. These antibodies have been shown to cause a wide spectrum of effects on allografts, ranging from the absence of injury to indolent or full-blown acute antibody-mediated rejection. Consequently, the presence of circulating anti-HLA DSA does not provide a sufficient level of accuracy for the risk stratification of allograft outcomes. Enhancing the predictive performance of anti-HLA DSA is currently one of the most pressing unmet needs for facilitating individualized treatment choices that may improve outcomes. Recent advancements in the assessment of anti-HLA DSA properties, including their strength, complement-binding capacity, and IgG subclass composition, have significantly improved risk stratification models for predicting allograft injury and failure. Although risk stratification based on anti-HLA DSA properties appears promising, further studies that specifically address immunological risk stratification in large and unselected populations are required to define the benefits and cost-effectiveness of such comprehensive assessment prior to clinical implementation.


Introduction
Circulating donor-specific anti-HLA antibodies (anti-HLA DSA) were recognized in hyperacute rejection in 1969 [1]; however, it took more than 40 years for the transplant community to consider the presence of anti-HLA DSA as the main reason for allograft rejection and long-term failure [2,3]. There is mounting evidence, both experimental and clinical, in support of Dr. Terasaki's prediction as outlined in "the humoral theory of transplantation" [4,5]. Furthermore, the transplant community has recognized circulating anti-HLA DSA detected prior to or after transplantation as one of the most informative biomarkers for predicting worse allograft outcomes [6].
Although the detection of anti-HLA DSA is widely used in clinical practice for the assessment of pre-and posttransplant risks of rejection and allograft loss, it has become indisputable that not all anti-HLA DSA carry the same risk for transplant outcomes [7]. These antibodies have been shown to cause a wide spectrum of effects on allografts, ranging from the absence of injury to indolent or full-blown acute antibody-mediated rejection (ABMR) [8,9]. Consequently, the presence of circulating anti-HLA DSA does not provide a sufficient level of accuracy for the risk stratification of allograft outcome. Enhancing the predictive performance of anti-HLA DSA is currently one of the most pressing unmet needs for facilitating individualized treatment choices that may improve outcomes [7].
Over the last decade, studies have focused on defining how the level of circulating anti-HLA DSA may explain the substantial phenotypic variability in allograft injury. First, anti-HLA DSA strength (mean fluorescence intensity [MFI] as defined by Luminex single antigen bead [SAB] testing) has been associated with antibody-mediated allograft injury [10]. In addition, a more comprehensive assessment of circulating anti-HLA DSA that includes their capacity to bind complement and their IgG subclass composition would also provide clinically relevant information with respect to the prediction of allograft injury and loss. The purpose of the present review is to describe how the model for risk stratification of transplant outcomes in kidney transplantation can be improved by incorporating novel insights into anti-HLA DSA characteristics.

Contemporary Multidimensional Assessment of Circulating Donor-Specific Anti-HLA Antibodies
Introduction of multiplex-bead array assays has significantly improved the sensitivity and precision of circulating anti-HLA DSA detection. The benefits and limitations of solid-phase assays using SAB have been captured in many reviews identifying potential problems that may impact the interpretation of antibody strength and patient management [7,12]. For example, false-positive results may be reported due to antibodies against denatured HLA molecules, or falsely weak or negative results may occur in the presence of intrinsic and extrinsic factors inhibiting the SAB assay [13]. Two studies elegantly demonstrated that the falsely low MFI in SAB assays, the "prozone" effect, is caused by C1 complex formation that initiates classical complement activation culminating in dense C3b/d deposition, thus preventing secondary antibody binding [14,15]. Furthermore, biologic confounding factors related to epitope sharing may also impact MFI values. Currently, SAB assays provide a semiquantitative measurement of antibody strength but are not approved for quantitative assessment of antibody level. Removing potential inhibitors in the sera with various treatment modalities has improved HLA antibody detection, but it did not address the potential oversaturation of the beads in the presence of high-titer antibody. Tambur et al. demonstrated that serial dilution of sera before SAB testing provided a reliable measure of antibody strength over time and was informative for monitoring antibody levels before and after desensitization protocols [10,16]. Although the standard SAB assay has improved the sensitivity of HLA antibody testing, it does not discriminate between complement-binding and noncomplement-binding IgG subclasses [7]. Flow cytometry-based detection of HLA antibody using FlowPRA beads was the first cell-independent assay to demonstrate complement activation in vitro [17].
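The titration approach described above can be sketched in a few lines of code: the titer is read as the highest serial dilution at which the bead signal remains positive. The dilution series, the 1000-MFI positivity cutoff, and the function name below are illustrative assumptions, not values or procedures taken from the cited studies.

```python
# Illustrative sketch: estimate HLA antibody titer from serial-dilution SAB MFI.
# The 1000-MFI positivity cutoff and the dilution series are assumptions for
# demonstration only, not thresholds endorsed by the studies cited above.

MFI_CUTOFF = 1000  # assumed positivity threshold


def estimate_titer(dilution_mfi):
    """Return the reciprocal of the last dilution with MFI above the cutoff.

    dilution_mfi: list of (reciprocal_dilution, mfi) pairs ordered from
    neat (1) to most dilute, e.g. [(1, 14500), (16, 9800), (32, 600)].
    """
    titer = 0
    for reciprocal, mfi in dilution_mfi:
        if mfi >= MFI_CUTOFF:
            titer = reciprocal
    return titer


# A high-titer antibody can look similar to a moderate one at neat serum
# (bead saturation) but the two separate clearly on dilution:
high = [(1, 15000), (16, 14000), (64, 9000), (256, 800)]
moderate = [(1, 14500), (16, 2000), (64, 400), (256, 50)]
print(estimate_titer(high))      # 64
print(estimate_titer(moderate))  # 16
```

The toy example mirrors the oversaturation problem noted above: both sera look similarly "strong" neat, but only the dilution series reveals the difference in titer.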
Recently, two SAB assays have been developed to detect C1q- or C3d-binding antibodies [18-27]. The ability of an HLA antibody to bind complement has been shown to depend on the composition of its IgG subtypes: complement-binding IgG1 and IgG3 versus noncomplement-binding IgG2 and IgG4 [28]. However, we have shown in sensitized renal transplant recipients that the mere presence of a complement-binding IgG subtype in the mixture was not enough to detect C1q-binding antibody [28]. Several studies have reported a strong correlation between antibody strength (>8000 MFI) and C1q-binding reactivity [29]. The best correlation, however, was found between an HLA antibody titer of >1:16 or 1:32 and complement-binding ability [10,30]. We have also compared the neat MFI, C1q reactivity, and IgG subtype level (MFI) in a group of sensitized renal transplant recipients [28]. For example, despite strong total IgG SAB MFI (8000-11000), C1q reactivity was negative for anti-A2, -A68, -A23, -B13, -DR12, and -DR1; the IgG subtypes for these specificities consisted of only low-level IgG1 and/or IgG2 (Table 1). In contrast, HLA antibodies that consisted of a combination of multiple IgG subtypes were more often C1q-reactive, as long as one of the subtypes was IgG1 or IgG3 (anti-B53, -DR10, -DQ6, -DQ7/DQA1*05, -DQ7/DQA1*03, and -DQB1*05:01). Interestingly, anti-HLA-B53 was complement-binding even though it consisted of strong-level IgG4 in combination with IgG1, IgG2, and IgG3, whereas anti-HLA-B35, which consisted of similar strong-level IgG4 in combination with low-level IgG1 and IgG3, was not complement-binding (Table 1). These examples illustrate the complexity of the complement-binding capacity of HLA antibodies; considering the composition of the IgG subtypes and their levels may be more informative for predicting C1q reactivity than the neat MFI of the SAB assay.
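As a loose illustration of this reasoning, the sketch below flags an antibody as likely C1q-reactive when a complement-fixing subclass (IgG1 or IgG3) is present above an assumed level, regardless of the neat total IgG MFI. The cutoff and the example MFI values are invented for demonstration; they are not the thresholds or data of the cited studies, and real C1q reactivity is more complex than any single rule.

```python
# Hypothetical heuristic, not a validated assay rule: predict C1q reactivity
# from the IgG subclass composition rather than from the neat total IgG MFI.
# The 500-MFI subclass cutoff and the example values are invented.

SUBCLASS_CUTOFF = 500  # assumed minimum subclass MFI; illustrative only


def likely_c1q_reactive(subclass_mfi):
    """subclass_mfi: dict of subclass -> MFI, e.g. {"IgG1": 1200, "IgG2": 5300}."""
    return any(subclass_mfi.get(s, 0) >= SUBCLASS_CUTOFF
               for s in ("IgG1", "IgG3"))


# Two antibodies with comparably strong total IgG MFI can still differ:
multi_subtype = {"IgG1": 1200, "IgG2": 5300, "IgG3": 2000, "IgG4": 1000}
low_igg1_only = {"IgG1": 200, "IgG2": 4300}
print(likely_c1q_reactive(multi_subtype))  # True
print(likely_c1q_reactive(low_igg1_only))  # False
```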
Of note, none of the examples depicted in Table 1 were considered "prozone," since the total IgG MFI was >8000 with or without C1q binding. In contrast, in prozone the SAB MFI value for the HLA antibody is low while the C1q-SAB MFI is high [7,10,13,30]. Removing the complement interference by DTT, heat, or EDTA treatment has improved the interpretation of the SAB assay; however, it did not address the limitations of the SAB assay for determining either the titer of DSA or the composition of the IgG subtypes.
In summary, based on current knowledge of SAB testing, a single MFI value is not sufficient to predict clinical outcomes. Comprehensive monitoring to facilitate risk assessment and patient-tailored management should incorporate an algorithm that addresses HLA antibody characteristics.

Circulating Donor-Specific Anti-HLA Antibodies for Risk Stratification in Organ Transplantation
The present review focused on prospective cohort studies that used a hard endpoint (allograft loss) among observational studies assessing the clinical value of anti-HLA DSA, in order to provide the best level of evidence. To date, most studies in kidney transplantation have been limited to association analyses between anti-HLA DSA and ABMR occurrence, allograft histological lesions, or allograft failure. Furthermore, the detection of anti-HLA DSA in an individual patient has not been shown to improve the accuracy of existing prediction models based on conventional risk factors [31]. In contrast, in other fields such as cancer or cardiovascular disease, emerging biomarkers have made an important impact on risk prediction [32,33]. A novel strategy using a dynamic integration of anti-HLA DSA and their characteristics should be addressed using dedicated metrics for discrimination and risk reclassification [34-36].
An illustration of such a strategy is provided in Figure 1.

The Value of Donor-Specific Anti-HLA Antibody Detection for Predicting Outcomes of Kidney Transplantation: Role of Systematic Monitoring

Short-term and long-term kidney allograft survival have been shown to be substantially worse among patients with pretransplant anti-HLA DSA detected by cell-based assays using complement-dependent cytotoxicity testing [1] or flow cytometry crossmatching [37], compared with both sensitized patients without anti-HLA DSA and nonsensitized patients. This observation remains valid even in patients with preexisting anti-HLA DSA detected only by solid-phase assays such as the SAB Luminex technique, with a 1.98-fold increase in the risk of ABMR and a 1.76-fold increase in the risk of allograft failure [38]. Because of the detrimental effect of preexisting anti-HLA DSA on kidney allograft outcome, it became important to include this factor in national, regional, and local allocation policies worldwide. These policies have implemented rules to prevent transplantation in the presence of preexisting anti-HLA DSA by defining acceptable and unacceptable mismatches and performing virtual crossmatching [39-41].
In the posttransplant setting, the development of de novo anti-HLA DSA has also been reported to dramatically increase the risk of ABMR and allograft loss. Wiebe et al. [42] found a 10-year allograft survival rate of 57% in patients with de novo anti-HLA DSA compared with 96% in patients without de novo anti-HLA DSA. Recently, the relevance of a prospective strategy of systematic posttransplant anti-HLA DSA monitoring using SAB Luminex for the prediction of the risk of allograft loss was demonstrated at the population level [11]. In this study, the detection of posttransplant anti-HLA DSA improved the performance of a conventional model defined at the time of transplantation (which included donor age, donor serum creatinine, cold ischemia time, and anti-HLA DSA status at day 0) for predicting allograft loss (increase in c-statistic from 0.67 to 0.72) [11].
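The c-statistic cited above is simply the probability that a model assigns a higher predicted risk to a patient who lost the allograft than to one who did not. A minimal sketch of the computation, using invented toy risk scores rather than the study's data:

```python
# Sketch of the c-statistic (concordance) used to compare prediction models:
# the fraction of (event, non-event) patient pairs in which the higher risk
# is assigned to the patient who actually lost the allograft.
# Risk scores and outcomes below are invented toy data, not values from [11].

def c_statistic(risks, outcomes):
    """risks: predicted risk per patient; outcomes: 1 = allograft loss."""
    event = [r for r, y in zip(risks, outcomes) if y == 1]
    nonevent = [r for r, y in zip(risks, outcomes) if y == 0]
    pairs = concordant = 0.0
    for e in event:
        for n in nonevent:
            pairs += 1
            if e > n:
                concordant += 1
            elif e == n:
                concordant += 0.5  # ties count as half-concordant
    return concordant / pairs


outcomes = [1, 1, 0, 0, 0]
baseline = [0.6, 0.4, 0.5, 0.3, 0.2]   # e.g. day-0 donor factors only
with_dsa = [0.7, 0.5, 0.4, 0.3, 0.2]   # e.g. adding posttransplant DSA status
print(c_statistic(baseline, outcomes))  # ~0.83
print(c_statistic(with_dsa, outcomes))  # 1.0
```

A c-statistic of 0.5 corresponds to chance ranking and 1.0 to perfect discrimination, which is why the reported move from 0.67 to 0.72 represents a meaningful gain.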
Importantly, the detrimental effects of posttransplant anti-HLA DSA can occur in the absence of initial allograft dysfunction, and 12 to 58% of sensitized recipients with preexisting or de novo anti-HLA DSA may develop subclinical forms of ABMR and have an increased risk of allograft loss [42-45]. This further emphasizes the need for anti-HLA DSA monitoring to identify patients at risk of developing ABMR. However, the low positive predictive value of anti-HLA DSA for identifying subclinical ABMR [11,42,46] has required allograft biopsies to be performed when posttransplant anti-HLA DSA are detected, to accurately determine whether subclinical ABMR is present. Recent advances in characterizing anti-HLA DSA have been implemented to improve their predictive performance by identifying harmful anti-HLA DSA that are responsible for allograft injury and failure.

The Strength of Donor-Specific Anti-HLA Antibodies for Predicting Outcomes of Kidney Transplantation

Currently, the assessment of circulating anti-HLA DSA strength is widely used by transplant centers worldwide to stratify the pre- and posttransplant risks for ABMR and allograft loss [7]. Anti-HLA DSA strength is commonly assessed by the MFI value provided by SAB tests or the mean channel shift provided by cell-based flow cytometry crossmatches [47]. Although determining anti-HLA DSA level by solid-phase assay has not been approved by the US Food and Drug Administration as a quantitative measurement [48], studies have defined clinically relevant anti-HLA antibodies detected only by this assay. Several groups have demonstrated correlations between increased MFI/mean channel shift levels and increased incidences of ABMR and allograft loss [49,50]. These studies imply that additional clinically relevant information, beyond the presence or absence of anti-HLA DSA, may be derived from the numeric values reported by these assays. Higher strength of circulating anti-HLA DSA, as defined by MFI, has also been correlated with increased microvascular inflammation and increased C4d deposition in the peritubular capillaries of the allograft [47,51]; thus, a biological relationship exists between anti-HLA DSA strength and the intensity of allograft lesions. However, the correlation between MFI and antibody level is far from perfect. Despite recent efforts toward the standardization and normalization of solid-phase multiplex-bead arrays [52], significant limitations compromise the use of MFI as a surrogate marker of antibody level, as previously summarized [7,10,53,54]. As a consequence, no consensual threshold for risk categories based on anti-HLA DSA MFI has been defined, a limitation that was pointed out by the Transplantation Society Antibody Consensus Group in 2013 [7]. Recently, Tambur et al. addressed the importance of how best to determine antibody strength and suggested that quantification of the antibody level is best achieved by titration [10]. However, anti-HLA DSA titration for predicting ABMR and allograft loss has not been incorporated into the routine assessment of anti-HLA DSA or into patient management.

Additional Value of the Complement-Activating Capacity of Donor-Specific Anti-HLA Antibodies for Predicting Outcomes of Kidney Transplantation

Since the pioneering discovery in 1969 that anti-HLA antibodies are lymphocytotoxic [1], activation of the complement cascade has been considered a key component of antibody-mediated allograft rejection. However, complement-dependent cytotoxicity assays lack sensitivity and specificity and cannot be used on a large scale in transplant follow-up. The recent development of sensitive solid-phase assays for detecting complement-binding anti-HLA antibodies has revealed novel insights into the associations between anti-HLA DSA and transplant outcomes. Growing evidence supports the notion that the capacity of anti-HLA DSA to bind complement significantly improves our ability to predict ABMR and allograft loss. The clinical relevance of posttransplant complement-binding anti-HLA DSA detected using C1q or C3d assays has recently been shown by several groups in kidney transplantation in the United States and in Europe [18,20-27] and has also been extended to other solid organ transplants, including heart [19,30], liver [55], and lung [56]. In the study by Loupy et al. [24], posttransplant C1q-binding anti-HLA DSA detected within the first year after transplantation were found to be an independent determinant of allograft loss, with a 4.8-fold increased risk.
Patients with posttransplant C1q-binding anti-HLA DSA exhibited a higher incidence of ABMR and an increased rate of allograft injuries, including microvascular inflammation, tubular and interstitial inflammation, endarteritis, transplant glomerulopathy, and C4d deposition in the peritubular capillaries, compared with patients with non-C1q-binding anti-HLA DSA and patients without anti-HLA DSA [24].

[Figure caption: Net benefit is shown in the 110 patients identified with pretransplant anti-HLA DSA (a) and in the 186 patients identified with posttransplant anti-HLA DSA (b) [11]. Net benefit of a clinical intervention is provided assuming that all patients will lose their graft at 5 years after transplantation (grey) and that none of the patients will lose their graft at 5 years after transplantation (black), based on anti-HLA DSA MFI level (green), C1q-binding status (blue), and IgG3 subclass status (red). The net benefit is determined by calculating the difference between the expected benefit and the expected harm associated with each decisional strategy. The expected benefit is represented by the number of patients who will lose their allograft and who would undergo clinical intervention (true positives) under the proposed decision rule. The expected harm is represented by the number of patients without allograft loss who would undergo clinical intervention in error (false positives), multiplied by a weighting factor based on the risk threshold. The highest curve at any given risk threshold is the optimal strategy for decision-making in order to maximize net benefit.]
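The net benefit calculation described in the figure caption can be written out directly: the benefit from treated true positives minus the harm from treated false positives, with the harm weighted by the odds of the chosen risk threshold. The patients, risk scores, and threshold below are invented toy data, not values from [11].

```python
# Sketch of decision-curve net benefit: treat every patient whose predicted
# risk of allograft loss exceeds a chosen threshold, then weigh treated true
# positives against treated false positives. Toy data, not the study's values.

def net_benefit(risks, outcomes, threshold):
    """risks: predicted risk per patient; outcomes: 1 = allograft loss."""
    n = len(risks)
    tp = sum(1 for r, y in zip(risks, outcomes) if r >= threshold and y == 1)
    fp = sum(1 for r, y in zip(risks, outcomes) if r >= threshold and y == 0)
    weight = threshold / (1 - threshold)  # odds of the risk threshold
    return tp / n - (fp / n) * weight


outcomes = [1, 0, 1, 0, 0, 0, 0, 0, 1, 0]
risks = [0.8, 0.2, 0.6, 0.3, 0.1, 0.4, 0.2, 0.1, 0.7, 0.3]
print(net_benefit(risks, outcomes, 0.5))                   # 0.3
# The "treat all" reference strategy treats every patient regardless of risk:
print(round(net_benefit([1.0] * 10, outcomes, 0.5), 2))    # -0.4
```

As in the caption, the strategy with the higher net benefit at a given threshold (here, the risk-based rule rather than "treat all") is the preferable decision rule at that threshold.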
Many centers consider MFI strength the best predictor of anti-HLA DSA pathogenicity and complement-activating capacity. Recently, however, the complement-binding status of anti-HLA DSA following transplantation has been shown to be associated with ABMR occurrence and allograft loss independently of the anti-HLA DSA MFI [24,57], suggesting an additional value beyond MFI level for outcome prediction. Our team confirmed in a recent prospective study [11] that the detection of complement-binding anti-HLA DSA improved the prediction accuracy for allograft loss at the population level. In this study, the information provided by the anti-HLA DSA complement-binding capacity adequately reclassified the individual risk of allograft loss in more than 62% of patients compared with anti-HLA DSA MFI level alone.
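Risk reclassification of the kind reported above can be sketched as the share of patients who move between risk categories when a model is updated. The category cutoffs and patient risks below are invented for illustration; they are not the metrics or thresholds used in [11].

```python
# Sketch of risk reclassification: the share of patients whose risk category
# changes when DSA complement-binding status is added to an MFI-only model.
# Category cutoffs and patient data are invented for illustration.

def categorize(risk, cuts=(0.2, 0.5)):
    """Map a predicted risk to 'low', 'intermediate', or 'high'."""
    low, high = cuts
    if risk < low:
        return "low"
    return "intermediate" if risk < high else "high"


def reclassified_fraction(old_risks, new_risks):
    moved = sum(1 for a, b in zip(old_risks, new_risks)
                if categorize(a) != categorize(b))
    return moved / len(old_risks)


mfi_only = [0.10, 0.30, 0.45, 0.60, 0.15]      # hypothetical MFI-only risks
mfi_plus_c1q = [0.05, 0.55, 0.15, 0.80, 0.30]  # after adding C1q status
print(reclassified_fraction(mfi_only, mfi_plus_c1q))  # 0.6
```

Formal reclassification metrics additionally check whether the moves go in the right direction (events up, non-events down); the sketch only counts how many patients change category.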

The IgG Subclass Composition of Donor-Specific Anti-HLA Antibodies for Predicting Outcomes of Kidney Transplantation
The determinants of anti-HLA DSA complement-binding capacity are complex, as discussed previously, and include the presence of complement-fixing IgG subclasses (IgG1 and IgG3) and the levels of the IgG subclasses [29,30] (Table 1). Experimental data suggest that antibodies differ in their ability to bind complement and to recruit immune effector cells through the Fc receptor, and display different kinetics of appearance during the immune response, according to their IgG1-4 subclass [58-60]. Emerging data support the clinical relevance of the IgG subclass composition of anti-HLA DSA and its relationship with allograft injury phenotype and survival in kidney [11,28,61-63] and liver [55,64] transplantation. In particular, several teams have shown a significant association between the IgG3 subclass status of circulating anti-HLA DSA and worse transplant outcomes [11,28,55,61,63,64].
In a study [28] that included 125 kidney transplant recipients, the majority of patients with IgG3 anti-HLA DSA detected within the first year after transplantation had acute clinical ABMR, characterized by intense microvascular inflammation and increased complement deposition in the allografts. In contrast, the majority of patients with IgG4-containing anti-HLA DSA had features of subclinical ABMR with a predominance of chronic lesions, represented by transplant glomerulopathy and interstitial fibrosis. In this study, IgG3 and IgG4 positivity showed good predictive performance for identifying patients with clinical and subclinical ABMR, respectively. Furthermore, it was also shown that circulating anti-HLA DSA IgG3 status improved the performance of MFI level in predicting the individual risk of allograft loss in more than 76% of patients [42].
Overall, in future studies we should evaluate how IgG subtype information may add value to the assessment of sensitized patients and to our current available tools for anti-HLA DSA analysis.

Risk Stratification Based on Donor-Specific Anti-HLA Antibody Characterization for Transplant Outcome Management
The ultimate goal of accurate risk stratification for allograft injury and failure is to improve clinical transplantation outcomes. A risk-stratified approach is greatly needed to tailor therapeutic strategies in the pre- and posttransplant periods, incorporating predicted risks for adverse outcomes to maximize benefits and minimize harms and costs from medical care (Figure 2) [65]. Moreover, risk stratification is also needed to improve our ability to design and interpret therapeutic trials [66]. Averaged results of clinical trials may obscure treatment effects in specific populations, because aggregated results that include patients at various risk levels can be misleading when applied to individual patients [67]. Finally, the risk-stratified approach using anti-HLA DSA properties has direct consequences for patient care.
In the pretransplant setting, this approach has the potential to increase allocation policy efficiency by providing more reliable discrimination of the antibodies that are more or less harmful, thereby potentially expanding the donor pool for sensitized patients. In hypersensitized patients with an insufficient pool of potential donors, immunological risk stratification will help to more accurately select the patients in whom specific intensive pretransplant conditioning should be considered to eliminate deleterious antibodies. In the posttransplant setting, systematic monitoring and characterization of circulating anti-HLA antibodies provide a noninvasive tool for clinical decision-making regarding further tests and treatment. In terms of therapeutic strategies, risk assessment based on anti-HLA DSA properties could provide a basis for more targeted, pathogenesis-driven therapies. The identification of specific injury phenotypes based on anti-HLA DSA characteristics could provide a rationale for the development of more specific therapeutic approaches, such as B-cell depletion with rituximab in patients with IgG4-associated allograft injury [68] and complement blockade using the C5 inhibitor eculizumab [69,70] or C1 inhibitors [71,72] in patients with complement-binding and/or IgG3-positive anti-HLA DSA. Thus, collaborative prospective analysis of anti-HLA DSA using multiple assays will be critical to reconcile these issues and to create recommendations for best practices.
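To make the idea of DSA-based risk tiers concrete, the hypothetical sketch below maps the three characteristics discussed in this review (MFI strength, complement-binding status, and IgG3 positivity) onto coarse follow-up tiers. This is not a validated clinical rule: the MFI cutoff, the tier ordering, and the suggested actions are assumptions for illustration only.

```python
# Hypothetical decision sketch (not a validated clinical rule): combine the
# DSA characteristics discussed in this review into a coarse risk tier.
# The 3000-MFI cutoff and the tier actions are invented for illustration.

def dsa_risk_tier(mfi, c1q_binding, igg3_positive):
    """Return a coarse risk tier for a detected anti-HLA DSA."""
    if c1q_binding or igg3_positive:
        return "high"          # e.g. consider biopsy / targeted therapy
    if mfi >= 3000:            # assumed strength cutoff, illustrative only
        return "intermediate"  # e.g. intensified DSA monitoring
    return "low"               # e.g. routine surveillance


print(dsa_risk_tier(9000, c1q_binding=True, igg3_positive=False))   # high
print(dsa_risk_tier(9000, c1q_binding=False, igg3_positive=False))  # intermediate
print(dsa_risk_tier(800, c1q_binding=False, igg3_positive=False))   # low
```

The point of the sketch is the layering: complement binding and IgG3 status override raw MFI, reflecting the evidence reviewed above that these properties carry prognostic information beyond antibody strength alone.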

Competing Interests
The authors declare that they have no financial relationships with any organizations that might have an interest in the submitted work and no other relationships or activities that could appear to have influenced the submitted work.