Decision Support Systems in Prostate Cancer Treatment: An Overview

Background: A multifactorial decision support system (mDSS) is a tool designed to improve the clinical decision-making process by using clinical inputs for an individual patient to generate case-specific advice. This study provides an overview of the literature on currently available mDSS focused on prostate cancer (PCa), in order to better understand the availability of decision support tools as well as where the current literature is lacking.

Methods: We performed a MEDLINE literature search in July 2018. We divided the included studies into different sections: diagnostic, which aids in the detection or staging of PCa; treatment, which supports the decision between treatment modalities; and patient, which focuses on informing the patient. We manually screened the results and excluded studies that did not contain an mDSS concerning prostate cancer, as well as study proposals.

Results: Our search yielded twelve diagnostic mDSS, six treatment mDSS, two patient mDSS, and eight papers that could improve mDSS.

Conclusions: Diagnostic mDSS are well represented in the literature, as are treatment mDSS considering external-beam radiotherapy; however, there is a lack of mDSS for other treatment modalities. The development of patient decision aids is a new field of research, and few successes have been achieved for PCa patients. These tools can improve personalized medicine but need to overcome a number of difficulties to be successful, and they require more research.

the information of that data extraction element is described somewhere else). This is denoted by the answer option "R". For data extraction elements that do not apply to a specific situation, there is the answer option "NA". Some TRIPOD items do not apply to all four types of prediction model studies; e.g. TRIPOD item 10a, "Describe how predictors were handled in the analyses", is not applicable when reporting about external validation, whereas TRIPOD item 10c, "For validation, describe how the predictions were calculated", does not apply to the reporting of model development. In such instances we state "not applicable" and grey-shade these data extraction elements.

Calculating adherence to TRIPOD
First, adherence of a report is calculated per TRIPOD item. If all data extraction elements of a particular TRIPOD item are scored "yes", adherence to that TRIPOD item is scored as "1", and non-adherence as "0". In some situations a different scoring rule is used; this is described in the data extraction checklist below for the corresponding items.
Subsequently, a report's overall TRIPOD adherence score can be calculated by dividing the number of adhered TRIPOD items by the total number of applicable TRIPOD items for that report. This total can vary, since some TRIPOD items are not applicable to all four types of prediction model studies: the total number of applicable TRIPOD items is 30 for development (D) studies, 30 for validation (V), 36 for development plus validation (D+V), and 35 for incremental value (IV). In addition, five TRIPOD items (5c, 10e, 11, 14b, and 17) might not be applicable for specific reports.
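The adherence calculations can be sketched in Python. The item identifiers, answer codes, and data structures below are illustrative assumptions for the sketch, not part of the checklist itself; `None` stands in for a not-applicable item.

```python
def item_adherence(element_answers, allowed=("Y",)):
    """Score one TRIPOD item: 1 if every data extraction element's answer
    is in `allowed` (some items also accept "R" and/or "NA"), else 0."""
    return int(all(answer in allowed for answer in element_answers))


def overall_adherence(item_scores):
    """Overall adherence of one report: adhered items / applicable items.
    `item_scores` maps item id -> 1 (adhered), 0 (not adhered), or
    None (item not applicable to this report)."""
    applicable = [score for score in item_scores.values() if score is not None]
    return sum(applicable) / len(applicable)


def per_item_adherence(reports, item):
    """Across several reviewed reports: fraction of reports adhering to
    `item`, counting only reports in which the item was applicable."""
    scores = [r[item] for r in reports if r.get(item) is not None]
    return sum(scores) / len(scores) if scores else None


# Hypothetical development-study report: item 4b also accepts "NA"/"R",
# and item 10c is not applicable when only model development is reported.
report = {
    "4b": item_adherence(["Y", "Y", "NA"], allowed=("Y", "NA", "R")),
    "5a": item_adherence(["Y", "Y", "N"]),
    "5b": item_adherence(["Y"]),
    "10c": None,
}
print(overall_adherence(report))  # 2 adhered / 3 applicable items
```

The same `per_item_adherence` helper covers the multi-study case: dividing the number of adhering studies by the number of studies in which the item was applicable.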
If one reviews multiple prediction model studies on their adherence to TRIPOD, overall adherence per TRIPOD item can be calculated by dividing the number of studies that adhered to a specific TRIPOD item by the number of studies in which that TRIPOD item was applicable.

ii The words prediction, risk prediction, prediction model, risk models, prognostic models, prognostic indices, risk scores (or synonyms) are reported in the title
The target population is reported in the title
The outcome to be predicted is reported in the title

Score 1 if all extraction items are scored as "Y", "NA", or "R" (same rule for D, V, D+V, and IV)
i The starting date of accrual is reported
The end date of accrual is reported
The length of follow-up and prediction horizon/time frame are reported, if applicable. E.g. "Patients were followed from baseline for 10 years" and "10-year prediction of…"; notably for prognostic studies with long-term follow-up. If this is not applicable for an article (i.e. a diagnostic study or no follow-up), score Not applicable.

Participants 5a
Specify key elements of the study setting (e.g., primary care, secondary care, general population) including number and location of centres.
Score 1 if all extraction items are scored as "Y" or "R" (same rule for D, V, D+V, and IV)
i The study setting is reported (e.g. primary care, secondary care, general population). E.g. "surgery for endometrial cancer patients" is considered to be enough information about the study setting.
The number of centres involved is reported If the number is not reported explicitly, but can be concluded from the name of the centre/centres, or if clearly a single centre study, score Yes.
The geographical location (at least country) of centres involved is reported If no geographical location is specified, but the location can be concluded from the name of the centre(s), score Yes.
5b
Describe eligibility criteria for participants.

Score 1 if extraction item is scored as "Y" (same rule for D, V, D+V, and IV)
i In-/exclusion criteria are stated. These should be stated explicitly; reasons for exclusion described only in a patient flow diagram are not sufficient.
Outcome 6a
Clearly define the outcome that is predicted by the prediction model, including how and when assessed.
Score 1 if all extraction items are scored as "Y" or "R" (same rule for D, V, D+V, and IV)
i The outcome definition is clearly presented. This should be reported separately for development and validation if a publication includes both.
ii It is described how the outcome was assessed (including all elements of any composite, for example CVD [e.g. MI, HF, stroke]).
iii It is described when the outcome was assessed (time point(s) since T0)

6b Report any actions to blind assessment of the outcome to be predicted.

Score 1 if extraction item is scored as "Y" (same rule for D, V, D+V, and IV)
i Actions to blind assessment of the outcome to be predicted are reported. If it is clearly a non-issue (e.g. all-cause mortality or an outcome not requiring interpretation), score Yes. In all other instances, an explicit mention is expected.

Predictors 7a
Clearly define all predictors used in developing or validating the multivariable prediction model, including how and when they were measured.
Score 1 if all extraction items are scored as "Y" or "R" (same rule for D, V, D+V, and IV)
i All predictors are reported. For development, "all predictors" refers to all predictors that potentially could have been included in the 'final' model (including those considered in any univariable analyses). For validation, "all predictors" means the predictors in the model being evaluated.
ii Predictor definitions are clearly presented
iii It is clearly described how the predictors were measured (=R if D7aiii=R AND V7aiii=R)
iv It is clearly described when the predictors were measured

7b Report any actions to blind assessment of predictors for the outcome and other predictors.
i It is clearly described whether predictor assessments were blinded for the outcome. For predictors for which this is clearly a non-issue (e.g. automatic blood pressure measurement, age, sex), and for instances where the predictors were clearly assessed before outcome assessment, score Yes. For all other predictors an explicit mention is expected.
ii It is clearly described whether predictor assessments were blinded for the other predictors

Sample size 8
Explain how the study size was arrived at.

Score 1 if extraction item is scored as "Y" (same rule for D, V, D+V, and IV)
i It is explained how the study size was arrived at. Is there any mention of sample size, e.g. whether this was determined on statistical grounds or on practical/logistical grounds (e.g. an existing study cohort or the data set of an RCT was used)?
Missing data 9
Describe how missing data were handled (e.g., complete-case analysis, single imputation, multiple imputation) with details of any imputation method.
Score 1 if all extraction items are scored as "Y" or "NA" (same rule for D, V, D+V, and IV)
iii If missing data were imputed, a description of which variables were included in the imputation procedure is given.
If under 9i there is explicit mention of no missing data, of complete-case analysis, or that no imputation was applied, score Not applicable.

i Differences or similarities in definitions with the development study are described. Mention of any differences in all four (setting, eligibility criteria, predictors, and outcome) is required to score Yes. If it is explicitly mentioned that there were no differences in setting, eligibility criteria, predictors, and outcomes, score Yes. For incremental value reports, in case additional predictors are not added to a previously developed prediction model but rather added to conventional predictors in a newly fitted model, score Not applicable.
ii Summary information is provided for all predictors included in the final developed/validated model
The number of participants with missing data for predictors is reported
The number of participants with missing data for the outcome is reported