We wanted to test the progress of medical students at our university in a pharmacology course. The formal teaching was given as lectures to the full class. We gave the same written test of multiple-choice (MC) questions (single best choice) to third-year medical students before and after a one-semester course in basic pharmacology. The initial voluntary test (containing 30 MC questions) was taken by 79% of the eligible students (
Assessing a gain in knowledge in education is a continuous task [
Others have tried to assess gain of knowledge in medical education by giving identical tests before, during, or after completion of a curriculum [
In order to exclude the possibility that students simply learned the right answers from the pretest by heart and thus passed the end-of-semester examination, we did not give out the pretest questions to students. Moreover, students were left unaware that the second test would contain the same questions as the pretest. In addition, we included a control group of students who were given the same questions in the final exam but without the possibility of seeing them in a pretest.
Our research hypothesis was that giving the same MC test twice (a pretest before the teaching period and a final test after it) would be a proper way to assess the success of pharmacology lectures given to medical students, at least for those students who took part in the lectures.
An initial voluntary written test (pretest) to assess academic achievement in (in this case general) pharmacology contained 30 multiple-choice (MC) questions with a passing grade of 60% (see Figure
The initial voluntary test (pretest) contained 30 multiple-choice questions (single best choice, MC) with a passing grade of 60%. Students were motivated to take the initial test by the promise of merit-based supplementary bonus points in the final examination. The final, obligatory test at the end of the course lectures contained the very same set of multiple-choice questions. Students from a different semester, who took the same obligatory test after a pharmacology course but without a pretest, served as the control group. Total number of participating students: control group,
The summative test at the end of the course contained the same MC questions as the pretest (Figure
Students who previously took the same required test after a pharmacology course, but without a pretest, served as a control group (Figure
Lecture attendance (one lecture per week, 11 lectures in total) was monitored in both groups: students were asked to fill in a paper attendance sheet.
Arithmetic means and standard errors of the mean (SEM) were calculated using Excel 2010. Correlations (Spearman correlation) and parametric or nonparametric tests were performed using SPSS 25 [
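The descriptive statistics and the Spearman correlation used here can be sketched in a few lines; the snippet below is a minimal illustration in Python (the study itself used Excel and SPSS), and the score lists are made up for demonstration only:

```python
import math
import statistics

def sem(scores):
    """Standard error of the mean: sample SD / sqrt(n)."""
    return statistics.stdev(scores) / math.sqrt(len(scores))

def ranks(xs):
    """Average 1-based ranks, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = statistics.mean(rx), statistics.mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx)
                    * sum((b - my) ** 2 for b in ry))
    return num / den

# Hypothetical pretest and final-test scores (0-30 points), for illustration:
pre = [6, 8, 13, 7, 12, 20, 15]
fin = [29, 29, 12, 13, 22, 27, 24]
print(round(statistics.mean(fin), 2), round(sem(fin), 2),
      round(spearman(pre, fin), 2))
```

In practice SPSS (or `scipy.stats.spearmanr`) also reports a p-value for the correlation; the sketch above only computes the coefficient itself.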
Interested readers can obtain all data (original data and statistical analysis) in electronic format from any of the authors.
In order to establish a baseline distribution function of test results after taking the basic pharmacology course (Figure
Distribution of points in the obligatory test of the control group (cohort 1,
In the voluntary pretest (Figure
Distribution of points in the voluntary test of the test group (cohort 2). 3rd year medical students had to answer questions in pharmacology before a semester course of pharmacology (
Distribution of points in the obligatory test of the complete test group (cohort 2 + 3). 3rd year medical students had to answer questions in pharmacology after a semester course of pharmacology (
Students (cohort 3 only, obligatory test) who had not participated in the pretest (20% of those taking the obligatory test) reached 25.3 ± 0.55 mean points (Figure
Distribution of points in the obligatory test of the pretest group (cohort 2). 3rd year medical students had to answer questions in pharmacology after a semester course of pharmacology (
Interestingly, one student deteriorated from 13 to 12 points from the first (pretest) to the second examination (obligatory final test). In contrast, the largest improvement (one student) was from 6 to 29 points. Three students improved from 8 to 29 points, and one student showed the smallest improvement, from 7 to 13 points. The final, obligatory test at the end of the course lectures was taken by all eligible students (
This might be interpreted as a gain of knowledge from the course, but also (judging from informal talks with students) as memorization of the pretest questions (which, however, were never formally released).
The difference in final exam points between students who took the pretest and those who did not is of interest. These groups are plotted separately in Figures
Distribution of points in the obligatory test of a subset of the test group shown above (cohort 3). These are students who took the obligatory test but not the voluntary pretest (
Summary of distributions of points of cohort 1 (control group,
In the obligatory exam in clinical pharmacology (which was taught in the sixth and seventh semesters to the same class of medical students, see Figure
Taking 60% as the passing grade (18 points) in this obligatory exam at the end of the seventh semester, only 74 of 213 students would have passed (34.74%). Taking 60% as the passing grade only in our subgroup of cohort 4, just 47 (19 male and 28 female) of 147 students would have passed (31.97%). The following mean scores were reached: 16.15 ± 0.287 points overall; the 54 male and 85 female students reached similar scores of 16.69 ± 0.425 and 17.01 ± 0.391 points, respectively. The range was between 8 and 24 points. As mentioned above, we were able to follow up, in the seventh semester, 27 of the 37 students who had written only the obligatory test (final test: Figure
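The pass-rate arithmetic above (a 60% passing grade on a 30-point test gives an 18-point threshold) can be verified with a short sketch; the counts are the ones stated in the text:

```python
# Passing threshold: 60% of a 30-point test.
threshold = 0.60 * 30  # 18 points

def pass_rate(passed, total):
    """Percentage of students who passed, rounded to two decimals."""
    return round(100 * passed / total, 2)

print(threshold)           # 18.0
print(pass_rate(74, 213))  # whole seventh-semester class -> 34.74
print(pass_rate(47, 147))  # cohort 4 subgroup -> 31.97
```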
Percentage of students who passed and failed in the study arms (cohorts). Numbers of students are also given for each cohort. Moreover, the sequence of the study steps is reproduced in order to facilitate comparison between groups. Note that some students (
Moreover, we tried to correlate the findings in the exams in basic pharmacology (fifth semester) with the results of the students’ final exam (the M2 board exam: Germany-wide, written, MC, comprising all clinical medicine topics, including basic and clinical pharmacology). We obtained data from 114 students. Students took the M2 exam in April 2016, when up to 319 points could be obtained, or in October 2016, when up to 317 points were available. Among the 96 students who took both the pretest and the final exam (scores ranged from 210 to 295), 41 male students obtained a mean of 257.17 ± 3.054 points and 55 female students obtained 256.29 ± 2.780 points. There was a significant correlation between the points in the final exam in the introductory pharmacology course, the subsequent clinical pharmacology course (Spearman correlation,
Besides using MC tests for summative exams, many medical faculties also use MC questions for formative exams. Successful learning can be understood as observable changes in the learners’ behavior originating from external conditions [
However, one might use tests to enhance retention of important clinical facts, in our context clinically important drugs, e.g., their indications, contraindications, and relevant pharmacokinetic parameters. While the testing effect has been clearly demonstrated in artificial psychological laboratory settings, it is critical to know whether it is also present in a real medical curriculum, in this study in pharmacology for medical students. It has been argued that in real life, medical students also learn outside the classroom (e.g., during ward rounds and clerkships), are exposed to pharmacological knowledge in other lectures and courses (internal medicine, dermatology, etc.), do homework on their own or in groups, and get reading assignments or at least suggested papers or textbook chapters in pharmacology (compare [
A well-established way to assess progress in knowledge acquisition is to use progress tests (usually in electronic form [
Others have given identical questions repeatedly to assess competency in clinical examination but not in pharmacology [
A study similar to ours but in a different environment was recently published by colleagues in Canada [
It might be gratifying to note that students who had sat for the pretest performed better in the final test (Figure
We would like to make the point that the present study, with quite a large number of participants (147–219 students per semester), is at odds with other studies with lower numbers of participants, where identical tests were given twice and an improvement in mean points was regarded as proof of the efficacy of the teaching intervention. For example, when clinical students in an intensive care rotation were given the same questions initially and four weeks later, the 32 participants experienced an increase in exam points from baseline (65.7) by 4.6 points [
One can ask how we know that the control group was a valid control group and not simply a cohort of generally poorer-performing students. One could argue that without randomly assigning students to experimental and control groups, it would be necessary to confirm in some other fashion that the control group matches the experimental group on all relevant background variables. This is admittedly a limitation of our study. However, we noted that the control group obtained mean scores in the written test after the course in clinical pharmacology (end of the seventh semester) that were not statistically different from those of the study cohort. This argues against the assumption that an academically weaker student group served as the control cohort. Furthermore, one can ask why lecture attendance was uncorrelated with final exam performance. This was admittedly surprising to us: we had anticipated a strong positive correlation. However, many colleagues in several countries privately mentioned similar findings: attendance of medical students at lectures (where participation is not compulsory at most universities worldwide) declines sharply over time. Students usually explain this by competing time demands, such as studying for other forthcoming examinations.
Moreover, one can ask what the benefit of administering the pretest was, given that there was very little difference in final summative test performance between those who took and those who did not take the pretest. This clearly questions the usefulness of the pretest. One way to address this issue might be a subsequent study with an additional questionnaire asking whether students found the pretest subjectively helpful (for better understanding the lectures or the textbook, or for preparing for the subsequent test). If students reported a strong desire to retain the pretest, that should merit consideration, as student satisfaction plays a role in curriculum development in most faculties. Otherwise, we would not use a pretest again, as it ties up resources.
In the future, to reduce demands on our resources, we intend to use the basic format of this study with online tests as pretests. It will be interesting to see whether this leads to worse, similar, or better results in the final written exams than written pretests did. Moreover, if one were to repeat the present investigation, it would be informative to find out which other sources of information students actually use under our testing conditions. One could offer an open questionnaire on learning tools and habits and correlate these learning habits with the final test, using the pretest results as a contributing factor to the final test results.
In summary, giving the same MC questions twice to test an intervention in between has probably overestimated the impact of the intervention on the gain of knowledge. To the best of our knowledge, this is the first study of this kind in medical students in pharmacology.
Progress tests, consisting of a pretest and a final test, are useful for measuring gain in knowledge in medical students, but they hardly measure the gain in knowledge attributable to attendance at, e.g., a basic pharmacology lecture (the intervention) alone; they also capture other sources of new knowledge, such as textbook reading or simple memorization of the initial questions.
All original data are available in electronic form.
The authors declare that they have no conflicts of interest.
J. N. designed the research. S. S. and U. G. performed research. S. S., J. N., and U. G. analyzed data. U. G. and J. N. wrote the paper.
The authors acknowledge the support of PD Dr. Alp Aslan (Institute for Psychology, University Halle) with the design, statistical tests, and interpretation of the study. The authors thank the state board of medical examiners (Landesprüfungsamt Halle), especially Frau Roscher, for making data available to us. The authors acknowledge the financial support within the funding program Open Access Publishing by the German Research Foundation (DFG). The work did not receive any external funding. All internal funding was through the state-owned Martin Luther University Halle-Wittenberg.