Subjective Outcome Evaluation of the Project P.A.T.H.S.: Findings Based on the Perspective of the Program Implementers

A total of 52 schools (n = 8679 students) participated in the experimental implementation phase of the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes). After completion of the Tier 1 Program, 344 instructors completed the Subjective Outcome Evaluation Form (Form B) to assess their views of the program, their own performance, and the perceived effectiveness of the program. Based on the consolidated reports submitted by the schools to the funding body, the research team aggregated the data to form a "reconstructed" overall profile of the perceptions of the program implementers. Results showed that high proportions of the workers had positive perceptions of the program and their own performance, and roughly 90% of the workers regarded the program as helpful to the program participants. The present study provides additional support for the effectiveness of the Tier 1 Program of the Project P.A.T.H.S. in Hong Kong.


INTRODUCTION
Subjective outcome evaluation is a commonly used strategy for evaluating programs in the context of human services, and it has been employed by professionals in many fields, such as education, social work, psychology, medicine, and the allied health professions. Although there are many criticisms of this approach, the client satisfaction approach is widely used in different service settings [1,2]. As pointed out by Royse [3], "despite the generally positive bias and the problems associated with collecting representative samples of clients, there is much to recommend client satisfaction studies as one means of evaluating a program. Because professionals do not experience the agency in the same way as the clients, it is important to ask clients to share their experiences" (pp. 264-265).
Although it is important to understand the experiences of the clients, it is equally important to examine the experiences of the workers who conduct the intervention or implement the program. This is particularly important for programs implemented by workers who were not directly involved in designing them. Many positive youth development and adolescent prevention programs (e.g., school-based drug prevention programs) are developed by academics and experienced workers in the field, with the involvement of only some frontline workers, and are then implemented by frontline workers such as teachers and social workers. In such contexts, frontline workers may strongly resist implementing programs in whose design they had little involvement. Furthermore, organizational constraints may adversely affect staff morale, which, in turn, lowers the motivation of the workers to implement the program in an authentic manner.
There are several reasons why subjective outcome evaluation should include the perceptions of the program implementers. First, since the program implementers are also stakeholders of the developed programs, their views should be gathered. According to the Joint Committee on Standards for Educational Evaluation [4], stakeholders should be identified (Standard U1) and their views should be taken into account (Standard F2). According to utilization-focused evaluation [5], relevant stakeholders should also be involved in the evaluation process. From the interpretive and constructivist perspectives, as reality is fluid, it is also important to examine the experiences of different stakeholders.
Second, as program implementers are usually more experienced than the clients, it can be argued that their views may be more accurate than those of the clients. For example, in adolescent prevention programs, it is common to ask both the program participants and the workers about their perceptions of the program design, objectives, and rationales. The program implementers in this context arguably possess better skills and more experience for judging the quality of the program design. Similarly, with their professional training and experience, workers are in a better position to assess the effectiveness of the programs.
Third, it can be argued that subjective outcome evaluation based on the perspective of the workers is important as far as reflective practice is concerned. According to Osterman and Kottkamp [6], professionals need and desire feedback about their own performance, and personal reflections can lead to professional growth and development. Similarly, Taggart and Wilson [7] pointed out the importance of reflective practice in teaching. As reflective practice has become more important in different disciplines, the practice of subjective outcome evaluation can help professionals to reflect on the program they have delivered, and to assess their input and quality of the implementation. In short, subjective outcome evaluation based on the perspective of the workers constitutes a vehicle that facilitates reflective practice in helping professionals.
Fourth, the inclusion of subjective outcome evaluation based on the workers' perspective can give the workers a sense of fairness, which is an important determinant of their morale. Obviously, if only the clients have the right to assess the program implementers, the workers may sense that the evaluation is unfair because only the voices of the clients are heard. Furthermore, when the workers are invited to express their views and feelings, they feel more respected and are less likely to regard themselves as the victims of consumerism.
Fifth, in situations where a developed program is used at different sites (e.g., school-based positive youth development programs), implementation experiences may vary across schools. Where implementation experiences are negative, such news may spread quickly, and the related rumors may adversely affect the implementation process. If the researchers can build a systematic profile of the experiences of the workers and disseminate the related findings, such findings can dispel rumors and distorted news and help provide a transparent and accurate picture of the implementation quality.
Finally, based on the principle of triangulation, the collection of subjective outcome evaluation data from different sources can help answer the question of whether data collected from different sources generate the same picture. For example, while the workers may perceive themselves as performing well in the implementation process, the students may not share that perception. Similarly, the students and instructors may have different views of the students' learning motivation. In short, the inclusion of subjective outcome evaluation data from different perspectives enables researchers to paint a more complete picture of perceived program attributes and effects.
The Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) is a two-tier positive youth development program financially supported by the Hong Kong Jockey Club Charities Trust [8,9]. In the Tier 1 Program, a universal positive youth development curriculum, typically involving 20 h of training (40 teaching units) per school year, is provided to Secondary 1 to 3 students at each grade. The experimental implementation phase of the project lasts for 3 years, and several evaluation studies have been conducted in this phase [10,11,12,13]. With particular reference to subjective outcome evaluation based on students randomly selected from four schools (n = 546 students), results showed that the program participants perceived the program in a favorable manner, and roughly 85% of them regarded the program as helpful to them [10]. Interim evaluation based on telephone interviews with the program implementers also showed that nearly all workers (97.1%) regarded the program as beneficial to the students, and most of them (78.6%) had a positive global evaluation of the project [13]. While these studies provide some evidence of the positive aspects of the program, there is a need to know more about the views of the program implementers.
In this paper, subjective outcome evaluation findings based on the perspective of the workers in the Tier 1 Program during the experimental implementation phase are reported. As each participating school was required to submit an evaluation report with a consolidated subjective outcome evaluation profile of the workers to the Hong Kong Jockey Club Charities Trust (i.e., workers were expected to conduct program evaluation as part of their professional practice), we could make use of such reports to "reconstruct" the overall profile of the subjective outcome evaluation data. The major advantage of this strategy is that we can promote practice evaluation in the field while, at the same time, conducting secondary data analyses of the reports submitted.

METHODS

Participants and Procedures
A total of 52 schools joined the experimental implementation phase. The mean number of students per school was 166.90 (range: 37-240 students), with an average of 4.58 classes per school (range: 2-7 classes). Among them, 29 schools adopted the full program (i.e., the 20-h program involving 40 units). The mean number of sessions used to implement the program was 17.75 (range: 3-50 sessions). While 21 schools (40.4%) incorporated the program into the formal curriculum (e.g., Liberal Studies, Life Education), 31 schools (59.6%) used other modes (e.g., form master's periods and other combinations) to implement the program. A total of 419 workers implemented the program in the schools. The mean numbers of social workers and teachers implementing the program per school were 2.63 (range: 0-8) and 5.13 (range: 0-17), respectively.
After the Tier 1 Program was completed, the workers were invited to respond to a subjective outcome evaluation questionnaire. A total of 344 workers responded to the Subjective Outcome Evaluation Form (Form B) developed by the research team. To facilitate the program evaluation, the research team developed an evaluation manual with standardized instructions for collecting the subjective outcome evaluation data [14]. In addition, adequate training on how to collect and analyze the Form B data was provided to the workers during the 20-h training workshops.

Instruments
The Subjective Outcome Evaluation Form (Form B) was designed by Daniel Shek and Andrew Siu [14]. Broadly speaking, the evaluation form comprises the following parts:
• Program implementers' perceptions of the program, such as program objectives, design, classroom atmosphere, interaction among the students, and the respondents' participation during class (10 items)
• Program implementers' perceptions of their own practice, including their understanding of the course, teaching skills, professional attitude, involvement, and interaction with the students (10 items)
• The workers' perceptions of the effectiveness of the program, such as promotion of different psychosocial competencies, resilience, and overall personal development (16 items)
The workers collecting the data were requested to input the data into an Excel file developed by the research team, which automatically computed the frequencies and percentages associated with the different ratings for each item. When the schools submitted their reports, they were also requested to submit a soft copy of the consolidated data sheets. After the funding body received the consolidated data, the research team aggregated them to "reconstruct" the overall profile based on the subjective outcome evaluation data.
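The "reconstruction" step can be illustrated with a minimal sketch. The actual workflow used consolidated Excel data sheets, so the Python below is only a hypothetical rendering of the aggregation logic; the function name, field names, and example figures are illustrative assumptions, not the research team's implementation.

```python
from collections import defaultdict

def reconstruct_profile(school_sheets):
    """Aggregate per-school consolidated rating counts into an overall profile.

    `school_sheets` is a list of dicts, one per school, each mapping
    (item, rating) -> frequency, mirroring a school's consolidated data
    sheet. Returns a dict mapping item -> {rating: percentage}.
    (Illustrative sketch only; names and data are hypothetical.)
    """
    totals = defaultdict(lambda: defaultdict(int))
    for sheet in school_sheets:
        for (item, rating), freq in sheet.items():
            # Pool the raw frequencies across schools before computing
            # percentages, so larger schools carry proportionally more weight.
            totals[item][rating] += freq

    profile = {}
    for item, ratings in totals.items():
        n = sum(ratings.values())
        profile[item] = {r: 100.0 * f / n for r, f in ratings.items()}
    return profile

# Hypothetical consolidated counts from two schools for one item
school_a = {("program_design", "positive"): 40, ("program_design", "negative"): 10}
school_b = {("program_design", "positive"): 35, ("program_design", "negative"): 15}
profile = reconstruct_profile([school_a, school_b])
```

Note that pooling raw frequencies (rather than averaging per-school percentages) means the reconstructed profile reflects the distribution of individual responses, with each school's contribution weighted by its number of respondents.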

RESULTS
The quantitative findings based on the closed-ended questions are presented in this paper. Several observations can be highlighted from the findings. First, the respondents generally had positive perceptions of the program (see Table 1), including the objectives of the teaching units (90.7%), the systematic design of the teaching activities (81.4%), and the active involvement of the students (83.4%). Second, a high proportion of the workers evaluated their own performance positively (see Table 2). For example, 94.1% of the workers had a positive evaluation of their performance, 98.5% expressed that they were concerned about the students, and 97% believed that they had a very good professional attitude. Third, as shown in Table 3, many workers perceived that the program promoted the development of the students, including their bonding (85.1%), resilience (81.3%), social competence (91.3%), emotional competence (88.3%), moral competence (90.4%), self-understanding (93%), and overall development (89.8%). Fourth, 84.3% of the workers would recommend the program to students with similar needs. Fifth, roughly 85% of the workers expressed that they would teach similar courses again in the future. Finally, roughly four-fifths of the respondents indicated that they were satisfied with the program (see Table 4).

DISCUSSION
In this study, the subjective outcome evaluation findings based on the perspective of the workers showed that a high proportion of the respondents had positive perceptions of the program, including the program design, the workers, and the program's effectiveness. The present findings are consistent with the subjective outcome evaluation findings based on Form A (i.e., evaluation based on the students), which also showed that a high proportion of the program participants had favorable perceptions of the program, the workers, and the helpfulness of the program. Furthermore, the findings are in line with evaluation findings based on objective outcome evaluation, process evaluation, and interim evaluation [10,11,12,13]. Generally speaking, the overall picture based on the subjective outcome evaluation findings obtained from different sources is quite positive. From an information flow perspective, the findings give more transparent information regarding the feelings and experiences of the workers who implemented the Tier 1 Program of the Project P.A.T.H.S.

In the context of human services, there is a growing emphasis on the importance of understanding the views and experiences of the workers who conduct the intervention. In their discussion of client and worker satisfaction in a child protection agency, Winefield and Barlow [15] argued that monitoring staff perception was important because "staff have valuable first-hand experience of how, when, and how well programs work" (p. 898). With specific reference to school-based prevention programs, Peterson and Esbensen [16] pointed out that "because personnel, consciously or unconsciously, influence the effectiveness of prevention program lessons, it is important to assess their perceptions when evaluating a specific program to provide insight into the context in which the program operates" (p. 219).
In the same vein, Flannery and Torquati [17] remarked that "teachers who are not satisfied with a program are less likely to use the program materials, regardless of whether their principal or district administration is supportive of the program" (p. 395). Unfortunately, there are limited research studies in the literature that document the perceptions of workers in positive youth development and adolescent prevention programs [12]. As pointed out by Najavits et al. [18], there are few empirical studies on therapists' satisfaction with treatment guided by manuals and there is limited understanding of the "inner world of clinicians" (p. 36).
There are three strengths of this study. First, the subjective outcome evaluation findings are based on a large sample (n = 344 workers from 52 schools). Such a large sample size substantially enhances the generalizability of the research findings to other populations. Second, different aspects of subjective outcome, including views of the program, the workers, perceived effectiveness, and overall satisfaction, were covered in the study. Third, the present study demonstrates the strategy of "reconstructing" the overall profile of the subjective outcomes from the reports submitted by the participating schools. In fact, this study is the first published scientific study utilizing this "reconstruction" approach with such a large number of workers in a Chinese culture.
On the other hand, it is noteworthy that there are several limitations of the study. First, as the data were reconstructed from the reports submitted by the schools, the unit of analysis was the school rather than the individual respondent. As such, characteristics at the individual level could not be examined. Second, while the reconstructed profile can give some idea of the global picture, unfavorable responses were diluted in the aggregation. It is therefore important to examine such unfavorable responses by looking further at the qualitative findings. The third limitation is that, although the positive findings can be interpreted in terms of program success, there are several alternative explanations. The first is the "beauty is in the eye of the beholder" hypothesis: as the workers are stakeholders who were personally involved in implementing the program, they may tend to view the program effect and their own performance in a more favorable light. The second is the "cognitive dissonance" hypothesis: as the workers may hold beliefs about the value of the program, it would be difficult for them to rate the program and themselves unfavorably. In particular, an unfavorable evaluation would pose a threat to the professional self and self-esteem of the workers. The third is the "survival" hypothesis, which maintains that the positive subjective outcome evaluation findings occurred as a result of the participants' anxiety that the program would be cut if the evaluation findings were not positive. This possibility can be partially dismissed because the funding body has never linked funding with program success, and there was no league table of the evaluation findings. The final alternative interpretation is that the workers may have consciously responded in a "nice" manner to help the researchers to illustrate a positive program effect.
However, this alternative explanation can be partially dismissed because negative ratings were recorded (e.g., on whether the workers would teach similar courses again) and the workers responded anonymously. Despite these limitations, the present findings suggest that the program implementers perceived the Tier 1 Program and its implementation in a positive manner and regarded the program as beneficial to the development of both the students and the program implementers.