Sunday 26 February 2012

Assignment 2: Description and Evaluation of a Prenatal Exercise Program for Urban Aboriginal Women

For the provided case study, choose a model or approach that you feel is appropriate to evaluate the program and explain why you think it would work.
Description and Evaluation of a Prenatal Exercise Program for Urban Aboriginal Women

 
Upon reading the title of the program case study, “Description and Evaluation of a Prenatal Exercise Program for Urban Aboriginal Women,” Stake’s Countenance Model immediately came to mind as a possible model for evaluating this program.  The two key components, or countenances, of Stake’s model are description and judgement (evaluation), so it seemed like a logical fit as both components appear in the title.  However, after careful reading of the text, it is apparent that the paper describes a variety of program components in detail but offers relatively little evaluation.  Another evaluation process that would be effective in addressing this particular program is Utilization-Focused Evaluation, or U-FE.

What is Utilization-Focused Evaluation?
     Foundational to U-FE is the principle that an evaluation and its findings should focus on the “…intended use by intended users” (Patton, 2002, p. 1).  Utilization-focused evaluation “…is a process for making decisions about issues in collaboration with an identified group of primary users focusing on their intended uses of evaluation” (Patton, 2002, p. 1).  From the beginning, the evaluator or facilitator works with the primary users to plan an appropriate evaluation based on the nature and situation of the program.  Through this ongoing interactive process, the evaluator and primary users collaborate to determine the following components of evaluation:

  • Purpose – formative, summative, process
  • Data collection – qualitative, quantitative, mixed
  • Design – experimental, naturalistic, quasi-experimental
  • Focus – inputs, outputs, outcomes, cost-benefit

This evaluation process is not static; rather, it is guided by situational responsiveness.  Another key component of U-FE is that it “answers the question of whose values will frame the evaluation by working with clearly identified, primary intended users who have responsibility to apply evaluation findings and implement recommendations” (Patton, 2002, p. 1).  Utilization-focused evaluation takes a constructivist approach to evaluation as primary users build their understanding of the process and use of evaluation.  By being actively involved in the process, primary users are more likely to take ownership of the evaluation and implement its findings.  Although U-FE is based on collaborative and constructivist learning, the evaluation process is framed by a twelve-part checklist organized according to the primary tasks of evaluation and the challenges identified for each task.

Why use U-FE to evaluate the Prenatal Exercise Program for Urban Aboriginal Women?
  Utilization-focused evaluation would be an effective process for evaluating the Prenatal Exercise Program for Urban Aboriginal Women.  In the discussion section of the paper, the authors suggest that the prenatal exercise program may have continued “had it been a ‘grassroots’ initiative or designated as having priority status by Aboriginal leaders” (Klomp, Dyck, & Sheppard, 2003, p. 237).  Program evaluation needs to be sensitive to cultural values and norms, and in this particular program, to Aboriginal values and norms.  In U-FE, the intended users are more likely to enact the recommendations if they know that their values and norms frame the process.  This evaluation process would provide a collaborative working relationship between the facilitator and the primary users of the findings and recommendations to be implemented.  Primary users, as identified by the U-FE checklist, include people who have a direct stake in the evaluation and meet identified (and negotiable) criteria.  With this particular program, the stakeholders are not clearly identified, but primary users for this evaluation could include the Aboriginal Project Facilitator, the Elder, a representative from the National Health Research and Development Program (NHRDP), a representative from the YMCA, a health representative such as the Registered Nurse Coordinator, and the authors of the paper, Klomp, Dyck and Sheppard, as they developed the program.  Although it may be a challenge to coordinate U-FE, the detailed checklist provides a framework for constructing an evaluation, including identification of intended users and role clarification for the facilitator and intended users.

 The U-FE checklist is intended to guide the facilitator and the primary users in selecting an evaluation purpose, data collection methods, design, and focus that align with the context and values of the primary users.  To begin this process, the facilitator and the primary users would work together to establish a purpose and focus, and one way to accomplish this would be the co-construction of a logic model.  This collaborative process would provide an overview of the program and identify the inputs, outputs, and outcomes as well as any assumptions.  It would also help to build understanding of the evaluation process, a main goal of U-FE, and clarify the program for both the facilitator and the primary users.  Although the description of the program begins with an implied long-term objective, reduced gestational diabetes mellitus (GDM) among urban Aboriginal women and, ultimately, prevention of type 2 diabetes in future generations, the purpose of the program is never explicitly stated.  Some of the inputs (personnel, funding, materials, partners), outputs (45-minute fitness classes), and outcomes (medium term: improved fitness and self-esteem; long term: reduced GDM and type 2 diabetes) are mentioned but not explicitly identified, so determining whether they are present, intended, or achieved is challenging.  However, an evaluation must assess the program based on whatever information is provided, in whatever format or state it is presented.  This is why U-FE is an appropriate process for evaluating this program: the ongoing collaborative process lends itself to developing a logic model that clarifies the purpose and focus of the evaluation alongside the primary users, whose values remain at the forefront of the process.

            Once the purpose (formative, summative, or process) and focus (inputs, outputs, outcomes, or cost-benefit) are identified, data collection and evaluation design can be addressed.  A document that the evaluator and primary users could review to support these next steps is the “First Nations Child and Family Caring Society (FNCFCS) of Canada Program Evaluation Research: Final Report.”  In this March 2006 report, the evaluator reviewed various evaluation resources that could be used or adapted to meet the needs of the FNCFCS, First Nations programs, and general programs.  One of the report’s recommendations was a proposed evaluation framework, and a component of this framework was that evaluation be utilization focused.  The report supports a utilization-focused evaluation in that it identifies the importance of a participatory model and of the evaluation meeting the needs of the primary users.  It would provide a framework for evaluating the prenatal program as well as a resource for accessing evaluation models that address First Nations programs, since it includes the features and limitations of the various models reviewed.

        Utilization-Focused Evaluation is a process of evaluation rather than a particular model.  Although this approach to program evaluation could be costly in terms of time and money, a constructivist, collaborative, inquiry-based U-FE, framed by the values of the primary users, may be the most appropriate way to ensure intended use by intended users.

Thursday 9 February 2012

Assignment 1: Windows to Youth Health

Choose a completed evaluation; any kind, your choice. Determine the model used and identify in your mind the strengths and weaknesses of the evaluation and the approach that was taken.



Windows to Youth Health
Please note:

PHF refers to Health Canada's Population Health Fund

PPHB refers to Health Canada's Population and Public Health Branch


There are myriad definitions of program evaluation. Weiss (1998) defines program evaluation as “…the systematic assessment of the operation and/or outcomes of a program or policy, compared to a set of explicit or implicit standards as a means of contributing to the improvement of the program or policy” (p. 4). According to the Treasury Board of Canada Secretariat (2009), program evaluation is “… the systematic collection and analysis of evidence on the outcomes of programs to make judgements about their relevance and performance, and to examine alternative ways to deliver them or to achieve the same results” (p. 1). In simplified terms, these definitions suggest that program evaluation is about assessing a program’s identified outcomes and communicating feedback on both the intervention and its success for the purpose of sustaining and improving the program. As a means to construct my understanding of this form of evaluation, I chose to analyze a completed program evaluation, Windows to Youth Health: Population Health Fund Program Evaluation for BC Region Youth Projects 2004/2005. By identifying the evaluation model used to assess the program, and examining the strengths and weaknesses of the evaluation, I can build and clarify my knowledge and understanding of program evaluation.

One important step in assessing an evaluation of a program is to identify the assessment model. There are several models that Zena Simces and Associates, the external evaluators, could have applied to evaluate the youth projects program. One such model is Stufflebeam's CIPP model, a holistic decision-based model. This systematic and comprehensive approach to evaluation focuses on a program's context, inputs, process, and product to aid the stakeholders in making decisions about program improvement. Provus' Discrepancy Model (DIPPC) and Scriven's focus on formative and summative evaluation could also have been used to evaluate the youth projects program. However, Stake’s Countenance Model, a judgement-oriented model, is clearly evident in this program evaluation. Stake identified that the two acts, or countenances, of evaluation are description and judgement of a program, and that both are viewed according to their phase in the program: antecedents are conditions prior to the program; transactions occur during the process; and outcomes are the effects of the program (Wood, 2001). This framework provides for a comprehensive description and judgement of a program, and examines the congruency between what was intended and what was observed, as well as the links or “…contingencies, between antecedents, transactions, and outcome variables” (Wood, 2001, p. 19).  By employing Stake's Countenance Model to evaluate the program, the evaluators provided the stakeholders with descriptions, learnings, and recommendations to inform future program planning to support youth at risk.

The primary focus of this program evaluation was summative, as its purpose was to examine “… the overall effectiveness of the PHF regional funding for youth, and to produce a program evaluation roll-up report” (Public Health Agency of Canada, 2005, p. 7), or, in Stake’s terms, to describe and judge the program’s outcomes.  The evaluation was also formative, as the intent of the PPHB was to use the report recommendations for future planning.  The external evaluator, Zena Simces and Associates, also identified seven key objectives for the evaluation of the ten youth projects funded by the PHF in British Columbia, which together included summative, process, and formative evaluation.  One objective that addressed process evaluation, or Stake’s component of transaction, was the focus on evaluating the projects’ implementation over the course of the two-year funding period.  The data collection methodology used to address the evaluation purposes and objectives included document reviews (Stake’s antecedent) and content analysis of the documents using a set of pre-determined evaluation questions, as well as interviews with key stakeholders using the same questions.  Data analysis was qualitative, which allowed for in-depth descriptions and judgements of the successes and challenges of the funding program.

The evaluation Windows to Youth Health: Population Health Fund Program Evaluation for BC Region Youth Projects 2004/2005, conducted by Zena Simces and Associates, is an exemplary evaluation for a number of reasons.  The following bullets outline some of the evaluation’s strengths as well as suggestions for improvement:

Strengths of Evaluation:

Comprehensive, Connected, and Clear



-the 73-page evaluation included an executive summary, introduction and context, methodology including limitations and impacts, findings including learnings and recommendations, and appendices including an extensive list of documents reviewed prior to the evaluation (antecedent)
-written in accessible language for the stakeholders

Congruency and Contingency
-descriptions and judgements were based on the identified purpose of the evaluation as well as the seven stated objectives of the evaluation
-the evaluators focused on describing what was intended and what was observed or congruency
-the evaluators described the indicators or objectives, identified what could be learned from the findings, and stated recommended actions based on the indicators and learnings
-evaluators reviewed documents to gather data about what occurred before program funding (antecedent), examined the implementation process through review of documents including logic models developed by each program team, as well as conducting interviews (transactions), and evaluated the outcomes of the program (outcomes)
- a logic model was developed in consultation with the projects and five main program outcomes were identified
-the evaluators identified the purpose of the evaluation, the stakeholders, and their intended use of the findings and recommendations
-limitations of conducting the evaluation were identified and addressed: for example, there were challenges around the availability and quality of documentation by the projects, so the evaluators collected additional data to address the void

Suggestions for Improvement:
Logic Model:
-perhaps the evaluators could have suggested that the PHFBC logic model, which was developed in consultation with the project leaders, be revised to address the challenges experienced by the project leaders in developing their own logic models based on the BC logic model


Cost:
-inclusion of a cost analysis of the evaluation in terms of time and money
-the projects were government funded, so for transparency and accountability, taxpayers should know how their tax dollars are allocated
-as this was an exemplary evaluation, other program evaluators could benefit from knowing the amount of time and money required to conduct this type of evaluation, and then use it as a benchmark for their own evaluation context


Zena Simces and Associates provided a descriptive and judgement-based evaluation in Windows to Youth Health: Population Health Fund Program Evaluation for BC Region Youth Projects 2004/2005.  The comprehensive evaluation report was written in the language of the stakeholders and focused on addressing and responding to the identified objectives of the evaluation.  The evaluators’ recommendations addressed both the strengths of the funding program and areas for improvement.  Stake’s Countenance Model provided a framework for the evaluators to effectively examine the congruency between what was intended and what was observed, as well as the contingency between the variables, resulting in an exemplary evaluation.  As with all assessment and evaluation, the most important piece is what the stakeholders do with the provided information.


References

Kennedy, M. (2001). Race matters in the life/work of four white female teachers. Unpublished doctoral dissertation, University of Alberta, Edmonton, Canada.

Treasury Board of Canada Secretariat. (2009). Policy on evaluation. Government of Canada.

Weiss, C. H. (1998). Evaluation: Methods for studying programs and policies (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

Wood, B. B. (2001). Stake’s countenance model: Evaluating an environmental education professional development course. Journal of Environmental Education, 32(2), 18-27.
