
Program evaluation is a systematic method for collecting, analyzing, and using information to answer questions about projects, policies and programs, particularly about their effectiveness and efficiency. In both the public and private sectors, stakeholders often want to know whether the programs they are funding, implementing, voting for, receiving or objecting to are producing the intended effect. While program evaluation centers on effectiveness and efficiency, important considerations often include how much the program costs per participant, how the program could be improved, whether the program is worthwhile, whether there are better alternatives, whether there are unintended outcomes, and whether the program goals are appropriate and useful. Evaluators help to answer these questions, but the best way to answer them is for the evaluation to be a joint project between evaluators and stakeholders.

The process of evaluation is considered to be a relatively recent phenomenon. However, planned social evaluation has been documented as dating as far back as 2200 BC. Evaluation became particularly relevant in the U.S. in the 1960s during the period of the Great Society social programs associated with the Kennedy and Johnson administrations. Extraordinary sums were invested in social programs, but the impacts of these investments were largely unknown.

Program evaluations can involve both quantitative and qualitative methods of social research. People who do program evaluation come from many different backgrounds, such as sociology, psychology, economics, social work, and public policy. Some graduate schools also have specific training programs for program evaluation.

Conducting an evaluation

Program evaluation may be conducted at several stages during a program's lifetime. Each of these stages raises different questions to be answered by the evaluator, and correspondingly different evaluation approaches are needed. Rossi, Lipsey and Freeman (2004) suggest the following kinds of assessment, which may be appropriate at these different stages:

  • Assessment of the need for the program
  • Assessment of program design and logic/theory
  • Assessment of how the program is being implemented (i.e., is it being implemented according to plan? Are the program's processes maximizing possible outcomes?)
  • Assessment of the program's outcome or impact (i.e., what it has actually achieved)
  • Assessment of the program's cost and efficiency

Assessing needs

A needs assessment examines the population that the program intends to target, to see whether the need as conceptualized in the program actually exists in the population; whether it is, in fact, a problem; and if so, how it might best be dealt with. This includes identifying and diagnosing the actual problem the program is trying to address, who or what is affected by the problem, how widespread the problem is, and what are the measurable effects that are caused by the problem. For example, for a housing program aimed at mitigating homelessness, a program evaluator may want to find out how many people are homeless in a given geographic area and what their demographics are. Rossi, Lipsey and Freeman (2004) caution against undertaking an intervention without properly assessing the need for one, because this might result in a great deal of wasted funds if the need did not exist or was misconceived.

Needs assessment involves the processes or methods used by evaluators to describe and diagnose social needs. This is essential because evaluators cannot determine whether a program is effective unless they have identified what the problem/need is. Programs that skip a needs assessment can labor under the illusion that they have eradicated the problem/need when in fact there was no need in the first place. Needs assessment involves research and regular consultation with community stakeholders and with the people who will benefit from the project before the program can be developed and implemented; hence it should be a bottom-up approach. In this way potential problems can be identified early, because the process involves the community in identifying the need and thereby provides the opportunity to identify potential barriers.

The first task of a program evaluator is thus to construct a precise definition of what the problem is. This is most effectively done collaboratively, by including all possible stakeholders: the community impacted by the potential problem, the agents/actors working to address and resolve the problem, funders, and so on. Securing buy-in early in the process reduces the potential for push-back, miscommunication, and incomplete information later on.

Second, assess the extent of the problem. Having clearly identified what the problem is, evaluators need to assess its extent, answering the 'where' and 'how big' questions. Pointing out that a problem exists is much easier than specifying where it is located and how rife it is. Rossi, Lipsey and Freeman (2004) give the example that identifying some battered children may be enough evidence to persuade one that child abuse exists, but indicating how many children it affects, and where it is located geographically and socially, would require knowledge about abused children, the characteristics of perpetrators and the impact of the problem throughout the political authority in question.

This can be difficult, considering that child abuse is not a public behavior, and estimates of the rates of private behavior are usually not possible because of factors like unreported cases. In this case evaluators would have to use data from several sources and apply different approaches in order to estimate incidence rates. Two more questions then need to be answered: the 'how' and the 'what'. The 'how' question requires that evaluators determine how the need will be addressed: having identified the need and become familiar with the community, evaluators should conduct a performance analysis to identify whether the proposed plan in the program will actually be able to eliminate the need. The 'what' question requires that evaluators conduct a task analysis to find out what the best way to perform would be; for example, whether job performance standards are set by an organization or whether some governmental rules need to be considered when undertaking the task.
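One way evaluators combine several data sources to estimate the size of a hidden population is two-source capture-recapture. The sketch below uses Chapman's variant of the Lincoln-Petersen estimator; the source counts are hypothetical figures chosen purely for illustration:

```python
def lincoln_petersen(n1: int, n2: int, m: int) -> float:
    """Estimate total population size from two independent data sources.

    n1: cases identified by source 1 (e.g., hospital records)
    n2: cases identified by source 2 (e.g., social-services records)
    m:  cases appearing in BOTH sources
    """
    if m == 0:
        raise ValueError("No overlap between sources; estimate is undefined.")
    # Chapman's bias-corrected variant of the Lincoln-Petersen estimator
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical figures: 120 cases in hospital records, 90 in
# social-services records, and 30 recorded by both.
estimate = lincoln_petersen(120, 90, 30)
print(round(estimate))  # rough estimate of total cases, observed and unobserved
```

The estimator assumes the two sources are independent and cover the same population, assumptions an evaluator would need to defend before relying on the number.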

Third, define and identify the target of interventions and accurately describe the nature of the service needs of that population. It is important to know what/who the target population is – it might be individuals, groups, communities, etc. There are three units of the population: population at risk, population in need and population in demand.

  • Population at risk: people with a significant probability of developing the condition, e.g. the population at risk for birth control programs is women of child-bearing age.
  • Population in need: people with the condition that the program seeks to address, e.g. the population in need for a program that aims to provide ARVs to HIV-positive people is people who are HIV positive.
  • Population in demand: that part of the population in need that acknowledges having the need and is willing to take part in what the program has to offer, e.g. not all HIV-positive people will be willing to take ARVs.
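The three population units above can be expressed as nested filters over survey records. The field names and records below are illustrative assumptions, not from the source:

```python
from dataclasses import dataclass

@dataclass
class Person:
    at_risk: bool      # significant probability of developing the condition
    in_need: bool      # actually has the condition the program addresses
    in_demand: bool    # acknowledges the need and is willing to participate

# Hypothetical survey records for a small community
community = [
    Person(at_risk=True,  in_need=True,  in_demand=True),
    Person(at_risk=True,  in_need=True,  in_demand=False),
    Person(at_risk=True,  in_need=False, in_demand=False),
    Person(at_risk=False, in_need=False, in_demand=False),
]

population_at_risk = [p for p in community if p.at_risk]
population_in_need = [p for p in community if p.in_need]
# The population in demand is, by definition, a subset of the population in need.
population_in_demand = [p for p in community if p.in_need and p.in_demand]

print(len(population_at_risk), len(population_in_need), len(population_in_demand))
```

The point of the sketch is the containment: demand ⊆ need ⊆ risk, which is why a program sized for the population at risk can overestimate actual uptake.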

Being able to specify what/who the target is will assist in establishing appropriate boundaries, so that interventions can correctly address the target population and be feasible to apply.

Evaluations are, however, often constrained by limited budgets, time and data. The 'shoestring approach' (Bamberger et al., 2004) helps to ensure that the maximum possible methodological rigor is achieved under these constraints.

Budget constraints

Frequently, programs are faced with budget constraints because most original projects do not include a budget to conduct an evaluation (Bamberger et al., 2004). As a result, evaluations are often allocated budgets that are inadequate for a rigorous evaluation. Due to budget constraints it might be difficult to apply the most appropriate methodological instruments, and these constraints may in turn reduce the time available in which to do the evaluation (Bamberger et al., 2004). Budget constraints may be addressed by simplifying the evaluation design, revising the sample size, exploring economical data collection methods (such as using volunteers to collect data, shortening surveys, or using focus groups and key informants) or looking for reliable secondary data (Bamberger et al., 2004).

Time constraints

A common time constraint arises when an evaluator is summoned to conduct an evaluation when a project is already underway, is given limited time to do the evaluation compared to the life of the study, or is not given enough time for adequate planning. Time constraints are particularly problematic when the evaluator is not familiar with the area or country in which the program is situated (Bamberger et al., 2004). Time constraints can be addressed by the methods listed under budget constraints above, and also by careful planning to ensure effective data collection and analysis within the limited time frame.

Data constraints

If the evaluation is initiated late in the program, there may be no baseline data on the conditions of the target group before the intervention began (Bamberger et al., 2004). Another possible cause of data constraints is that data collected by program staff may contain systematic reporting biases or poor record-keeping standards and be of little use (Bamberger et al., 2004). Data constraints may also arise if the target group is difficult to reach to collect data from – for example homeless people, drug addicts, migrant workers, et cetera (Bamberger et al., 2004). Data constraints can be addressed by reconstructing baseline data from secondary data or through the use of multiple methods. Multiple methods, such as the combination of qualitative and quantitative data, can increase validity through triangulation and save time and money. Additionally, these constraints may be dealt with through careful planning and consultation with program stakeholders. By clearly identifying and understanding client needs ahead of the evaluation, the costs and time of the evaluative process can be streamlined and reduced, while still maintaining credibility.

All in all, time, monetary and data constraints can have negative implications for the validity, reliability and transferability of the evaluation. The shoestring approach was created to assist evaluators in addressing these limitations by identifying ways to reduce costs and time, reconstruct baseline data and ensure maximum quality under existing constraints (Bamberger et al., 2004).

Five-tiered approach

The five-tiered approach to evaluation further develops the strategies that the shoestring approach to evaluation is based upon. It was originally developed by Jacobs (1988) as an alternative way to evaluate community-based programs and as such was applied to a statewide child and family program in Massachusetts, U.S.A. The five-tiered approach is offered as a conceptual framework for matching evaluations more precisely to the characteristics of the programs themselves, and to the particular resources and constraints inherent in each evaluation context. In other words, the five-tiered approach seeks to tailor the evaluation to the specific needs of each evaluation context.

The earlier tiers (1–3) generate descriptive and process-oriented information while the later tiers (4–5) determine both the short-term and the long-term effects of the program. The five levels are organized as follows:

  • Tier 1: needs assessment (sometimes referred to as pre-implementation)
  • Tier 2: monitoring and accountability
  • Tier 3: quality review and program clarification (sometimes referred to as understanding and refining)
  • Tier 4: achieving outcomes
  • Tier 5: establishing impact

For each tier, purpose(s) are identified, along with corresponding tasks that enable the identified purpose of the tier to be achieved. For example, the purpose of the first tier, needs assessment, would be to document a need for a program in a community. The task for that tier would be to assess the community's needs and assets by working with all relevant stakeholders.

While the tiers are structured for consecutive use, meaning that information gathered in the earlier tiers is required for tasks on higher tiers, the approach acknowledges the fluid nature of evaluation. It is therefore possible to move from later tiers back to preceding ones, or even to work in two tiers at the same time. It is important for program evaluators to note, however, that a program must be evaluated at the appropriate level.

The five-tiered approach is said to be useful for family support programs which emphasise community and participant empowerment. This is because it encourages a participatory approach involving all stakeholders and it is through this process of reflection that empowerment is achieved.

Methodological challenges presented by language and culture

The purpose of this section is to draw attention to some of the methodological challenges and dilemmas evaluators potentially face when conducting a program evaluation in a developing country. In many developing countries the major sponsors of evaluation are donor agencies from the developed world, and these agencies require regular evaluation reports in order to maintain accountability and control of resources, as well as to generate evidence for the program's success or failure. However, evaluators face many hurdles and challenges when implementing an evaluation that makes use of techniques and systems not developed within the context to which they are applied. Some of the issues include differences in culture, attitudes, language and political process.

Culture is defined by Ebbutt (1998, p. 416) as a "constellation of both written and unwritten expectations, values, norms, rules, laws, artifacts, rituals and behaviors that permeate a society and influence how people behave socially". Culture can influence many facets of the evaluation process, including data collection, evaluation program implementation and the analysis and understanding of the results of the evaluation. In particular, instruments which are traditionally used to collect data such as questionnaires and semi-structured interviews need to be sensitive to differences in culture, if they were originally developed in a different cultural context. The understanding and meaning of constructs which the evaluator is attempting to measure may not be shared between the evaluator and the sample population and thus the transference of concepts is an important notion, as this will influence the quality of the data collection carried out by evaluators as well as the analysis and results generated by the data.

Language also plays an important part in the evaluation process, as language is tied closely to culture. Language can be a major barrier to communicating the concepts which the evaluator is trying to access, and translation is often required. There are a multitude of problems with translation, including the loss of meaning as well as the exaggeration or enhancement of meaning by translators. For example, terms which are contextually specific may not translate into another language with the same weight or meaning. In particular, data collection instruments need to take meaning into account, as subject matter that may not be considered sensitive in one context might prove to be sensitive in the context in which the evaluation is taking place. Thus, evaluators need to take into account two important concepts when administering data collection tools: lexical equivalence and conceptual equivalence. Lexical equivalence asks: how does one phrase a question in two languages using the same words? This is a difficult task to accomplish, and techniques such as back-translation may aid the evaluator but may not result in perfect transference of meaning. This leads to the next point, conceptual equivalence: it is not a common occurrence for concepts to transfer unambiguously from one culture to another. Data collection instruments which have not undergone adequate testing and piloting may therefore render results which are not useful, as the concepts measured by the instrument may have taken on a different meaning, rendering the instrument unreliable and invalid.
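A rough lexical-equivalence screen can be sketched with back-translation. The word-for-word lookup table below is a toy stand-in for a real human or machine translator (an assumption for illustration only); note how two source words collapsing into one target word, as contextually specific terms often do, shows up as lost meaning on the round trip:

```python
# Toy word-for-word "translator" standing in for a real translator
# (illustrative assumption). "sad" and "unhappy" deliberately collapse
# to the same target-language word.
EN_TO_X = {"how": "wie", "often": "oft", "do": "tust", "you": "du",
           "feel": "fühlst", "sad": "traurig", "unhappy": "traurig"}
X_TO_EN = {v: k for k, v in EN_TO_X.items()}  # "traurig" maps back to "unhappy"

def translate(words, table):
    # Look each word up; leave words with no entry unchanged.
    return [table.get(w, w) for w in words]

def back_translation_overlap(question: str) -> float:
    """Fraction of the original words recovered after a round trip.

    A low score flags items whose lexical equivalence needs review; it
    says nothing about conceptual equivalence, which requires piloting
    the instrument with the target population.
    """
    original = question.lower().split()
    round_trip = translate(translate(original, EN_TO_X), X_TO_EN)
    matches = sum(a == b for a, b in zip(original, round_trip))
    return matches / len(original)

print(back_translation_overlap("how often do you feel unhappy"))  # 1.0
print(back_translation_overlap("how often do you feel sad"))      # ~0.83: "sad" returned as "unhappy"
```

A perfect score here only means the words survived the round trip; whether "traurig" carries the intended construct is the conceptual-equivalence question the text describes.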

Thus, it can be seen that evaluators need to take into account the methodological challenges created by differences in culture and language when attempting to conduct a program evaluation in a developing country.

Utilization of results

There are three conventional uses of evaluation results: persuasive utilization, direct (instrumental) utilization, and conceptual utilization.

Persuasive utilization

Persuasive utilization is the enlistment of evaluation results in an effort to persuade an audience to either support an agenda or to oppose it. Unless the 'persuader' is the same person that ran the evaluation, this form of utilization is not of much interest to evaluators as they often cannot foresee possible future efforts of persuasion.

Direct (instrumental) utilization

Evaluators often tailor their evaluations to produce results that can have a direct influence on the improvement of the structure, or of the process, of a program. For example, the evaluation of a novel educational intervention may produce results indicating no improvement in students' marks. This may be because the intervention lacks a sound theoretical background, or because it is not conducted as originally intended. The results of the evaluation would hopefully cause the creators of the intervention to go back to the drawing board and re-create the core structure of the intervention, or even change the implementation processes.

Conceptual utilization

Even if evaluation results do not have a direct influence on the re-shaping of a program, they may still be used to make people aware of the issues the program is trying to address. Returning to the example of an evaluation of a novel educational intervention, the results can also be used to inform educators and students about the different barriers that may influence students' learning difficulties. A number of studies on these barriers may then be initiated by this new information.

Variables affecting utilization

There are five conditions that seem to affect the utility of evaluation results, namely relevance, communication between the evaluators and the users of the results, information processing by the users, the plausibility of the results, as well as the level of involvement or advocacy of the users.

Guidelines for maximizing utilization

Quoted directly from Rossi et al. (2004, p. 416):

  • Evaluators must understand the cognitive styles of decisionmakers
  • Evaluation results must be timely and available when needed
  • Evaluations must respect stakeholders' program commitments
  • Utilization and dissemination plans should be part of the evaluation design
  • Evaluations should include an assessment of utilization

Internal versus external program evaluators

The choice of evaluator may be regarded as equally important as the process of the evaluation itself. Evaluators may be internal (persons associated with the program to be executed) or external (persons not associated with any part of the execution/implementation of the program) (Division for Oversight Services, 2004). The following provides a brief summary of the advantages and disadvantages of internal and external evaluators, adapted from the Division for Oversight Services (2004); for a more comprehensive list, see that source.

Internal evaluators

Advantages

  • May have better overall knowledge of the program and possess informal knowledge of the program
  • Less threatening as already familiar with staff
  • Less costly

Disadvantages

  • May be less objective
  • May be more preoccupied with other activities of the program and not give the evaluation complete attention
  • May not be adequately trained as an evaluator

External evaluators

Advantages

  • More objective about the process; offers new perspectives and different angles from which to observe and critique the process
  • May be able to dedicate greater amount of time and attention to the evaluation
  • May have greater evaluation expertise

Disadvantages

  • May be more costly and require more time for contracting, monitoring, negotiations, etc.
  • May be unfamiliar with program staff, creating anxiety among those being evaluated
  • May be unfamiliar with organizational policies and certain constraints affecting the program

Three paradigms

Positivist

Potter (2006) identifies and describes three broad paradigms within program evaluation. The first, and probably most common, is the positivist approach, in which evaluation can only occur where there are "objective", observable and measurable aspects of a program, requiring predominantly quantitative evidence. The positivist approach includes evaluation dimensions such as needs assessment, assessment of program theory, assessment of program process, impact assessment and efficiency assessment (Rossi, Lipsey and Freeman, 2004). A detailed example of the positivist approach is a Public Policy Institute of California report titled "Evaluating Academic Programs in California's Community Colleges", in which the evaluators examine measurable activities (i.e. enrollment data) and conduct quantitative assessments such as factor analysis.

Interpretive

The second paradigm identified by Potter (2006) is that of interpretive approaches, which argue that it is essential for the evaluator to develop an understanding of the perspectives, experiences and expectations of all stakeholders. This leads to a better understanding of the various meanings and needs held by stakeholders, which is crucial before one is able to make judgments about the merit or value of a program. The evaluator's contact with the program is often over an extended period of time and, although there is no standardized method, observation, interviews and focus groups are commonly used. A report commissioned by the World Bank details eight approaches by which qualitative and quantitative methods can be integrated, perhaps yielding insights not achievable through either method alone.

Critical-emancipatory

Potter (2006) also identifies critical-emancipatory approaches to program evaluation, which are largely based on action research for the purposes of social transformation. This type of approach is much more ideological and often includes a greater degree of social activism on the part of the evaluator. This approach would be appropriate for qualitative and participative evaluations. Because of its critical focus on societal power structures and its emphasis on participation and empowerment, Potter argues this type of evaluation can be particularly useful in developing countries.

Whichever paradigm is used in a program evaluation, whether positivist, interpretive or critical-emancipatory, it is essential to acknowledge that evaluation takes place in specific socio-political contexts. Evaluation does not exist in a vacuum, and all evaluations, whether their practitioners are aware of it or not, are influenced by socio-political factors. It is important to recognize that evaluations and the findings which result from the evaluation process can be used in favour of or against particular ideological, social and political agendas (Weiss, 1999). This is especially true in an age when resources are limited and there is competition between organizations for certain projects to be prioritised over others (Louw, 1999).

Empowerment evaluation

Empowerment evaluation makes use of evaluation concepts, techniques, and findings to foster improvement and self-determination of a particular program aimed at a specific target population/program participants. Empowerment evaluation is oriented towards getting program participants involved in bringing about change in the programs they are targeted for. One of the main focuses of empowerment evaluation is to incorporate program participants in conducting the evaluation process. This process is then often followed by some form of critical reflection on the program. In such cases, an external/outsider evaluator serves as a consultant/coach/facilitator to the program participants and seeks to understand the program from the perspective of the participants. Once a clear understanding of the participants' perspective has been gained, appropriate steps and strategies can be devised (with the valuable input of the participants) and implemented in order to reach desired outcomes.

According to Fetterman (2002), empowerment evaluation has three steps:

  • Establishing a mission
  • Taking stock
  • Planning for the future

Establishing a mission

The first step involves evaluators asking the program participants and staff members (of the program) to define the mission of the program. Evaluators may opt to carry this step out by bringing such parties together and asking them to generate and discuss the mission of the program. The logic behind this approach is to show each party that there may be divergent views of what the program mission actually is.

Taking stock

The second step, taking stock, consists of two important tasks. The first task is concerned with program participants and program staff generating a list of current key activities that are crucial to the functioning of the program. The second task is concerned with rating the identified key activities, also known as prioritization. For example, each party member may be asked to rate each key activity on a scale from 1 to 10, where 10 is the most important and 1 the least important. The role of the evaluator during this task is to facilitate interactive discussion amongst members in an attempt to establish some baseline of shared meaning and understanding pertaining to the key activities. In addition, relevant documentation (such as financial reports and curriculum information) may be brought into the discussion when considering some of the key activities.
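The rating-and-prioritization task described above can be sketched in code. The activities, the stakeholder ratings, and the spread threshold used to flag disagreement are all hypothetical choices for illustration:

```python
from statistics import mean, stdev

# Hypothetical 1-10 importance ratings of key activities from four stakeholders
ratings = {
    "parent outreach":   [9, 8, 9, 7],
    "staff training":    [6, 9, 4, 8],
    "quarterly reports": [3, 4, 5, 3],
}

# Rank activities by average importance. A high spread signals that
# stakeholders disagree, so the evaluator should facilitate discussion
# there to build a shared baseline of meaning.
for activity, scores in sorted(ratings.items(),
                               key=lambda kv: mean(kv[1]), reverse=True):
    flag = " (divergent views)" if stdev(scores) > 2 else ""
    print(f"{activity}: mean {mean(scores):.2f}{flag}")
```

In this made-up data, "staff training" ranks second on average but carries the widest disagreement, which is exactly the kind of item the evaluator would bring back to the group.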

Planning for the future

After prioritizing the key activities, the next step is to plan for the future. Here the evaluator asks program participants and program staff how they would like to improve the program in relation to the key activities listed. The objective is to create a thread of coherence whereby the mission generated (step 1) guides the taking of stock (step 2), which in turn forms the basis for the plans for the future (step 3). Thus, in planning for the future, specific goals are aligned with relevant key activities. It is also important for program participants and program staff to identify possible forms of evidence (measurable indicators) which can be used to monitor progress towards specific goals. Goals must be related to the program's activities, talents, resources and scope of capability; in short, the goals formulated must be realistic.

These three steps of empowerment evaluation give a program the potential to run more effectively and in closer touch with the needs of the target population. Empowerment evaluation, as a process facilitated by a skilled evaluator, equips and empowers participants by providing them with a 'new' way of critically thinking about and reflecting on programs. Furthermore, it empowers program participants and staff to recognize their own capacity to bring about program change through collective action.

Transformative paradigm

The transformative paradigm is integral in incorporating social justice in evaluation. Donna Mertens, primary researcher in this field, states that the transformative paradigm, "focuses primarily on viewpoints of marginalized groups and interrogating systemic power structures through mixed methods to further social justice and human rights". The transformative paradigm arose after marginalized groups, who have historically been pushed to the side in evaluation, began to collaborate with scholars to advocate for social justice and human rights in evaluation. The transformative paradigm introduces many different paradigms and lenses to the evaluation process, leading it to continually call into question the evaluation process.

Both the American Evaluation Association and National Association of Social Workers call attention to the ethical duty to possess cultural competence when conducting evaluations. Cultural competence in evaluation can be broadly defined as a systematic, responsive inquiry that is actively cognizant, understanding, and appreciative of the cultural context in which the evaluation takes place; that frames and articulates the epistemology of the evaluation endeavor; that employs culturally and contextually appropriate methodology; and that uses stakeholder-generated, interpretive means to arrive at the results and further use of the findings. Many health and evaluation leaders are careful to point out that cultural competence cannot be determined by a simple checklist; rather, it is an attribute that develops over time. The root of cultural competency in evaluation is a genuine respect for the communities being studied and openness to seek depth in understanding different cultural contexts, practices and paradigms of thinking. This includes being creative and flexible in capturing different cultural contexts, and a heightened awareness of the power differentials that exist in an evaluation context. Important skills include the ability to build rapport across difference, to gain the trust of community members, and to self-reflect and recognize one's own biases.

Paradigms

A paradigm's axiology, ontology, epistemology, and methodology are reflective of social justice practice in evaluation. The examples below focus on addressing inequalities and injustices in society by promoting inclusion and equality in human rights.

Axiology (Values and Value Judgements)

The transformative paradigm's axiological assumption rests on four primary principles:

  • The importance of being culturally respectful
  • The promotion of social justice
  • The furtherance of human rights
  • Addressing inequities

Ontology (Reality)

Differences in perspectives on what is real are determined by diverse values and life experiences. In turn these values and life experiences are often associated with differences in access to privilege, based on such characteristics as disability, gender, sexual identity, religion, race/ethnicity, national origins, political party, income level, age, language, and immigration or refugee status.

Epistemology (Knowledge)

Knowledge is constructed within the context of power and privilege with consequences attached to which version of knowledge is given privilege. "Knowledge is socially and historically located within a complex cultural context".

Methodology (Systematic Inquiry)

Methodological decisions are aimed at determining the approach that will best facilitate use of the process and findings to enhance social justice; identify the systemic forces that support the status quo and those that will allow change to happen; and acknowledge the need for a critical and reflexive relationship between the evaluator and the stakeholders.

While operating through a social justice lens, it is imperative to be able to view the world through the eyes of those who experience injustices. Critical Race Theory, feminist theory, and queer/LGBTQ theory are frameworks for thinking about how justice can be provided to marginalized groups. Each lens creates an opportunity to foreground that theory's concerns when addressing inequality.

Critical Race Theory[edit]

Critical Race Theory [CRT] is an extension of critical theory that focuses on inequities based on race and ethnicity. Daniel Solorzano describes the role of CRT as providing a framework to investigate and make visible those systemic aspects of society that allow the discriminatory and oppressive status quo of racism to continue.

Feminist theory[edit]

The essence of feminist theories is to "expose the individual and institutional practices that have denied access to women and other oppressed groups and have ignored or devalued women".

Queer/LGBTQ theory[edit]

Queer/LGBTQ theorists question the heterosexist bias that pervades society in terms of power over and discrimination toward sexual orientation minorities. Because of the sensitivity of issues surrounding LGBTQ status, evaluators need to be aware of safe ways to protect such individuals’ identities and ensure that discriminatory practices are brought to light in order to bring about a more just society.

Government requirements[edit]

Framework for program evaluation in public health

Given the Federal budget deficit, the Obama Administration moved to apply an "evidence-based approach" to government spending, including rigorous methods of program evaluation. The President's 2011 Budget earmarked funding for 19 government program evaluations for agencies such as the Department of Education and the United States Agency for International Development [USAID]. An inter-agency group works toward the goal of increasing transparency and accountability by creating effective evaluation networks and drawing on best practices. A six-step framework for conducting evaluation of public health programs, published by the Centers for Disease Control and Prevention [CDC], initially increased the emphasis on program evaluation of government programs in the US. The framework is as follows:

  1. Engage stakeholders.
  2. Describe the program.
  3. Focus the evaluation.
  4. Gather credible evidence.
  5. Justify conclusions.
  6. Ensure use and share lessons learned.

In January 2019, the Foundations for Evidence-Based Policymaking Act introduced new requirements for federal agencies, such as naming a Chief Evaluation Officer. Guidance published by the Office of Management and Budget on implementing this law requires agencies to develop a multi-year learning agenda, which has specific questions the agency wants to answer to improve strategic and operational outcomes. Agencies must also complete an annual evaluation plan summarizing the specific evaluations the agency plans to undertake to address the questions in the learning agenda.

Types of evaluation[edit]

There are many different approaches to program evaluation. Each serves a different purpose.

  • Utilization-Focused Evaluation
  • CIPP Model of evaluation
  • Formative Evaluation
  • Summative Evaluation
  • Developmental Evaluation
  • Principles-Focused Evaluation
  • Theory-Driven Evaluation
  • Realist-Driven Evaluation

CIPP Model of evaluation[edit]

History of the CIPP model[edit]

The CIPP model of evaluation was developed by Daniel Stufflebeam and colleagues in the 1960s. CIPP is an acronym for Context, Input, Process and Product; the model requires the evaluation of each of these four dimensions in judging a programme's value. CIPP is a decision-focused approach to evaluation and emphasises the systematic provision of information for programme management and operation.

CIPP model[edit]

The CIPP framework was developed as a means of linking evaluation with programme decision-making. It aims to provide an analytic and rational basis for programme decision-making, based on a cycle of planning, structuring, implementing, and reviewing and revising decisions, each examined through a different aspect of evaluation: context, input, process and product evaluation.

The CIPP model is an attempt to make evaluation directly relevant to the needs of decision-makers during the phases and activities of a programme. Stufflebeam's context, input, process, and product [CIPP] evaluation model is recommended as a framework to systematically guide the conception, design, implementation, and assessment of service-learning projects, and provide feedback and judgment of the project's effectiveness for continuous improvement.

Four aspects of CIPP evaluation[edit]

These aspects are context, inputs, process, and product. These four aspects of CIPP evaluation assist a decision-maker to answer four basic questions:

  • What should we do?

This involves collecting and analysing needs assessment data to determine goals, priorities and objectives. For example, a context evaluation of a literacy programme might involve an analysis of the existing objectives of the programme, literacy achievement test scores, staff concerns [general and particular], literacy policies and plans, and community concerns, perceptions or attitudes and needs.

  • How should we do it?

This involves identifying the steps and resources needed to meet the new goals and objectives, and might include identifying successful external programmes and materials as well as gathering information.

  • Are we doing it as planned?

This provides decision-makers with information about how well the programme is being implemented. By continuously monitoring the programme, decision-makers learn such things as how well it is following the plans and guidelines, conflicts arising, staff support and morale, strengths and weaknesses of materials, and delivery and budgeting problems.

  • Did the programme work?

By measuring the actual outcomes and comparing them to the anticipated outcomes, decision-makers are better able to decide if the programme should be continued, modified, or dropped altogether. This is the essence of product evaluation.
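The correspondence between the four CIPP aspects, the basic questions they answer, and the decisions they inform can be sketched as a simple lookup table. This is purely an illustrative aid for readers who think in code, not part of Stufflebeam's model itself; the question wordings follow common summaries of CIPP.

```python
# Illustrative sketch: each CIPP aspect paired with the basic question
# it answers and the kind of programme decision it informs.
CIPP = {
    "context": {
        "question": "What should we do?",
        "informs": "planning decisions (goals, priorities, objectives)",
    },
    "input": {
        "question": "How should we do it?",
        "informs": "structuring decisions (resources, strategies, designs)",
    },
    "process": {
        "question": "Are we doing it as planned?",
        "informs": "implementing decisions (monitoring and adjusting delivery)",
    },
    "product": {
        "question": "Did it work?",
        "informs": "recycling decisions (continue, modify, or drop the programme)",
    },
}


def guiding_question(aspect: str) -> str:
    """Return the basic question a given CIPP aspect answers."""
    return CIPP[aspect.lower()]["question"]
```

For example, `guiding_question("process")` returns the monitoring question, which maps onto the process-evaluation activities described above.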

Using CIPP in the different stages of the evaluation[edit]

As an evaluation guide, the CIPP model allows evaluators to ask formative questions at the beginning of the program, and later supports evaluation of the program's impact through summative questions on all aspects of the program.

What are the 4 types of evaluations?

The four basic types of evaluation are clinical reviews, clinical trials, program reviews, and program trials.

What type of evaluation should be used to measure the effectiveness of a program?

Impact evaluation assesses a program's effectiveness in achieving its ultimate goals, while process evaluation determines whether program activities have been implemented as intended and resulted in certain outputs.

Which methods is used to conduct evaluation of the program?

The three main types of evaluation methods are goal-based, process-based and outcomes-based.

Which form of evaluation occurs during the program?

Outcome evaluation is conventionally used during program implementation. It generates data on the program's outcomes and the degree to which those outcomes are attributable to the program itself.
