Impact evaluation

An impact evaluation provides information about the observed changes or 'impacts' produced by an intervention.

These observed changes can be positive and negative, intended and unintended, direct and indirect. An impact evaluation must establish the cause of the observed changes. Identifying the cause is known as 'causal attribution' or 'causal inference'.

If an impact evaluation fails to systematically undertake causal attribution, there is a greater risk that the evaluation will produce incorrect findings and lead to incorrect decisions. For example, deciding to scale up when the programme is actually ineffective or effective only in certain limited situations, or deciding to exit when a programme could be made to work if limiting factors were addressed.

1. What is impact evaluation?

An impact evaluation provides information about the impacts produced by an intervention.

The intervention might be a small project, a large programme, a collection of activities, or a policy.

Many development agencies use the definition of impacts provided by the Organisation for Economic Co-operation and Development – Development Assistance Committee:

"Positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended."

(OECD-DAC 2010)

This definition implies that impact evaluation:

  • goes beyond describing or measuring impacts that have occurred to seeking to understand the role of the intervention in producing these (causal attribution);
  • can encompass a wide range of methods for causal attribution; and,
  • includes examining unintended impacts.

2. Why do impact evaluation?

An impact evaluation can be undertaken to improve or reorient an intervention (i.e., for formative purposes) or to inform decisions about whether to continue, discontinue, replicate or scale up an intervention (i.e., for summative purposes).

While many formative evaluations focus on processes, impact evaluations can also be used formatively if an intervention is ongoing. For example, the findings of an impact evaluation can be used to improve implementation of a programme for the next intake of participants by identifying critical elements to monitor and tightly manage.

Most often, impact evaluation is used for summative purposes. Ideally, a summative impact evaluation does not only produce findings about 'what works' but also provides information about what is needed to make the intervention work for different groups in different settings.

3. When to do impact evaluation?

An impact evaluation should only be undertaken when its intended use can be clearly identified and when it is likely to be able to produce useful findings, taking into account the availability of resources and the timing of decisions about the intervention under investigation. An evaluability assessment might need to be done first to assess these aspects.

Prioritizing interventions for impact evaluation should consider: the relevance of the evaluation to the organisational or development strategy; its potential usefulness; the commitment from senior managers or policy makers to using its findings; and/or its potential use for advocacy or accountability requirements.

It is also important to consider the timing of an impact evaluation. When conducted too late, the findings come too late to inform decisions. When done too early, it will provide an inaccurate picture of the impacts (i.e., impacts will be underestimated when they have had insufficient time to develop, or overestimated when they decline over time).

What are the intended uses and timing?

Impact evaluation might be appropriate when there is scope to use the findings to inform decisions about future interventions.

It might not be appropriate when there are no clear intended uses or intended users. For example, if decisions have already been made on the basis of existing credible evidence, or if decisions need to be made before it is possible to undertake a credible impact evaluation.

What is the current focus?

Impact evaluation might be appropriate when there is a need to understand the impacts that have been produced.

It might not be appropriate when the priority at this stage is to understand and improve the quality of implementation.

Are there adequate resources to do the job?

Impact evaluation might be appropriate when there are adequate resources to undertake a sufficiently comprehensive and rigorous impact evaluation, including the availability of existing, good quality data and additional time and money to collect more.

It might not be appropriate when existing data are inadequate and there are insufficient resources to fill gaps with new, good quality data collection.

Is it relevant to current plans and priorities?

Impact evaluation might be appropriate when it is clearly linked to the strategies and priorities of an organisation, partnership and/or government.

It might not be appropriate when it is peripheral to the strategies and priorities of an organisation, partnership and/or government.

4. Who to engage in the evaluation process?

Independent of the type of evaluation, it is important to think through who should be involved, why and how they will be involved in each step of the evaluation process in order to develop a relevant and context-specific participatory approach. Participation can occur at any stage of the impact evaluation process: in deciding to do an evaluation, in its design, in data collection, in analysis, in reporting and, also, in managing it.

Being clear about the purpose of participatory approaches in an impact evaluation is a crucial first step towards managing expectations and guiding implementation. Is the purpose to ensure that the voices of those whose lives should have been improved by the programme or policy are central to the findings? Is it to ensure a relevant evaluation focus? Is it to hear people's own versions of change rather than obtain an external evaluator's set of indicators? Is it to build ownership of a donor-funded programme? These, and other considerations, would lead to different forms of participation by different combinations of stakeholders in the impact evaluation.

The fundamental rationale for choosing a participatory approach to impact evaluation can be either pragmatic or ethical, or a combination of the two. Pragmatic because better evaluations are achieved (i.e., better data, better understanding of the data, more appropriate recommendations, better uptake of findings); ethical because it is the right thing to do (i.e., people have a right to be involved in informing decisions that will directly or indirectly affect them, as stipulated by the UN human rights-based approach to programming).

Participatory approaches can be used in any impact evaluation design. In other words, they are not exclusive to specific evaluation designs, nor restricted to quantitative or qualitative data collection and analysis.

The starting point for any impact evaluation intending to use participatory approaches lies in clarifying what value participation will add to the evaluation itself as well as to the people who would be closely involved (but also considering the potential risks of their participation). Three questions need to be answered in each situation:

(1) What purpose will stakeholder participation serve in this impact evaluation?;

(2) Whose participation matters, when and why?; and,

(3) When is participation feasible?

Only by addressing these can the issue of how to make impact evaluation more participatory be addressed.

Read more on who to engage in the evaluation process:

5. How to plan and manage an impact evaluation?

Like any other evaluation, an impact evaluation should be planned formally and managed as a discrete project, with decision-making processes and management arrangements clearly documented from the beginning of the process.

Planning and managing include:

  • Describing what needs to be evaluated and developing the evaluation brief
  • Identifying and mobilizing resources
  • Deciding who will conduct the evaluation and engaging the evaluator(s)
  • Deciding on and managing the process for developing the evaluation methodology
  • Managing development of the evaluation work plan
  • Managing implementation of the work plan, including development of reports
  • Disseminating the report(s) and supporting use

Determining causal attribution is a requirement for calling an evaluation an impact evaluation. The design options (whether experimental, quasi-experimental, or non-experimental) all need significant investment in preparation and early data collection, and cannot be done if an impact evaluation is limited to a short exercise conducted towards the end of intervention implementation. Hence, it is particularly important that impact evaluation is planned as part of an integrated monitoring, evaluation and research plan and system that generates and makes available a range of evidence to inform decisions. This will also ensure that data from other M&E activities such as performance monitoring and process evaluation can be used, as required.

Read more on how to plan and manage an impact evaluation:

6. What methods can be used to do impact evaluation?

Framing the boundaries of the impact evaluation

The evaluation purpose refers to the rationale for conducting an impact evaluation. Evaluations that are being undertaken to support learning should be clear about who is intended to learn from it, how they will be engaged in the evaluation process to ensure it is seen as relevant and credible, and whether there are specific decision points around which this learning is expected to be applied. Evaluations that are being undertaken to support accountability should be clear about who is being held accountable, to whom and for what.

Evaluation relies on a combination of facts and values (i.e., principles, attributes or qualities held to be intrinsically good, desirable, important and of general worth such as 'being fair to all') to judge the merit of an intervention (Stufflebeam 2001). Evaluative criteria specify the values that will be used in an evaluation and, as such, help to set boundaries.

Many impact evaluations use the standard OECD-DAC criteria (OECD-DAC accessed 2015):

  • Relevance: The extent to which the objectives of an intervention are consistent with beneficiaries' requirements, country needs, global priorities and partners' policies.
  • Effectiveness: The extent to which the intervention's objectives were achieved, or are expected to be achieved, taking into account their relative importance.
  • Efficiency: A measure of how economically resources/inputs (funds, expertise, time, equipment, etc.) are converted into results.
  • Impact: Positive and negative primary and secondary long-term effects produced by the intervention, whether directly or indirectly, intended or unintended.
  • Sustainability: The continuation of benefits from the intervention after major development assistance has ceased. Interventions must be both environmentally and financially sustainable. Where the emphasis is not on external assistance, sustainability can be defined as the capacity of key stakeholders to sustain intervention benefits – after the cessation of donor funding – with efforts that use locally available resources.

The OECD-DAC criteria reflect the core principles for evaluating development assistance (OECD-DAC 1991) and have been adopted by most development agencies as standards of good practice in evaluation. Other commonly applied evaluative criteria focus on equity, gender equality and human rights. And, some are used for specific types of interventions such as humanitarian assistance, for example: coverage, coordination, protection and coherence. In other words, not all of these evaluative criteria are used in every evaluation; it depends on the type of intervention and/or the type of evaluation (e.g., the criterion of impact is irrelevant to a process evaluation).

Evaluative criteria should be thought of as 'concepts' that must be addressed in the evaluation. They are insufficiently defined to be applied systematically and in a transparent manner to make evaluative judgements about the intervention. Under each of the 'generic' criteria, more specific aspects such as benchmarks and/or standards* – appropriate for the type and context of the intervention – should be defined and agreed with key stakeholders.

The evaluative criteria should be clearly reflected in the evaluation questions the evaluation is intended to address.

*A benchmark or index is a set of related indicators that provides for meaningful, accurate and systematic comparisons regarding performance; a standard or rubric is a set of related benchmarks/indices or criteria that provides socially meaningful information regarding performance.

Defining the key evaluation questions (KEQs) the impact evaluation should address

Impact evaluations should be focused around answering a small number of high-level key evaluation questions (KEQs) that will be answered through a combination of evidence. These questions should be clearly linked to the evaluative criteria. For example:

  • KEQ1: What was the quality of the intervention design/content? [assessing relevance, equity, gender equality, human rights]
  • KEQ2: How well was the intervention implemented and adapted as needed? [assessing effectiveness, efficiency]
  • KEQ3: Did the intervention produce the intended results in the short, medium and long term? If so, for whom, to what extent and in what circumstances? [assessing effectiveness, impact, equity, gender equality]
  • KEQ4: What unintended results – positive and negative – did the intervention produce? How did these occur? [assessing effectiveness, impact, equity, gender equality, human rights]
  • KEQ5: What were the barriers and enablers that made the difference between successful and disappointing intervention implementation and results? [assessing relevance, equity, gender equality, human rights]
  • KEQ6: How valuable were the results to service providers, clients, the community and/or organizations involved? [assessing relevance, equity, gender equality, human rights]
  • KEQ7: To what extent did the intervention represent the best possible use of available resources to achieve results of the greatest possible value to participants and the community? [assessing efficiency]
  • KEQ8: Are any positive results likely to be sustained? In what circumstances? [assessing sustainability, equity, gender equality, human rights]

A range of more detailed (mid-level and lower-level) evaluation questions should then be articulated to address each evaluative criterion in detail. All evaluation questions should be linked explicitly to the evaluative criteria to ensure that the criteria are covered in full.

The KEQs also need to reflect the intended uses of the impact evaluation. For example, if an evaluation is intended to inform the scale-up of a pilot programme, then it is not enough to ask 'Did it work?' or 'What were the impacts?'. A good understanding is needed of how these impacts were achieved in terms of activities and supportive contextual factors in order to replicate the achievements of a successful pilot. Equity concerns require that impact evaluations go beyond simple average impact to identify for whom and in what ways the programme has been successful.

Within the KEQs, it is also useful to identify the different types of questions involved – descriptive, causal and evaluative.

  • Descriptive questions ask about how things are and what has happened, including describing the initial situation and how it has changed, the activities of the intervention and other related programmes or policies, the context in terms of participant characteristics, and the implementation environment.
  • Causal questions ask whether or not, and to what extent, observed changes are due to the intervention being evaluated rather than to other factors, including other programmes and/or policies.
  • Evaluative questions ask about the overall conclusion as to whether a programme or policy can be considered a success, an improvement or the best option.

Read more on defining the key evaluation questions (KEQs) the impact evaluation should address:

Defining impacts

Impacts are usually understood to occur later than, and as a result of, intermediate outcomes. For example, achieving the intermediate outcomes of improved access to land and increased levels of participation in community decision-making might occur before, and contribute to, the intended final impact of improved health and well-being for women. The distinction between outcomes and impacts can be relative, and depends on the stated objectives of an intervention. It should also be noted that some impacts may be emergent and, thus, cannot be predicted.

Read more on defining impacts:

Defining success to make evaluative judgements

Evaluation, by definition, answers evaluative questions, that is, questions about quality and value. This is what makes evaluation so much more useful and relevant than the mere measurement of indicators or summaries of observations and stories.

In any impact evaluation, it is important to first define what is meant by 'success' (quality, value). One way of doing so is to use a specific rubric that defines different levels of performance (or standards) for each evaluative criterion, deciding what evidence will be gathered and how it will be synthesized to reach defensible conclusions about the worth of the intervention.

At the very least, it should be clear what trade-offs would be appropriate in weighing up multiple impacts or distributional effects. As development interventions often have multiple impacts, which are distributed unevenly, this is an essential element of an impact evaluation. For example, should an economic development programme be considered a success if it produces increases in household income but also produces hazardous environmental impacts? Should it be considered a success if the average household income increases but the income of the poorest households is reduced?

To answer evaluative questions, what is meant by 'quality' and 'value' must first be defined and then relevant evidence gathered. Quality refers to how good something is; value refers to how good it is in terms of the specific situation, in particular taking into account the resources used to produce it and the needs it is supposed to address. Evaluative reasoning is needed to synthesize these elements to formulate defensible (i.e., well-reasoned and well-evidenced) answers to the evaluative questions.

Evaluative reasoning is a requirement of all evaluations, irrespective of the methods or evaluation approach used.

An evaluation should have a limited set of high-level questions which are about performance overall. Each of these KEQs should be further unpacked by asking more detailed questions about performance on specific dimensions of merit and sometimes even lower-level questions. Evaluative reasoning is the process of synthesizing the answers to lower- and mid-level questions into defensible judgements that directly answer the high-level questions.

Read more on defining success to make evaluative judgements:

Using a theory of change

Evaluations produce stronger and more useful findings if they not only investigate the links between activities and impacts but also investigate links along the causal chain between activities, outputs, intermediate outcomes and impacts. A 'theory of change' that explains how activities are understood to produce a series of results that contribute to achieving the ultimate intended impacts is helpful in guiding causal attribution in an impact evaluation.

A theory of change should be used in some form in every impact evaluation. It can be used with any research design that aims to infer causality, it can draw on a range of qualitative and quantitative data, and it offers support for triangulating the data arising from a mixed methods impact evaluation.

When planning an impact evaluation and developing the terms of reference, any existing theory of change for the programme or policy should be reviewed for appropriateness, comprehensiveness and accuracy, and revised as necessary. It should continue to be revised over the course of the evaluation should either the intervention itself or the understanding of how it works – or is intended to work – change.

Some interventions cannot be fully planned in advance, however – for example, programmes in settings where implementation has to respond to emerging barriers and opportunities, such as supporting the development of legislation in a volatile political environment. In such cases, different strategies will be needed to develop and use a theory of change for impact evaluation (Funnell and Rogers 2012). For some interventions, it may be possible to document the emerging theory of change as different strategies are trialled and adapted or replaced. In other cases, there may be a high-level theory of how change will come about (e.g., through the provision of incentives) and also an emerging theory about what has to be done in a particular setting to bring this about. Alternatively, an intervention's fundamental rationale might revolve around adaptive learning, in which case the theory of change should focus on articulating how the various actors gather and use information together to make ongoing improvements and adaptations.

A theory of change can support an impact evaluation in several ways. It can identify:

  • specific evaluation questions, especially in relation to those elements of the theory of change for which there is no substantive evidence yet
  • relevant variables that should be included in data collection
  • intermediate outcomes that can be used as markers of success in situations where the impacts of interest will not occur within the time frame of the evaluation
  • aspects of implementation that should be examined
  • potentially relevant contextual factors that should be addressed in data collection and in analysis, to look for patterns.

The evaluation may confirm the theory of change or it may suggest refinements based on the examination of evidence. An impact evaluation can check for success along the causal chain and, if necessary, examine alternative causal paths. For example, failure to achieve intermediate results might indicate implementation failure; failure to achieve the final intended impacts might be due to theory failure rather than implementation failure. This has important implications for the recommendations that come out of an evaluation. In cases of implementation failure, it is reasonable to recommend actions to improve the quality of implementation; in cases of theory failure, it is necessary to rethink the whole strategy to achieve impact.

Read more on using a theory of change:

Deciding on the evaluation methodology

The evaluation methodology sets out how the key evaluation questions (KEQs) will be answered. It specifies designs for causal attribution, including whether and how comparison groups will be constructed, and methods for data collection and analysis.

Strategies and designs for determining causal attribution

Causal attribution is defined by OECD-DAC as:

“Ascription of a causal link between observed (or expected to be observed) changes and a specific intervention.”

(OECD-DAC 2010)

This definition does not require that changes are produced solely or wholly by the programme or policy under investigation (UNEG 2013). In other words, it takes into consideration that other causes may also have been involved, for example, other programmes/policies in the area of interest or certain contextual factors (often referred to as 'external factors').

There are three broad strategies for causal attribution in impact evaluations:

  • estimating the counterfactual (i.e., what would have happened in the absence of the intervention, compared to the observed situation)
  • checking the consistency of evidence with the causal relationships made explicit in the theory of change
  • ruling out alternative explanations, through a logical, evidence-based process.

Using a combination of these strategies can usually help to increase the strength of the conclusions that are drawn.

There are three design options that address causal attribution:

  • Experimental designs – which construct a control group through random assignment (see the illustrative sketch after this list).
  • Quasi-experimental designs – which construct a comparison group through matching, regression discontinuity, propensity scores or another means.
  • Non-experimental designs – which look systematically at whether the evidence is consistent with what would be expected if the intervention was producing the impacts, and also whether other factors could provide an alternative explanation.
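
A minimal, purely illustrative sketch (not part of the original guidance) of how the counterfactual logic of an experimental design plays out in analysis: with random assignment, the control group's mean outcome estimates what would have happened without the intervention, so the average impact can be estimated as a difference in means. The dataset, file name and column names below are hypothetical.

```python
# Illustrative sketch only: estimating average impact under random assignment.
# Assumes a hypothetical dataset with columns 'treated' (1/0), 'outcome' and 'sex'.
import pandas as pd
from scipy import stats

df = pd.read_csv("household_survey.csv")          # hypothetical file
treated = df.loc[df["treated"] == 1, "outcome"]
control = df.loc[df["treated"] == 0, "outcome"]

# With random assignment, the control-group mean approximates the counterfactual,
# so the difference in means estimates the average impact.
average_impact = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
print(f"Estimated average impact: {average_impact:.2f} (p = {p_value:.3f})")

# Going beyond the average: compare impacts across subgroups (e.g., by sex)
# to ask 'for whom' the intervention worked.
group_means = df.groupby(["sex", "treated"])["outcome"].mean().unstack("treated")
print(group_means[1] - group_means[0])
```

Quasi-experimental designs replace random assignment with a constructed comparison group (e.g., matched or propensity-score-based); once a credible comparison group exists, the comparison proceeds in a broadly similar way.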

Some people and organisations use a narrower definition of impact evaluation, and only include evaluations containing a counterfactual of some kind. These different definitions are important when deciding what methods or research designs will be considered credible by the intended users of the evaluation or by partners or funders.

Read more on strategies and designs for determining causal attribution:

Data collection, management and analysis approaches

Well-chosen and well-implemented methods for data collection and analysis are essential for all types of evaluations. Impact evaluations need to go beyond assessing the size of the effects (i.e., the average impact) to identify for whom and in what ways a programme or policy has been successful. What constitutes 'success' and how the data will be analysed and synthesized to answer the specific key evaluation questions (KEQs) must be considered up front, as data collection should be geared towards the mix of evidence needed to make appropriate judgements about the programme or policy. In other words, the analytical framework – the methodology for analysing the 'meaning' of the data by looking for patterns in a systematic and transparent manner – should be specified during the evaluation planning stage. The framework includes how data analysis will address assumptions made in the programme's theory of change about how the programme was thought to produce the intended results. In a true mixed methods evaluation, this includes using appropriate numerical and textual analysis methods and triangulating multiple data sources and perspectives in order to maximize the credibility of the evaluation findings.

Start the data collection planning by examining to what extent existing evidence can be used. After reviewing currently available information, it is helpful to create an evaluation matrix (see below) showing which data collection and analysis methods will be used to answer each KEQ, and then to identify and prioritize data gaps that need to be addressed by collecting new data. This will help to confirm that the planned data collection (and collation of existing data) will cover all of the KEQs, determine whether there is sufficient triangulation between different data sources, and help with the design of data collection tools (such as questionnaires, interview questions, data extraction tools for document review and observation tools) to ensure that they gather the necessary information.

Example evaluation matrix: Matching data collection to key evaluation questions.

Data collection methods (columns): programme participant survey; key informant interviews; project records; observation of programme implementation. Each cell of the matrix marks which methods will be used to answer each KEQ.

Examples of key evaluation questions (KEQs) (rows):
  • KEQ 1: What was the quality of implementation?
  • KEQ 2: To what extent were the programme objectives met?
  • KEQ 3: What other impacts did the programme have?
  • KEQ 4: How could the programme be improved?

There are many different methods for collecting data. Although many impact evaluations use a variety of methods, what distinguishes a 'mixed methods evaluation' is the systematic integration of quantitative and qualitative methodologies and methods at all stages of an evaluation (Bamberger 2012). A key reason for mixing methods is that it helps to overcome the weaknesses inherent in each method when used alone. It also increases the credibility of evaluation findings when information from different data sources converges (i.e., they are consistent about the direction of the findings) and can deepen the understanding of the programme/policy, its impacts and context (Bamberger 2012).

Good data management includes developing effective processes for: consistently collecting and recording data, storing data securely, cleaning data, transferring data (e.g., between different types of software used for analysis), effectively presenting data and making data accessible for verification and use by others.

The particular analytical framework and the choice of specific data analysis methods will depend on the purpose of the impact evaluation and the type of KEQs that are intrinsically linked to this.

For answering descriptive KEQs, a range of analysis options is available, which can largely be grouped into two key categories: options for quantitative data (numbers) and options for qualitative data (e.g., text).
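
As a hypothetical illustration (not from the original text), descriptive analysis of quantitative data often involves simple summaries such as counts, averages and changes over time, disaggregated by participant characteristics. The file and column names below are assumptions.

```python
# Illustrative sketch only: descriptive summaries of hypothetical monitoring data.
import pandas as pd

records = pd.read_csv("participant_records.csv")   # hypothetical file

# How many participants were reached, and how did attendance vary by district?
print(records["participant_id"].nunique(), "participants reached")
print(records.groupby("district")["attendance_rate"].describe())

# Change over time: compare mean baseline and endline scores, disaggregated by sex.
change = records.pivot_table(index="sex", columns="round", values="score", aggfunc="mean")
change["difference"] = change["endline"] - change["baseline"]
print(change)
```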

For answering causal KEQs, there are essentially three broad approaches to causal attribution analysis: (1) counterfactual approaches; (2) checking the consistency of evidence with the causal relationship; and (3) ruling out alternatives (see above). Ideally, a combination of these approaches is used to establish causality.

For answering evaluative KEQs, specific evaluative rubrics linked to the evaluative criteria used (such as the OECD-DAC criteria) should be applied in order to synthesize the evidence and make judgements about the worth of the intervention (see above).
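
A loose, hypothetical sketch of how rubric-based synthesis can be made explicit (the criteria, performance levels and decision rule below are invented for illustration, not prescribed by the guidance): each criterion is rated against agreed levels, and the ratings are then combined using a stated rule rather than an ad hoc judgement.

```python
# Illustrative sketch only: synthesizing criterion ratings with an explicit rubric rule.
# Criteria, ratings and the decision rule are hypothetical examples.
LEVELS = ["poor", "adequate", "good", "excellent"]

ratings = {                     # agreed with stakeholders, based on the evidence
    "relevance": "good",
    "effectiveness": "adequate",
    "efficiency": "good",
    "impact": "adequate",
    "sustainability": "poor",
}

def overall_judgement(criterion_ratings):
    """Example 'weakest link' rule: the overall judgement can be no higher than
    the lowest-rated criterion, which makes trade-offs explicit up front."""
    return min(criterion_ratings.values(), key=LEVELS.index)

print("Overall performance:", overall_judgement(ratings))   # -> poor
```

Other rules (e.g., weighting criteria differently, or requiring a minimum level on equity-related criteria) are equally possible; the point is that the rule is agreed and written down before the evidence is synthesized.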

Read more on data collection, management and analysis approaches:

7. How can the findings be reported and their use supported?

The evaluation report should be structured in a manner that reflects the purpose and KEQs of the evaluation.

In the first instance, evidence to answer the detailed questions linked to the OECD-DAC criteria of relevance, effectiveness, efficiency, impact and sustainability, as well as considerations of equity, gender equality and human rights, should be presented succinctly but with sufficient detail to substantiate the conclusions and recommendations.

The specific evaluative rubrics should be used to 'interpret' the evidence and determine which considerations are critically important or urgent. Evidence on multiple dimensions should then be synthesized to generate answers to the high-level evaluative questions.

The structure of an evaluation report can do a great deal to encourage the succinct reporting of direct answers to evaluative questions, backed up by enough detail about the evaluative reasoning and methodology to allow the reader to follow the logic and clearly see the evidence base.

The following features will help to set clear expectations for evaluation reports that are strong on evaluative reasoning:

  1. The executive summary must contain direct and explicitly evaluative answers to the KEQs used to guide the whole evaluation.

  2. Explicitly evaluative language must be used when presenting findings (rather than value-neutral language that merely describes findings). Examples should be provided.

  3. Use of clear and simple data visualization to present easy-to-understand 'snapshots' of how the intervention has performed on the various dimensions of merit.

  4. Structuring of the findings section using KEQs as subheadings (rather than types and sources of evidence, as is frequently done).

  5. There must be clarity and transparency about the evaluative reasoning used, with explanations clearly understandable for both non-evaluators and readers without deep content expertise in the subject matter. These explanations must be brief and clear in the main body of the report, with more detail available in annexes.

  6. If evaluative rubrics are relatively small in size, these must be included in the main body of the report. If they are large, a brief summary of at least one or two must be included in the main body of the report, with all rubrics included in full in an annex.

Read more on how the findings can be reported and their use supported:

Page contributors

The content for this page was compiled by: Greet Peersman

The content is based on the 'UNICEF Methodological Briefs for Impact Evaluation', a collaborative project between the UNICEF Office of Research – Innocenti, BetterEvaluation, RMIT University and the International Initiative for Impact Evaluation (3ie). The briefs were written by (in alphabetical order): E. Jane Davidson, Thomas de Hoop, Delwyn Goodrick, Irene Guijt, Bronwen McDonald, Greet Peersman, Patricia Rogers, Shagun Sabarwal, Howard White.

Resources

Overviews/introductions to impact evaluation

Discussion papers

Guides

Blogs

Bamberger M (2012). Introduction to Mixed Methods in Impact Evaluation. Guidance Note No. 3. Washington DC: InterAction. See: https://www.interaction.org/blog/impact-evaluation-guidance-note-and-webinar-series/

Funnell S and Rogers P (2012). Purposeful Program Theory: Effective Use of Logic Models and Theories of Change. San Francisco: Jossey-Bass/Wiley.

OECD-DAC (1991). Principles for Evaluation of Development Assistance. Paris: Organisation for Economic Co-operation and Development – Development Assistance Committee (OECD-DAC). See: http://www.oecd.org/dac/evaluation/50584880.pdf

OECD-DAC (2010). Glossary of Key Terms in Evaluation and Results Based Management. Paris: Organisation for Economic Co-operation and Development – Development Assistance Committee (OECD-DAC). See: http://www.oecd.org/development/peer-reviews/2754804.pdf

OECD-DAC (accessed 2015). Evaluation of development programmes. DAC Criteria for Evaluating Development Assistance. Organisation for Economic Co-operation and Development – Development Assistance Committee (OECD-DAC). See: http://www.oecd.org/dac/evaluation/daccriteriaforevaluatingdevelopmentassistance.htm

Stufflebeam D (2001). Evaluation Values and Criteria Checklist. Kalamazoo: Western Michigan University Checklist Project. See: https://www.dmeforpeace.org/resource/evaluation-values-and-criteria-checklist/

UNEG (2013). Impact Evaluation in UN Agency Evaluation Systems: Guidance on Selection, Planning and Management. Guidance Document. New York: United Nations Evaluation Group (UNEG). See: http://www.uneval.org/papersandpubs/documentdetail.jsp?doc_id=1434
