Evaluation planning for funding applicants


The Social Policy Evaluation and Research Unit’s (Superu’s) purpose is to increase the use of evidence by people across the social sector so that they can make better decisions – about funding, policies or services – to improve the lives of New Zealanders, New Zealand communities, families and whānau.

The Using Evidence for Impact project takes a big-picture approach and aims to inspire all those working in the New Zealand social sector to use evidence in decision-making.

The objectives behind the programme are to drive:

  • greater accessibility to evidence
  • greater transparency of evidence
  • capability development and good practice in using evidence.

Who is this guide for?

This guide is for you if you’re applying for funding for a social initiative, and you’ve been asked to describe how you’ll evaluate it. You may be responsible for a government agency’s budget bid, or you may be involved in a non-government organisation’s application for government or philanthropic funding. 

This guide describes evaluation planning for funding applications. Evaluation helps provide accountability to funders, but it is also about more than that. Evaluation can help you to understand what works, what has been less successful, and why. It can help to improve delivery, and the findings can encourage others to adopt the successful elements of your initiative. 

This guide describes the basics of evaluation planning, and how planning relates to a funding application. Links to online resources with more detail and practical guidance are provided at the end of this document. This document is a guide, not an instruction manual. Depending on your funder’s and your organisation’s requirements, you may need more or less detailed evaluation planning before your initiative is funded. 


What is evaluation?

Evaluation is the systematic determination of value. Evaluation of an initiative asks:

  • what changes have been caused by it?
  • how valuable are those changes?

Evaluation may look at different things, including:

  • how, and how well, an initiative has been delivered (process evaluation)
  • the extent to which the initiative contributed to the achievement of target outcomes and unintended outcomes (outcome, or impact evaluation)
  • value for money, cost-effectiveness, or cost-benefit (economic evaluation).

Evaluation is not performance monitoring, but it uses performance monitoring data. While monitoring measures change over time, evaluation tells us why change has, or has not, occurred. Evaluation also tells us about the value of the change and whether it is attributable to the initiative.


Do:

  • Find out what the funder wants to know from evaluation, and how much evaluation planning detail they require from your application
  • Engage stakeholders
  • State the evaluation purpose and key questions
  • Describe what your initiative intends to achieve and how it will do so
  • Develop an intervention logic and use it to develop the questions your evaluation will answer
  • Identify measures and methods that can address the evaluation questions, including data sources, and ways of understanding causation
  • Ensure your evaluation reflects the culture and values of the communities you serve
  • Plan performance monitoring and evaluation together
  • Acknowledge that you will continue to develop your evaluation plan over time.


Don’t:

  • Plan an evaluation that is out of proportion to the need for evidence, or that is unfeasible with the available budget.


1. Engage stakeholders

Even at an early stage, stakeholders can help you to describe the initiative, develop the evaluation purpose and questions, and identify measures and methods.

Stakeholders in the evaluation can be people who will:

  • make decisions about the initiative based on the evaluation findings
  • be affected by those decisions
  • provide data or information for the evaluation.

Only some of the initiative’s stakeholders will be involved before it is funded, but even at this stage, they can help you to identify the most important issues to cover in the evaluation.


2. Establish evaluation purpose and key questions

The purpose of the evaluation will depend on what you, your funders, and other stakeholders want to find out from evaluation. At this stage, it will be helpful to find out what your funder wants to know from the evaluation.


2.1. Evaluation purpose

The evaluation purpose should describe how the results of the evaluation will be used: what kind of change in practice or policy could the evaluation reasonably lead to? This is distinct from the purpose of the initiative. While your initiative’s purpose might be to improve some specific social outcomes, the evaluation’s purpose might be to determine:

  • what has worked and what hasn’t, providing lessons for the wider sector
  • whether the initiative provides sufficient value to justify continued funding
  • ways in which the initiative could be improved.


2.2. Key evaluation questions

Bearing in mind your evaluation purpose, you can now develop the key questions that the evaluation will seek to answer. Some common key questions are as follows:

  • How well has the initiative been implemented?
  • What outcomes have resulted from the initiative?
  • What value for money does the initiative generate?
  • What parts of the initiative work and what don't (and for whom)?
  • What can be done to improve the initiative's effectiveness?
  • Does the initiative address a demonstrable need (is it what the community needs)?

At the end of this document there are links to online resources that will help you to develop your evaluation purpose and key questions.


3. Describe the initiative

As part of your funding application, you will be describing what the initiative will do.

Your description should include information on:

  • the rationale (what needs will the initiative address; who will be targeted; what will be done; who will benefit and what benefits will they gain; how will the activities result in those benefits)
  • the size of the initiative in terms of the budget and resources required
  • involved parties (partner organisations and stakeholders).

Your rationale should be supported by evidence, such as evaluation findings showing what was achieved by similar initiatives elsewhere. You should identify any contextual differences that may influence what will be achieved by your initiative.

You should make sure that your initiative is well described in your funding application, as this will make designing an evaluation much easier.


3.1. Intervention logic

An intervention logic is a useful tool for planning and evaluation, and should be developed as part of your description of the initiative. It is a flow chart of the initiative’s inputs, activities, outputs, and outcomes, which illustrates how it will work. An intervention logic can help to shape evaluation questions and data collection and can help you to communicate assumptions and intended outcomes. 

You can start to develop your intervention logic by filling in a table like this. 


Inputs – What you invest (time, money, equipment etc)

Activities (systems and processes) – What you do to convert your inputs into outputs (e.g. training, communication)

Outputs – What you produce (tangible products/services)

Outcomes (short term) – Change expected to be achieved in 1-2 years

Outcomes (medium term) – Change expected to be achieved in 3-4 years

Outcomes (long term) – Change expected to be achieved in 5+ years
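If it helps to organise your thinking, the same columns can be sketched as a simple structure. The example below is a minimal illustration for an invented job-training initiative; every detail in it is hypothetical.

```python
# Hypothetical intervention logic for an invented job-training
# initiative, organised by the columns of the table above.
intervention_logic = {
    "inputs": ["funding", "trainers", "training venue"],
    "activities": ["recruit participants", "deliver weekly workshops"],
    "outputs": ["120 participants complete the course"],
    "outcomes (short term, 1-2 years)": ["participants gain job-search skills"],
    "outcomes (medium term, 3-4 years)": ["higher employment among participants"],
    "outcomes (long term, 5+ years)": ["sustained reduction in unemployment"],
}

# Each stage should flow logically into the next; reading the stages
# in order is a quick check on the rationale.
for stage, items in intervention_logic.items():
    print(f"{stage}: {', '.join(items)}")
```

Reading the stages in sequence is a useful test of the narrative described below: if a stage does not plausibly lead to the next, an assumption is missing.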



While an intervention logic illustrates the rationale behind an initiative, on its own, it does not provide enough detail on how activities will contribute to outcomes and how lower-level outcomes will lead to higher-level outcomes. It should be accompanied by a narrative that describes how actions are intended to produce results, what the underlying assumptions are, and the risks and external factors that influence whether or not the outcomes will be achieved.

Other terms for an intervention logic include: ‘logic model’, ‘programme logic’, ‘programme theory’, ‘theory of change’, ‘causal model’, ‘outcomes hierarchy’, and ‘results chain’. While subtle differences are implied by different terms, all are essentially diagrams that show how an initiative’s activities are linked to outcomes.

Intervention logic models focus on intended results, but unintended results (positive and negative) can be important too. When planning your evaluation, you can work with stakeholders to identify and document potential unintended results. 

At the end of this document there are links to online resources that guide you through the process of developing an intervention logic and identifying risks and unintended results.


4. Develop initiative-specific evaluation questions

Using your intervention logic and the key evaluation questions, you can now draft the more specific questions that the evaluation will address. These questions will guide the choice of evaluation measures and methods. 

There are a number of frameworks that can help you to develop evaluation questions, and there are links to several such frameworks at the end of this document. Your initiative-specific questions should relate to the content of your intervention logic. For example, if one intended outcome is the reduction of hospitalisations for diabetes-related conditions, the related questions may be: “has there been a decrease in hospitalisations for diabetes-related conditions?” and “to what extent is this a result of the initiative?”

At this point you do not need an exhaustive list of questions, but you should have identified the specific issues in enough detail to indicate how you will approach the evaluation, and the main methods that you will use. Your questions should be related to the purpose of your evaluation.

In some cases, you may propose evaluation at several stages of the initiative, and your questions may change according to where you are in the life-cycle of the initiative. For example, in the initiative’s first year you may focus on measures of how well processes have been implemented. Later you may focus on short term or medium term outcomes, and long term outcomes may not be measurable for several years.

The questions may need revision once the initiative is in progress, and further stakeholders are involved. You should check that the questions are appropriate to stakeholders’ values and culture, and that they respond to what they need to know. 


5. Decide how much evaluation is needed

Large scale evaluation is not needed for every initiative. If there is already a lot of evidence suggesting that it will work, you may need only a limited evaluation that assesses implementation and any unique aspects of your situation. If there is little existing evidence, you will need more evaluation to build the evidence base. 

Extensive evaluation can be difficult to justify for smaller initiatives, because the cost of this evaluation can be high relative to investment in the initiative itself. If your initiative is small and unproven, you may wish to discuss this issue with your funder. They may be willing to fund a larger scale, robust evaluation, even when the initiative is small, to develop the evidence base. Or they may support a stepped approach, where a limited evaluation is followed by a wider programme roll-out and more extensive evaluation, if the initial results are promising.


6. Identify measures

Using your intervention logic and evaluation questions, you can now consider what your evaluation will measure. Measures can also be referred to as ‘indicators’ or ‘metrics’. 

At this early stage, you do not need to identify every measure you will use. It is enough to identify the main measures, for which you will develop information collection processes. For example, if your initiative aims to improve student achievement, you should identify how you will measure achievement and how and when you will collect this data. You may need achievement data from the students before and after they participate, and you may need to allow some time for improvements to become apparent. You may also need to collect achievement data for a group of students who did not participate, to assess causal attribution (more on this in the ‘understanding the causes of outcomes’ section later in this document). 

It’s important to select measures where the influence of the initiative is strong, and where there’s likely to be usable data at a reasonable cost. If you can’t find a way to get data for a particular measure, you may need to choose a different measure, or to use proxy data (that is, something else that you can measure, that will provide a reasonable indication of the result you’re interested in).

Long term outcomes can take a long time to manifest. Where they cannot be measured within the evaluation’s timeframe, or where it’s not possible to directly measure them at all, you can still learn about how well the initiative has worked by measuring the short to medium term outcomes described in your intervention logic.

You can start to identify your measures by filling in a table like this.

The table has a row for each evaluation question and sub-question (Question 1; Sub-Q 1; Sub-Q 2; Question 2; and so on), and columns for:

  • measure/s
  • strength of influence on the measure (both of the initiative and of other factors)
  • data source/collection method
  • approximate cost
  • timing.

Measures and data collection techniques need to suit the personal, cultural and social attributes of participants, otherwise their validity may be compromised. For example, attempting to measure western concepts of wellbeing among Māori participants may not be valid and may annoy participants. Data collection via a telephone survey will not work well if many participants don’t have telephones. You may need to plan some piloting of the data collection, to test whether the measures and methods are feasible and valid. 

At the end of this document there are links to online resources that will help you to identify good measures, to develop processes for collecting the measures, and to choose methods that are culturally appropriate.


7. Understand the causes of outcomes

As well as identifying the main outcomes that you will measure, you should show how you will judge how much of those outcomes were caused by your initiative, as opposed to being caused by other factors. To understand why, imagine the following situations:

  • You provide job training and find that 70% of participants gain employment within three months of completion. This seems positive, but how will you convince funders that this is a higher employment rate than participants would have achieved without the training?
  • Your initiative aims to improve tertiary education retention rates, but 50% of students who participate subsequently leave study. Is this a poor result, or would those students have left study at an even higher rate otherwise?
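As a rough numeric sketch of the first scenario (all figures invented), the initiative’s effect is the difference between the participants’ result and a comparison group’s result, not the participants’ result alone:

```python
# Hypothetical figures only: why a comparison group matters.
participants_employed = 0.70  # employment rate among participants
comparison_employed = 0.55    # assumed rate for similar non-participants

# The estimated effect is the difference, not the raw 70%.
estimated_effect = participants_employed - comparison_employed
print(f"Estimated effect of the initiative: {estimated_effect:.0%}")
```

Without the comparison figure, the full 70% could wrongly be credited to the training.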

There are three main approaches to assessing causation: experimental, quasi-experimental, and non-experimental designs. The ‘Further reading’ section at the end of this document links to guidance on each, and on the factors that you need to consider in planning for them.

While understanding causes can seem technical and difficult, small adjustments to how your initiative is implemented can make a big difference to how easy it is to evaluate causes. Here are some adjustments to consider.

  • Piloting – If there is little existing evidence for the effectiveness of the initiative, piloting can allow it to be tried and evaluated before full-scale resources are committed. Piloting can also help you find a comparison group, because only some of the eligible and interested people participate in the pilot. 
  • Random allocation – In allocating participants to the initiative, a random mechanism can be used, enabling the use of an experimental design/randomised controlled trial. 
  • Phased introduction – In a phased introduction, all eligible and interested people participate, but sequentially over time. Outcomes for participants who have already participated can be compared to outcomes for those who have not yet participated. 
  • Scoring eligibility – Using criteria for eligibility and a scoring system to determine who best meets those criteria can make the process more transparent, and it can facilitate better evaluation. The evaluation could compare participant outcomes to outcomes for non-participants who scored just below the eligibility cut-off score (this method is known as ‘regression discontinuity’). 
  • Plan monitoring and evaluation together – Make sure that data will be collected about the baseline situation before the initiative begins, as well as when the initiative is operating. Identify your comparison group, and consider what data you will collect from them at baseline, and during the initiative. Check that your monitoring measures allow you to track progress on inputs, activities, outputs, and outcomes, as specified in your intervention logic.
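To illustrate the ‘scoring eligibility’ idea, a regression discontinuity comparison looks at applicants just either side of the cut-off score. The sketch below uses entirely invented scores and outcomes:

```python
# Hypothetical data: (eligibility score, outcome measure) for
# applicants near an assumed cut-off score of 50.
cutoff = 50
applicants = [
    (47, 0.40), (48, 0.42), (49, 0.41),  # just below: did not participate
    (50, 0.55), (51, 0.57), (52, 0.58),  # just above: participated
]

below = [outcome for score, outcome in applicants if score < cutoff]
above = [outcome for score, outcome in applicants if score >= cutoff]

# Applicants near the cut-off are similar to each other, so the jump
# in average outcomes at the cut-off estimates the initiative's effect.
effect = sum(above) / len(above) - sum(below) / len(below)
print(f"Estimated effect near the cut-off: {effect:.2f}")
```

A real regression discontinuity analysis needs many more observations and statistical expertise, but the underlying comparison is as simple as this.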


8. Interpret the results

Once the evaluation has collected data on the measures and assessed causation, you will need to assess what those results mean. How will you know that the initiative has been a good use of resources, or a better use than other initiatives that could have received the funding?

Before your initiative is funded, you do not need to identify what level of performance will be considered to be good, but you should have some idea of how you will do that. For example, imagine the following situations:


  • If your initiative led to a 5% decrease in burglaries, is this a good result? How will you decide what levels of outcomes are good, very good, or poor?
  • Is a 5% decrease in burglaries worth the resources that went into the initiative? How will you decide whether the outcomes justified the costs?
  • Is your initiative more or less successful than other strategies to reduce burglaries? How will you determine which of several initiatives provided greater value?
  • If your initiative led to a 5% decrease in burglaries, but upset residents and reduced property values because it removed a local park, how well did it work overall? Looking across the different outcomes, how will you make an overall assessment of value?

Ways of figuring out how valuable the initiative’s achievements are include:

  • comparing achievement to existing relevant standards or benchmarks
  • comparing achievement to the initiative’s stated goals
  • assigning monetary values to outcomes using econometric techniques
  • surfacing and documenting stakeholders’ tacit values, and developing consensus on what level of performance constitutes success.
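To illustrate the monetary-value approach, a very simple cost-benefit calculation compares monetised outcomes to the initiative’s cost. All figures below are invented; real monetary values would come from sources such as Treasury’s CBAx tool:

```python
# Hypothetical cost-benefit sketch for the burglary example above.
initiative_cost = 200_000            # total cost of the initiative (NZ$)
burglaries_prevented = 120           # estimated by the evaluation
value_per_burglary_avoided = 2_500   # assumed monetary value (NZ$)

total_benefit = burglaries_prevented * value_per_burglary_avoided
benefit_cost_ratio = total_benefit / initiative_cost
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
```

A ratio above 1 suggests the monetised benefits exceed the costs, though outcomes that cannot be monetised (such as the loss of the park) still need to be weighed alongside it.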

In the next section there are links to online resources that provide guidance on various methods of assessing how valuable the outcomes are from an initiative. These include some guidance on cost-benefit and cost-effectiveness techniques, which may be appropriate and feasible for some larger initiatives. 


Further reading

Resources related to sections of this document. 


1. Engage stakeholders



Identifying and working with evaluation stakeholders 

Better Evaluation website 

http://betterevaluation.org/plan/manage/identify_engage_users 

Centers for Disease Control and Prevention Introduction to Program Evaluation for Public Health Programs section on engaging stakeholders 

http://www.cdc.gov/eval/guide/step1/index.htm 

Robert Wood Johnson Foundation Guide for Engaging Stakeholders in Developing Evaluation Questions. Presents practical strategies for involving stakeholders in evaluation planning 


Evaluation standards 

Evaluation Standards for Aotearoa New Zealand 

http://www.superu.govt.nz/sites/default/files/Superu_Evaluation_standards.pdf 

Engaging with Māori and Pacific communities 

Te Puni Kōkiri advice on building relationships for effective engagement with Māori 


Te Puni Kōkiri advice on measuring and reporting on effectiveness for Māori, including advice on how Māori should be engaged at each step 


Health Research Council guidelines for researchers on health research involving Māori 

http://www.hrc.govt.nz/sites/default/files/Guidelines%20for%20HR%20on%20Maori-%20Jul10%20revised%20for%20Te%20Ara%20Tika%20v2%20FINAL[1].pdf 

The Ministry of Pacific Island Affairs’ framework for policy development involving Pacific peoples. Pages 25-29 outline the Ministry’s Pacific consultation guidelines 


Health Research Council guidelines for researchers on Pacific health research 

http://www.hrc.govt.nz/sites/default/files/Pacific%20Health%20Research%20Guidelines%202014.pdf 


2. Establish evaluation purpose and key questions



Determining the evaluation purpose

Better Evaluation website advice and links to further resources on defining the evaluation purpose


International Development Research Centre’s guide on how to identify the evaluation purpose and users


Field Guide for Evaluation published by Pact. Page 23 presents succinct guidance on developing an evaluation purpose statement



3. Describe the initiative



Developing an initial description

Better Evaluation website


Finding and interpreting evidence for outcomes from similar initiatives

Superu resource on critical appraisal and sources of evidence


Developing an intervention logic

Better Evaluation website advice and links to further resources on developing an intervention logic


University of Kansas, Community Toolbox guidance on developing a logic model or theory of change. Includes instructions, examples, and links to further resources


W.K. Kellogg Foundation Logic Model Development Guide. Detailed guidance on how to develop an intervention logic for your programme


Identifying unintended results

Better Evaluation website advice on ways to identify potential unintended results. This can be done as an accompaniment to your intervention logic



4. Develop initiative-specific evaluation questions



Developing evaluation questions 

Better Evaluation website list of resources on developing evaluation questions 

http://betterevaluation.org/plan/engage_frame/decide_evaluation_questions 

New South Wales Department of Premier and Cabinet ‘Evaluation Toolkit’. Scroll to the section on key evaluation questions 

http://www.dpc.nsw.gov.au/programs_and_services/policy_makers_toolkit/steps_in_managing_an_evaluation_project/2._develop_the_evaluation_brief 

Field Guide for Evaluation published by Pact. Pages 23- 26 describe types of evaluation questions, and steps in developing them. Pages 27-28 describe developing an evaluation purpose statement and key evaluation questions 

http://betterevaluation.org/sites/default/files/Field%20Guide%20for%20Evaluation_Final.pdf 

Overview of Impact Evaluation published by Unicef. Pages 6-8 present a framework for evaluation questions that is an alternative to that presented above 

http://devinfolive.info/impact_evaluation/img/downloads/Overview_ENG.pdf 


5. Decide how much evaluation is needed



Scale of evaluation 

U.K. Treasury ‘Magenta Book – Guidance for evaluation’. Pages 35-36 describe considerations affecting appropriate resourcing of evaluations 

https://www.gov.uk/government/publications/the-magenta-book 


6. Identify measures



Identifying good measures

Better Evaluation website guidance on how to determine what success looks like, including criteria (what to measure) and standards (the level of performance on the criteria that is valuable)


Centers for Disease Control and Prevention Introduction to Program Evaluation for Public Health Programs, section on gathering credible evidence


Statistics New Zealand Good Practice Guidelines for the Development and Reporting of Indicators


Criteria for Selection of High-Performing Indicators - A Checklist to Inform Monitoring and Evaluation


Examples of measures

Better Evaluation website links to examples of outcome and performance measures


What Works New Zealand website case studies of selected New Zealand evaluation projects, describing the measures and methods that they used


Data collection methods and integrating measurement into programme activities

Better Evaluation website summary of methods of collecting data. Your early planning does not need to cover this in detail, but this page may provide ideas for how to collect information via existing records, individuals, groups, or physical measurements


University of Kansas, Community Toolbox guidance on progress monitoring, including suggestions for how to integrate monitoring into programme activities


U.K. Treasury ‘Magenta Book – Guidance for evaluation’. Chapter 7 (pages 69-80) describes data collection methods, the relationship between monitoring and evaluation, and implementation of data collection as part of programme activity


New Zealand data sources

Searchable collation of New Zealand government data



7. Understand the causes of outcomes



Choosing an appropriate design 

Better Evaluation website entry-level guidance on ways of understanding causes 

http://betterevaluation.org/plan/understandcauses 

Presentation by Jane Davidson on methods of understanding causes, with examples 

http://betterevaluation.org/resources/guides/causal_inference_nuts_and_bolts 

U.K. Treasury ‘Magenta Book – Guidance for evaluation’. Pages 97-111 describe impact evaluation designs and feasibility issues with experimental and quasi-experimental designs 

https://www.gov.uk/government/publications/the-magenta-book 

Field Guide for Evaluation published by Pact. Pages 29-37 describe types of evaluation designs, and ways to choose an appropriate design 

http://betterevaluation.org/sites/default/files/Field%20Guide%20for%20Evaluation_Final.pdf 

Experimental designs 

Better Evaluation website guidance on randomised controlled trials, with examples 

http://betterevaluation.org/plan/approach/rct 

U.K. Cabinet Office guidance on conducting randomised controlled trials of public policy interventions 

https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/62529/TLA-1906126.pdf 

Laura and John Arnold Foundation guidance on key items to get right when conducting randomised controlled trials of social programs 

http://www.arnoldfoundation.org/wp-content/uploads/Key-Items-to-Get-Right-in-an-RCT.pdf 

Quasi-experimental designs 

Better Evaluation website links to guidance on quasi-experimental methods; scroll down to the “Quasi-experimental options” section 

http://betterevaluation.org/plan/understandcauses/compare_results_to_counterfactual 

U.K. Treasury ‘Magenta Book – Guidance for evaluation’. Pages 111-122 describe some quasi-experimental analysis strategies 

https://www.gov.uk/government/publications/the-magenta-book 

Non-experimental designs 

Better Evaluation website links to guidance on qualitative methods for assessing causation 

http://betterevaluation.org/plan/understandcauses/check_results_match_theory 

Better Evaluation website links to guidance on methods for investigating rival theories of attribution 

http://betterevaluation.org/plan/understandcauses/investigate_alternative_explanations 

Participant selection and staged roll-out 

U.K. Treasury ‘Magenta Book – Guidance for evaluation’. Pages 25 -29 describe programme roll-out strategies that can facilitate better evaluation 

https://www.gov.uk/government/publications/the-magenta-book 

Building monitoring and evaluation into programme administration 

U.K. Treasury ‘Magenta Book – Guidance for evaluation’. Chapter 7 (pages 69-80) describes data collection methods, the relationship between monitoring and evaluation, and implementation of data collection as part of programme activity 

https://www.gov.uk/government/publications/the-magenta-book 


8. Interpret the results



Assigning monetary value

New Zealand Treasury cost benefit analysis tool (CBAx), which includes NZ monetary values for selected impacts


New Zealand Treasury guidance on social cost benefit analysis. ‘Step 4: Value the costs and benefits’ covers methods for assigning monetary values. These methods need technical expertise


The U.K. SROI Network’s guide to Social Return on Investment. Accessible guide to methods of assigning monetary value and calculating cost benefit for social investments


Assigning value in non-monetary terms

Better Evaluation website guidance on performance criteria and setting standards, with links to existing statements of values, methods of articulating and documenting tacit values, and strategies for negotiating between different values


Better Evaluation website guidance on benchmarking and the use of standards


Better Evaluation website guidance on using the programme’s stated goals to assess success


Collating the overall value of the initiative

Better Evaluation website links to guidance on methods of synthesising data from an evaluation


New Zealand Treasury guidance on social cost benefit analysis


Better Evaluation website guidance on cost benefit analysis


Better Evaluation website guidance on cost effectiveness analysis


Better Evaluation website guidance on rubrics


U.K. Department for Communities and Local Government guide to using multi-criteria analysis in government decision-making



Overall guidance on evaluation planning



Better Evaluation website. Comprehensive, up-to-date guidance on best practice evaluation design and methods 


Davidson, E. Jane (2013) Actionable Evaluation Basics - Getting succinct answers to the most important questions. Real Evaluation Ltd. Minibook purchasable here 

https://www.smashwords.com/books/view/243170 

Pact (2014) Field Guide for Evaluation: How to develop an effective terms of reference. Pact is a US non-profit that works in the international development sector. This is a practical accessible guide to planning evaluations. While it’s intended for the international development sector, it is also relevant to planning evaluation in the social sector 

http://betterevaluation.org/sites/default/files/Field%20Guide%20for%20Evaluation_Final.pdf 

New South Wales Department of Premier and Cabinet ‘Evaluation Toolkit’. Guidance on planning and implementing an evaluation project 

http://www.dpc.nsw.gov.au/programs_and_services/policy_makers_toolkit/evaluation_toolkit 

U.K. Treasury ‘Magenta Book – Guidance for evaluation’. Part A is designed for policy makers, while Part B is more technical 

https://www.gov.uk/government/publications/the-magenta-book 

Administration for Children and Families Office of Planning, Research and Evaluation (2010) ‘The Program Manager’s Guide to Evaluation’. Accessibly written guide to evaluation, for programme managers charged with ensuring that an evaluation is done. Presents straightforward strategies and tools to integrate evaluation into programme activities 

http://www.acf.hhs.gov/sites/default/files/opre/program_managers_guide_to_eval2010.pdf 


Choosing methods that are culturally appropriate



Cross-cultural evaluation and cultural responsiveness

New Zealand What Works website. This page has guidance on evaluation with new immigrant and refugee communities, but most of the principles and recommendations also apply more broadly to cross-cultural evaluation in New Zealand


Evaluation with Māori and Māori methods

New Zealand What Works website. This page summarises what it means to do kaupapa Māori evaluation, and has links to further resources


Te Puni Kōkiri’s Effectiveness for Māori Framework, which is intended to support the public sector’s measurement and reporting of results for Māori


Evaluation with Pacific people and Pacific methods

Whānau Ora Research website. This page links to resources on research and engagement with Pacific people



Ethics and standards



Evaluation Standards for Aotearoa New Zealand


New Zealand What Works website. This page has information on, and links to various ethical standards relevant to evaluation in New Zealand



Other New Zealand resources



New Zealand What Works website. This page links to New Zealand online resources that are relevant to evaluation


Superu website. This page has links to resources and organisations that are relevant to research and evaluation in New Zealand


    Last update: 20 Oct 2016