
Developing a Monitoring & Evaluation Plan

A Monitoring and Evaluation (M&E) plan is a guide to what you should evaluate, what information you need, and who you are evaluating for.

The plan outlines the key evaluation questions and the detailed monitoring questions that help answer them. This allows you to identify the information you need to collect and how you will collect it. Depending on the level of detail in the M&E plan, you can also identify the people responsible for different tasks, as well as timelines. The plan should be able to be picked up by anyone involved in the project at any time, and make clear what is happening in terms of monitoring and evaluation.

It is also important to remember that there are many types of evaluation.

An M&E plan should ideally be developed at the planning stage of a project, before you commence implementation. This allows you to plan ahead for any data collection activities you may need to undertake, such as pre-intervention surveys. However, it is never too late to develop an M&E plan. Retro-fitting an M&E plan to an existing project may simply mean that you are constrained in some of the data you can collect.

 

How to Develop a Monitoring and Evaluation Plan

A short tutorial on how to develop an M&E plan - opens in PowerPoint

How to develop a monitoring and evaluation plan presentation

 

 

Step 1. Identify your evaluation audience

Identify who the evaluation audience or stakeholders are. The evaluation audience includes the people or organisations that require an evaluation to be conducted. There may be multiple audiences, each with their own requirements. Typically, this includes the funding agency, and may also include partner organisations, the Council (or Councillors), the project team, and the project’s participants or target group. Remember that evaluation is generally undertaken for accountability or learning, and preferably both together.

If you have limited funds for evaluation, you may have to prioritise your evaluation by identifying who are the most important people to report to.

Download M&E Audience and Evaluation Questions Template

Step 2. Define the evaluation questions

Evaluation questions should be developed up-front, in collaboration with the primary audience(s) and other stakeholders you intend to report to. Evaluation questions go beyond measurements to ask higher-order questions, such as whether the intervention was worth it, or whether it could have been achieved in another way (see examples below). Overall, evaluation questions should lead to further action such as project improvement, project mainstreaming, or project redesign.

You should also identify at this stage whether the evaluation audience has specific timelines by which it requires an evaluation report. This will be a major factor in deciding what you can and cannot collect.

When framing your outcome-focussed evaluation questions, keep them open-ended, e.g. "To what extent did..."

 

Broad types of evaluation questions by focus area

Adapted from: Davidson & Wehipeihana (2010)

Focus of evaluation / Evaluation questions

Process
  • How well was the project designed and implemented (i.e. its quality)?

Outcome
  • To what extent did the project meet the overall needs?
  • Was there any significant change, and to what extent was it attributable to the project?
  • How valuable are the outcomes to the organisation, other stakeholders, and participants?

Learnings
  • What worked and what did not?
  • What were the unintended consequences?
  • What were the emergent properties?

Investment
  • Was the project cost effective?
  • Was there an alternative that may have represented a better investment?

What next
  • Can the project be scaled up?
  • Can the project be replicated elsewhere?
  • Is the change self-sustaining, or does it require continued intervention?

Theory of change
  • Does the project have a theory of change?
  • Is the theory of change reflected in the program logic?
  • How can the program logic inform the research questions?

 


Another way of classifying broad evaluation questions is presented below.

Adapted from: Europe Aid Cooperation Office

Focus of evaluation / Evaluation questions

Relevance
  • Do the workshop topic and contents meet the information needs of the target group?
  • To what extent is the intervention goal in line with the needs and priorities of the community?

Efficiency
  • Did the engagement method used in this project lead to similar numbers of participants as previous or other programs, at a comparable or lower cost?
  • Have the more expensive engagement approaches led to better results than the less expensive approaches?

Effectiveness
  • To what extent did the workshops lead to increased community support for action to tackle climate change?
  • To what extent did the engagement method encourage the target group to take part in the project?

Outcome
  • To what extent has the project led to more sustainable behaviours in the target group?
  • Were there any other unintended positive or negative outcomes from the project?

Sustainability
  • To what extent has the project led to long-term behaviour change?

 

 

Step 3. Identify the monitoring questions

In order to answer the evaluation questions, you need to develop monitoring questions that determine what data will be collected through the monitoring process. Monitoring questions are much more specific than evaluation questions. For example, for an evaluation question of "What worked and what did not?" you may have several specific questions such as "Did the workshops lead to increased knowledge on energy efficiency in the home?" or "Did participants install water-efficient showerheads?"

The monitoring questions will ideally be answered through the collection of quantitative and qualitative data. It is important not to leap straight into data collection without first thinking about the evaluation questions: jumping straight in may lead to collecting data that provides no useful information, which is a waste of time and money.
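If it helps to see this relationship laid out, the short Python sketch below represents one evaluation question and the monitoring questions that inform it as a simple data structure. The wording is drawn from the example above and is illustrative only, not prescribed content.

    # A minimal sketch of how one evaluation question breaks down into more
    # specific monitoring questions. The wording is illustrative only.
    evaluation_question = {
        "question": "What worked and what did not?",
        "monitoring_questions": [
            "Did the workshops lead to increased knowledge on energy efficiency in the home?",
            "Did participants install water-efficient showerheads?",
        ],
    }

    question = evaluation_question["question"]
    for mq in evaluation_question["monitoring_questions"]:
        print(f"- {mq} (informs: {question})")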

Download the M&E Plan Template to input the information, or download a partially completed M&E plan template that includes sample evaluation questions, monitoring questions, indicators and sources for a kerbside recycling education project.

M&E Plan

If you have developed a program logic, you can use this to start identifying relevant monitoring questions and indicators. Click here to see how.

Once you have identified monitoring questions in your program logic, you can transfer them into your M&E Plan Template. Click here to see how.

 

A short tutorial on how to extract information from your program logic to develop your M&E plan - opens in PowerPoint

From program logic to M&E plan presentation

Step 4. Identify the indicators and data sources

The next step is to identify what information you need to answer your monitoring questions (indicators) and where this information will come from (data sources). It is important to consider data collection in terms of the type of data and the type of research design. Data sources could be the participants themselves, people’s homes (e.g. an audit of lighting types), metering, or even the literature. You can then decide on the most appropriate method to collect the data from each data source.
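As a rough illustration only, the Python sketch below treats each line of an M&E plan as a small record that links a monitoring question to an indicator and a data source. The example indicators and sources are assumptions made for the sketch, not part of any template.

    # A sketch of one row of an M&E plan: a monitoring question paired with the
    # indicator that answers it and the data source that will supply it.
    # Field names and example values are illustrative placeholders.
    from dataclasses import dataclass

    @dataclass
    class PlanRow:
        monitoring_question: str
        indicator: str    # the information needed to answer the question
        data_source: str  # where that information will come from

    rows = [
        PlanRow(
            monitoring_question="Did participants install water-efficient showerheads?",
            indicator="Number of participating households reporting an installation",
            data_source="Post-intervention participant survey",
        ),
        PlanRow(
            monitoring_question="Did the workshops increase knowledge of home energy efficiency?",
            indicator="Change in quiz scores before and after the workshop",
            data_source="Pre- and post-workshop questionnaires",
        ),
    ]

    for row in rows:
        print(f"{row.monitoring_question} -> {row.indicator} ({row.data_source})")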

The evaluation tool selector is there to assist you in selecting an appropriate method for your needs.

 

It is important to ensure that your evaluation framework is culturally appropriate. When working with Culturally and Linguistically Diverse (CALD) communities, consider involving community representatives or cultural experts in developing the evaluation framework, to ensure that data collection methods are appropriate to the target group (sensitive and readily understood). For further reading, look at the Genuine Evaluation blog.

Step 5. Identify who is responsible for data collection and timelines

It is advisable to assign responsibility for data collection so that everyone is clear about their roles and responsibilities. This also allows new staff coming onto the project to get a sense of who is responsible for what, and what they may have to take on and when.

Collection of monitoring data may occur regularly over short intervals, or less regularly, such as half-yearly or annually. Again, assigning timelines limits the excuse of ‘not knowing’.

You may also want to note any requirements that are needed to collect the data (staff, budget etc). It is advisable to have some idea of the cost associated with monitoring, as you may have great ideas to collect a lot of information, only to find out that you cannot afford it all. In such a case, you will have to either prioritise or find some money elsewhere (sorry but we have no special tool for that).
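If you keep this information in a simple list or spreadsheet, a sketch like the Python one below can record who is responsible for each data collection task, when it happens and roughly what it costs, and flag anything left unassigned. The names, timeframes and dollar figures are placeholders.

    # A sketch of attaching responsibility, timing and approximate cost to each
    # data collection task, then flagging any task without an owner.
    collection_tasks = [
        {"data_source": "Post-intervention participant survey",
         "responsible": "Project officer", "when": "Month 6", "cost": 500},
        {"data_source": "Pre- and post-workshop questionnaires",
         "responsible": None, "when": "At each workshop", "cost": 0},
    ]

    total_cost = sum(task["cost"] for task in collection_tasks)
    unassigned = [t["data_source"] for t in collection_tasks if not t["responsible"]]

    print(f"Estimated monitoring cost: ${total_cost}")
    if unassigned:
        print("No one is yet responsible for:", ", ".join(unassigned))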

Step 6. Identify who will evaluate the data, how it will be reported, and when

This step is optional but highly recommended, as it will round off the M&E plan as a complete document. Remembering that evaluation is the subjective assessment of a project’s worth, it is important to identify who will be making this ‘subjective assessment’. In most cases, it will be the project team, but in some cases, you may involve other stakeholders including the target group or participants. You may also consider outsourcing a particular part of the evaluation to an external or independent party.

For an evaluation to be used (and therefore useful), it is important to present the findings in a format that is appropriate to the audience. This may mean a short report, a memo, or even a poster or newsletter. It is therefore recommended that you consider from the start how you will present your evaluation, so that you can plan how your findings will be shown (such as graphs, tables, text or images).

Step 7. Review the M&E plan

Once you have completed your M&E plan, highlight data sources that appear frequently. For example, you may be able to develop surveys that fulfil the data collection requirements for many questions.
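If your plan lives in a spreadsheet or similar, a quick grouping pass such as the Python sketch below can surface the data sources that appear most often, which is one way to spot where a single survey or audit can serve several questions. The plan rows shown are invented for illustration.

    # A sketch of reviewing a drafted plan by grouping monitoring questions by
    # data source. The plan rows are illustrative only.
    from collections import defaultdict

    plan = [
        {"monitoring_question": "Did participants install water-efficient showerheads?",
         "data_source": "Participant survey"},
        {"monitoring_question": "Did participants attend more than one workshop?",
         "data_source": "Participant survey"},
        {"monitoring_question": "What types of lighting are installed in participants' homes?",
         "data_source": "Home audit"},
    ]

    by_source = defaultdict(list)
    for row in plan:
        by_source[row["data_source"]].append(row["monitoring_question"])

    for source, questions in by_source.items():
        print(f"{source}: {len(questions)} question(s)")
        for q in questions:
            print("  -", q)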

Also consider re-ordering the M&E plan in several ways, for example, by data source, or by data collection timeframe. Finally, go through this checklist.  Does your M&E plan:

  • Focus on the key evaluation questions and the evaluation audience?
  • Capture all that you need to know in order to make a meaningful evaluation of the project?
  • Only ask relevant monitoring questions and avoid the collection of unnecessary data?
  • Identify how data will be analysed, used and reported?
  • Work within your budget and other resources?
  • Identify the skills required to conduct the data collection and analysis?

 

Links & Further Resources

Department of Planning and Community Development (VIC) Evaluation Step-by-Step Guide

 

 