
Collecting Data

There are two broad categories of social research methods and data that can be collected: quantitative and qualitative.


Quantitative methods deal with numerical data (e.g. the number of people recycling, or the number of energy-efficient lights installed).

Quantitative methods can reach a large number of people, and generally involve a short interaction. The popularity of collecting quantitative data reflects the old adage that “you can’t manage what you can’t measure”.


Qualitative methods deal with words or communication (whether text, voice, or visual). Qualitative research seeks, amongst other things, to find out what people are doing and why, what stops them from changing, the meaning people construct for their actions, and how they see their role and actions in the wider scheme of things.

Qualitative methods generally involve a longer personal interaction, and reach fewer people.

Qualitative evaluation is founded on the belief that meaningful evaluation requires an understanding of the context in which change occurs. As such, qualitative evaluation trades the quantity of respondents (e.g. information gathered from questionnaires or other types of survey) for fewer respondents who provide more in-depth, higher-quality information. This includes gaining an understanding of how people make sense of their lives and experiences, including the particular intervention that is the focus of the research, how people have coped with the change, and what has occurred as a result of their involvement.

In qualitative evaluation, the researcher or evaluator is central to the data gathering process, rather than the questionnaire or other instrument. The evaluator is involved in developing a relationship with the respondent, asking questions, eliciting responses, probing for more information, and making observations. The information gathered in qualitative evaluation is descriptive, focussing on change and processes, and their meaning.

Why undertake qualitative research?

The United Nations Development Programme (UNDP) Guidebook on Participation notes that it is important to move beyond traditional evaluation approaches (e.g. measuring change in resource use) in order to evaluate the process of change. The benefit of qualitative evaluation is that it takes evaluation ‘beyond the numbers game’ (UNDP Guide, p3), and provides the story behind any numbers that are collected. This is particularly relevant for behaviour change projects, as these interventions are about people participating in a change process. Traditional quantitative approaches are considered inadequate for understanding the outcomes and effects of participatory development projects; evaluating them entails moving from a focus on measurement (quantitative) to describing the process of change and the change that has taken place (qualitative). The key principles proposed by the UNDP Guidebook are outlined below.

Key principles in monitoring and evaluating participation

Qualitative as well as quantitative: Both dimensions of participation must be included in the evaluation in order for the outcome to be fully understood.
Dynamic as opposed to static: The evaluation of participation demands that the entire process over a period of time be evaluated, and not merely a snapshot. Conventional ex post facto evaluation, therefore, will not be adequate.
Central importance of monitoring
Participatory evaluation: In the entire evaluation process, the people involved in the project have a part to play; the people themselves will also have a voice.

Coding and entering qualitative data

Enter the responses into the data collection template. Responses can be entered based on the question number or question identifier. It is best to enter one theme per line to help coding, especially if respondents had both positive and negative responses to a question. In order to analyse the data you have collected, you will need to code the responses based on themes. You may have broad themes (e.g. workshops), or sub-categories within themes (e.g. workshop speaker, workshop handouts). You may also want to code responses according to whether they were positive or negative with regard to the project.
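The theme coding described above can be sketched in a few lines of Python. The entries, theme names, and question identifiers below are hypothetical, and the data collection template itself is not modelled:

```python
# A minimal sketch of coding survey responses: one theme per entry, with a
# positive/negative flag (all data below is hypothetical).
from collections import Counter

# Each coded entry: (question identifier, theme, sentiment, verbatim response)
coded_responses = [
    ("Q1", "workshop speaker", "positive", "The speaker was engaging"),
    ("Q1", "workshop handouts", "negative", "Handouts were hard to follow"),
    ("Q2", "workshop speaker", "positive", "Clear explanations throughout"),
]

def tally(entries):
    """Count how often each (theme, sentiment) pair occurs."""
    return Counter((theme, sentiment) for _, theme, sentiment, _ in entries)

counts = tally(coded_responses)
for (theme, sentiment), n in sorted(counts.items()):
    print(f"{theme} ({sentiment}): {n}")
```

Keeping one theme per entry, rather than one free-text answer per row, is what makes this kind of tallying possible later.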

It is good practice, where possible, to use both quantitative and qualitative methods in designing your evaluation.

Both quantitative and qualitative data have strengths and weaknesses, and the use of one type of data does not exclude the other. In fact, combining both provides a measurement of change as well as context for that change. For example, conducting quantitative survey research using questionnaires can inform further enquiry into particular areas of interest using qualitative methods.

Points to consider:

  • Measuring social phenomena is not always easy. It may be possible to count how many people recycle, or take short showers, but it is not easy to find out why they do so.
  • It may be possible to establish relationships between certain variables (eg. demographics) and behaviours, but relationships explained by aggregates do not necessarily relate to why specific individuals undertake particular behaviours.

There are different survey designs you may want to consider in collecting data for your evaluation. The three main designs are:

Summative or post-test designs: data collection is undertaken after the intervention (e.g. a post-workshop or post-project questionnaire). This is the simplest design for evaluation.

Pre-test, post-test designs: measurements are undertaken before and after an intervention.

Quasi-experimental designs: multiple measurements are taken over time, and control groups are introduced. Quasi-experimental designs are used when you want to know with some confidence whether a project or particular intervention has caused a change.
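As a rough illustration, a pre-test/post-test comparison reduces to pairing each respondent's before and after scores. The respondent IDs and scores below are hypothetical:

```python
# A minimal sketch (hypothetical data) of a pre-test/post-test comparison:
# the same respondents are measured before and after the intervention.
pre  = {"r1": 2, "r2": 3, "r3": 2, "r4": 4}   # e.g. knowledge score before
post = {"r1": 4, "r2": 4, "r3": 3, "r4": 5}   # score after the workshop

# Per-respondent change, then the average change across respondents.
changes = {rid: post[rid] - pre[rid] for rid in pre}
mean_change = sum(changes.values()) / len(changes)
print(changes, f"mean change = {mean_change:.2f}")
```

Matching on a respondent identifier is what lets you compute change per person rather than only comparing group averages, which is why the same respondents must take part in both rounds.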

It is important to note that control groups need to have similar demographic characteristics to the participant groups. This is very hard to achieve when interventions involve the self-selection of participants.

If you want to know whether the intervention was statistically significant in determining changes in behaviour, you need to add a control group.
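One way (among several) to test this is a two-proportion z-test comparing the intervention group against the control group. The sketch below uses only Python's standard library, and the participation figures are hypothetical:

```python
# A hedged sketch (not a prescribed method) of testing whether an intervention
# group changed behaviour more than a control group, via a two-proportion z-test.
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Return (z statistic, two-sided p-value) comparing two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (built from math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical figures: 60/100 recycled after the intervention vs 45/100 controls.
z, p = two_proportion_z_test(60, 100, 45, 100)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 suggests a significant difference
```

This assumes reasonably large samples; with small groups or self-selected participants, the caveats above about control-group comparability apply before any statistics do.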

The level of research design and statistical analyses should be guided by the amount of time and resources, and the level of the skills you have or are able to obtain. For many community engagement programs, a simple research design using descriptive statistics may be sufficient.

Some definitions to consider

Survey: Surveys are the most common quantitative research method, and involve the systematic measurement of variables within a sample. The data is statistically analysed to see if there are any distinct relationships or patterns.
Questionnaire: The questionnaire is the most common tool to gather survey information. A questionnaire can be delivered in various ways: paper-based, online, face to face, or over the telephone.

Statistical analysis provides a means to analyse quantitative data. Statistics seeks to describe patterns and trends in the data and present information in a form that allows a person to make a judgement (evaluation) without the need to interpret all the raw data. This can be done through description, explanation or prediction.

Descriptive statistics are the simplest form and consist primarily of describing the data (e.g. how many respondents undertook the desired behaviour, what the range of answers was, and descriptions of the central tendency and spread of the data). Explanation and prediction are generally more complicated, require good data sets, and need higher-order computation and interpretation (e.g. tests of significance or correlation coefficients).
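As a minimal sketch, Python's standard-library statistics module covers the descriptive measures mentioned above. The scores below are hypothetical:

```python
# Descriptive statistics for a set of survey scores, using only the
# standard library (the numbers below are hypothetical).
import statistics

scores = [3, 4, 4, 5, 2, 4, 3, 5, 4, 3]  # e.g. self-rated knowledge, 1-5 scale

summary = {
    "n": len(scores),
    "mean": statistics.mean(scores),              # central tendency
    "median": statistics.median(scores),          # central tendency, robust to outliers
    "range": (min(scores), max(scores)),          # spread
    "stdev": round(statistics.stdev(scores), 2),  # spread (sample standard deviation)
}
print(summary)
```

For many community engagement programs, a summary like this may be all the statistical analysis the evaluation needs.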

Post-test design: Post-test designs involve a one-off survey and are useful where the evaluation question is descriptive (e.g. was there increased knowledge following the workshop?). Post-test designs are very popular, but you cannot infer causal relationships from them (e.g. were workshops significant in increasing knowledge?).
Pre-test, post-test design: Pre-test, post-test designs measure change resulting from interventions by comparing pre-intervention information to post-intervention information. It is important to get the same respondents to take part in both pre- and post-tests in order to draw valid conclusions.
Quasi-experimental design: A design in which multiple measurements are taken over time, and control groups are introduced. Quasi-experimental designs are used when you want to know with some confidence whether a project or particular intervention has caused a change.
Control group: A control group has similar demographics to the target group but does not experience the intervention. This allows project staff to determine whether any change in the target group can be attributed to the intervention or to other variables.
Population: The population is the complete set of research subjects relevant to the question or project under review.
Sample: A sample is a subset of the population. The aim is to get data from a representative sample in order to make generalisations about the population. The benefit of sampling is that it is cheaper and less time-consuming than surveying the whole population.
Probability sampling: Every member of the population has a known chance of being selected. There are different types of probability sampling, such as simple random sampling or stratified random sampling, amongst others.
Non-probability sampling: Individuals from the population are targeted based on some criteria. This includes accidental or convenience sampling and snowball sampling. You cannot infer cause-effect relationships from non-probability sampling.
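The two probability sampling types mentioned above, simple random and stratified random sampling, can be sketched with Python's standard library. The household list and suburb names are hypothetical:

```python
# A sketch of simple random and stratified random sampling from a household
# list (all data below is hypothetical).
import random

random.seed(42)  # fixed seed so the sketch is reproducible

population = [{"id": i, "suburb": "North" if i < 60 else "South"}
              for i in range(100)]

# Simple random sampling: every household has an equal chance of selection.
simple_sample = random.sample(population, 10)

def stratified_sample(pop, key, total):
    """Sample each stratum in proportion to its share of the population."""
    strata = {}
    for member in pop:
        strata.setdefault(key(member), []).append(member)
    sample = []
    for members in strata.values():
        k = round(total * len(members) / len(pop))
        sample.extend(random.sample(members, k))
    return sample

# Stratified random sampling by suburb: 60% North, 40% South -> 6 + 4 households.
strat = stratified_sample(population, lambda p: p["suburb"], 10)
print(len(simple_sample), len(strat))
```

Stratifying guarantees each subgroup appears in its population proportion, which simple random sampling only achieves on average.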


Lastly, some parting thoughts on the use of quantitative data and statistics:


Torture numbers, and they'll confess to anything (Gregg Easterbrook)

98% of all statistics are made up (Author Unknown)

Statistics are like bikinis.  What they reveal is suggestive, but what they conceal is vital (Aaron Levenstein).

Source: Quotegarden

