Understanding Monitoring vs Evaluation: When You Need Each
Monitoring tracks ongoing activities; evaluation assesses outcomes at key points. Learn the crucial differences, when to use each approach, and how combining both creates a complete picture of your impact.
“We’re doing monitoring and evaluation.”
A charity manager says this confidently in a funding application. But when you dig deeper, you discover they’re actually just monitoring. Or sometimes they’re attempting evaluation without any monitoring foundation. Or they’re confusing the two entirely, trying to do both but achieving neither.
Here’s the way I see it: most charities use “monitoring and evaluation” as if it’s a single thing, like “fish and chips” or “salt and pepper.” They’re not. They’re two distinct activities that serve different purposes, happen at different times, and require different approaches.
Understanding the difference isn’t just semantic pedantry. It’s practical. When you know which one you need and when, you stop wasting effort on the wrong thing at the wrong time. You build better systems. You answer the right questions. And you actually use the information you collect rather than drowning in data that serves no purpose.
Let me help you understand what each one actually means, why both matter, and when to use which.
The Simple Distinction
Here’s the clearest way I’ve found to explain the difference:
Monitoring asks: Are we doing what we said we’d do?
Evaluation asks: Is what we’re doing actually working?
Monitoring is about tracking delivery. Evaluation is about assessing impact.
Another way to think about it:
Monitoring is your speedometer and fuel gauge. It tells you what’s happening right now as you drive.
Evaluation is pulling over to check if you’re actually heading toward the right destination and whether this route is the best way to get there.
You need both. A car without a speedometer is dangerous. But constantly watching your speedometer won’t help if you’re driving in the wrong direction.
What Monitoring Actually Involves
Monitoring is the systematic, ongoing collection of information about your activities and outputs.
Monitoring tells you:
- Who you’re working with (numbers, demographics, characteristics)
- What activities you’re delivering (type, frequency, location)
- How much you’re delivering (sessions, hours, resources)
- Who’s participating (attendance, engagement, drop-out rates)
- Whether you’re on track to meet targets
- If delivery is going as planned
Common monitoring data includes:
- Attendance registers
- Participant databases
- Activity logs and timetables
- Reach statistics (numbers served, geographical spread)
- Service delivery records
- Referral numbers
- Waiting list lengths
- Staff time logs
- Spend tracked against budget
Monitoring happens continuously
It’s built into your daily operations. Every time someone signs in, every time you log a session, every time you update a case file – that’s monitoring.
A youth worker keeping attendance records is monitoring. A helpline logging call volumes and reasons for contact is monitoring. A food bank tracking how many parcels they distribute and to whom is monitoring.
This isn’t glamorous work, but it’s essential. Without monitoring data, you have no idea what you’re actually delivering or whether you’re reaching the people you intended to reach.
What Evaluation Actually Involves
Evaluation is the periodic, systematic assessment of whether your work is achieving its intended outcomes and impact.
Evaluation tells you:
- Whether participants are experiencing the changes you hoped for
- Why some things work better than others
- Which approaches are most effective
- Whether you’re making the best use of resources
- What you should do differently
- If your theory of change is correct
Common evaluation activities include:
- Before and after surveys measuring change
- Outcome tracking over time
- Participant interviews about their experience
- Case studies showing individual journeys
- Analysis of what worked and what didn’t
- Comparison of different approaches or groups
- Cost-per-outcome calculations (a worked example follows this list)
- Recommendations for improvement
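The cost-per-outcome item above is just arithmetic: total programme cost divided by the number of people who achieved the outcome. Here is a minimal sketch in Python using entirely made-up figures; the same sum works in a single spreadsheet cell.

```python
# Illustrative cost-per-outcome sum; every figure here is hypothetical.
total_programme_cost = 24_000   # full-year delivery cost, in pounds
completers = 60                 # participants who finished the programme
with_outcome = 45               # completers who achieved the intended outcome

cost_per_completer = total_programme_cost / completers   # £400
cost_per_outcome = total_programme_cost / with_outcome   # £533 (rounded)

print(f"Cost per completing participant: £{cost_per_completer:,.0f}")
print(f"Cost per positive outcome: £{cost_per_outcome:,.0f}")
```

The useful part is the comparison over time or between approaches, not the number in isolation.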
Evaluation happens periodically
It’s scheduled at specific points: mid-way through a programme, at completion, six months afterwards, annually. It’s not continuous like monitoring. It’s a deliberate step back to assess and learn.
A youth service surveying confidence levels before and after their programme is evaluating. A debt advice charity interviewing clients three months later to see if their financial situation improved is evaluating. A community garden project analysing whether participants’ wellbeing has increased is evaluating.
Evaluation draws on monitoring data but goes further. It asks not just “what did we do?” but “did it make a difference?”
Why This Distinction Matters in Practice
Let me show you why muddling these two causes real problems:
Problem 1: Claiming monitoring is evaluation
“We monitor and evaluate our work” – then they show you an Excel spreadsheet tracking attendance and activities. That’s monitoring. It tells you what you delivered, not whether it created change.
Why it matters: Funders asking for evaluation evidence won’t be satisfied with activity logs. You’re answering a different question from the one they asked.
Problem 2: Trying to evaluate without adequate monitoring
An organisation runs a programme for a year with no systematic attendance tracking or participant records. Then they try to evaluate impact but can’t even tell you who participated consistently enough to experience outcomes.
Why it matters: You can’t evaluate what you haven’t monitored. Evaluation depends on knowing who you worked with, for how long, and in what ways.
Problem 3: Over-monitoring at the expense of evaluation
A charity collects mountains of monitoring data – demographics, attendance, satisfaction ratings – but never asks whether participants’ lives actually improved. They can tell you exactly how many sessions each person attended but not whether those sessions made any difference.
Why it matters: You’re working hard but learning nothing useful. Monitoring without evaluation is just record-keeping.
Problem 4: Evaluating too early or too often
An organisation evaluates outcomes after just two sessions because “we need to show impact.” But most change takes time. They’re measuring before outcomes could reasonably occur.
Why it matters: Premature evaluation shows nothing and wastes effort. Timing matters.
When You Need Each
Let’s get practical about when to use monitoring versus evaluation:
Use monitoring when you need to:
Check you’re reaching your target group
Example: A service for young carers monitors ages and caring responsibilities of participants to ensure they’re reaching actual young carers, not just young people who attend.
Track delivery against plan
Example: You promised a funder 20 workshops across 4 locations. Monitoring tells you you’ve delivered 15 workshops in 3 locations, so you’re behind schedule.
Spot problems with engagement
Example: Monitoring shows that 60% of people drop out after the first session. Something’s wrong with how the programme starts.
Inform resource planning
Example: Monitoring referral patterns helps you predict demand and plan staff capacity.
Meet reporting requirements
Example: Your grant requires quarterly reports on participant numbers and demographics. That’s monitoring data.
Use evaluation when you need to:
Understand whether you’re creating change
Example: Are participants more confident, more skilled, or better off after your programme? Evaluation answers this.
Decide whether to continue or change an approach
Example: You’ve tried two different formats for your support groups. Evaluation tells you which format leads to better outcomes.
Learn what works and why
Example: Why do some participants thrive while others struggle? Evaluation explores the factors that influence success.
Make the case for funding
Example: A funder wants evidence your model works. Evaluation provides that evidence where monitoring alone cannot.
Meet strategic planning needs
Example: Your trustees are deciding whether to expand. Evaluation evidence about impact and cost-effectiveness informs that decision.
The Relationship Between Monitoring and Evaluation
Here’s the key insight: monitoring and evaluation aren’t separate parallel activities. They’re sequential and interdependent.
Good monitoring enables good evaluation
Monitoring creates the data that evaluation needs. If you don’t know who attended regularly, you can’t evaluate outcomes for sustained participants versus drop-outs. If you haven’t tracked which staff delivered which sessions, you can’t evaluate whether delivery quality varied.
Think of monitoring as the foundation and evaluation as the building. You can’t construct a solid building without a foundation.
Evaluation makes monitoring meaningful
Without evaluation, monitoring is just data collection with no learning. Evaluation gives purpose to monitoring by asking: “Given what we now know about our outcomes, what monitoring data matters most? What should we track differently?”
After evaluation shows that participants who attend at least 8 sessions experience better outcomes, your monitoring suddenly becomes more meaningful. You start paying closer attention to attendance patterns and what affects them.
They work in a cycle
Good practice looks like this:
- Monitor delivery as you go
- Evaluate outcomes periodically
- Learn what the evaluation reveals about effectiveness
- Adjust your approach based on learning
- Monitor the adjusted approach
- Evaluate whether the changes improved outcomes
- Repeat
This is the learning cycle that turns monitoring and evaluation from compliance exercises into genuine improvement tools.
Common Confusions (And How to Think Clearly)
Let me address the most frequent muddles I see:
Confusion 1: “Participant satisfaction is evaluation”
Not quite. Asking “Were you satisfied with our service?” is useful monitoring data about service quality. But it doesn’t tell you if anything changed for participants.
You can be very satisfied with a service that made no lasting difference. Or dissatisfied with a service that nevertheless helped you make important changes.
Satisfaction is valuable monitoring data. Outcome change is evaluation data. Ideally, you track both.
Confusion 2: “We do continuous evaluation”
If it’s continuous, it’s probably monitoring. Evaluation requires stepping back periodically to assess, not constant real-time tracking.
Some organisations call any data collection “evaluation” to make it sound more impressive. Be honest about what you’re actually doing.
Confusion 3: “Monitoring is less important than evaluation”
Both are essential. Monitoring without evaluation means you know what you did but not whether it mattered. Evaluation without monitoring means you can’t explain what specifically led to outcomes or replicate success.
Confusion 4: “We need complex evaluation but simple monitoring”
Usually it’s the opposite. Most organisations need robust, systematic monitoring and relatively simple evaluation. You need to know reliably who you’re working with and what you’re delivering before you can meaningfully assess impact.
Confusion 5: “Our funder requires M&E”
When funders say this, read the actual requirements carefully. Often they want mostly monitoring (numbers reached, activities delivered) plus light evaluation (participant feedback, some outcome data). Don’t over-complicate it.
Building Both Into Your Practice
Here’s how to do monitoring and evaluation well without drowning in data:
For monitoring:
Keep it simple and systematic
Don’t track everything. Track the essentials: who, what, when, how many. Set up simple systems (spreadsheets, basic databases) that staff can maintain as part of normal work.
Make it as automated as possible
Digital sign-in systems, online forms that feed directly into spreadsheets, drop-down menus rather than free text – anything that reduces manual data entry increases sustainability.
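To show what this kind of automation can look like in practice, here is a minimal sketch that assumes your sign-in system can export a CSV file called signins.csv with participant_id and session_date columns; the file name and column names are illustrative, not any real system’s format.

```python
# Minimal sketch: summarising attendance from a sign-in export.
# Assumes "signins.csv" has columns: participant_id, session_date (illustrative).
import csv
from collections import defaultdict

sessions_attended = defaultdict(set)

with open("signins.csv", newline="") as f:
    for row in csv.DictReader(f):
        sessions_attended[row["participant_id"]].add(row["session_date"])

total_participants = len(sessions_attended)
# Counting distinct dates assumes one session per day - a simplification.
total_sessions = len({date for dates in sessions_attended.values() for date in dates})

print(f"Participants reached: {total_participants}")
print(f"Sessions delivered: {total_sessions}")
for person, dates in sorted(sessions_attended.items()):
    print(f"  {person}: attended {len(dates)} session(s)")
```

The point isn’t the code itself: anything that turns raw sign-ins into a reach and attendance summary without re-typing data will keep monitoring sustainable.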
Review it regularly
Monthly or quarterly, look at monitoring data as a team. Are you on track? Any concerning patterns? This makes monitoring feel useful rather than bureaucratic.
Use it for management decisions
If monitoring shows declining attendance, act on it. If it shows you’re not reaching your target group, explore why. Monitoring that informs action stays relevant.
For evaluation:
Time it appropriately
Don’t evaluate before change could reasonably happen. A two-session workshop might increase knowledge but won’t change long-term behaviour. Match evaluation timing to when outcomes should emerge.
Keep methods proportionate
Most organisations need simple before-and-after surveys, exit interviews, and perhaps case studies. You don’t need randomised control trials or complex statistical analysis unless you’re making specific causal claims that require that level of rigour.
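As an illustration of how simple before-and-after analysis can be, here is a minimal sketch comparing a single 1–5 confidence rating collected at the start and end of a programme; the participant IDs and scores are hypothetical and would normally come from your survey export.

```python
# Minimal sketch: summarising a before-and-after confidence question (1-5 scale).
# All participant IDs and scores below are hypothetical.
before = {"P01": 2, "P02": 3, "P03": 1, "P04": 4, "P05": 2}
after  = {"P01": 4, "P02": 3, "P03": 3, "P04": 5, "P05": 4}

# Only compare people who answered both times.
changes = [after[p] - before[p] for p in before if p in after]
improved = sum(1 for change in changes if change > 0)

print(f"Matched before/after responses: {len(changes)}")
print(f"Average change in score: {sum(changes) / len(changes):+.1f}")
print(f"Participants who improved: {improved} of {len(changes)}")
```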
Actually use the findings
Evaluation that doesn’t lead to any changes is wasted effort. Build time into your evaluation process for reflection workshops where staff and trustees discuss implications and agree actions.
Report accessibly
Nobody reads 50-page evaluation reports. Create one-page summaries with key findings and recommendations. Share these widely and discuss them in team meetings.
A Practical Example
Let me show you how this works for a real organisation – a community cooking programme:
Their monitoring system:
- Sign-in sheet at each session (tracking attendance)
- Basic database recording participant demographics and attendance patterns
- Staff log noting any significant observations or concerns
- Monthly summary counting sessions delivered, participants engaged, completion rates
This monitoring tells them who’s coming, how often, and whether delivery is happening as planned.
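A sketch of how their monthly summary figures might be produced from the attendance database; the attendance numbers and the five-session completion threshold are assumptions for illustration only.

```python
# Minimal sketch: a monthly monitoring summary for a 6-week programme.
# Attendance figures and the completion threshold are hypothetical.
sessions_in_programme = 6
attendance = {"A": 6, "B": 5, "C": 2, "D": 6, "E": 4, "F": 1}  # participant -> sessions attended

# Treat 5 or more of the 6 sessions as "completed" (an illustrative threshold).
completers = [p for p, attended in attendance.items() if attended >= 5]
completion_rate = len(completers) / len(attendance)

print(f"Participants engaged: {len(attendance)}")
print(f"Completed the programme: {len(completers)}")
print(f"Completion rate: {completion_rate:.0%}")
```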
Their evaluation approach:
- Brief skills survey at start and end of 6-week programme (Can you cook a healthy meal from scratch? Do you feel confident adapting recipes? etc.)
- Three months after the programme ends, a follow-up phone call asking: Are you cooking more? Do you feel more confident? What recipes do you still use?
- Annual review analysing completion rates, outcome data, and participant feedback to identify what works
This evaluation tells them whether participants gained skills, whether those skills lasted, and what programme elements most contributed to change.
Together, monitoring and evaluation give them a complete picture: what they delivered, to whom, and what difference it made.
When Resources Are Limited
Most charities can’t do comprehensive monitoring and evaluation. So what do you prioritise?
If you can only do one thing, start with basic monitoring
Know who you’re working with and what you’re delivering. This gives you the foundation for future evaluation and meets most funders’ basic requirements.
If you can do two things, add simple evaluation
Keep monitoring basic but add one or two outcome questions you ask before and after your intervention. Even simple evaluation is better than none.
If you can do three things, make evaluation periodic but thorough
Don’t try to evaluate continuously. Instead, do proper evaluation once or twice a year where you really dig into outcomes, learn from findings, and adjust practice.
If you can do four things, review your systems annually
Once a year, look at what monitoring data you collect and whether it’s useful. Look at whether your evaluation is answering the right questions. Refine both based on what you’ve learned.
You don’t need perfect systems. You need good-enough systems that you actually use.
Red Flags You’re Doing It Wrong
Watch for these warning signs:
Red flag 1: You collect lots of data but never look at it
Symptom: Spreadsheets full of monitoring data that nobody analyses or uses
Red flag 2: You can describe your activities in detail but can’t say if they work
Symptom: Comprehensive monitoring but no evaluation of outcomes
Red flag 3: You claim evaluation but it’s just satisfaction surveys
Symptom: Confusing service quality feedback with outcome measurement
Red flag 4: Your “evaluation” happens while the programme is still running
Symptom: Evaluating before outcomes could reasonably occur
Red flag 5: You spend more time on data collection than on using what you learn
Symptom: Multiple complex systems that produce reports nobody reads or acts on
If you spot these patterns, simplify and refocus. Better to do basic monitoring and simple evaluation well than complex systems badly.
Getting Started
If you’re building monitoring and evaluation from scratch, here’s a sensible sequence:
Month 1: Set up basic monitoring
Create simple tools to track who you work with, what you deliver, and how many people engage. Spreadsheet, paper forms, whatever works. Just start systematically recording the basics.
Months 2-3: Establish monitoring rhythm
Make data entry routine. Review monitoring data monthly as a team. Troubleshoot any systems that aren’t working.
Month 4: Plan your evaluation
Now that you know who you’re working with and what you’re delivering (from monitoring), you can plan your evaluation. What outcomes matter most? When should you measure them? How?
Months 5-6: Pilot simple evaluation
Start with one outcome, one simple method. Perhaps a before-and-after question, or exit interviews. Test whether it’s feasible and useful.
Months 7-12: Refine both
Adjust monitoring and evaluation based on what you’ve learned. Drop things that aren’t useful. Add things that are missing.
By month 12, you’ll have functioning monitoring and evaluation that’s proportionate to your capacity and actually informs your work.
Reflection Questions
Before you move on, take a moment to consider:
Review your current practice. Are you genuinely doing both monitoring and evaluation, or mostly one? Which needs strengthening?
Think about a question your organisation needs to answer. Would monitoring data or evaluation data answer it? How does that clarify what you actually need?
About This Series
This guide is part of a learning series on Measuring Social Impact for Charities and Social Enterprises. We’re here to make evaluation practical, accessible, and useful – not overwhelming.
Want to go deeper? Social Value Lab supports organisations to develop proportionate, practical approaches to measuring and communicating impact. We believe every organisation deserves to understand and communicate their value, regardless of size or budget.
Was this helpful? Share it with a colleague who needs to hear that evaluation doesn’t have to be overwhelming.