What Good Evaluation Actually Looks Like (And What It Doesn’t)

Forget academic rigour and expensive methods. Good evaluation asks honest questions and finds credible answers to help you improve. Learn what genuinely matters versus common myths that keep charities stuck.

Picture this: You’re at a sector networking event, and someone mentions they’ve “just completed a really robust evaluation.” Your stomach sinks. Are they talking about randomised controlled trials? Statistical significance? A 100-page report with regression analysis?

You look at your own work – a collection of feedback forms, some powerful testimonials, and a spreadsheet tracking who’s completed your programme – and wonder: Is what we’re doing even evaluation? Or are we just fooling ourselves?

Let me put your mind at rest. What you’re doing probably is evaluation. The myth that good evaluation requires academic rigour, enormous budgets, and statistical expertise has done more damage to the charity sector than almost anything else. It’s kept thousands of organisations from measuring impact because they think they can’t do it “properly.”

Here’s the truth: good evaluation is about asking honest questions and finding credible answers that help you learn and improve. That’s it. Everything else is detail.

The Myths That Keep Us Stuck

Before we talk about what good evaluation looks like, let’s clear away the misconceptions that stop organisations from starting.

Myth 1: “Good evaluation means academic-level research”

Not true. Academic research asks theoretical questions to advance knowledge in a field. Evaluation asks practical questions to improve specific programmes and inform decisions. They’re different activities with different purposes.

A women’s refuge doesn’t need to prove causation between their service and reduced domestic violence rates across the population. They need to know: Are the women we support feeling safer? Are they accessing the resources they need? What could we do better?

Myth 2: “If we can’t prove attribution, we shouldn’t claim impact”

This one’s particularly toxic. Almost no social intervention can definitively prove it caused an outcome – there are too many other factors at play. A young person who gains confidence after your mentoring programme is also influenced by their family, school, friends, and a dozen other things happening in their life.

But that doesn’t mean your programme didn’t matter. Good evaluation focuses on contribution, not attribution. Did you play a meaningful role? Can you demonstrate plausible connections between your activities and observed changes? That’s what matters.

Myth 3: “Numbers are more credible than stories”

Different types of evidence answer different questions. “78% of participants reported increased confidence” is useful. But it doesn’t tell you what increased confidence means to someone who was previously terrified to leave their house, or how your programme helped that shift happen.

Good evaluation uses both numbers and narratives. Quantitative data shows patterns and scale. Qualitative data explains meaning and mechanism. You need both.

Myth 4: “We need big sample sizes to say anything meaningful”

Not necessarily. If you work with 15 young care leavers per year, detailed case studies showing individual journeys can be more compelling than statistics drawn from a sample far too small to reach significance anyway.

Small numbers demand honesty about what you can and can’t claim, but they don’t make evaluation impossible or worthless.

Myth 5: “Good evaluation is expensive”

Good evaluation is proportionate. For many small charities, that means simple methods like before-and-after questions, exit interviews, or feedback forms analysed over a cup of tea. These can cost almost nothing beyond staff time – and they’re perfectly valid if they help you learn.

Expensive evaluation exists, but it’s not automatically better. It depends entirely on your questions and context.

What Good Evaluation Actually Looks Like

Now we’ve cleared away the myths, here’s what characterises genuinely good evaluation – regardless of size, budget, or methods used.

1. It starts with clear questions

Good evaluation knows what it’s trying to find out. Not woolly questions like “Are we making a difference?” but specific ones:

  • Are participants more confident managing money after our six-week course?
  • Do families feel better supported three months after engaging with our service?
  • Are we reaching people from the communities we’re trying to serve?

Clear questions lead to focused data collection. Fuzzy questions lead to random data that nobody knows what to do with.

2. It measures things that actually matter

This sounds obvious, but it’s remarkable how often organisations measure what’s easy rather than what’s important.

Counting how many workshops you delivered (outputs) is easy. Understanding whether those workshops changed anything for participants (outcomes) is harder but infinitely more valuable.

Good evaluation focuses on outcomes – the changes, benefits, and learning experienced by the people you work with. Not exclusively, but primarily.

3. It uses methods appropriate to the questions

Good evaluation matches tools to purpose:

  • Want to know if confidence has increased? Use a simple self-assessment scale before and after.
  • Want to understand how people experienced your service? Use interviews or focus groups.
  • Want to know if people are still engaged six months later? Track contact data.

There’s no “best” method. Only appropriate or inappropriate for what you’re trying to learn.

4. It’s honest about limitations

Good evaluation doesn’t pretend to be perfect. It acknowledges when:

  • Sample sizes are small
  • Response rates are low
  • Baseline data is missing
  • Alternative explanations exist
  • Resources limited what was possible

This transparency doesn’t undermine credibility – it enhances it. Funders and trustees can smell false certainty a mile away. Honest evaluation builds trust.

5. It actually gets used

This is the most important characteristic. Evaluation that sits in a drawer is worthless, no matter how methodologically sophisticated.

Good evaluation:

  • Generates insights that surprise or challenge you
  • Leads to actual changes in how you work
  • Informs decisions about where to focus resources
  • Strengthens funding applications and reports
  • Helps staff understand their impact

If your evaluation isn’t changing anything, it’s not good evaluation – it’s just paperwork.

What Good Evaluation Definitely Isn’t

It’s not about volume

Having 50 pages of data doesn’t mean you’re learning 50 times more than someone with one page. Often, the opposite. Less data, better analysed, is almost always more useful than mountains of numbers nobody has time to make sense of.

It’s not about complexity

Using sophisticated statistical techniques doesn’t automatically make evaluation better. I’ve seen beautifully simple evaluations using basic before-and-after questions that led to genuine programme improvements. And I’ve seen complex evaluations that nobody understood, including the people who commissioned them.

Complexity should match your questions and capacity, not be pursued for its own sake.

It’s not about proving you’re brilliant

Good evaluation isn’t a PR exercise. It’s a learning exercise. The evaluations that lead to genuine improvement are the ones honest enough to reveal what’s not working, not just celebrate successes.

If your evaluation only ever shows positive results, you’re either running a perfect organisation (unlikely) or you’re not being honest with yourself (more likely).

It’s not separate from your work

Bolt-on evaluation that happens occasionally and feels disconnected from normal practice rarely lasts. Good evaluation is woven into how you operate – built into feedback conversations that already happen, using data you’re already collecting, informing decisions you’re already making.

Some Examples of Good Evaluation

Example 1: The Community Café

A small community café wanted to understand their impact on loneliness. They couldn’t afford external evaluators or complex surveys.

Their solution: A simple question asked at the counter: “How often do you see or speak to someone outside your household in a typical week?” They asked this after someone’s first visit and again after they’d been coming for three months.

The before-and-after comparison showed that regular café users were connecting with others more frequently. They also kept a “guest book” where people could share reflections – this provided rich qualitative context.

Cost: Zero. Value: Clear evidence of impact that strengthened funding applications.
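For anyone who wants to see just how little analysis a before-and-after comparison like this needs, here is a minimal sketch in Python. The data is entirely hypothetical – pairs of weekly contact counts, one recorded at a person’s first visit and one after three months:

```python
# Hypothetical weekly social-contact counts, one pair per café regular:
# recorded at their first visit and again after three months.
first_visit = [1, 0, 2, 1, 3, 0, 2, 1]
three_months = [3, 2, 4, 2, 5, 1, 4, 3]

# Average contacts before and after, and how many people improved.
avg_before = sum(first_visit) / len(first_visit)
avg_after = sum(three_months) / len(three_months)
improved = sum(1 for b, a in zip(first_visit, three_months) if a > b)

print(f"Average weekly contacts: {avg_before:.1f} -> {avg_after:.1f}")
print(f"{improved} of {len(first_visit)} people reported more contact")
```

A spreadsheet does the same job just as well; the point is that the comparison itself is two averages and a count, not a statistical model.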

Example 2: The Youth Mentoring Programme

A mentoring programme serving 30 young people per year wanted to demonstrate impact but knew 30 wasn’t enough for statistical significance.

Their solution: Detailed case studies for six young people (20% of their cohort), tracked over 12 months. Each case study included the young person’s own account of change, their mentor’s observations, and outcome data from their school where available.

They also collected simple quantitative data across all 30 participants: attendance rates, programme completion, and a brief end-of-programme survey.

The result: Compelling evidence combining rich individual stories with patterns across the whole cohort.

Example 3: The Debt Advice Service

An advice service wanted to move beyond counting appointments to understanding actual impact.

Their solution: Three months after each advice session, a follow-up text message with three questions:

  • Are you in a better financial position now than before your appointment? (Yes/Somewhat/No)
  • Did the advice help you take action? (Yes/No)
  • Would you recommend us to others? (Yes/No)

Response rate: 60%. Analysis time: Two hours per quarter. Impact on their funding narrative: Transformational.
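Tallying fixed-choice answers like these really is a two-hour job, not a research project. A minimal sketch, using hypothetical responses in the three-question format described above:

```python
from collections import Counter

# Hypothetical follow-up responses, one tuple per person:
# (better financial position?, took action?, would recommend?)
responses = [
    ("Yes", "Yes", "Yes"),
    ("Somewhat", "Yes", "Yes"),
    ("Yes", "Yes", "Yes"),
    ("No", "No", "Yes"),
    ("Somewhat", "Yes", "No"),
]

# Tally the first question and count "Yes" answers to the second.
better_position = Counter(r[0] for r in responses)
took_action = sum(1 for r in responses if r[1] == "Yes")

print(f"Better financial position: {dict(better_position)}")
print(f"Took action on the advice: {took_action} of {len(responses)}")
```

The same tally works in a spreadsheet pivot table; the code simply shows that the analysis is counting, nothing more.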

Spotting Good Evaluation When You See It

Here are the questions I ask when reviewing any evaluation:

  • Can I understand the key findings in five minutes?
  • Are the methods clearly explained in plain English?
  • Does it acknowledge limitations and uncertainties?
  • Are the recommendations specific and actionable?
  • Has it led to any actual changes in practice?
  • Would this be useful to someone doing similar work?

If the answer to most of these is “yes,” you’re looking at good evaluation – regardless of whether it used complex methods or simple ones, cost thousands or nothing.

What This Means for Your Organisation

If you’ve been putting off evaluation because you don’t think you can do it “properly,” I hope this has given you permission to start.

Good evaluation isn’t about academic standards or perfect methodologies. It’s about:

  • Asking honest questions that matter to your work
  • Gathering credible evidence using methods you can sustain
  • Being transparent about what you can and can’t claim
  • Actually using what you learn to improve

That’s it. If you’re doing those things, you’re doing good evaluation.

The small community group with a simple feedback form and the reflective practice to act on it is doing better evaluation than the large organisation with sophisticated systems that nobody uses.

Start where you are. Use what you have. Be honest. Learn. Improve.

That’s what good evaluation looks like.

Reflection Questions

Before you move on, take a moment to consider:

Looking at your current evaluation activities, which of the “good evaluation” characteristics do you already have?

What’s one myth about evaluation that’s been holding your organisation back?

About This Series

This guide is part of a learning series on Measuring Social Impact for Charities and Social Enterprises. We’re here to make evaluation practical, accessible, and useful, not overwhelming.

Want to go deeper? Social Value Lab supports organisations to develop proportionate, practical approaches to measuring and communicating impact. We believe every organisation deserves to understand and communicate their value, regardless of size or budget.

Was this helpful? Share it with a colleague who’s struggling to turn aspirational outcomes into measurable ones.