“I Don’t Have Time for Evaluation”: Making the Case for Proportionate Measurement
Evaluation saves more time than it takes when done proportionately. Learn practical strategies that show how smart evaluation reduces workload rather than adding to it.
“We know we should be measuring our impact, but honestly? We’re drowning and just keeping our heads above water.”
If you’ve ever said this, or heard it in a team meeting, you’re not alone. It’s the most common objection I hear from charity and social enterprise leaders, and it’s completely understandable. When you’re juggling service delivery, fundraising applications, safeguarding concerns, and staff shortages, the thought of measuring impact can feel like yet another demand on an already impossible to-do list.
But really: evaluation doesn’t have to be a burden. When done proportionately, it actually saves time, strengthens funding applications, and helps you do better work. The trick is letting go of perfection and focusing on what actually matters.
Let me show you how.
Why “We Don’t Have Time” Usually Means “We’re Doing It Wrong”
When evaluation feels overwhelming, it’s usually because we’ve fallen into one of these traps:
Trap 1: Gold-plating
Thinking evaluation means academic rigour, complex statistics, and 50-page reports. It doesn’t. Most funders and trustees just want to know: Are we making a difference? How do we know? What should we do differently?
Trap 2: Measuring everything
Trying to capture every possible outcome for every participant. This creates mountains of unusable data and exhausted staff. Better to measure a few things well than everything poorly.
Trap 3: Treating it as separate
Bolting evaluation onto existing work as an afterthought. When data collection feels like an extra task, it becomes unsustainable. The solution is integration, not addition.
Trap 4: Starting from scratch
Ignoring the data you’re already collecting – attendance registers, feedback forms, case notes. You’re probably sitting on more evidence than you think.
The good news? Once you recognise these traps, you can avoid them.
Reframing Evaluation: From Burden to Investment
Let me offer a different way to think about this. Evaluation isn’t admin; it’s intelligence. It’s the organisational equivalent of switching the lights on before rearranging the furniture.
Consider what happens without measurement:
- You can’t tell funders what’s working, making applications weaker and less competitive
- You repeat activities that don’t deliver change while missing opportunities that would
- Staff feel undervalued because they can’t see evidence of their impact
- Trustees make strategic decisions based on anecdote rather than evidence
- You risk delivering services that don’t actually help the people you exist to serve
Now consider what becomes possible with proportionate measurement:
- Stronger funding applications backed by evidence of effectiveness
- Clearer decision-making about where to invest limited resources
- Better services because you learn what works and adapt accordingly
- Staff who feel valued when they can see the difference they’re making
- Credibility with funders, commissioners, and partners
This isn’t about extra work; it’s about working smarter.
Five Principles of Proportionate Measurement
Here’s how to do evaluation that fits your capacity and actually helps:
1. Start with one question
You don’t need to evaluate everything. Pick the single most important question your organisation needs to answer right now. Examples:
- Are participants more confident after our programme? (short-term outcome)
- Do people feel less isolated six months after using our service? (medium-term outcome)
- Are we reaching the people who need us most? (targeting question)
One clear question is infinitely more useful than ten fuzzy ones.
2. Use what you’ve already got
Before designing new surveys, audit what you’re already collecting: attendance data, feedback forms, referral records, case notes, social media comments. You probably have more evidence than you realise – it just needs organising.
A community gardening project I worked with thought they had “no data.” Within 20 minutes we’d identified: sign-in sheets showing repeat attendance (indicating engagement), volunteer feedback forms (indicating satisfaction and skills development), and photographs showing the transformation of neglected spaces (visual evidence of environmental impact). That’s three data sources requiring zero extra collection effort.
3. Keep methods simple
You don’t need randomised control trials or complex statistical analysis. Simple approaches often work best:
- Before and after questions: “On a scale of 1-5, how confident do you feel managing your finances?” Asked at the start and end of your programme.
- Exit interviews: A 10-minute conversation when someone completes your service. “What’s different for you now compared to when you started?”
- Most Significant Change: Ask participants “What’s the most important change for you?” and collect those stories.
Simple doesn’t mean low quality; it means fit for purpose.
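For teams comfortable with a spreadsheet or a few lines of code, before-and-after scores like the 1–5 confidence question above really can be summarised in minutes. Here is a minimal sketch in Python, using made-up illustrative responses (the numbers are hypothetical, not from any real programme):

```python
# Summarising paired before/after scores on a 1-5 scale.
# Each tuple is one participant's (before, after) response -- illustrative values only.
from statistics import mean

responses = [(2, 4), (3, 4), (1, 3), (3, 5), (2, 3)]

before = [b for b, _ in responses]
after = [a for _, a in responses]
improved = sum(1 for b, a in responses if a > b)

print(f"Average confidence: {mean(before):.1f} -> {mean(after):.1f}")
print(f"{improved} of {len(responses)} participants reported higher confidence")
```

The same calculation is two formulas in a spreadsheet; the point is that a handful of well-chosen questions produces data you can actually analyse, not a backlog.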
4. Build it into service delivery
The most sustainable evaluation happens as part of normal practice, not as a separate task. Examples:
- Check-in conversations that already happen at sessions can include a couple of reflective questions
- Feedback forms can be filled in during the final session rather than chased afterwards
- Staff can note significant observations in case records they’re already writing
When evaluation is invisible to users and takes minimal effort from staff, it becomes sustainable.
5. Be realistic about scope
Small organisations doing great work don’t need the same measurement systems as large research institutions. Decide what’s “good enough” for your context:
- If you work with 20 young people per year, meaningful case studies might be more valuable than quantitative surveys
- If you run a helpline receiving 500 calls per week, simple monitoring data (call volume, reasons for contact, referrals made) might be sufficient
- If you’re piloting something new, learning whether it’s promising enough to continue is more important than proving definitive impact
Match your evaluation to your capacity, stage of development, and stakeholder expectations.
The 20% That Delivers 80% of the Value
Here’s what proportionate measurement looks like in practice:
Instead of: A 30-question survey sent six months after the programme ends (10% response rate, lots of missing data, impossible to analyse)
Try: Five questions asked in the final session while people are present (90% response rate, complete data, analysed in an afternoon)
Instead of: Complex logic models and monitoring frameworks gathering dust in a drawer
Try: A one-page diagram showing: Who we work with → What we do → What changes → How we’ll know
Instead of: Annual evaluation reports that nobody reads
Try: Quarterly one-page summaries shared at team meetings and board updates, with three bullet-point recommendations
The pattern? Less data, better data. Simpler systems, more use.
Your Next Steps
If you’re convinced but don’t know where to start, try this:
- Block out 45 minutes with your team (or by yourself if you’re a team of one)
- Answer these questions:
  - What’s the one thing we most need to understand about our impact right now?
  - What evidence could we collect this month to start answering that?
  - Who needs to be involved, and how much time can they realistically give?
- Start small: Pilot with one programme, one group, or one simple question
- Review in three months: Is this giving you useful information? If yes, continue. If no, adjust.
Proportionate measurement isn’t about doing evaluation perfectly; it’s about doing just enough evaluation to learn, improve, and demonstrate your value.
Reflection Questions
Before you move on, take a moment to consider:
What would become possible for your organisation if you had clearer evidence of your impact? (Think about funding, decision-making, staff morale, or service quality)
What’s one piece of evidence you could start collecting this week with the resources you already have?
About This Series
This guide is part of a learning series on Measuring Social Impact for Charities and Social Enterprises. We’re here to make evaluation practical, accessible, and useful – not overwhelming.
Want to go deeper? Social Value Lab supports organisations to develop proportionate, practical approaches to measuring and communicating impact. We believe every organisation deserves to understand and communicate their value, regardless of size or budget.
Was this helpful? Share it with a colleague who needs to hear that evaluation doesn’t have to be overwhelming.