The Evaluation Cycle: How Learning Feeds Back Into Better Work

Stop treating evaluation as a one-off exercise. Learn to create continuous feedback loops that turn insights into improvements. Practical strategies for embedding learning into your charity’s everyday practice.

Picture this: A charity spends months gathering evaluation data, writes a comprehensive report, submits it to their funder, and then… nothing. The report goes in a drawer. Staff never see it. Nothing about their work changes. Six months later, they’re still delivering the same programme in the same way, regardless of what the evaluation revealed.

Sound familiar?

This is evaluation as a compliance exercise rather than a learning tool. It’s one of the most common patterns I see in the sector, and it’s a tragic waste of effort. All that time collecting data, analysing it, and writing it up, and the organisation learns nothing. Gets no better. Makes no adjustments.

The problem isn’t the evaluation itself. It’s treating evaluation as an endpoint rather than part of a continuous cycle. When done well, evaluation doesn’t produce a final report that sits on a shelf. It generates insights that change how you work, which you then evaluate again, creating an ongoing spiral of learning and improvement.

This is the evaluation cycle, and understanding how it works transforms evaluation from a burden into a genuine improvement tool.

Let me show you how to make it work in practice.

Why One-Off Evaluation Doesn’t Work

Before we talk about cycles, let’s acknowledge why the traditional approach falls short:

It happens too late

Many organisations only evaluate at the end of a project or funding period. By then, it’s too late to improve that piece of work. You might learn valuable lessons, but you can’t apply them to the thing you just evaluated.

It’s disconnected from delivery

When evaluation is a separate activity that happens after the work is done, it feels removed from the real work of delivering services. Staff see it as bureaucracy rather than something that helps them do better.

Findings rarely translate into action

Even when evaluation reveals important insights, there’s often no clear process for turning those insights into decisions and changes. “That’s interesting” becomes “now let’s get back to the real work.”

It reinforces a compliance mindset

When evaluation only happens because a funder requires it, and the main goal is producing a report, it’s hard to feel genuinely curious about learning. You’re ticking a box, not exploring what works.

There’s no accountability for learning

Nobody checks six months later whether you actually did anything with your evaluation findings. So organisations develop evaluation amnesia – each piece of evaluation treated as if it’s the first, with no building on previous learning.

The alternative is treating evaluation as part of a continuous cycle where learning feeds directly back into improving your work.

Understanding the Evaluation Cycle

The evaluation cycle has different names and slightly different formulations, but the core concept is consistent. Here’s how I think about it:

  1. Plan: Design your work based on best available knowledge
  2. Deliver: Carry out activities while monitoring what’s happening
  3. Evaluate: Assess whether your work is achieving intended outcomes
  4. Learn: Make sense of findings and identify implications
  5. Adjust: Change your approach based on what you learned
  6. Plan: Design your next phase incorporating the learning

And the cycle begins again.

This isn’t a linear process that ends. It’s a spiral. Each time round, you know more than you did before. Your work gets progressively better informed by evidence about what actually works in your context with your participants.

Think of it like cooking. You try a recipe (plan and deliver), taste as you go (monitor), taste the final dish (evaluate), reflect on what worked and what didn’t (learn), adjust the seasoning or timing (adjust), and cook it differently next time (plan again). Gradually, you become better at making that dish because you’re learning from each attempt.

The Cycle in Detail

Let me break down each stage with practical guidance:

Stage 1: Plan

This is where you design your work based on:

  • What you know about the problem you’re addressing
  • Evidence about what works from elsewhere
  • Your own theory about how change happens
  • Previous evaluation findings if you have them

Good planning asks:

  • What change are we trying to create?
  • Who are we trying to reach?
  • What activities will we deliver?
  • What outcomes do we expect, and when?
  • How will we know if we’re succeeding?

Even if this is your first time doing something, you’re not starting from scratch. There’s usually research or practice wisdom you can draw on. The cycle improves planning over time as you add your own learning.

Stage 2: Deliver

This is the actual work – running groups, providing advice, delivering training, whatever your service involves.

During delivery, you’re monitoring (as we discussed in the previous blog):

  • Who’s participating
  • How activities are going
  • What you’re noticing about engagement
  • Any early signs of change or problems

Good delivery includes paying attention. Staff who notice patterns, ask questions, and stay curious about what’s working are gathering informal evaluation data constantly.

Stage 3: Evaluate

At planned intervals (mid-point, end, follow-up), you step back to assess:

  • Are we achieving intended outcomes?
  • For whom, and in what circumstances?
  • What’s working well?
  • What isn’t working?
  • What unexpected effects are we seeing?

This draws on both your monitoring data and specific evaluation activities like surveys, interviews, or case studies.

The key is evaluating at the right times – not so early that change hasn’t happened yet, but not so late that you can’t adjust.

Stage 4: Learn

This is the most neglected stage, yet arguably the most important. Learning means going beyond findings to implications:

  • What does this data tell us about our theory of change?
  • Which of our assumptions were correct? Which were wrong?
  • What does this mean for how we should deliver this work?
  • What should we stop, start, or continue doing?
  • What new questions does this raise?

Good learning happens collectively. It’s not one person reading the evaluation report. It’s staff, managers, trustees, and ideally participants discussing together what the evidence means.

Stage 5: Adjust

Learning means nothing if it doesn’t lead to change. Adjustment means:

  • Making concrete decisions about what you’ll do differently
  • Changing programme design, delivery, or targeting
  • Reallocating resources toward what works
  • Stopping things that aren’t working
  • Testing new approaches suggested by evaluation findings

This requires courage. Sometimes evaluation tells you that your favourite activity isn’t working, or that you need to make difficult changes. Organisations that embrace the cycle are willing to act on uncomfortable truths.

Stage 6: Plan again

Now you’re back at the start, but better informed. Your next phase of planning incorporates:

  • Evidence about what worked and what didn’t
  • Refined understanding of your target group
  • Adjusted theory of change
  • More realistic outcome expectations
  • Better evaluation questions

Each cycle builds your organisation’s knowledge. You become progressively smarter about what creates change in your particular context.

What This Looks Like in Practice

Let me show you three real examples of organisations using the cycle effectively:

Example 1: A job readiness programme

Year 1 – Plan: Designed a 6-week programme teaching CV writing, interview skills, and confidence building, based on common practice in the sector.

Year 1 – Deliver & Monitor: Ran 4 cohorts with 50 participants total. Monitored attendance (average 4 out of 6 sessions) and completion (60% finished).

Year 1 – Evaluate: Participants reported increased confidence, but interviews showed they struggled to sustain their job search after the programme ended. Only 30% were employed 6 months later.

Year 1 – Learn: The programme built skills but didn’t address practical barriers (travel costs, childcare) or provide ongoing support during job applications. Six weeks wasn’t enough to embed new habits.

Year 1 – Adjust: Extended programme to 12 weeks with ongoing monthly drop-in support. Added small grants for travel/interview clothes. Paired participants with peer buddies for mutual support during job search.

Year 2 – Plan: Redesigned programme incorporating these changes. Set more realistic outcome targets (50% employment at 6 months, given participant circumstances).

Year 2 – Results: 62% completion rate, 48% employment at 6 months. Qualitative feedback showed participants valued ongoing support and peer connections most. Travel grants were transformational for some.

Notice how the cycle led to concrete improvements based on evidence rather than assumptions.

Example 2: A community lunch club

Cycle 1 – Plan: Weekly lunch for isolated older people, assuming attendance would lead to friendships.

Cycle 1 – Evaluate: Attendance was good, but evaluation revealed that people mostly sat with staff, not with each other. Loneliness scores barely changed.

Cycle 1 – Learn: Passive attendance doesn’t create relationships. People need structured activities that prompt interaction.

Cycle 1 – Adjust: Added table activities (quizzes, crafts, photo sharing prompts) and changed room layout to encourage mixing.

Cycle 2 – Evaluate: People now interacting more. Some forming friendships and meeting outside of the lunch club. Loneliness scores improving for regular attenders.

Cycle 2 – Learn: Activities help, but friendships take time. Regular attendance over 3+ months correlates with better outcomes.

Cycle 2 – Adjust: Added transport service to support regular attendance. Started small group outings for people who’d attended 3+ months to deepen connections.

Cycle 3 – Evaluate: Sustained improvements in loneliness scores. Five participant-organised social groups now running independently.

Small adjustments, each informed by learning, transformed the programme’s effectiveness.

Example 3: A mental health helpline

Early version: Reactive service answering calls when they came in. No follow-up. No way to know if callers improved.

First evaluation: Managed to trace 30 callers who agreed to a follow-up interview. 70% said the call helped in the moment, but they struggled afterwards without ongoing support.

Adjustment 1: Offered to send written resources after calls. Added follow-up call option if callers consented.

Second evaluation: Follow-up calls made a huge difference for some people, but many didn’t answer. Written resources were rarely read.

Adjustment 2: Introduced a text-based support option. Sent a brief check-in text 3 and 7 days after initial contact, with the option to respond.

Third evaluation: 85% response rate to texts. People valued the low-pressure check-in, and it let staff identify who needed more support.

Current version: Multi-channel approach (phone, text, email) with proactive follow-up that evaluation showed was more effective than one-off crisis support.

Each cycle revealed something that improved the service. Without systematic learning and adjustment, they’d still be running the original model.

Making the Cycle Work in Your Organisation

Understanding the cycle conceptually is one thing. Making it work in practice is another. Here’s how:

Build evaluation into programme design from the start

Don’t think “we’ll deliver this programme then evaluate it afterwards.” Think “we’re piloting this programme and building in evaluation so we can learn and improve it.”

This means:

  • Including evaluation time and budget in project planning
  • Designing data collection into delivery from day one
  • Scheduling review points at sensible intervals
  • Planning how you’ll adjust based on findings

Create protected time for learning

The learn and adjust stages require time. If you don’t protect it, urgent delivery work will always squeeze out reflection.

Try:

  • Quarterly learning sessions where the team reviews evaluation data together
  • Annual away days focused on “what did we learn this year and what does it mean for next year?”
  • Building learning time into staff roles (not just data collection, but sense-making)

Make learning collaborative

One person reading evaluation reports in isolation rarely leads to change. Learning happens through conversation.

Involve:

  • Frontline staff who understand delivery realities
  • Managers who can make resource decisions
  • Trustees who set strategic direction
  • Participants who experienced the programme
  • Partners or funders who might implement learning

Different perspectives enrich understanding and increase buy-in to changes.

Document learning, not just findings

Most evaluation reports document findings: “70% of participants reported increased confidence.” But they don’t capture learning: “We learned that confidence increases happen mostly in the final three sessions when people practice real-world scenarios, suggesting we should focus more on practical application earlier.”

Keep a learning log that records:

  • What you learned
  • What it implies for practice
  • What you decided to change
  • Who’s responsible
  • When you’ll review whether the change worked

Follow through on adjustments

The cycle breaks if you learn but don’t adjust, or adjust but don’t follow through.

Create accountability:

  • Assign clear ownership of actions
  • Set deadlines for implementation
  • Report back to trustees on what changed
  • Include adjustment progress in staff supervision

Evaluate your adjustments

This is where the cycle truly closes. After you adjust your approach based on evaluation, you need to evaluate whether the adjustment actually improved things.

Did extending the programme from 6 to 12 weeks improve outcomes as you hoped? Did adding the peer buddy system make a difference? You won’t know unless you evaluate again.

This is how you build a genuine evidence base about what works in your specific context.

Overcoming Common Barriers

Let’s address the obstacles that prevent organisations from completing the cycle:

Barrier 1: “We don’t have time to make changes, we’re too busy delivering”

This is backwards. If you’re delivering something that doesn’t work well, you’re wasting time. Strategic pausing to learn and adjust saves time in the long run by making your delivery more effective.

Start small. One adjusted activity. One changed approach. Prove the value before scaling.

Barrier 2: “Our funder won’t let us change things mid-project”

Most funders are more flexible than you think, especially if you can show evidence-based reasons for adjustment. Good funders want you to learn and improve, not rigidly follow a plan that isn’t working.

Have honest conversations with funders about your learning. Many will respect evidence-informed adaptation more than stubbornly sticking to the original plan.

Barrier 3: “Evaluation shows mixed results, we don’t know what to change”

Mixed results are still useful. They often reveal that your programme works for some people in some circumstances but not others. That’s valuable learning.

Ask: Who did we help most effectively? What conditions led to success? Can we target or adapt our work to create more of those conditions?

Barrier 4: “Staff resist changes to how we work”

Resistance usually comes from changes feeling imposed rather than collaborative. When staff are involved in the learn and adjust stages, changes feel owned rather than inflicted.

Also, frame changes positively: “Evaluation showed what we’re doing works for X, so we’re going to do more of that and less of Y which wasn’t as effective.”

Barrier 5: “We can’t evaluate quickly enough to adjust within project timeframes”

Then evaluate smaller. Mid-point reviews don’t need to be comprehensive. A simple check-in with participants about what’s working and what isn’t can guide adjustments without waiting for a formal end-of-project evaluation.

Use rapid evaluation methods: quick surveys, a focus group over lunch, exit interviews, observation notes. Light-touch evaluation that informs timely adjustment is better than comprehensive evaluation that comes too late.

Different Cycles for Different Timescales

The evaluation cycle operates at multiple timescales simultaneously:

Within a programme (weeks to months)

A 12-week programme might have a mid-point review at week 6, allowing adjustment for the second half. “Participants are struggling with X, so we’re adding extra support in weeks 7-8.”

Across delivery cycles (months to a year)

If you run quarterly cohorts, each cohort is evaluated and improvements incorporated into the next. Cohort 4 benefits from learning across cohorts 1-3.

Across project phases (1-3 years)

A three-year funded project might have year 1 as a pilot, year 2 incorporating learning from year 1, and year 3 refining the model further.

Across funding periods (3-5 years)

When one funding period ends and you apply for the next, your proposal should reflect accumulated learning. “We’ve learned through evaluation that X approach is most effective, so our next phase will focus there.”

Strategic learning (5-10 years)

Long-term, organisations should be learning not just about programme details but about their theory of change, target population, and role in the wider system.

You can operate the cycle at all these levels simultaneously. Weekly programme adjustments. Annual programme redesign. Five-yearly strategic review.

Signs the Cycle Is Working

How do you know if you’ve successfully embedded the evaluation cycle? Look for these indicators:

Staff talk naturally about learning

“We tried X last month and it didn’t work as well as we hoped, so we’re testing Y this month.”

Evaluation findings are discussed widely

Not just read by one person but shared in team meetings, board papers, newsletters, conversations with participants.

You can point to specific changes informed by evaluation

“We made this change because evaluation showed that…” Not just vague improvement but concrete evidence-informed adjustments.

Plans reference previous learning

New project proposals explicitly build on what you learned from previous projects rather than starting fresh each time.

You’re comfortable saying “that didn’t work”

Organisations with good learning cultures can acknowledge failure without defensiveness because failure is just information.

Evaluation feels useful, not burdensome

When the cycle is working, staff see evaluation as helping them do better work, not as bureaucratic obligation.

Things actually get better over time

The ultimate test: Are outcomes improving? Is delivery more efficient? Are participants happier? Evidence-informed adjustment should show up as measurable gains over time.

Starting the Cycle

If you’re not currently operating in cycles, here’s how to begin:

Don’t wait for perfect conditions

You don’t need sophisticated systems or ideal evaluation data. Start with what you have. The cycle itself will help you improve your evaluation over time.

Start with one programme

Pick something that runs in repeatable cycles (quarterly cohorts, monthly groups, annual events). Practice the cycle here before expanding to your whole organisation.

Schedule the review points now

Put dates in the diary for learning sessions. If you don’t schedule them, they won’t happen. Make them as non-negotiable as your board meetings.

Keep it simple initially

First time round: basic evaluation, team discussion, one or two adjustments. Don’t try to revolutionise everything at once.

Document your process

Keep notes on what you learned and what you changed. Six months later, when you evaluate again, you’ll want to remember what adjustments you made and why.

Celebrate learning

When adjustments lead to improvements, acknowledge it. “Remember when we learned X and changed Y? Look at the difference it made.” This reinforces the value of the cycle.

Reflection Questions

Before you move on, take a moment to consider:

Think about a programme or project you’ve evaluated recently. What happened to the findings? Did they lead to any changes in how you work? If not, what stopped the cycle from completing?

Looking at your current work, where could you build in a review point that would allow you to adjust based on learning before the work finishes?

About This Series

This guide is part of a learning series on Measuring Social Impact for Charities and Social Enterprises. We’re here to make evaluation practical, accessible, and useful – not overwhelming.

Want to go deeper? Social Value Lab supports organisations to develop proportionate, practical approaches to measuring and communicating impact. We believe every organisation deserves to understand and communicate their value, regardless of size or budget.

Was this helpful? Share it with a colleague who needs to hear that evaluation doesn’t have to be overwhelming.