Making Sense of Numbers: Descriptive Statistics for Non-Statisticians
Turn raw data into meaningful insights using simple spreadsheet formulas. No maths degree needed – just practical techniques for calculating averages, spotting patterns, and reporting statistics honestly.
You’ve collected evaluation data. Numbers. Lots of them.
Confidence scores from 30 people. Attendance figures. Employment outcomes. Wellbeing measurements before and after the programme.
Now you need to make sense of it.
You could list all 30 confidence scores: 3, 4, 5, 2, 8, 6, 7, 5, 4, 8, 9, 3, 5, 7, 6, 4, 5, 8, 7, 6, 5, 4, 3, 8, 9, 5, 6, 7, 4, 5.
But that tells you nothing useful. Thirty scattered numbers. You can’t see the pattern.
You need a way to describe what these numbers actually show. That’s what descriptive statistics does.
Descriptive statistics takes messy numbers and describes them clearly. It answers questions like: “On average, how confident were people?” “What’s the range – did some people feel much more confident than others?” “Did most people improve or just a few?”
Most charities think statistics is complicated. Intimidating. Something only specialists do.
But descriptive statistics is actually straightforward. It’s not fancy maths. It’s clear thinking about what your numbers show.
Let me walk you through what you actually need to know.
Why Descriptive Statistics Matter
First, understand what descriptive statistics actually does for you.
Without descriptive statistics, you have raw numbers. Thirty confidence scores scattered around. You can’t communicate what they mean.
You tell your funder: “Our participants’ confidence scores were: 3, 4, 5, 2, 8, 6, 7, 5, 4, 8…”
Funder’s eyes glaze over. They learn nothing.
With descriptive statistics, you tell them: “Average confidence was 5.6 out of 10. Most people (27 out of 30) scored between 3 and 8. The highest score was 9, the lowest was 2.”
Now the funder understands. The numbers make sense.
Descriptive statistics also helps you see patterns you might miss looking at raw data.
You notice: “Average confidence at baseline was 4.1. At the endpoint, 7.3.”
That’s a clear improvement. Without averaging, you wouldn’t see it as clearly.
Or: “Average attendance was 12 sessions. But the range was 2 to 24 sessions. Some people came regularly, others barely at all.”
Without describing the range, you might have missed that huge variation.
Descriptive statistics isn’t just for reporting. It helps you understand your own data better. It guides what to investigate. It reveals patterns worth noticing.
Key Concepts Explained Simply
The Average (Mean)
What most people call “average.”
Add up all the numbers. Divide by how many there are.
Example: Confidence scores: 3, 4, 5, 2, 8, 6, 7, 5, 4, 8, 9, 3, 5, 7, 6, 4, 5, 8, 7, 6, 5, 4, 3, 8, 9, 5, 6, 7, 4, 5.
Add them all: 168
Divide by how many: 168 ÷ 30 = 5.6
Average confidence: 5.6 out of 10.
Why it matters: It tells you the typical score. It’s a quick way to describe what happened.
When to be careful: If you have extreme outliers (one person scored 9, everyone else scored 2), the average can be misleading. The average of 2, 2, 2, 2, 9 is 3.4 – but four of the five people scored 2, well below that “average.”
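If you want to sanity-check a spreadsheet result, the same calculation takes a few lines in Python’s standard library (shown purely as an illustration, using the example scores above):

```python
import statistics

# The 30 confidence scores from the example above
scores = [3, 4, 5, 2, 8, 6, 7, 5, 4, 8, 9, 3, 5, 7, 6,
          4, 5, 8, 7, 6, 5, 4, 3, 8, 9, 5, 6, 7, 4, 5]

average = statistics.mean(scores)  # add them all up, divide by 30 -> 5.6

# The outlier caution in action: four people scored 2, one scored 9
skewed = statistics.mean([2, 2, 2, 2, 9])  # 3.4 - above what most people scored
```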
The Median
The middle number. If you line up all numbers from lowest to highest, the median is the one in the middle.
Same confidence scores, lined up: 2, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9.
(30 numbers, so the median is the average of the 15th and 16th numbers. Both are 5, so the median is 5.)
Why it matters: It shows the middle. It is less affected by extreme scores.
If you have outliers, the median can be more meaningful than the average.
When to use: When describing a typical experience. “The typical participant scored 5 – half the group scored at or below 5, half at or above.”
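A quick sketch of the same idea, again in Python for illustration, showing how the median shrugs off an extreme outlier while the average gets dragged around:

```python
import statistics

scores = [3, 4, 5, 2, 8, 6, 7, 5, 4, 8, 9, 3, 5, 7, 6,
          4, 5, 8, 7, 6, 5, 4, 3, 8, 9, 5, 6, 7, 4, 5]

middle = statistics.median(scores)  # sorts for you, averages the 15th and 16th values -> 5

# Median vs mean with an extreme outlier
with_outlier = [2, 2, 2, 2, 9]
median_val = statistics.median(with_outlier)  # 2 - unmoved by the 9
mean_val = statistics.mean(with_outlier)      # 3.4 - dragged up by the 9
```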
The Mode
The number that appears most often.
Same confidence scores: Which number appears most?
5 appears 7 times. More than any other number.
Mode: 5.
Why it matters: It shows what’s most common. “Most people scored 5.”
When to use: When you want to know what’s typical. It’s different from the average – it tells you what actually happened most.
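In a spreadsheet, =MODE returns a single value even when two scores tie. If you ever check this in Python (illustrative sketch; `multimode` needs Python 3.8 or later), there is a function that reports every tied winner:

```python
import statistics

scores = [3, 4, 5, 2, 8, 6, 7, 5, 4, 8, 9, 3, 5, 7, 6,
          4, 5, 8, 7, 6, 5, 4, 3, 8, 9, 5, 6, 7, 4, 5]

most_common = statistics.mode(scores)  # 5 - appears 7 times

# If two scores tie for most common, multimode returns both
ties = statistics.multimode([2, 2, 3, 3, 4])  # [2, 3]
```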
The Range
Lowest number to highest number.
Same scores: Lowest 2, highest 9.
Range: 2-9. Or you could say: “Range of 7 points.”
Why it matters: It shows the spread. Are all scores similar, or is there huge variation?
Small range (all scores 5-7): relatively consistent.
Large range (scores 2-9): highly variable.
When to use: When explaining how much people varied. “While the average was 5.6, there was considerable variation. Some people scored as low as 2, others as high as 9.”
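The range needs no special function at all – just the lowest and highest values. A minimal sketch with the example scores:

```python
scores = [3, 4, 5, 2, 8, 6, 7, 5, 4, 8, 9, 3, 5, 7, 6,
          4, 5, 8, 7, 6, 5, 4, 3, 8, 9, 5, 6, 7, 4, 5]

lowest, highest = min(scores), max(scores)
spread = highest - lowest
print(f"Range: {lowest}-{highest} ({spread} points)")  # Range: 2-9 (7 points)
```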
Standard Deviation
This one’s slightly more complex, but the basic idea is worth understanding.
Standard deviation measures how spread out numbers are from the average.
High standard deviation: numbers are scattered far from average.
Low standard deviation: numbers cluster close to average.
Example: Two groups both with an average score of 5.
Group A scores: 4, 5, 5, 6 (cluster around 5). Low standard deviation.
Group B scores: 2, 3, 7, 8 (spread out from 5). High standard deviation.
Why it matters: Tells you consistency. Are most people similar or very different?
When to use: When you want to know if results are consistent. “Average confidence improved from 4.1 to 7.3, and the improvement was consistent (low standard deviation). Most people improved significantly.”
Or: “Average remained at 6, but standard deviation decreased. People’s experiences became more similar – less variation.”
You don’t need to calculate standard deviation by hand (spreadsheets do it). But understanding the concept helps you interpret it.
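The two-group comparison above can be checked in a few lines (an illustration of the concept, not something you need to run – spreadsheets do the same with =STDEV):

```python
import statistics

# Two groups with the same average (5) but very different spread
group_a = [4, 5, 5, 6]
group_b = [2, 3, 7, 8]

spread_a = statistics.stdev(group_a)  # about 0.8 - scores cluster near the average
spread_b = statistics.stdev(group_b)  # about 2.9 - scores scattered far from it
```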
How to Calculate These in a Spreadsheet
You don’t need to do the maths by hand. Spreadsheets do it for you.
Google Sheets formulas:
- Average: =AVERAGE(A2:A31)
- Median: =MEDIAN(A2:A31)
- Mode: =MODE(A2:A31)
- Min (lowest): =MIN(A2:A31)
- Max (highest): =MAX(A2:A31)
- Standard deviation: =STDEV(A2:A31)
Put these formulas in empty cells. The spreadsheet calculates automatically.
In Excel:
The same formulas work. Or use Data > Data Analysis (available once the free Analysis ToolPak add-in is enabled) if you want built-in statistical tools.
That’s it. You don’t need to be a mathematician. Spreadsheets do the heavy lifting.
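For completeness, here is what that same set of formulas looks like outside a spreadsheet – a small Python sketch that produces all six figures at once (the function name and layout are just one way to do it):

```python
import statistics

def describe(values):
    """Return the six descriptive statistics the spreadsheet formulas give you."""
    return {
        "average": round(statistics.mean(values), 1),   # =AVERAGE
        "median": statistics.median(values),            # =MEDIAN
        "mode": statistics.mode(values),                # =MODE
        "min": min(values),                             # =MIN
        "max": max(values),                             # =MAX
        "std_dev": round(statistics.stdev(values), 1),  # =STDEV
    }

scores = [3, 4, 5, 2, 8, 6, 7, 5, 4, 8, 9, 3, 5, 7, 6,
          4, 5, 8, 7, 6, 5, 4, 3, 8, 9, 5, 6, 7, 4, 5]
summary = describe(scores)
```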
An Example: Using Descriptive Statistics
An employment programme measured job search confidence before and after the programme.
30 participants. Measured on 1-10 scale.
Baseline scores: 2, 3, 2, 4, 3, 5, 2, 4, 3, 2, 5, 4, 3, 2, 4, 3, 2, 5, 4, 3, 2, 4, 3, 5, 2, 3, 4, 2, 3, 4.
Endpoint scores: 7, 8, 6, 8, 7, 9, 6, 7, 8, 7, 8, 7, 6, 8, 7, 8, 7, 9, 8, 7, 6, 8, 7, 9, 7, 8, 8, 7, 6, 8.
Descriptive statistics:
Baseline:
- Average: 3.2
- Median: 3
- Mode: 2 and 3 (tied, 9 people each)
- Range: 2-5
- Standard deviation: 1.0
Endpoint:
- Average: 7.4
- Median: 7
- Mode: 7 and 8
- Range: 6-9
- Standard deviation: 0.9
What this tells us:
Average confidence more than doubled (3.2 to 7.4). Clear improvement.
Median (3 to 7) confirms: the typical person improved significantly.
Mode shifted from 2-3 at baseline to 7-8 at endpoint – the most common scores moved right up.
Range shifted upwards: even the lowest endpoint score (6) was above the highest baseline score (5). Everyone ended higher than anyone started.
Standard deviation slightly decreased (1.0 to 0.9): the improvement was relatively consistent. Almost everyone improved, not just a few.
Report version:
“Job search confidence improved significantly. At baseline, average confidence was 3.2/10. At programme end, the average was 7.4/10 – a 4.2 point improvement (roughly a 130% increase). Most participants improved similarly (standard deviation decreased). All participants scored in the 6-9 range at endpoint, compared to a 2-5 range at baseline. This suggests the programme builds confidence consistently across participants.”
Same data. But now it’s meaningful.
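The headline numbers in that report can be reproduced with the same spreadsheet formulas, or – as an illustrative sketch – in a few lines of Python using the two score lists above:

```python
import statistics

baseline = [2, 3, 2, 4, 3, 5, 2, 4, 3, 2, 5, 4, 3, 2, 4,
            3, 2, 5, 4, 3, 2, 4, 3, 5, 2, 3, 4, 2, 3, 4]
endpoint = [7, 8, 6, 8, 7, 9, 6, 7, 8, 7, 8, 7, 6, 8, 7,
            8, 7, 9, 8, 7, 6, 8, 7, 9, 7, 8, 8, 7, 6, 8]

before = statistics.mean(baseline)  # about 3.2
after = statistics.mean(endpoint)   # 7.4
change = after - before             # about 4.2 points
percent = change / before * 100     # roughly a 130% increase
```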
Creating Summary Tables
Instead of just reporting numbers, create simple summary tables.
This helps people understand data quickly.
Example table:
| Measure | Baseline | Endpoint | Change |
|---|---|---|---|
| Average | 3.2 | 7.4 | +4.2 |
| Median | 3 | 7 | +4 |
| Lowest score | 2 | 6 | +4 |
| Highest score | 5 | 9 | +4 |
| Range | 2-5 | 6-9 | +4 points |
Clear. Easy to read. Shows what changed.
Common Mistakes With Descriptive Statistics
Mistake 1: Only reporting average
Average is useful but incomplete.
Average of 2, 2, 2, 2, 10 is 3.6. But most people scored 2.
Better: Report the average plus range or standard deviation. Shows you’re not hiding variation.
Mistake 2: Reporting too many statistics
You calculate mean, median, mode, standard deviation, variance, skewness, kurtosis…
People get overwhelmed. Eyes glaze over.
Better: Report only what’s meaningful. Average and range are usually sufficient.
Mistake 3: Not showing change
You have baseline and endpoint numbers.
You report each separately: “The baseline average was 3.2. The endpoint average was 7.6.”
People don’t immediately see the improvement.
Better: Show the change. “Average improved from 3.2 to 7.6, a 4.4 point increase.”
Mistake 4: Ignoring outliers
One person’s score is way off (everyone else 4-6, one person 1).
You report average without mentioning the outlier.
People assume everyone scored similarly.
Better: Notice outliers. “Most people scored 4-6. One person scored 1. Results are relatively consistent except for one lower scorer.”
Mistake 5: Confusing mean, median, mode
You calculate all three but don’t know which tells what story.
Better: Understand when each is useful. Mean shows overall average. Median shows typical experience. Mode shows most common.
Mistake 6: Reporting percentages incorrectly
“90% of people improved.”
Actually, 27 of 30 improved. That’s 90%, but with small numbers, it’s clearer to say “27 of 30.”
Better: With small sample sizes, report the actual numbers. “27 of 30 improved.” With larger samples, percentages are fine.
Interpreting Someone Else’s Statistics
Sometimes funders or evaluators give you statistics. Understanding what they mean helps.
“The average was 7.2, with standard deviation of 0.8”
Translation: The average was 7.2. Most scores fell between 6.4 and 8.0 (within one standard deviation). Pretty consistent – most people scored similarly.
“The median was 6, range 2-10”
Translation: The middle score was 6. But some people scored as low as 2, others as high as 10. Huge variation.
“P < 0.05”
Translation: The difference is statistically significant. Probably real, not just random chance.
(You don’t need to understand the maths. Just know: p < 0.05 usually means “probably real.”)
“95% confidence interval 5.2-8.1”
Translation: We’re fairly confident the true average is somewhere between 5.2 and 8.1.
(Based on our sample, but there’s uncertainty.)
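If you’re curious where an interval like that comes from, here is the usual normal-approximation sketch, illustrated in Python with the endpoint scores from the worked example. (This is a rough illustration of the idea – a proper evaluator may use a slightly different method.)

```python
import math
import statistics

endpoint = [7, 8, 6, 8, 7, 9, 6, 7, 8, 7, 8, 7, 6, 8, 7,
            8, 7, 9, 8, 7, 6, 8, 7, 9, 7, 8, 8, 7, 6, 8]

n = len(endpoint)
mean = statistics.mean(endpoint)
std_error = statistics.stdev(endpoint) / math.sqrt(n)  # uncertainty in the mean

# 1.96 standard errors either side covers ~95% under a normal approximation
low = mean - 1.96 * std_error
high = mean + 1.96 * std_error
print(f"95% confidence interval: {low:.1f}-{high:.1f}")
```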
You don’t need to be an expert. But these translations help you understand reports.
When You Don’t Need Statistics
Descriptive statistics are useful. But not always necessary.
Small sample, simple question
“Did our 5 participants like the programme?”
You know: 4 said yes, 1 said kind of.
You don’t need the average. Just report: “4 of 5 enjoyed it.”
Qualitative data
“What changed for people?”
Themes emerged: connection, confidence, support.
Don’t try to quantify. Just report the themes.
Exploratory evaluation
“What’s happening in this programme?”
Descriptive statistics are useful, but so are stories, observation, and open-ended responses.
Use both.
Statistics aren’t the only way to understand data. Use them when they clarify. Don’t use them just because they sound sophisticated.
Practical Next Steps
For your next evaluation:
- Step 1: Identify what you’ll measure. Confidence 1-10? Employment yes/no? Attendance count?
- Step 2: After collecting data, calculate the basics: average, median, range. Spreadsheet formulas do this in seconds.
- Step 3: Compare baseline to endpoint (if applicable). Show change. “Improved from X to Y.”
- Step 4: Notice patterns. Did everyone improve similarly, or was it varied?
- Step 5: Report clearly. “The average improved from 3.2 to 7.4, a meaningful gain.” Include actual numbers when helpful. “27 of 30 gained employment.”
Reflection Questions
Before moving on, consider:
- When you’ve reported evaluation data before, how did you describe your findings?
- For your current evaluation, what are the 2-3 numbers that matter most to understand?
- Could you calculate average, median, and range for your data using a spreadsheet?
- If you reported “the average improved from 4 to 7,” what other statistics would help people understand that?
About This Series
This guide is part of a learning series on Measuring Social Impact for Charities and Social Enterprises. We’re here to make evaluation practical, accessible, and useful, not overwhelming.
Want to go deeper? Social Value Lab supports organisations to develop proportionate, practical approaches to measuring and communicating impact. We believe every organisation deserves to understand and communicate their value, regardless of size or budget.
Was this helpful? Share it with a colleague who’s struggling to make sense of their evaluation numbers.