Why evaluation matters — and when it starts

Evaluation is not something you bolt on at the end of a campaign. It is something you plan at the beginning, before a single piece of content is created or a single pound of budget is spent. The reason is straightforward: you cannot evaluate a campaign against a target you didn't set in advance.

This is why the SMART objective comes first. Your evaluation framework is essentially a plan for how you will measure whether that objective was achieved. Every metric you track, every tool you use, every check-in you schedule — all of it flows from the objective.

You cannot measure success unless you defined what success looks like before you started.

In a coursework or exam context, an evaluation section that says "we would review our social media analytics at the end of the campaign" will not score highly. What markers are looking for is a specific, pre-planned framework: these are the metrics, tracked by these tools, reviewed at these intervals, compared against this objective.

Quantitative vs. qualitative measures

A complete evaluation considers two types of evidence: the numbers, and what the numbers don't capture.

Quantitative
The numbers

Hard data that can be counted, compared and charted. Follower growth, impressions, click-through rates, conversions, website sessions, reach. These tell you what happened at scale.

Qualitative
The meaning

Evidence that requires interpretation. Sentiment in comments, media coverage, word-of-mouth, brand perception, customer feedback. These tell you how people felt about what happened.

The best evaluation plans include both. Numbers tell you whether you hit your target; qualitative indicators tell you whether the campaign landed the way you intended — and often flag things the data alone wouldn't reveal.

Choosing the right metrics

Not every metric is relevant to every campaign. The metrics you choose should be directly connected to your objective. If your objective is to grow Instagram followers, your primary metric is follower count — not website sessions. If your objective is to drive traffic to a landing page, your primary metric is sessions and click-through rate — not social engagement.

That said, it's good practice to track a spread of metrics across different stages of the customer journey. Here are the main categories:

👁️
Awareness Metrics
How many people saw or heard the campaign? Used when the objective is reach or brand awareness.
Impressions · Reach · Views · Follower count
💬
Engagement Metrics
How many people interacted with the content? Used when the objective involves audience interaction.
Likes · Comments · Shares · Saves · CTR
🔗
Traffic Metrics
How many people visited the website or landing page? Used when the objective involves driving online traffic.
Sessions · Users · Bounce rate · Time on page
💰
Conversion Metrics
How many people took the desired action? Used when the objective involves sales, sign-ups or purchases.
Conversions · Sales · Sign-ups · Cost per acquisition
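Several of the metrics above are simple ratios, so it is worth being clear on how they are calculated. A minimal sketch, using entirely hypothetical figures (none of these numbers come from a real campaign):

```python
def click_through_rate(clicks: int, impressions: int) -> float:
    """CTR as a percentage: clicks divided by impressions."""
    return 100 * clicks / impressions

def cost_per_acquisition(spend: float, conversions: int) -> float:
    """CPA: total spend divided by the number of conversions."""
    return spend / conversions

# Hypothetical campaign: 120,000 impressions, 1,800 clicks,
# £450 spend, 90 sign-ups.
ctr = click_through_rate(1_800, 120_000)   # 1.5%
cpa = cost_per_acquisition(450.0, 90)      # £5.00 per sign-up

print(f"CTR: {ctr:.2f}%")
print(f"CPA: £{cpa:.2f}")
```

In practice platforms such as Meta Ads Manager report these figures for you; the point of the sketch is simply that each metric has a precise definition, which is what lets you compare it against a numeric target.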

A good rule of thumb: pick 3–5 metrics that are directly linked to your objective, rather than listing every metric available. More metrics does not mean a more rigorous evaluation — it usually means a less focused one.

Measurement tools — platform by platform

Every metric needs a tool to measure it. Naming the tool is what makes your evaluation plan credible — it shows you know exactly where the data will come from.

For most student campaigns running on social media, Meta Ads Manager and platform-native insights (Instagram Insights, TikTok Analytics) will be the primary tools. Name both where relevant — one for paid performance, one for organic.

Evaluation frequency — when to check in

A single post-campaign review is not a complete evaluation approach. Professional campaign evaluation happens at regular intervals throughout the campaign, not just at the end. The three standard checkpoints are: ongoing monitoring at a set frequency (e.g. weekly), a mid-campaign review, and a post-campaign report.

Example evaluation approach

Weekly check-ins using Meta Ads Manager to monitor follower growth, reach and cost per result throughout the campaign. A mid-campaign review at week six to assess whether the 15% follower growth target is on track. A final evaluation report produced within one week of the campaign end, comparing actual follower growth against the SMART objective and summarising key learnings.
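The mid-campaign check in this example comes down to simple arithmetic: compare growth so far against a pro-rata share of the 15% target. A minimal sketch, assuming a 12-week campaign, roughly linear growth, and hypothetical follower counts (all numbers here are illustrative):

```python
def growth_pct(start: int, current: int) -> float:
    """Percentage follower growth since the campaign launched."""
    return 100 * (current - start) / start

def on_track(start: int, current: int, target_pct: float,
             weeks_elapsed: int, total_weeks: int) -> bool:
    """True if growth so far meets the pro-rata share of the target.
    Assumes roughly linear growth, which is a simplification."""
    expected_so_far = target_pct * weeks_elapsed / total_weeks
    return growth_pct(start, current) >= expected_so_far

# Hypothetical: 10,000 followers at launch, 10,900 at the week-6 review
# of a 12-week campaign, against a 15% overall growth target.
print(growth_pct(10_000, 10_900))             # 9.0 (% growth so far)
print(on_track(10_000, 10_900, 15.0, 6, 12))  # True: 9% >= 7.5% pro-rata
```

A check like this is what turns a weekly check-in from passive observation into a decision point: if growth is behind the pro-rata pace, budget or creative can be adjusted before the campaign ends.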

Tool walkthrough: the Evaluation Tool, field by field

The Evaluation Tool on Campaign Theory takes you through each component of an evaluation plan in a structured order. Here's what to write in each section and why.

1

Brand / Campaign

Name the brand and the specific campaign you're evaluating. This gives your evaluation plan a clear title and context in the output.

e.g. Irn Bru — Summer 2025 Instagram Campaign
2

SMART Objective

Paste in your full SMART objective. This is the anchor for the entire evaluation plan — every metric, every tool, every check-in is there to measure performance against this statement. If you've used the SMART Objective Maker, copy the output directly here.

e.g. Increase Instagram followers by 15%, targeting females aged 18–25 in Scotland, between September–November 2025. Measured via Meta Ads Manager.
3

Metrics & Measurement Tools

Add a row for each metric you will track, pairing it with the specific tool that will provide the data. Aim for 3–5 metrics. Each row should name a concrete metric (not a category) and a named platform (not just "analytics").

Follower count → Meta Ads Manager · Impressions → Instagram Insights · CTR → Meta Ads Manager
4

How & When Will You Evaluate?

Describe your evaluation process — how often you will check in, what you will review at each stage, and how findings will be recorded or reported. Include all three checkpoints: ongoing monitoring, a mid-campaign review, and a post-campaign report.

e.g. Weekly check-ins via Meta Ads Manager. Mid-campaign review at week 6. Final report within one week of campaign end.
5

What Does Success Look Like?

Go beyond the numbers here. Describe the qualitative indicators that would tell you the campaign worked — the things the data alone wouldn't reveal. This is where sentiment, coverage, word-of-mouth and audience reaction come in.

e.g. Positive sentiment in comments, increased brand mentions, press coverage in student media, organic shares from users outside the paid target.

Common mistakes to avoid

❌ Mistake

"We will review analytics at the end of the campaign." No frequency, no named tool, no mid-campaign checkpoints. This is not an evaluation plan.

✓ Better

"Weekly check-ins using Meta Ads Manager. Mid-campaign review at week 6. Post-campaign report within one week of end date." Specific, scheduled, and named.

❌ Mistake

Listing every available metric — impressions, reach, clicks, saves, shares, conversions, bounce rate, follower growth — without explaining which ones relate to the objective.

✓ Better

Choosing 3–5 metrics that directly connect to the objective. If the goal is follower growth, follower count and reach are primary. Everything else is secondary context.


Build your evaluation plan now

Add your metrics, define your process, and generate a formatted evaluation plan to save to your campaign.

Open the tool →