Why evaluation matters — and when it starts
Evaluation is not something you bolt on at the end of a campaign. It is something you plan at the beginning, before a single piece of content is created or a single pound of budget is spent. The reason is straightforward: you cannot evaluate a campaign against a target you didn't set in advance.
This is why the SMART objective comes first. Your evaluation framework is essentially a plan for how you will measure whether that objective was achieved. Every metric you track, every tool you use, every check-in you schedule — all of it flows from the objective.
In a coursework or exam context, an evaluation section that says "we would review our social media analytics at the end of the campaign" will not score highly. What markers are looking for is a specific, pre-planned framework: these are the metrics, tracked by these tools, reviewed at these intervals, compared against this objective.
Quantitative vs. qualitative measures
A complete evaluation considers two types of evidence: the numbers, and what the numbers don't capture.
Quantitative: hard data that can be counted, compared and charted. Follower growth, impressions, click-through rates, conversions, website sessions, reach. These tell you what happened at scale.
Qualitative: evidence that requires interpretation. Sentiment in comments, media coverage, word-of-mouth, brand perception, customer feedback. These tell you how people felt about what happened.
The best evaluation plans include both. Numbers tell you whether you hit your target; qualitative indicators tell you whether the campaign landed the way you intended — and often flag things the data alone wouldn't reveal.
Choosing the right metrics
Not every metric is relevant to every campaign. The metrics you choose should be directly connected to your objective. If your objective is to grow Instagram followers, your primary metric is follower count — not website sessions. If your objective is to drive traffic to a landing page, your primary metric is sessions and click-through rate — not social engagement.
That said, it's good practice to track a spread of metrics across different stages of the customer journey: awareness metrics (reach, impressions), engagement metrics (likes, comments, shares, click-through rate) and action metrics (sessions, conversions, sign-ups).
A good rule of thumb: pick 3–5 metrics that are directly linked to your objective, rather than listing every metric available. More metrics does not mean a more rigorous evaluation — it usually means a less focused one.
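Under that rule, one lightweight way to pre-plan the framework is to write the metric-to-tool pairings down as structured data before the campaign starts. A minimal sketch, with hypothetical metrics and targets:

```python
# Illustrative metric plan for a follower-growth objective.
# Each metric is paired with the named tool that will measure it;
# all targets here are hypothetical examples, not recommendations.
metric_plan = [
    {"metric": "Follower count", "tool": "Meta Ads Manager",  "target": "+15%"},
    {"metric": "Reach",          "tool": "Instagram Insights", "target": "500k"},
    {"metric": "CTR",            "tool": "Meta Ads Manager",   "target": "1.2%"},
]

# Enforce the focus rule: 3-5 metrics, no more.
assert 3 <= len(metric_plan) <= 5, "keep the plan focused"

for row in metric_plan:
    print(f"{row['metric']:<15} -> {row['tool']} (target {row['target']})")
```

Writing the plan down like this forces every metric to name its data source up front, which is exactly what makes an evaluation framework credible.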
Measurement tools — platform by platform
Every metric needs a tool to measure it. Naming the tool is what makes your evaluation plan credible — it shows you know exactly where the data will come from. Here are the most common ones:
- Meta Ads Manager — tracks performance for Facebook and Instagram paid campaigns: reach, impressions, clicks, cost per result, follower growth from ads.
- Instagram Insights — native analytics for organic Instagram content: follower growth, post reach, profile visits, story views.
- TikTok Analytics — tracks video performance, follower growth, profile views and audience demographics for TikTok content.
- Google Analytics — tracks website traffic, user behaviour, source of visits, session duration and conversion goals.
- YouTube Studio — tracks video views, watch time, subscriber growth and audience retention for YouTube content.
- Spotify for Podcasters / Ad Studio — reach and listener data for podcast or audio ad campaigns.
- Google Search Console — tracks organic search performance: clicks, impressions and keyword rankings for SEO-focused campaigns.
For most student campaigns running on social media, Meta Ads Manager and platform-native insights (Instagram Insights, TikTok Analytics) will be the primary tools. Name both where relevant — one for paid performance, one for organic.
Evaluation frequency — when to check in
A single post-campaign review is not a complete evaluation approach. Professional campaign evaluation happens at regular intervals throughout the campaign, not just at the end. The three standard checkpoints are:
- During the campaign (ongoing / weekly): Monitor metrics at regular intervals to spot what is working and what isn't while there is still time to adjust. Flag anything significantly above or below target.
- Mid-campaign review: A more structured review at the halfway point to assess overall trajectory and make any significant changes to targeting, budget allocation or content.
- Post-campaign evaluation: A final report produced within one to two weeks of the campaign end, comparing actual results against the SMART objective and documenting learnings for future campaigns.
An example schedule for a follower-growth campaign: weekly check-ins using Meta Ads Manager to monitor follower growth, reach and cost per result throughout the campaign; a mid-campaign review at week six to assess whether the 15% follower growth target is on track; and a final evaluation report produced within one week of the campaign end, comparing actual follower growth against the SMART objective and summarising key learnings.
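The mid-campaign check above is just arithmetic: compare the current follower count with where a straight-line path to the target would put it at that point. A minimal sketch, using hypothetical figures (a 20,000-follower starting point over a 12-week run):

```python
# Hypothetical pacing check for a follower-growth objective.
# Assumes linear growth toward the target; all figures are illustrative.

def on_track(start: int, target_pct: float, total_weeks: int,
             week: int, current: int) -> bool:
    """Return True if the follower count is at or above the linear pace."""
    target = start * (1 + target_pct / 100)            # 20,000 -> 23,000
    expected = start + (target - start) * week / total_weeks
    return current >= expected

# Week-6 check against a 15% target over 12 weeks.
# Linear pace at week 6 is 21,500, so 21,600 is on track:
print(on_track(start=20_000, target_pct=15, total_weeks=12,
               week=6, current=21_600))
```

Growth is rarely perfectly linear in practice, but a simple pace line like this is enough to flag whether the mid-campaign review should trigger changes to targeting, budget or content.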
Tool walkthrough: the Evaluation Tool, field by field
The Evaluation Tool on Campaign Theory takes you through each component of an evaluation plan in a structured order. Here's what to write in each section and why.
Brand / Campaign
Name the brand and the specific campaign you're evaluating. This gives your evaluation plan a clear title and context in the output.
e.g. Irn Bru — Summer 2025 Instagram Campaign
SMART Objective
Paste in your full SMART objective. This is the anchor for the entire evaluation plan — every metric, every tool, every check-in is there to measure performance against this statement. If you've used the SMART Objective Maker, copy the output directly here.
e.g. Increase Instagram followers by 15%, targeting females aged 18–25 in Scotland, between September–November 2025. Measured via Meta Ads Manager.
Metrics & Measurement Tools
Add a row for each metric you will track, pairing it with the specific tool that will provide the data. Aim for 3–5 metrics. Each row should name a concrete metric (not a category) and a named platform (not just "analytics").
Follower count → Meta Ads Manager · Impressions → Instagram Insights · CTR → Meta Ads Manager
How & When Will You Evaluate?
Describe your evaluation process — how often you will check in, what you will review at each stage, and how findings will be recorded or reported. Include all three checkpoints: ongoing monitoring, a mid-campaign review, and a post-campaign report.
e.g. Weekly check-ins via Meta Ads Manager. Mid-campaign review at week 6. Final report within one week of campaign end.
What Does Success Look Like?
Go beyond the numbers here. Describe the qualitative indicators that would tell you the campaign worked — the things the data alone wouldn't reveal. This is where sentiment, coverage, word-of-mouth and audience reaction come in.
e.g. Positive sentiment in comments, increased brand mentions, press coverage in student media, organic shares from users outside the paid target.
Common mistakes to avoid
"We will review analytics at the end of the campaign." No frequency, no named tool, no mid-campaign checkpoints. This is not an evaluation plan.
"Weekly check-ins using Meta Ads Manager. Mid-campaign review at week 6. Post-campaign report within one week of end date." Specific, scheduled, and named.
Listing every available metric — impressions, reach, clicks, saves, shares, conversions, bounce rate, follower growth — without explaining which ones relate to the objective.
Choosing 3–5 metrics that directly connect to the objective. If the goal is follower growth, follower count and reach are primary. Everything else is secondary context.
- Evaluating against the wrong thing. Your metrics must relate to your SMART objective. If the objective is follower growth, evaluating on website traffic alone misses the point.
- Only evaluating qualitatively. "The campaign generated lots of positive comments" is not a complete evaluation. You need the numbers too.
- Only evaluating quantitatively. A campaign that hit its follower target but generated widespread negative sentiment has not succeeded. Qualitative indicators matter.
- Leaving evaluation until after the campaign ends. Monitoring during the campaign gives you the opportunity to optimise — adjust targeting, shift budget, change content — while there's still time to make a difference.
Build your evaluation plan now
Add your metrics, define your process, and generate a formatted evaluation plan to save to your campaign.