The benchmark table
"Good" creative performance is not a single number. It is a distribution, by platform, by industry, by objective, by frequency, by creative format. This composite is sourced from WordStream, Databox, Revealbot, Varos, Tinuiti, and Socialinsider[1]. Use it as a rough anchor, not as an account-level target.
| Industry (Meta) | CTR | CPC | CVR | CPL |
|---|---|---|---|---|
| All industries | 1.51% | $1.86 | 8.78% | $23.10 |
| Legal | 1.94% | $1.32 | 7.20% | $45.90 |
| Beauty | 2.08% | $0.88 | 11.63% | $11.00 |
| B2B | 0.78% | $2.52 | 10.63% | $23.70 |
| Finance | 0.56% | $3.77 | 9.09% | $41.43 |
| Apparel | 1.24% | $0.45 | 4.11% | $10.98 |
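If you want the table in a loadable form, here is a minimal sketch: the composite encoded as a lookup, with a helper that expresses an account metric as a ratio of its category anchor. The structure and helper name are illustrative, not part of any cited source.

```python
# The Meta composite table as a lookup, used to sanity-check an account's
# headline numbers against its category anchor. Illustrative sketch only.
BENCHMARKS = {
    # industry: (CTR %, CPC $, CVR %, CPL $)
    "all":     (1.51, 1.86,  8.78, 23.10),
    "legal":   (1.94, 1.32,  7.20, 45.90),
    "beauty":  (2.08, 0.88, 11.63, 11.00),
    "b2b":     (0.78, 2.52, 10.63, 23.70),
    "finance": (0.56, 3.77,  9.09, 41.43),
    "apparel": (1.24, 0.45,  4.11, 10.98),
}

def anchor_ratio(industry: str, metric: str, observed: float) -> float:
    """Return observed / category anchor, e.g. 1.3 means 30% above anchor."""
    idx = {"ctr": 0, "cpc": 1, "cvr": 2, "cpl": 3}[metric]
    return observed / BENCHMARKS[industry][idx]

# Example: a beauty account paying $1.20 CPC sits ~36% above its anchor.
print(f"{anchor_ratio('beauty', 'cpc', 1.20):.2f}x the category CPC anchor")
```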
Cross-platform snapshot from Revealbot's live dashboard (April 2025): Meta CPM ~$14.90 US, TikTok CPM ~$7.85, Snap CPM ~$3.45. TikTok in-feed CTR runs 1.0 to 1.5 percent; Spark Ads run a 30 to 40 percent lower CPA than standard in-feed. YouTube Shorts CPM ~$3.50; in-stream skippable CPV runs $0.011 to $0.030; YouTube Demand Gen CPA was down 29 percent year over year in Q4 2024[2].
Methodology and sources
Each source measures something slightly different. WordStream/LocaliQ[3] aggregates over $200M in client spend. Databox[4] pulls from 250+ companies. Revealbot[5] reports live market averages. Varos[6] is DTC-focused. Tinuiti[2] covers digital ads quarterly with heavy retail exposure. Socialinsider[7] specializes in TikTok.
No single benchmark is representative of your account. These are indicative anchors. Your own 12-month rolling median is the number that actually matters.
Industry distribution
The range from finance CPCs ($3.77) to apparel CPCs ($0.45) spans a factor of eight. Industry is the biggest single determinant of headline numbers, which is why "average Facebook CTR" is a broken concept without a category caveat. Beauty and apparel have the cheapest clicks, with conversion rates at opposite ends of the table. B2B and finance have expensive clicks and narrow intent. Legal has high CPLs driven by high lead value.
When you compare across categories, you are mostly measuring unit economics, not creative quality.
Frequency and creative fatigue
Smartly.io analyzed 5,000 Meta campaigns in 2023[8]: CTR drops approximately 60 percent after frequency reaches around 3.4 per week. Mark Ritson's 2023 IPA analysis[9] and Ad Age's 2022 reporting land on a similar range: rotating creative every four to six weeks improves long-term ROI by roughly 18 percent.
The practical takeaway: creative rotation is not a nice-to-have. It is the main lever most teams are not pulling. Build your pipeline to have fresh variants ready before the frequency curve dictates it.
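As a planning aid, here is a small sketch of the runway calculation this implies, assuming weekly frequency grows roughly linearly. The growth model and function name are assumptions; only the 3.4/week threshold comes from the Smartly.io figure above.

```python
# Rough planning sketch: how many weeks until frequency crosses the
# fatigue threshold, assuming approximately linear frequency growth.
def weeks_until_fatigue(current_freq: float, weekly_growth: float,
                        threshold: float = 3.4) -> float:
    """Weeks until weekly frequency crosses the fatigue threshold."""
    if current_freq >= threshold:
        return 0.0
    if weekly_growth <= 0:
        return float("inf")  # frequency flat or falling: no fatigue date
    return (threshold - current_freq) / weekly_growth

# Example: at 2.2/week, adding ~0.3/week, you have ~4 weeks to ship variants.
print(f"{weeks_until_fatigue(2.2, 0.3):.1f} weeks of runway")
```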
When is a test over?
Meta Learning Phase. Ad sets exit Learning Phase at roughly 50 optimization events in seven days[10]. Do not make creative decisions inside Learning Phase. You are reading noise.
Meta A/B Testing. Meta's tool reports a Bayesian probability that each variant is the winner. The industry rule of thumb: 95 percent confidence, plus 1,000 conversions per cell, plus 7 to 14 days of runtime.
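For intuition, the textbook version of a Bayesian probability-of-winner calculation looks like the sketch below: a Beta-Binomial posterior per cell, compared by Monte Carlo sampling. Meta's internal implementation is not public; this is the standard construction, not theirs.

```python
# Beta-Binomial probability of winner via Monte Carlo. Standard-library only.
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=100_000, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        # Beta(1 + conversions, 1 + non-conversions) posterior per cell
        p_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        p_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += p_b > p_a
    return wins / draws

# Example with 1,000+ conversions per cell, as the rule of thumb requires.
print(f"P(B beats A) = {prob_b_beats_a(1000, 12000, 1080, 12000):.3f}")
```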
Brand-lift. Nielsen and Meta[13] converge on roughly five percentage points of absolute unaided awareness lift as the "meaningful" threshold.
Minimum spend per variant. Common Thread Collective[11] recommends $500 to $1,000 spend per variant before judging Meta DTC creative. Below that, sample is too thin to separate variant effects from audience noise.
Calling a winner before Learning Phase graduates, or before a thousand conversions per cell, is measuring randomness. The test was not over.
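Rolled into one gate, the stopping rules above might look like this. The thresholds are the cited rules of thumb; the function shape is our own.

```python
# A hedged checklist of the stopping rules above, combined into one gate.
def test_is_over(opt_events_7d: int, conversions_per_cell: int,
                 days_running: int, spend_per_variant: float,
                 prob_winner: float) -> bool:
    return (opt_events_7d >= 50           # out of Meta Learning Phase
            and conversions_per_cell >= 1000
            and days_running >= 7
            and spend_per_variant >= 500  # CTC's lower bound for DTC
            and prob_winner >= 0.95)

print(test_is_over(80, 1200, 10, 750.0, 0.96))  # True: safe to call it
```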
What the data does not tell you
Benchmarks are outcomes, not causes. A high-CTR ad can sell nothing. A low-CTR ad can drive a quarter of annual revenue. Above a minimum floor, creative quality matters more than the number.
Wistia's 2024 State of Video[12] reported 50 percent drop-off at the one-minute mark for ads under two minutes. Long-form ads are a different problem. Watch-through distributions tell more about the creative than the headline CTR ever will.
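Here is a sketch of what reading the watch-through distribution means in practice, run on a synthetic retention curve; the field names and sample data are invented for illustration.

```python
# Given per-second retention (fraction of viewers still watching at second
# t), report retention at 60s plus where the curve crosses each quartile.
def dropoff_report(retention):
    report = {"retention_at_60s":
              retention[60] if len(retention) > 60 else retention[-1]}
    for q in (0.75, 0.50, 0.25):
        # first second at which retention falls below each quartile
        report[f"first_second_below_{int(q * 100)}pct"] = next(
            (t for t, r in enumerate(retention) if r < q), len(retention))
    return report

# A synthetic 90-second curve with steady exponential decay.
curve = [0.988 ** t for t in range(91)]
print(dropoff_report(curve))  # retention at 60s ~= 0.48, near the 50% mark
```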
How to read your own numbers
- Compare against your 12-month rolling median, not industry benchmarks. Your business is the controlled experiment (see the sketch after this list).
- Segment by format, placement, and audience. Static versus video, feed versus stories, cold versus remarketing. These split the distribution more than industry does.
- Use incremental lift, not last-click. Last-click attribution systematically under-reports creative effects, especially for brand-leaning variants.
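Here is one way to compute that rolling-median baseline with pandas, segmented by format. The column names are assumptions about your export, not a standard schema.

```python
# 12-month rolling median per format, from a monthly performance export.
import pandas as pd

def rolling_median_baseline(df: pd.DataFrame, metric: str = "ctr") -> pd.DataFrame:
    """Add a per-format rolling-median baseline and the ratio against it."""
    df = df.sort_values("month")
    # 12 monthly observations per window; require at least 6 to emit a value
    df["baseline"] = (df.groupby("format")[metric]
                        .transform(lambda s: s.rolling(12, min_periods=6).median()))
    df["vs_baseline"] = df[metric] / df["baseline"]
    return df

# df columns assumed: month (datetime), format ("static"/"video"), ctr (float)
```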
Benchmarks as a training signal
For prediction models (see the scoring framework), creative performance data is the ground truth that the model has to calibrate against. The Meta Ad Library (see the Meta Ad Library guide) provides the largest open record of what ran; these benchmark distributions tell you what "performed well" actually means in a given category.
At OpenAffect, we use exactly these distributions to label whether a historical creative belongs in the top, middle, or bottom of its category for prediction training and calibration (see the calibration page).
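Expressed as code, the labeling rule is a within-category tercile split, something like the sketch below. This illustrates the idea, not OpenAffect's production pipeline.

```python
# Bucket each historical creative into top/middle/bottom terciles within
# its own category, so labels reflect category-relative performance.
import pandas as pd

def tercile_label(df: pd.DataFrame, metric: str = "cvr") -> pd.Series:
    """Label each row 'bottom'/'middle'/'top' within its category."""
    return (df.groupby("category")[metric]
              .transform(lambda s: pd.qcut(s, 3,
                                           labels=["bottom", "middle", "top"])))

# df columns assumed: category (str), cvr (float); labels feed the trainer.
```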
References
- [1] Composite benchmark assembled from WordStream, Databox, Revealbot, Varos, Tinuiti, and Socialinsider (2024).
- [2] Tinuiti. Digital Ads Benchmark Report, Q4 2024.
- [3] WordStream/LocaliQ. Facebook Ads Benchmarks 2024.
- [4] Databox. Facebook Ads benchmarks.
- [5] Revealbot. Live advertising costs dashboard.
- [6] Varos. DTC reports.
- [7] Socialinsider. TikTok Ads benchmarks.
- [8] Smartly.io. Creative fatigue research (2023).
- [9] Mark Ritson. Marketing Week.
- [10] Meta. Learning Phase documentation.
- [11] Common Thread Collective. Ecommerce benchmarks.
- [12] Wistia. State of Video 2024.
- [13] Meta. Brand Lift methodology.