You're probably seeing the same thing most UK SMEs see when they finally dig into placements. Spend has gone out the door, clicks have arrived, and a chunk of those impressions landed on websites, apps, or YouTube inventory you'd never have chosen yourself. That's frustrating when budgets are tight and someone inside the business is asking why performance looks uneven from week to week.
Placement optimization fixes that, but not by going fully manual and not by handing everything to automation either. The accounts that hold efficiency best usually run a hybrid model. Let Google and Microsoft find reach. Then use disciplined audits, exclusions, bidding logic, and light automation to stop obvious waste and scale what's working.
That matters more in a market this large. The UK online ad market reached £29.6 billion in 2023, with search and related formats at £15.9 billion, according to IAB UK's ad market report. For smaller brands, placement optimization isn't a cosmetic tidy-up. It's one of the clearest ways to protect margin and make limited budget work harder.
Your Starting Point: The Placement Audit Playbook
Most wasted spend hides in plain sight. It sits in app inventory, weak display placements, low-intent YouTube channels, and broad campaign types where the platform has been allowed to explore faster than your reporting rhythm.
A proper audit starts inside the "Where ads showed" style reports in Google Ads and Microsoft Advertising. Don't stop at impressions. Placements can look busy and still be commercially useless. The only view that matters is performance at placement level against your campaign objective.
If you need a wider account health process around this, use a structured Google Ads audit checklist alongside the placement review so you're not diagnosing placement issues in isolation.
What to pull before you judge anything
Export placement data with enough context to make decisions. At minimum, review:
- CTR: Helps spot accidental clicks, weak relevance, or inventory that attracts curiosity rather than intent.
- CPC: Useful for judging whether a placement is expensive relative to the value it produces.
- Conversion rate: Tells you if the traffic is qualified once it lands.
- CPA or ROAS proxy: This is the commercial filter. A placement that clicks well but never converts is still waste.
Segment by device if you can. A placement may look acceptable overall but collapse on mobile apps, or perform well on desktop and poorly on mobile web. That split often explains why a campaign feels unstable.
Practical rule: Don't optimise placements off one metric. High CTR with weak conversion rate usually means low-quality curiosity traffic, not a winner.
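If you review exports in a spreadsheet or notebook, a minimal pandas sketch like this keeps those metrics in one view. The file name, column names, and thresholds here are hypothetical placeholders, so map them to your actual export before running anything.

```python
import pandas as pd

# Hypothetical column names; rename to match your actual
# "Where ads showed" export before running.
df = pd.read_csv("placement_export.csv")

df["ctr"] = df["clicks"] / df["impressions"]
df["conv_rate"] = df["conversions"] / df["clicks"].replace(0, float("nan"))

# Flag the classic trap: strong CTR but weak conversion rate.
# Thresholds are illustrative, not benchmarks.
curiosity = df[(df["ctr"] > 0.02) & (df["conv_rate"] < 0.005)]
print(curiosity[["placement", "clicks", "ctr", "conv_rate", "cost"]])
```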
How I triage placements fast
I use a three-bucket model.
Toxic placements
These are obvious drains. They generate spend, low-quality clicks, or poor post-click behaviour, and don't show signs of commercial intent.

Neutral placements

These have activity but not enough evidence. They stay under observation. Most accounts have a large middle category, and that's fine.

Promising placements
These don't just convert. They fit the business model. For ecommerce, that usually means strong transaction quality. For lead gen, that means enquiries that sales teams don't immediately reject.
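Here's a rough sketch of that triage logic, assuming the same hypothetical export as above and an illustrative CPA target. The evidence bar and target figure are placeholders, not benchmarks; calibrate them to your own account before acting on the output.

```python
import pandas as pd

MIN_CLICKS = 100     # evidence bar before judging anything; illustrative
TARGET_CPA = 40.0    # hypothetical campaign target, in GBP

def triage(row) -> str:
    """Classify a placement row into the three-bucket model."""
    if row["clicks"] < MIN_CLICKS:
        return "neutral"   # not enough evidence yet
    if row["conversions"] == 0 or row["cost"] / row["conversions"] > 2 * TARGET_CPA:
        return "toxic"     # spend with no acceptable commercial return
    if row["cost"] / row["conversions"] <= TARGET_CPA:
        return "promising" # converts at or under target
    return "neutral"       # between target and 2x target: keep watching

df = pd.read_csv("placement_export.csv")
df["bucket"] = df.apply(triage, axis=1)
print(df["bucket"].value_counts())
```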
A quick sense-check helps. Ask whether the placement matches the product, the audience mindset, and the buying journey. If you sell specialist B2B software and spend is drifting into children's app inventory, you don't need a philosophical debate.
What usually goes wrong in SME accounts
The failure point isn't lack of data. It's lack of review discipline. Busy teams check campaign totals, see blended CPA holding, and miss the fact that weak placements are being subsidised by a few strong pockets of traffic.
Use a simple weekly review pattern:
- Start with spend concentration: Which placements consumed meaningful budget?
- Then check outcome quality: Did they convert at a rate the business can live with?
- Then check context: App, website, YouTube channel, video, or audience expansion route.
- Finally decide: Exclude, watch, or try to scale.
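For the spend-concentration step, a short sketch like this surfaces where the money actually went, again assuming hypothetical file and column names.

```python
import pandas as pd

df = pd.read_csv("placement_export.csv")

# Rank placements by spend and compute cumulative share, so the
# weekly review starts with wherever the budget actually went.
by_spend = df.groupby("placement")["cost"].sum().sort_values(ascending=False)
cum_share = by_spend.cumsum() / by_spend.sum()

# Placements that together consumed the first 80% of spend.
print(by_spend[cum_share <= 0.80])
```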
Placement optimization works best when you treat placement reports as a profit-and-loss view of traffic quality, not just a hygiene task.
Cutting the Waste: Strategic Placement Exclusions
Once the audit is done, hesitation gets expensive. If a placement is clearly poor, exclude it. The key is to do it with rules, not emotion.
The best practice I use is straightforward: only exclude placements with meaningful evidence, such as over 100 clicks and zero conversions, or where CPA is more than 2x your campaign target, as outlined by PPC Geeks' guidance on PPC management. That stops you from cutting traffic too early just because a placement had a bad day.
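Applied to an export, that rule is only a few lines. This sketch uses the same hypothetical column names as earlier; the target figure is a placeholder, so substitute your campaign's own.

```python
import pandas as pd

TARGET_CPA = 40.0   # hypothetical target; use your campaign's own figure

df = pd.read_csv("placement_export.csv")
df["cpa"] = df["cost"] / df["conversions"].replace(0, float("nan"))

# Rule 1: meaningful click volume with nothing to show for it.
no_return = (df["clicks"] > 100) & (df["conversions"] == 0)
# Rule 2: converts, but at more than 2x the campaign target.
over_target = df["cpa"] > 2 * TARGET_CPA

exclusion_candidates = df[no_return | over_target]
exclusion_candidates.to_csv("exclusion_candidates.csv", index=False)
```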
Exclude at the right level
Not every bad result means you should block an entire domain. That's where many accounts overcorrect.
- Exclude a specific URL when the problem is isolated.
- Exclude a YouTube channel when the inventory theme is wrong but you don't want to lose YouTube as a whole.
- Exclude a domain when the site itself is consistently irrelevant or low quality.
- Use shared exclusion lists when the same junk inventory appears across multiple campaigns.
Scale matters. If you manage several campaigns, shared lists save time and stop the same weak placements leaking back into a different campaign type later.
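If you work from a multi-campaign export, a sketch like this can surface shared-list candidates: placements that fail in more than one campaign rather than just once. File and column names are hypothetical.

```python
import pandas as pd

# Assumes a multi-campaign export with a "campaign" column alongside
# the usual placement metrics; column names are hypothetical.
df = pd.read_csv("all_campaigns_placements.csv")

weak = df[(df["clicks"] > 100) & (df["conversions"] == 0)]

# Placements that fail in two or more campaigns are shared-list
# candidates, rather than one-off campaign-level exclusions.
repeat_offenders = (
    weak.groupby("placement")["campaign"].nunique().loc[lambda s: s >= 2]
)
repeat_offenders.to_csv("shared_exclusion_candidates.csv")
```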
A workable decision framework
Use this simple grid when making exclusion calls.
| Situation | Action |
|---|---|
| Low data, unclear performance | Monitor only |
| Clear irrelevance, poor quality signals | Exclude placement |
| Repeated weak performance across campaigns | Add to shared exclusion list |
| Good thematic fit but underperforming recently | Leave active and review with longer date range |
That “good thematic fit” point matters. Some placements need time because conversion lag is real, especially in lead gen and higher-consideration ecommerce.
If you also need to control invalid or poor-intent traffic around the edges, this guide on filtering out DIY clicks without killing Google Ads volume is useful alongside placement exclusions.
What doesn't work
Blanket exclusions rarely age well. Excluding all apps, all YouTube, or broad swathes of display inventory might clean reporting up, but it can also shut down valid reach and remarketing support.
Another bad habit is excluding solely on CPC. Expensive clicks aren't automatically bad. Cheap placements can be the worst offenders if they never produce value.
Don't reward neat-looking reports over profitable traffic. The cleanest placement list in the account can still underperform if you've removed useful inventory.
Choosing Your Path: Automatic vs Manual Placements
This is the strategic question teams most often get wrong, because it gets treated like a software setting instead of a budget decision.
Google's rollout of Smart Bidding and Performance Max changed placement optimization from a fixed setup task into a machine-learning process that adapts continuously, as described in Google's overview of Performance Max. That's useful, but only if you know when to let automation explore and when to narrow the field.
Automatic works when discovery matters
Automatic placements are strongest when you need breadth. They're useful for newer accounts, campaigns entering a new market, or brands trying to learn where demand exists across Google inventory.
They also reduce setup friction. You don't need a hand-built list of websites, channels, or apps before launch. The trade-off is obvious. You'll usually get less control and slower visibility into where weaker inventory is hiding unless reporting and exclusions are tight.
Manual works when control matters
Manual placements are better when you already know where value sits. That often applies to niche B2B, brand-sensitive sectors, or campaigns with narrow budgets where every wasted click matters.
They're also useful when a business has legal, compliance, or reputation concerns. If you can't afford for ads to appear beside unsuitable content, manual placement control becomes part of risk management, not just optimisation.
For teams trying to balance automation with oversight inside newer campaign types, this piece on controlling automated campaigns in Performance Max is a practical companion.
Automatic vs Manual Placement Strategy
| Factor | Automatic Placements | Manual Placements |
|---|---|---|
| Reach | Broad access across available inventory | Narrower, selected inventory only |
| Speed to launch | Fast | Slower, requires research and setup |
| Control | Lower direct control | High control |
| Best use case | Exploration, scaling, audience discovery | Brand safety, niche targeting, proven winners |
| Management load | Lower upfront, higher oversight later | Higher upfront, clearer ongoing control |
| Budget fit | Better when testing breadth | Better when waste tolerance is low |
The hybrid approach that usually wins
Most SME accounts shouldn't choose one side completely. They should split the job.
Use automatic placements for discovery. Let the platform find inventory patterns you wouldn't have built manually. Then use manual oversight to trim the obvious waste, protect brand suitability, and build a shortlist of placements worth backing more deliberately.
That hybrid model is usually the sweet spot for busy marketing managers. You don't need to micromanage every impression, but you also don't need to accept whatever the algorithm serves.
Automation is excellent at finding options. It is not responsible for your margin. You are.
Beyond Exclusions: Advanced Bidding and Testing
Removing bad placements is only half the job. Strong placement optimization also means directing more pressure toward inventory that produces qualified sales or leads.
A good starting point is industry context. Average UK CPC can range from £1.35 for Retail to over £2.62 for Health, according to this PPC Geeks benchmark guide on Google Ads costs. That matters because a placement with a high CPC may still deserve more budget if the conversion value is strong relative to your sector.
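The arithmetic is worth making explicit. A break-even CPC is just target CPA multiplied by conversion rate, as in this illustrative calculation with placeholder numbers.

```python
# Back-of-envelope maths: what a placement's CPC is "allowed" to be,
# given its conversion rate and your target CPA. Numbers are
# illustrative, not benchmarks.
target_cpa = 50.0    # GBP you can pay per conversion
conv_rate = 0.04     # 4% of clicks from this placement convert

max_affordable_cpc = target_cpa * conv_rate
print(f"Break-even CPC: £{max_affordable_cpc:.2f}")
# £2.00 here, so a £2.62 CPC only works if this placement converts
# above roughly 5.2%, or if each conversion is worth more than target.
```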
How to scale strong placements without overreacting
Don't jump from “this placement looks good” to “push hard”. Confirm that the placement is producing value consistently and that the landing page, offer, and audience all align. Then choose the lever that fits the campaign type.
For some campaigns, that means bid adjustments where available. For others, it means carving out proven placements into their own campaign or asset structure so budget doesn't get diluted by weaker inventory.
Use this decision path:
- If performance is strong and stable: increase priority carefully.
- If performance is strong but volume is low: test adjacent placements with similar themes.
- If performance is volatile: hold steady and gather more evidence.
- If performance dropped after expansion: check whether automation started favouring lower-quality inventory.
Two different workflows
Ecommerce accounts
The commercial question is simple. Does the placement produce profitable purchases, not just clicks?
Look at product category fit, basket quality, and whether users from that placement buy higher-margin products or only discounted items. A placement can look healthy in platform reporting while still dragging down gross margin if it mainly sells low-value lines.
Useful actions include:
- Separate proven placement clusters into dedicated campaigns when budget justifies it.
- Align creative to context so the message matches where the user saw the ad.
- Watch CPC against conversion value rather than treating cost in isolation.
Lead generation accounts
Lead gen needs a harsher filter because many placements can generate form fills that sales teams hate. The best placement isn't the one with the cheapest lead. It's the one that sends leads that progress.
Review lead quality by source if your CRM allows it. If not, at least compare placement groups against downstream signals such as qualification notes, booked calls, or sales feedback.
A placement that lowers platform CPA but worsens lead quality isn't efficient. It just shifts the problem downstream.
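Where a CRM export is available, the join is straightforward. This sketch assumes two hypothetical files keyed on a lead_id column, with a simple qualified flag; adapt both to whatever your CRM actually exports.

```python
import pandas as pd

# Hypothetical files: a placement-level lead export and a CRM export
# keyed on lead_id with a qualification outcome column.
leads = pd.read_csv("leads_by_placement.csv")   # lead_id, placement
crm = pd.read_csv("crm_outcomes.csv")           # lead_id, qualified (bool)

merged = leads.merge(crm, on="lead_id", how="left")

# Qualification rate per placement: the downstream signal that a
# cheap platform CPA can hide.
quality = merged.groupby("placement")["qualified"].mean().sort_values()
print(quality)
```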
A practical testing model
Testing new placements works best when it's controlled. Don't throw dozens of speculative sites into one campaign and hope the aggregate tells you something.
Use a compact cycle:
- Start with a shortlist based on audience fit or themes already converting.
- Keep creative variables tight so placement is the main difference.
- Review on commercial metrics, not vanity engagement.
- Promote winners into a stronger budget position only after they repeat.
That repeatability check is where discipline pays off. Many placements spike once and never again. The good ones keep showing intent over time.
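One way to make the repeatability check concrete is to count distinct converting weeks per placement from a week-segmented export. Column names and the three-week bar below are illustrative assumptions.

```python
import pandas as pd

# Assumes a placement export segmented by week, with hypothetical
# column names: placement, week, conversions.
df = pd.read_csv("placements_by_week.csv")

# Count how many distinct weeks each placement converted in.
weeks_converting = (
    df[df["conversions"] > 0].groupby("placement")["week"].nunique()
)

# One-week spikes stay on the watch list; repeat converters are the
# candidates worth a stronger budget position.
repeat_winners = weeks_converting[weeks_converting >= 3]
print(repeat_winners.sort_values(ascending=False))
```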
Automating Placement Management with Rules and Scripts
Manual reviews are fine when an account is small. Once campaign count grows, they stop being enough. Placements change too quickly, and repetitive clean-up starts stealing time from strategy.
Automation is what keeps the hybrid model practical. Use manual judgement for thresholds and exceptions, then let rules and scripts do the repetitive scanning.
Where automation helps first
Start with built-in automated rules in Google Ads. They're enough for many SME accounts.
Typical use cases include:
- Flagging weak placements for review when spend rises without conversions
- Pausing low-quality inventory once your agreed threshold is hit
- Surfacing strong performers so they don't get buried in broader reports
If the account is larger or the team wants more control, scripts become useful. They can scan placement data on a schedule and apply logic consistently across campaigns.
According to Google Ads Scripts documentation on large-scale URL management, using automation rules or scripts for repetitive management tasks can reduce manual time by up to 75%. That doesn't replace strategy. It creates space for it.
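Google Ads Scripts themselves run in JavaScript, so treat the sketch below as the logic only, shown in Python against an exported report: agreed thresholds applied consistently on a schedule, with humans reviewing the output file before anything gets excluded. All names and thresholds are hypothetical.

```python
import pandas as pd
from datetime import date

TARGET_CPA = 40.0   # hypothetical threshold, agreed with stakeholders

def scan(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the agreed thresholds consistently; humans review the output."""
    cpa = df["cost"] / df["conversions"].replace(0, float("nan"))
    flag_scale = cpa <= TARGET_CPA
    flag_review = cpa > 2 * TARGET_CPA
    flag_exclude = (df["clicks"] > 100) & (df["conversions"] == 0)

    out = df.copy()
    out["action"] = "none"
    out.loc[flag_scale, "action"] = "scale_candidate"
    out.loc[flag_review, "action"] = "review"
    out.loc[flag_exclude, "action"] = "exclude_candidate"  # strongest flag wins
    return out

if __name__ == "__main__":
    report = scan(pd.read_csv("placement_export.csv"))
    report.to_csv(f"placement_actions_{date.today()}.csv", index=False)
```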
What to automate and what to keep human
This split works well:
| Automate | Keep human |
|---|---|
| Routine checks against set thresholds | Final decisions on high-value placements |
| Daily or frequent monitoring | Judging brand suitability |
| Repeated reporting tasks | Interpreting lead quality and business context |
| Shared list maintenance | Deciding when to expand into new inventory |
Don't automate judgement calls that depend on nuance. A script can see clicks, cost, and conversions. It can't understand that a placement is strategically important because it reaches a niche trade audience your sales team values.
A realistic operating model for busy teams
A practical setup is weekly human review supported by daily automation. Let rules catch obvious issues, then use your scheduled review to approve exclusions, identify candidates for scaling, and check that automated actions still reflect business reality.
For businesses that need outside support without building this in-house, PPC Geeks is one option that provides PPC management across Google Ads, Microsoft/Bing, Facebook, Amazon and other platforms, including audit and optimisation support aligned to business objectives.
The main point is simpler than the tooling. Automation should handle account hygiene. Humans should handle prioritisation.
Measuring the Impact and Proving Your ROI
Placement optimization is only valuable if you can show what changed. Stakeholders don't care that you excluded a list of poor placements. They care whether spend got cleaner and results got stronger.
The easiest way to prove impact is with a before-and-after view. Keep the reporting tight and commercial.
The KPIs that matter
Track the few metrics that show business effect clearly:
- Spend removed from poor placements so waste reduction is visible
- Conversion rate change to show traffic quality improved
- CPA trend for lead generation accounts
- ROAS or revenue efficiency trend for ecommerce accounts
- Lead quality notes or sales feedback where available
If you need a cleaner framework for this, use a proper marketing ROI calculation approach rather than relying on platform metrics alone.
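A simple before-and-after split over a daily export is usually enough for this. The sketch below assumes hypothetical column names and an illustrative change date: the day your exclusions went live.

```python
import pandas as pd

# Hypothetical daily export with a date column; change_date is
# whenever the placement exclusions went live.
df = pd.read_csv("campaign_daily.csv", parse_dates=["date"])
change_date = pd.Timestamp("2024-03-01")   # illustrative

before = df[df["date"] < change_date]
after = df[df["date"] >= change_date]

def summarise(frame: pd.DataFrame) -> dict:
    """Roll a window up to the three KPIs stakeholders actually read."""
    return {
        "spend": frame["cost"].sum(),
        "conv_rate": frame["conversions"].sum() / frame["clicks"].sum(),
        "cpa": frame["cost"].sum() / frame["conversions"].sum(),
    }

print("Before:", summarise(before))
print("After: ", summarise(after))
```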
A reporting format stakeholders understand
Avoid dumping placement reports into a deck. Summarise what you changed, why you changed it, and what commercial effect followed.
A simple format works:
| Reporting line | What to include |
|---|---|
| Problem found | Low-quality placements consuming budget |
| Action taken | Exclusions, bid changes, tighter controls |
| Observed effect | Cleaner traffic, stronger conversion efficiency, better lead quality |
| Next move | Expand winning placements or tighten rules further |
This style keeps the focus on business outcome instead of PPC jargon.
Senior stakeholders rarely want every optimisation detail. They want confidence that budget is being protected and directed intelligently.
How to talk about wins without overselling
Be precise. If the data shows cleaner traffic and stronger efficiency, say that. If the result is mixed, say that too and explain what you're testing next.
Strong reporting language sounds like this:
- We removed budget from placements that were attracting clicks without commercial value.
- We concentrated spend on inventory that produced stronger conversion quality.
- We kept automated reach but added controls to reduce obvious waste.
Weak reporting sounds like this:
- “The campaign was optimised.”
- “Traffic quality improved significantly.”
- “The algorithm is learning.”
Those phrases are too vague to be trusted. Decision-makers need a chain of reasoning they can follow.
Placement optimization earns trust when it's reported as an operating discipline, not a one-off tidy-up. Audit the traffic, cut the waste, back the winners, automate the repetitive work, and show the commercial effect in plain English.
If you want a second pair of eyes on wasted spend, PPC Geeks can review your account structure, placement controls, tracking, and bidding setup, then show where a hybrid placement optimization approach could tighten efficiency without choking off growth.