
The content audit that actually moves rankings

How to run a real content audit that produces ranking changes, instead of the spreadsheet exercise most teams call an audit and shelve.

Most content audits are built to fail. A team commissions one, an agency or a freshly hired analyst spends a few weeks building a giant spreadsheet, the spreadsheet gets presented in a meeting where everyone agrees it is impressive, and then nothing changes. The audit goes in a drive and the team continues writing as before. A year later, somebody else suggests an audit and the cycle repeats.

The audit that actually moves rankings looks different from the spreadsheet exercise. It is shorter, more opinionated, more aggressive, and it produces a list of decisions rather than a list of observations. This is the working version.

Decide what the audit is for

The most common reason audits fail is that they were never scoped to a decision. An audit whose brief is "let's see what we have" produces a tour of the archive that is mildly interesting and operationally useless. An audit scoped to a specific decision produces actionable work.

A useful audit is scoped to one of three decisions. The team is going to consolidate. The team is going to refresh. The team is going to kill. Most archives need a mix of all three, but each piece in the archive ends up in exactly one bucket. Naming the decision before the audit starts changes how the work is done and dramatically increases the chance that anything happens after.

If the team cannot articulate which of the three is the goal, the audit is premature. Spend the meeting deciding that question instead, then come back to the audit with a real scope.

Pull a small set of inputs

The audit does not need a full data warehouse export. It needs a small set of inputs, pulled cleanly, and put in one place.

The first input is search performance per page. Impressions, clicks, position, and the queries the page actually ranks for. Pulled from Google Search Console for the last twelve months. This tells you what each page is actually doing in search, which is usually different from what the team thought the page was doing when they wrote it.
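As a concrete sketch of that pull: assuming you already have authorized credentials in a `creds` object and a verified Search Console property, the per-page numbers come out of the Search Analytics endpoint. The site URL and date range below are placeholders.

```python
# Sketch: twelve months of per-page search performance from the
# Search Console API. Assumes `creds` is an authorized google-auth
# credentials object with the webmasters.readonly scope.
from googleapiclient.discovery import build

service = build("searchconsole", "v1", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="https://example.com/",  # placeholder property
    body={
        "startDate": "2024-01-01",   # placeholder twelve-month window
        "endDate": "2024-12-31",
        "dimensions": ["page"],      # one row per URL
        "rowLimit": 5000,
    },
).execute()

# Each row carries keys=[page], clicks, impressions, ctr, position.
pages = {
    row["keys"][0]: {
        "clicks": row["clicks"],
        "impressions": row["impressions"],
        "position": row["position"],
    }
    for row in response.get("rows", [])
}
```

A second pull with dimensions ["query", "page"] gives the per-query detail the consolidation step further down will use.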

The second input is engagement per page. Time on page, scroll depth, and bounce rate, plus the conversion event that matters most for the team. Pulled from analytics for the same period. This tells you whether the readers who arrive at a page actually engage with it.

The third input is internal linking per page. The number of internal links pointing to each page, and the number going out from it. Pulled from a crawler. This tells you which pages the rest of your site is voting for and which are isolated.

The fourth input is link profile per page, if you have access to a backlink tool. The number and quality of external links to each page. This tells you which pages have earned authority from the rest of the internet.

These four inputs, joined per page, give you the working dataset. Most teams build audits with twenty more columns than this and never use them. Resist.
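A minimal way to do the join, assuming the engagement, crawl, and backlink inputs are sitting in CSV exports (the file and column names here are hypothetical, not the output of any particular tool), is a few pandas merges keyed on URL:

```python
# Sketch: join the four inputs into one working dataset, one row
# per page. File names and column names are hypothetical; match
# them to whatever your analytics, crawler, and backlink tools export.
import pandas as pd

search = (
    pd.DataFrame.from_dict(pages, orient="index")  # `pages` from the pull above
    .rename_axis("url")
    .reset_index()
)
engagement = pd.read_csv("analytics_export.csv")  # url, time_on_page, bounce_rate, conversions
links = pd.read_csv("crawler_export.csv")         # url, inlinks, outlinks
backlinks = pd.read_csv("backlink_export.csv")    # url, referring_domains

audit = (
    search
    .merge(engagement, on="url", how="left")
    .merge(links, on="url", how="left")
    .merge(backlinks, on="url", how="left")
)
```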

Apply the four-bucket cut

With the dataset in hand, every page in the archive ends up in one of four buckets.

Performers. Pages that rank, get clicks, and convert. The work on these is to keep them current and to push internal links in their direction. They are the model for the next pages the team writes.

Sleepers. Pages that rank for queries with intent but get fewer clicks than they should because the title or meta is weak, or because the page itself does not match the query well enough. The work on these is targeted refresh. Often a thirty-minute change to the title and intro produces a meaningful click lift.

Stranded. Pages that have decent search visibility for a query the team did not intend, or that have engagement but no rankings. These are usually salvageable with rewriting and internal linking. The work on these is more substantial than a refresh but less than a rewrite.

Dead. Pages that have no rankings, no clicks, and no engagement, and that have not had any of those for at least six months. The work on these is to consolidate them into a single better page or to remove them entirely. Most teams cannot bring themselves to delete content. Most archives benefit substantially when the team finally does.

Each page goes in exactly one bucket. The bucket determines the next action.
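As a sketch of the cut itself, applied to the joined dataset from above: the thresholds here are illustrative placeholders, not recommendations, and should be tuned against your own archive.

```python
# Sketch: assign each page to exactly one of the four buckets.
# Thresholds are illustrative placeholders; tune them per archive.
def bucket(row) -> str:
    ranks = row["position"] < 20 and row["impressions"] > 100
    clicked = row["clicks"] > 10
    engaged = row.get("time_on_page", 0) > 60 or row.get("conversions", 0) > 0

    if ranks and clicked and engaged:
        return "performer"
    if ranks and not clicked:
        return "sleeper"    # visible in search, weak title/meta or query match
    if ranks or clicked or engaged:
        return "stranded"   # partial signals, salvageable with rewriting and links
    return "dead"           # no rankings, no clicks, no engagement

audit = audit.fillna(0)     # pages missing from an export count as zero
audit["bucket"] = audit.apply(bucket, axis=1)
```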

Make the consolidation decisions explicit

The work that produces the largest ranking shifts is consolidation, and it is the work most teams skip.

Consolidation looks like this. Three thin posts on the same topic, none of which rank well, get combined into one substantial post that does. The new post lives at the URL that already has the most authority. The other two get redirected to it. The internal links from the rest of the site get pointed at the new URL. Within a few weeks, the consolidated page begins to rank for queries none of the three originals could.

This works because search engines are voting for the strongest article on a topic, not the most articles on a topic. A team that has written six posts on the same topic over the years has divided its own authority into six pieces and confused both search engines and readers. Consolidating those six into one cleans up the confusion and concentrates the signals.
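One way to surface those divided-authority cases, assuming that second Search Console pull with dimensions ["query", "page"] over the same window (the `rows` variable and the threshold of three are assumptions for illustration), is to count how many of your own pages compete for each query:

```python
# Sketch: find queries where several of your own pages split the
# impressions. Assumes `rows` is a Search Console pull made with
# dimensions=["query", "page"] over the same twelve-month window.
from collections import defaultdict

pages_per_query = defaultdict(set)
for row in rows:
    query, page = row["keys"]
    pages_per_query[query].add(page)

# Queries where three or more pages compete are the first
# candidates to review by hand before merging anything.
candidates = {
    query: sorted(urls)
    for query, urls in pages_per_query.items()
    if len(urls) >= 3
}

for query, competing in sorted(candidates.items()):
    print(query, "->", competing)
```

The output is a review list, not a merge list. Which URL keeps the authority, and which get redirected into it, is still an editorial decision.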

A typical archive of two hundred pages will have ten to thirty consolidation candidates. The team that does the work tends to see meaningful ranking improvements within a quarter on the consolidated pages. The team that does not, does not.

Schedule the work, not the analysis

The audit produces a backlog of named, prioritized work. Refresh these specific pages. Consolidate these specific groups. Remove these specific URLs. Without that backlog, the audit is a presentation. With it, the audit becomes the basis of the next quarter of work.

The mistake teams make at this step is treating the backlog as advisory. It is not. The audit work, once identified, is more valuable than most of the new content the team would otherwise produce in the same quarter. Refresh and consolidation work tends to produce ranking and traffic gains that comparable new content would not.

The right framing is that the audit work is a real workstream that competes with new production. In most quarters following the audit, the audit workstream should win a meaningful share of the team's capacity. Teams who put the audit on the side, to be done when there is time, find that there is never time. Teams who put it on the calendar, with named owners and named deadlines, ship the work and see the results.

When to revisit

A real audit, run well, produces twelve to eighteen months of follow-on work. The next audit makes sense roughly a year later, after the work from the first one has compounded and the archive has grown enough to warrant another look.

Teams who run audits more frequently than that tend to be running them as a substitute for editorial decision-making rather than as a complement to it. The audit is a periodic correction, not an operating mode. If you are auditing constantly, the audit has become a procrastination strategy.

The teams who get this right run a real audit roughly once a year, do the work it produces, and let the rest of the time be spent on writing, distribution, and the other parts of the system. The teams who get it wrong run quarterly audits that nobody acts on, and the rankings continue to slowly drift in the direction they were already drifting.

A real audit changes the work. A spreadsheet audit changes a meeting. The difference is not the analyst doing the audit. It is whether the team is set up to act on what the audit produces.