Posted by TheMozTeam

Google is all about serving up results based on your precise location, which means there’s no such thing as a “national” SERP anymore. So, if you wanted to get an accurate representation of how you’re performing nationally, you’d have to track every single street corner across the country.

Not only is that unfeasible, it's also expensive — the kind of nightmare that keeps your accounting team up at night. Because we’re in the business of making things easy, we devised a happier (and cost-efficient) alternative.

Follow along and learn how to set up a statistically robust national tracking strategy in STAT, no matter your business or budget. And while we’re at it, we’ll also show you how to calculate your national ranking average.

Let’s pretend we’re a large athletic retailer. We have 30 stores across the US, a healthy online presence, and the powers-that-be have approved extra SEO spend — money for 20,000 additional keywords is burning a hole in our pocket. Ready to get started?

Step 1: Pick the cities that matter most to your business

Google cares a lot about location and so should you. Tracking a country-level SERP isn’t going to cut it anymore — you need to be hyper-local if you want to nab results.

The first step to getting more granular is deciding which cities you want to track in — and there are lots of ways to choose: Your top performers? Cities that could use a boost? Your best and worst performers online as well as in-store?

When it comes time for you to choose, nobody knows your business, your data, or your strategy better than you do — ain’t nothing to it but to do it.

A quick note for all our e-commerce peeps: we know it feels strange to pick a physical place when your business lives entirely online. For this, simply go with the locations that your goods and wares are distributed to most often.

Even though we’re a retail powerhouse, our SEO resources won’t allow us to manage all 30 physical locations — plus our online hotspots — across the US, so we'll cut that number in half. And because we’re not a real business and we aren’t privy to sales data, we'll pick at random.

From east to west, we now have a solid list of 15 US cities, primed, polished, and poised for our next step: surfacing the top performing keywords.

Step 2: Uncover your money-maker keywords

Because not all keywords are created equal, we need to determine which of the 4,465 keywords that we’re already tracking are going to be spread across the country and which are going to stay behind. In other words, we want the keywords that bring home the proverbial bacon.

Typically, we would use some combination of search volume, impressions, clicks, conversion rates, etc., from sources like STAT, Google Search Console, and Google Analytics to distinguish between the money-makers and the non-money-makers. But again, we’re a make-believe business and we don’t have access to this insight, so we’re going to stick with search volume.

A right-click anywhere in the site-level keywords table will let us export our current keyword set from STAT. We’ll then order everything from highest search volume to lowest search volume. If you have eyeballs on more of that sweet, sweet insight for your business, order your keywords from most to least money-maker.

Because we don’t want to get too crazy with our list, we’ll cap it at a nice and manageable 1,500 keywords.
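If spreadsheets aren’t your thing, a few lines of code will do the same job. Here’s a minimal sketch in Python — the file name and the “Keyword”/“Search Volume” column names are just assumptions, so match them to whatever your STAT export actually calls them:

```python
import pandas as pd

# Load the keyword export from STAT (file and column names are assumptions --
# swap in whatever your export actually uses).
keywords = pd.read_csv("stat_keyword_export.csv")

# Order from highest to lowest search volume, then keep a manageable top 1,500.
top_keywords = (
    keywords.sort_values("Search Volume", ascending=False)
            .head(1500)
            .reset_index(drop=True)
)

top_keywords.to_csv("money_maker_keywords.csv", index=False)
```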

Step 3: Determine the number of times each keyword should be tracked

We may have narrowed our cities down to 15, but our keywords need to be tracked plenty more times than that — and at a far more local level.

True facts: A “national” (or market-level) SERP isn’t a true SERP and neither is a city-wide SERP. The closer you can get to a searcher standing on a street corner, the better, and the more of those locations you can track, the more searchers’ SERPs you’ll sample.

We’re going to get real nitty-gritty and go as granular as ZIP code. Addresses and geo coordinates work just as well though, so if it’s a matter of one over the other, do what the Disney princesses do and follow your heart.

The ultimate goal here is to track our top performing keywords in more locations than our poor performing ones, so we need to know the number of ZIP codes each keyword will require. To figure this out, we gotta dust off the old desktop calculator and get our math on.

First, we’ll calculate the total search volume that all of our keywords generate. Then, we’ll find the percentage of said total that each keyword is responsible for.

For example, our keyword [yeezy shoes] drew 165,000 searches out of a total 28.6 million, making up 0.62 percent of our total search volume.

A quick reminder: Every time a query is tracked in a distinct location, it’s considered a unique keyword. This means that the above percentages also determine the share of our budgeted keywords (and therefore locations) that we’ll award to each of our queries. In (hopefully) less confusing terms, a keyword that drives 0.62 percent of our search volume gets to use 0.62 percent of our 20,000 budgeted keywords, which in turn equals the number of ZIP codes we can track it in. Phew.
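For anyone who’d rather script that budget math than spreadsheet it, here’s a rough sketch of the idea — again, the file and column names are assumptions, not STAT’s actual export format:

```python
import pandas as pd

keywords = pd.read_csv("money_maker_keywords.csv")  # the 1,500 keywords from step 2
KEYWORD_BUDGET = 20_000  # the extra keywords we have to spend

# Each keyword's share of the total search volume...
keywords["share"] = keywords["Search Volume"] / keywords["Search Volume"].sum()

# ...doubles as its share of the keyword budget, which is the number of
# ZIP codes it gets tracked in. E.g. 0.62 percent of 20,000 is roughly 124 locations.
keywords["locations"] = (keywords["share"] * KEYWORD_BUDGET).round().astype(int)

print(keywords[["Keyword", "Search Volume", "share", "locations"]].head())
```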

But! Because search volume is, to quote our resident data analyst, “an exponential distribution,” (which in everyone else-speak means “gets crazy large”) it’s likely going to produce some unreasonably big numbers. So, while [yeezy shoes] only requires 124 ZIP codes, a keyword with much higher search volume, like [real madrid], might need over 1,000, which is patently bonkers (and statistical overkill).

To temper this, we highly recommend that you take the log of the search volume — it’ll keep things relative and relational. If you’re working through all of this in Excel, simply type =LOG(A2), where A2 is the cell containing the search volume. Because we're extra fancy, we'll multiply that by four to linearly scale things, so =LOG(A2)*4.

So, still running with our Yeezy example, our keyword goes from driving 0.62 percent of our search volume to 0.13 percent. That share then becomes its cut of the budgeted keywords: 0.0013 x 20,000 = 26, which means we’ll track [yeezy shoes] in 26 ZIP codes across our 15 cities.
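The same log-scaled allocation, sketched in Python (numpy’s log10 matches the base-10 log that Excel’s =LOG() gives you by default; file and column names are still assumptions):

```python
import numpy as np
import pandas as pd

keywords = pd.read_csv("money_maker_keywords.csv")  # assumed columns as before
KEYWORD_BUDGET = 20_000

# Log-scale the search volume to tame the exponential distribution, then scale by 4.
keywords["log_volume"] = np.log10(keywords["Search Volume"]) * 4

# Recompute each keyword's share against the log-scaled total and turn it into
# a ZIP code count. (The *4 cancels out once you take the share -- it's just
# there to keep the spreadsheet numbers easy to eyeball.)
keywords["share"] = keywords["log_volume"] / keywords["log_volume"].sum()
keywords["zip_codes"] = (keywords["share"] * KEYWORD_BUDGET).round().astype(int)
```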

Next, we pulled together a list of every ZIP code in each of our cities so we’d have locations to dole those keywords out to.

The end. Sort of. At this point, like us, you may be looking at keywords that need to be spread across 176 different ZIP codes and wondering how you're going to choose which ZIP codes — so let our magic spreadsheet take the wheel. Add all your locations to it and it'll pick at random.

Of course, because we want our keywords distributed according to search interest rather than sheer ZIP code count, we attached a weight to each of our ZIP codes. We took our most searched keyword, [adidas], found its Google Trends score in every city, and then divided that score by the number of ZIP codes in the city. For example, if [adidas] received a score of 71 in Yonkers and there are 10 ZIP codes in the city, each Yonkers ZIP code would get a weight of 7.1.

We'll then add everything we have so far — ZIP codes, ZIP code weights, keywords, keyword weights, plus a few extras — to our spreadsheet and watch it randomly assign the appropriate number of keywords to the appropriate number of locations.
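If you'd rather script the draw than lean on the spreadsheet, here’s one way the weighted random assignment could look — the file names, column names, and seed are all illustrative assumptions:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)  # seed it so the draw is repeatable

# Assumed inputs: one file of ZIP codes with their city and weight
# (Trends score / number of ZIPs in that city), and one file of keywords
# with the ZIP code counts we calculated in step 3.
zips = pd.read_csv("zip_codes.csv")               # columns: zip, city, weight
keywords = pd.read_csv("keyword_zip_counts.csv")  # columns: Keyword, zip_codes

# Turn the ZIP code weights into probabilities for the weighted draw.
probs = (zips["weight"] / zips["weight"].sum()).to_numpy()

rows = []
for _, kw in keywords.iterrows():
    # Draw this keyword's allotted ZIP codes at random, no repeats,
    # favouring the heavier-weighted ZIPs. (Assumes the allotment never
    # exceeds the number of ZIP codes available.)
    chosen = rng.choice(zips["zip"].to_numpy(), size=int(kw["zip_codes"]),
                        replace=False, p=probs)
    rows.extend({"Keyword": kw["Keyword"], "ZIP": z} for z in chosen)

assignments = pd.DataFrame(rows)
assignments.to_csv("keyword_zip_assignments.csv", index=False)
```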

And that’s it! If you’ve been following along, you’ve successfully divvied up 20,000 keywords in order to create a statistically robust national tracking strategy!

Curious how we’ll find our national ranking average? Read on, readers.

Step 4: Segment, segment, segment!

20,000 extra keywords makes for a whole lotta new data to keep track of, so being super smart with our segmentation is going to help us make sense of all our findings. We’ll do this by organizing our keywords into meaningful categories before we plug everything back into STAT.

Obviously, you are free to sort how you please, but we recommend at least tagging your keywords by their city and product category (so [yeezy shoes] might get tagged “Austin” and “shoes”). You can do all of this in our keyword upload template or while you're in our magic spreadsheet.
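Here’s a rough sketch of how that tagging step might be scripted — the category lookup is deliberately simplistic and the file and column names are assumptions, so adapt it to your own taxonomy and to STAT’s upload template:

```python
import pandas as pd

assignments = pd.read_csv("keyword_zip_assignments.csv")   # from step 3
zip_lookup = pd.read_csv("zip_codes.csv")[["zip", "city"]]  # assumed ZIP-to-city lookup

# A deliberately crude category guess from the keyword text -- swap in
# whatever product taxonomy your business actually uses.
def product_category(keyword: str) -> str:
    return "shoes" if "shoes" in keyword.lower() else "clothes"

# Attach a city tag and a product category tag to every keyword/location pair.
upload = assignments.merge(zip_lookup, left_on="ZIP", right_on="zip")
upload["city_tag"] = upload["city"]
upload["category_tag"] = upload["Keyword"].map(product_category)

upload.to_csv("stat_upload.csv", index=False)
```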

Once you’ve added a tag or two to each keyword, stuff those puppies into STAT. When everything’s snug as a bug, group all your city tags into one data view and all your product category tags into another.

Step 5: Calculate your national ranking average

Now that all of our keywords are loaded and tracking in STAT, it’s time to tackle those ranking averages. To do that, we’ll simply pop on over to the Dashboard tab from either of our two data views.

A quick glimpse of the Average Ranking module in the Daily Snapshot gives us, well, our average rank, and because these data views contain every keyword that we’re tracking across the country, we’re also looking at the national average for our keyword set. Easy-peasy.

To see how each tag is performing within those data views, a quick jump to the Tags tab breaks everything down and lets us compare the performance of a segment against the group as a whole.

So, if our national average rank is 29.7 but our Austin keywords have managed an average rank of 27.2, then we might look to them for inspiration as our other cities aren't doing quite as well — our keywords in Yonkers have an average rank of 35.2, much worse than the national average.
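If you ever want to sanity-check those dashboard numbers against a raw ranking export, a quick groupby does the same math — the export format shown here is an assumption, not STAT’s actual output:

```python
import pandas as pd

# Assumed export: one row per tracked keyword with its current rank and city tag.
ranks = pd.read_csv("ranking_export.csv")   # columns: Keyword, Rank, City

national_average = ranks["Rank"].mean()
city_averages = ranks.groupby("City")["Rank"].mean().sort_values()

print(f"National average rank: {national_average:.1f}")
print(city_averages)  # compare, say, Austin and Yonkers against the national figure
```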

Similarly, if our clothes keywords are faring infinitely worse than our other product categories, we may want to revamp our content strategy to even things out.

Go get your national tracking on

Any business — yes, even an e-commerce business — can leverage a national tracking strategy. You just need to pick the right keywords and locations.

Once you have access to your sampled population, you’ll be able to home in on opportunities, up your ROI, and bring more traffic across your welcome mat (physical or digital).

Got a question you’re dying to ask us about the STAT product? Reach out to clientsuccess@getSTAT.com. Want a detailed walkthrough of STAT? Say hello (don’t be shy) and request a demo.
