Dec 28, 2018
Hi there!

I just began a set of experiments to study a bit more closely how Google handles local queries in various configurations.

Coming from general SEO, there are some points I haven't covered in depth yet, and I'd like to take advantage of this great community and its experience.

The initial results I got through the first quick tests were encouraging and surprising. However I need to be sure I didn't miss any detail which could bias the results.

I'm analyzing any form of search result that involves displaying Google Maps. I'm leaving out "keyword + location" queries for now, since they would require too many resources to obtain reliable results. I've sampled thousands of points at various scales already. The goal is to get hundreds of thousands of samples (even more if possible) to try to obtain some useful data (I'm no big data expert).

So far all the tests are performed logged out (with a systematic erasing of all cookies and a session restart after collecting the data for each sample) to avoid any user bias (Google also collects the MAC address and IMEI, but I think their impact, on these results at least, is not significant). In parallel I'll try to see how Google handles logged-out users (which revolves mainly around the IP).
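For what it's worth, here is a minimal, stdlib-only sketch of that session isolation. A real crawl would drive a headless browser, but the idea is the same: build a brand-new opener with an empty cookie jar for every sample point, so nothing carries over between samples. The function name is mine.

```python
# Session isolation sketch: one throwaway cookie jar per sample point,
# so no state from the previous sample can bias the next one.
import urllib.request
from http.cookiejar import CookieJar

def fresh_opener():
    """Return a (opener, jar) pair backed by an empty, throwaway cookie jar."""
    jar = CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    return opener, jar

opener, jar = fresh_opener()
print(len(jar))  # 0 cookies at the start of every sample
```

Each call returns a completely independent jar, which is the property the protocol above relies on.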

The idea is to study the balance between prominence and location while keeping the user-history bias away (we'll play with that a bit later, since it's important to understand how Google sorts its data before ours (user history) comes into play).

The tests will be performed through automated processes on both mobile and desktop.

So far I've only performed some quick tests over just a few thousand samples, and I'm a bit surprised by what I found. These early tests were done through just two types of queries (desktop so far):
  • simple URL parameter in Google Maps: google.com/maps/search/KEYWORD/@LATITUDE,LONGITUDE,ZOOMz > get the first 20 results
  • geolocalized search in Google Maps > click on geolocation button > feed Google maps with the coordinates of the sample point > add keyword in the search field > get the first 20 results
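The first method can be automated with a small URL builder. The `/maps/search/KEYWORD/@lat,lng,zoomz` pattern matches what Google Maps itself puts in the address bar; the helper function name and the seven-decimal formatting are my own choices.

```python
# Hypothetical helper: build a Google Maps search URL for one sample point.
from urllib.parse import quote

def maps_search_url(keyword: str, lat: float, lng: float, zoom: int) -> str:
    """Return a Google Maps search URL centered on (lat, lng) at a zoom level."""
    return (
        "https://www.google.com/maps/search/"
        f"{quote(keyword)}/@{lat:.7f},{lng:.7f},{zoom}z"
    )

print(maps_search_url("event agency", 48.5734053, 7.7521113, 12))
# https://www.google.com/maps/search/event%20agency/@48.5734053,7.7521113,12z
```

Feeding each grid point through this gives one fetchable URL per sample.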
Both were performed over a small metropolitan area (300k inhabitants) and its sphere of influence, for "event agency" (in French), where local businesses have really low prominence (almost no one manages their Google My Business, few reviews, most websites have little to no authority), and where much of the business in this kind of industry still happens outside of Google's hegemony.

If you align both layers you get quite different results (I won't provide exact numbers yet, since you can't draw conclusions from such a small sample): something like 90% of samples have a different #1 result depending on the search method.
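That divergence figure is easy to compute once both result layers are collected. A minimal sketch, where the field names (`url_top20`, `geo_top20`) are assumptions about how the samples are stored:

```python
# For each sample point, check whether the #1 result differs between the
# URL-parameter method and the geolocalized method, and return the share.
def share_of_diverging_top1(samples):
    """samples: list of dicts with 'url_top20' and 'geo_top20' result lists."""
    diverging = sum(
        1 for s in samples
        if s["url_top20"] and s["geo_top20"]
        and s["url_top20"][0] != s["geo_top20"][0]
    )
    return diverging / len(samples)

demo = [
    {"url_top20": ["Agency A", "Agency B"], "geo_top20": ["Agency C", "Agency A"]},
    {"url_top20": ["Agency A"], "geo_top20": ["Agency A"]},
]
print(share_of_diverging_top1(demo))  # 0.5
```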

Surprisingly, the URL-parameter query seems to be less influenced by prominence than geolocalized results. I've seen strong brands (OK domain authority, a decent number of reviews) from bigger metropolitan centers (> 1 million inhabitants, 150+ km away from the sample) make it to the first position. I would have thought that when Google believes I'm actually on the spot, it would serve me more local companies.

You don't need to call an event manager based 150 km away to get a band for your mother's 100th birthday. Or that could tell us something about search intent: most searchers are B2B clients who are OK with paying more to celebrate the 20th anniversary of their company if they can get services from the bigger agencies that the local offer can't satisfy. Other keywords and areas will tell us more.

I'd like to sort out a few points before I go any further in my tests, though:
  • have you noticed any fundamental difference between the Google Maps URL query cited above and loading Google Maps > "search in this area"? Both seem to work the same way, right? They're completely unbundled from the user's location (be it IP or GPS coordinates). Do we agree on that?
  • have you noticed a correlation between opening hours and the results? If I were a Google project manager, I'd arrange the results based on whether the business is closed, at least for the types of queries implying an action within a short timeframe (typically "restaurant", on mobile, at 11 pm, searched in front of the exit door of a theater). Would it be worth performing the same search (for different industries) at different times of the day to see if there's a correlation?
  • I'll take any hint or trick to reduce user bias as much as possible, if you have any
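The opening-hours question above could be tested by tagging every collected sample with the local hour it was fetched at, then comparing rankings per time bucket. A minimal sketch, where the `fetched_at` and `top20` field names are assumptions about the sample format:

```python
# Group samples by hour of day so rankings can be compared across buckets
# (e.g. does the #1 result change between 11 am and 11 pm?).
from collections import defaultdict
from datetime import datetime

def bucket_by_hour(samples):
    """Return {hour_of_day: [top20 result lists fetched at that hour]}."""
    buckets = defaultdict(list)
    for s in samples:
        hour = datetime.fromisoformat(s["fetched_at"]).hour
        buckets[hour].append(s["top20"])
    return dict(buckets)

demo = [
    {"fetched_at": "2018-12-28T11:05:00", "top20": ["A", "B"]},
    {"fetched_at": "2018-12-28T23:10:00", "top20": ["B", "A"]},
]
print(sorted(bucket_by_hour(demo)))  # [11, 23]
```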
I'll be glad to share my results publicly if there's any interest. Don't expect them within the next few days though, since I'm working with my own means and, I repeat, I'm no data scientist.

Still, I think metadata is a decent way to tackle Google's algorithms.

But remember : correlation is not causation :)
I love experiments that test Google algorithms, so I'm really interested in what you're doing. I'm having a hard time understanding what you're testing, though. I think it might help if you could give some examples of what you mean by:
  • simple URL parameter in Google Maps: google.com/maps/search/KEYWORD/@LATITUDE,LONGITUDE,ZOOMz > get the first 20 results
  • geolocalized search in Google Maps > click on geolocation button > feed Google maps with the coordinates of the sample point > add keyword in the search field > get the first 20 results
I'm not following what the variables are in these tests or what the input data is.
I didn't pay attention to the auto-formatting of links here, sorry.

The input data is a point grid evenly spaced at various intervals (100 m, 1,000 m, 10,000 m so far) depending on the scale, projected onto the map in the appropriate projection.
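For what it's worth, such a grid can be sketched in plain Python. This version uses an equirectangular approximation instead of a proper projected CRS, which is fine at city scale; the function name and the rounding are my own choices.

```python
# Generate sample points roughly step_m metres apart over a bounding box.
import math

def point_grid(lat_min, lng_min, lat_max, lng_max, step_m):
    """Return (lat, lng) points on a regular grid spaced ~step_m metres apart.

    Uses a simple equirectangular approximation; a production grid would be
    built in a projected CRS and converted back to WGS84.
    """
    dlat = step_m / 111_320.0  # metres per degree of latitude
    points = []
    lat = lat_min
    while lat <= lat_max:
        # metres per degree of longitude shrink with latitude
        dlng = step_m / (111_320.0 * math.cos(math.radians(lat)))
        lng = lng_min
        while lng <= lng_max:
            points.append((round(lat, 6), round(lng, 6)))
            lng += dlng
        lat += dlat
    return points

pts = point_grid(48.0, 7.0, 48.01, 7.01, 1000)
print(len(pts))  # 2 points at 1 km spacing over this small box
```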

As for understanding the algorithm, it's pretty much like Plato's allegory of the cave. If you see that a specific business covers more ground on the map than the rest, you just have to pay more attention to what makes it stronger than the others (be it inherent to the business or caused by the context, i.e. the competition and/or the area: reigning without contest over corn fields is easier than in the middle of Manhattan).
Here's an example at the 10 km scale for the #1 position. 1 color = 1 business
I am trying to understand the difference between the two searches. Can you be more specific for this set of instructions:
geolocalized search in Google Maps > click on geolocation button > feed Google maps with the coordinates of the sample point > add keyword in the search field

Are you ensuring the zoom level is the same?
The geolocation button method is indeed useless. It places the centroid on the coordinates, but you don't have any control over the zoom level. So the difference I talked about between the two methods is normal.

And by extrapolation, it actually means that Google Maps search is pretty basic when it comes to how it fetches results (I thought it was more complex). The way it gets the geolocation doesn't really matter (be it through GPS/IP/Wi-Fi/direct input of coordinates/input of an address + georeferencing).

Thank you for pointing this out. That's where my lack of experience with google maps shows. Your question actually means a lot to me :)

Here's my thinking out loud; there are a lot of obvious things here, but what matters is how they interact and in which order.

So basically, Google Maps needs a centroid and a zoom level.

There's however a difference in the results depending on how you notify Google of your location (that's why I was confused).

Perform a regular Google Maps search (desktop, geolocated) and compare it to the right-click > "search from that point" results. The latter automatically triggers the Directions API.

That's because Google knows you're probably searching for something before getting to your destination. Mobility search is a different beast from static search. Getting search intent right is vital for an advertising company such as Google, since it leads to better conversion.

Which implies you need different reporting and a different strategy for businesses with a front desk. And it should be studied separately, since location and prominence seem to be handled differently.

Maybe Google is telling you clearly the successive steps its algorithm uses here:
  1. Relevance: it needs to understand what the user is looking for. It uses user input, and can also process that input and add more granularity to the query thanks to the user's known history. If you keep the user-history bias, this is where you can learn what Google understands of the search intent and how it modulates the results depending on the topic.
  2. Location: this is what we talked about above. Locate pertinent results around a centroid. A first step could be (for each zoom level) to understand which range it initially uses to fetch results and how far it widens that range if no results are available.
  3. Prominence: sort the results by Google's perceived popularity of the POIs. Once you can get an idea of how the algorithm fetches data, you can then analyze how it sorts the data. That would require a prominence audit of each fetched POI and a check of the balance between them for each sampling point.
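The three steps can be made concrete with a toy model. This is purely to make the ordering explicit, not Google's actual algorithm: the category filter, the radius-doubling fallback, and the single `prominence` score are all assumptions.

```python
# Toy relevance > location > prominence pipeline for ranking POIs.
import math

def haversine_km(lat1, lng1, lat2, lng2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(math.radians(lng2 - lng1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def rank(pois, category, lat, lng, radius_km):
    # 1. Relevance: keep only POIs matching the understood query category.
    candidates = [p for p in pois if p["category"] == category]
    # 2. Location: fetch within a radius of the centroid, widening if empty.
    local = []
    while not local and radius_km <= 500:
        local = [p for p in candidates
                 if haversine_km(lat, lng, p["lat"], p["lng"]) <= radius_km]
        radius_km *= 2
    # 3. Prominence: sort by a perceived-popularity score.
    return sorted(local, key=lambda p: p["prominence"], reverse=True)

demo = [
    {"name": "Local", "category": "event agency", "lat": 48.60, "lng": 7.76, "prominence": 3},
    {"name": "Big",   "category": "event agency", "lat": 48.30, "lng": 7.50, "prominence": 9},
    {"name": "Shop",  "category": "florist",      "lat": 48.59, "lng": 7.75, "prominence": 8},
]
print([p["name"] for p in rank(demo, "event agency", 48.58, 7.75, 5)])
# ['Local']  -- the distant "Big" agency is outside the 5 km fetch radius
```

Widening the radius (e.g. to 50 km) brings the more prominent distant agency in and it jumps to #1, which is exactly the balance between steps 2 and 3 the experiment tries to measure.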
