Yan Gilbert
Moderator
- Joined
- Oct 15, 2016
- Messages
- 507
- Solutions
- 3
- Reaction score
- 589
Just bringing this up for discussion....
Many of us [SEOs] spend time going through citations to make sure that the NAP data is all the same. Honestly, I think this used to be much more important than it is now, as Google's algorithm has improved enough to handle minor differences and still correctly associate a citation with the business.
For example, one citation has the suite # while another doesn't, or call tracking is used so the phone number differs across listings.
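To make that concrete, here's a rough sketch of the kind of normalization an algorithm (or an audit script) might do before comparing two citations — stripping suite numbers and phone formatting so minor differences don't break the match. This is purely illustrative; it's not any directory's or Google's actual logic:

```python
import re

def normalize_nap(name, address, phone):
    """Illustrative NAP normalization: lowercases text, drops suite/unit
    designators from the address, and strips all phone formatting."""
    name = name.lower().strip()
    # Remove suite/unit/apt designators and the token after them.
    address = re.sub(r'\b(suite|ste\.?|unit|apt\.?|#)\s*\w+', '', address.lower())
    # Collapse punctuation and extra whitespace.
    address = re.sub(r'[^a-z0-9]+', ' ', address).strip()
    # Keep only the digits of the phone number.
    phone = re.sub(r'\D', '', phone)
    return (name, address, phone)

a = normalize_nap("Acme Plumbing", "123 Main St, Suite 4", "(555) 123-4567")
b = normalize_nap("Acme Plumbing", "123 Main St", "555.123.4567")
# Same business despite the suite # and phone formatting differences.
print(a == b)  # → True
```

Note that once the phone number is actually *different* (a fresh forwarding number, not just different formatting), this kind of check no longer flags the listings as duplicates — which is exactly the loophole the technique below exploits.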
Getting rid of duplicates was also on the to-do list, but I have come across a rather not-so-white-hat citation-building technique where duplicate citations are created on purpose to feed the algorithm.
I'd rather not link to the page, but basically duplicate listings are created on purpose within the same directory. The address is the actual address (maybe with a different suite #), but instead of the proper business name, a keyword-stuffed name is used. A different forwarding number is used to bypass any duplicate checking the directory might perform. The link points to the business's normal website (homepage, service page, location page).
If this actually works, I would say the algorithm has gotten almost too good at associating a citation with a business, but not smart enough to filter out spammy duplicates.
Any thoughts?