Google’s Gary Illyes recently highlighted a recurring SEO problem on LinkedIn, echoing concerns he’d previously voiced on a Google podcast.
The issue? URL parameters create difficulties for search engines when they crawl websites.
This problem is especially challenging for big sites and online stores. When different parameters are added to a URL, it can result in numerous unique web addresses that all lead to the same content.
This can hinder search engines' ability to crawl and index sites efficiently.
The URL Parameter Conundrum
In both the podcast and LinkedIn post, Illyes explains that URLs can accommodate infinite parameters, each creating a distinct URL even if they all point to the same content.
He writes:
“An interesting quirk of URLs is that you can add an infinite (I call BS) number of URL parameters to the URL path, and by that essentially forming new resources. The new URLs don’t have to map to different content on the server even, each new URL might just serve the same content as the parameter-less URL, yet they’re all distinct URLs. A good example for this is the cache busting URL parameter on JavaScript references: it doesn’t change the content, but it will force caches to refresh.”
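For readers unfamiliar with the pattern Illyes mentions, here is a minimal sketch (hypothetical host and file names, not from the post) of a cache-busting parameter on a JavaScript reference. The file served is the same; only the query string changes, so caches treat each URL as a new resource:

```python
# Hypothetical cache-busting URLs: same script, different query strings.
old_reference = "https://example.com/static/app.js?v=41"
new_reference = "https://example.com/static/app.js?v=42"

# Same path and same file content on the server...
print(old_reference.split("?")[0] == new_reference.split("?")[0])  # True
# ...but distinct URLs, so caches are forced to fetch the "new" resource.
print(old_reference == new_reference)  # False
```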
He provided an example of how a simple URL like “/path/file” can expand to “/path/file?param1=a” and “/path/file?param1=a&param2=b”, all potentially serving identical content.
“Each [is] a different URL, all the same content,” Illyes noted.
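To make the duplication concrete, here is a short Python sketch (using a hypothetical example.com host) showing how the three URLs from Illyes’ example collapse to a single parameter-less URL once the query string is stripped, which is why crawlers can end up fetching the same content under many distinct addresses:

```python
from urllib.parse import urlsplit, urlunsplit

# The three URLs from Illyes' example: same path, different query strings.
urls = [
    "https://example.com/path/file",
    "https://example.com/path/file?param1=a",
    "https://example.com/path/file?param1=a&param2=b",
]

def strip_params(url: str) -> str:
    """Drop the query string and fragment to get the parameter-less URL."""
    scheme, netloc, path, _query, _fragment = urlsplit(url)
    return urlunsplit((scheme, netloc, path, "", ""))

# All three are distinct URLs, yet they reduce to one parameter-less address.
print({strip_params(u) for u in urls})
# -> {'https://example.com/path/file'}
```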