Responsive Web Design and Duplicate URLs
Duplicate content is a well-known issue that members of the search engine optimization (SEO) and web development communities keep a close eye on. It can lower the relative value of a page in the eyes of search engines, because there are multiple instances of the page's content. The address, or URL, of any page on the internet is meant to be a unique identifier for that page, and if an exact duplicate of its content lives at a separate URL, then neither page is as useful for end users. A great example of this is the www vs. no-www duplication that many sites have: both versions of the URL may render the same page. GET parameters, trailing slashes, and other kinds of dynamically generated pages can also create duplicate content issues.
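As a quick illustration, here is a minimal sketch using Node's built-in http module that collapses the www/no-www and trailing-slash variants into one URL with a permanent redirect. The host name is a placeholder, and a real site would more likely do this at the web server or CDN level:

```typescript
import { createServer } from "node:http";

// Hypothetical canonical host; "www.example.com" is a placeholder.
const CANONICAL_HOST = "www.example.com";

const server = createServer((req, res) => {
  const host = req.headers.host ?? "";
  let path = req.url ?? "/";

  // Strip a trailing slash (except on the root) so /about/ and /about
  // resolve to a single URL.
  if (path.length > 1 && path.endsWith("/")) {
    path = path.slice(0, -1);
  }

  // If the request used a non-canonical host or path, issue a 301 so
  // users and search engines converge on one URL.
  if (host !== CANONICAL_HOST || path !== req.url) {
    res.writeHead(301, { Location: `https://${CANONICAL_HOST}${path}` });
    res.end();
    return;
  }

  res.writeHead(200, { "Content-Type": "text/html" });
  res.end("<p>Canonical page content</p>");
});

server.listen(8080);
```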
I wanted to address the opposite issue in this post. Duplicate content can be fixed easily using a variety of solutions, but there are other practices spreading through the web development community that could confuse users and search engines alike. Let's call the issue 'duplicate URLs'. It has been around for a while in small doses, but responsive web design has the potential to create much larger problems. Basically, a duplicate URL occurs when a single URL generates different content and/or pages for different users. This can happen in a few different ways.
Client Cookies
This example is not an issue in itself, but it helps illustrate how a duplicate URL can happen. When different users log into certain sites, like Twitter, and visit the main page (twitter.com/), they each see a unique stream based on the other users they are following. The stream is unique (unless two users follow the exact same people, that is) even though the URL is the same. I don't see this as a problem, since it merely makes these sites easier to use, but it shows that the URL isn't the only thing that can make a page's content unique; cookies stored on a computer can as well.
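A rough sketch of the mechanism, with a hypothetical 'session' cookie and an in-memory stand-in for a real user database:

```typescript
import { createServer } from "node:http";

// Hypothetical session store; a real site would look this up in a database.
const sessions: Record<string, string[]> = {
  "session-alice": ["@nasa", "@bbcnews"],
  "session-bob": ["@espn"],
};

const server = createServer((req, res) => {
  // Parse the (hypothetical) "session" cookie out of the Cookie header.
  const cookies = req.headers.cookie ?? "";
  const match = cookies.match(/(?:^|;\s*)session=([^;]+)/);
  const following = match ? sessions[match[1]] : undefined;

  res.writeHead(200, { "Content-Type": "text/plain" });
  if (following) {
    // Same URL, but a stream unique to this visitor's cookie.
    res.end(`Your stream: posts from ${following.join(", ")}`);
  } else {
    res.end("Log in to see your stream.");
  }
});

server.listen(8080);
```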
Referring URLs
Here is where things start to get tricky. There are only so many things outside of the URL that can make a visit to a website unique, and the referring URL is one of them. This is one of the little tricks web developers use to make a visit feel a little more 'personal': if the user is coming from Google, the search terms they used might be highlighted; if they are coming from a known 'partner' site, a cute welcome message might pop up. This also isn't a huge issue, but it's something else that can make a page's content unique outside of the URL.
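Both tricks boil down to branching on the Referer header. A sketch of the idea follows; the partner domain is made up, and it assumes the search referrer still carries a q parameter with the query in it:

```typescript
import { createServer } from "node:http";

const server = createServer((req, res) => {
  const referrer = req.headers.referer ?? "";
  res.writeHead(200, { "Content-Type": "text/html" });

  if (referrer.includes("google.")) {
    // Pull the "q" parameter out of the search URL so the page can
    // highlight the terms the visitor searched for.
    const terms = new URL(referrer).searchParams.get("q") ?? "";
    res.end(`<p>You searched for: <mark>${terms}</mark></p>`);
  } else if (referrer.includes("partner-site.example")) {
    // Hypothetical partner domain; greet visitors arriving from it.
    res.end("<p>Welcome, friends of our partner site!</p>");
  } else {
    res.end("<p>Regular page content.</p>");
  }
});

server.listen(8080);
```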
User Agent
Here is where the issue comes in with responsive web design. Every web developer has dealt with cross-browser compliance: websites don't render the same in different web browsers. Developers have been doing a small-scale version of responsive design for years, writing different styles or hacking behaviors to make pages work across devices and browsers. Responsive web design does this deliberately, analyzing the user agent (client platform and browser) to add specific functionality or change the layout of a page to better fit the client's use. If these changes go beyond pure styles and scripts, then there is the potential for some big side effects. For example, one proposed implementation of responsive web design, RESS (responsive web design with server-side components), involves serving different HTML, content, or layout based on the device.
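A bare-bones sketch of that kind of server-side user agent branching follows. The device check is deliberately crude for illustration; a real site would use a full detection library with a much larger user-agent database:

```typescript
import { createServer } from "node:http";

const server = createServer((req, res) => {
  const userAgent = req.headers["user-agent"] ?? "";
  // Crude device check for illustration only.
  const isMobile = /Mobile|Android|iPhone/i.test(userAgent);

  res.writeHead(200, { "Content-Type": "text/html" });
  if (isMobile) {
    // RESS-style: entirely different HTML served from the same URL.
    res.end("<p>Stripped-down mobile layout</p>");
  } else {
    res.end("<p>Full desktop layout with sidebars and widgets</p>");
  }
});

server.listen(8080);
```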
One of the side effects involves search engines and their ability to determine helpful content. If a search engine bot indexes a URL based on an incorrect analysis of the content (yes, search engine bots send user agents too), then the search engine might send users to pages that are not related or helpful to their searches. Another side effect involves direct user confusion: if a user clicks a link expecting to see a full site and ends up on a small, limited mobile site, they may leave frustrated.
It's not a bad idea to have targeted versions of your content for different devices and browsers, but I think it should be implemented gently. Let users choose how they want their content delivered, with the option to go back to the primary site at any time. If you're serving different content or an optimized layout, then serve it up under a unique URL. Work with search engines, using tools like canonicalization (the rel="canonical" link element that Google supports) to point them at one primary version. Detecting user agents to target users is a great idea, but it should be implemented in a way that is transparent and friendly to both your end users and the different bots and crawlers out there.
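Putting those recommendations together, a minimal sketch (the host and the /mobile/ path are placeholders) might serve the optimized layout at its own URL, point its canonical link back at the primary page, and let the user move between the two:

```typescript
import { createServer } from "node:http";

// "www.example.com" and the /mobile/ path are hypothetical placeholders.
const PRIMARY_HOST = "https://www.example.com";

const server = createServer((req, res) => {
  if (req.url?.startsWith("/mobile/")) {
    // The optimized layout lives at its own URL; its canonical link
    // points search engines back to the primary version, and the user
    // can always return to the full site.
    const primary = PRIMARY_HOST + req.url.replace("/mobile", "");
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(
      `<link rel="canonical" href="${primary}">` +
        `<p>Mobile layout. <a href="${primary}">View the full site</a></p>`
    );
    return;
  }

  // The primary page offers the mobile version rather than forcing it,
  // and the Vary header signals caches and crawlers that responses may
  // differ by user agent.
  const mobileUrl = `${PRIMARY_HOST}/mobile${req.url ?? "/"}`;
  res.writeHead(200, {
    "Content-Type": "text/html",
    Vary: "User-Agent",
  });
  res.end(`<p>Full layout. <a href="${mobileUrl}">Try our mobile site</a></p>`);
});

server.listen(8080);
```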