The topic of improving the quality of search engine results is known to generate many debates. It's essential to keep online content accurate and useful, while allowing it to be easily searchable by end users. After all, there's little point in creating online content if no one is likely to find and view it. In a continual quest to improve search results, Google implements processes to identify and rank key words and phrases. But how good a job are they actually doing?
It's true that the perceived quality of a Google search can differ from user to user. After all, the information being retrieved will apply more, or be more 'relevant', to some users than to others. It's also true that a single search query can cover a wide range of subjects. So how can Google really identify the correct results for every user query? In principle, it seems like an impossible task.
Let's use an example. An end user searches for the term 'diabetes symptoms'. They receive a multitude of results, at the top of which is an article by a copywriter who has limited knowledge of the condition. Further down the results is a comprehensive e-book written by a leading medical professional who specializes in diabetes. Instinctively, the end user will view the top results, deeming them more 'important' or 'relevant' than results displayed further down the list. As a result, they may read information that is out of date or, even worse, entirely incorrect. Meanwhile, the information they really need sits further down the results list, undiscovered.
And why does this happen? Put simply, you can think of a search engine such as Google as a virtual librarian. You ask for information on a particular subject, and you are referred to the correct 'book' (website). However, as some subjects are vast, the information contained in the book may not be entirely relevant to your query. Additionally, the librarian may know more about one particular book than another. This knowledge can be likened to search terms: if Google detects a large number of keywords or key phrases in an online document, it will automatically place more importance on that document, and in turn the document will appear higher up the search results list, even if it's not exactly what the end user is searching for.
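The 'librarian' heuristic above can be made concrete with a deliberately naive sketch. This is not Google's actual algorithm (which uses many more signals); it simply counts query-term occurrences, showing why a keyword-stuffed article can outrank a more authoritative document under frequency-based scoring. The document titles and text are invented for illustration.

```python
def keyword_score(document: str, query: str) -> int:
    """Count how many times each query term appears in the document."""
    doc_words = document.lower().split()
    return sum(doc_words.count(term) for term in query.lower().split())

def rank(documents: dict[str, str], query: str) -> list[str]:
    """Return document titles ordered by descending keyword score."""
    return sorted(documents,
                  key=lambda title: keyword_score(documents[title], query),
                  reverse=True)

# Hypothetical corpus: a keyword-stuffed article vs. an expert e-book.
documents = {
    "Copywriter article": "diabetes symptoms diabetes symptoms diabetes tips diabetes",
    "Medical e-book": "a clinical overview of diabetes care and recognising symptoms early",
}

print(rank(documents, "diabetes symptoms"))
# The keyword-stuffed article scores higher and is ranked first.
```

Under this scoring, relevance is conflated with repetition, which is exactly the failure mode the diabetes example describes.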
And how can this problem be resolved? It has been suggested that professionals within specific fields (for example, the medical sector) be consulted by Google when keywords are produced. This would perhaps ensure that more relevant search terms are made public and, in turn, that better results are generated. In theory, this could work; however, the number of possible Internet queries is effectively endless, and it would be impossible to eradicate all 'rogue' entries.
So perhaps the onus should be on the authors and publishers of online content, and on the creation of stricter web publishing guidelines? This would make sense for medical documents, or any other information already governed by guidelines in paper form. It's certainly not going to be an easy process, and it's unlikely that all searches will ever be 100% accurate, but with tighter publishing guidelines, perhaps a large amount of 'filler' can be removed from the Internet, allowing accurate, authoritative content to really shine.