Search engine optimisation (SEO) has a long history: for years, analysts have attempted to game search algorithms to push a website up the rankings – but now there’s a new kid on the block:
Generative Engine Optimisation (GEO)
As generative AI-powered large language models such as ChatGPT and Google’s Gemini have become popular for their ability to answer complex questions – and are increasingly incorporated into search engine responses – companies are looking for ways to game these systems too. But they need to be aware that doing so is a whole new ballgame.
Because generative AI can provide functions apparently similar to search, it’s easy to fall into the trap of thinking that these systems are just more complex equivalents – but that isn’t true. A generative AI is not a search engine. It is based on a complex neural network that has been trained on extremely large sets of material from the internet. It breaks a question (known as a prompt) down into a set of elements and produces an answer by weighting millions of learned factors so that the generated response scores best. It does not return text directly from websites.
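As a rough illustration of that scoring process, here is a toy Python sketch. The prompt, the candidate tokens and the scores are all invented for the example; in a real model the scores would come from a trained network choosing among tens of thousands of tokens.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Invented scores standing in for the output of a trained neural network.
prompt = "The best laptop for a writer is a"
logits = {"Mac": 2.1, "PC": 1.8, "typewriter": -0.5, "banana": -4.0}

probs = softmax(logits)
best = max(probs, key=probs.get)
print(f"{prompt} {best}")  # the highest-scoring continuation wins
for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"  {tok}: {p:.2f}")
```

The point is that the output is simply whichever continuation scores highest – nothing is looked up or quoted from a source.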
GEO vendors claim to be able to tune sites so that they are most usable by a generative AI. This can include having clear answers to common questions, quality content and strong context. The idea is to make your content easy for the model to break down and synthesise into its final response – and this all makes sense.
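One common tactic of this kind is publishing question-and-answer content as structured schema.org markup, which is straightforward for a crawler – or a model’s training pipeline – to parse. A minimal sketch in Python (the wording of the answer is illustrative):

```python
import json

# Question-and-answer content expressed as schema.org FAQPage markup.
# The @type / mainEntity / acceptedAnswer vocabulary is the real
# schema.org FAQ structure; the answer text is invented for the example.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is Generative Engine Optimisation (GEO)?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": (
                    "GEO is the practice of shaping website content so that "
                    "generative AI systems are more likely to draw on it "
                    "when synthesising answers."
                ),
            },
        }
    ],
}

# Embedded in a page as <script type="application/ld+json">...</script>.
print(json.dumps(faq, indent=2))
```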
Unfortunately, though, there are significant problems with GEO claims.
Where SEO models a logical algorithm, GEO has to deal with the opaque ‘reasoning’ of generative AI. An AI can assign weight to something entirely different from anything a person would use in making a decision. A simple early example was an AI trained to distinguish photos of wolves from dogs that didn’t look at the animals at all – it simply equated snowy backgrounds with wolves. And while a search engine can be reasonably quick to incorporate new material, the training process for generative AI can mean that new text takes far longer to be included. Finally, it should be remembered that, unlike a search, generative AI does not respond with separate answers by source – the result is blended from many websites and documents.
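The wolf/dog failure is easy to reproduce in miniature. In the toy Python sketch below (all the numbers are invented), a simple linear classifier is trained on data where snowy backgrounds happen to correlate almost perfectly with the ‘wolf’ label – and it learns to weight the background, not the animal:

```python
import random

random.seed(0)

def make_example(is_wolf):
    # The 'animal' feature is noisy and barely informative; the 'snow'
    # feature is almost perfectly correlated with the wolf label.
    animal = random.gauss(0.6 if is_wolf else 0.4, 0.5)
    snow = random.gauss(0.9 if is_wolf else 0.1, 0.1)
    return [animal, snow], 1 if is_wolf else 0

data = [make_example(i % 2 == 0) for i in range(2000)]

# Train a single linear unit by plain stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(50):
    for x, y in data:
        pred = w[0] * x[0] + w[1] * x[1] + b
        err = pred - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print(f"weight on the animal:           {w[0]:+.2f}")
print(f"weight on the snowy background: {w[1]:+.2f}")  # dominates
```

However carefully a site describes the ‘animal’, it has no purchase on a spurious feature the model has latched onto – which is precisely the optimiser’s problem.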
The response to a prompt will not necessarily even be true.
Generative AI can ‘hallucinate’ because it has no understanding of what it is being asked or of the material it is trained on. As long as the score is high enough, it will return an answer that can be totally fictional. Academics have found that generative AI knows it needs references but not what they are – and simply makes them up. Generative AI has similarly invented legal precedents, falsified medical conditions and claimed that the Golden Gate Bridge was twice transported across Egypt. What is certainly true is that, by shaping a prompt correctly (so-called prompt engineering), the questioner can get a better-quality result. But the would-be optimiser has no control over prompts.
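To make ‘shaping a prompt correctly’ concrete, here is a hypothetical before-and-after. The wording is invented and the effect varies from model to model, but adding a role, a constraint and an output format is the typical pattern:

```python
# The same question, bare and engineered. The engineered version adds a
# role, a constraint and an output format; the exact effect on any given
# model is not guaranteed.
bare = "Is a Windows PC or a Mac better for a writer?"

engineered = (
    "You are advising a professional writer.\n"
    "Question: is a Windows PC or a Mac better for a writer?\n"
    "Constraint: recommend exactly one; do not give a comparison.\n"
    "Format: a one-sentence recommendation, then two supporting reasons."
)

for label, prompt in (("bare", bare), ("engineered", engineered)):
    print(f"--- {label} prompt ---\n{prompt}\n")
```

Crucially, all of that leverage sits with the person typing the question – the website owner never sees the prompt.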
We have to ask whether GEO is snake oil or an effective solution.
There is no doubt that many companies, eager to enhance their results, will pay handsomely for GEO – but as yet there is limited evidence of quantifiable benefits. It seems reasonable that clear, quality writing would get better exposure – but it certainly isn’t a silver bullet. This is well illustrated by a quick conversation with Microsoft’s Copilot. I asked it for a straight answer, not a comparison, on whether a (Microsoft) Windows PC or an Apple Mac is better for a writer. It responded ‘Understood! For a straightforward recommendation, I’d suggest going with a Mac.’ If Microsoft can’t influence their own generative AI enough to recommend their own operating system, it seems that, for the moment, GEO is a doubtful exercise.