Google’s quandary around truth in search results
With misinformation rife across the internet, Mark Razzell suggests how Google could become a gatekeeper to the truth, and what the implications of that might be.
Recently, I’ve been doing a lot of work understanding new advancements in the online space and how they might change our world, because that’s my job. While digging into the matter, one thing has become more and more apparent: how Google operates in the world, and how that’s going to change. The way I see it, Google must be at a bit of a philosophical crossroads at the moment, and I’m not sure what I would do in its position. Whatever it decides could have a significant impact on all of us. Allow me to explain myself…
Last year, the World Economic Forum identified ‘digital misinformation’ as one of the main threats to the world, alongside terrorism and environmental catastrophe. A recent study has also highlighted how this happens: people susceptible to believing things that aren’t true, specifically conspiracy theories like chemtrails and reptilian overlords (I know, right?), have a tendency to lock themselves in an online echo chamber. We already see the impacts of this in the real world. For example, the anti-vaccine movement has gained incredible traction, and people, mostly children, have literally died as a result.
Recently, Google announced that it could, in theory, populate and rank search results by how ‘true’ they are. The approach is based on techniques from artificial intelligence, and it got a lot of people’s attention. Is it ethical for Google, effectively, to be put in charge of truth? On the other hand, not doing this could also be detrimental, as the anti-vax movement demonstrates. Google could, technically, put a stop to the spread of misinformation, or at least seriously hinder those who would spread information that is downright incorrect. I just Googled ‘vaccines’ (incognito). The second and third results were anti-vax sites. The first was a .gov site, and nobody likes them because the government lies (allegedly – don’t put me on a list). So that’s why this is a big deal.
The problem is: where do you draw the line? There are many thriving brands that, while not necessarily causing harm, are built on misinformation. Cosmetics, naturopathy, and a whole host of other related industries are based on, at best, half-truths. Reckitt Benckiser’s Nurofen has recently got into a lot of trouble for telling porky-pies, which might harm sales.
V’s ‘The massive hit that improves you a bit’ is a claim that’s largely met with a shrug and ‘I dunno – maybe’ in the scientific community, so that’s another claim that would be relegated below a competing message of ‘caffeine-based drink that tastes nice and may/may not make you a little more alert’.
If Google punished these guys with a ‘truth’ rank, they’d get hammered. That kills a thriving industry, which harms the economy. It flies in the face of modern, opportunistic capitalism, for which governments’ general consensus is ‘if it ain’t killing nobody, we’re all for it’.
Also, let’s have a look at how Google makes its money: around 97% of its revenue comes from advertising. If it ranks by truth, how is it supposed to justify having contradictory links in the ‘paid’ part of search? Here are a few hypotheticals for how it could handle things:
- include a colour-code system next to the search results as they stand (green = good, red = lies);
- introduce a ‘truth’ filter for individuals as part of advanced search;
- sort by truth, but let brands and companies get around it by paying to do so; and
- add it as another weighted rule in its algorithms.
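To make that last option concrete, here’s a minimal sketch of what folding a ‘truth’ signal into a ranking formula could look like. To be clear, this is pure illustration: the field names, the weights, and the blending formula are all my own invention, not anything Google has published about its actual algorithm.

```python
# Hypothetical sketch: treat 'trust' as one more weighted rule in a
# ranking score. All names and weights here are invented for
# illustration; the real algorithm is not public.

def rank_results(results, w_relevance=0.8, w_trust=0.2):
    """Sort results by a weighted blend of relevance and estimated trust.

    Each result is a dict with 'url', 'relevance' and 'trust' scores
    in [0, 1]. A modest trust weight nudges misinformation down the
    page without hiding it outright.
    """
    def score(r):
        return w_relevance * r["relevance"] + w_trust * r["trust"]
    return sorted(results, key=score, reverse=True)

results = [
    {"url": "gov-health.example", "relevance": 0.70, "trust": 0.95},
    {"url": "antivax-blog.example", "relevance": 0.80, "trust": 0.10},
]
ranked = rank_results(results)
# With the trust weight at zero, the blog's higher relevance would put
# it first; the trust term flips the order in favour of the .gov site.
```

The interesting design question is the weight itself: set it low and the anti-vax site still sits near the top; set it high and you’re into the ‘complete lack of integrity’ territory the paid-workaround option raises, just from the opposite direction.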
These situations all carry significant problems. Respectively: people simply ignoring ‘false’ links, an inability to guarantee anything to clients, and a complete lack of integrity (that last one applies to both of the final two options). Is there any way Google can do this without compromising itself? Seriously, I’m asking the question genuinely; I’m not positing something just to answer it myself in a smug attempt to appear smart. It’s counter-intuitive. There is no way that I can see for Google to implement the ‘truth’ model without hurting brands, economies, and itself.
OK, so why don’t they… just… not do it?
Well, the problem is, Google has now proven it can be done – if it can be done, someone will do it. Techniques sitting behind artificial intelligence are responsible for ‘truth ranking’, and Google doesn’t own scientific methodologies. ‘Ah,’ I’m pretending I hear you say, ‘But they do own the data that needs to be crawled in order to determine truth’. Yes, but let’s not forget that Microsoft also has a search engine that, while nowhere near as popular, is still pretty damn comprehensive. It has access to a lot of data, too, and Microsoft might be eyeing this opportunity as a way to steal market share from Google… Google doesn’t have a complete monopoly on global web traffic. Sure, it’s close, but it’s not invincible (yet).
By doing this, they’ve created a bit of a rod for their own back. They’re damned if they do, damned if they don’t. Since the advent of the internet, ‘truth’ has become a little fuzzy – people tend to think of objectivity and subjectivity as some sort of Venn diagram which, if the anti-vax movement is anything to go by, is problematic. Automatically assessing the quality of content by how ‘true’ it is would eliminate that part of the problem, but it also means that brands like yours might suffer. How many people reading this can honestly say they’ve never seen brands stretch the truth (at the very least)?
So that, the way I see it, is Google’s quandary. Does it adhere to civilisation’s pressing need to stamp out nonsense? Or does it just keep quiet and hope this all blows over? Knowing Google, I’m sure it has a fantastic strategy in place, but from where I’m standing, I suspect this is causing more headaches than anyone might initially guess.