Google’s quandary around truth in search results

With misinformation rife across the internet, Mark Razzell suggests how Google could become a gatekeeper to the truth, and what the implications of that might be.

 

Recently, I’ve been doing a lot of work understanding new advancements in the online space and how they might change our world, because that’s my job. While looking into the matter, one theme has become more and more prominent: how Google operates in the world, and how that’s going to change. The way I see it, Google must find itself at a bit of a philosophical crossroads at the moment, and I’m not sure what I would do in its position. Moreover, the outcome could have a significant impact on all of us. Allow me to explain myself…

Last year, the World Economic Forum identified ‘digital misinformation’ as one of the main threats to the world, alongside terrorism and environmental catastrophe. A recent study has also highlighted how this happens: people susceptible to believing things that aren’t true, specifically conspiracy theories like chemtrails and reptilian overlords (I know, right?), have a tendency to lock themselves in an online echo chamber. We already see the impacts of this in the real world. For example, the anti-vaccine movement has, incredibly, gained traction, and people, mostly children, are literally dying as a result.

 

On truth

Recently, Google announced that it could, in theory, populate and rank search results by how ‘true’ they are. The approach is based on techniques behind artificial intelligence, and it got a lot of people’s attention. Is it ethical for Google, effectively, to be put in charge of truth? On the other hand, not doing this could also be detrimental, as the anti-vax movement demonstrates. Google could, technically, put a stop to the spread of misinformation, or at least seriously hinder those who would look to spread information that is downright incorrect. I just Googled ‘vaccines’ (incognito). The second and third results were anti-vax sites. The first was a .gov site, and nobody likes those because the government lies (allegedly – don’t put me on a list). So, that’s why this is a big deal.

The problem is: where do you draw the line? There are many thriving brands that, while not necessarily causing harm, are based on misinformation. Cosmetics, naturopathy, and a whole host of other related industries are built on, at best, half-truths. Reckitt Benckiser’s Nurofen has recently got into a lot of trouble for telling porky-pies, which might harm sales.

READ: Nurofen faces ACCC action over misleading packaging »

V’s ‘The massive hit that improves you a bit’ is a claim that’s largely met with a shrug and ‘I dunno – maybe’ in the scientific community, so that’s another claim that would be relegated below a competing message of ‘caffeine-based drink that tastes nice and may/may not make you a little more alert’.

If Google punished these guys with a ‘truth’ rank, they’d get hammered. That kills a thriving industry, which harms the economy. It flies in the face of modern, opportunistic capitalism, for which governments’ general consensus is ‘if it ain’t killing nobody, we’re all for it’.

 

On profits

Also, let’s have a look at how Google makes its money: around 97% of its revenue comes from advertising. If it ranks by truth, how is it supposed to justify having contradictory links in the ‘paid’ part of search? Here are a few hypotheticals for how it could change things:

  1. include a colour-coded system next to the search results as they stand (green = good, red = lies),
  2. introduce a ‘truth’ filter for individuals as part of advanced search,
  3. sort by truth, but let brands and companies pay to get around it, and
  4. have it as another weighted rule in its algorithms.
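Option four is the closest to how ranking already works: the ‘truth’ estimate would become one more weighted signal blended into an overall score. As a rough illustration only – the scores, weights, and URLs below are invented for the sake of the sketch, not anything Google has published:

```python
# Toy sketch of "truth as another weighted rule": each result gets a
# combined score that blends relevance with a hypothetical truth score.
# All numbers here are made up for illustration.

def rank_results(results, truth_weight=0.3):
    """Sort results by a weighted blend of relevance and truth.

    Each result is a dict with 'url', 'relevance' (0-1), and
    'truth_score' (0-1, however such a thing might be estimated).
    """
    def combined(r):
        return (1 - truth_weight) * r["relevance"] + truth_weight * r["truth_score"]
    return sorted(results, key=combined, reverse=True)

results = [
    # The anti-vax site is more 'relevant' (more clicked, better SEO)...
    {"url": "antivax.example",    "relevance": 0.80, "truth_score": 0.10},
    # ...but the .gov site scores far higher on truth.
    {"url": "gov-health.example", "relevance": 0.70, "truth_score": 0.95},
]

for r in rank_results(results):
    print(r["url"])  # gov-health.example now outranks antivax.example
```

Even in this crude form, the dilemma in the article is visible: everything hinges on who sets `truth_weight` and who assigns the truth scores.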

These situations all carry significant problems. Respectively: people simply ignoring links flagged as ‘false’, an inability to guarantee anything to clients, and a complete lack of integrity (for both of the last two). Is there any way Google can do this without compromising itself? Seriously, I’m asking the question genuinely… I’m not positing something just to answer it myself in a smug attempt to appear smart. It’s counter-intuitive. There is no way I can see of Google implementing the ‘truth’ model without hurting brands, economies, and itself.

 

On competition

OK, so why don’t they… just… not do it?

Well, the problem is, Google has now proven it can be done – and if it can be done, someone will do it. The techniques sitting behind artificial intelligence are responsible for ‘truth ranking’, and Google doesn’t own scientific methodologies. ‘Ah,’ I’m pretending I hear you say, ‘but they do own the data that needs to be crawled in order to determine truth.’ Yes, but let’s not forget that Microsoft also has a search engine that, while nowhere near as popular, is still pretty damn comprehensive. It has access to a lot of data, too, and Microsoft might be eyeing this opportunity as a way to steal market share from Google. Google doesn’t have a complete monopoly on global web traffic. Sure, it’s close, but it’s not invincible (yet).

By doing this, Google has created a bit of a rod for its own back. It’s damned if it does, damned if it doesn’t. Since the advent of the internet, ‘truth’ has become a little fuzzy – people tend to treat objectivity and subjectivity as some sort of Venn diagram which, if the anti-vax movement is anything to go by, is problematic. Automatically assessing the quality of content by how ‘true’ it is would eliminate that part of the problem, but it also means that brands like yours might suffer. How many people reading this can honestly say they’ve never seen a brand stretch the truth (at the very least)?

 

The wrap

So that, the way I see it, is Google’s quandary. Does it adhere to civilisation’s pressing need to stamp out nonsense? Or does it just keep quiet and hope this all blows over? Knowing Google, I’m sure they’ve got a fantastic strategy in place, but from where I’m standing, I suspect this might be causing more headaches than one might initially suspect.

 

 

BY Mark Razzell ON 4 June 2015
Mark Razzell is strategic planner at Zuni.
  • William Cosgrove

    Thank you for bringing this up. Most of us get so wrapped up in the internet as a business and social entertainment tool that we tend to overlook, or get blinded to, the serious social impact the internet can have on people’s lives when it is used for mis- and/or disinformation, and the effects this can have on society as a whole.

    Governments, organizations and individuals have been doing this since the beginning of time. Why? Because it is part of the dark makeup of us as a species to take advantage of the ignorance and susceptibility of others to further self-serving agendas.

    The internet has already been infiltrated in large part by governments in the name of security but to allow anyone to start controlling how and what information is disseminated is in my opinion something that should be avoided at all costs.

    Our lives are constantly being invaded and manipulated in ways that we probably are not aware of, or overlook in our quest to fulfill our everyday needs.

    The internet’s focus should be to educate and publish all views so we can make our own informed decisions. Knowledge is power. There will always be people who will abuse and manipulate it, but to openly put this kind of control in the hands of others will play right into the hands of the abusers and have the opposite of the intended effect.

  • Douglas

    It would be a nice twist on the history of thought if an artificial intelligence could finally put to bed the postmodern philosophy that truth is in the eye of the beholder. Truly an ‘other’ with a non-human bias. I suspect this sort of thing may have an impact greater than merely debunking the claims of homeopaths and nutraceutical peddlers.