
Fake news, deep fakes and fraud detection 2020 – addressing an epidemic


Online giants and regulators alike have taken up the fight against fake news and deep fakes. Simon Marchand says the answer has been on the tips of our tongues all along. 

Since 2016, when the Macquarie Dictionary named ‘fake news’ its word of the year, the spread of misinformation online has only increased. Technology has become more sophisticated, giving rise to ‘deep fakes’ and synthetic voices.

It’s no wonder the Analysis and Policy Observatory’s (APO) 2019 ‘Digital News Report’ for Australia found that nearly two-thirds (62%) of Australians are concerned about what is real or fake on the internet, above the global average.

The lack of consumer confidence in online content is a major threat to marketers, whose brands’ success depends on a trustworthy and authentic reputation: consumers are far more likely to purchase from, stay loyal to and advocate for brands they trust.

Introduce deep fakes into the mix and you’re looking at a far more sophisticated threat to brand reputation, one that demands an ultra-modern response. Big tech organisations, government bodies and social media platforms are fighting back against fake news with new policies, technology, litigation and more. However, there is an existing, under-utilised tool that could have a major impact if employed by marketers: voice biometrics technology.

Deep fakes – how tech is propelling the issue forward

Deep fakes are used in broad propaganda campaigns intended to influence the opinions of groups of individuals. At scale, this can have a dramatic impact, such as swaying the outcome of a political election. Consumers are continuously warned to be sceptical and afraid: we’re in the middle of a fake news epidemic, and it’s easy to see why. The technology used to create this content has become more realistic and more accessible; effectively, anyone with a computer, an internet connection and a bit of free time can produce a deep fake video or audio file. As AI becomes more sophisticated, it is increasingly hard to discern what is real and what is fake.

This is compounded by the increasing reliance on social media for news: the APO’s report found almost half of generation Z (47%) and one-third of generation Y (33%) use Facebook, YouTube, Twitter and other social channels as their main source of news. Blind trust in social media platforms is enabling fake news to reach the masses in record time.

The rise of social media and influencer marketing in recent years has put brands in an extremely vulnerable position. A convincing deep fake of a company’s CEO, a celebrity spokesperson or an influencer ambassador can now be created with ease. If their image were manipulated to depict them, or the brand itself, in a false or offensive way, it would pose a serious threat to modern-day marketers.

The threat level heightens when you consider that debunking fake news takes time, and that content published to debunk a story typically receives less coverage than the original, false article. As a result, misinformation can have lasting effects even once discredited, a phenomenon researchers across the globe are investigating.

How social media and tech companies are fighting back

As AI becomes increasingly refined, big tech is racing to keep up. Twitter has announced it will add labels to or remove tweets carrying manipulated content, while Facebook has partnered with Microsoft and academics to create the Deepfake Detection Challenge, which seeks to better detect when AI has been used to alter video.

Google recently released more than 3,000 visual deep fakes to help inform the FaceForensics benchmark for detecting manipulated videos. This follows its earlier release of a synthetic speech dataset.

These solutions are a work in progress, however. Voice biometrics, an existing fraud detection technology, could have a major impact in marketing.

Voice biometrics to combat fake news

Banks, insurance providers and governments across the globe are already using voice biometrics as an easy and secure way to authenticate their customers, combat fraudulent activity and improve customer satisfaction.

A voiceprint includes more than 1,000 unique physical and behavioural characteristics of a person, such as vocal tract length, nasal passage shape, pitch, cadence, accent and more. In fact, research shows it’s as unique to an individual as a fingerprint.
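To make that tangible, the sketch below uses the open-source librosa library to measure two of the characteristics listed above, pitch and the spectral shape imposed by the vocal tract, from a recording. It is a toy illustration of the raw signal a voiceprint is built from, not Nuance’s method; the file name and the choice of features are assumptions made for the example.

```python
import librosa
import numpy as np

# Load a mono recording at 16 kHz (the file name is a placeholder).
y, sr = librosa.load("speaker_sample.wav", sr=16000)

# Pitch (fundamental frequency) track, estimated with the pYIN algorithm.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# MFCCs summarise the spectral envelope shaped by the vocal tract.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Averaging over time yields a crude 14-number signature; production
# voiceprints combine hundreds of such physical and behavioural cues.
signature = np.concatenate([[np.nanmean(f0)], mfcc.mean(axis=1)])
print(signature)
```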

Where behaviours can be easily mimicked, physical voice characteristics cannot, preventing impersonators from ‘tricking’ the system. Voice biometrics could be monumental in verifying whether a video or audio recording is legitimate: analysing whether the voice really belongs to the person the message claims it does, or whether it has been manipulated, simulated or synthetically created to fabricate a fake news story.
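At its core, that verification reduces to comparing a stored voiceprint against one extracted from the suspect recording. Below is a minimal sketch of that comparison using cosine similarity between fixed-length speaker embeddings; the embed() function and the 0.75 threshold are hypothetical stand-ins, since the article does not describe Nuance’s internals.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two fixed-length voiceprint embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled: np.ndarray, suspect: np.ndarray, threshold: float = 0.75) -> bool:
    """Accept the recording as genuine only if its embedding sits close
    to the enrolled voiceprint. 0.75 is an illustrative threshold; real
    systems tune it on labelled genuine/impostor trials."""
    return cosine_similarity(enrolled, suspect) >= threshold

# embed() is a hypothetical speaker-embedding model that maps audio to a
# vector; it stands in for proprietary voiceprint extraction.
# enrolled = embed("ceo_enrollment.wav")
# suspect = embed("viral_clip.wav")
# print("genuine" if verify(enrolled, suspect) else "possible deep fake")
```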

The accuracy and speed with which voice biometrics can authenticate a person’s identity mean that harmful deep fakes can be debunked with certainty, quickly mitigating the threat to a brand’s reputation.

Biometrics represent a new era of identity security, and given the dramatic influence fake news can have, combating deep fake videos and synthetic audio with voice biometrics is a natural progression for the technology.


Simon Marchand is chief fraud protection officer at Nuance Communications.

Photo by Gage Walker on Unsplash
