Trusted news sources and even beliefs could come into question.

The future is fake

Monday, 8 July, 2019 - 15:16

The emergence of deepfake videos and similar technology is raising questions about how we measure reality, trust and privacy.

Advancements in machine learning are giving new meaning to the term fake news, presenting high-profile individuals with cause for concern and potentially posing serious issues regarding public trust.

Take the growing prevalence of ‘deepfakes’, for example – one of many forms of video manipulation that let a creator make anyone appear to say anything.

Deepfakes have become synonymous with fake news, hoaxes and revenge pornography. Understandably, this is creating a lot of fear around the dangers of the technology for politicians, business executives and celebrities. And it has implications for anyone with an online profile.

How we measure reality, trust and privacy in the near future will be completely transformed.

In simple terms, a deepfake is a face swap, built from a range of still images gathered from multiple sources – such as someone’s Facebook or LinkedIn account, or a Google search.

Artificial intelligence software uses this image library to map out a 3D model of that person’s face. Scarily, it can create a pretty impressive model from just a single image (Google ‘Mona Lisa deepfake’), though the outcome is more realistic when more images are used.

The features of the face are all pinpointed, from the corners of the lips to the arch in the eyebrows, enabling the computer to animate that face with a great deal of realism.

The next step is to take a ‘target’ video of another person and run the same mapping process on it. The face from the still images then replaces the face in the video.

Deep video portraits

Deep video portraits (DVPs) are similar to deepfakes, but are considered the more sophisticated of the two technologies. If you’ve ever used a Snapchat lens, Instagram filter or the app Face Swap, you’ll have witnessed the effects of these digital masks.

DVP technology was created by a Ukrainian startup called Looksery, which was acquired by Snapchat in 2015 for $US150 million. If you’re interested in the nitty-gritty details of how they work, Looksery’s patents are available online.

DVPs start the process by identifying a face through areas of light and shade that resemble what the software knows to be a human face. Digital cameras have been putting boxes around faces for many years now using a similar approach.
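The light-and-shade idea can be illustrated with a toy example. The sketch below is not Looksery’s actual code – the image, region boundaries and threshold are invented for illustration – but it shows the kind of contrast test (a dark band of eyes above brighter cheeks) that classic face detectors score.

```python
import numpy as np

# Toy 8x8 grayscale "image": a dark horizontal band (the eyes) above a
# brighter region (the cheeks) - the contrast pattern a detector looks for.
image = np.full((8, 8), 200.0)  # bright background
image[2:4, 1:7] = 60.0          # dark "eye" band

def dark_over_light_score(img, top, mid, bottom, left, right):
    """Mean brightness of the lower band minus the upper band.
    A large positive value suggests a dark-over-light pattern,
    such as eyes sitting above brighter cheeks."""
    upper = img[top:mid, left:right].mean()
    lower = img[mid:bottom, left:right].mean()
    return lower - upper

score = dark_over_light_score(image, top=2, mid=4, bottom=6, left=1, right=7)
print(score)  # positive: the dark band sits above the brighter band
```

Real detectors slide thousands of such rectangle comparisons across the image at many scales, which is why they run fast enough for a camera to draw boxes around faces in real time.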

The algorithm is trained based on someone manually mapping out the features on thousands of images of faces, creating a template. These points are then aligned with the moving face behind the camera, creating a mask over it.
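The alignment step – fitting the trained template onto the face in front of the camera – can be sketched as a least-squares fit. The landmark coordinates below are hypothetical, and real systems fit a richer deformable model, but the principle of matching template points to detected points is the same.

```python
import numpy as np

# Hypothetical template landmarks (eye corners, nose tip, mouth corners),
# and the same points "detected" on a live face that is scaled and shifted.
template = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 2.0], [1.0, 4.0], [3.0, 4.0]])
detected = template * 1.5 + np.array([10.0, 5.0])

def fit_scale_translation(src, dst):
    """Least-squares scale s and translation t such that dst ~= s * src + t."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    s = (src_c * dst_c).sum() / (src_c ** 2).sum()
    t = dst.mean(axis=0) - s * src.mean(axis=0)
    return s, t

s, t = fit_scale_translation(template, detected)
aligned = s * template + t  # the template mask warped onto the detected face
print(np.allclose(aligned, detected))  # exact fit for this noise-free example
```

In a live filter this fit is recomputed every frame, so the mask tracks the moving face.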

While not flawless, a great demonstration of a deep video portrait overlays Barack Obama’s face onto impersonator Jordan Peele as he does an impression of him.

Voice fakes

Companies such as Lyrebird can create a vocal avatar based on only one minute of speech input, or create a unique voice specifically for a brand. 

A few months ago, popular podcaster Joe Rogan’s voice was forged using machine learning. Dessa, the company behind the dupe, created a quiz that you can take to see if you can tell the difference (fakejoerogan.com).

Whether for humour, politics or otherwise, the potential impact of voice fakes combined with fabricated videos is really quite profound.

Detection

Detection software is racing to keep up with advances in this type of machine learning. A number of startups, governments and researchers are working on ways to identify deepfakes, and some jurisdictions have gone further and banned them: Texas now outlaws deepfakes made to manipulate elections, and the state of Virginia’s revenge porn laws now cover deepfakes too.

In the online space, Reddit, Twitter and other big players have banned phony adult content, often produced using deepfake techniques.

Facebook, on the other hand, made a statement after a doctored video of congresswoman Nancy Pelosi, slowed down to make her appear drunk, was posted on its platform, saying that it wouldn’t take such a video down – even if it featured CEO Mark Zuckerberg.

It was only a matter of time before someone put it to the test, with a couple of artists posting a fake video of the Facebook founder saying “Whoever controls the data, controls the future” in June. Facebook didn’t take it down.

Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative, doesn’t think these videos are anything to worry about.

“Nothing suggests to me that you’ll just turnkey use this for generating deepfakes at home. Not in the short term, medium term, or even the long term,” Mr Hwang said.

He believes that quality fakes require superior technical know-how and are costly to produce.

In truth, however, creating convincing human enactments no longer depends on exceptionally talented artists and animators. There are already low-cost apps and websites that allow an everyday internet user to create their own. They may not be perfect now, but it won’t be long before they are.

Another question is whether these videos will become a scapegoat for PR blunders, enabling people to do and say whatever they want and simply dismiss the evidence as ‘fake news’.

Believability in what we read, see and hear is diminishing. The best thing to do is to be sceptical. Keep your photos as private as possible or, better yet, be the company that designs the detection software – they’re tipped to make a great deal of money.