Could New Tech Terrorize Upcoming Elections?

(TacticalNews.com) – Technology has changed tremendously in a very short period of time, and not always for the better. Technology seems to advance faster than we can build anything to check its influence. Perhaps this new technology will prove impossible to stop.

What Is This New Tech?

Created by researcher Ian Goodfellow in 2014, generative adversarial networks (GANs) are a branch of deep learning. Goodfellow built two neural networks that could not only perceive but also create. He named them the "generator" and the "discriminator" and pitted the two against each other.

The generator, given a dataset, creates new images that are similar, pixel for pixel, to the existing ones. The discriminator, meanwhile, is fed photos from both the generator and the original dataset. Its task is to identify which photos are real and which are synthetic.

As the two networks spar back and forth, almost like a battle of good vs. evil, each learns from the other's capabilities. Eventually the discriminator's success rate falls to about 50%, no better than a coin flip, meaning the generator's photos have become indistinguishable from the originals. This technique became the basis for deepfakes: synthetic clips often used to make people appear to say or do things they never did.
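
For readers curious how that back-and-forth actually works, the sketch below shows a minimal adversarial training loop in PyTorch. It is an illustration only, not Goodfellow's original setup: the tiny network sizes, the 28x28 flattened image dimension, and the random-noise input are assumptions chosen to keep the example self-contained.

```python
# Minimal GAN training sketch (illustrative only): a generator learns to
# produce fake images while a discriminator learns to tell real from fake.
import torch
import torch.nn as nn

LATENT_DIM = 64          # size of the random noise fed to the generator
IMG_DIM = 28 * 28        # flattened image size (assumed, e.g. MNIST-like)

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),          # outputs a fake image
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),             # probability "this is real"
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    """One round of the adversarial game on a batch of real images."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator: reward it for spotting real vs. fake.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()      # don't update G on this pass
    d_loss = (loss_fn(discriminator(real_images), real_labels) +
              loss_fn(discriminator(fake_images), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator: reward it for fooling the discriminator.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Example: one step on a batch of stand-in "real" images scaled to [-1, 1].
training_step(torch.rand(32, IMG_DIM) * 2 - 1)
```

In practice the two losses must stay roughly balanced; if either network overwhelms the other, training stalls long before the discriminator reaches that 50% coin-flip point.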

Cause For Concern?

Deepfake technology has been widely used in pornography, with 96% of deepfake videos being pornographic as of September 2019. Several websites dedicated to deepfake pornography amassed hundreds of millions of views in just two years. Doesn't seem like a big deal, right?

Well, actually, yes, there is cause for concern. There are now videos of political figures engaging in demeaning acts, which could cause a major swing in voters right before an election. Consider what a fake video of Russian President Vladimir Putin announcing that nukes have been launched toward the US could do on a global scale: it could trigger US retaliation and even lead to a third world war. The technology is getting harder and harder to distinguish from reality, and that can breed massive distrust on a national and global scale.

In the words of US Senator Marco Rubio: "In the old days, if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear weapons, and long-range missiles." The senator then added that now all you need is the ability to create a fake video.

Combating AI

Because the AI is easy to use and widely accessible, so that virtually anyone can create a deepfake, it could prove difficult to stop. Since deepfakes are built on AI to begin with, some have suggested using AI to combat malicious uses. Researchers have created deepfake detection systems that flag images or videos based on inconsistencies in lighting, shadows, and facial movements. Another defensive approach may be to add a filter to a file, making it impossible to use for a deepfake.
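
As a rough illustration of the learned-detector idea, the sketch below trains a small binary classifier to label video frames as real or synthetic. It is a minimal, hypothetical example in PyTorch; the tiny convolutional network, the 64x64 frame size, and the stand-in data are assumptions for illustration, not any particular researcher's or startup's system.

```python
# Minimal deepfake-detector sketch (illustrative only): a small CNN that
# classifies individual video frames as "real" (1) or "synthetic" (0).
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                 # one logit: higher = more likely real
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """One training step on a batch of frames labelled real (1) or fake (0)."""
    loss = loss_fn(detector(frames), labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

def flag_frame(frame: torch.Tensor, threshold: float = 0.5) -> bool:
    """Return True if the detector thinks a single 3x64x64 frame is synthetic."""
    with torch.no_grad():
        prob_real = torch.sigmoid(detector(frame.unsqueeze(0))).item()
    return prob_real < threshold

# Example with stand-in data: 8 random frames, half labelled real, half fake.
frames = torch.rand(8, 3, 64, 64)
labels = torch.tensor([[1.], [1.], [1.], [1.], [0.], [0.], [0.], [0.]])
print(train_step(frames, labels))
print(flag_frame(torch.rand(3, 64, 64)))
```

A detector like this chases a moving target: as generators improve, it has to be retrained on the newer fakes, which is exactly the cat-and-mouse dynamic described next.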

Truepic and Deeptrace are among the startups offering defenses against deepfakes. Unfortunately, even these technological solutions may not stop the creation and spread of deepfakes; instead, they will likely result in an endless cat-and-mouse loop, similar to what we already see in cybersecurity today. Any time there's a major breakthrough on the defensive side, it only spurs further learning and innovation on the offensive side.

Lawmakers can pass laws like the one in California, which prohibits deepfakes of politicians within 60 days of an election. The First Amendment, however, could stand in the way of a blanket ban on deepfakes. "Political speech enjoys the highest level of protection under U.S. law," as law professor Jane Kirtley has noted, so such laws may be struck down as unconstitutional. Add in the internet's anonymity and lack of borders, and they would be nearly impossible to enforce anyway.

Copyright, defamation, and the right of publicity offer existing legal frameworks for combating deepfakes, though the fair use doctrine and its wide applicability limit those avenues. Facebook, Google, and Twitter, taking voluntary action to limit the spread of harmful deepfakes, are likely the best short-term solution available. But relying on private companies to solve societal problems carries its own dangers.

Fake news has been on the rise in recent years, and deepfakes will only fuel it. There may not be a single solution to this problem. The best first step is to make the public aware of both the possibilities and the dangers of deepfakes; a properly informed citizenry is a crucial defense against the spread of misinformation.

Copyright 2020, TacticalNews.com