Deepfake is a term for a kind of synthetic media that has become increasingly prominent in recent years. The word itself is a portmanteau of ‘deep learning’ and ‘fake’, coined to describe an advanced form of human image synthesis that relies on artificial intelligence. The phrase only emerged in 2017, yet it is now known worldwide as one of the most potentially invasive imaging technologies in common use. What has caused such significant controversy is what deepfakes do: they combine and superimpose existing digital images or videos onto source images or videos. Using a sophisticated machine learning technique known as a ‘generative adversarial network’, deepfake software allows almost any user to create convincing fabrications, from fake celebrity pornography to other malicious fake news.
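A generative adversarial network pits two models against each other: a generator that produces fakes and a discriminator that tries to tell fakes from real data, each improving by exploiting the other's weaknesses. The toy sketch below is only an illustration of that adversarial loop, not face-swapping code: it uses plain NumPy, a one-dimensional "dataset" instead of images, and hyperparameters chosen arbitrarily for the example.

```python
# Toy GAN sketch: a two-parameter generator learns to mimic a target
# Gaussian by fooling a logistic-regression discriminator. All names
# and hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

TARGET_MEAN, TARGET_STD = 3.0, 1.0   # the "real" data distribution
BATCH, STEPS, LR = 64, 3000, 0.05

# Generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0
w, c = 0.0, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(STEPS):
    real = rng.normal(TARGET_MEAN, TARGET_STD, BATCH)
    z = rng.normal(0.0, 1.0, BATCH)
    fake = a * z + b

    # Discriminator step: minimize -log D(real) - log(1 - D(fake))
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w -= LR * grad_w
    c -= LR * grad_c

    # Generator step: minimize -log D(fake) (non-saturating loss),
    # i.e. push samples toward regions the discriminator calls "real"
    d_fake = sigmoid(w * fake + c)
    dg = -(1 - d_fake) * w           # dL_G / d(fake sample)
    a -= LR * np.mean(dg * z)
    b -= LR * np.mean(dg)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10_000) + b))
print(f"generated mean ~ {fake_mean:.2f} (target {TARGET_MEAN})")
```

Real deepfake systems apply the same adversarial idea at vastly larger scale, with deep convolutional networks standing in for the two linear models here.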
The research that eventually led to deepfakes goes back as far as 1997, with the Video Rewrite program, which modified existing video of a person speaking so that their mouth movements matched a different audio track. Curiously, there is little record of machine learning techniques being used for this purpose in the nearly twenty years that followed; perhaps the potential for mischief-making simply slipped under most people's radar. In 2017 a Reddit user named deepfakes became well known on the site for the face-swapped videos they were creating, some pornographic in nature and some comedic, with a particular focus on the face of the actor Nicolas Cage. As one might assume, the term deepfake comes from this Reddit member's username, and it became synonymous with the technique after Vice published an article about the growing popularity of these videos.
A large share of the deepfakes created over the years have been pornographic in nature, which has caused a great deal of controversy and has led many websites to ban deepfakes altogether. They have been banned not only from sites like Twitter and Reddit, but even from the pornography website Pornhub. Because pornographic deepfakes use people's images without their permission, they are considered involuntary pornography, much as filming a sexual encounter without the participants' consent would be. While the deepfake phenomenon is undeniably fascinating as an expression of machine learning capability and of the drastic recent improvements in imaging software, it has been put to malicious use by many. There are certainly plenty of funny deepfake videos and images that portray celebrities and others in a clearly parodic way, but there are also far too many that are pornographic and made deliberately to harm their subjects.