Using Generative Artificial Intelligence for Research

Synthetic Media

Synthetic media is a broad term describing media that is partially or fully created or modified using generative AI. Synthetic media can take the form of videos, images, voices and text (Vales, 2019).

Synthetic Videos

Synthetic videos are produced using generative AI without physical production equipment or resources such as cameras, audio equipment, or actors. They can be generated from text prompts or still images with the help of specific software tools.

Deepfakes are a form of synthetic media that can be used to manipulate and alter visual or auditory information. This technology can falsely depict people or objects in scenarios they were never involved in, such as conversations, activities or locations. Despite being synthetic or artificial, deepfake media can look and sound highly realistic (Canadian Security Intelligence Service, 2023).

The key distinction between synthetic media and deepfakes lies in intent: deepfakes involve creating content that appears real but is fabricated in order to deceive, while synthetic media broadly involves generating content for creative or practical purposes without aiming to deceive.

Here are two examples of creative and practical uses of synthetic videos:

Dalí Lives – Art Meets Artificial Intelligence

Using deepfake technology, the Salvador Dalí Museum unveiled an interactive exhibit that gave visitors the opportunity to learn about Dalí's work and life from an AI likeness of the artist. Read more about how the installation was constructed at The Verge.

Malaria No More Campaign

Malaria No More UK, in partnership with Synthesia, released a campaign in which David Beckham appears to speak nine different languages to raise awareness of the effort to end malaria. Visit the Synthesia website to learn more.

Deepfake Technology

Another form of manipulated video is the "shallowfake". Unlike deepfakes, which rely on AI, shallowfakes use simple editing tools and techniques to achieve similarly realistic results. Common techniques include manipulating video speed, using freeze frames, and re-editing footage to alter its original meaning.

Early deepfake videos were detectable by humans due to video glitches and flickers, peculiar eye and mouth movements, and inconsistencies in lighting and shadows. However, as deepfake technology continues to improve, synthetic videos are becoming increasingly difficult to detect without the help of AI.

Tools and techniques for detecting AI-generated media include:

  • Digital watermarks embedded in media that reveal tampering once the watermark is broken (a simplified sketch follows this list)
  • Machine learning models trained to spot a subject's unique head and facial movements
  • Browser plug-ins that scan for and flag synthetic media on the screen
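
To make the watermark idea concrete, here is a minimal, illustrative sketch in Python. It is not drawn from any particular detection product; the file names and the "AUTHENTIC" marker are invented for the example. A known bit pattern is hidden in an image's least significant bits, and editing or re-encoding the image destroys the pattern, signalling that the media has been modified.

```python
# Simplified illustration of a "fragile" digital watermark (assumes Pillow and
# NumPy are installed; file names are hypothetical). Real watermarking schemes
# used by detection tools are far more robust than this sketch.
import numpy as np
from PIL import Image

# A known bit pattern to hide in the image (72 bits spelling "AUTHENTIC").
WATERMARK = np.unpackbits(np.frombuffer(b"AUTHENTIC", dtype=np.uint8))

def embed(src_path: str, out_path: str) -> None:
    """Hide the watermark in the least significant bits of the first pixels."""
    pixels = np.array(Image.open(src_path).convert("RGB"))
    flat = pixels.reshape(-1)  # flat view onto the pixel data
    flat[:WATERMARK.size] = (flat[:WATERMARK.size] & 0xFE) | WATERMARK
    Image.fromarray(pixels).save(out_path, format="PNG")  # lossless, keeps LSBs

def is_intact(path: str) -> bool:
    """True if the hidden pattern survived; False suggests the media was altered."""
    flat = np.array(Image.open(path).convert("RGB")).reshape(-1)
    return bool(np.array_equal(flat[:WATERMARK.size] & 1, WATERMARK))

# Example usage (hypothetical files):
# embed("original.png", "watermarked.png")
# print(is_intact("watermarked.png"))  # True until the image is edited or re-encoded
```

Because even routine re-compression alters an image's least significant bits, a pattern this fragile breaks at the slightest modification; production watermarking schemes are far more robust, but they rely on the same principle of a hidden signal that breaks when the media is altered.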

Deepfakes and synthetic media can pose threats to individuals and society by spreading false information. These technologies can manipulate public perception, damage reputations, and create widespread confusion. Key risks associated with media manipulation are:

  • Misinformation: False information that is not intended to cause harm, but still misleads and confuses the public.
  • Disinformation: False information designed to manipulate, cause damage, and guide people in the wrong direction.
  • Malinformation: Truth-based content that is exaggerated or distorted to mislead and cause harm.
  • Reputational Damage: Deepfakes can harm individuals' reputations by depicting them in false situations.
  • Economic Loss: Misuse of deepfakes can be used in financial scams and market manipulations.
  • Social Discord: Trust in media, institutions, and personal relationships can be undermined, leading to social instability.