Researchers and legal scholars at a SXSW panel warned that machine-learning technologies are dramatically expanding the scope of media manipulation, prompting society-wide concerns.

Experts on a panel titled “AI-Powered Media Manipulation and its Consequences” highlighted various manipulation techniques powered by artificial intelligence, such as “deepfake,” a term for a technique that synthesizes realistic images of humans. The technique can be used to create images of someone doing something they didn’t actually do, or to generate photo representations of people who don’t exist. The implications of such manipulation depend on how the technology is used.

Deepfake technology has been used to deceive audiences and alter perceptions. In politics, it has been used to mimic and mock world leaders; it has also been used to create pornography, for example showing a particular actor or actress engaged in a sex act that never happened.
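The panelists did not walk through implementation details, but synthesis of this kind is commonly built on generative adversarial networks. The sketch below, in PyTorch, is meant only to show the shape of the idea, and its specifics (the layer sizes, the 64x64 output, the names) are illustrative assumptions rather than anything the panel described: a generator network maps random noise to an image, and once trained on photographs of faces, networks of this form can output faces of people who do not exist.

```python
# Illustrative sketch only: a DCGAN-style generator of the kind commonly used
# for face synthesis. Untrained, it emits noise; trained on a face dataset,
# networks of this shape can produce photos of people who do not exist.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a random latent vector to a 64x64 RGB image."""
    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector -> 4x4 feature map
            nn.ConvTranspose2d(latent_dim, 512, 4, 1, 0, bias=False),
            nn.BatchNorm2d(512), nn.ReLU(True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.ReLU(True),
            # 16x16 -> 32x32
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(True),
            # 32x32 -> 64x64 RGB image with values in [-1, 1]
            nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# "Generate" one image from random noise (meaningless until the network is trained).
generator = Generator()
noise = torch.randn(1, 100, 1, 1)
fake_image = generator(noise)
print(fake_image.shape)  # torch.Size([1, 3, 64, 64])
```

Untrained, as here, the output is only noise; the realism that worries researchers comes from training such a generator against a discriminator on large collections of real photographs.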

Jessica Fjeld, an instructor at Harvard Law School’s Cyberlaw Clinic, said that today it is still fairly easy to tell when a social media account is not a real person from its likes, friends, relationships, and posts. But as the technology advances, telling the difference between a real person and a fake will quickly become much more difficult, she warned. Fjeld explained how users could be tricked into friending a profile that appears to be a past acquaintance, only to have that account begin collecting reams of data about them and eventually impersonate them.
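Fjeld’s point about likes, friends, and posts can be pictured as a crude screening heuristic. The toy scoring function below is not any platform’s actual detection logic; the signal names and thresholds are invented for illustration, and it is exactly this kind of surface-level cue that better generative tools are expected to defeat.

```python
# Illustrative only: a toy heuristic for flagging accounts that "don't look
# like a real person." Signal names and thresholds are invented for the example.
from dataclasses import dataclass

@dataclass
class Account:
    friend_count: int
    post_count: int
    likes_given: int
    profile_photo_is_stock: bool
    account_age_days: int

def suspicion_score(acct: Account) -> float:
    """Return a 0-1 score; higher means the account looks less like a real person."""
    score = 0.0
    if acct.friend_count < 10:
        score += 0.25          # very few relationships
    if acct.post_count == 0:
        score += 0.25          # no organic activity
    if acct.likes_given > 500 and acct.account_age_days < 30:
        score += 0.25          # implausibly heavy engagement for a new account
    if acct.profile_photo_is_stock:
        score += 0.25          # reused or generated profile image
    return score

# Example: a brand-new account with no posts and a stock photo scores high.
print(suspicion_score(Account(3, 0, 800, True, 7)))  # 1.0
```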

But the future of AI-powered media manipulation isn’t just mimicry, said Joan Donovan, director of the Technology and Social Change Research Project at the Harvard Kennedy School’s Shorenstein Center. New technologies are beginning to exploit the notion of a “data void,” examining human behavioral cues to determine when and how best to influence people. She described it as similar to the autofill function on most search engines, which guesses what you may be looking for, need, or could be convinced that you need.
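Donovan’s autofill comparison can be made concrete with a few lines of code. The sketch below is not how any real search engine works; it is a toy completion function over an invented query log, showing the basic guess-what-you-want mechanic she used as an analogy.

```python
# Illustrative only: a toy autocomplete that guesses completions for a prefix,
# ranked by how often each query has been seen. The query log is invented.
from collections import Counter

query_log = Counter({
    "weather in austin": 300,
    "weather in boston": 120,
    "weatherproof boots": 40,
})

def autocomplete(prefix: str, log: Counter, k: int = 3) -> list[str]:
    """Return up to k logged queries starting with `prefix`, most popular first."""
    matches = [q for q in log if q.startswith(prefix)]
    return sorted(matches, key=lambda q: log[q], reverse=True)[:k]

print(autocomplete("weather", query_log))
# ['weather in austin', 'weather in boston', 'weatherproof boots']
```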

There are several barriers to stopping AI that generates fraudulent or otherwise malicious or unlawful content, the experts explained. These barriers include:

  • anonymity of the creator(s);
  • the sheer volume of content that can be created, posted, reposted, and shared;
  • uncertainty over the jurisdiction in which to file suit or press charges once you locate the creator;
  • the risk that filing a lawsuit may draw even more attention to the content you wish to stop; and
  • the First Amendment.