Our legal system is about to get overwhelmed!
Fake fingers, fake invoices, fake accusations? Will social media be held liable?
My last post generated a ton of discussion and ideation--folks have come up with many creative (and demented!) ways in which AI could be maliciously applied in the coming months and years.
As a brief recap for new readers, new AI technologies let you automatically create a realistic video of someone talking--in their own voice! While there will be a ton of incredible uses for these technologies (think marketing videos, digital avatars, training videos, etc.), evildoers and hackers will also have a field day.
There were way too many ideas to share everything, so I selected a few of my favorites!
Fake fingers!
Dan's tweet sums this up well. Right now, AI tools often leave small artifacts--and for that matter, so does bad photoshopping! Those artifacts can be a way to distinguish a real picture or video from a fake or AI-generated one.
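For the technically curious, here is a tiny sketch of what artifact-hunting can look like in practice. It uses error level analysis (ELA), a classic photo-forensics trick: recompress a JPEG and see which regions respond unevenly. Everything here (including the file name) is illustrative, and real deepfake forensics goes far beyond this:

```python
# A toy illustration of error level analysis (ELA), one classic way to
# surface editing artifacts in a JPEG. The file name is made up, and this
# is a sketch, not a production deepfake detector.
from PIL import Image, ImageChops, ImageEnhance

def error_level(path, quality=90, tmp="_resaved.jpg"):
    original = Image.open(path).convert("RGB")
    original.save(tmp, "JPEG", quality=quality)   # recompress once
    resaved = Image.open(tmp)
    # Regions edited after the last save recompress differently, so they
    # stand out in the difference image; brighten it to make that visible.
    diff = ImageChops.difference(original, resaved)
    return ImageEnhance.Brightness(diff).enhance(30)

# error_level("suspect_photo.jpg").show()
```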
Many, if not most, businesses and homes use security cameras these days. That's not a secret; robbers know this too! It's easy to imagine a well-prepared bank robber wearing a subtle prosthetic, like a fake extra finger, to argue later that the video is doctored and should be thrown out of evidence.
Fake invoices!
By now, most savvy businesses are aware of the dangers of wire fraud. There is a wide variety of scams behind these attacks (the FBI report on this makes for some eye-opening reading).
Unfortunately, the simple scams still work: a hacker, claiming to be one of the business's vendors, sends a fake invoice along with a note saying, "We changed banks; please use the new account numbers below."
As I described in my last post, these simple scams work because of the economics of cyberattacks. It's incredibly cheap to send millions of fake emails. The hacker only needs to fool one person to make a tremendous return from their effort. Even alert and diligent people can be fooled--everyone, myself included (!), has days when they are tired or distracted or otherwise just off their game a bit.
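To make that economics argument concrete, here is a back-of-the-envelope sketch. Every number in it is an assumption I made up for illustration, not a figure from the FBI report or anywhere else:

```python
# Back-of-the-envelope economics of a bulk fake-invoice scam.
# Every number here is an illustrative assumption, not real data.

emails_sent = 1_000_000       # assumed: fake invoices sent in one campaign
cost_per_email = 0.0001       # assumed: dollars per email (nearly free at scale)
success_rate = 1 / 1_000_000  # assumed: just ONE recipient is fooled
avg_wire_amount = 25_000      # assumed: dollars per fraudulent transfer

total_cost = emails_sent * cost_per_email
expected_payout = emails_sent * success_rate * avg_wire_amount

print(f"Cost to attacker: ${total_cost:,.2f}")       # $100.00
print(f"Expected payout:  ${expected_payout:,.2f}")  # $25,000.00
print(f"Return multiple:  {expected_payout / total_cost:.0f}x")  # 250x
```

Even if only one recipient in a million falls for it, the attacker makes back their costs a few hundred times over--which is why these campaigns never stop.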
Hopefully, any business you do wire transfers with these days will call you on the phone to verify any change in bank account information. That is absolutely a best practice--for right now!
With the deep fake ability to mimic your voice, attackers will have new ways to trick even more people and do so extremely cheaply! Here's what a new attack could look like:
First, the hacker sends a fake invoice to Bob's Pizza, claiming to be from Charlie's Wholesale Produce.
The hacker knows that Bob will likely call Charlie to verify the new bank numbers. Fortunately for the hacker, Charlie posted a promotional video on the website for Charlie's Wholesale Produce--a clean sample of Charlie's voice. Rather than wait for Bob to call Charlie, the hacker gets ahead of the game and calls Bob using an AI-generated version of Charlie's voice: "Hey Bob, it's Charlie--just so you know it's not fake, I just sent you an email with our new bank account info. We just switched to Citi from BofA, much much happier there. Hope the family is doing well--take care."
The bad breakup!
Oh my, where to even begin on this? As wonderful as relationships can be, breakups can sometimes lead to very unpleasant behavior by one (or sometimes both!) of the former couple. Messy breakups are not a new phenomenon; they've been happening for thousands of years (Helen, Paris, Menelaus, and the Trojan War, anyone?)
Over the years, I've seen all kinds of misdeeds (thankfully, none that directly involved my family or me!). A few examples:
A young man sent in an anonymous accusation of cheating to his ex-girlfriend's university. It took over a year for her to clear her name. The university essentially had a 'presumed guilty' internal investigation model; the accusation alone was enough to put her in the penalty box.
A couple was in the middle of a very contentious divorce. One of them made a fake domestic violence call to the police. It was a classic "he said" / "she said" scenario and resulted in a lot of extra time, money, and court-ordered third-party evaluations to get it sorted. Ultimately, the restraining orders were lifted, but the goal of tormenting the former partner was achieved.
A spurned lover posted unsavory (and presumably untrue) details to social media.
I'm sure many of you have stories of friends and acquaintances entangled in bad breakups. If you don't, read up on the Amber Heard / Johnny Depp trial, or watch the classic movie "Fatal Attraction"!
Thankfully, our current legal system has tools to address misbehavior like this, though it may take a while and be very expensive (and admittedly, the system is not perfect). The Amber/Johnny trial mentioned above is an excellent example of the cost and complexity of resolving these disputes.
The new AI tools, though, will make it extremely difficult to sort truth from fiction. What happens when the 'anonymous cheating tip' is not just an email but an AI-faked video of Bob at home, supposedly admitting that he cheated on a school exam?
What happens when the AI-faked video is posted anonymously to social media? How would you recover from, or defend yourself against, that? Abuse of social media is not an idle problem--last year, Meta / Facebook bragged about how many fake accounts they took down (over 1.5 billion).
I find it odd to brag about this number (just as odd as Microsoft bragging about making $15 billion a year in the security business). On the one hand, it is great to see the company actively investing in combating spam and fraud. On the other hand, at that scale, how much bad stuff made it through? And why is creating a fake account so easy that there are billions of them in the first place? It's truly staggering.
Overwhelming the legal system
At one level, it's easy to take a Pollyanna-ish view of these issues. "The courts and legal system will sort it out." Over a long enough period of time, I share that optimism. Western legal systems have held up well over the years as technology has changed. Of course, these systems aren't perfect, but the fundamental principles (due process, jury trials, legal representation, etc.) are solid. These principles worked two hundred years ago and have continued to work through the invention of radio, television, the Internet, and so forth.
Thus, given sufficient time to evolve, I suspect the system will catch and penalize these kinds of misbehavior, even if that misbehavior uses AI tools.
The catch to this is the timeframe! AI tools are evolving and being adopted at dizzying speed.
How fast can our legal and court system adapt? If an increasingly large number of court cases involve potentially AI-faked evidence, how will the courts sort through this? How much extra time and money will it take every time this happens?
There is a genuine possibility that our legal system will become rapidly overwhelmed in the short term. As I hope I've demonstrated, there are just too many ways in which the new AI tech will be abused, and at the moment, there are no widely deployed solutions.
Solutions
There is hope, of course! It's tough to predict exactly what will happen, but there are some promising technologies and legal approaches on the horizon. Below, I'll discuss content supply chains, verified accounts, and holding social media companies legally liable for promoted content.