Boy: Mom! Mom, can you hear me? There's so much noise in here.
Mom: (straining to hear) Yes, I hear you. What's going on? Where are you?
Boy: I'm at the police station. They arrested me, Mom.
Mom: (increasingly panicked) What? Why? What happened?
Boy: I was at a protest and things got out of hand. I need your help, Mom. I need you to send $100 via Venmo to seattlepolice@gmail.com. It's for bail.
Mom: (yelling over the noise) What? I can barely hear you! How much did you say?
Boy: (yelling) $100! Send it to seattlepolice@gmail.com. Please, Mom. I need your help.
Mom: (sighing) Alright, I'll do it. But you need to be more careful. This kind of trouble is not good for you.
Boy: (relieved) Thank you, Mom. Thank you so much.
Mom: (trying to sound firm) Just stay put and stay out of trouble, okay?
Boy: I will, Mom. I promise. Thank you.
Of course, none of this is real.
Yet.
ChatGPT generated this dialog. And soon, an AI will also generate the voice for the phone call.
One of the coolest, and simultaneously one of the scariest, recent AI demos was Microsoft's VALL-E, an AI that can turn text into the sound of literally anyone's voice, as long as it has a three-second sample of that voice. Thanks to social media, getting a sample of someone's voice is easy these days!
There are countless amazing and positive scenarios for this kind of technology. As a simple example, imagine quickly changing your voicemail prompt depending on who the caller is, making it much more personal without recording a dozen different messages. Just let the AI generate unique prompts for family members, work colleagues, etc.
The nefarious implications are even more dramatic, unfortunately.
I still get plenty of spam phone calls pretending to come from Windows Support, the IRS, the bank, and so on. These calls are typically easy for me (as a human) to detect. But what if the spam call came from my son--and it sounded like my son?
The world of cybersecurity is about to get turned upside down.
Show me the money!
Some context first. Economics fundamentally governs all cyberattacks. Hackers are constrained by basic economic principles, whether they work for governments, for organized crime syndicates, or solo as hacktivists. Cyberattacks take time and money to perform; in general, the payoff from a successful attack must be greater than the cost of the attack itself. For organized crime, the gain is usually financial. For nation-states like Russia, the gain may be strategic, such as crippling Ukraine's power plants and other critical infrastructure.
If we dig a little deeper, this simple principle explains why spam emails from "Nigerian princes" and the like are filled with obvious mistakes, poor wording, and so forth. It's incredibly cheap to send millions of emails. But actually extracting someone's bank information typically requires interacting with a human: somebody in the hacker's organization needs to talk to the victim. That interaction can be thousands of times more expensive for the hacker than just sending an email.
Thus, hackers want to avoid talking to people who will hang up on them or otherwise waste their time without making them any money. They want to speak to the most vulnerable, the ones they are most likely to trick. Their solution: deliberate mistakes in spam emails! The poorly written email is a litmus test. If that kind of sloppy email fools someone, that person is much more likely to be fooled by the rest of the scam. (See https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/WhyFromNigeria.pdf for a more detailed analysis of the economics behind these attacks.)
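To make the arithmetic concrete, here is a back-of-the-envelope model in Python. Every number in it is invented purely for illustration; only the shape of the trade-off matters.

```python
# Back-of-the-envelope model of a scam campaign's economics.
# All numbers are invented for illustration; only the ratios matter.

def expected_profit(emails_sent, reply_rate, conversion_rate,
                    cost_per_email, cost_per_conversation, payoff):
    """Payoff from victims who complete the scam, minus the cost of
    emailing everyone and of talking to everyone who replies."""
    replies = emails_sent * reply_rate
    victims = replies * conversion_rate
    costs = emails_sent * cost_per_email + replies * cost_per_conversation
    return victims * payoff - costs

# A polished email hooks more repliers, but most of them are skeptics
# who bail out mid-conversation -- wasting expensive scammer time.
polished = expected_profit(1_000_000, reply_rate=0.01, conversion_rate=0.01,
                           cost_per_email=0.0001, cost_per_conversation=20,
                           payoff=2_000)

# A sloppy email filters the skeptics out up front: far fewer replies,
# but the few who do reply are far more likely to pay in the end.
sloppy = expected_profit(1_000_000, reply_rate=0.001, conversion_rate=0.2,
                         cost_per_email=0.0001, cost_per_conversation=20,
                         payoff=2_000)

print(f"polished email: ${polished:,.0f}")   # -> $-100 (a loss)
print(f"sloppy email:   ${sloppy:,.0f}")     # -> $379,900
```

With these made-up numbers, the polished campaign actually loses money, because expensive human follow-ups get squandered on skeptics, while the sloppy campaign profits from a smaller but far more gullible pool of repliers.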
Economic principles apply to other forms of cyberattack as well. Many software products, such as Microsoft Windows, are effectively clones: every copy of Windows is the same as every other copy. Thus, if an attacker learns how to infect one copy of Windows with malware, they can do the same to a billion other computers. With a billion-to-one payoff, it hardly matters how expensive the initial hack is to create.
The payoff is amplified further when the target itself is valuable. Microsoft's Active Directory, for example, often controls many, if not all, of an enterprise's computers. Similarly, password managers like LastPass store passwords for millions of customers (the ongoing LastPass hacking saga is a great topic for another day). Hackers are fond of these juicy, high-payoff targets.
At the end of the day, be it sophisticated memory malware or a social engineering scam, hacking still boils down to economics: how much effort versus how big a payoff?
AI is going to disrupt those economics dramatically--and, unfortunately, in the attacker's favor! As discussed earlier, AI is already good enough to impersonate your friends or children. I'd like to think most of us could tell our real friends and family apart from an AI impersonator, but think back to the opening story of your child being in jail. You'd be distressed and in a heightened emotional state--not the best condition for making rational decisions.
What happens when sending a million fake phone calls supposedly from kids in jail is as cheap as sending a million "Nigerian prince" scam emails? I think we will see many of these kinds of scams in the coming months and years.
Of course, it is not just the economics of social engineering attacks that are going to be changed by AI. Viruses and other forms of malware and cyberattacks will get a lot more sophisticated--and fast.
I've written several times about how effective AI is at writing code. So far, code from ChatGPT, Copilot, and other tools ends up in the hands of a human programmer, who then fixes it and applies it to their projects.
What happens when a hacker eliminates humans from the loop? Why not have the AI generate code for a virus directly? Well, it doesn't work quite like that--at least not yet! However, approaches like fuzzing and genetic algorithms (vastly oversimplifying) let computers take a program and automatically experiment with millions of variations until they find something that works. I predict that a generative AI coupled with a genetic algorithm will be an incredibly effective way to create viruses and other malware automatically.
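To show what that mutate-and-select loop looks like, here is a deliberately harmless toy genetic algorithm in Python. It merely evolves a random string toward a target phrase; the target, population size, and mutation rate are arbitrary choices for the demo. The worrying version is the same loop with a different fitness function--one that scores, say, whether a program variant still runs and still evades a scanner.

```python
import random
import string

# Toy genetic algorithm: evolve random strings toward a target phrase.
# The mutate -> score -> select loop is the core idea; only the fitness
# function decides what the loop is actually searching for.

TARGET = "economics governs hacking"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    # Score a candidate by how many characters already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Randomly replace each character with probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start with 100 completely random candidates.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if best == TARGET:
        break
    # Keep the 20 fittest candidates; refill the rest with their mutants.
    survivors = population[:20]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(80)]

print(f"generation {generation}: best candidate {best!r}")
```

No part of this toy knows anything about code or security; swap in a fitness function that executes candidate programs, and the same few dozen lines become an automated search for working variants.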
Just as I think we'll see a new wave of scam attacks powered by AI, AI-created malware will generate another wave of cyberattacks.
The solution
Fortunately, there are ways to defend against this next generation of hacking. The solutions, not surprisingly, are also rooted in economics.
There is a pithy saying for today's cybersecurity dilemma:
"It's cheaper to attack than to defend. An attacker only needs to be right once. A defender can never be wrong."
The solution, therefore, is to flip the economics. Make attacking more expensive than defending.
We see this already with technologies like multi-factor authentication, where a one-time password is sent to, or generated on, your phone. Sure, there are ways to break that, but stealing a one-time password is significantly more challenging than breaking the same single password you've used on every website for the last ten years!
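As a concrete illustration, the rotating codes that authenticator apps display come from the standard TOTP algorithm (RFC 6238). Here is a minimal sketch using only Python's standard library; the base32 secret is a well-known documentation placeholder, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30, now=None):
    """Derive the current one-time code from a shared secret (RFC 6238)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if now is None else now) // period)
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both the server and your phone derive the same code independently.
print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret for demonstration
```

Because the code rolls over every 30 seconds, a phished code is worthless almost immediately. The attacker is forced to operate in real time, per victim--exactly the kind of cost increase that flips the economics.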
Similarly, highly secure modern software systems use technologies like moving target defense, zero trust networking, end-to-end encryption, and polymorphic code to make hacking dramatically more difficult. The upcoming Boost product from my company Polyverse uses a new technology called "Non-Fungible Executables" (NFE) to encrypt software into a Web3 NFT on the blockchain. If every program is encrypted, a virus, even an AI-generated virus, will have to crack the encryption scheme in addition to all other defenses.
All of these defenses are rooted in the same core idea: making it more expensive for evildoers to hack the system successfully.
In the meantime, if you get a suspicious phone call from a "friend" or relative, remember the famous phone-call scene from Terminator 2.
If you aren't familiar with the movie, the basic premise of the franchise is that in the future, an AI called Skynet becomes self-aware and tries to wipe out the human race, while John Connor leads the human resistance against the machines. This thirty-year-old movie was quite prescient, at least in having the machines mimic human voices over the phone (and hopefully not in the take-over-the-world theme)! At the time James Cameron made the movie, though, few people anticipated social media. Now the AI is very likely to know the name of your pet dog!