![](https://i0.wp.com/bobsullivan.net/wp-content/uploads/2024/02/Taylor-Swift-block.jpg?resize=600%2C302&ssl=1)
Hold on tight, fellow humans, there’s artificial turbulence ahead. Like it or not, the time has come to stop believing what you see, what you hear, and perhaps even what you think you know. Reality is indeed under attack and it’s up to us to preserve it. The only way to beat back this futuristic nightmare is with old-fashioned skepticism.
Lately, it feels like all anyone wants to talk about is AI and how it’s going to make life much easier for criminals, and much harder for you. I’ve annoyed several interviewers recently by saying I don’t believe the hype. There is not an avalanche of voice-cloning criminals out there manipulating victims by creating fake wailing kids claiming to need bail money; the so-called grandparent scam operated successfully for many years without AI. But I think that debate misses the point. First of all, as many journalists have demonstrated (even me!), it’s trivial to create deepfakes now. An expert cloned my voice for $1. More important, a recent offensive, vile Taylor Swift deepfake was viewed 47 million times before it was removed from most social media platforms. This kind of violation is here, today, and it’s going to be very hard to stop.
There are celebrated efforts, of course. The FCC just made AI voice cloning in scam robocalls explicitly illegal, which is certainly welcome, but if the agency’s long fight against robocalling is any guide, this ruling alone won’t stop AI scams. There are also some high-tech efforts to separate what’s real from what’s fake, and that’s also welcome. Watermarking, even in audio files, can be used by software to flag items as AI-generated, so our gadgets can tell us when a Joe Biden video has been manipulated. Naturally, I wish tech companies had built such safety tools into their AI-generating software in the first place, but this kind of retrofitting is what we’ve come to expect from Big Tech.
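To make the watermarking idea a little more concrete, here’s a minimal toy sketch in Python of the principle behind one common approach: a faint pseudorandom signature is mixed into the audio samples, and a detector later checks for it by correlation. To be clear, the key, strength, and function names below are all illustrative inventions of mine, and real watermarking schemes are far more robust to compression and editing; this shows the mechanism, not any company’s actual implementation.

```python
import numpy as np

KEY = 42          # hypothetical shared secret seeding the watermark pattern
STRENGTH = 0.01   # watermark amplitude, kept well below audibility

def watermark_pattern(n_samples: int) -> np.ndarray:
    """Pseudorandom +/-1 pattern derived from the key."""
    rng = np.random.default_rng(KEY)
    return rng.choice([-1.0, 1.0], size=n_samples)

def embed(audio: np.ndarray) -> np.ndarray:
    """Add the faint signature to the audio samples."""
    return audio + STRENGTH * watermark_pattern(len(audio))

def detect(audio: np.ndarray, threshold: float = 0.5) -> bool:
    """Correlate against the expected pattern; a high score suggests 'AI-generated'."""
    pattern = watermark_pattern(len(audio))
    score = np.dot(audio, pattern) / (len(audio) * STRENGTH)
    return bool(score > threshold)

# Demo: one second of stand-in "audio" at 16 kHz
clean = np.random.default_rng(7).normal(0, 0.1, 16_000)
marked = embed(clean)
print(detect(clean))   # False -- no watermark present
print(detect(marked))  # True  -- watermark detected
```

The catch, as researchers have pointed out, is an arms race: anyone who can reverse-engineer or strip the signature can defeat the label, which is one reason I don’t put all my hope in this fix.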
I don’t have high hopes that an “AI-generated” label on a negative presidential candidate video is going to do much to stop the coming attack on reality, however. I’m afraid to say this, but it’s true: the problem, dear Brutus, lies not in our stars but in ourselves.
I am the last person to lump responsibility for the failures of billion-dollar tech companies onto busy human beings. And that’s not what I’m doing here. I still want tech workers to speak up when managers ask them to make tools that can be used to hurt people. I still want regulators to staff up and lock down companies that behave recklessly. But when it comes to defending reality, the truth is, we are on our own right now. Human beings are going to have to develop radical inquisitiveness when it comes to things we see, hear, and feel while interacting with technology.
This is going to be hard. Many of us want to see a video of our least-favorite politician looking stupid. A large number of us want to see “exclusive” video of famous people in … candid … moments. We would love for them to contact us directly and offer to be our friend, or even our lover.
We have to help each other learn to resist these base urges, to choose reality over this dark fantasy world that’s being foisted on us.
As is often the case with tech crises, this problem isn’t really new. Marketers have always manipulated consumers. Propagandists have always lied to populations. Many dark periods of history can be blamed on large groups failing to exercise proper skepticism, their prejudices and predispositions used against them. What’s different about our time is the scale. As we learned back in 2016, a room full of typists halfway around the world can persuade thousands of Americans to attend real-world rallies. The tools available to liars and criminals are very powerful; we have to respond with equal force.
I recently interviewed Professor Jonathan Anderson, an expert in artificial intelligence and computer security at Memorial University in Canada, about this problem, and he’s persuaded me that humans must react by adjusting to this new “reality” of un-reality. We must stop reflexively believing what we see and hear. And there is precedent for this. At the dawn of photography, many people believed that photos couldn’t lie. Most folks now know, perhaps even on a subconscious level, that it’s trivial to manipulate images. If you see something that doesn’t look right (a man’s head on an animal’s body, say), your first instinct is to suspect Photoshop is the culprit. Hopefully, we’re all at the start of a similar learning curve now, so that this becomes our reaction to any media that’s unexpected, be it a fake desperate child, a celebrity asking to meet with us, or a politician doing something foolish.
My fear is that people will still believe what they want to believe, however. A “red” person will believe only “blue” fakes, and vice versa. And that, in my view, is the greatest threat to reality right now.