The dark deepfakes I’m most worried about

Everyone should be concerned about deepfakes and voice cloning; what’s difficult is deciding how concerned to be.  When it comes to the use of AI in crime, I think we have a 5-alarm fire on our hands.

I just came back from a week of talks at the University of Georgia journalism school, and I can tell you students there are very worried about the use of artificial intelligence to generate fake news. I’m less worried about that, perhaps betraying some naivete on my part. This is the second presidential election cycle in which much has been made of the potential to create videos of candidates saying things they’d never say; so far, there are no high-profile examples of such a video swaying an electorate. There was a high-profile cloning of then-candidate Joe Biden’s voice during the New Hampshire primary, when an operative paid for robocalls designed to suppress the vote (he later said he did it just to make a point). That fake call was exposed pretty quickly.

We can all imagine a fake video that a candidate’s opponents might want to spread, but my sense is that such a deepfake wouldn’t persuade anyone to change their vote — it would merely reinforce an existing opinion.  I could be wrong; in an election this close, a few votes could make all the difference.  But there are far easier ways to suppress votes — such as long voting lines — and those should get at least as much attention as deepfakes.

I am far more concerned about mundane-sounding AI attacks. Research I’ve done lately confirms what I have long feared: AI will be a boon for scam criminals. Imagine a crime call center staffed by robots armed with generative conversational skills. AI bot armies really can replace front-line scam call center operators, and they can be more effective at finding targets. They don’t get tired, they have no morals, and, perhaps most significantly, they hallucinate. That means they will change their story randomly (say, from “your child’s been kidnapped” to “your child is on the operating table”), and when they hit on a story that works, they’ll stick with it. This could allow a kind of dastardly evolution at a pace we’ve not seen before. And while voters might see through attempts to put words in the mouths of presidential candidates, a hysterical parent probably won’t detect a realistic-sounding imitation of their kid after a car accident.

As with all new tech, we risk blaming too much fraud and scandal on the gadgets without acknowledging that these very same schemes have always been around. Tech is a tool, and tools can be used for both good and bad. The idea of scaling up crime should concern everyone, however. Think about spam. It’s always been a numbers game, and an economic battle. There’s no way to end spam. But if you make spam so costly for criminals that the return on investment drops dramatically (if spammers make $1 from every 100,000 emails rather than $1 from every 1,000), criminals move on.

That’s why any tech that lets criminals scale up quickly is so concerning. Criminals spend their time looking for their version of a hot lead — a victim who has been pushed into a heightened emotional state and so can be manipulated. Front-line crime call center employees act as a filtering mechanism. Once they get a victim on the line and show that the person can be manipulated into performing a small task, like buying a $50 gift card or sharing personal information, these “leads” are passed on to higher-level closers who escalate the crime. This process can take months, or longer; romance scammers occasionally groom victims for years. Now, imagine AI bots performing these front-line tasks. They wouldn’t have to be perfect. They’d just have to succeed at a higher rate than today’s callers, who are often victims of human trafficking working against their will.

This is the dark deepfake future that I’m most worried about.  Tech companies must lead on this issue. Those who make AI tools must game out their dark uses before they are released to the world.  There’s just too much at stake.

About Bob Sullivan
BOB SULLIVAN is a veteran journalist and the author of four books, including the 2008 New York Times bestseller Gotcha Capitalism and the 2010 New York Times bestseller Stop Getting Ripped Off! His latest, The Plateau Effect, was published in 2013 and reissued as a paperback, Getting Unstuck, in 2014. He has won the Society of Professional Journalists’ prestigious Public Service award, a Peabody award, and the Consumer Federation of America’s Betty Furness award, and has received Consumer Action’s Consumer Excellence Award.
