I’m very excited for this episode of Debugger. Kashmir Hill has broken many of the most important privacy stories of the past twenty years, many of them in the pages of The New York Times. She’s been chasing the story of a secretive company named Clearview AI for years, and recently released a book chronicling the whole crazy saga. It’s an important work, because as the insane James-Dolan-banning-lawyers-from-MSG-concerts story shows, the dystopian world of “artificial intelligence” is banging on the door right now. We are *almost* too late to have the conversation about what it shouldn’t do — but it’s *not* too late. Even James Dolan can be stopped, as you’ll see. And facial recognition is a good place to start figuring out how to do that.
Click play below to listen, or click here for the Debugger home page, or find the episode in your favorite podcast player. Below that, you’ll find a transcript of our conversation.
(Transcript lightly edited for clarity.)
Bob: Facial recognition is one of the most controversial frontiers of the tech world, and if you’ve read any story about facial recognition in the past decade or so, it’s probably been written by today’s guest, Kashmir Hill, a New York Times reporter who has a new book out called Your Face Belongs to Us.
Congrats on the book, Kashmir, and welcome to Debugger.
Kashmir Hill: Thanks so much, Bob.
Bob: The book has a rather stark name, so I’d like to start there. We’ve been talking about tracking and privacy technologies for a long time, and you’ve been writing about them for a very long time. What is special or significant about facial recognition?
Kashmir Hill: Well, the kind of facial recognition technology we’re talking about, the kind I write about in the book, is a technology where someone takes a picture of your face and then links it to as many online photos of you as they can find. And so this is different from just picking up your phone and using your face to unlock it. This is really unmasking you, putting a name to a stranger’s face and linking you to this huge online footprint you may have left out there. Photos you’ve put up yourself, but also photos you didn’t post … maybe photos that you’re just in, maybe photos you don’t even know about.
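(A quick technical aside: under the hood, search engines like this typically reduce each face to a numerical “face print” (an embedding) and rank a database of embeddings computed from scraped photos by how close they are to the unknown face. Here’s a minimal sketch of that general technique in Python, using the open-source face_recognition library; the filenames are hypothetical placeholders, and this is an illustration of the approach, not Clearview AI’s actual system.)

```python
# A minimal sketch of the general technique behind face search engines:
# reduce each face to a numerical embedding (a "face print"), then rank
# a database of scraped photos by distance to the unknown face.
# Uses the open-source face_recognition library (pip install face_recognition).
# Filenames are hypothetical placeholders; this illustrates the approach,
# not Clearview AI's actual system.
import face_recognition

# Build a tiny "database": one 128-number embedding per scraped photo.
database = {}
for name, path in [("alice", "alice.jpg"), ("bob", "bob.jpg")]:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip photos where no face was detected
        database[name] = encodings[0]

# Embed the unknown "probe" face, say a frame from a surveillance camera.
probe = face_recognition.face_encodings(
    face_recognition.load_image_file("unknown.jpg")
)[0]

# Rank database entries by distance to the probe; smaller = more similar.
names = list(database)
distances = face_recognition.face_distance([database[n] for n in names], probe)
for name, dist in sorted(zip(names, distances), key=lambda pair: pair[1]):
    print(f"{name}: distance {dist:.2f}")  # below ~0.6 is a typical "match"
```

(At the scale Hill describes, billions of photos, the linear scan above would presumably be replaced by an approximate nearest-neighbor index, but the matching idea is the same.)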
Bob: I think my face belongs to me. Am I wrong?
Kashmir Hill: I mean, I do agree that your face belongs to you. But there are a lot of other people who are also using your face. I mean, I’m fairly certain, and you could find out, but your face is probably in Clearview AI’s database and it may be in the database of another public-facing search engine called PimEyes … because these are companies, these are startups that have just gone out there and scraped public photos from all over the internet without anyone’s consent.
Bob: Okay, so you mentioned Clearview AI. Tell us about the book. What’s the premise?
Kashmir Hill: So the book starts with this crazy tip that I got in the fall of 2019. A public records researcher who was looking into how police were using facial recognition technology had come across this company called Clearview AI that claimed to have done something unprecedented. It said that it had billions of public photos scraped from the internet … from social media sites like Instagram, LinkedIn, Facebook … and it was selling access to hundreds of police departments, so that they could take a photo of an unknown suspect, or take an image from a surveillance camera, and find out who that person was. And I had never heard of this company before. Experts I talked to hadn’t heard of it before, and everyone was really shocked.
And when I tried to look into the people behind Clearview AI, they were trying to stay in the shadows. They actually began tracking me, and put an alert on my face so that they would know when the police officers I was talking to about the app uploaded my photo to show me what my own results looked like.
Bob: Okay, that’s really creepy.
Kashmir Hill: It was, it was very chilling. And so I looked into Clearview AI, and it’s just such a crazy tale … the people involved in the company, their backgrounds. It really sent me on this journey to try to understand: How did they do this? And how did facial recognition technology get to this place where it’s so powerful? And so the book is really about that. How did we get to this moment in time? And why weren’t we prepared for a superpower like this being unleashed?
Bob: I think the irony isn’t lost on anyone: the company behind all this unmasking was trying to keep a low profile … to hide the identity of its own CEO?
Kashmir Hill: Yes, they had a very bare website. It said Clearview AI, artificial intelligence for a better world. There was an office a few blocks away, it turned out, from The New York Times, where I work. So I mapped it and walked over to where it was supposed to be. The building did not exist. The company had one fake employee on LinkedIn, named John Good, with two connections. He did not respond when I sent him a message. Basically, anyone I found who was connected to Clearview AI … from Paul Clement, a very high-profile lawyer who wrote a legal memo for them reassuring police departments that using the tool wouldn’t be illegal … to Peter Thiel, who turned out to be their very first investor … none of these people would respond to me. They really did not want what they were doing exposed at that time.
Bob: So I would like to go back a little bit in the timeline. I know in the book and also in your other writings, you talk about how Google and Facebook had developed technology like this, but basically had decided it was too dangerous and wasn’t ready for the world. Is that true?
Kashmir Hill: Yeah. When I first found out about Clearview AI, I thought it was a technological breakthrough, and experts I talked to thought the same thing. But as I spent more time reporting on this and researching it for the book, I found out that other companies had actually developed it first. Google, as early as 2011, said that this ability to recognize a stranger was the one technology it had developed but decided to hold back, because it was too dangerous. At Facebook in 2017, I watched this video of engineers who rigged up a smartphone on a baseball cap, held in place with rubber bands … which looked ridiculous … but when the person wearing it looked in the direction of another person in the room, it spoke their name. And Facebook too … not necessarily a company known for holding back on privacy-intruding practices … said, ‘This is too dangerous. We don’t want to be the first ones to put this out there.’
It really took a Clearview AI coming along that was willing to break that taboo. Their breakthrough was ethical, not technological: they were willing to do what these other companies weren’t willing to do. And, you know, one of the lessons of the book is that the building blocks are out there. It is now easier for a radical startup, a very small actor, to wield this very powerful technology.
Bob: I think you do a really good job of describing how difficult it is to decide whether this technology is good or bad, whether it should be banned … any of that. Can you talk about that tension a little bit?
Kashmir Hill: Yeah. There are clearly good use cases for facial recognition technology, and I talked to many officers who say it’s a powerful investigative tool … that they’re able to solve crimes they wouldn’t have been able to solve otherwise. That’s particularly true for child crime investigators, who often have just an image to work with … maybe an image found on the dark web … and it’s unclear who is in that image or where it came from. Now they have this superpower: they can put in the photo of the abuser, or even the child’s face, to find and identify that victim. And so they really value a tool like Clearview AI.
But what I want to talk about in this book, the reason I think it’s important to tell this tale now, is that there’s this whole range of what’s possible with facial recognition technology, and I think we can choose the uses we want and the uses we don’t want. I don’t think we have to accept ubiquitous facial recognition technology deployed everywhere. We can constrain it in ways where it doesn’t become very dystopian and very chilling for society … where it isn’t deployed on every surveillance camera in America so that you can be tracked anytime, wherever you are, and easily be picked up if you’re wanted, or have something embarrassing you do in a public space come back to haunt you.
Bob: I don’t know that there are many people in the English-speaking world who haven’t read some of your stories about the bad uses of this technology, but for those who haven’t, describe them.
Kashmir Hill: Okay, so I think some of the surprising uses … well, one, facial recognition technology does not always get this right. As one privacy professor put it to me, we are not all unique snowflakes. So there have been cases where face recognition has led to the arrest of someone guilty only of the crime of looking like someone else. That is a terrible outcome.
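(A hedged sketch of why that happens, with invented distance numbers: a “match” usually comes down to a distance threshold, and any lookalike whose face print falls under that threshold comes back as a hit.)

```python
# Why lookalikes get flagged: a "match" is usually just a face-print
# distance falling under a threshold. The numbers below are invented
# purely for illustration.
THRESHOLD = 0.6  # a common default; lower = stricter matching

candidates = {
    "actual suspect":     0.41,  # true match
    "innocent lookalike": 0.55,  # also under the threshold -> false hit
    "unrelated person":   0.78,
}

for name, distance in sorted(candidates.items(), key=lambda p: p[1]):
    verdict = "MATCH" if distance < THRESHOLD else "no match"
    print(f"{name:20s} {distance:.2f}  {verdict}")

# Tightening THRESHOLD cuts false hits but can also miss the real suspect,
# which is why a "match" should be treated as a lead, not proof of identity.
```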
We have seen private companies using facial recognition technology to identify and keep out shoplifters, but also, in the case of Madison Square Garden, using the technology to keep out the owner’s enemies. James Dolan, who owns Madison Square Garden, decided that he didn’t want lawyers coming in who worked for firms that had sued his companies. So they scraped the websites of these law firms, 90 law firms, and put thousands of lawyers on a ban list. Those lawyers can’t go to a Mariah Carey concert or a Rangers game or a Knicks game until their firm drops the suit. That’s really an intimidating use.
And I think that example is actually really important because of what it says about what laws can do. Madison Square Garden has banned lawyers with facial recognition at its venues in New York City … Madison Square Garden, Radio City Music Hall, the Beacon Theatre … but it isn’t able to do this at its theater in Chicago, because Chicago is in Illinois, which has a prescient law passed in 2008 called the Biometric Information Privacy Act. I tell the history of that law in the book. It says that if you are a company that wants to use an Illinoisan’s biometrics, including their face print, you have to get their consent or face a fine of up to $5,000 per violation. So lawyers in Chicago can get into the Chicago theater. They’re not going to have their face scanned and be turned away.
Bob: So it turns out it’s not hopeless to try to regulate or put guardrails around some of these technologies, it sounds like.
Kashmir Hill: It is not. And then there are other places, like China, where face recognition is much more widely deployed. It’s been used in oppressive ways, to identify protesters in Hong Kong. And then in really wild ways, to name and publicly shame people who wear pajamas in public … and in at least one public restroom in Beijing, there is basically a toilet paper dispenser equipped with face recognition, because they had a problem with toilet paper thieves. You have to look at this dispenser, it registers your face print, and you get a small amount of toilet paper. If you need more, you have to wait seven minutes.
So I think this is the scary thing about this technology. You really do need to regulate it, think hard about the potential uses, and rein it in, because otherwise you get this slippage where it’s just all over the place and used for all kinds of methods of control.
Bob: We are talking so much right now about AI and how it might create this dystopian future and how we might regulate it. I think this conversation is so important because that’s the future. Facial recognition is today. The Dolan example shows you what’s happening right now. So this is great practice for all the things that are coming down the pike, right?
Kashmir Hill: So many of the questions around generative AI echo the questions around facial recognition technology: what counts as public, and what right these companies have to that data. With facial recognition, our faces were collected without our consent; with generative AI, it’s collecting our artwork, our writings, our comments on Reddit.
And then these questions about transparency. What are these companies doing? Who has this data? Is the technology they’re offering free of bias? How accurate is it? It’s so many of the same questions. Hopefully we act on this before it really gets out of control.
Bob: You’re at Duke to give a presentation there. I know you’re a proud Duke graduate. Can you talk for just a moment about how your time at Duke helped shape your journalism?
Kashmir Hill: Sure. So I didn’t know I was going to be a journalist when I was at Duke. I ended up getting interested in journalism because I was studying abroad with the Duke program that at the time was in Florence and, just outside the city, in Sesto Fiorentino. I was there when September 11th happened, and I saw that story, and how the U.S. was responding, told through European news. It was so different from the American news, based on the conversations I was having with my Duke classmates who had stayed in the U.S. It made me very aware of how journalists can shape narratives and help people understand what’s happening in the world. And when I came back to Duke at the end of that, I took a media and politics class with a professor named David Paletz. It really put me on the course of wanting to understand the power of media and eventually become a journalist myself.
Bob: And I have one last question. I’ve always wanted to ask you. What is a privacy pragmatist?
Kashmir Hill: A privacy pragmatist is how I describe myself … because I look at these new developments, these new technologies, and I try to weigh the benefits against the harms and really demonstrate for people: here is how this information is being misused, here are the victims of what’s going on. I try to strike that balance, because there are benefits to these technologies, and I want us to be able to harness the good and not be stuck with the bad.
Bob: The name of the book is Your Face Belongs to Us. The author is Kashmir Hill. Kashmir, thank you very much for being here.
Kashmir Hill: Thanks so much, Bob.