Elon Musk has taken over Twitter and … well, a lot is going on. One really important development that might be overlooked in the noise and chaos since the acquisition is Musk’s new – and now delayed – attempt to change Twitter’s Blue Check system.
Verification online is a very, very tricky subject — one of these things where the problem is easy to spot but the solution is not. In this episode of Debugger in 10, I’m joined by Robyn Caplan. She’s currently a senior researcher at the Data and Society Research Institute and soon to be an assistant professor of Technology Policy at Duke University. We begin with a bit of a history lesson on Twitter’s Blue Checks. It’s a good reminder that many people have never really been happy with how they work.
You can listen by clicking here, by subscribing to Debugger wherever you get podcasts, or by clicking the play button below. If you are a reader — you can read the transcript below that.
Bob: Twitter Blue Checks, they’re, they’re going, they’re gonna cost $8 a month, maybe not yet, maybe soon. What’s actually going on with Twitter and Blue Checks?
Robyn Caplan: So right now we’re in a bit of an intermediary phase.
So basically Elon Musk, when he came in, he decided that he was going to take the blue check, um, that has been such a ubiquitous and important symbol on Twitter, and he was going to fold it into a product that Twitter already had that was called Twitter Blue. It was a subscription-based service that existed before.
It enabled Twitter users to gain access to lots of different features, like an edit button and an undo button, and things like bookmark folders. It was actually very useful. I was a subscriber. And he decided that he was going to fold the blue check mark in to be part of that. Part of that was an ideological goal that he had when he bought the site, where he was kind of starting to see how media organizations were using it, how the site itself was kind of becoming a legitimizer or an intermediary point for all of these different user groups. And he kind of rightly saw that verification, in my view, had become the product of Twitter. What he incorrectly assessed was that it was something he would then be able to sell to Twitter subscribers without consequence, without really changing its meaning in the process.
So what he did was that he made it a feature that you could buy, made it available to anybody who was willing to pay $8 a month, but in the process, really changed the meaning of that blue check mark.
What ended up happening was really exactly what was expected: Twitter users paid that $8 and then began to really mess with the site in very amusing ways. They pretended to be these major organizations and these major brands. They posted things that were sometimes quite funny, but definitely presented a lot of problems for these brands. And now what Elon Musk has done is that he has suspended the program. He has shut down Twitter Blue subscriptions, and we don’t really know what’s gonna happen next.
He said he’s going to relaunch the program on November 29th and, quote unquote, fix the problems. But we don’t know what those fixes look like yet.
Bob: And so the great irony of adding verification made it easier for imposters, right?
Robyn Caplan: It made it easier for imposters. Exactly. So you actually see — I’ve been doing research on the verified badge system for about a year now, looking at the history of how these programs were introduced and why. In most cases — and Twitter was the first social media platform to actually introduce the blue check — they were introduced to kind of solve some of these early concerns. You didn’t see the same sort of chaos that you see now, but the reason why they were introduced was because of concerns from high-profile users — particularly celebrity users — that they could be too easily impersonated on these sites, and that was causing problems for them and their public relations teams. There was a really early lawsuit that didn’t really go anywhere, but played a pretty big role in the rollout of the Twitter Blue Check, and it was between Twitter and the manager of the St. Louis Cardinals. Sorry guys, I’m not a baseball fan, and this is basically as much as I know about baseball … this guy, Tony La Russa, sued Twitter because there was an impersonator who had used his name and was posting very offensive things on Twitter. And at that same time, a lot of celebrities were bringing up the same concerns. So Twitter rolled out the Blue Check feature to really address some of these issues initially.
Bob: And lots of other companies followed. Right? So Facebook has a verification system, all sorts of other, um, websites, membership tools, right? They’ve all added something. Well, not all of them, but many added something similar, right?
Robyn Caplan: Exactly. So verification has really become the standard across the platform industry, across a range of different services. For social media, it wasn’t immediately following Twitter — it was actually a couple of years — but all of the other platforms start integrating verification processes as well to respond to very similar concerns. So between 2012 and 2015, you see Facebook and Instagram and Snapchat all start to bring in verification as a feature, because they were all trying to vie for these celebrity users, and they all needed to give them some guarantees. They also needed to address the concerns of users who happened to share the same name as the celebrity users, because, when you’re on a platform, those two accounts are coming in from a very similar position. Ostensibly, users were going online and they were looking up the name of a celebrity, and they would see somebody else. And they might follow them even if that person just happened to share the same name.
The other thing that happened was that when these platforms started moving more squarely into the advertiser business, platforms needed to start giving some assurances to brands that they also would not be impersonated. They were asking these companies to spend money on their sites, and in return these brands wanted some assurances that somebody wouldn’t be able to come in and dupe their customers with fake products and things like that. And what ended up happening, and what’s really interesting about verification, is that that kind of created this whole model of verification as coming through things like brand partnerships, as emerging through these relationships between these bigger organizations and the companies themselves.
And in some cases, like with Twitter, we actually saw that really being used in kind of funny ways. So Twitter in 2012 started doing this practice where they were using the blue check kind of in a ransom-like way against brands. So, they said to brands, basically, if you don’t spend $5,000 on our website every month, we can take that blue check away.
So what’s happening now? The blue check as a condition of some sort of financial transaction with the platform is not entirely new within Twitter’s history. It is new in the sense that the paid blue check, as envisioned by Elon Musk, does not include any of the same features of verification that we have seen in the past. It doesn’t require you to prove your identification. It doesn’t require you to prove that you’re notable, that you have any sort of cultural relevance in any way, that you are a member of an important organization. And so it doesn’t have any of the features of a verification program. It just has that blue check symbol.
Bob: But it was very opaque, right? I mean, there were people who didn’t know how to get verified or why they got verified. It just sort of happened in some cases.
Robyn Caplan: Yeah. And that’s a big concern, a complaint with these systems, and it has been for some time. You know, I think at various points in time the platforms themselves were not entirely clear about why they were verifying some accounts over others. There has been a perspective, from both outsiders and within the platforms themselves, that verification can mean a special relationship between an organization and a platform, and that could be problematic, especially because these platforms are often operating globally. A publication that is operating out of India may have a harder time getting verified than a publication that is operating out of the United States, because the platform worker might recognize the publication from the United States but might not recognize the publication that is coming out of India. So there are lots of different ways that inequity becomes built into these systems. And then there are actually much older ways that inequity can become built into these systems. A lot of the time it’s actually built on older notions of notability, which can carry forward past forms of media bias and issues with media diversity.
So a platform like Twitter, in the pre-Elon era, really used to build its notability standards on Wikipedia’s guidelines, and Wikipedia’s guidelines built notability both on what media would cover and on what the editors at Wikipedia thought was important or of public relevance. And those themselves have been shown to have pretty significant gender, racial, and cultural biases because of the different systems they’re pulling from in creating their guidelines.
So yeah – there are the older, more predictable ways of seeing how power gets reinserted. And then there’s a significant amount of randomness and opacity that’s added in, which is more unique to the platform era, where we just have very little insight into how these companies are making these decisions and why.
Bob: So you can see the attraction to someone saying, right, we’re gonna professionalize this, you pay for it, and then, you know, we have a credit card in your name or whatnot. And so we know who you are. Why did that go so wrong?
Robyn Caplan: I mean, it went wrong for a variety of different reasons. So, firstly, the presumption that was made, which was that we will know who you are because we have a credit card in your name, may not have been a completely correct presumption. I think the logic that he was operating under was that, if somebody pays $8 and will lose that money if they are being false, that will be enough of a motivation. So basically they were saying, we wanna make this act of impersonation have a literal cost, so it’s too high of a cost for people to engage in. But it was $8 a month, so it was not a high enough cost that people weren’t willing to lose it.
Bob: I feel like we just set a price on sarcasm, right? That’s good to know.
Robyn Caplan: Yeah, exactly. We set a price on sarcasm. Secondly, it actually didn’t have those same identification features. And what is special about Twitter and what is really important to preserve about Twitter is that Twitter has never had a real name policy. You do not have to have the same name on Twitter that you do in real life. So your name on Twitter has never had to match something like your credit card information. So the credit card itself is not enough of a verifier because it actually has no relationship with the username. So somebody can put in a credit card and then they can change their username to absolutely anything that they want it to be. And, that’s what was happening.
And that is, you know, in effect I would say a very good thing. You don’t want there to be a system where everybody has to be verified all the time, because that can have some pretty extreme consequences for things like privacy and freedom of expression. But it did mean that there was a missing link in the program that Elon Musk had designed as he envisioned it.
Bob: So that leads me to the grand question here, which is, is verification a good thing? I mean, anonymity is a good thing too, right?
Robyn Caplan: Exactly. Anonymity is a good thing and verification is a good thing. It is. Verification is … part of the Internet’s infrastructure at this point…the internet is a space where different users converge, whether they are organizations or individuals, and there is some value in having a symbol or having infrastructure on the back end that might signify that a particular user — whether it represents an individual or an organization — has a special relationship to the public because these are spaces where mass media and participatory media are converging. These kind of two logics are happening at once and verification enables us to hold that in suspension. As it has been implemented by platforms thus far, I think there is a lot of work that needs to happen to not reproduce the same problems that we had with old media where you had some voices that were amplified and prioritized at the expense of others.
What was interesting about what Twitter was doing with their program pre-Elon was that they were actually taking some steps to mitigate that. They were taking some steps to look for people who were important within their communities and verify those people. And that is really important in a lot of different contexts. It’s important for communities to feel like they are recognized, that their expertise is recognized, that their values are recognized. It’s important in times of public health crisis, where people might not trust a centralized authority, but they might trust people who are important and vocal within their specific communities. So that is very important. At the same time, there was still a lot of room for improvement with that system. The standards that they had set for people to apply for verification were often quite arbitrary and random. So to be recognized as an activist, you had to be in the top 0.05% of accounts within your region. And as we can kind of imagine, that has very little to do with whether or not somebody is an activist. It also means that when you’re living in a high-population area where there may be more Twitter users, a place like New York City for instance, it’s much, much harder to get verified than if you’re living in a smaller locale. And then additionally, like I mentioned before, there are a lot of global discrepancies in how verification takes place. We now know it really provides a very important function to organizations as they operate within these spaces, and to the people who work for them. It’s really, really important to journalists in particular. But it’s also important to users who are trying to tell the difference, or trying to make sure that they’re interacting with the right account.
At the same time, it is not something that you want to make a default for the Internet, because verification often does include some components of identification. So it is something that is very necessary for a subset, and it should be available as an option for a much larger swath of people than it is currently available to. But it is not something that we should be seeing as the default for everybody online.
Bob: I know you’ve been studying this for years and people have been working on this problem for years. Can Elon fix this in the next 12 days or so?
Robyn Caplan: I mean, absolutely not. Absolutely not. This is something that has been developed very slowly over time. It’s also something that requires a lot of resources. Doing verification right requires having people who understand the cultural dynamics of where the platform is being used and where people are located, and Elon keeps firing his employees or engaging in strange games with them, such that they are either fired or have resigned. It’s very unclear at this point … there were newer revelations today, which mean that he does not have the staff to be able to do this. Verification is something that you can automate to a degree, but for the most part it needs to be done by human beings. And he doesn’t really have many human beings left.
Bob: Well, I wish him luck in trying. And now, thank you very much for being here.
Robyn Caplan: Thank you so much for having me.