After the ‘Sandy Hook of disinformation,’ can the Internet be fixed?

INTRODUCTION: 

As cries of censorship, “de-platforming” and domestic terrorism wash over the Internet, it’s time to have a long-postponed conversation about free speech and moderation in the age of the digital megaphone. There are no easy answers. Donald Trump has pushed many Internet-age values to their limit. Duke University’s Phil Napoli has called the events of January 6 the “Sandy Hook” of the disinformation crisis. Tech firms have been pushed into drastic action they seem never to have contemplated — in one fell (coordinated?) swoop, they de-platformed the current president of the United States. Like it or not, this precedent-setting action will have consequences far beyond this week, and this presidency. Jack Dorsey, Twitter’s CEO, is clearly uncomfortable with the power he seems to have wielded:

“I feel a ban is a failure of ours ultimately to promote healthy conversation. And a time for us to reflect on our operations and the environment around us,” he wrote on Twitter. “Yes, we all need to look critically at inconsistencies of our policy and enforcement. Yes, we need to look at how our service might incentivize distraction and harm. Yes, we need more transparency in our moderation operations. All this can’t erode a free and open global internet.”

He seems to be begging for help figuring all this out. So, let’s help him. What should tech companies do now that they have broken the “in case of emergency” glass? For this dialog, I’ve enlisted the help of Duke University’s Phil Napoli, Ken Rogerson, Jolynn Dellinger, and David Hoffman, along with Tim Sparapani, former director of public policy at Facebook.

(If you are new to In Conversation, I am a visiting scholar at Duke University this year studying technology and ethics issues. These email dialogs are part of my research and are sponsored by Duke University’s Sanford School of Public Policy and Kenan Institute for Ethics. See all the In Conversation dialogs at this link.)


FROM: Bob Sullivan
TO: Ken, David, Phil, and Tim

This is a complex problem — it seems to be a collision of all the Web’s big issues, from Net Neutrality to anonymity to privacy to academic freedom to disinformation to Section 230 — and we won’t solve it in one email thread. But it’s surely time to start the effort. For readers, here’s a short primer I wrote on what I think the core issues are: The ‘de-platforming’ of Donald Trump and the future of the Internet.

I realize the question is unwieldy, so how about we start here: What are one or two things tech companies should do now to move forward from this moment?


FROM: Tim Sparapani
TO: Bob, Ken, David, Jolynn

These are just basics, Bob, but the basics matter here. Tech companies can do a few things immediately. These include having a zero-tolerance policy for all users of their services who violate the community standards of the site or service. Why give elected officials extra latitude? We’ve learned that those leaders will just abuse that privilege, so treating all users alike and then policing those community standards will surely reduce the misuse of any platform to push dis- or misinformation.


FROM: Ken Rogerson
TO: Bob, Phil, Jolynn, Tim

There are so many issues to address right now. One of the biggest tensions is content moderation: policymakers and citizens are calling for it without completely understanding its complexity, while companies are trying to show that they are doing it but broadcasting the [intended or inadvertent?] message that they are only doing the minimum in order to appease critics.

It is clear that perfect content moderation (sometimes called content moderation at scale) is probably a pipe dream, at least in the near future. Over the past year, some companies have tried to be more transparent about their processes, with varying degrees of success, probably more failure than success. A few things that organizations (both private and public, from my perspective) can do:

First, bend over backward to acknowledge this. Tell us they are trying but that, right now, this is a very imperfect science. They can tell us when they succeed (go PR people!) and also when they fail. Take responsibility for the imperfect nature of the activity. I agree with Tim: keep doing it. Keep removing people/organizations for violation of policy. Put them back if it was in error, but make the appeals process very open and as easy as possible. This is hard, but needs to be done.

Second, hire people. Hire both for human intervention and for writing code with the goal of improving the algorithmic intervention. It is only the balance between these two things that will improve the situation. Neither the humans nor the code alone will help. Code by itself is biased. There are some frightening stories about the people who do this type of work on the human side. It takes an extreme emotional toll. Acknowledge this and create some plans to address it.

Finally, work more with other groups: journalists, non-profit fact-checkers, other groups who are concerned about dis/mis-information. Create partnerships with them. Listen to them. Allow access to the decision makers in the organization. There are some very important ideas and projects out there that can help.


FROM: David Hoffman
TO: Jolynn, Bob, Ken, Tim

I definitely also want to know what Prof. Dellinger thinks on this topic.

My .02

First, I don’t think we should call them tech companies. There are several other terms that would be more appropriate: “behavioral modification platforms,” “advertising delivery services” or “algorithmically fueled radicalization engines” (OK, that last one might be too long to catch on). There are five things I would like to see these companies do immediately:

  1. Jointly fund a non-profit to create an industry best-practice content moderation policy and to stand up a content moderation escalation process and decision-making body. Create a multi-stakeholder governing board for the non-profit including academics, civil liberties advocates, anti-hate speech advocates and former senior consumer protection officials. They should make certain the board has technology ethics expertise and optimizes for gender and racial diversity.
  2. Publicly commit to follow the decisions of the non-profit escalation process, and agree to be subject to U.S. Federal Trade Commission authority under Section 5 of the FTC Act for unfair or deceptive trade practices if they violate their public commitments.
  3. Provide the non-profit with considerable funding (at least $5 billion) to invest in research to improve automated content moderation tools and academic investigation of the right amount of human oversight to make certain such tools do not create additional risks for individuals.
  4. Make a public commitment to investigate whether content delivery algorithms are having the effect of radicalizing individuals and, if so, to take sufficient steps to remedy those effects.
  5. Publish quarterly detailed and anonymized transparency reports about the number of requests for content moderation, the decisions that were made, the progress being made towards automated content moderation, and the degree to which efforts to combat radicalization have been successful.

FROM: Jolynn Dellinger
TO: David, Bob, Ken, Tim

Moving forward, companies should be completely transparent about the efforts they are making to moderate content, increase public awareness about the actual process, and involve people who have actually done the moderating work in the conversation about where to go from here.

Historically, the content moderation process has been characterized by opacity and confusion. I agree with Ken that policymakers and citizens are calling for something the complexity of which they don’t completely understand; the process has been conducted in the shadows to an extent that would make such understanding elusive. The focus of your question seems to be disinformation, but I think we should not lose sight of the fact that major tech companies are also moderating pornography, child sexual abuse and exploitation, animal abuse, terrorist activities, war-zone footage, suicides, and self-harm. This work is complex and simultaneously tedious, desensitizing and draining, and potentially psychologically devastating to the people who do it. The conditions under which people have been employed to do this work have not received as much attention as they should. Many companies have chosen to outsource this work to people in other countries, like the Philippines, or to hire contract workers to accomplish it, often young people, often without benefits, creating a kind of second-tier status within companies for people doing this difficult work.

It is possible that people are under the impression that all of this work can just be automated using algorithms, but as I understand it, that is not yet possible. And it may never be possible to delegate all of this work to machines given the cultural sensitivity, appreciation of context, and nuance that can be required in the decision-making process – especially in the areas of hate speech and disinformation. Humans are required. All of us who use social media are benefitting substantially every day from the work of people we barely know exist. The price of a moderated internet experience is higher than we may realize. Whatever else we may say about free speech, it does not come without a cost.

I think one of the main challenges for content moderation — no matter what form it takes in the future — will be the practical implementation of the gargantuan task. Who is going to do the actual work? How will they be trained and compensated? Is it even possible for human beings to do this work and remain undamaged by it? What standards will we ask them to apply to determine who gets to be heard and what they are allowed to say?  Our demands for better moderation must, at a minimum, be informed by the realities of the work, and more transparency may help get us there.

For more information on this topic, I highly recommend Sarah T. Roberts’ excellent book Behind the Screen: Content Moderation in the Shadows of Social Media, and the 2018 documentary The Cleaners: https://www.pbs.org/independentlens/films/the-cleaners/



FROM: Phil Napoli
TO: Ken, David, Tim, Bob

I think I’d want to echo David’s first suggestion, regarding the need for some kind of multi-stakeholder governing board.  The process of determining the types of content that are/are not permissible will always lead to controversial outcomes. The best we can do at this point is generate a governance structure for reaching those outcomes that is to some degree removed from the glaring problems of those decisions resting in the hands of either a government agency or the unilateral whims of the platforms themselves. A year or so back I laid out a proposed model that draws upon the Media Rating Council, the body that oversees the audience measurement industry. This body would engage not only in developing best practices, but also would conduct regular audits of curation algorithms.

One question I’d like to hear the lawyers in the group address: Amidst all of the discussion of the possible utility of modifying or eliminating Section 230, I’m still grappling with the question of how effectively the removal of the civil liability shield would address the disinformation problem. So, for example, yes, Dominion could sue the platforms over the claims that its machines had been tampered with; but would it be the case that more general, non-specific claims of fraud could continue without triggering platform liability?

Finally, I think the one stakeholder category that has largely received a pass in the wake of January 6th has been advertisers. A study released last week showed the extent to which major brands are still supporting (sometimes unbeknownst to them, due to how ad networks and programmatic media buying work) sites that are known creators and disseminators of disinformation. Amidst all the blame that is being thrown around, advertisers have gotten a bit of a free pass so far, and there needs to be much more organized pressure on them to stop providing the oxygen for these outlets (some shareholder groups apparently began applying pressure a couple of months back). Here’s a thought that just occurred to me: given the lower level of First Amendment protection that commercial speech receives, would it be conceivable to imagine some kind of advertising regulations that prohibit advertising on outlets that have been verified by some third party (maybe the multi-stakeholder governing board described above) as disinformation purveyors? Or is that just a First Amendment non-starter?



FROM: Bob Sullivan
TO: Ken, David, Tim, Jolynn

My 2 cents:

There are many good ideas in this thread. It’s my belief that the only thing lacking is the social and political will to force tech companies to try them. That’s why I think this moment is so important. We can’t let it pass, or get bogged down in edge cases that might make these ideas less than perfect. Let’s get started, today. This is my attempt to provide a bit more fuel for that fire.

I will never for the life of me understand why a company will spend millions or billions of dollars building a platform and forging a reputation only to turn it over to anyone and everyone with a keyboard. Well, aside from those who get to take the money and run. Here’s my personal story about this.

I was an early “mainstream media” blogger, back when blog software offered the option to pre-approve commenter contributions before they were published. I never published a single comment without reading it. While many journalists found this cumbersome, I loved it. The readers were great, many times writing responses far smarter than my original entry. The trolls, misogynists, and the merely boring I skipped over. This made my blog a rich experience. It also made the trolls give up and move along to other places. I was lucky to have a large platform, won after years of work on my beat, and my pieces sometimes attracted thousands of comments, so time constraints meant many comments just never saw the light of day. That was OK; I thought of my blog as a modern-day letters-to-the-editor page.

Then one day, my company bought a firm that made blog software with no pre-approval option. That company was devoted to the notion of Craigslist-like “community enforcement” — anything goes, until users start complaining. I held out as long as possible but eventually lost my ability to pre-approve comments. The conversation predictably suffered.

I remember the day I made my final stand on this issue. “I need the right to control what is published under my name, and under our company’s name,” I argued. “Why would we cede that to just anyone?” “Who are you to make such a decision?” came the answer from the programmer. “I’m the journalist. I’m the professional,” I answered. “So what?” was the reply, in so many words. The user was most important, he believed. And, in reality, he needed a scalable solution, and editing-by-hand wasn’t it.

This debate has been happening in some form across the Internet for decades now, and we’ve been making the wrong choice over and over again. Why would a newspaper spend decades building credibility, then let a random person write the front-page headlines? Why would a company spend years making great software, then stand by and watch it be used as a tool to make people believe the Earth is flat? Money, sure, but there is more than that going on here. Our logic and judgment have become clouded by the idea that all creations are equal, all contributions are equal, and no one has the right to make adult decisions about anything. It’s a seductive (and often profitable) philosophy, libertarianism drunk on Silicon Valley ambition. I like to joke that if you rearrange the letters in libertarianism, it spells “That’s not my problem.” A convenient philosophy when you need a scalable solution.

But this is a dead end. Difficult choices can be hard to make and should always be subject to review and revision. But they must be made. Adults do that. Imagine if those who worked on nuclear power had similarly decided to “punt” on how it was to be used. The events of Jan. 6 show it is long past time to make hard choices about what our amazing software platforms can be used for, and when their use should be constrained. We should not recoil from this responsibility; we should accept it. We made this technology; we have to fix it.

Should Facebook be legally responsible for every single thing someone writes on its platform? No. But should Facebook have enough staff to remove dangerous content after it receives a few credible complaints about it? Absolutely. Countless tragedies happen every day because companies like Facebook don’t have a phone number to call in case of emergency, literally and figuratively. When products have defects that cause harm, companies can no longer knowingly sell them without liability. There is no reason tech firms should avoid such liability. There is no right to sell dangerous toasters; nor is banning dangerous toasters an infringement on free speech.

Pre-approving comments on a blog is time-consuming. Monitoring social media and other tech platforms for safety will be very expensive. Perhaps it’s not scalable. That’s OK. If these tools can’t be made safe, then their business model is fatally flawed and they deserve to fall into the digital scrap heap of history.
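To make that concrete, here is a minimal sketch, in Python, of the kind of complaint-triggered review queue described above. The threshold, the trust weights, and every name in it are invented for illustration only; this is not any platform’s actual policy or code.

# Illustrative sketch only: the threshold, "credibility" weights and names
# are made up for this example, not any platform's real policy or system.
from dataclasses import dataclass, field

COMPLAINT_THRESHOLD = 3  # hypothetical number of credible complaints

@dataclass
class Post:
    post_id: str
    text: str
    complaints: list = field(default_factory=list)
    status: str = "published"

class ComplaintQueue:
    """Holds posts that have drawn enough credible complaints for human review."""

    def __init__(self):
        self.review_queue = []

    def file_complaint(self, post: Post, reporter_trust: float):
        # Weight each complaint by how credible the reporter has proven to be.
        post.complaints.append(reporter_trust)
        credible = sum(1 for trust in post.complaints if trust >= 0.5)
        if credible >= COMPLAINT_THRESHOLD and post.status == "published":
            post.status = "pending_review"  # pulled down pending a human decision
            self.review_queue.append(post)

    def human_review(self, decide):
        # 'decide' stands in for a human moderator: returns "restore" or "remove".
        while self.review_queue:
            post = self.review_queue.pop(0)
            post.status = "published" if decide(post) == "restore" else "removed"

# Usage: three credible complaints route the post to a person, not an algorithm.
queue = ComplaintQueue()
post = Post("p1", "a dangerous claim")
for trust in (0.9, 0.8, 0.7):
    queue.file_complaint(post, reporter_trust=trust)
queue.human_review(decide=lambda p: "remove")
print(post.status)  # -> "removed"

The specifics don’t matter; the point is that “a few credible complaints route the post to a human being” is a simple, buildable mechanism, and the hard part is paying for the humans at the other end of the queue.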

Americans, thankfully, have a right to free speech. But they do not have a right to free reach, as Stanford’s Renee DiResta has written. I’m happy to host intelligent comments on my blog that move the conversation forward. Trolls, racists, and reality-deniers must find their own place to speak.

