Can you learn to trust tech again? How do we close the ‘distrust gap’?


Americans have become less trusting in recent decades; plenty of surveys agree on that. Yet trust is a funny thing. People say they don’t trust tech companies, particularly social media companies, but they hungrily gobble up tweets and Facebook updates all the same. Still, lack of trust in … seemingly everything … is hurting us, a break in the social fabric that the Covid-19 pandemic has clearly exposed. If we don’t fix it, we risk falling into what economist Alex Tabarrok of George Mason University calls a “distrust trap.” Markets require trust to function. Lack of trust generates expensive overhead and stifles innovation. It leads to extensive regulation and encourages black markets.

It’s a big problem, best tackled by breaking it into smaller chunks. So let’s talk about tech’s problems.  By one measure, 40% of Americans say they don’t trust Facebook and TikTok, and 30% say they don’t trust Chinese telecom equipment maker Huawei. About 20% don’t trust Google; slightly fewer people say they distrust Apple and Microsoft.  Chinese consumers have similar, but reversed, sentiments. Whatever the reason for this, distrust isn’t good for these companies, or consumers, or for governments trying to regulate them. In one version of the future, each nation will silo itself with hardware and data produced domestically, cutting citizens off from best-of-breed global technology and innovation.  What a waste.

In a recent talk at Duke University, Ben Wittes — Senior Fellow in Governance Studies at the Brookings Institution and editor in chief at Lawfare — discussed tech’s trust deficit. He urged listeners to take a moment and imagine what companies like Huawei would have to do to earn their trust. You can listen by clicking play below.

If you are anything like me, that question stopped you dead in your tracks, so I want to draw out the idea in this edition of In Conversation. Wittes is here, and so are Duke University Prof. David Hoffman, Atlantic Council fellow Justin Sherman, Intel’s Claire Vishik, Audrey Plonk from the OECD, and former Homeland Security assistant secretary Paul Rosenzweig.

(If you are new to In Conversation, I am a visiting scholar at Duke University this year studying technology and ethics issues. These email dialogs are part of my research and are sponsored by the Duke University Sanford School of Public Policy and the Kenan Institute for Ethics at Duke University. See all the In Conversation dialogs at this link.)

FROM: Bob Sullivan
TO: Ben, David, Justin, Claire, Audrey, and Paul

I’m worried an entire generation of tech innovations could be lost to this distrust trap. But there’s no easy fix. Trust must be earned over the long haul, and can be lost in an instant. Still, tech companies must get started. Since I find myself a bit at a loss to say how Facebook and Twitter, let alone Huawei, can earn my trust, what do you suggest tech companies can do in the next 12 months or so to start down the road to trust recovery?

FROM: Paul Rosenzweig
TO: Ben, David, Justin, Claire, Audrey, and Bob

I think that trust is not a singular concept.  It is contextual and it also varies from person to person and enterprise to enterprise.  My risk preferences for, say, tech may be different from my risk preferences for bridge safety and both, in turn, may be different from yours, Bob.  Meanwhile, my preferences as an individual are different, yet again, from the preferences of an enterprise (like Duke University) much less from those of a government.  As a friend of ours, Herb Lin, put it the other day in a conversation: “would you trust a pencil made in North Korea?”  I might say “yes”; you might say “no”; and the US government might say “not in an ICBM launch silo.”

And then, too, the problem is not unitary – trust has many dimensions.  Here’s something that I wrote with Claire last year on the topic that captures the dimensions idea:  “First, [issues of trust] implicate questions of technical capacity and security – how are we to know that the manufacturers of a hardware or software system have designed and built it in a way that is secure against error, mistake, natural disruption, or deliberate external misconduct?  In other words, has the manufacturer performed competently? And has it done business only with suppliers that have performed competently?

Second, is the question of corporate intent – how are users to be assured that manufacturers have not constructed and marketed a system that affords the manufacturer privileged access and control?  In other words, is the software or hardware intended to benefit the end user or does the manufacturer see a value to be gained for itself from the design?

On yet a third axis of inquiry, it is a question of politics and law – what protections exist against state-level intervention into the manufacture or operation of an ICT system? Are the flaws in the system such that some third-party, for either well-meaning or malicious reasons, can benefit from the gaps in construction?”

That is how I would frame the question.

FROM: Claire Vishik
TO: Ben, David, Justin, Paul, Audrey, and Bob

There are two problems with the concept of trust that make it so difficult for the diverse stakeholders to come to an agreement: the definition and the single domain approach.

The issue of definition is clear: in popular and research literature, and even in standards and regulation, definitions of trust applying to the same ecosystem range from strict and technical to fuzzy and emotional. As a result, trust is perceived as many things, including computing architectures based on a root of trust, technology lifecycle processes that are expected to engender trust, broad concepts that defy definition, like “users’ trust in the digital economy,” and users’ trust in content, data, a political system, or a government. Multiple surveys, e.g., the annual Ipsos survey, measure the even higher-level notion of “trust in the Internet.” The concept of trust is thus not only contextual (which is normal), but also so broad that it can be used to describe a wide range of unrelated events and approaches.

The single-domain approach stems from this lack of clarity in the definition and from the fact that it is easier to describe trust in relation to one area, such as the software development process, data quality, content provenance, regulatory space, political system, or hardware architecture. To simplify the problem: if a software development process appears to be “trusted,” but the hardware architecture or provenance is not clear, would we consider the resulting deployment trusted? And, in a different example, if some software on a platform is trusted, but other software applications cannot be attested, what level of trust should be afforded to the platform? Or, as indicated by Paul, how is trust, even if potentially attestable, affected by irreconcilable differences in geopolitical systems? The situation is even more complicated when trust is a composite measure of safety, security, privacy, and reliability, as is the case in almost all modern applications, e.g., autonomous cars. The technology community coined the term “trustworthiness” to address the multi-domain nature of trust, but the metrics associated with this concept are still being defined.

In each “trust domain” there is an ideal situation that has been identified based on threat models and the nature of technology (or usage models for non-technical domains).  For instance, with regard to technical trust, the ideal objective may be to rely on the root of trust,  identify the risk level, activate necessary mitigations, and apply them in real time. But even if all the domains reach an unrealistic goal of supporting the ideal situation, it doesn’t mean that the combination of the ideal conditions for separate trust domains will create the ideal situation for the ecosystem as a whole. For example, a technically trusted system may not be acceptable to the users or governments, for additional reasons.

So, what can companies (I assume technology companies in the context) do to gain trust? Several things:

  1. Understand and define contextual trust and trustworthiness for their space, work with stakeholders to develop a joint understanding of the subject, and make this definition a de facto norm in the area.
  2. Use multi-domain approaches to include technical, process-oriented, regulatory, and, when necessary, societal parameters, in order not to focus on one small element of trust as a proxy for the ecosystem trustworthiness.
  3. Develop or use relevant international standards and best practices to ensure a level of transparency and compatibility across the ecosystem.
  4. Understand the international regulatory and geopolitical environments that affect the big picture, from export control to privacy regulations to political differences.
  5. Support big picture research in trust and trustworthiness that will yield a new paradigm in the space.
  6. Quickly and adequately respond to the evolution of the legal/regulatory frameworks, technology environments, and use cases.

Or, even shorter:

  1. Understand and be able to explain their context of trust and trustworthiness.
  2. Aim at building the big picture, not narrow details, and be able to justify the characteristics of the big picture beyond its compliance with principles.
  3. Define foundations of trust and trustworthiness acceptable to diverse stakeholders that can evolve with the computing, geopolitical,  and regulatory ecosystem.

FROM: Justin Sherman
TO: Ben, David, Claire, Paul, Audrey, and Bob

Paul and Claire make excellent points. I especially want to highlight Claire’s comment about a single-domain approach to trust, and I’ll offer a few thoughts on this in a context broader than security. When faced with a problem of trusting a tech company, it can be tempting to turn to purely or predominantly technologically based solutions (or “solutions”) to remedy that trust issue—whether that be vulnerability testing and other transparency mechanisms into a particular Chinese manufacturer of critical infrastructure technology, or technical controls over data so that it’s allegedly more difficult for a company like Facebook to use it in ways the user would find harmful, deceitful, or abusive. Or why not just put it on the blockchain? (I’m kidding.)

There are a couple of issues with this approach of pursuing purely technological solutions to technological problems. One, it distracts from the many other possible trust mechanisms that exist in other domains, as Paul and Claire have highlighted, such as increased corporate transparency and new consumer data privacy regulations. I would argue, for example, that robust federal data privacy protections for US citizens and consumers are an essential component of being able to trust a technology company’s use of my data—because the lack of those safeguards presently gives private American firms enormous freedom to do essentially whatever they want with my information (buy it, sell it, analyze it, use it to micro-target me with political ads, etc.). Focusing on just the tech therefore diminishes our ability to draw up comprehensive responses to issues of tech distrust. With Huawei, for example, I don’t think we can talk about technical controls for trust without also talking about Beijing’s authoritarianism and the lack of an independent judiciary in China.

Two, a focus on purely technological solutions is often to the advantage of technology companies that prefer the solution to a trust problem they created or exploited to be “more of our technology,” rather than “new regulation,” or “more transparency,” or antitrust enforcement that allows more competition, or fundamental changes to their business practices. It’s like when AI-driven companies say the answer to their sale of racist facial recognition systems, weaponized predominantly against historically oppressed and otherwise marginalized communities (e.g., look at police uses of facial recognition last year against peaceful Black Lives Matter protests), is collecting even more data on other people to “eliminate bias in the dataset.” As with many techno-solutionist approaches to problems, it presents technology as forever the answer to harms that technology caused, rather than considering possible technological remedies in context with other and/or broader political, social, economic, and process-oriented changes.

FROM: David Hoffman
TO: Ben, Justin, Paul, Audrey, Claire, and Bob

There was a time when we spoke about trust in technology as something an individual could personally measure to determine whether they would want to use a particular device or solution. Those times are now long past. Individuals now depend upon “trust intermediaries” to determine whether they will use technology. For example, individuals can have no reasonable idea whether they should trust the hardware, software, and services they use on their cell phones. The best they can hope for is that they can trust the company that is assembling those elements together and placing its brand on the device. Increasingly, we see these companies communicate to end users some context on what they should be trusted for. Companies do not advocate that these devices should be trusted to never have cybersecurity problems. Instead, they communicate that they will promptly address cybersecurity issues when they arise and will send software patches to mitigate the risks. We continue to evolve these best practices on vulnerability management, and other trust intermediaries (academics and vulnerability researchers, for example) offer advice on which companies are in line with these evolving processes.

We are also starting to understand when we should trust technology on privacy issues. We see companies that have repeated privacy problems, and increasingly we see trust intermediaries such as government regulators take action against them. However, there is an area where we have not made as much progress: the degree to which a company is likely to provide assistance to a foreign government to put an individual or another government at risk. We have not developed clear measures of what an individual should expect from a company in this area, and we do not have a set of principles for what oversight and controls are necessary to make certain that governments only require assistance for legitimate law enforcement and national security purposes.

However, we do have a body of work to begin developing those expectations, including an understanding of the importance of checks and balances in a divided government, the right to bring disputes before an independent judiciary or tribunal, transparency of law enforcement/national security activities to government overseers elected by the people, and oversight of companies to make certain that they follow through on their public commitments (such as both Section 5 of the FTC Act and SEC oversight of publicly traded companies in the US). These elements create the foundation on which we can build a framework for trusting technology. More work is necessary to build out that framework to apply to the global supply chain that comprises our digital infrastructure, so that companies that want to be trusted can publicly declare that they are only subject to governments who meet the standards of the framework, and that the companies will subject themselves to robust, harmonized, and predictable enforcement to hold them accountable and thereby “trustable.”

FROM: Audrey Plonk
TO: Ben, David, Justin, Paul, Claire, and Bob

I would say that trust in technology is a global challenge; many countries and cultures are struggling with how to create more trustworthy digital economies and societies. In announcing new work focusing on one of the many complex aspects of trust, the Committee on Digital Economy Policy at the OECD stated that “establishing trust and minimizing disruptions in data flows is a fundamental factor in reaping the benefits of digitalization.” This work will focus on identifying common practice among OECD member countries and the EU on the sensitive issue of how governments access personal data held by the private sector, a longstanding trust issue for governments and industry alike.


FROM: Ben Wittes
TO: Audrey, David, Justin, Paul, Claire, and Bob

I want to suggest that trustworthiness is easy to describe in concept and very difficult to measure in practice. Conceptually, a product is trustworthy if it (a) performs as expected in some useful task and (b) does not create risk that is greater in magnitude or severity than the risk it mitigates or the problems it solves. The practical difficulty in measuring trustworthiness is twofold: first, it can be incredibly difficult—particularly with complex, dynamic products that are constantly being updated—to develop confidence that the product does, in fact, perform as expected under a wide range of circumstances; and second, the fact and magnitude of the risks a product may generate may be altogether opaque.

In some situations, it is fairly easy to do the mental math of trustworthiness. Every day, for example, we decide not to click on things or not to install a piece of software from a company that seems a bit shady. In many more circumstances, however, it is not easy. That’s true for consumers. It’s true for enterprises. And it’s true as well for governments, both in their capacity as product users and in their capacity as regulators of other entities’ use of products.

One problem is that trust is highly contextual. I might well trust a product on a burner phone with no sensitive material on it that I would never trust on a phone that contained my actual contacts and emails. Another problem is that we have no universally-agreed-upon metrics of the constituent components of trust. How much of it is a technical phenomenon? How much is really about brand and accountability? And how much of trust is actually a creature of the regulatory environment in which the product was made?

At bottom, I believe, most trust flows ultimately from optimally-regulated environments. Where significant trust deficits emerge, it tends to suggest the absence of such a regulatory environment. In the case of insecure hardware and software, we have seen a sustained period in which low trust has combined with rapid adoption—that is, in which people and businesses have professed to have low trust but have exhibited great trust, moving material online in fashions that subject themselves to significant risk. This paradoxical trend illustrates the grave difficulty in measuring trustworthiness; if we could tell empirically that a product isn’t trustworthy, we would presumably not adopt it. It also illustrates the fact that the regulatory and liability climate is not doing the work one would hope in harmonizing professed trust with revealed preferences.

FROM: Bob Sullivan
TO: Ben, David, Justin, Paul, Audrey, and Claire

Thank you all. We’ve obviously just scratched the surface here, and there will be more discussion soon. I’d just like to highlight a couple of things. I’m now persuaded that there’s an important “trust paradox,” just as there is a privacy paradox in consumer behavior. People say they care about privacy but often won’t do much to protect it; and they say they distrust companies but often keep consuming their products anyway. We can giggle about this, or we can bake this reality into our approach — it should be obvious which one I prefer. I hear Justin’s “the solution to bad tech isn’t MORE tech” point in this context, also.

Also, to show my personal bias, I think trust begins when institutions are incredibly forthright about what they do and the mistakes they make. I’ve reported on firms that covered up data breaches and saw their consumer relationships suffer even more damage (see Equifax); and I’ve covered firms that were so transparent after a breach that they actually gained trust. A big key going forward will be tech firms opening up their code and shutting down their PR-speak when trust issues arise. It can be done. Really. Microsoft’s Trustworthy Computing initiative is a good example. After years of dodging questions, Microsoft fully engaged its critics; both its products and its perception improved quickly.

In this light, I endorse former Yahoo/Facebook security executive Alex Stamos’ idea to create an NTSB for cyber incidents. As with plane crashes, such an agency would study disasters and publish reports that would allow all tech firms to learn from these mistakes, and at the same time reassure a nervous public that there will be no cover-ups. It would be expensive and require a massive cultural shift. We should do it anyway. Flying is very, very safe, and people trust pilots and planes, despite occasional tragedies. Flying on the Internet could be as safe and trustworthy, too.


About Bob Sullivan
BOB SULLIVAN is a veteran journalist and the author of four books, including the 2008 New York Times Best-Seller, Gotcha Capitalism, and the 2010 New York Times Best Seller, Stop Getting Ripped Off! His latest, The Plateau Effect, was published in 2013, and as a paperback, called Getting Unstuck in 2014. He has won the Society of Professional Journalists prestigious Public Service award, a Peabody award, and The Consumer Federation of America Betty Furness award, and been given Consumer Action’s Consumer Excellence Award.
