If you feel a little creeped out by the idea of wearing gadgets that send your pulse rate, oxygen levels, or sleep patterns to a big tech company … wait until you hear what Duke University professor Nita Farahany is warning people about in her brand-new book, The Battle for Your Brain. Welcome back to Debugger in 10. I’m your host, Bob Sullivan. We try to explain complex tech issues in just a few minutes.
The burgeoning field of neurotechnology, combined with dramatic improvements in AI, means tiny sensors near your brain might soon be pretty good at detecting whether you are happy, depressed, angry at your boss, or about to switch political parties. The technology might help unlock human potential in exciting ways or … well, you can probably imagine how things might go wrong. Duke University professor Nita Farahany wants to have that conversation now, before it’s too late, so that’s what we’re doing on Debugger today. I began our chat by asking her about the title of her book.
You can listen to the podcast by clicking on this Spotify link or pressing play below, or by subscribing to Debugger through your favorite podcast platform. A full transcript appears below.
Nita Farahany: What I mean by the battle for our brain is the battle to control what we think and how we think, to manipulate and to change what we think, and the increasingly dangerous ways in which it will be possible to do so in the era of neurotechnology.
Bob: So I know lots of people try to influence how folks think, and influence is one thing, but control is another. And that’s where the neurotechnology comes in.
Nita Farahany: Well, it’s more than just control. It’s the ability to actually much more accurately decode what a person is thinking and feeling. So until now, there’ve been lots of ways in which companies and even governments have tried to understand what a person is thinking and how they can influence and change that. But most of those have been relatively remote inferences about what we’re thinking, picked up from our everyday activities, from our Google searches to our financial transactions or our GPS locations. What I’m talking about is a kind of brain-wearable revolution that’s coming. Increasingly, major tech companies, including Meta, Google, Microsoft, and LG, are launching products that embed brain sensors into our everyday technology, like our earbuds or our headphones. That is consistent with the trend of putting sensors like heart-rate sensors or temperature sensors into rings or watches. But it’s different in that it actually enables the decoding of brain activity. That has a lot of promise to finally unlock the mysteries of the brain and enable us to track our own brain health and wellness. But it opens up our brain to much more direct decoding by corporations and governments, and manipulation too, once you can much better decode what’s actually happening in the brain.
Bob: So I can stick an earbud in my ear and that earbud can figure out what’s going on in my mind?
Nita Farahany: At least some of it. So, you know, with just a few sensors, one in each ear, or a few in or around each ear, there’s only so much brain activity that you can pick up. EEG, or electroencephalography, which picks up the electrical activity in your brain, is not the most precise measure. But the better AI gets, and as we all know that’s happening rapidly, the better these large data sets of brain activity, the patterns of electrical firing in your brain, can be decoded. And right now, it already can be decoded to tell basic emotional features: Are you happy or sad? Are you bored? Are you paying attention? Are you tired? Is your mind wandering? But it can also be used to probe your brain for information.
So for example, if you’re looking at a computer screen or playing a video game with these brain wearables, which allow you to control what’s happening in the game, embedded in that environment, or in the game, or on the computer screen could be prompts that you wouldn’t even be aware of. Those prompts could test things like your reaction to different images of politicians, to tell if you are more likely to vote Republican or Democrat, or even probe your brain for recognition of sequences of numbers, like your PIN, for example.
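For readers curious how that kind of recognition probing could work in principle: brain-computer-interface research often averages a person’s response over repeated presentations of each stimulus, then looks for a stronger post-stimulus response to items the person recognizes. The sketch below is purely illustrative, with fabricated numbers and simplistic averaging in place of real signal processing; it is not any product’s actual algorithm.

```python
# Illustrative sketch only: flag which probe (e.g., which digit flashed on
# screen) produced the strongest averaged post-stimulus response.
# Real systems use far more sophisticated filtering and classification.

def mean_window(epoch, start, end):
    """Average amplitude of one recorded epoch in a post-stimulus window."""
    window = epoch[start:end]
    return sum(window) / len(window)

def recognition_scores(epochs_by_probe, start=30, end=60):
    """Average the windowed amplitude over repeated showings of each probe.

    epochs_by_probe maps a probe label to a list of epochs, where each
    epoch is a list of amplitude samples recorded after that probe appeared.
    """
    scores = {}
    for probe, epochs in epochs_by_probe.items():
        vals = [mean_window(e, start, end) for e in epochs]
        scores[probe] = sum(vals) / len(vals)
    return scores

def likely_recognized(scores):
    """Pick the probe with the strongest averaged response."""
    return max(scores, key=scores.get)
```

Under this toy model, a digit the subject recognizes would show a consistent amplitude bump in the averaged window, while unfamiliar digits would average out near baseline.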
Bob: So there’s all sorts of scary implications of this, and don’t worry, I’ll get to them. But, there’s a lot of positive potential for this too. So give me a couple of examples of where this could be a powerful tool for people.
Nita Farahany: Well, first I think it’s important to realize that while there are a lot of scary potential applications for this, until now we really haven’t put our brain health and wellness, or even access to and understanding of what’s happening in the human brain, on par with the rest of our physical health. And I think that tragedy helps to understand and explain why, even as life expectancy increases, the overall burden of neurological disease, depression, and mental illness is increasing worldwide. The ability to actually see objectively what’s happening in the human brain, in real time and over time, can enable people to see a lot more, from whether they focus better at home or in the office, in the morning or in the afternoon, to giving somebody with epilepsy advance warning that they’re going to have a seizure. Or take me: I suffer from chronic migraines.
Being able to decode the earliest stages and patterns of changes happening in the brain, like early warnings of migraines, could help me take just-in-time medication. So there’s a tremendous amount of potential, some of which we can only just imagine, because we haven’t had this kind of widespread adoption of neurotechnologies in the past.
So there’s a lot of health and wellness and meditation and other kinds of benefits, which are part of the reasons I think people will adopt them. It’s also being developed to enable people to interact with the rest of their technology: to type virtually with your mind, or control your computer with your mind instead of a mouse, or turn on your lights remotely. There are a lot of ways in which it could make our interaction with the rest of our technology much more seamless.
Bob: Speaking of making our interactions with technology more seamless: I’ve been writing about technology for a long time, so it’s hard to freak me out. But I was freaked out by something I read in a Boston Globe piece, which suggested that we not only might want this technology, we’re going to need it, so we can optimize our own brains to compete with robots and AI. Did I get that right?
Nita Farahany: Well, I think most of us, probably you included, are facing somewhat of an existential crisis as generative AI rapidly becomes better and better. I think for many people who’ve spent time interacting with the different variations of GPT, including the most recent iteration, GPT-4, it’s astounding what it can already do. And while I think that has the potential to benefit humanity and to enable us to work more efficiently, it also leaves a lot of people wondering: where does that put the human worker, the human mind, and many of the generative and creative aspects of humanity? One of the ways I’ve been pondering how neurotechnology could change things is this: What if there’s a lot more potential, which most people believe there is, when we can do things like brain-to-brain communication, or empathize with each other in ways that are much more multimodal? What if I could actually share with you what a thought or an image or a feeling I’m experiencing looks like, without having to use the modes of communication we currently use? Could that be transformational? Could we work collaboratively, brain to brain, to enable people to work on problems in a collective way? So I think the boundaries of what neurotechnology may make possible could be transformational for our humanity, but could also give us a different way of thinking about the relationship between humans and AI and robots.
Bob: And in order to keep up with them, we might need to have smarter brains?
Nita Farahany: We might. We might need them to keep up. We might need them to have different ways of imagining what the human experience even is.
Bob: Okay. So the other side of this coin is all the things that are pretty scary about this. You use the phrase “cognitive liberty” a lot. Tell me what you mean by that.
Nita Farahany: So cognitive liberty, as I define it, is the right to self-determination over our brains, and I conceive of it as an international human right, an update to our conception of liberty. It would direct us to recognize that privacy as a human right includes the right to mental privacy; that freedom of thought protects more than just religious freedom or belief, and also protects our thoughts against being accessed, controlled, manipulated, or assaulted by others; and that self-determination gives us both freedom from interference and the right to access information about our own brains, or to change our own brains in ways that might enhance them, or even diminish them if that’s what we wanted to do.
Bob: Okay, so maybe 10 years ago or so, I sat in a warehouse in Brooklyn with about 50 people who were there for a meeting of folks obsessed with the idea of wearing sensors all over their bodies to track sleep and heart rate and whatnot. There were meetups like this happening everywhere, but I learned later these were all runners.
Nita Farahany: Wait, what were you doing there?
Bob: I was a journalist, of course, asking them questions. And you know, today I actually have a wearable watch on my arm that tells me what my pulse is, and it’s right about 80% of the time. But I feel like even 10 years after this wearable revolution began, the sensors still aren’t all that great. Why are you so confident that neurotechnology is right around the corner?
Nita Farahany: Well, first of all, I think there’s a difference between whether the wearables are so great and whether they’re likely to be used. As you just said, you wear a watch that has a sensor in it that is probably tracking something like ECG. People are wearing the rings. If you look at the wearables industry, it’s growing exponentially across the world. And most people are fascinated by their own brains, so the ability to access them, I think, is part of what will drive adoption. But the sensors have also improved vastly over the past decade. With brain sensors, it’s been hard to figure out how they would go widespread, in part because many of them were located in headbands across the forehead. Most people are not gonna wear something across their forehead unless it’s in a hat, which they take off from time to time.
But increasingly, in a world where people work remotely all the time, and access music and conference calls through earbuds and headphones, the ability to miniaturize the electrodes has improved dramatically. At the same time, AI has grown much better at filtering out the noise and interference that make the signals less reliable.
And so the form factor and the interpretation of the data have made them vastly better than they were a decade ago. Are they perfect? Are you gonna get perfect signal-to-noise every time? Definitely not. But they are good enough for a lot of the different applications being marketed.
And when you look at all of the tech titans who are launching products in this space, and the number of products coming out just this spring that are remarkably different from anything that came before, I think it’d be hard to imagine that all of this investment, all of the interest people have, and the gaps in our knowledge and understanding of the brain wouldn’t drive a very large industry forward.
Bob: I’ve seen you make this point elsewhere: my own Subaru could have had the capability to check whether my eyes are on the road and infer whether I was too tired to drive. And that’s really valuable to things like trucking companies, right?
Nita Farahany: Yeah. So, you know, in some ways what I try to help people understand is that while it feels very scary to many people, me included, to unlock and open up the brain for others to look into, there are some applications that might actually be less intrusive to our privacy than the applications we already use. If you’re a commercial driver, you probably already have some kind of driver-assist technology integrated into the car. That includes sensors both inside and outside the car, such as cameras that watch the way the car is being driven to predict erratic changes in the driver’s behavior. So if the lines on the road and the angle at which the car is moving suggest patterns of increasing sleepiness, the person may get an alert letting them know it’s time to take a break. Or, more intrusively, there may be in-cab cameras that are watching everything happening inside the cab for a commercial driver.
And that can be incredibly intrusive when the only thing you’re trying to figure out is whether they’re sleepy or awake. But brain sensors, which have already been used by thousands of companies worldwide to track commercial drivers, can be put inside a hat, across the forehead, in these earbuds, or even in little wearable tattoos behind the ear. Those can pick up just the signal of whether the person is sleepy or awake, and the algorithm that interprets that data can give a score from one to five. It may be more precise, and give an earlier warning than you could get from a camera, because it picks up as you’re transitioning into sleepy states. And if the only data and information being extracted is about fatigue levels, and not everything else the person is experiencing in their brain, again, that might actually be less intrusive to the person’s privacy than having a camera trained on their face inside the cab at all times.
Bob: I think it’s undeniable that you’re right and I’m wrong: this technology is much further along than people realize, despite the fact that my watch sometimes gets my pulse wrong. And the most important point I’ve heard from you is that the time to have the conversation about all these things is now, not five or 10 years from now, when all this technology is already upon us, right?
Nita Farahany: I think that’s absolutely right. One thing I’ve found is that even neuroscientists who are deep into this space and who’ve read my book, The Battle for Your Brain, have said they were shocked at both how far the technology has come and how widespread its uses already are across society. That’s great in that I can open people’s eyes to it, but it also means that we’re sitting at an inflection point. If you look at a lot of other books and commentators who’ve written about privacy or the commodification of data, they’ve written about it after the fact. They said: look, here’s some disaster that has occurred. Shoshana Zuboff wrote about the age of surveillance capitalism: here’s all the ways in which your data has already been commodified, and it’s very difficult to claw that back. But at this moment, when we can see very clearly that all these companies are launching products this spring and over the course of the next two years, we have a moment to try to get this right and to say: this is a totally new category of data.
This is an unprecedented risk to our freedom of thought. We can see the benefits. But we have to direct the technology, assert our rights, and set up the governance standards that enable us to be empowered by the technology, not have a kind of Orwellian dystopia introduced upon society. Waiting, I think, means it will be too late. But we can make those choices now. We can make the right set of choices, ones that will change the terms of service in favor of individuals, if we act quickly.