Podcast: Technology Panic Attacks, From Radio to Social Media, With Amy Orben

If Netflix’s “The Social Dilemma” is to be believed, social media giants are surely responsible for the breakdown of our mental health, politics, and the economy. Generations of fear mongers have found reasons to believe new technologies—from books and bicycles to video games and email—are to blame for society’s ills. Rob and Jackie take a deep breath and discuss these predictable cycles of technology panic with Dr. Amy Orben, an expert in the history of technology panics at Emmanuel College, University of Cambridge.

Auto-Transcript

Rob Atkinson: Welcome to Innovation Files. I’m Rob Atkinson, founder and president of the Information Technology and Innovation Foundation. We’re a DC-based think tank that works on technology policy.

Jackie Whisman: And I’m Jackie Whisman. I handle outreach at ITIF, which I’m proud to say is the world’s top-ranked think tank for science and technology policy.

Rob Atkinson: And this podcast is about the kinds of issues we cover at ITIF, from the broad economics of innovation to specific policy and regulatory questions about new technologies. Today, we’re going to be talking about technology panic.

Jackie Whisman: And we have Dr. Amy Orben, who’s a college research fellow at Emmanuel College at the University of Cambridge and a research fellow at the MRC Cognition and Brain Sciences Unit, also at Cambridge. Her research uses large-scale data to examine how digital technologies affect adolescent psychological wellbeing and mental health. She uses innovative and rigorous statistical methodology to shed new light on pressing questions about policy, parenting, and mental health. We want to talk to her about her recent article, “The Sisyphean Cycle of Technology Panics,” which is available to read online and also as a very user-friendly YouTube lecture. So thank you for that, and thanks for being here.

Amy Orben: Thank you for having me.

Rob Atkinson: I should say, first of all, that my second favorite podcast, after of course Innovation Files, is a really great podcast called Pessimists Archive, which is a wonderful podcast about all the opposition to technology over the last 100 or 200 years. It’s really funny and really insightful. I was out walking recently and I heard this one about basically pessimism about the internet, and Amy was on and talking about her great research and basically about the Sisyphean cycle of technology panics, where it gets worse and then gets better and worse and better.

And it reminded me of the work we’ve done at ITIF on technology panic cycles, where when a new technology appears, people don’t understand it, there’s a group of people who fan the flames of panic. And then people realize, yeah, it’s pretty good, not perfect, but certainly okay. And then it kind of goes away, and then some new technology comes up and we’re off to the races again. So really interested in what Amy’s research is showing on this.

Jackie Whisman: Your study is really interesting, because it frames the current tech panic around social media and internet technology generally, by giving perspective to the first reactions to radio and movies and even bicycles, actually. Parents were worried about the leisure time of children when it came to this new technology, thinking kids were spending way too much time listening to the radio or watching movies. It’s very similar to what we see now with smartphones, social media, video games. My four-year-old is currently occupied on her iPad, and I have a lot of mom guilt around that, but I loved to hear you talk about this idea of moral panics that set in. It made me feel a lot better about my own kids’ screen time when I was watching your lecture. I’d love for you to walk us through the cycles of this technology panic that you talk about, and then we’ll get to the role of science in all of this.

Amy Orben: I’m not a technology cycle specialist by training. I’ve been brought up in the academic sphere of thinking about smartphones and social media and trying to figure out whether the problems people think they cause are actually happening. I was really at the cutting edge of research on that front, and really just focusing on that. Why should I read about the radio, why should I learn about how people reacted to television, if learning about social media is so crucial and so important? And I spent four years straight researching that area without really looking back at all and reflecting on where I stand in the history of us thinking about technologies and the history of us getting concerned about technologies.

How this work really started is that I was finishing my PhD at the University of Oxford and I was sitting in this old library, trying to find some nice little quotes to put into my thesis. I started reading this paper from 1941, which talked about radio addiction, claiming that the majority of American children were addicted to the radio or to movies, and that this could have harrowing consequences. And all of a sudden I felt like I could have just swapped radio for social media in that paper and it could have been exactly the same paper published today.

I think that really changed my vision for my work as well. I started questioning, am I really doing original work if I’m just revisiting questions that people have asked, as you said, about bicycles, about radio, about television, about video games, and now about social media? That started off quite a long stint of me just reading old papers and philosophy books about technology panics. Over time I started getting this understanding of how technology panics come and go. Once a technology enters into society and starts being used by a larger proportion of the population, people naturally get concerned. We are concerned about what it means to be human and what it means to have a functioning society, and technologies are often seen as maybe disrupting that. We can probably talk a lot more about that, about why we get concerned so quickly, in the podcast.

But what you see is that these concerns appear and then disappear over time, again and again, and they just recycle with whatever new technology is popular at the time. So I think in both academia and technology policy, that should be central to how we think about technologies in general. Why did I, as a researcher, go four years without thinking about that? I’m just the latest person looking at the latest rendition of maybe exactly the same thing. Shouldn’t we be studying the cycle instead of studying the technology? That really started a big change in my thinking, and so I’m really happy to be talking about it with you, because it’s still evolving, as any idea does, and it’s interesting for me to talk about as well.

Rob Atkinson: You were talking about the radio. I remember when I was a teenager, I was-

Jackie Whisman: Did you have radios then?

Rob Atkinson: We did have radios, but I had a cassette player also. It didn’t have any of this new stuff. But they had a technology at the time, if you will: a set of stations called clear-channel stations, which broadcast on certain frequencies. Late at night, living in Chicago, you could tune into stations from, like, Arkansas. I mean, the signal bounced off the atmosphere somehow. And I remember staying up late listening to rock and roll music on the radio, and I certainly wouldn’t let my parents know I was doing that. I would’ve gotten in trouble.

So I was addicted to the radio, I guess, at that age. And it seems like I came out okay. But yeah, I think your point is really a good one. Technology is, as Joseph Schumpeter once said, creative destruction. New technologies are certainly disruptive, oftentimes, and so it’s not perhaps all that surprising that there’s a general response that we see over and over again. And you talk about these different stages of the panic. Can you go into a little more detail about how your research has identified these stages?

Amy Orben: Yes. I was trained as a physicist before I went into psychology, and in a way, if you have a system that keeps recurring, then there must be similar forces driving it. That was kind of the first idea. I was trying to figure out, what is driving this cycle to reappear over and over again? I felt like there were a couple of different forces involved, the public, the academics, the policymakers, and that these interact in different stages. I don’t want to spend too long describing each of these in detail, so I’ll just quickly give you an overview of each one.

The first stage is panic creation. A new technology comes around and surpasses a certain threshold of popularity in society. For example, I focus a lot on children, and children’s leisure time is of great importance to parents and policymakers these days. If the technology starts encroaching on that leisure time, you start seeing people getting concerned. That concern can be maximized through what sociologists call a moral panic: leaders in society, for example in religion, politics, and the media, voice their concern, people become more concerned in general, and there is a heightened concern overall. I think that’s something we saw in 2017 with smartphones, especially.

And further, we become concerned because of something called technological determinism. We often see technologies as the key drivers of change in society. So for example, we see a drop in mental health in teenagers at the same time we see an increase in iPhone use, and so it’s very natural for us to link the two, just like people saw an increase in 5G alongside an increase in COVID and were more likely to link the two. I think that’s a very good example of technological determinism. So I think it’s natural for the general public to become concerned at some point about a popular technology. That’s stage one.

The second stage I call political outsourcing. Actually, I wrote this from the perspective of an academic trying to figure out what I was doing, and what I felt, having been a big part of the social media panic cycle in the UK, was that all of a sudden politicians were under pressure to do something. I’ve read historical accounts of television in the US, where they had big Senate subcommittee hearings, where thousands of parents wrote in about television and how their children were addicted, and where politicians themselves were parents and felt like their kids were addicted. The same happened with social media, for example. And so politicians are forced to do something.

At the moment our society is very focused on evidence-based policymaking, which I think is very good, and so trying to procure the evidence is important. A lot of research gets funded, a lot of money gets promised to figure things out, to inform evidence-based policy. Furthermore, it’s very easy for politicians to benefit from this technology panic cycle. For example, in the UK we have a decrease in adolescent mental health, just like in the US, and sitting at these tables with policymakers, I felt it was a lot easier for people to take up the technology narrative, because, for one, it doesn’t blame previous policy, maybe from your own party.

For example, austerity in the UK, where they cut loads of youth services and health services; economic pressures; the economic crisis in 2008: there were a lot of different reasons why there might be a decrease in adolescent mental health. And further, you’re not blaming your electorate. You could say maybe it’s something to do with parenting or pressures or other sorts of issues, but blaming a new technology company is really a very nice option. And again, we don’t only see that with social media; we saw it with television as well, for example, but I don’t want-

Rob Atkinson: Especially if it’s an American company.

Amy Orben: Yeah, I think it is. There were always these positive media headlines about politicians, for example our health minister, saying, “We’re going to stand up to Instagram and we’re going to stand up to these companies.” I felt like it was very interesting to watch, as a person who was playing a different part in this story.

The third part of the cycle is the research role. I feel like a good research strand, if you read the philosophers of science, is something that takes a theory, say that the earth goes around the sun, tests it and tests it, and looks at whether the data confirms it. The theory gets developed over time, and sometimes the theory gets replaced when we see evidence that disconfirms it.

But how technology research works is completely different. Somebody just says, “Smartphones are really popular. We should start figuring out what happens when we use smartphones.” And so this paper is really a manifesto about how I feel research is also really inefficient, because there’s no forward thinking. For example, we can be pretty sure that a new technology we’ll be concerned about is going to come around in five years, but we don’t do any forward thinking about getting the data necessary to provide the evidence at the time when policymakers and the public actually want it, five years down the line. And there are a lot of problems with theory as well.

Rob Atkinson: I just want to interrupt, or intervene, on that one. One of the things you said on Pessimists Archive is that you can get into a cycle where, for academics, there’s a lot of money to study this particular problem, and then when you have a headline-grabbing piece of research, which maybe hasn’t been fully peer reviewed or doesn’t have all the data you need, it’s easy to get a lot of fame and credit as an academic. I mean, I sound like an old fogey, but when I got my PhD, you really weren’t expected to have 100,000 Twitter followers. We didn’t have Twitter. The expectation was much narrower: you were supposed to do good, solid academic research.

Today, for a lot of academics, at least in the US, there’s another expectation, which is that you’re supposed to be a public intellectual as well. And one of the ways you do that is through what you write. If you write, “Hey, Facebook doesn’t have much effect,” maybe that’s now contrarian enough to get you some credit, but five years ago you wouldn’t have gotten any credit for that. So can you say a little bit more about the incentives that drive research here?

Amy Orben: Yeah, there are a lot of incentives. A lot of incentives in academia are based around what’s good for researchers, and that’s sometimes not the same as what is good for research. A lot of the teaching that I do with grad students is trying to get that across and trying to figure out how we as academics can navigate it. And yes, there are a lot of incentives to create research that is, for example, shared in the media. We know that the media selectively covers stories where scientists found something. So for example, if you don’t find anything in your data about social media and educational outcomes, it will, one, not be published very easily. You’ll probably have to go to a lower-tier journal.

Two, you might not want to publish it anyway. Researchers self-select, so some researchers just don’t publish when they don’t find something interesting. And by interesting, we often mean a negative effect of technology. And three, the media selectively covers stories that are interesting for their readers, and that is often “smartphones are causing a generation to fail.” There was an article by Jean Twenge in the Atlantic in 2017 called “Have Smartphones Destroyed a Generation?” That article was actually the most-read science article in the Atlantic in 2017, I heard from somebody.

Naturally, that shows that this fear cycle is also promoted, so as a researcher, you are incentivized to promote your research in that way. I think there’s a lot going on to do this better, and there are a lot of researchers who now understand the problem. But it does mean that the evidence that reaches the public eye, the evidence that reaches a normal person through their social media feed, the newspaper, or the radio, will actually be only a small sliver of the evidence that is available, and often biased towards this negative effect.

Rob Atkinson: That’s great. Thank you. And I’m sorry, I interrupted you just as you were about to get to the last stage.

Amy Orben: Yeah. The last stage is, in a way, the one that I’m most interested in: what does policy then do? Because you have a very inefficient cycle of creating evidence, probably what will happen is that you don’t have the evidence necessary to, for example, inform policy. For example, now with social media, we know very little. However, a lot of philosophers of science and technology have written about something called technological entrenchment. It’s really important to act fast when a technology first comes out, because only then will you be able to really enact deep changes in how that technology is designed or used. For example, I don’t think we could make major changes to social media now. People have built their livelihoods on social media, people have built infrastructure and policy and society on social media, and so if we really wanted to change social media, we would have needed to do it really early.

So in a way we have this problem: technology policy needs to be made early to be really effective, but then the evidence is not there, because the evidence takes a lot of time. And I could go on a rant about how science is stuck in the 19th century; I don’t think we as academics help in that respect. So there aren’t a lot of options. You can say we only want to do evidence-based policy, which means that if you don’t have the evidence, you don’t have policy. Or you can do what I sometimes tongue-in-cheek call policy-based evidence: we have a policy in mind, and so we just select the evidence that matches it. And I think that is sometimes used, probably not to the best effect.

I have colleagues here at the University of Cambridge who feel very strongly about the precautionary principle: why shouldn’t we just legislate to minimize harm, even if we don’t have the necessary evidence? But there are a lot of other interesting options that have come to mind, especially during COVID now; this is extending from what I’ve written. For example, some have written about a science court. Could you stick five experts on, say, a new technology in a room and tell them, “You need to come to a decision. I know the evidence isn’t perfect, I know you probably won’t have a consensus, but you need to come to a decision now”? That’s one way.

Or, and this is what I guess I’m trying to push for, we can really think about how to help science become more efficient so that the evidence becomes available earlier. For example, funding data collection about a new technology that comes out, even if we’re not concerned about it yet. Maybe we don’t have longitudinal data yet about augmented or virtual reality; maybe we should be collecting that now, just in case we get concerned three years later. The cost of collecting data now is probably a lot less than the opportunity cost of not having that data later on. So I think there are a lot of ideas to play with, but I don’t have time to go into all of them in detail.

Jackie Whisman: Well, I really liked what you were saying in your lecture about the options for politicians when it comes to new technologies, and maybe we can touch on that before we have to wrap up.

Amy Orben: Which ones have I not covered that I covered in the lecture?

Jackie Whisman: I guess, sort of this... We talked about academia and how there are drivers that didn’t exist before. I think the same drivers exist for politicians. There’s sort of this need to make a splash, and this knee-jerk reaction happens a lot, and it results in over-regulation. I think that’s part of your cycle and a really important thing to keep in mind as we’re considering what new policies we should or should not be implementing.

Amy Orben: Yeah, I think you can take it a lot of different ways if you recognize that the cycle is there. One option is to try to stop the cycle, and I think that’s not possible. It’s natural for us to become concerned about technologies, and maybe you can mitigate some parts of the cycle, but I don’t think you can stop it. So I think we need to think about mitigation. In terms of mitigation, naturally, we can try not to fan the flames, and we can try to change the incentives of both politicians and academics. But I think at its core it’s the last two stages of the cycle that we can really change. It’s the evidence procurement, as I was just saying: we can procure evidence a lot more efficiently, and policymakers can fund research in a much more forward-thinking, rather than reactive, way. In 2017, a lot of funding was released for social media research, but actually that train had already left the station.

Because that evidence takes a couple of years to create, it will be a 2020 publication, while the policy was asked for in 2017. I think that’s the real problem. So we need forward thinking, at least forward-thinking data collection, so that we have the data necessary for the questions we might want to ask in the future. Part of that is really thinking about data sharing with technology companies as well. Technology companies actually sit on a lot of data that academics can’t access. There’s a lot of debate about the topic, but I do feel very strongly about it; for example, I rely on self-reported social media use measures, and a lot of people have to.

I feel like if we want to create good technology policy, if we don’t want to over-regulate or under-regulate, if we want to have the evidence necessary, we need collective action, not just from academics and policymakers, but from technology companies as well. And I think it needs to be a collaboration among all three. We’ve had pairs of them trying to make it work. Academics and tech companies alone doesn’t work, because academics need to sign NDAs and the power relationship is unequal. Policymakers and tech companies alone doesn’t work, because the evidence isn’t there. So we should actually collaborate; I think we’ll have to figure out a new way to do that, but these conversations are very important to have. We’re living in a digitalized society. Technology will become ever more important. It will become the infrastructure of how we socialize, of how we live our lives.

Rob Atkinson: Yeah. No, Amy, I think that’s really, really intriguing, because way back in the old days, I used to work at a thing called the Office of Technology Assessment, which was an arm of the US Congress, providing advice. And to your point, you need this sort of tripartite effort to really be able to get out in front on these data questions. Rather than the precautionary principle as the EU does it, I think, paraphrasing, we’re sort of talking about a precautionary principle of getting a lot of data. That’s the precaution. It seems like a group like OTA in the US, if there were one, could do that.

But the problem right now is, we don’t have anybody whose job it is to think about the future in terms of data needs and these new technologies and then to organize that ecosystem in an effective way. Maybe the Federal Trade Commission could do it, but they see their job as more regulatory in the US. But I think that’s a really intriguing idea, and hopefully it’s an idea that will get some legs in the US as we go forward.

Amy Orben: I would love to see this idea progress. As a young academic who has really thought about this cycle, I feel that if I could change something, it would be that data procurement, to allow the evidence to appear and be created when it’s needed. It’s one of the reasons why I’ve stayed at the university. I think there is a real chance to create a better system so that we’re not caught up in this cycle that is really stalling policy, public conversation, and probably innovation as well through that public conversation. And to really start asking: how do we live with the cycle? How do we mitigate it? I think those are the key questions going forward.

Jackie Whisman: Well, Amy, this was great. Thank you so much for being here. Can you tell our listeners where to find you and follow your work?

Amy Orben: I’m on Twitter a lot. It’s @OrbenAmy, O-R-B-E-N, Amy A-M-Y. I have my one YouTube lecture, which is for an academic audience, but if you are interested in the academic minutiae of my thinking around this cycle, do have a look at it. The paper is also quite accessible; I tried to make it accessible for a broad audience, and it’s openly available. You can normally find it through my Twitter, and if not, send me a message there and I’ll probably get back to you.

Jackie Whisman: We’ll link to it in the show notes, so thank you. And that is it for this week. If you liked it, please be sure to rate us and subscribe. Feel free to email show ideas or questions to [email protected]. You can find the show notes and sign up for our weekly email newsletter on our website, itif.org, and follow us on Twitter, Facebook and LinkedIn, @ITIFdc.

Rob Atkinson: We have more episodes and great guests lined up. New episodes will drop every other Monday, so we hope you will continue to tune in.
