Opinion: Fear of AI is a Story/Signal Worth Listening To

  • Writer: Helen Feng
  • Oct 3
  • 4 min read

People often describe resistance to AI as closed-minded. There may be some truth in that, yet stopping at the label risks flattening a much more complex picture. Resistance carries vital signals that show us where the fractures are, where people feel unsafe, and where attention and imagination are urgently needed.


The Weight of Threat

When people push back against AI, they often speak from under the weight of profound threat: the fear of losing work or livelihood, the stress of adapting to dramatic change on top of everything else in life, the need to rethink their entire sense of identity and existential stories, and the experience of witnessing tremendous rates of change in real time.


This isn’t new. Many major transformations in human history have carried both progress and dislocation. Agricultural revolutions restructured society but hardened inequality. Industrialization generated prosperity for some and immense suffering for others. Even the printing press, which expanded literacy and democratized knowledge, destabilized entire political and religious systems. Fear was always present, often as a wise survival reflex.


Nervous Systems Under Pressure

Psychology gives us language for this. When the nervous system perceives threat, fight, flight, freeze, or fawn patterns emerge as embodied states. They narrow perspective, reduce curiosity, and heighten reactivity. Under these conditions, logical debate or visionary imagination is far harder to access, even when we want to draw on those skills.


I think this matters deeply for AI discourse. When people are asked to “see the opportunities” or “stop panicking,” they may not be able to. Not because they lack intelligence, but because their nervous systems are already bracing for danger. Without environments that foster psychological safety, the conditions for thoughtful, creative engagement simply don’t exist.


Scarcity, Fear, and Social Psychology

Under pressure, our minds often default to what Daniel Kahneman calls System 1 thinking: fast, intuitive, and emotionally charged. System 1 reaches for culprits, explanations, anything that can give concrete form to massive anxieties. It leans on heuristics like availability (latching onto vivid stories in the media), representativeness (blaming visible “others” who seem to fit a pattern), and confirmation bias (favoring information that reinforces existing fears).


These mental shortcuts can distort reality, but they can also point to very real dynamics. Automation does change labor markets, demanding steep adaptations at every scale. Technological consolidation does concentrate power. These dynamics are grounded in class, geography, and other inequities, which shape who feels most exposed and who feels buffered.


Moving into System 2 thinking (slower, more deliberate, reflective) is what allows us to untangle nuance and ask harder questions. But System 2 requires significant deliberate cognitive energy, and research shows that scarcity itself drains this capacity. As Sendhil Mullainathan and Eldar Shafir argue, when survival is at stake, mental bandwidth contracts. This means people living with economic, social, or emotional stability often have the space to treat disruption as opportunity, while those at the edge of survival simply cannot. Privilege, in this sense, shapes the very psychological room we have to imagine the future.


I think it is more important than ever for us to help each other craft environments of support, trust, and psychological safety: systems that redirect us into System 2 and give us space to approach change with curiosity, intention, and expansion, rather than systems piloted by fear and fight-or-flight.


The Role of Media and Narrative

How societies talk about AI also matters. Media coverage frequently perpetuates extremes, ping-ponging between utopian promise and dystopian collapse, while tech algorithms reinforce echo chambers. Psychological research on the availability heuristic shows that vivid, emotionally charged narratives outweigh sober probabilities in shaping public perception.


The result is polarization: AI framed as either salvation or destruction, the only choices to accelerate without brakes or to resist entirely. But acceleration without caution risks repeating history’s pattern of innovation enriching the few and destabilizing the many. Rejection without imagination risks freezing the future in fear, blocking adaptations that might serve us. Such binaries leave little room for open discussion and for the messy middle ground where reality usually lives. They erode trust and make collective sense-making more difficult.

What would it look like to move differently? To invite fear as data rather than noise. To treat dissent not as obstruction but as design input. To build technologies with the very communities most threatened by them.


Listening as a Social Practice

Listening to fear is not the same as stopping at panic. We can read threatened states as communicating vital information: about trust, about fairness, about what needs to be worked on moving forward.


I think tending to psychological safety and social equity is a crucial pillar of technological development. It leads us toward the bigger questions: Whose values are encoded into this technology? Who benefits first? Who absorbs the cost? What stories are we telling, and what stories are we ignoring?


It asks whether we can break from history’s cycle: invent, disrupt, concentrate power, leave the vulnerable to catch up later. If we can, it might mark a major sociological shift in which fear is not sidelined but honored as part of ethical and imaginative innovation, so we can collaboratively build something that honors humanity and empowers wellbeing.

 
 
 

© 2025 by Helen Feng.
