Article
Asli Simsek
Founder, Higher Narrative, Lisbon, Portugal
Abstract
As artificial intelligence becomes increasingly embedded in how we live, work, and relate, the body risks being left out of the conversation. This paper explores the layered space between somatic and machine intelligence, asking what is lost when knowing is reduced to data and computation. Somatic intelligence refers to the body’s capacity to sense, interpret, and respond to the world through lived experience. It includes intuitive, pre-verbal, and felt forms of knowing accessible through sensation, movement, breath, and presence. Far from being a supplement to cognition, somatic intelligence is presented here as essential for shaping futures that honor human depth, relationality, and aliveness. Drawing on Causal Layered Analysis (CLA) and Heidegger’s philosophy of technology, the paper examines how dominant systems and metaphors frame the body as a resource to be optimized, often obscuring more subtle ways of sensing and meaning making. Moving across cultural, symbolic, and embodied layers of perception, it invites a rebalancing: one that does not reject technological futures, but reframes them through presence, attention, and embodied engagement. If the future is to remain livable, it must include the parts of us that move slowly, relationally, and with care.
Keywords
Causal Layered Analysis (CLA), Philosophy of Technology, Artificial Intelligence (AI), Somatic Intelligence, Futures Studies
Introduction
We are living in a time when the idea of intelligence is being reshaped. With the rise of artificial intelligence, there is a growing belief that cognition can be externalized, decision-making streamlined, and complexity resolved through predictive design. Within this shift, the intelligence of the body (intuitive, relational, and felt) often disappears from view.
This article begins by pausing at that absence. It asks: what are we not noticing when intelligence is treated as a purely computational function? And what kinds of futures become possible when we bring the body back into the conversation not as a system to optimize, but as a source of perception and insight?
This question sits at the intersection of two influential perspectives. The first is Causal Layered Analysis (CLA), developed by Sohail Inayatullah (1998), which explores how surface-level problems are rooted in deeper worldviews and cultural narratives. The second is Martin Heidegger’s philosophy of technology, particularly his critique of modern enframing in The Question Concerning Technology (1977), where he argues that technology is not just a tool but a way of revealing that reduces the world to resource.
Together, these lenses help illuminate how dominant paradigms of intelligence are shaped by metaphors that frame the body as an object, a mechanism, or a productivity tool. In contrast, this essay explores somatic intelligence as a different kind of knowing, one that is embodied, temporal, and resistant to extraction.
Grounded in exploratory inquiry into narrative, embodiment, and perception, this essay explores how these modalities can reshape how we relate to technology. Rather than proposing fixed conclusions, it raises a more foundational question: how might somatic intelligence open new ways of imagining the future beyond optimization, and beyond the machine?
Rather than comparing CLA and Heidegger as parallel frameworks, this essay uses Causal Layered Analysis as its methodological backbone, applying it to unpack AI discourse through multiple philosophical lenses. Heidegger’s critique of technology, Jung’s symbolic archetypes, Merleau-Ponty’s embodied perception, and van der Kolk’s insights into trauma and somatic memory are drawn into the respective layers of CLA, not as competing paradigms but as depth lenses that enrich each layer of analysis.
Extending the Context: CLA meets Heidegger
Causal Layered Analysis (CLA) is a futures thinking method that helps unpack complex issues by exploring multiple layers of meaning. Rather than addressing problems only at the surface, CLA examines how systems, worldviews, and deep metaphors shape what we see and believe. Within the context of this piece, it offers a structure for thinking about how our definitions of intelligence (especially in the age of AI) are shaped not only by systems, but by the deeper metaphors we live through.
In The Question Concerning Technology (1977), Martin Heidegger challenges the idea that technology is merely a neutral set of tools or machines. Instead, he argues that technology is a way of revealing, a way the world is brought into view and made available to us. At the heart of this view is the term Gestell (Heidegger, 1977, p. 19), often translated as enframing.
Enframing is not a technology itself, but the mindset that drives modern technological development. It positions the world, including nature and human beings, as a “standing-reserve” – resources to be ordered, stored, and used. Rivers become energy. People become data points or productivity units. In this mode of revealing, everything is valued in terms of its usefulness and availability.
Heidegger’s concern is not that technology is inherently negative. Rather, the danger lies in how this particular framing becomes dominant and unquestioned. When the world is always seen through the lens of efficiency, measurement, and control, other ways of relating such as care, wonder, or presence become harder to access. We begin to forget that technology is one way of seeing, not the only one.
In contrast, Heidegger offers the concept of poiesis, a more ancient and poetic form of revealing. Poiesis (Heidegger, 1977, p. 10) refers to a mode of emergence that is intrinsic rather than imposed; it allows something to come into being rather than be produced for a specific function. It is not forced or extracted but allowed. It speaks to a kind of making that is aligned with the rhythm and nature of the thing itself.
This distinction is not just philosophical. It shapes how we build systems, relate to the body, and imagine the future. When intelligence is framed only through the lens of machines and output, we risk losing connection with more embodied, emergent forms of knowing: what this article refers to as somatic intelligence.
In bringing Heidegger into conversation with CLA, we begin to see how deep metaphors (like “the world as machine” or “the body as resource”) shape both our technologies and our responses to them. Heidegger does not offer solutions in the conventional sense. What he offers is a way of noticing, a reminder to stay alert to the frames we inhabit, and to remain open to other ways of seeing.
Mapping the Intersections: Somatic Frames, Symbolic Depth, and Technological Narratives
At first glance, Causal Layered Analysis and Heidegger’s philosophy of technology may appear to come from different domains: one is a method used in futures thinking, the other is rooted in continental philosophy. Yet, like Heidegger, CLA questions not only systems but the deeper narratives and assumptions that shape how we perceive reality. In this sense, both CLA and Heidegger invite us to see beneath the surface, into the frames, metaphors, and epistemologies that shape the future.
This analysis therefore proposes what we might call a Multiview CLA: reading AI discourse through the layered perspectives of Heidegger (1977), Jung (1981), Merleau-Ponty (1962), and others. Each philosopher reveals a different depth of meaning (ontological, symbolic, embodied); together, these lenses deepen our understanding of what intelligence means in a technologized world.
Recent developments in futures thinking have extended CLA into somatic territory. In their 2024 dialogue on Embodied CLA, Wilson & Inayatullah (2024) explore how Polyvagal Theory can deepen each CLA layer by rooting it in autonomic nervous system states. As Wilson notes, “for deep change to happen, we need to move beyond a cognitive understanding of CLA to experiencing CLA as an embodied experience where the mind and the autonomic nervous system align” (Wilson, 2024, p. 96). This view complements the symbolic and philosophical layers explored here by grounding systemic and mythic narratives in embodied perception, linking safety, fear, presence, and relationality directly to how futures are imagined.
Table 1 below illustrates how the four layers of CLA, Heidegger’s framing, and symbolic-somatic perspectives intersect in shaping our narratives of intelligence, embodiment, and technology.
Table 1: CLA layers mapped to AI discourse, Heidegger’s insights, and somatic/symbolic reframes
| CLA Layer | AI Discourse / Examples | Heidegger’s Insight | Somatic / Symbolic Reframe |
| --- | --- | --- | --- |
| Litany | “AI knows you better,” productivity slogans, surface hype | Enframing as default lens | Attending to what’s unsaid, grounding in presence |
| Systemic | Apps for neuro-optimization, biometric tracking, surveillance | Gestell: humans as standing-reserve | Reclaiming slowness, embodied temporality |
| Worldview | Intelligence = computation, emotions = noise | Loss of openness to Being | Merleau-Ponty: perception is embodied |
| Myth/Metaphor | Body as machine, AI as oracle or creator | Dominant metaphors shape revealing | Jungian archetypes, van der Kolk: body stores knowing |
At the litany level, we encounter surface narratives: “AI understands you better than you understand yourself,” “the quantified self is the real self,” “productivity must be optimized.” These are not neutral observations. They reflect deeper assumptions: that intelligence is computational, and that the body is either a problem to fix or a system to streamline. Jean Baudrillard (1994) would describe these statements as part of the hyperreal: signs that no longer refer to any underlying reality but circulate as simulations of truth. As language around artificial intelligence becomes increasingly abstracted from its material and embodied impacts, we risk mistaking simulations for substance, headlines for insight. According to Baudrillard’s orders of simulacra, we have shifted from reflecting the real, to masking it, to ultimately replacing it. Much of today’s AI discourse operates in this third order, where the map precedes the territory.
At the systemic level, CLA invites us to examine the architectures: the policies, platforms, and economic models that reinforce these views. Biometric surveillance, neuro-optimization apps, and emotion-tracking tools in the workplace encode a vision of the human as quantifiable and improvable. Heidegger’s Gestell, or enframing, is especially resonant here: a mode of revealing that organizes beings into standing-reserve, treating life as a stockpile of resources to extract from. The body, in this frame, becomes a data set. Sensation and slowness are seen as inefficiencies. This structural orientation is not just technical; it is philosophical.
The worldview layer digs deeper. Here we find the cultural logics that normalize speed, utility, and control as virtues. The belief that emotions are noise in the system, or that decision-making can be outsourced to machines, is shaped by a modern worldview where knowledge is equated with calculation. Heidegger warned that such framing narrows our openness to being. Walter Benjamin (1969), writing decades earlier, argued that the aura of a work of art (its unique presence in time and space) is lost in the age of mechanical reproduction. In a similar way, embodied presence is eroded when experience is constantly captured, analyzed, and reproduced through data. Today, digital reproduction extends Benjamin’s concerns: not only art but also attention, intimacy, and even selfhood are mediated by platforms that prioritize reach over resonance, engagement over presence.
At the myth/metaphor level, CLA and Heidegger converge most clearly. The dominant metaphors of the modern age – the body as machine, the brain as computer, society as system – are rooted in Cartesian dualism. This worldview divides mind and body, culture and nature, reason and feeling. Maurice Merleau-Ponty (1962) counters this divide by insisting that perception is embodied. We do not look out at the world from a mental interior; we are in the world through our bodies. Perception is not just a mental process; it is a way of being.
This view is echoed in Bessel van der Kolk’s (2014) work on trauma, which shows that the body stores emotion and memory, often outside conscious awareness. His findings provide a scientific basis for understanding that intelligence is not just cognitive, it is somatic. The nervous system, musculature, breath, and movement all participate in how we interpret and respond to the world. In futures thinking, this has critical implications. Strategies that ignore somatic data may misread what people actually need or fear. Embodied intelligence reveals layers of insight that cannot be accessed through metrics or models alone.
Adding another symbolic dimension, Carl Jung (1981) offers a framework for understanding how archetypes shape collective imagination. His theory of the collective unconscious suggests that certain motifs (the machine, the trickster, the creator, the wise old man) emerge across cultures and eras. These archetypes are not fixed symbols but living patterns. When AI is framed as an all-seeing oracle, or as a benevolent creator, these narratives are not new. They are modern manifestations of ancient psychic structures. But without integration, these myths may repeat unconsciously, leading to systems that project old fears and desires onto new technologies.
Digital systems do not simply reflect culture; they amplify and shape it. Platforms that promote disembodied interaction or algorithmic curation affect not only what we see but how we relate. This makes it even more urgent to recognize when metaphors, such as “training the model” or “optimizing behavior,” carry the weight of deeper worldviews that treat human complexity as a problem to be solved rather than a mystery to be held.
Together, these thinkers help us read technological discourse not just as a set of tools or policies, but as a symbolic system. Baudrillard (1994) reminds us that simulations can replace the real. Benjamin points to the costs of replication and loss of presence. Jung invites us to explore the psychic undercurrents that shape our collective imagination. Their insights deepen the myth/metaphor layer of CLA, offering richer ways to interpret how AI is both a technological development and a cultural narrative.
CLA provides a method to descend from surface narrative to deep story. Heidegger challenges us to notice what is being unconcealed (and what is being hidden) when we see the world through a technological frame. Merleau-Ponty (1962), van der Kolk (2014), and Jung (1981) bring us back to the body, the psyche, and the symbolic as necessary spaces for reimagining intelligence beyond machines.
If we do not examine the metaphors we are living through, we may find ourselves designing futures that reproduce the very disconnection we seek to move beyond. Re-engaging with the layers beneath systems – and the symbols beneath language – becomes essential for imagining futures that are not only more efficient, but more human.
Recent applications of CLA further demonstrate this potential. A two-day foresight workshop in Thailand explored AI and constitutional futures through mythic frames like “water is life” and even proposed a Ministry of Spirituality (Milojević et al., 2022, 2024). In another exercise, participants imagined personalized Buddha holograms and spiritual health integrated into universal care. These radical futures, grounded in somatic and symbolic depth, mirror the same turn this essay advocates: from data-driven optimization to meaning, presence, and embodied imagination.
Embodied Futures: Implications for CLA and Heideggerian Reframing
What we imagine about the future is never just about the future. It reflects the metaphors, assumptions, and embodied patterns we already live through. When intelligence is reduced to data, or the body becomes something to optimize, we’re not merely forecasting – we’re actively shaping the terrain of what feels possible, desirable, or inevitable.
Media theorist Marshall McLuhan (1964/1994) famously wrote, “The medium is the message,” a provocation suggesting that technology’s influence lies not in its content, but in how it reshapes perception and relation. AI is not merely a tool for producing knowledge; it becomes an epistemological frame. Like Heidegger’s account of enframing, McLuhan’s warning is that our tools remake us. When intelligence is mediated through digital systems, what we come to know (and how we know) is subtly recoded by the speed, abstraction, and feedback loops of the medium.
Causal Layered Analysis (CLA) has long invited us to peel back these assumptions, from the surface-level litany to systemic patterns, cultural worldviews, and the symbolic metaphors beneath them. It reminds us that the narratives we inherit are not neutral; they quietly govern what we design, what we resist, and what we forget. By moving across these layers, CLA provides not just a method of critique, but a structure for re-imagination – one that integrates analysis with meaning-making.
Heidegger, in turn, reminds us that technology is not just about tools or systems but about a mode of revealing, a way of disclosing the world and ourselves. When everything becomes a resource (including our own emotions, attention, or creativity) we fall into what he calls enframing: a narrowing of vision that conceals more than it reveals. This risk is especially present in futures work shaped by a logic of control, optimization, or prediction. Without noticing it, we may begin to treat foresight as a form of extraction.
This is where re-centering the body and its intelligence becomes not just symbolic, but essential. Embodied awareness interrupts the automatic. It resists simulation by grounding perception in breath, sensation, and relational time. As Tim Ingold (2011) suggests, life is not a line of fixed outcomes, but a continuous process of becoming shaped through movement, responsiveness, and encounter. The future, then, is not something we arrive at, but something we travel through, with each step tracing new possibilities.
Bringing these ideas together, we arrive at a deeper kind of futures thinking. One that does not only ask what might happen, but how are we seeing, sensing, and storying what is happening now? And what are the deeper metaphors and histories that shape that seeing?
In this framing, foresight becomes a form of attention: to the visible and the invisible, to the structural and the symbolic, to both emerging dynamics and recurring patterns.
CLA helps us descend through the layers of meaning, and then loops back to interrogate the frame itself, recursively surfacing the assumptions behind our assumptions. Heidegger reminds us that all revealing hides something, and embodiment helps us feel where those frames live: in our language, our posture, our perception, and in the choices we don’t realize we’re making.
Rather than design the future from a place of detachment, this perspective invites us to walk toward it from a place of deeper contact: one that includes the symbolic, the systemic, and the somatic, and that holds space for intelligence that cannot be measured but is no less real.
Conclusion: Rethinking Intelligence, Reimagining Futures
As we navigate an era increasingly defined by artificial intelligence and accelerated change, it becomes clear that the question is not just what we are building, but how we are perceiving and what we are failing to perceive in the process. This essay has explored how Causal Layered Analysis and Heidegger’s philosophy of technology, when held in dialogue with embodied and symbolic perspectives, can offer a deeper lens for futures thinking: one that sees intelligence not as output, but as orientation.
By working across CLA’s four layers – litany, system, worldview, and myth – we begin to understand how surface-level narratives are sustained by deeper structures of meaning. Heidegger’s critique of enframing warns us of the danger in reducing the world to function and utility, while thinkers like Merleau-Ponty (1962), van der Kolk (2014), Baudrillard (1994), Benjamin (1969), and Jung (1981) offer reminders that the body, the psyche, and the symbolic are not peripheral. They are constitutive of how we know.
In a time when simulation often replaces presence, and abstraction detaches us from lived reality, re-integrating somatic intelligence, mythic resonance, and narrative depth becomes not a retreat from complexity, but a radical act of reorientation. It is an invitation to see futures not only as systems to shape, but as stories to listen to and to sense from within.
What lies ahead is not just a matter of foresight, but of presence: returning to the body as a place of perception, reexamining the stories we live through, and making space for a kind of intelligence that emerges in stillness as much as in structure. Ultimately, the body does not just anchor us; it reveals to us what machines cannot. Futures worth living emerge not from algorithms alone, but from breath, sensation, and the slow intelligence of being alive.
References
Baudrillard, J. (1994). Simulacra and simulation (S. F. Glaser, Trans.). University of Michigan Press.
Benjamin, W. (1969). The work of art in the age of mechanical reproduction. In H. Arendt (Ed.), Illuminations (pp. 217–252). Schocken Books.
Csordas, T. J. (1994). Embodiment and experience: The existential ground of culture and self. Cambridge University Press.
Heidegger, M. (1977). The question concerning technology and other essays (W. Lovitt, Trans.). Harper & Row.
Inayatullah, S. (1998). Causal layered analysis: Poststructuralism as method. Futures, 30(8), 815–829. https://doi.org/10.1016/S0016-3287(98)00086-X
Inayatullah, S. (2019). Causal layered analysis: A four-level approach to alternative futures – Relevance and use in foresight. Futuribles, 430, 49–63.
Ingold, T. (2011). Being alive: Essays on movement, knowledge and description. Routledge.
Jung, C. G. (1981). The archetypes and the collective unconscious (R. F. C. Hull, Trans., 2nd ed.). Princeton University Press.
McLuhan, M. (1994). Understanding media: The extensions of man. MIT Press. (Original work published 1964)
Merleau-Ponty, M. (1962). Phenomenology of perception (C. Smith, Trans.). Routledge & Kegan Paul.
Milojević, I., Inayatullah, S., & Poocharoen, O. (2022). Drama to Dharma and the holographic Buddha: Futures thinking in Thailand’s civil service and constitutional scenarios. Journal of Futures Studies. In press. https://jfsdigital.org/21789-2/
Milojević, I., Inayatullah, S., & Poocharoen, O. (2024, July 3). Artificial intelligence, water futures, and a living constitution: Using the Futures Triangle to envision novel futures for Thailand. Journal of Futures Studies – Perspectives. https://jfsdigital.org/2024/07/03/artificial-intelligence-water-futures-and-a-living-constitution/
van der Kolk, B. (2014). The body keeps the score: Brain, mind, and body in the healing of trauma. Viking.
Wilson, D. E., & Inayatullah, S. (2024). Embodied CLA: The role of polyvagal theory in futures methodology – A conversation with Sohail Inayatullah and Debra Em Wilson. Journal of Futures Studies, 29(1), 87–97. https://doi.org/10.6531/JFS.202409_29(1).0007