Turning to Embodied Knowledge: Why climate resilience depends on more than algorithms
🧵 Insights from my PhD research on anticipatory action technologies and the politics of humanitarian evaluation.
A few years ago, I experienced burnout and depression—not uncommon in the humanitarian sector, and particularly frequent for neurodivergent folks like me. What pulled me through wasn’t another cognitive strategy or psychoanalysis. It was somatic therapy, reconnecting with my body through dance, pottery, yoga, and embracing my queer identity.
This journey wasn’t just personal healing. It led me to a question that had been lurking in my professional work for years: Why is embodied knowledge missing from humanitarian and development settings?
The critical debates surrounding Artificial Intelligence make this absence even more striking. If technology’s counterweight lies anywhere, it’s precisely in embodied, human experience—the knowledge systems that live in our bodies, not in the algorithms.
For most of my life, I’ve processed knowledge through somatic experience—trusting what my body tells me before making decisions—without even realizing I was doing so. Only through somatic work did I recognize this pattern as a legitimate way of knowing. And only through my humanitarian work did I see its research implications.
Over the past decade, working in humanitarian innovation across four continents, I’ve seen how communities assess technologies through sensory, affective, and narrative ways of knowing. Yet these knowledge systems have been systematically marginalized in formal evaluation frameworks. This gap reflects deeper questions of epistemic authority and power: whose knowledge counts as legitimate evidence? How can we account for embodied expertise?
Led by my lived experience and encouraged by people dear to me, I decided to pursue a PhD to explore these themes and give them the attention they deserve.
Here is a glimpse into my research:
When the Weather App Gets It Wrong
We’ve all experienced this: checking the weather on your phone, seeing clear skies predicted, then getting caught in a downpour a few hours later. Or the reverse—canceling plans because rain was “certain,” only to watch a perfect sunny day unfold—literally today in Cape Town!
These aren’t just inconveniences. They’re symptoms of a deeper problem: climate non-stationarity—the statistical patterns of the past no longer describe the present, so predictive models built on historical data break down. The future is no longer like the past.
Indigenous and pastoralist communities have known how to read nature’s weather signals for generations. In fact, most people have—just think about what your grandmother used to say about the weather. They don’t check forecasts; they read the sky’s texture, watch animal behavior, sense changes in humidity, observe how soil responds to morning dew, or feel it in their bones (as my grandma used to say).
This is embodied knowledge—generations of environmental attunement encoded not in datasets but in sensory experience.
Now imagine this: An AI-powered flood prediction system is deployed in a high-risk community through an Anticipatory Action program. It promises to revolutionize disaster response by triggering pre-emptive aid before floods hit. The algorithm says flood risk is low. But community members read different signs: the unusual flight patterns of birds, the smell of rain carried on the wind from distant hills, the way water pools in certain fields.
Whose knowledge governs the decision to act?
This isn’t hypothetical. It’s happening across the globe, as humanitarian organizations (full disclosure: myself included) partner with tech companies to deploy predictive systems for climate disasters. These programs claim to “center local knowledge” while simultaneously privileging algorithmic outputs over communities’ own assessments.
The Ways of Knowing
At stake here are fundamentally different epistemologies—different ways of producing and validating knowledge.
Western scientific knowledge operates through abstraction, quantification, and universalization. It seeks patterns that can be replicated across contexts, measured with precision, and encoded in models that claim objectivity. In Anticipatory Action programs, this manifests as rainfall thresholds, food insecurity indices, satellite-derived soil moisture readings—quantifiable triggers that determine when humanitarian aid gets released (Schneider, 2024).
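To make concrete how such quantifiable triggers operate, here is a minimal sketch of the kind of threshold rule an Anticipatory Action program encodes. All thresholds, field names, and the function itself are hypothetical illustrations, not drawn from any real program:

```python
# Minimal sketch of an Anticipatory Action trigger rule.
# All thresholds and parameter names below are hypothetical
# illustrations, not taken from any real deployed system.

def should_trigger(forecast_rainfall_mm: float,
                   food_insecurity_index: float,
                   soil_moisture: float) -> bool:
    """Release pre-emptive aid only when every quantitative
    indicator crosses its predefined threshold."""
    RAINFALL_THRESHOLD_MM = 120.0     # e.g. 3-day forecast accumulation
    FOOD_INSECURITY_THRESHOLD = 3.0   # e.g. an IPC-style phase score
    SOIL_SATURATION_THRESHOLD = 0.85  # fraction of saturation

    return (forecast_rainfall_mm >= RAINFALL_THRESHOLD_MM
            and food_insecurity_index >= FOOD_INSECURITY_THRESHOLD
            and soil_moisture >= SOIL_SATURATION_THRESHOLD)

# Note what the function's signature excludes: an elder's reading of
# cloud formations has no input slot here. Unless it is translated
# into one of these numeric parameters, it cannot move the decision.
print(should_trigger(150.0, 3.5, 0.9))  # → True: all thresholds crossed
print(should_trigger(40.0, 3.5, 0.9))   # → False: forecast says "low risk"
```

The structural point is in the signature itself: whatever knowledge cannot be expressed as one of the numeric inputs is, by construction, invisible to the trigger.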
Embodied knowledge operates through specificity, sensation, and situated experience. It’s the knowledge that lives in bodies that have spent decades or centuries reading a particular ecosystem. The elders who can predict flooding by the behavior of ants. The farmer who knows when to plant, not from a calendar but from how morning light hits the fields. The fisherman who reads ocean currents through subtle shifts in wave and cloud patterns that satellites won’t see.
Neither is inherently superior. But one gets labeled “traditional,” “local,” “indigenous”—implicitly supplemental to the “real” knowledge produced by science (Escobar, 1995). The other gets embedded in decision-making systems, evaluation frameworks, and funding structures as the only legitimate basis for action.
As Arturo Escobar argued three decades ago, development itself was constructed through this epistemological violence: defining entire populations as “underdeveloped” because they didn’t conform to Western notions of progress, then deploying “scientific” interventions to fix them.
The Validation Paradox
Here’s where it gets particularly insidious: Even when Anticipatory Action programs claim to value local knowledge, they create a validation paradox.
Communities’ environmental observations can only become “actionable” if they’re first legitimized by Western science. The elder’s flood prediction based on cloud formations? It needs to be corroborated by rainfall measurements. The collective’s assessment of crop stress based on plant appearance? It requires satellite-derived vegetation indices for confirmation (Hiwasaki et al., 2014).
This creates what scholars call “science-first mentalities” (Hermans et al., 2022), where alternative knowledge systems are systematically marginalized, forced to translate themselves into the dominant language before they can be heard.
The result? Communities’ embodied knowledge gets filtered, flattened, and often erased entirely by the time it reaches formal evaluation reports. What I call the “storytelling deficit” reveals the gap between how communities assess whether technologies meet their needs and what appears in official impact documentation.
When Predictions Fail, Who Is Accountable?
The accountability question becomes especially fraught when algorithmic predictions diverge from reality.
When the flood comes despite the AI saying risk was low, who bears responsibility? The technology provider who designed the model? The humanitarian organization that deployed it? The donors who demanded quantifiable metrics? Or the communities who “failed” to evacuate based on flawed forecasts?
Usually, communities bear both material consequences (they experience the disaster) and epistemic ones (their knowledge was dismissed). When their embodied assessments prove more accurate than algorithmic predictions, this rarely redistributes decision-making authority. Instead, it gets framed as an edge case or as local contextual knowledge that can be integrated into future models, still subordinated to the primacy of computational prediction.
This reveals how evaluation frameworks function not just as documentation but as governance mechanisms. They determine what counts as legitimate evidence, whose assessments matter, and ultimately whose knowledge shapes humanitarian futures. When algorithmic predictions script (Akrich, 1992) automated responses, they simultaneously script which knowledge forms are actionable and which actors have authority over anticipation.
The integration of AI into these systems transforms accountability relations in another way too—replacing traditional community accountability with new forms that blur care and control (van den Homberg et al., 2020). What gets framed as protection through prediction can become coercive management, especially when the technology says one thing and embodied knowledge says another.
What Gets Measured, What Gets Missed?
Technologies aren’t just tools we use—they’re experienced through our bodies, senses, and emotions, shaping what we notice and how we know. They mediate what becomes perceptible and legitimate (Ravn & Johns, 2024). What anthropologists call “regimes of sensory values” (Howes, 2003) and “affective economies” of power (Ahmed, 2004; Mazzarella, 2009) describe how technologies train us to perceive certain things as important while ignoring others.
When communities translate embodied knowing into narrative telling, these stories are filtered through evaluation frameworks that privilege particular forms of evidence.
Consider what typical Anticipatory Action evaluations measure:
Number of households reached with early warnings
Percentage of triggered actions completed on time
Cost-effectiveness compared to post-disaster response
User adoption rates of digital platforms
Accuracy of flood predictions against observed events
Now consider what they typically miss:
Whether community decision-making capacity was strengthened or eroded
How anticipation labor was redistributed, and whose time was extracted or replenished
Whether collective care infrastructures were supported or depleted
What kinds of knowledge became more or less valued
Who gained or lost authority over defining risk and appropriate response
The first list generates clean metrics for donor reports. The second list captures what determines whether interventions actually build community resilience or create new dependencies.
When evaluation frameworks privilege quantifiable outcomes (“households reached,” “actions triggered”) over relational and embodied dimensions of change, they render invisible the very dynamics that determine long-term sustainability.
This isn’t just a measurement problem. It’s an epistemological one. The frameworks themselves embed assumptions about what constitutes legitimate evidence, who qualifies as a credible evaluator, and what kinds of impact matter enough to track.
Toward Epistemological Justice
How do we create evaluation frameworks that genuinely incorporate communities’ embodied knowledge alongside algorithmic predictions, not as supplementary “local context” but as equally legitimate forms of expertise?
This requires more than adding a few qualitative indicators to existing frameworks. It demands fundamentally rethinking what we mean by evidence, expertise, and impact.
Here are some possibilities that I propose:
Redistributing epistemic authority – Moving beyond token consultation toward genuine co-governance, where communities have actual power over determining success criteria, not just input into predetermined frameworks. This means acknowledging that different knowledge systems aren’t just different sources of the same type of knowledge, but fundamentally different ways of knowing that each reveal different aspects of reality.
Documenting knowledge transformation – Tracking how embodied knowledge gets translated, filtered, or erased as it moves through the humanitarian knowledge value chain—from community assessment to technology design to evaluation reports to scaling decisions.
Operationalizing somatic indicators – Developing assessment tools that can capture sensory, affective, and relational dimensions alongside quantitative metrics—not to reduce them to numbers but to make them legible to decision-makers.
Centering care politics – Evaluating interventions based not just on productive outputs but on whether they replenish or deplete communities’ capacity for collective care and mutual aid, what Cheesman and Aradau (2025) call “technosocial reproduction,” the life-sustaining labor that humanitarian technologies should support rather than extract.
The Stakes
At a time when climate disasters are intensifying, funding for humanitarian response is tightening, and AI systems are being deployed at an unprecedented scale, understanding these dynamics becomes urgent.
When we privilege algorithmic predictions over embodied assessments, we actively redistribute authority away from communities and toward external experts, despite research showing that effective predictive technologies require contextually rich local knowledge (Iazzolino & Dhungana, 2025). We repeat the same colonial patterns that made communities dependent on external expertise while dismissing their own knowledge.
The alternative requires “decolonizing methodologies” (Smith, 1999)—approaches to knowledge production that not only include marginalized voices but also fundamentally challenge whose expertise counts as legitimate and who gets to define success.
For humanitarian innovation, this means moving beyond rhetoric of “localization” toward genuine epistemological justice: evaluation frameworks where communities’ embodied knowledge shapes not just local adaptations but the fundamental design, assessment, and futures of digital humanitarianism.
Anticipatory Action programs are just one instance of a more fundamental problem: Western epistemology’s relentless pursuit of one “true” way of knowing has left us deeply unbalanced. We’ve privileged analytical thinking while systematically neglecting embodied, somatic, affective, and relational ways of being. These aren’t “traditional” knowledge to extract from others—they’re human capacities we’ve systematically devalued.
Crucially, the assumptions we embed in our systems shape which futures become possible (Muiderman, 2022). If we’re serious about building resilience, we need to live equally with both sides of our cognition. The question isn’t about better prediction—it’s about acknowledging all the ways humans actually know the world, and letting those shape the futures we build.
What are your experiences with different ways of knowing in your field?
Have you noticed when embodied knowledge gets dismissed or when it proves more accurate than “objective” metrics?
I’d love to hear your stories in the comments.
References
Ahmed, S. (2004). The cultural politics of emotion. Routledge.
Akrich, M. (1992). The de-scription of technical objects. In W. E. Bijker & J. Law (Eds.), Shaping technology/building society (pp. 205–224). MIT Press.
Cheesman, M., & Aradau, C. (2025). Technosocial reproduction and humanitarian reason. E-flux Architecture. https://www.e-flux.com/architecture/humanitarianism/6782979/technosocial-reproduction-and-humanitarian-reason
Escobar, A. (1995). Encountering development: The making and unmaking of the Third World. Princeton University Press.
Hermans, T. D. G., Šakić Trogrlić, R., & van den Homberg, M. J. C. (2022). Exploring the integration of local and scientific knowledge in early warning systems for disaster risk reduction: A review. Natural Hazards, 114(1), 1125–1152.
Hiwasaki, L., Luna, E., Syamsidik, & Shaw, R. (2014). Process for integrating local and indigenous knowledge with science for hydro-meteorological DRR and climate change adaptation in coastal and small island communities. International Journal of Disaster Risk Reduction, 10, 15–27.
Howes, D. (2003). Sensual relations: Engaging the senses in culture and social theory. University of Michigan Press.
Iazzolino, G., & Dhungana, N. (2025). Prediction and data curation in digital humanitarianism. Big Data & Society, 12(3).
Mazzarella, W. (2009). Affect: What is it good for? In S. Dube (Ed.), Enchantments of modernity: Empire, nation, globalization (pp. 41–69). Routledge.
Muiderman, K. (2022). Approaches to anticipatory governance in West Africa: How conceptions of the future have implications for climate action in the present. Futures, 141, Article 102982.
Nadasen, P. (2021). Rethinking care work: (Dis)affection and the politics of caring. Feminist Formations, 33(1), 165–188.
Ravn, L., & Johns, F. E. (2024). Digital humanitarianism: An interview with Fleur Johns. Critical Humanities, 3(1), Article 10.
Schneider, P. (2024). “The locals will know”: The role of local actors and local knowledge in trigger development for anticipatory action. IFHV Working Paper Series, 14(2), 1–58.
Smith, L. T. (1999). Decolonizing methodologies: Research and indigenous peoples. Zed Books.
van den Homberg, M. J. C., Gevaert, C. M., & Georgiadou, Y. (2020). The changing face of accountability in humanitarianism: Using artificial intelligence for anticipatory action. Politics and Governance, 8(4), 456–467.