Practical Wisdom in the Age of AI

Introducing TC’s AI for Sustainability Transformations Project

AI’s impact is no longer an abstract concern: it is actively reshaping how we think, write, coordinate, and decide. For those of us working in sustainability transformations, this shift touches the heart of our work.

In a moment when ecological, political, and epistemic systems are destabilizing, AI amplifies our risks and our responsibilities, but it also brings us new opportunities.

Over the past year, TC members helped us develop an Ethical Compass for AI and curate a range of useful AI tools. What became clear is that we need not more tools, but better judgment. This next phase builds directly on that insight.

AI is moving into our conversations and workflows at a time when our field is grappling with rupture: imagining alternative futures, navigating complexity, building relationships in diverse networks, shaping narratives, designing adaptive strategies, and making just decisions.

The use of AI offers real promise: expanded collective intelligence, deeper analysis, faster synthesis, and broader participation. But these benefits depend on careful design and use. 

Its risks are equally significant. AI can entrench value extraction and labor exploitation, intensify energy and resource demands, flatten nuance into easy consensus, reinforce epistemic monoculture, deepen power imbalances, and erode human skill through overreliance.

To explore what this means in practice, Transformations Community is working with Mútua Technologies, Metarelational Tech, and the Sustainable Impact Foundation to set up an ambitious agenda of enquiry and experimentation on this topic, running throughout 2026.

Getting Started

We began by reviewing TC’s previous work in this area, which sparked rich community dialogue and informative tool showcases. Several tensions quickly surfaced.

Deep ethical concerns about AI’s socioenvironmental impacts, political economy, and geopolitical context sit uneasily alongside the excitement of new speed and scale in sustainability work. We also saw that how we relate to AI reflects how we relate to each other, to knowledge, and to nature. Meanwhile, governance lags behind adoption, leaving us to build habits before we develop the literacy and safeguards to use these tools well.

This next phase goes deeper. We aim to close the gap between our values and daily practices, surface blind spots, confront discomfort, build new skills, and co-create what is needed. Above all, we want to strengthen our ability to recognize, assess and adapt emerging patterns of good practice. 

Our central question is simple: what does practical wisdom with AI look like in transformations work? 

To explore this, we’re first developing a pattern language for responsible, skillful AI use in research and practice. From there, we will co-design tools, services and artefacts that embed these patterns into real workflows, cultures and decisions. Both phases will proceed in close partnership with members of the Transformations Community. This is research in service of practice.

In parallel, we are convening a global network of experts and forming a semi-independent council to carry this work forward. The council will connect disciplines, strengthen rigor and amplify the most valuable insights and cases that emerge. 

What is a “pattern language”? 

A pattern language identifies recurring problems in a field and documents proven, interconnected solutions. The result is a living, lightweight resource that makes collective knowledge usable in real time. First conceived by architect Christopher Alexander in the 1970s, it has since spread across disciplines.

As a lens on reality, it offers: 

  • Usability: Access to shared learning without reinventing solutions. 
  • Portability: Transferable practice across organizations, sectors and contexts. 
  • Navigability: Interlinked patterns that evolve through use, feedback and adaptation. 
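As a rough illustration only (not part of the project’s design), the “interlinked patterns” idea can be pictured as a small linked data structure: each pattern names a recurring problem, documents a solution, and points to related patterns. The pattern names and wording below are invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Pattern:
    """One pattern: a named, recurring problem with a documented solution."""
    name: str
    problem: str
    solution: str
    links: list = field(default_factory=list)  # names of related patterns

# A toy pattern language: patterns indexed by name and interlinked, so a
# practitioner can navigate from one recurring problem to its neighbours.
language = {
    p.name: p
    for p in [
        Pattern(
            "human-in-the-loop",
            "AI output is acted on without review",
            "Require a human checkpoint before decisions are finalized",
            links=["provenance-note"],
        ),
        Pattern(
            "provenance-note",
            "It is unclear which parts of a text were AI-assisted",
            "Attach a short note recording how AI was involved",
            links=["human-in-the-loop"],
        ),
    ]
}

def related(name: str) -> list:
    """Follow links from one pattern to its neighbours in the language."""
    return [language[n] for n in language[name].links]
```

The point of the sketch is navigability: because links are first-class, the collection can grow and rewire through use and feedback rather than being a fixed catalogue.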

A pattern language is not neutral. We plan to question and redesign it for this context. 

Its risks include: 

  • False universality: Treating solutions as if they work everywhere, every time. 
  • Context erasure: Stripping away history, power and place in favor of quick fixes. 
  • Over-template thinking: Reducing living relationships to tiny building blocks, as if order equals control, and as if control were a worthy aim in a complex world.
  • Hidden norms: Framing “best practice” as unquestioned authority.

In response, we’re designing a pattern language that can also see what patterning leaves out. It will surface hard questions: What can’t be patterned without harm? What does “good refusal” look like when something shouldn’t be patterned? How do patterns track ethics as tools and norms shift? How do we prevent “AI legitimacy laundering” (where AI processes stand in for real consultation and deliberation), or premature compression of complexity?

Through a co-design process, a diverse cohort will then help turn these insights into practical tools. Possibilities include a scenario-based card game for improving practice, a chatbot that guides users to relevant patterns, a drag-and-drop workflow AI visualisation tool for practitioners to reflect on and socialise the patterns they use, a peer-matching system based on who’s using similar patterns, or a reusable workshop format for redesigning workflows. But this will be an open-ended design process – anything could happen.
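To make one of these possibilities concrete: the peer-matching idea could, in its simplest form, rest on nothing more than set overlap between the patterns people report using (Jaccard similarity). The practitioner names and pattern names below are hypothetical, and a real system would surely weigh more than overlap.

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two pattern sets: 0 = disjoint, 1 = identical."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical practitioners, each described by the patterns they use.
usage = {
    "alice": {"human-in-the-loop", "provenance-note", "good-refusal"},
    "bea": {"human-in-the-loop", "provenance-note"},
    "chen": {"energy-budget"},
}

def best_match(person: str) -> str:
    """Suggest the peer whose reported pattern use overlaps most."""
    others = [p for p in usage if p != person]
    return max(others, key=lambda p: jaccard(usage[person], usage[p]))
```

Here `best_match("alice")` would suggest "bea" (two of four patterns shared) rather than "chen" (none shared).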

Join our newsletter for insights, opportunities & events