Words by Ve Dewey

Design Council Digest: Designing Entangled Futures

A systemic design lens on AI, sustainability, and governance

Photo: Design Council 

Design Council Expert, Ve Dewey

“Amid this terrifying climate crisis, designers need to step up.”

These words of Indy Johar (pictured below), co-founder of Dark Matter Labs, cut through the World Design Congress (Congress). His provocation marked a turning point: climate boundaries are collapsing, and design can no longer be surface-level or an afterthought. As AI reshapes society at a rapid pace, design stands at a critical juncture.

Credit: Design Council

This essay is a design-led reflection on how the Congress, through its companies and speakers, revealed AI’s potential to support sustainability, and on why systemic design must be at its core.

If approached appropriately, AI can contribute to sustainability, as demonstrated by Desolenator, co-founded by Alexei Levene (pictured below), a keynote speaker at the Congress. By combining solar technology and AI to deliver clean water, it shows how intelligence can align with planetary limits when guided by regenerative principles.

Credit: Design Council

Another example is CarbonTrac, founded by Yasmine Abdu. This AI-powered platform helps retailers and consumers measure and reduce product-level carbon emissions in real time, demonstrating how AI can steer everyday choices and industry practices toward greater accountability and progress on climate goals.

However, AI is not a neutral tool; it is a complex, interconnected ecosystem encompassing code, infrastructure, resources, stakeholders, and other elements, with significant impacts across society. Addressing sustainability challenges related to AI cannot be reduced to reactive or isolated interventions. It requires a holistic, iterative process that accounts for social, economic, and ecological consequences across the wider system. This is the role of systemic design: to move beyond surface-level fixes and embed accountability, interdependence, and long-term stewardship into the foundations of AI. Only then can AI become part of a regenerative future, one designed for life, not extraction.

AI and Sustainability

Credit: Design Council

The question is no longer whether AI will transform society, but how and by whom it will be designed.

During the “Sustainable Intelligence” panel (pictured above), Dr Ramit Debnath of the University of Cambridge reminded us that AI is a design problem. I would go even further: AI is a design systems problem.

By systemic design, I mean an approach that recognises the entanglement of AI with wider systems, from infrastructures and energy use to governance frameworks and social relations, and seeks to shape these interdependencies from the outset, not as an afterthought. This is what makes systemic design different from the “ethics add-ons” we often see in technology: rather than bolting on principles after the fact, it embeds accountability, reciprocity, and long-term stewardship into the architecture of systems themselves. In other words, systemic design is not merely a wrapper for ethics or efficiency; it is a form of governance. Supporting this view, work from the Bonn Sustainable AI Lab, under Dr Aimee van Wynsberghe, has noted that “sustainable AI” is too often discussed as if it only means practical applications. However, unless the design, infrastructure, energy use, and embedded values of AI are considered, the label risks becoming hollow.

Van Wynsberghe frames “sustainable AI” along two branches: AI for sustainability (using AI to advance goals such as preventing deforestation or achieving the SDGs) and the sustainability of AI (the energy, carbon, and material costs of building and running it). Unless both are addressed together, the term risks becoming hollow. This framing helps situate both the Congress applications and the broader civic debates.

The Congress should not be viewed as an isolated event, but as part of a broader dialogue on how design can guide the green transition; it set the tone: 

“Designers and commissioners of design all around the world have a critical role to play in designing a regenerative future by reducing carbon emissions and increasing biodiversity. Design shapes the world and holds huge power.”

That power was visible across the programme. Greyparrot uses AI-driven computer vision to track waste streams, enabling circular economies. Dassault Systèmes advances “Frugal AI,” pruning models to reduce computation, improving data-centre efficiency, and simulating airflow and cooling to shrink its environmental footprint. Pivotal Future harnesses biodiversity-first AI, expanding value systems beyond carbon. CGI showed how measuring a client’s AI footprint revealed that up to 50% of infrastructure emissions could be reduced through smarter IT and code efficiency. Congress mission partner, Kearney, champions AI’s role in supply chain optimisation. From academia, the Collective Intelligence & Design Group at the University of Cambridge demonstrates how human–machine collaboration can support equitable decision-making, for example, by mapping misinformation for just energy transitions.

These developments underscore the central question: left to corporate strategies alone, AI risks locking us into extractive and unsustainable infrastructures. Redirecting these trajectories requires more than efficiency tweaks; it calls for stronger governance frameworks and accountability that weigh climate costs against benefits, investment in less energy-intensive infrastructures, and systemic design approaches that make impacts visible and relational. A few precedents exist, such as Singapore’s Green Data Centre Roadmap, which enforces water-efficiency rules for large users and ties new capacity to sustainability commitments. However, most AI governance, from the EU AI Act to national policies, remains too slow or too voluntary to meet the urgency of planetary limits.

Distorted Lake Trees by Lone Thomasky & Bits&Bäume

The image illustrates the vastness of untouched nature, which supports all (human) life. As much of it is explored for the first time, we are increasingly seeing how AI development encourages industrial expansion into, and exploitation of, these previously untouched environments. The digital distortion in the image represents the breakdown of these ecosystems through destructive practices such as deforestation and water contamination by AI companies.

Credit: Lone Thomasky & Bits&Bäume / https://betterimagesofai.org /https://creativecommons.org/licenses/by/4.0/

Urgency Beyond Congress

Yet the optimism of these applications sits uneasily alongside the reality of AI’s unchecked growth. Since the Congress ended on 10 September, US big tech has already announced £31 billion of new investment in the UK’s AI infrastructure, alongside plans to increase the number of data centres by a fifth. Government oversight remains weak: the UK’s new AI Energy Council is dominated by tech and energy firms, with no role for civil society or local communities.

The stakes are high. Training a single large language model (LLM) can emit over 200 tonnes of CO₂, equivalent to more than 100 international flights. By 2040, AI-related electricity use could reach nearly 8% of global demand, rivalling entire industrial sectors. The burden is not evenly shared: data centres consume water and energy in already vulnerable regions, while companies report “net zero” on paper, masking millions of tonnes released in practice. Meta, for example, claimed “net zero”, yet closer examination shows its emissions reached 3.9 million tonnes in 2023.
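To make the flight comparison concrete, here is a minimal back-of-envelope sketch in Python; the per-flight figure (roughly two tonnes of CO₂ per passenger for a long-haul return trip) is an illustrative assumption, not data reported at the Congress.

# Illustrative conversion only; both figures are rough assumptions for context.
training_emissions_tco2 = 200.0   # assumed emissions of one large training run, in tonnes of CO2
flight_emissions_tco2 = 2.0       # assumed per-passenger long-haul return flight, in tonnes of CO2

flight_equivalents = training_emissions_tco2 / flight_emissions_tco2
print(f"~{flight_equivalents:.0f} flight equivalents per training run")  # prints ~100

Under these assumptions, a 200-tonne training run translates to roughly 100 long-haul flights, the order of magnitude cited above.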

These realities reinforce the point: left to corporate strategies alone, AI risks locking us into extractive and unsustainable infrastructures. Only systemic design that is accountable, transparent, and relational can redirect these trajectories toward regenerative futures.

Web of Influence II by Elise Racine & The Bigger Picture explores the concepts of interconnectedness, complexity, and transparency in a world shaped by AI.

Credit: Elise Racine & The Bigger Picture / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

Beyond Congress: Accountability and Alternatives

The Congress voices resonate with a growing global conversation, one that makes clear the need for design to be at the heart of how AI is imagined, built, and governed, not just mentioned in passing as “design and deployment.”

Dr Sasha Luccioni, AI and Climate Lead at Hugging Face (and named to TIME100 AI in 2024), has mapped the full lifecycle impacts of AI, from training and deployment to hardware manufacturing, data centres, and e-waste. In a primer on the environmental effects of AI, Luccioni et al. warn that every stage of the process consumes natural resources and releases greenhouse gases, underscoring the need for transparency and accountability across the sector.

Yet transparency remains weak, and research is reinforcing these warnings. The Minderoo Centre for Technology & Democracy at the University of Cambridge, in its 2025 report, Big Tech’s Climate Performance and Policy Implications for the UK, showed how inconsistent disclosures and unchecked infrastructure growth risk derailing Net Zero goals. AI remains dominated by Silicon Valley firms, framed through Western logics of scale, optimisation, and control. Looking beyond this status quo requires embracing alternative ways of ethical knowing, including sovereign and Indigenous approaches. 

As Lord Deben reminded the Congress: 

“Policy should set clear outcomes, not prescribe solutions. Designers must think from the very first step about lifetime use and end-of-life.”

Norway’s “Stargate” facility, announced in July 2025 and framed as an “AI gigafactory” for Europe, is powered by renewable energy and grounds AI sovereignty in environmental responsibility, showing how green computing can align scale with sustainability. Switzerland, with its Swiss AI Initiative, ties its open-science model Apertus to climate commitments, positioning ecological accountability as part of its sovereign AI strategy. Abundant Intelligences, an Indigenous-led programme, prototypes AI grounded in reciprocity: through “pods”, from Māori kaitiakitanga in Aotearoa to Hawaiian ocean protocols, it reimagines AI through life-sustaining worldviews. These perspectives highlight both the richness and the fragmentation of current approaches to the topic.

What makes them instructive is that each embodies elements of systemic, accountable, and relational design. Norway’s Stargate is systemic in reimagining the infrastructure of AI itself: built on surplus hydropower and designed with closed-loop liquid cooling, it reduces water use whilst enabling regional innovation, embedding ecological limits directly into scale. Apertus, the Swiss AI Initiative’s open-science model, demonstrates accountability by tying development to climate commitments and making ecological impact as measurable as technical performance, becoming the first major model to comply with the EU AI Act’s transparency requirements. Meanwhile, Abundant Intelligences exemplifies systemic and relational design: a cross-disciplinary, cross-cultural collaboration grounded in regeneration, generosity, and reciprocity, shifting away from scarcity logics “toward a future where Indigenous communities have capacity to fashion AI systems that nurture us, and all the beings around us.” The common thread is clear: AI’s sustainability potential cannot be realised through technical fixes alone. It depends on design that is systemic, accountable, and relational.

Design as Governance

In weaving these voices together, this essay situates the World Design Congress not as an isolated event, but as one node in a broader dialogue on how design must guide the green transition. The Congress itself set the challenge: “Design shapes the world and holds huge power.” The question now is how we use that power.

To engage meaningfully with sustainable AI, both AI for sustainability and the sustainability of AI, design must not be introduced after systems have already been built, but rather be present from the very beginning: from the development of LLMs to the design of AI governance frameworks. Without systemic design at the centre, AI’s sustainability promises risk remaining surface-level or, as van Wynsberghe warns, “hollow”, because these commitments will not reach the depth and rigour of the systemic entanglements that AI cultivates.

Johar’s call for designers to step up finds resonance in Karen Hao’s warning that today’s AI empires thrive on opacity and automation without consent. Both underscore the same truth: without systemic design as governance, AI risks deepening extractive logics rather than enabling regenerative futures.

Hanna Barakat & Archival Images of AI + AIxDESIGN (above & cover image)

“Data mining” triptych series: a visual exploration of the “new frontier” of AI technology. The base of these images depicts “The Ascent of Mont Blanc”, an effort to summit the mountain painted in 1855 by John MacGregor. Reappropriating the intention of these paintings, the collage overlays images of wires and circuits that are “melting the ice”, an ironic commentary on “progress” in the name of environmental extraction. The juxtaposition of gradient backgrounds with layered microchips, wires, and cell towers offers an ironic visual commentary on digital colonialism and the visual language of high-tech companies. The use of playful colours contrasts with the fragile foundations of AI infrastructure, rooted in digital colonialism and climate costs. This work positions AI as a system built on labour, material, and capital, revealing the often invisible labour in electronics manufacturing throughout the Global Majority.

Credit: Hanna Barakat & Archival Images of AI + AIxDESIGN / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

About Ve Dewey

Ve Dewey is a globally networked executive design leader whose career has been at the intersection of technology, design, and innovation, with success across industry, the third sector, and academia. Her natural state of thinking is on the bleeding edge, allowing her to see opportunities others do not and collaborate with diverse leaders and organisations to create a positive impact for the collective.

Over a 15+ year career, Ve has championed inclusive approaches to design leadership, organisational change, and AI. This is evident in her experience of curating and executing the inaugural design HR role at Mattel, supporting 400+ creatives during the company’s cultural and digital transformation; leading multi-million-dollar global rebrands at Mattel; product marketing at Adobe, where she established a new creative marketing programme in Europe to support key customers (e.g., WPP and TfL) to creatively meet business imperatives; and at the Royal College of Art, where she collaborated on evolving a neuroscience-based inclusive leadership model to address equity, diversity, and inclusivity.

Ve is the founder of the RSA’s Global Decolonising Design Coalition, on the board for the Australian-founded creative organisation Never Not Creative, a visiting Fellow at ZincVC, and holds an MBA (distinction) from Central Saint Martins with a research focus on design and systems-based leadership.

Ve on LinkedIn: https://www.linkedin.com/in/vanessadewey/
