Visualising Positive Climate Futures with AI

We recently hosted a workshop at WDCD Live Amsterdam exploring how to imagine positive climate futures. While news, films and other popular media often depict dystopian outcomes of climate change, we focused on envisioning a world that not only adapts to climate urgencies but is also a place we’re excited to live. Energised by hearing so many positive discussions on climate change, we decided to try bringing these ideas to life using AI image generation, adding more positive imagery of climate futures into the world.

Initial considerations for using AI

We thought that AI image generation could help us bring the stories and worlds from the workshop to life. But first we had to consider some valid concerns:

  1. We first wanted consent from the workshop participants to turn their positive climate future ideas into AI prompts.
  2. Inevitably, AI repeats history, as it draws from information and imagery that currently exists. This can make it challenging to visualise things that have never existed before.
  3. The imagery used to train AI is not globally representative. This means that western, colonial imaginaries can dominate the image generation process.
  4. An AI experiment about climate change can be seen as ironic, as AI has a significant environmental footprint. We want to acknowledge this by not being careless with our experimentation, while also recognising the nuance that AI can be helpful in creating positive climate narratives.

The results of our experimentation

  • A poster campaign that features interviews with humans and non-humans doing good things for the climate, and how they are impacted by climate change.
  • A community centred on people caring for each other, the ecosystem, and their own well-being. It includes communal green spaces and a shift away from consumerism and big tech, drawing inspiration from past ways of living that fostered closer connections with nature and community.
  • A community that shares resources, takes responsibility for its waste, and revives forgotten knowledge and practices, all while fostering collective learning and sustainable living.
  • A water commons that includes soil knowledge and community-based strategies for managing water resources, helping communities mitigate the effects of both drought and heavy rain.
  • A future where farmers grow diverse crops, human-made structures exist in harmony with nature, and housing projects are focused on community living.

Our initial response to these generated images was a sense of excitement for what it would feel like to live in these realities. Growing local food, living in community, being in harmony with nature… these were all outcomes from the workshop, but after getting to see what they could look like, we could begin to imagine what such a reality would feel like. How great would it be to step out of your front door and pick breakfast with your neighbour, all while still feeling part of an urban environment? What if ad campaigns included the perspective of bees? I would think they’d have a lot to say to us about how we’re destroying their home.

Overall, this experiment was fruitful. AI has a place as a tool for bringing positive imaginaries of the future to life, offering a counterweight to today’s dystopian narratives. No single image is the “perfect solution”, but together they get our visions out of our heads and prompt us to consider what we would keep, get rid of or rethink. In this way, these images help us spark more concrete conversations about the futures we want. Using AI to bring our imagination to life can surface new ideas for how we want to adapt to the climate crisis we are in.

Limitations of AI for visualising climate futures

Though the imagery helped bring the ideas to life, we still ran into some limitations along the way:

  • The output from the workshop tended to be descriptive, often using terms like “regeneration”, “care” and “well-being”, which are not very clear (if you’re an AI). After some testing, we found that the image generator responded better to stories than to short, vague descriptions.
  • In the beginning we often used the word “future” in our prompts, but this tended to generate the typical sci-fi imagery we are used to, like shiny architecture and futuristic vehicles. Leaving out this word helped broaden the scope.
  • Some of the groups described alternative agriculture systems, but the AI had a hard time visualising anything other than monocropping. Even with specific edit prompts, it still put crops in rows.
  • The AI preferred to invent new kinds of architecture for the “future”, even though we didn’t ask for it. Some of these positive climate futures were built on reusing existing architecture, but the AI didn’t understand how to visualise that.
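The prompting lessons above can be sketched in code. The snippet below is a minimal, hypothetical example of turning vague workshop terms into a story-like prompt and sending it to DALL-E 3 via the OpenAI Python SDK; the helper function and the prompt wording are illustrative assumptions, not the exact prompts we used.

```python
# Illustrative sketch of the prompting approach described above.
# The prompt text and helper are assumptions, not our exact workshop prompts.

def build_story_prompt(ideas: list[str]) -> str:
    """Turn short, abstract workshop terms into a concrete, story-like scene.

    Vague terms like "regeneration" or "care" gave weak results; a narrative
    scene worked better. We also avoid the word "future" to steer the model
    away from stock sci-fi imagery, and ask for existing architecture rather
    than newly invented buildings.
    """
    scene = ", ".join(ideas)
    return (
        "A photograph of a neighbourhood where residents step out of their "
        f"front doors to pick breakfast from shared gardens; {scene}; "
        "existing brick buildings reused rather than replaced, no shiny "
        "architecture or futuristic vehicles."
    )

prompt = build_story_prompt(["communal green spaces", "diverse mixed crops"])

# Generating the image (requires an OPENAI_API_KEY and network access):
# from openai import OpenAI
# client = OpenAI()
# result = client.images.generate(model="dall-e-3", prompt=prompt, size="1024x1024")
# print(result.data[0].url)
```

Even with a story-style prompt like this, we found iteration was still needed; details such as crop layout often reverted to the model’s defaults.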

You can read more about the workshop here.

Sophie Tendai Christiaens & Katy Barnard

*Images created using DALL-E 3.