Our MozFest 2024 Takeaways

As one of our Stby colleagues says, “MozFest is the best week of the year!”. Luckily for us, MozFest House was hosted in Amsterdam right across the canal from our office at Tolhuistuin for the second year in a row. For three days, our Amsterdam team immersed themselves in activities, discussions and movements around the theme ‘Togetherness and Solidarity’ in the context of building Trustworthy AI. Read our key takeaways below:

Bas’s takeaways:

  1. It’s nice to think about AI with physical objects. (Summ()n workshop)
  2. It’s great to use music at a tech conference to build community beyond the rational or opinion level. No wonder all social movements have their favourite songs to sing together!
  3. ‘Data donations’ are a cool way to gather data for large-scale digital design research, e.g. on how TikTok algorithms work (as the Mozilla Foundation is starting to do via a specially designed app that collects these donations from TikTok users).

Sophie’s takeaways:

  1. There is such a profound disconnect between our digital activities and their impact on the planet. For example, video notes on WhatsApp are very bad for carbon emissions, and I am devastated. However, nature and digital tech do not have to be as mutually exclusive as they seem, and there are many ways we can connect the two. Whether that means looking at different energy sources to power our servers (e.g. soil batteries) or nurturing a relationship between digital tech and nature, for instance by finding organisms in nature that may be able to benefit from the heat given off by servers, we can keep brainstorming ways to make tech regenerative.
  2. In a world where it is easy to feel like our individual agency means very little, events like this are a good reminder that we are part of a community that cares, and our individual agency is more powerful than we think. We may not be able to stand up to big tech companies or broader institutions, but we can choose not to contribute to systems and services that don’t benefit us and continue to exacerbate oppression. This is a powerful thing. 
  3. I have many takeaways on time, but one I find very profound is that time goes at different speeds, and ours is faster than that of the planet, the solar system and the universe. This offers a different perspective on climate timelines and really puts into context how fast the development of tech, especially digital tech, is moving. How can we align the speed of this development with longer-term climate thinking?

Paulien’s takeaways:

  1. It’s important to teach young people how algorithms work; even though they are heavy users of social media and know their way around the digital world, that doesn’t mean they understand what shapes their feeds: https://whatsthealgorithm.com/
  2. Interesting question: How do you make people care about the negative side of technology and AI?
  3. Misinformation and fake news shared online are a threat to democracy (they have an effect on elections – in India and Indonesia, for instance), and we should put effort into understanding why people share and believe these stories, and how we can reach people with narratives that are supported by research and journalism. It takes effort and time to understand nuanced stories. How do we motivate people to put in that effort? How do we reach vulnerable people with counter-narratives? From the discussion: “Disinformation is digital marketing” by RNW Media.

Katy’s takeaways:

  1. Data centres do their best to conserve and/or reuse energy (because it’s cheaper), but they still take an insane amount of power to run. You may think “well, let’s just use green energy then”, but that solution turns out to be not so great either. In the Netherlands you can find data centres running on wind energy, but the wind turbines are often built in rural communities that neither want them nor benefit from them, as they are used solely to power the massive data centres. These structures obscure the locals’ views and interrupt their landscapes. This is a clear example of how tech ethics, climate justice and social justice are deeply intertwined.
  2. AI language models lack training on indigenous languages which leads to a great gap of knowledge, especially about how to live with and listen to nature. I was inspired by an example in Indonesia (a country with nearly 700 languages). There is a word for tsunami in the national language, but not all indigenous languages have such a direct translation. For example, one language doesn’t have a singular word but rather the community knows what song to sing when the water rises – this has saved countless lives because the song lyrics say to go uphill. The song is the word for tsunami. This important context is currently lost when we try to make language a collection of words to fit into an algorithm, losing out on important information and meaning that grows our global understanding of the environments we live in.
  3. The internet wasn’t always privatised. Being on the cusp of millennial/gen z, my mental model of the internet is basically Google and social media platforms. I didn’t fully realise that, originally, the internet was built to belong to the people in order to share information and connect across longer distances. At some point, this system was seen as a way to profit. We should teach the origin of the internet more to young people! Learning this truly changed the way I see the digital world.

Geke’s takeaways:

  1. AI developments are moving fast at the moment, and everyone is scrambling to get their heads around them. Discussions at MozFest strengthened my thoughts on the role we at Stby can play in this field. For effective implementation of AI we need explorations and guidance on different levels and from different stakeholders’ voices: the organisation (strategy), the final beneficiaries (society), the staff delivering it (organisation), as well as the tech itself (tools). Stby is good at making sure beneficiaries and staff are heard and considered.
  2. In discussions about AI we need to be careful not to focus only on individual use of specific tools; these tools are embedded in the wider culture and ways of interacting within society and organisations. Successful implementation of AI will be a gradual process of customising and fine-tuning. This needs exploration, conversation, trials and guidance, all supported and informed by research in a step-by-step process.
  3. Decisions on new legislation, such as the recent EU AI Act, do not automatically mean that all is clear and done from that point. They are usually followed by a multi-year process of gradual clarification, negotiation and implementation. All parties involved (not just policy makers but also industry, NGOs, press and society at large) need to stay involved to make sure the original intent does not get watered down.
  4. Bonus takeaway (something I posted on LinkedIn last week): Really loving the vibe at MozFest in Amsterdam this week. Such a great gathering of people who are committed to all things related to justice and ethics around the current big changes in tech. Sensitive, driven, outspoken, curious, imaginative, critical, open-minded, bold, welcoming. Meeting so many new peers from all over the world. What a joy to have the time to hang out and ponder things.