AI & Design Research

Since the breaking news about ChatGPT flooded the media, we have been watching this trend and where it might take us. While it is exciting that machine intelligence has finally reached a form that humans recognise as similar to human intelligence, we are also aware of its issues and limitations. We spent some time looking into the basics of this technology, discussing its implications for our practice with peers in the field, and trying out new tools that fit into our workflow.

The basics of how AI works

Firstly, we looked into the fundamental principles of how AI works, and how this new wave of generative AI works in particular. We dug into the three fundamentals of AI development: neural networks, data and computing power. These fundamentals reveal both the possibilities of the new wave of AI tools (dozens of which appear every week) and their limitations. By studying them, we can better understand concepts like ‘overfitting’, ‘hallucination’ and the ‘alignment problem’, which are very relevant to our work.

We are not experts in computer science – though luckily, some of us have a computer science degree. By learning the basics, we can overcome the fear of the unknown, start to connect what AI can (and cannot) do with what we do, and put things into perspective.
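To make one of these concepts concrete: overfitting happens when a model memorises its training examples instead of learning the underlying pattern, so it scores well on data it has seen and poorly on data it hasn’t. A minimal sketch (our own illustrative example, using NumPy’s polynomial fitting as a stand-in for a more complex model) shows the effect:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying trend (y = x plus noise)
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(0, 0.1, size=x_train.size)
x_test = np.linspace(0.05, 0.95, 10)
y_test = x_test + rng.normal(0, 0.1, size=x_test.size)

def fit_error(degree):
    # Fit a polynomial of the given degree to the training data,
    # then measure mean squared error on training and unseen test data.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

for degree in (1, 9):
    train_err, test_err = fit_error(degree)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")
```

A degree-9 polynomial has enough flexibility to pass through all ten training points, so its training error is nearly zero, yet its test error is worse than the simple straight-line fit: it has memorised the noise, not learned the trend.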

Implications for design researchers

We then looked at how generative AI could impact our work, especially the practice of design research, culture research and qualitative studies. From conversations with peers in the field and the broader discussions around us, we foresee the following:

  • Receiving requests for projects that focus on, or include, people’s perceptions of and interactions with generative AI across all kinds of services. There are many ways generative AI could be integrated into services: many monotonous tasks are likely to be automated, more content will be generated in collaboration with AI at greater speed, and human-computer interaction will become smoother and more natural in human languages. Meanwhile, data privacy, bias in training data, and digital inequality will attract more attention as AI tools scale fast and leave their mark on many new services. We should expect to bring these aspects into our studies for all types of clients, big and small, public or commercial.
  • Being more aware of, and continuously starting conversations about, the safety and ethics of AI. While exploring the three fundamentals of AI, we became more aware of ethical issues, and we expect debates around them to continue and to influence how we choose and work with AI tools. Questionable training data, the working conditions of the human trainers involved in preparing that data, and the carbon footprint of GPU usage in AI training are just a few examples.
  • Learning to work with AI and adopting new tools. We have seen many lists in the media of jobs being replaced by AI. We believe that design researchers are unlikely to be completely replaced by AI (yet); however, design researchers who can utilise the power of AI tools (with good judgement on when not to use them) are likely to replace those who can’t. At STBY, we see a future where we might consider AI a new colleague that we learn to work with as part of our team.
  • Attending conferences, workshops and events that focus on AI and its social impact. Some of the conferences and events we usually attend will likely add AI-focused discussion groups, just as sustainability and diversity conversations now pop up across many different conferences. Obviously, for us the human elements of AI development are the most relevant and important.
  • Having some fun while doing research. It’s not just about efficiency or making everything faster; AI can make research fun and inspirational. Given the creative nature of generative AI, we can imagine a future where we join forces with it and draw inspiration from AI-generated images, text, audio and video materials.
  • Finding that working across multiple languages becomes easier with AI. It’s not going to replace our valuable research partners around the world when it comes to conducting research. However, overcoming language barriers is likely to help us understand each other better and save time. With many AI tools claiming to cover more languages, and AI models trained on different languages, we are excited to explore this area further, because we do a lot of cross-country and cross-cultural research.

AI tools for design researchers

We created an internal AI toolkit for ourselves, and here are four services we would recommend for design researchers who are eager to give AI tools a go:

Research Rabbit

Miro AI

Customer Discovery

Reduct

We have extensively tested Reduct as an AI-facilitated video editing tool. Read more in our previous thought piece about Reduct.

A critical lens towards AI applications

Though AI tools are powerful and seem to promise a lot, we should stay critical about our choices and stand our ground on ethical questions. After all, we should be the masters of our tools, not slaves to them.

When adopting generative AI tools, we should always ask ourselves:

  • What is the AI base of this tool? Can we trust its training?
  • Are we aware of the limitations of the AI base? Is it prone to hallucination or bias? How can we validate its outputs?
  • How is our input data being used? Will our data be used to train the AI model?
  • If we discover inaccuracy or bias in the system, what can we do? How do we report it to the platform? What would the tool provider do to mitigate such inaccuracy or bias in future?

We expect that more and more AI tools will be built on what GPT and Google’s PaLM have achieved as AI bases. Many organisations use APIs from these two well-known AI bases to create more targeted tools for specific tasks. Yet we should not forget that new AI bases are likely to surface in the next few years, possibly outside the Western world, built on different neural networks and trained on different data, and thus with different capabilities for handling bias, hallucination and alignment problems, and even different ethical standards. We remain critically optimistic about AI’s future development and would like to explore a future world where AI becomes a positive force for our society and our environment.

Qin Han