Stby and Teaming with AI ran an online participatory study in April 2025, aiming to collect real-life stories of how people work with AI as part of their teams at work. The diary study attracted 20+ participants from six countries, resulting in 40+ stories of human/AI collaboration. As the study reaches completion, we’d like to share some of the insights with everyone.
1. AI as a Productivity Multiplier
It comes as no surprise that AI tools are fundamentally reshaping individual productivity, a finding consistently reported by both junior and senior team members. The involvement of AI effectively raises the average standard of work, as individuals can readily draw on available knowledge to complete tasks with significantly greater speed and at larger scale.
Accelerated Deliverables
One design lead vividly illustrated this, noting that using an AI tool to draft design proposals made the process “three times faster than what you would usually do” and produced “much better quality” written output. This demonstrates AI’s capacity not only to expedite work but also to raise the standard of the deliverables.
Enhanced Information Synthesis
Some participants described gradually developing sophisticated techniques to maximise AI’s usefulness. For instance, one individual tasked with reviewing a 130-page report asked an AI tool to “extract the key insights without deleting any of the interesting facts”. This allowed them to navigate quickly through the relevant sections without losing crucial context, a nuanced approach to AI-assisted information processing. Such techniques are often shared among team members, further amplifying collective efficiency; swapping tips on useful prompts is a common behaviour we heard about when teams are trying to learn together.
2. The Evolving Role of Human Oversight and Judgement
While AI unquestionably enhances speed, its integration demands a heightened degree of human vigilance and critical judgement. AI tools, despite often providing “very confident” answers, are prone to “silly mistakes” that humans can easily recognise. This creates an “illusion of confidence”, paradoxically increasing the workload for human colleagues, who must validate and check the output to ensure accuracy, especially given the sheer volume AI can generate in a short timeframe.
Addressing Bias
Significant concerns emerged regarding bias, particularly racial and gender bias, in AI-generated visuals. One designer recounted attempting to create poster personas for a design team, where the AI consistently depicted a project manager as a “white male” and referred to them as “he”. Another example involved an AI illustrating AI-human interaction as a “robotic person touching a female human’s face”, which was deemed inappropriate. These examples highlight that AI collaborators lack the “inherent ethical standards” that social common sense and professional experience typically provide.
Accountability and Pressure
The ultimate accountability for the team’s output remains with the human members. As one senior team member succinctly put it, “if AI makes a mistake and I let this slip, then from my colleagues’ perspective that is my mistake”. This places significant pressure on team leaders to oversee the AI’s output, much like managing junior staff. One design lead described AI as being “a bit like you suddenly have three junior members that can generate stuff really quickly, but I have to check and oversee what they are doing”. This dynamic underscores the critical need for human team members to develop robust critical thinking capacities and to understand their own professional blind spots when collaborating with AI.
3. Reshaping Team Dynamics and Strategic Task Allocation
The integration of AI is not merely about individual task completion; it profoundly reshapes human team dynamics and influences strategic task allocation. A significant observation is that current AI tools are predominantly designed for individual use rather than for collaborative team functions.
Collaboration Challenges
This individual-centric design poses challenges for teams. Some participants recounted needing to “huddle around the table” with one “AI handler” typing prompts, or manually copying AI-generated content and correcting errors before sharing it with colleagues. One story detailed a colleague feeling uncomfortable about having to read an AI co-pilot’s insights from her colleague’s phone because the conversation history wasn’t accessible on her own account. Stories like this highlight a common friction point in collaborative AI use.
Deliberate Task Allocation for Skills Development
Interestingly, some senior team members described deliberately reserving certain “lower risk” tasks for junior staff, even when an AI could perform them. These tasks, which are easily checked for accuracy, give younger colleagues valuable opportunities to develop professionally, practise their judgement, and engage in crucial “learning conversations” with senior team members. Such proactive decision-making protects human skill growth and team cohesion alongside AI efficiency.
AI as a “Sparring Mind”
In some more advanced instances, teams have embraced AI as “an equal part, a sparring mind”. One example involved a team brainstorming user journeys with an AI co-pilot on the table, feeding it data and even allowing it to “listen into their conversation between the humans”. The AI then generated new ideas, some of which the human team had not considered. This iterative back and forth between the AI and its human teammates signifies a shift from viewing AI merely as a tool to treating it as a more integrated, collaborative entity, fostering new dynamics within the team.
A second round of the diary study is taking place in November 2025, aiming to extend our understanding of human/AI collaboration at work as both AI and humans adapt and develop new capabilities to work better together. If you are interested, please get in touch at qin@stby.eu