Software Testing Blog | Zenergy Technologies

AI-Augmented Agile : Navigating the Future of Human-AI Teams

Written by Shaun Bradshaw | Apr 10, 2025 3:03:35 PM

 

I remember hearing about a great new tool called ChatGPT a couple of years ago. I was fascinated by its responses to a multitude of queries, but I also encountered its limitations and hallucinations. One experiment I conducted was to see how it would respond to the following prompt: “Briefly describe how to successfully implement an agile transformation.” I would say the response was quite reasonable, if a bit derivative, but overall I was pleased with it.

If you are interested, you can read the blog post I wrote about it here.
 
Since I wrote that prompt, I’ve observed a few significant shifts (and one minor shift) in the use and capabilities of AI. First, there is an emphasis on the increased use of AI agents. Second, the popularity of agile (and nearly everything agile-adjacent) has declined over the last couple of years – it is no longer the “hot new methodology.” And third, my prompting skills have improved considerably since that early experiment. 

Zenergy has spent quite a bit of time trying to figure out what role AI will play in the future of IT, and particularly how it might impact our clients moving forward. We’ve not only worked with ChatGPT, Claude, and NotebookLM, but also researched Human-AI collaboration – think agile teams consisting of Humans and AI agents. What we’ve been most interested in, as we dig through the research and different people’s experiences, is how team dynamics have evolved, and how they can and will continue to evolve over time.

Just a quick side note - it may seem odd that I capitalized “Human” above, and maybe it’s just my ego, but to me, it just seems right to do so in the context of this blog post.

 

The Promise and Paradox of Human-AI Teams

The use of AI by individuals to assist with their daily tasks has quickly become normalized. However, what isn’t quite as normalized but is swiftly gaining traction is the integration of AI agents into Human teams, forming Human-AI teams or HATs.
 
In these types of teams, Humans utilize their AI counterparts to perform more complex and nuanced tasks that require significantly higher levels of interaction and trust among team members. HATs typically assign autonomous AI agents to activities involving monitoring, coordination, task reallocation, and managing interactions with Human team members. In these roles, the autonomous agents are expected to operate with a partial or high degree of self-governance over decision-making, adaptation, or communication, while working toward common goals established in alignment with their Human counterparts on the team.
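To make that role split concrete, here is a minimal, purely illustrative sketch of how a HAT might record which duties an agent owns and with what degree of self-governance. All of the names here (`TeamMember`, `Autonomy`, `monitor-bot`) are my own inventions for illustration, not part of any agent framework:

```python
from dataclasses import dataclass, field
from enum import Enum

class Autonomy(Enum):
    """Degree of self-governance granted for a duty (illustrative)."""
    NONE = 0      # Human performs the duty
    PARTIAL = 1   # agent acts, a Human confirms
    HIGH = 2      # agent acts autonomously and reports back

@dataclass
class TeamMember:
    name: str
    is_ai: bool
    # Each duty is mapped to the autonomy level granted for it
    duties: dict = field(default_factory=dict)

def assign(member: TeamMember, duty: str, level: Autonomy) -> None:
    """Record that `member` owns `duty` at the given autonomy level."""
    member.duties[duty] = level

# A hypothetical HAT agent handling the coordination-style activities
monitor_bot = TeamMember("monitor-bot", is_ai=True)
for duty in ("monitoring", "coordination", "task reallocation"):
    assign(monitor_bot, duty, Autonomy.PARTIAL)

print(monitor_bot.duties["monitoring"].name)  # prints PARTIAL
```

The point of the sketch is simply that autonomy is granted per duty, not per agent – the same agent might act freely when monitoring but need a Human sign-off when reallocating tasks.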

As impressive as these AI agents are at completing their tasks, research indicates a potential paradox: mixed Human-AI teams don’t always perform as effectively as Human-only or AI-only teams. In the paper AI Teaming: Redefining Collaboration in the Digital Era, the researchers examined various team compositions—Human-only, Human-Human-AI, Human-AI-AI, and AI-only—in a simulated emergency response task, finding that mixed HATs performed the worst, while AI-only teams achieved the highest performance and situational awareness (SA) scores.

Despite performing below AI-only teams, Human-only teams exhibited higher levels of shared mental models (SMMs), defined as collectively held knowledge structures that facilitate enhanced collaboration within a team, than mixed HATs did. Essentially, adding an AI agent to the team actually hindered SMM development among the Human members. This counterintuitive outcome stems from challenges in team cognition, communication, and trust.

 

Key Challenges in Human-AI Teams

It appears that integrating AI into teams impairs team processes such as communication and coordination, making the team as a whole less effective. Again, in the paper AI Teaming: Redefining Collaboration in the Digital Era, researchers found that replacing a Human team member with an AI agent in a video game task resulted in decreased coordination and performance due to reduced effort among Human team members, especially in low- and medium-skilled teams.
 
If you think about it, this makes sense. Low-to-medium-skilled Human team members don’t know what they don’t know, so they don’t consider asking about certain aspects of their expected job functions. At the same time, for those tasks they are aware of, there may be an assumption that the AI agent is handling them. These types of omissions and assumptions tend to resolve themselves in Human-only teams because our open-ended conversations help identify knowledge gaps or capabilities (although they don’t always).
 
Basically, when Humans talk to one another, we express what we know we know and what we think we know, and we also tend to communicate (although quite subtly sometimes) what we don’t know as well.
 
Having higher-skilled Human teammates can help both HATs and Human-only teams overcome communication and coordination issues. However (and this is a significant point), communication must occur.

If or when we can overcome the issues of communication and coordination, we then encounter a larger problem…trust. Teams typically trust new Human members more than AI. This issue becomes even more pronounced as Humans increasingly collaborate with AI and AI agents. The problem, once again, appears to stem from assumptions. It is common for Humans to initially overestimate the capabilities of their AI teammates, only to find over time that while AI may excel in certain areas, it doesn’t always interpret Human communication as expected, and there remains the potential for hallucinations.

In Human-only teams, however, we see the exact opposite occurring. Humans tend to develop greater trust with one another over time. This is because we align more easily on an SMM. That is, we can “get on the same page” more quickly when we collaborate as a team.
 
Think about it. Have you ever been on a team where each member worked toward a different goal? Unless or until that team unifies around a shared vision for their work, there will be constant struggles, if not outright sabotage. The reality is, while most Human-only teams may initially have a diversity of ideas about their goals and vision, they can quickly rally around an SMM as they collaborate, learning each other’s strengths, weaknesses, capabilities, understandings, and assumptions. This process happens much faster and more easily with a strong leader and an effective coach guiding them. But even in the absence of a strong leader or coach, Human-only teams can eventually reach alignment on their own.
 
With HATs though, it’s a bit different. Humans are (mostly) able to pick up on subtle cues from one another that indicate to us if one of our teammates is completely incapable of performing a particular task versus just having a bad day. But that may not happen with AI agents. Did the AI agent produce the wrong outcome because it’s incapable of performing the task? Is there an issue with its underlying training, or was it the prompt (from a Human or another AI agent) that caused the problem? Without this level of understanding, achieving an SMM becomes challenging, and without that, the underlying trust issues will continue to fester.

Now, it may seem like I’m trying to paint a doom-and-gloom picture, but I’m simply trying to point out the challenges we’ll face as more organizations turn to HATs to build applications, services, etc.

 

Agile in the Age of AI

Well-known agilist Henrik Kniberg wrote a fantastic blog post last year titled “Agile in the Age of AI.” In it, he imagines how future development teams will operate, with AI agents taking on more coding and development tasks. Of course, as this happens, the roles of software engineers and the other Humans involved in the software development process will need to evolve. I believe this is where an agile mindset for the Humans (and perhaps the AI, too) will come in quite handy.
 
For example, instead of primarily writing code, automated tests, or automated delivery pipelines, Human team members will likely take on more of a “specialized mini-product owner” role. In that role, the Human will decide what code, tests, etc. must be written. They will need to focus on building a solid, scalable, and maintainable architecture, identifying the priorities and potential value of the code to be written, serving as lead code reviewer and tester, and committing to correct and/or complete the last 10-20% of the code that AI still finds challenging to generate.

Assuming Henrik’s vision of development teams comes to fruition (and it looks like it will), will there still be a need for Agile Coaches? If so, the nature of coaching may look quite different. Would coaches guide both Humans and AI? How would they actually coach AI? Would they give input on training the models? Or perhaps offer behavioral prompts regarding trust and safety?
 
Henrik describes multiple teams working together - he doesn’t refer to it as a Scrum of Scrums, but it resembles that concept. Would coaches shift their focus to mentoring at that level? How will coaches engage with AI team members? What, if any, planning will be done for upcoming iterations? Will AI agents participate or give input on the features to be included? Will coaches ask AI agents the same questions they might pose to Human team members in a retrospective? Will there even be retrospectives or any of the typical Agile/Scrum ceremonies?
 
Many businesses have already scaled back the number of Agile Coaches and Scrum Masters in their organizations. I interpret this as an indicator of both success and failure for the agile community. I think agile has been successful, to some extent, in that it drove a significant shift in the structure of development activities across most organizations – structural changes that almost always align with agile's core principles.

Yay! 
 
On the downside, however, I believe organizations have had a more difficult time shifting team, manager, and leader mindsets to foster greater trust, communication, collaboration, and experimentation. Granted, these aspects of agile are the trickiest to implement, and this is where I think the agile community has disappointed clients – by allowing too many people to flood the Agile/Scrum “zone” who possess little-to-no actual experience on development teams and, frankly, were not particularly skilled at their jobs. But that is a rant for a different day.
 
What I do see as a possible future, though, at least from an agile perspective, is that the traditional responsibilities of Scrum Masters and Agile Coaches will also need to shift. More emphasis will need to go into helping build trust across multiple HATs (similar to what is done in Scrum-of-Scrums or SAFe). Agile Coaches will aid Humans (and perhaps AI) in more effectively communicating with and understanding each other. They can also help teams figure out the most effective ways to use AI tools to align around an SMM.

 

Agile Practices in the Age of AI

Here are some other things I suspect we’ll see happen as more teams incorporate AI agents into their development processes, and where I believe effective Agile Coaches can help organizations achieve greater value from that integration.
 
Shorter Sprints: Many Human teams today buy into the idea of two-week sprints (or iterations) but continue to struggle to maintain a consistent, workable pace that delivers value. As AI takes on more of the grunt work, sprints will likely become shorter—think one-week sprints, perhaps even one-day sprints. Balancing the necessary communication across multiple teams against the frequency of builds will be an absolute necessity.

Evolving Meetings/Ceremonies: Daily stand-ups may be replaced by more frequent, informal communications, while cross-team coordination becomes more critical. Of course, this is a recipe for chaos if Humans (especially knowledge workers) aren’t given time to contemplate what is happening and what needs to happen next. This will be especially important as Humans take on more of the mini-product owner role I mentioned earlier.


Prompt Retrospectives: Retrospectives and experimentation are two key ceremonies that are often overlooked in agile implementations. I believe these ceremonies will become even more important in the age of HATs. Humans collaborating with other Humans and AI agents will review prompting techniques that not only assist the AI agents in creating the code, but also help generate more of an SMM, which will be essential for building HATs that can work together effectively. These retrospectives must then be followed by frequent experimentation to improve what has already been implemented.

Human Feedback Loops: Human feedback loops will become a crucial aspect of the development process. Not just in terms of code reviews and testing but also, as mentioned above, for identifying the priority and value of the work remaining to be done. I cannot overemphasize this one idea: any application, software tool, feature, function, etc., that has a Human user at the end of it MUST have a Human in the middle of the development process.

Despite AI's capabilities, Human-based testing remains essential. Human testers can validate the edge of the edge cases – the kinds of things Humans have a tendency to do (or attempt) that can severely disrupt an app or make it unusable (we used to call them “Stupid Human Tricks” after the old David Letterman segments). Human developers and testers act as a backstop when Human-to-AI communication (and vice versa) breaks down and negatively impacts the system.
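One way to institutionalize that backstop is to make Human sign-off a hard gate in the delivery pipeline. The following is a hedged sketch of that idea, not a prescription – the `Change` record and `may_deploy` rule are invented names for illustration:

```python
from dataclasses import dataclass

@dataclass
class Change:
    """A unit of work headed for deployment (illustrative)."""
    id: str
    ai_generated: bool
    human_reviewed: bool
    human_tested: bool

def may_deploy(change: Change) -> bool:
    """Gate: AI-generated work needs a Human in the loop before it ships."""
    if change.ai_generated:
        # AI output requires both Human review and Human testing
        return change.human_reviewed and change.human_tested
    # Human-authored work still gets an ordinary peer review
    return change.human_reviewed

print(may_deploy(Change("feat-42", ai_generated=True,
                        human_reviewed=True, human_tested=False)))  # False
```

The design choice worth noting is that the gate is stricter for AI-generated changes than for Human-authored ones, which mirrors the "Human in the middle" principle above.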

 
This also leads to the role of ethical considerations. As AI takes on more tasks, it’s crucial to consider its impact on internal processes (e.g., team dynamics and collaboration), as well as the potential external effects on users and society as a whole. Our rush to integrate AI into everything should not cause us to neglect the vital need to foster Human connection, creativity, and innovation. Ethical considerations surrounding AI, such as biases, fairness, and transparency, must be addressed. Agile practices may need to evolve to ensure responsible AI development and deployment.

 

In Sum…

In order to fully harness the potential of HATs, we will need to identify and implement strategies for successful Human-AI collaboration. Beginning now, organizations should focus on:
 
Building Trust: Emphasize the AI traits that enhance trust, including transparency, explainability, reliability, and pro-social behavior.

Developing Shared Mental Models: Ensure AI systems are transparent and explainable to facilitate the development of accurate mental models.

Upskilling and Training: Invest in training programs that enable employees to develop skills in prompt engineering, AI model selection, and result evaluation.

Enhancing Communication: Agile Coaches can help Humans (and perhaps AI) communicate and understand each other more effectively.

Redesigning Business Processes: Reimagine business processes to optimize collaboration between Humans and AI, emphasizing flexibility, speed, scale, decision-making, and personalization.

Ensuring Human Feedback Loops: Establish policies that require a Human to review, test, and experiment with any AI-generated applications, features, and functions that will be Human-facing and/or used by Human end-users.


Wrapping Up

The integration of AI into agile teams presents both challenges and opportunities. While HATs may not consistently outperform Human-only teams, the potential for enhanced productivity and innovation is significant. 
 
By addressing the challenges related to communication, trust, and SMMs and by strategically redesigning agile practices, organizations can pave the way for successful Human-AI collaboration. As we move further into the age of AI, the focus should be on augmenting Human capabilities and fostering synergistic partnerships between Humans and machines.
 
Ultimately, the goal is to recalibrate our agile practices and embrace a future in which Humans and AI work together to achieve more than either could alone.

 

Interested in learning how to utilize Scrum agile methodology as a means of delivering your software more effectively?

Check out our Agile 101 workshop, suitable for anyone looking to learn the basics of Scrum or Kanban.