Agile in the Age of AI
by Henrik Kniberg
Scrum is over 30 years old, the Agile principles are over 20 years old, and we are now entering the Age of AI, a strange new world where intelligence is available as a service. How does AI impact Agile, and popular methods and frameworks like Scrum?
Most Agile practices and principles are based on assumptions about human behavior and team productivity. Some of these assumptions still hold true, but some need to be challenged and reevaluated, and this will impact the practices.
This article is a mix of observation and prediction - what I've seen happen already, and some speculation about what I think is going to happen in the near future.
As a reader, the main takeaway is this: be ready for change. I don't know exactly how things will change (this article is just some of my guesses), but I'm pretty confident that Agile in the Age of AI looks different from before. So take a step back, look carefully at how you work today, and start questioning everything.
Cross-functional teams
Agile development is normally done by small, self-organizing, cross-functional teams. Cross-functional means the team members have different skills that complement each other. The drawing below uses circles to symbolize the overlapping knowledge and skills of each team member.
But why do we actually need cross-functional teams?
The underlying assumption here is that the team needs a mix of complementary skills, otherwise they can't build whatever they are supposed to build. A cross-functional team has all the skills they need to autonomously build a shippable product increment, with minimum dependencies on other teams. The overlapping complementary skills are necessary for this.
But with generative AI, every person effectively has an AI colleague who is blazingly fast and knows every programming language, every popular framework, every design pattern, and possesses vastly more knowledge than any person. Using the same visual metaphor, adding an AI team member means adding a knowledge circle that is vastly bigger than any of the human circles (although there is still some human knowledge that the AI model doesn’t have).
The AI models are not perfect yet, and they do need human oversight, but one or two people with strong prompt engineering skills and access to a top-notch GenAI model will outperform a traditional agile cross-functional team - in both speed and quality. I experience this on a regular basis myself - with an AI colleague I can build things in hours that would have taken days, and build things in days that would have taken weeks.
So cross-functional teams are great, but not as important as in the past, since knowledge isn't really the bottleneck any more. A team of 1-2 people + AI has access to most of the knowledge they need.
Why 2 people? Why not 1? Because it's nice to have another human to talk to, and easier to deal with things like vacations and sick leave.
So what actually happens when team size shrinks to 2 people? Do we fire everyone else on the team? No, I think smart companies will AI-empower everyone, instead of AI-empowering a few and firing the rest. Firing a bunch of people for this reason risks creating a culture of fear and stifling innovation - people will not be incentivized to explore this technology further, since improved effectiveness means more people getting fired.
So let's say we originally had 2 cross-functional teams of 5 people each. This might now be split into 5 teams of 2 people + AI.
What do these smaller teams work on? Depends on the context. It's a luxury problem: "we now have increased development capacity, what shall we do with it?".
Let's say the two larger teams were working on one product together, focusing on different feature areas. Now we have 5 smaller teams, each one potentially more productive than one of the previous larger teams. We could have all 5 teams work on the same product, focusing on different feature areas just like before. The result is that we'll get more stuff done on that product. Or maybe we let 3 teams focus on the existing product, and 2 teams work on a new product, thus amplifying the overall company's productivity.
Smaller teams = More teams
One consequence of AI-empowered teams is that we'll likely have smaller teams, and more of them.
Smaller teams means less need for meetings and other internal coordination rituals. If they sit next to each other (physically or digitally) and talk informally whenever needed, then they hardly need any formal meetings within the team - for example there is less need for a daily standup meeting when they can just talk whenever. Although they might do it anyway for social reasons.
On the other hand, we have more teams than before. So we'll have an increased need for cross-team coordination, at least if the teams are working on the same product and have dependencies. In fact, if they are working on the same product, maybe each team can be seen as a single team-member in a larger "super team" or something, and then we do rituals like retrospectives and sprint planning meetings at a team-of-teams level?
Regardless, I suspect most companies will end up with some kind of daily sync meeting, and some kind of planning session every few weeks. But the structure of the meetings will be different from traditional agile practices, for example they will likely be cross-team meetings rather than team-internal, and the planning sessions will likely focus more on the big picture rather than specific backlog items.
To be clear: I don't know where we will land. But I'm pretty sure that things will change, as the underlying assumptions behind many of the agile practices are overturned by the advent of Generative AI.
Software engineers (mostly) don't write code
A clear trend is that AI models are getting really good at writing code. Not perfect yet, but good enough that it makes sense to have your AI colleague write most of the code. This fundamentally changes the role. As a software engineer you still need to be in charge, think about the architecture, write the prompts, review the results, and take responsibility for code quality. But the actual craft of writing the code - AI will for the most part do that faster and better than you. This is partly true already today, and within a year it will likely be true 90% of the time. A software developer who insists on manually writing all code in the Age of AI is likely to become a bottleneck and source of bugs.
So developers essentially become mini-product owners, if I use Scrum terminology. Their job is to decide what code needs to be written, not to write it. This is comparable to when you write high level code using a modern programming language, and the compiler turns it into machine code - except now we raise it one level and the AI model writes the high level code as well.
Your AI colleague can even work in the background. Imagine this conversation between Bob, Lisa, and their AI colleague MrFixit over morning coffee:
- MrFixit: "Good morning folks! A couple of bug reports came in last night, two of them were pretty straightforward so I fixed them and put up PRs (pull requests)"
- Lisa: "Ah great, I'll review them in a moment. Any risky stuff there?"
- MrFixit: "Well, I needed to change the login a bit. I added more tests so it's probably fine, but that part might be worth some extra reviewing."
- Lisa: “OK, will do”.
- Bob: "Hey MrFixit, did you see that slack discussion on security holes?"
- MrFixit: "Yeah, want me to look into it? I have some ideas."
- Bob: "Yes please."
- MrFixit: "OK, hold on....... Done! I put up three PRs, with three different approaches for how to solve it. See the PR description for details. Have a look and ping me if you want to discuss any of these."
- Bob: "Awesome!".
This may sound rather exotic now, but within a year I think this will be the norm for many teams.
The result: coding is no longer the bottleneck. So what does this mean for concepts like Scrum Sprints, a timeboxed period (usually 2-3 weeks) that allows the team to focus on development? Well, if work that normally took a week now takes a day, and work that normally took a day now takes an hour, do we need sprints?
1 day sprints?
My guess is sprints will gradually become a lot shorter, or disappear entirely. Maybe 1 day sprints. Start the day with a quick sync with your human and AI colleagues, decide what to focus on today, then finish it up and release by the end of the day, and do a quick review before going home. Daily Standup and Sprint Planning become essentially the same thing.
With multiple teams working together, there will still be a need for a higher level sync meeting, maybe once per week, to make sure they are aligned. This is true with or without AI, but the need for it increases as we move towards many smaller teams rather than a few larger teams.
As I mentioned, I think there is still a human need for some kind of sync and planning meeting every few weeks. But the purpose and structure of that meeting will change when we move to faster and shorter development cycles, when there is no longer a need to batch work into multi-week chunks just because coding takes time.
Roaming or shared specialists?
What about specialists? Let's say our teams need to deal with databases and persistence, and need specialist knowledge for that.
Traditionally, we would make sure each cross-functional team had at least one person with DB skills. In a 2-person AI-empowered team, the humans will lack some skills and will need to rely on their AI colleague. Will that be enough? For routine tasks, probably yes. But sometimes for more advanced tasks, a human specialist will be needed, for example to formulate the prompt or evaluate the result, or maybe to build tools. The human specialist can also help determine which AI models and tools are suitable for the task at hand, or even fine-tune the models to make them better at that specialized knowledge.
My guess is that we'll have either roaming or shared specialists. This is not a new approach, some agile teams do that anyway. But I think it will become more common.
For example with the 5 teams above, let's say all of them use databases. Maybe one or two of the teams actually have a DB specialist, because they are the teams that do most DB stuff. But they are shared specialists that sometimes help other teams as well.
An alternative is to have roaming specialists that don't belong to any specific team, instead they go to whatever team needs them the most. To be clear, the AI models should have most of the specialist knowledge needed, but the human specialist is there as a complement for when we hit the limits of the AI, or need an extra pair of human specialist eyes to evaluate the result.
The job of the Scrum Master or Agile Coach
If you have a role like Scrum Master or Agile Coach or similar, then that traditionally includes teaching and mentoring the team on things like how to effectively split a user story, how to effectively run a retrospective, and how to work effectively as a team.
An AI-empowered team has all this knowledge already, if the team chooses to seek it. So your role becomes more of a coach and less of a mentor/teacher. If the team wants to know how to split a story, sit with them and write a prompt to ChatGPT (or another model). You know that popular adage: "Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime". Now that teams are catching their own fish, you have time to coach and assist more teams, and help them figure out how to use those fishing rods.
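To make this concrete, here is a minimal sketch of what "sit with them and write a prompt" might look like in code. Everything here is illustrative: the prompt wording and the example story are my own, and the commented-out API call assumes the OpenAI Python client is installed and configured.

```python
# Sketch: asking a GenAI model to split a user story.
# The prompt construction is the interesting part; the actual API call
# (commented out below) assumes the OpenAI Python client and an API key.

def build_story_split_prompt(story: str) -> str:
    """Build a prompt asking the model to split a user story."""
    return (
        "You are an experienced agile coach. Split the following user story "
        "into 3-5 smaller, independently shippable stories. For each one, "
        "give a short title and one acceptance criterion.\n\n"
        f"User story: {story}"
    )

prompt = build_story_split_prompt(
    "As a shopper, I want to pay for my cart so that I receive my order."
)
print(prompt)

# Illustrative call (uncomment with a configured client):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(reply.choices[0].message.content)
```

The point is not the code itself but the habit: the coach helps the team formulate the question well, then steps back.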
User feedback loop
User feedback remains a critical part of agile development, even in the Age of AI. However, the practicalities change. We will be able to release more often, so our customers and users should be ready to receive frequent small updates instead of occasional large ones. As a result, the users can take a more active part in development. Some will appreciate that, some might not.
If the user isn't available as often as needed, we can create mock users by asking an AI model to role-play a particular type of user. We could do mock user research and interviews using this, and soon we will probably also be able to ask the mock user to try out our system directly, and give us feedback. The mock user won't be quite as useful as real feedback from real users, but on the other hand we can get the feedback faster and more frequently so it could be a good complement. That could help the team discover the most obvious and important design issues.
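A mock user is really just a role-play instruction. As a hedged sketch (the persona, wording, and commented-out API call are all my own assumptions, using the OpenAI Python client's chat message format):

```python
# Sketch: creating a "mock user" by asking a GenAI model to role-play
# a persona. The persona below is made up for illustration.

def build_mock_user_messages(persona: str, question: str) -> list[dict]:
    """Build a chat history that makes the model answer as a user persona."""
    return [
        {
            "role": "system",
            "content": (
                f"Role-play as this user: {persona}. "
                "Answer interview questions in character, honestly, "
                "including any frustrations or confusion you would feel."
            ),
        },
        {"role": "user", "content": question},
    ]

messages = build_mock_user_messages(
    persona="a 68-year-old retiree who shops online about once a month",
    question="How did you experience our new checkout flow?",
)

# Illustrative call (uncomment with a configured client):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o", messages=messages)
# print(reply.choices[0].message.content)
```

Varying the persona lets the team cheaply probe several user types before spending time with real users.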
Additional points to consider
Just for fun, I gave the entire article above to Claude Opus (the latest GenAI model from Anthropic) and asked it to suggest any additional points to add. Here is the unedited response, which I think was quite good:
- Impact on product backlog and prioritization: With AI-empowered teams delivering faster, the product backlog may need to be updated more frequently. The role of the Product Owner might evolve to focus more on strategic prioritization and stakeholder management.
- Changes in estimation and planning: Traditional estimation techniques, such as story points or ideal days, may become less relevant when AI can significantly accelerate development. Teams might need to adopt new approaches to planning and forecasting.
- Adaptations in Agile frameworks: Popular Agile frameworks like Scrum, Kanban, or SAFe might need to be adapted to accommodate the changes brought by AI. For example, the duration and frequency of events like Sprint Planning, Daily Scrum, and Sprint Review may need to be adjusted.
- Impact on team dynamics and collaboration: While AI can enhance productivity, it's essential to consider its impact on team dynamics and collaboration. Teams may need to find new ways to foster human connection, creativity, and innovation in an AI-driven environment.
- Continuous learning and skill development: As AI takes over certain tasks, team members may need to focus on developing new skills, such as prompt engineering, AI model selection, and result evaluation. Continuous learning and upskilling will become even more crucial in the Age of AI.
- Ethical considerations and transparency: Teams will need to navigate ethical considerations surrounding AI, such as bias, fairness, and transparency. Agile practices may need to evolve to ensure responsible AI development and deployment.
Recalibrate
(OK this is me the human writing again)
My friend Kent Beck tweeted this:
I think Kent captures the essence of what we are facing now, in all roles and professions. We need to recalibrate what we spend our time on. What does it mean to be an Engineer, a Scrum Master, a Product Owner, an Engineering Manager, etc?
Same goes with Agile: we need to recalibrate our agile practices. That starts with self-reflection. What do we spend time on? What are our rituals, roles, artifacts? What needs to be challenged, changed, or reevaluated as we enter the Age of AI?
Share the post with your thoughts!