Practicing Agile IRL (In Real Life)
I mentioned that while a methodology might be a good place to start, you should anchor your practice of agile in exploration and testing of what’s working for your team and your project. In this section, we’ll step through how, specifically, you might do that. Because I try to be about doing stuff > saying stuff.
Doing agile well is about choosing, testing, and iterating on the practices that turn out to work well for your particular project and team. This is rarely the same across teams and projects. High-functioning practitioners know this and are ready to do the work to make their team successful.
The way I recommend doing this iteration and testing is to start from an assessment of how things are going with a team relative to these four fundamental ‘jobs’ in digital development:
- Learning
- Deciding
- Building
- Managing
These four jobs in turn have sub-jobs and we’ll step through ways to think about how you might explore methods that help your team perform better in each of those, and how to evaluate the outcomes of those explorations.
Learning

This job is about learning what’s valuable to your user (learning what works for your team is important, too, but I put that under ‘Managing’). With digital, there are so many possibilities and such opportunity to move fast that this job is super important. A nice feature of agile is that you get working software out the door relatively more often, so you have the opportunity to learn faster.
Success here means that even if you’re not sure a given feature, etc. is going to be valuable to the user, you have a specific view of how you’ll learn that, and once you release said feature you have a way of definitively seeing whether you were right or wrong about that. Teams I know that run a strong program here meet at least every iteration, often every week, as a whole (interdisciplinary) team to talk about their experiments, what they concluded from them, what that tells them about their direction with a given feature/program, and what they’re going to do next.
Here are a few more specific notes on the various ‘sub-jobs’ within Learning:
Collaborate on Product Design
This means getting perspective, and also buy-in and interest, from your whole team and translating that into the work everyone actually does—design, coding, testing, deployment, even support and consulting (though that last part may be with external stakeholders).
At the start of a new project, the team often needs to earmark some time just to learn about what might be valuable to the customer and cultivate their shared understanding of how that might apply to their individual work. Design sprints are a great way to do this, setting aside time to interview users about what’s on their A-list, to run Lean Startup-style experiments on motivation/value, or to test a few approaches to usability with rapid prototyping and exploratory usability testing.
Have you ever wondered why teams that do A/B testing on parallel alternatives are so fanatical about the practice? I think it’s because it cultivates a general culture of experimentation. It takes work, but testing whether users respond better to a red button or a blue button is so much more productive and meaningful than sitting in a meeting arguing about whether a button should be red or blue. A/B testing is just one popular and important part of instrumenting behavioral analytics into your software. Without the quantitative ‘what’ to your qualitative work on ‘why’, your team will have trouble learning.
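To make that concrete, here’s a minimal sketch of how you might evaluate a red vs. blue button test. The visitor counts are made up, and real teams typically lean on an analytics tool rather than hand-rolled statistics:

```python
import math

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: how big is the difference between
    variant A and variant B, measured in standard errors?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical results: red button converted 120/1000, blue 90/1000
z = z_score(120, 1000, 90, 1000)
print(f"z = {z:.2f}")
```

If |z| is above roughly 1.96, the difference between the two buttons is unlikely to be chance at the usual 95% confidence level—so the team can decide and move on instead of arguing.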
As high-functioning teams collaborate on product design, focusing their ideas into testable hypotheses becomes a matter of habit. Build, Measure, Learn, as they say in the world of Lean Startup. The technical and team infrastructure to support that is an important part of any high-functioning team. Many teams I’ve worked with hold weekly meetings on these experiments outside of their agile retrospectives (partly because they want them more frequently, partly to offset the timing so that they reach conclusions before they decide the list of items for their next sprint/iteration).
Acceptance, Usability Testing
These are two different items I’m lumping together because, in practice, one I find should probably be eliminated and the other suffers from chronic underinvestment even by high-functioning teams. Guess which is which!
Acceptance testing isn’t a formal term, but generally it refers to a practice whereby a ‘contract’ (I see agile sites use that literal term) is established between the users of software and its creators. If the acceptance test passes, the software is OK/done. For reasons I hope are pretty obvious, I find this encourages contract negotiation over collaboration (see the beginning section on the Agile Manifesto on why that’s bad). In practice, it’s not so much that the users want to ‘renege’ on the contract during acceptance testing—it’s that the users who are supposed to be accepting don’t really pay that much attention to the testing until they have to really use the software in question in their actual jobs. Ideally this would be right after the acceptance testing, but often it’s much later. The high-functioning team focuses and structures their inputs, observations, and testing with users much more purposefully than this.
Usability testing refers to testing where you give the user a goal and see if they can accomplish it with the software or prototype in front of them. You’re not testing whether they want to use the software (that’s what Lean Startup is for) but rather how usable the software is, assuming they have the goal you’re supplying. This distinction is important because I see lots of teams flub usability testing by asking ‘Do you like this?’ or something equally silly. Spoiler alert: The subject will always say, ‘Sure, ah, it seems great.’
Usability testing is best done early and often. Before anyone writes code, explore parallel possibilities through low-fidelity interactive prototypes. These are easy to make in tools like Balsamiq. Once you find a direction that performs well, flesh out your user stories with a more robust assessment test. Then code. Frequently, teams will ‘run out of time’ to user test. However, if they proceed in this order, they leave themselves time to test and make a habit of it, which results in much better software and a more agile working environment.
Deciding

Decisions should be well considered relative to the team’s target outcomes and they should be, well, decided. Nothing irks developers (or anyone) like flip-flopping on decisions, and yet operating environments change and new information becomes available. Agile handles this by working in relatively short iterations which conclude with work that is truly ‘done’ and testable.
Prioritize & Batch Tasks
Few things are more important to a product’s success than focus and prioritization. While the product focus comes from elsewhere (something like the Venture Design process), agile has a lot to say about how to prioritize and batch tasks.
In a high-functioning team, someone in the role of product owner (or something similar) maintains a prioritized list of stories. They then discuss these with their team (often estimating rough size and then re-prioritizing on the basis of both relative value and relative investment), and that becomes the iteration or sprint backlog. You may recall the mechanics from the section above ‘Onboarding a Team with Agile — The Process Part’:
This first-order prioritization delivers substantial benefits to teams. The use of narrative (user stories) creates focus and specificity on the what but leaves the how up to the implementer. The prioritized list (as opposed to, say, a prioritization scheme) avoids confusion and forces clear decisions.
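As a rough illustration of ranking on relative value vs. relative investment, here’s a sketch with made-up stories, values, and point estimates; your team’s scoring scheme will differ:

```python
# Hypothetical stories with the team's rough estimates of relative
# value and relative investment (story points).
stories = [
    {"story": "As a shopper, I want to search by keyword", "value": 8, "points": 5},
    {"story": "As a shopper, I want to save favorites", "value": 3, "points": 8},
    {"story": "As a shopper, I want one-click checkout", "value": 8, "points": 3},
]

# Rank by value delivered per unit of investment, highest first.
backlog = sorted(stories, key=lambda s: s["value"] / s["points"], reverse=True)
for s in backlog:
    print(f'{s["value"] / s["points"]:.2f}  {s["story"]}')
```

Note how the cheap, high-value story jumps to the top; that’s exactly the conversation the estimate-then-re-prioritize loop is designed to provoke.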
You’ll also want to consider prioritization with regard to Learning—in practice, this is quite important. The idea is to get complete functional experiences out to the user ASAP so you can observe what happens. The example that I remember from agile thought leader Bill Wake (who first really drove this home for me) was that if you have a product whose basic experience for the user is search, select, and purchase, then don’t build all the search stuff first and then move on to select, etc. Why? Because if you do it that way, after your first or maybe even your second iteration, you won’t have any meaningful experiences to test—just a bunch of searching. Rather, you should implement ‘thin’ slices of searching, selecting, and purchasing.
User story mapping is a popular practice that helps teams prioritize and sequence against their working idea of what is most valuable while remaining focused on delivering meaningful, testable user experiences. While storyboarding is a long-standing practice in the design world, its use as a guide for sequencing stories was popularized by Jeff Patton.
A story map (usually posted on a wall) looks something like this:
The top stripe, ‘Stripe 1’, is a set of storyboard squares describing the customer journey, and there are subsequent ‘stripes’ of priority below that. This view of the stories helps the team think about layering in complete user experiences vs. getting hyper-focused on just one area that won’t test well on its own. The idea is to sequence ‘thin slices’ (horizontally) into your prioritized backlog.
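If it helps to see the mechanics, here’s a toy representation of a story map; the journey steps and stories are hypothetical:

```python
# A toy story map: journey steps across the top, stories listed in
# priority order (stripe 0 first) under each step.
story_map = {
    "search": ["keyword search", "filters", "saved searches"],
    "select": ["product page", "comparisons", "reviews"],
    "purchase": ["guest checkout", "saved cards", "gift wrap"],
}

def thin_slice(story_map, stripe=0):
    """One story per journey step from the given priority stripe,
    so each release is a complete, testable experience."""
    return [stories[stripe] for stories in story_map.values()]

print(thin_slice(story_map))
# → ['keyword search', 'product page', 'guest checkout']
```

The first slice gives users an end-to-end search-select-purchase path to react to, rather than a deep but untestable search feature.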
Share out Responsibilities & Tasks
The empowered individual is the best, most reliable judge of what they should be doing—and I’d say that’s one of the most fundamentally important ideas for making agile work. From the iteration backlog, team members select stories and move them through the team’s defined process—a kanban board is an extremely common tool for this. Some teams leave their cards at the story level, others prefer to decompose them into tasks—more on the pros and cons of that in the section below on Work in Progress.
Informal discussions among peers along with the Daily Standup are how the team shares tasks and responsibilities on a more dynamic basis. Just to recap, that’s where everyone answers these questions while everyone else listens:
- What did I accomplish yesterday?
- What will I accomplish today?
- What obstacles (if any) are impeding my progress?
If anyone has an idea or wants to help, they briefly mention this and the relevant parties catch up after the meeting. For example, if someone is struggling with configuring the ‘squiggleplexor’, and one of the other team members is an old hand at squiggleplexing, maybe that person offers to help.
Manage Work in Progress
What’s so wrong about having lots of work in progress, lots of in-process software? It’s all headed to the same place, right? As long as everyone’s busy, why the fuss?
Well, it turns out keeping work in progress to a minimum has a lot of benefits—some obvious, some more subtle. As a community of practice, we’ve learned a lot from the lean manufacturing community—the work around Lean Startup being the most visible current example. The basic idea is that work in progress is at best a non-performing asset and at worst a distraction that robs focus from the drive to value. Has your team ever decided to spend a lot of time on a piece of technical infrastructure that’s supposed to make life super awesome? How did that work out? Sometimes it actually is super awesome, but a lot of the time it takes focus off testable output that you can validate (or invalidate) with the user, and that’s not good.
“The basic idea is that work in progress is at best a non-performing asset and at worst a distraction that robs focus from the drive to value.”
For getting started, one idea is to try making a rule (for one iteration that the team agrees on) that everyone finishes the story they’re working on before they start anything else. Keeping the cards on your kanban board at the story level may help; if you’re decomposing them into tasks, it’s easier to lose track of the story you’re working on.
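For teams that track this on a digital board, the rule can be as simple as a check like this sketch (the columns, stories, and limit are placeholders for whatever your team agrees on):

```python
# A sketch of a WIP-limit check on a kanban board; the columns,
# stories, and limit are hypothetical, chosen by the team.
board = {
    "to do": ["story D", "story E"],
    "in progress": ["story A", "story B", "story C"],
    "done": [],
}

def over_limit(board, column="in progress", limit=2):
    """True if a column holds more cards than the team's agreed limit."""
    return len(board[column]) > limit

if over_limit(board):
    print("WIP limit exceeded: finish a story before pulling a new one")
```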
Define & Communicate Release Content
The nature of this activity varies a lot depending on the nature of your product or project. On one end of the spectrum are SaaS players with a continuous delivery infrastructure, like Facebook. They release and test new content daily, mostly depending on their own assessment of what’s going to drive value. On the other end of the spectrum is a packaged software vendor on an enterprise model (software delivered to the customer, which they install themselves) whose product has many critical inter-dependencies, where the customer needs to do detailed planning on compatibility.
I’d say the first step is to know where you sit on this spectrum and have realistic near-term goals for your team. If you’re on the SaaS end of the spectrum, you may not even ascribe a lot of importance to releases—your product is just an ongoing experiment that evolves through lots of small experiments. This allows for robust evolution and a strong emergent product design.
If you’re on the other end of the spectrum (software you deliver to customers which they install and which has many inter-dependencies), this is difficult but important. You probably do a lot fewer releases than you do iterations and these require a lot of management. You’ll need strong discipline and thoughtful management of your customer relationships to make room for innovation. This is true even if these customers are internal and you’re part of an internal IT group.
My top three recommendations for this situation:
- Get out of the habit of talking about solutions and features
Easier said than done, but problems, properly defined as problem scenarios or jobs-to-be-done, are a much better focal point for innovation than specific solutions. Also, we tend to get over-focused on current solutions since they deliver the feeling of certainty we crave emotionally. Particularly when discussing releases further in the future, try focusing your customer on the problem they want to solve/the job they want done vs. the specifics of the solution. Get them to tell you more about what they want to have happen on their end vs. how you’re going to implement your software.
- Learn how to sell adaptability as a feature
This will get easier as you score some wins, but remember: part of the reason you do what you do is that you have expertise in your area. If you’re doing well in the job of Learning, you’ll have plenty to share with your customer about the ideas you’re exploring for their benefit.
- Earmark a big chunk of your pipeline
If all else fails, earmark a chunk of your pipeline (I’ve seen from 25% to 40%) for ideas you initiate, along with servicing technical debt.
Building

Here’s where the magic happens. Actually, that’s wrong. Here’s where you can see the magic happen. That subtle but important distinction is (IMHO) what leads to the incredible amount of waste in software/digital development. The whole matter is conceptually squishy, but my guess is that around ½ of all code written ends up being lightly used or not used at all (here are a few figures).
A big driver of this waste is that most teams chronically underinvest in the design and testing (of their ideas). I think the main reasons for this are that a) the tangibility of development over design makes it feel like more gratifying work and b) the community of practice around design and idea testing is still much less robust than the community of practice around software development itself. The punchline here is that the job of Building can’t deliver value if the job of Learning (or any of the other jobs) is neglected. Or, as they still say: garbage in, garbage out.
All that said, the activity of software development is probably where you’re investing a lot of your money and a lot of agile is about helping that go well. Of the body of work within agile, XP has the most to say about the details of software development itself. That said, the overall community of practice around software development is robust and as a project manager your main role here is to help your team understand what constitutes success and then give them room for purposeful experimentation around what helps them deliver successful outcomes.
Code Creation & Maintenance
It probably won’t surprise you to hear that a lot of agile-related focus on this job deals with a) working in small batches and b) testability. In the section above on Deciding, you learned about prioritizing your story backlog by both importance and sequence in the user journey so you’re delivering testable slices of user experience. What if a developer wants more time to build out part of the underlying software plumbing?
That may be the right decision, but there’s a rubric in agile called YAGNI that suggests careful consideration of such decisions. YAGNI stands for ‘You aren’t gonna need it.’ It’s basically the proposition that it’s often better to build specifically for the problem you need to solve right now and see how that works than it is to build a large, complex infrastructure that you assume will make things work better over the long run. Short sighted? Not in principle: the corollary is that you reinvest the time you saved later to refactor the code when you learn more about what’s really needed by the overall system, an approach called ‘emergent design’. One of the hardest parts about this is probably emotional: there’s a certain part of us that just wants to build monuments. It’s the same reason businesspeople write business plans or build products or factories they’re not sure they need—it just feels better. But the better answer may be to test on a small scale first. You can hear agile thought leader Bill Wake talk about it here: Bill Wake on YAGNI.
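As a hypothetical illustration, suppose today’s requirement is just ‘export rows as CSV’. The YAGNI-friendly move is the direct version below, not a speculative multi-format exporter framework; you can refactor toward that later if a second format ever materializes:

```python
import csv
import io

def export_csv(rows):
    """Solve exactly today's problem: turn rows into CSV text.
    No plugin registry, no abstract Exporter base class."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

print(export_csv([["id", "name"], [1, "Ada"]]))
```

If a PDF export request shows up next quarter, that’s the moment to refactor with real knowledge of what the second format needs—emergent design rather than up-front monument building.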
The other thing is testability. Many teams define ‘legacy code’ as code that doesn’t have unit test coverage. Unit tests are low-level tests that make it clear what an individual function should do and whether it’s still doing that thing. The reason for this focal distinction is that it’s much easier to change and refactor code when you can push a button to make sure everything’s still working after your change. Remember the whole YAGNI thing and emergent design? It’s much harder and more anxiety-inducing without good test coverage. Beyond the ease of refactoring, enthusiasts of a test-driven approach (referred to as ‘TDD’, for ‘test-driven development’) also find that writing tests before code helps them clarify their own intent before they dive into the details, and leaves behind a much clearer explanation of what the code was intended to do for the future developers who will work with it.
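Here’s a minimal sketch of the test-first idea. The function and its rules are invented for illustration, but notice how the tests state the intent before any implementation details:

```python
def normalize_sku(raw):
    """SKUs are stored uppercase with surrounding whitespace removed."""
    return raw.strip().upper()

# The tests, written first, double as documentation of intent.
def test_strips_and_uppercases():
    assert normalize_sku("  ab-123 ") == "AB-123"

def test_clean_sku_is_unchanged():
    assert normalize_sku("AB-123") == "AB-123"

test_strips_and_uppercases()
test_clean_sku_is_unchanged()
print("tests pass")
```

With these in place, a later refactor of `normalize_sku` is a push-button check rather than an anxiety-inducing guess.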
Version Control & Integration
You can’t test what you can’t find and can’t build. In fact, you can’t do much of anything in that situation. Version control is pretty standard these days. You’ve probably heard of tools like Git and products like Github that allow developers to organize the different versions of the chunks of code they’re working on and then reassemble them into a working system. If your team isn’t doing that, it’s certainly worth inquiring how they’re handling the job of version control. Somewhat more subtle is making sure that everything that goes into a working system is under version control. This includes less obvious items like configuration files, which are notoriously troublesome in production when not standardized.
Integration refers to the job of assembling all your code into a working system in order to test and/or deploy it. With the emphasis on small batches and testing, it probably comes as no surprise that frequent integration is popular with high-functioning agile teams. In fact, the rubric of using a high degree of automation to both integrate (internally) and deploy (externally) is a hot topic. Such teams organize this work around a ‘continuous delivery pipeline’ like this:
Basically, new code goes in, is automatically tested, and if/as it passes testing, deployed.
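In toy form, with hypothetical stand-ins for the real CI stages, that flow looks something like:

```python
def run_automated_tests(build):
    # Stand-in for the real unit/integration test stages.
    return build["tests_pass"]

def deploy(build):
    print(f"deploying {build['commit']}")
    return "deployed"

def pipeline(build):
    """Every commit flows through the same automated gates."""
    if not run_automated_tests(build):
        return "rejected"  # fails fast; never reaches production
    return deploy(build)

print(pipeline({"commit": "abc123", "tests_pass": True}))
```

The point isn’t the code, it’s the shape: the same automated gates run on every commit, so a failing build gets stopped cheaply and a passing one flows through without manual ceremony.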
Maintaining Best Practice Architectures & Conventions
If you’ve been in the business for a while, you’ve probably learned that the biggest costs for a given piece of code come from maintaining it over time. Given this, the code needs to be (conceptually) accessible to other developers and generally fit with the rest of the code and systems. Consistency helps.
Done well, standardization can energize a team and spur ongoing improvement. Done poorly, it can demoralize one. Pair programming and/or frequent code reviews by peers is a popular feature of agile teams. Among other things, teams often find pairing a useful way for more experienced developers to help new team members learn and understand the what and why of whatever conventions the team has adopted.
Functional Testing

By Functional Testing, I mean making sure the digital system behaves as expected—basically that it’s not broken. The best way to think about your investments here is that your cost to fix code escalates geometrically from a to f, where:
- a) The developer catches an issue in less than ~10 minutes based on the results of unit tests
- b) The developer catches an issue that day from some other testing (integration or system, which take longer to run)
- c) The developer catches an issue that week from their own work or a peer review
- d) The bug is found later (by a tester, etc.) but before it goes to production
- e) The bug is found later, before it goes to production, but another developer needs to figure out the fix
- f) The bug is found by a customer and/or support in production
This is an oversimplification but to the extent your practice of agile gets you fewer f’s and more a’s, that’s good.
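For illustration only: if the cost to fix roughly tripled at each stage (the actual ratio varies a lot by team and system), a production bug (f) would cost on the order of 250x a unit-test catch (a):

```python
# Illustrative only: assume the fix cost triples at each stage a-f.
costs = {stage: 3 ** i for i, stage in enumerate("abcdef")}
print(costs)
# → {'a': 1, 'b': 3, 'c': 9, 'd': 27, 'e': 81, 'f': 243}
```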
Also, testing by hand is super boring, regardless of who does it. And, obviously, if a developer writes their own tests to find bugs, you’ll end up with more a’s. For all these reasons, making time for the team to invest in test automation is popular with many agile teams. It’s also integral to the rubric of continuous integration/delivery.
Managing

Googling management currently returns the following:
- the process of dealing with or controlling things or people.
“the management of elk herds”
“if there has been any management in the business, it has been concealed from me”
I just thought that second definition was too hilarious to leave out. I hope you enjoyed it as much as I did.
The management goal of agile is more radical than I think most people understand. The high-functioning agile team is self-organizing. This means that most of the ‘process of dealing with or controlling things or people’ happens by the initiative of the individual without prompting or mandating from someone else. That stands in stark contrast to the traditional command and control approach to project management where people are resources to be directed—the bigger picture abstracted away from them to avoid distraction or confusion.
It’s hard for me not to sound negative about traditional project management, just because I’ve seen it fail so many times in software projects. However, agile isn’t simply ‘better’ or ‘more modern’ project management. While it is radically different, I don’t think it’s the answer to everything that needs to be project managed. I’ve never done anything like this, but it doesn’t seem like a very good way to build a bridge or plan the Olympics. If you’ve got 1,000 tons of steel girders showing up on day x and/or a crew of specialized people and equipment, you need to manage things so that the project is ready for them. I note this because I think to understand what agile is, you also have to understand what it isn’t.
The topic of management is broad and not all of it is particularly germane to agile. Let’s focus on three primary jobs:
- Helping the team excel at its primary job of delivering valuable user experiences through software
- Dealing with all the other random &%%@*!%^ that comes up
- Developing a positive team culture
I’d be interested to hear your thoughts on what other management-related jobs are particularly important here, but I’ll stick to those for now.
On the topic of helping the team excel at its primary job of delivering value via software, I’m going to punt: I think we’ve covered a lot of how to do that by way of the other material.
On the topic of dealing with all the other random &%%@*!%^ that comes up, there’s no silver bullet, but the consensus is generally that some single person needs to be assigned to deal with it. This way, the team minimizes costly interruptions to the focus required for developing software. In scrum, the Scrum Master does this. In practice, it’s usually someone in the general role of project manager that gets stuck dealing with the other random &%%@*!%^ that comes up. This could be anything from finance rejecting a PO the team needs to buy some equipment to attending the kind of meetings that seem to proliferate at larger companies.
On the topic of developing a positive team culture, I think the most clinical/repeatable way for an individual to pursue that (particularly if you’re at least somewhat bought into the rest of what you’ve read here) is to cultivate a culture of experimentation. This means that generally the team focuses on testable decisions versus, say, lengthy consultation or someone in a position of authority having to be the decider of last resort. It also means responding to results objectively, which is great for focus. Nowhere will your success here be more apparent to the team than in the quality of your retrospectives. Do meaningful questions get asked? Do they get answered meaningfully with a view on real root causes? Most importantly, is the team able to make meaningful decisions about how to test new processes to improve?
Interface to General Management
Your team doesn’t operate in isolation. Depending on what you’re doing, you need to interface with some kind of a ‘boss’ and probably multiple other stakeholders as well (finance, HR, etc.). Agile in general doesn’t have a whole lot to say about this, probably because there’s so much variation. My general advice is to a) focus on the problems you and your stakeholder want to solve and b) anoint someone to deal with these interfaces (unless there’s a compelling reason to do otherwise) to avoid distracting the team.
In summary, I would work with your team to think first about what’s important to it and how it’s doing on these fundamental jobs of software development. Then I would draw on the methods and practices the various methodologies and communities of practice offer to decide which approaches you want to test.
How was that for you? What do you think about agile now vs. before? What do you see as your next step? Hit me up on Twitter (@cowanSF) or LinkedIn (alexcowan). I’d love to hear from you and learn about your practice of agile.
Alex has been an entrepreneur (5x) and an intrapreneur (1x). He’s currently on the faculty at UVA’s Darden School of Business teaching product design and advising corporations on product development and innovation practices at COWAN+. His online course on Agile Development is one of the most popular worldwide offerings on Coursera.