Agile Testing Fellow
This is a blog by Agile Testing Fellow
https://agiletestingfellow.com/blog.rss

Why we now say holistic testing vs. agile testing

Yesterday, a LinkedIn connection started a conversation with Janet. He wanted to learn about agile and how to start. It started Janet thinking, so we decided to look at the question more deeply, since it’s not an easy one to answer, and testing is such a big part of how we think about agile development.

 

We used the term “agile testing” for many years because it was appropriate at the time. It seemed a natural outgrowth from the “agile” manifesto that came out of the Snowbird gathering in 2001. Agile development focuses on quality, but many agile practices don’t emphasize testing specifically. We believe the agile values, principles and practices apply to testing as well as coding and other development activities.

 

People frequently ask what “agile testing” is. With a lot of community input, we posted our definition of agile testing a few years ago:

 

  • Collaborative testing practices that occur continuously, from inception to delivery and beyond, supporting frequent delivery of value for our customers. Testing activities focus on building quality into the product, using fast feedback loops to validate our understanding. The practices strengthen and support the idea of whole team responsibility for quality.

 

With the increased visibility of continuous delivery and DevOps culture, more teams, including ones we have worked on and with, have invested in testing activities on the right-hand side of the “DevOps loop”. Dan Ashby captured this so well in his “Continuous Testing in DevOps” post, and we have had interesting discussions with Dan as well as other leading practitioners about the many testing activities throughout the infinite loop of software development.

 

We came up with the term – “holistic testing”, and Janet wrote a blog post explaining this concept, with a model to illustrate it. Basically, it comes down to this:

 

  • When we test, we need to consider all types of testing, not only the ones we think a tester is responsible for. It includes automation, exploratory testing, and other types of human-centric testing. It involves the whole team, the product organization, and even the customer. We need to consider testing from a holistic point of view.

Since then, we've published our book, Holistic Testing: Weave Quality Into Your Product, to explain the ideas in detail.

 

How is holistic testing different from agile testing? There is a huge amount of overlap. Both approaches emphasize fast feedback loops. Both embrace whole-team responsibility for quality and testing.

 

However, we do see some differences in the holistic approach. Agile testing certainly encompasses the entire software development loop, yet, when we talked about agile testing, we tended to talk more about the left-hand side of the loop. We put a lot of emphasis on building shared understanding of features, testing feature ideas, guiding development with technology- and business-facing tests. We also emphasized continuous improvement, using practices such as retrospectives to identify obstacles and design experiments to move forward.

 

Holistic testing balances testing activities on both sides of the DevOps loop and shows how testing needs to be a team activity. Many organizations still leave the right-hand side of the loop up to operations specialists and site reliability engineers. This is because many testers still lack knowledge of how code is instrumented to store structured data in logs, or they may not know how to use any of the monitoring and observability tools available today. Anyone on the team can take advantage of analytics tools that show what our production users are doing. We need testing on both sides of the loop to continuously deliver value for our customers at a sustainable pace.
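Structured logging is one of those right-hand-side skills any team member can pick up. As a minimal sketch (the logger name, event names, and fields here are invented for illustration; real systems typically use a logging library or platform SDK), here is what emitting JSON-structured log events looks like in Python using only the standard library:

```python
# Minimal structured-logging sketch: events are emitted as JSON so that
# log aggregation and observability tools can index and query the fields.
# The "checkout" logger, event names, and fields are hypothetical examples.
import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record):
        event = {
            "level": record.levelname,
            "event": record.getMessage(),
        }
        # Merge any structured fields passed via the `extra=` parameter
        event.update(getattr(record, "fields", {}))
        return json.dumps(event)

logger = logging.getLogger("checkout")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Emits: {"level": "INFO", "event": "order_placed", "order_id": "A-123", ...}
logger.info("order_placed",
            extra={"fields": {"order_id": "A-123", "total_cents": 4599}})
```

Once events carry fields like `order_id`, anyone on the team can query production behavior rather than grepping free-form strings.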

 

The Agile Manifesto is more than 25 years old, and the term “agile” has gained a lot of unfair baggage over the years. Some organizations implement a development framework and say they are agile, even though they still deliver poor quality software infrequently. Labels are hard. “Holistic” is descriptive and more precise: it reflects the whole-team approach, and it encompasses the whole development cycle. Words are important, and we believe “holistic testing” is a better way to convey this approach to building quality in.

 

Wed, 11 Mar 2026 23:23:23 +0000 https://agiletestingfellow.com/blog/post/why-we-now-say-holistic-testing-vs-agile-testing https://agiletestingfellow.com/blog/post/why-we-now-say-holistic-testing-vs-agile-testing
Collaboration in 2026

Over the years, we have written a lot about face-to-face collaboration, including this blog post in March 2021. During the pandemic, “face-to-face” meant something quite different for most teams. Many are still fully remote and a lot of them may stay that way or be hybrid (a bit of both). We discuss what collaboration means in today’s world.

 

As tragic as the pandemic was, some industries got a boost – particularly products that help teams collaborate remotely. Prominent examples include Zoom, Slack and Miro (and their competitors). Before the pandemic, meetings that included both in-office and remote participants often still used telephones and cameras in conference rooms. They sometimes used shared spreadsheets or documents. Now, many teams use Zoom (or similar) for meetings most of the time to include everyone equally. Similarly, Slack and other apps such as Teams were often a supplement to communication for development teams; now all employees depend on them. Teams have turned to Miro and similar products for online whiteboards and sticky notes.

 

Visual collaboration tools

 

Unfortunately, not all teams use visual collaboration tools to their best advantage to get everyone engaged during planning meetings, strategy sessions, retrospectives and more. These tools shine when used well: when everyone on a team starts writing on virtual sticky notes and moving them around the board, they start generating new ideas. People who might be shy to speak up in a Zoom call may be more comfortable writing and drawing on a virtual whiteboard.

 

Visual techniques like mind mapping, impact mapping and context diagramming are a few examples of activities that work well virtually. Miro and Mural are popular tools that offer a wide variety of formats. Simpler tools such as the free ones Google offers can be just as effective.

 

Connecting for face-to-face collaboration

 

You’re a tester pairing with a developer to review code changes for a story. You need the product owner to answer a question – or, the product owner realizes there is a new requirement and wants to talk to you about it. How can you start conversations quickly when working remotely?

 

Make it easy to connect by creating a Slack (or similar tool) channel for team Zoom (or other video meeting product) rooms. Slack has a Zoom integration that lets you create a meeting, add a description, and let anyone in the channel join. Everyone can see who’s in the meeting, so the product owner can simply look to see where you are and hop in. Or, you can @mention them in the channel and ask them to join. Other chat tools have similar features.

 

Working with multiple teams

 

Many features involve multiple teams working on different services that have to integrate. To help speed up communication, the teams can create a temporary Slack channel for the feature. Anyone can post about changes, share new information, or ask questions. It’s easy to @mention the people who need to talk and hop into a video call together.

 

At Lisa's last full-time job, the testers on the various teams created an additional temporary channel dedicated to information about testing. It helped them share information and schedule meetings as needed. When the new features were released, the channels could be archived to save conversations in case they were needed for reference. Working together, the testers promoted more communication and collaboration among the teams.

 

We’ve given you a few examples of different ways to foster communication. There are many more out there! Take advantage of the new technologies that give us benefits similar to having people in the same room drawing on a whiteboard. Use the power of face-to-face communication, and adapt as needed for your context.

Fri, 30 Jan 2026 18:09:51 +0000 https://agiletestingfellow.com/blog/post/collaboration-in-2026 https://agiletestingfellow.com/blog/post/collaboration-in-2026
Competencies over roles

Some lucky testing & quality professionals have had the opportunity to be part of an effective cross-functional team that grows into a high-performing team. It’s hard to understand the “magic” of a team like this if you’ve never experienced it. There are many reasons so many so-called cross-functional teams never experience this magic.

 

We often see that Scrum, Kanban or other cross-functional teams are really just a tiny organization with mini-silos and mini-waterfalls. Analysts still hand off requirements to be executed, coders still hand off code to be tested, testers still hand off deliverables to be released to production. Each role or job title is strictly defined; each individual follows a narrowly defined path for career progression. Team members in this setting have little motivation to branch outside their job description. Someone who is paid to write lines of code will feel they cannot spend time on testing activities.

 

Working towards the magic

 

Having both worked on and observed many high-performing cross-functional teams, we believe the key to success lies in valuing competencies over roles. Someone with deep exploratory testing skills may also have some excellent business analysis skills. A talented designer might also excel at exploratory testing. A coder and a tester can collaborate on automating acceptance tests to guide development, combining their coding and test design skills. We do need specialists with deep skills on our teams – no one person can know everything about everything. Yet, each specialist can have a broad range of competencies to bring to the party.
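To make the coder-tester collaboration on acceptance tests concrete, here is a hypothetical sketch in Python: the business rules, function name, and numbers are invented for illustration. The tester contributes the boundary examples agreed with the product owner; the coder writes the implementation that makes them pass:

```python
# Hypothetical acceptance-test collaboration: the tester supplies the
# business examples (including the boundary cases), the coder makes
# them pass. The loyalty discount rules below are invented.
def loyalty_discount(years_as_customer: int, order_total: float) -> float:
    """Return the discounted order total (the code under test)."""
    if years_as_customer >= 5:
        return round(order_total * 0.90, 2)  # 10% off for long-time customers
    if years_as_customer >= 1:
        return round(order_total * 0.95, 2)  # 5% off after the first year
    return order_total

# Acceptance examples the pair agreed on before coding started
assert loyalty_discount(0, 100.00) == 100.00  # new customer: no discount
assert loyalty_discount(1, 100.00) == 95.00   # boundary: exactly one year
assert loyalty_discount(5, 100.00) == 90.00   # boundary: exactly five years
```

Writing the examples first is the point: they guide development, and the tester's test-design instinct (hitting the boundaries at 1 and 5 years) is as valuable as the code itself.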

 


 

We can all wear a lot of hats, and we don’t want to be limited or “pigeon-holed” by a particular job title. Visionary leaders find ways to encourage each team member to contribute multiple skills. When each person on the team feels equally valued, they can collaborate to find creative ways to improve the quality of their product and process. Here are a couple of tips from our experience that let people add to their range of skills in a way that benefits everyone.

 

Expand non-testers’ skills matrices

 

A few years ago, Lisa’s team realized they needed more exploratory testing to counteract a steep rise in production issues. Only two people in the team of 30 had an official role of “tester”. The team agreed that programmers would do exploratory testing on the stories they developed. The whole team took responsibility for exploring at the feature or epic level, before flipping the feature flag “on” for all users in production.
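The feature-flag mechanics behind "flipping the flag on" can be sketched very simply. This is a hypothetical, minimal illustration in Python; real teams typically use a feature-flag service or library, and the flag name and percentage-rollout scheme here are invented:

```python
# Minimal feature-flag sketch: a feature ships dark behind a flag, gets
# explored at the feature level for a percentage of users, then the
# rollout is raised to 100% ("flipped on") for everyone.
# The flag name and config shape are hypothetical.
import hashlib

FLAGS = {
    "new_checkout_flow": {"enabled": True, "rollout_percent": 25},
}

def is_enabled(flag: str, user_id: str) -> bool:
    config = FLAGS.get(flag)
    if config is None or not config["enabled"]:
        return False
    # Hash the user id so each user lands in a stable bucket from 0-99;
    # the same user always gets the same answer during a partial rollout.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < config["rollout_percent"]
```

Because the bucketing is deterministic, exploratory testing against a known test user behaves consistently while the flag is partially rolled out.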

 

The testers facilitated exploratory testing workshops and ensemble testing sessions so the non-tester team members could ramp up their skills. Testers and programmers started pairing frequently, to work on stories, write exploratory testing charters, and do exploratory testing. Perhaps most importantly, the managers added exploratory testing skills to each level of the programmers’ career ladder. They were motivated and empowered to add a new range of competencies to their toolbox.

 

Testers act as consultants

 

We recommend that cross-functional teams adopt a whole team approach to testing and building quality in. Agile and DevOps teams often include only one or two professional testers. As testers on these teams, we can be more effective by acting as testing consultants and quality advocates.

 

As mentioned above, we testing specialists can help team members learn a wide range of testing skills through pairing, ensemble work, workshops, and other activities. We can encourage and facilitate team retrospectives to identify the biggest obstacle to improving product or process quality and help design experiments to overcome that obstacle. We can observe and ask good questions. And, we can add to our own skill sets through collaborating with coworkers with other specialties. Of course, like any tester who loves their job, we like to do some hands-on testing as well.

 

If you feel a particular job title will help others understand your value, work with your manager to change it. Whatever your job title is, help your entire team gain testing competencies. See how you can add more value with skills that go beyond testing. Your contributions working together with the team may go far beyond your official role.

Sun, 21 Dec 2025 16:48:54 +0000 https://agiletestingfellow.com/blog/post/competencies-over-roles https://agiletestingfellow.com/blog/post/competencies-over-roles
Join in the code reviews and use AI with care!

Lisa’s team practiced pair programming, with the pair sometimes being a programmer and a tester. This is a great way to get two sets of eyes on the code and the tests and get immediate feedback on any problems. Even so, the team felt an additional review was a good idea. A more formal “code review” can be a good way to keep a record of changes. Even when teams engage in pairing or ensembling on code changes, extra eyes on the production and test code are a good thing.

 

Each developer pair on Lisa’s team created a branch from the code base trunk when they started a story. Typically, they finished a story within one or two days, which included all the testing and automation. At that time, they created a pull request. The product owner, who was also a senior developer on the team, was supposed to review the production and test code changes for that story and merge the changes into the trunk or main branch of the code base.

 

However, the team had a problem. The product owner was a huge bottleneck. He was overly busy and often could not even look at the pull requests for days. This was a big blocker for the team. After a retrospective focused on this problem, the team decided to enable more people to merge pull requests. The team’s testers were included in those empowered to review code and merge changes.

 

Thus, with the help of teammates, Lisa learned how to do code reviews in GitHub. She found this tool-based code review process helpful, so she continued to participate in reviews on subsequent teams. She found it was a quick way to see what was changed in both the production code and test coverage.

 

Below is a picture of a pull request in GitHub.

 

Benefits of getting involved

 

A healthy code review process is a big part of making code readable, testable, operable, and maintainable. Code reviews can help the whole team build shared understanding of their codebase, and they promote knowledge sharing. It’s also an activity where testers can add a huge amount of value.

 

Some of you readers may be groaning now about your own team’s code review process. Maybe you are thinking, “Ugh, the pull requests (aka merge requests – proposed code changes to be reviewed and merged) are such a bottleneck for us!” or “The pull requests are so big that I can’t begin to understand them, and I don’t have time to look.” Or perhaps you’ve never been involved with the code review process. It’s even possible your team doesn’t have a review process.

 

Some teams gather team members together to review code together. Others use a tool-based review process like Lisa’s team did. Either way, participating in code reviews benefits you, and your involvement will benefit your team. Code reviews are – or should be – a conversation. They’re more effective when the conversation includes people with diverse perspectives. It’s especially important for testing specialists to help review the code changes as well as the tests that support the code.

 

Another way to do reviews that Janet prefers is a peer review. Sit with the programmer and have them walk through their code with the reviewer asking questions. The reviewer can be anyone who knows the team’s coding standards. The idea of “articulating out loud” comes into play, and most programmers will find their own mistakes. It’s simple and effective – especially for non-critical systems.

 

A good resource for learning more about code reviews is a new book, Looks Good to Me: Constructive Code Reviews, by Adrienne Braganza. Code reviews are a great opportunity to collaborate with your team members and learn more about your team’s code. If your team isn’t doing code reviews, or if you do code reviews and they’re a source of annoyance, this book will help you have good conversations about how to improve the process.

 

AI-assisted code reviews

We all have feelings about AI and all the hype around it. The good news is that leading practitioners, as well as research done by organizations such as Google's DORA, show that AI tools can help with the speed, consistency, quality, and scalability of code reviews. Adrienne's book, mentioned above, has excellent guidance in Chapter 13 to help you get the benefits and avoid the pitfalls. As she notes, AI won't take over code reviews, but AI-human collaboration can improve them.

 

We like how our friend Jen Cook, a leading quality practitioner, keeps it real:

 

  • AI is a huge benefit in code reviews - it catches a ton of the basic 'duh' stuff, including nitpicky stuff that might slip by a single human. That said, AI code review is no replacement for human code review; it's a complementary practice.

 

There are many resources out there on AI-assisted code review, including this post from Angie Jones: https://angiejones.tech/how-i-taught-github-copilot-code-review-to-think-like-a-maintainer/

 

Contributing to your team's code review process is a great opportunity to improve your skills and help your team prevent bugs and deliver higher quality code.
 

Wed, 19 Nov 2025 17:23:08 +0000 https://agiletestingfellow.com/blog/post/join-in-the-code-reviews-and-use-ai-with-care https://agiletestingfellow.com/blog/post/join-in-the-code-reviews-and-use-ai-with-care
The age-old question: Do testers need to be coders?

We have been asked this question for the last 20 years. In today’s agile world, does everyone on the team need to be able to code? Many of us have felt the uncertainty of wondering, “Am I technical enough?” We both started our software careers as programmers, and we’ve done our share of writing automated test scripts and other coding activities. Yet, it’s not the kind of work that brings us the most joy. The world of AI has added another dimension to the question, which we won’t address in this post.

 

Skills that enable collaboration

 

We love collaborating with all software team members, including programmers. We’ve seen the benefits of cross-role collaboration pairs and ensembles. In our experience, testers don’t need to be coders, but we do need to be technically aware. We need to be able to communicate with coders and others on our team, so we can work together. This includes having a high-level understanding of system architecture, being able to use source code control systems, knowing the basics of good coding practices and patterns, and being able to follow along when a programmer walks us through the code.

 

Lisa has been on teams that practice ensemble (mob) programming. At the beginning, she, along with her teammates, felt that she would need to regain her code-writing skills to be able to fully contribute to the ensembles. Although she was given an hour every day to take online coding courses, she quickly got frustrated. Lisa understood the concepts but struggled to write code on her own.

 

The team found that Lisa was useful in the ensembles even if she couldn’t “code”. She took her turn as the “driver”, while teammates acted as “navigator” and told her what to do. She asked questions to understand better, which often helped them find a better solution. She suggested abstracting out duplicated code or writing a new test. Not only was Lisa useful, she was happy whenever she was a productive part of an ensemble working on user stories.

 

The value of domain knowledge

 

Another example that Lisa recalls is when she paired with a teammate who was trying to solve a production issue on which he was completely blocked. It was an area of the product she had worked on, so she understood the functionality. He started walking her through the code. They got to an “if” statement where the programmer was sure the “else” branch could never be reached. However, Lisa remembered a case where it was possible. They tried the scenario in the staging environment and reproduced the bug!
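As a hypothetical reconstruction of that kind of bug (the invoice domain, field names, and edge case below are all invented for illustration), consider a branch the programmer believes is unreachable:

```python
# Hypothetical "unreachable" else branch: the programmer assumed every
# invoice always has line items -- but an edge case (say, an order
# cancelled after invoicing, with its line items removed) can reach it.
# Domain knowledge, not code reading, is what finds this scenario.
def invoice_total(invoice: dict) -> float:
    if invoice["line_items"]:
        return sum(item["amount"] for item in invoice["line_items"])
    else:
        # "This can never happen" -- except that it can.
        raise ValueError(f"invoice {invoice['id']} has no line items")
```

The code is perfectly readable, yet only someone who remembers the cancelled-order scenario knows the `else` is live; that is exactly the kind of contribution Lisa made.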

 

After that experience, Lisa realized that with her domain knowledge, problem-solving skills, and ability to work with programmers, she can help find and solve code problems. She still wants to learn more coding skills, but it’s not the most important way she can bring value to her team.

 

Bring your deep skills to the team

 

If you want to learn how to write test or production code, there are great resources today to help you. We have found that collaboration skills are more valuable. Notice all the ways you contribute to your team’s success. Look for ways you can add value. Given that your team already has coders, your other skills probably matter more!

Sat, 11 Oct 2025 14:59:01 +0000 https://agiletestingfellow.com/blog/post/the-age-old-question-do-testers-need-to-be-coders https://agiletestingfellow.com/blog/post/the-age-old-question-do-testers-need-to-be-coders
Task-switching – can it work?

There’s plenty of evidence showing the downsides of frequently switching tasks and contexts – search on “task-switching is bad” in your favorite search engine to get an idea. Yet, the reality is that many testing and quality practitioners support more than one product, workstream, or team. What can we do to be effective when we’re required to switch contexts often?

 


There may be side benefits - or not

 

Lisa has had both good and bad experiences when she’s worked on more than one feature, product or team. It was best when she could help the teams communicate with each other. For example, when one team tried a new technique, such as impact mapping to help slice stories, and found it super helpful, she helped spread that idea to the other team. When two teams were working in the same part of the code, unaware of each other’s plans, Lisa made sure they talked and coordinated their work.

 

There’s not always a silver lining to supporting two products. For example, Lisa encountered friction with her manager when he assigned her the task to coordinate the testing strategy for a second product. Since she wasn’t familiar with that product, she couldn’t answer all his questions immediately. It shouldn’t have been a problem, since the development lead for the other team, along with other team members, could collectively answer the questions. Yet the manager had expected Lisa to know it all.

 

Janet’s opinion is that testers shouldn’t be on more than one team because they miss too much – information about the product, risks, or even chances to bond with teammates. She would rather see a team take ownership of all the testing themselves and perhaps use the tester from the other team or product line as a test consultant. However, we recognize that it is not always possible.

 

Approaches to counteract task-switching

 

Reflecting on this, one big factor in helping testers and their teams work effectively even when they switch contexts a lot is autonomy. When team members decide amongst themselves who will take the lead on each feature or story and have the freedom to work together to cover all testing activities, they can find creative approaches to succeed. When managers start assigning tasks to individuals with unrealistic expectations about how much one person can multi-task, everyone may suffer.

 

If you are in a testing role on more than one project or team, you know this can be a difficult and frustrating situation. Look for ways to engage everyone you work with to help you be effective. Ask for their help! Find opportunities to get people working on different efforts to share what they’re doing and perhaps solve common problems together. Provide reality checks about the time you’re able to devote to each work stream. Share what you know about the progress on each.

 

Task switching is a good topic to bring up in team retrospectives. When the whole team collaborates to try better ways of working, they can find ways around the multi-tasking dilemma.

 

The idea of a test consultant or quality coach is a powerful one. We talked about that type of role in our February 2024 blog post, so please go read that for more ideas. Also, Anne-Marie Charrett has finished the book we mentioned there, The Quality Coach's Handbook, which is now available on LeanPub and Amazon.

 

Tue, 15 Jul 2025 20:38:43 +0000 https://agiletestingfellow.com/blog/post/task-switching-can-it-work https://agiletestingfellow.com/blog/post/task-switching-can-it-work
Testing: A low-level task??

Several years ago, Janet was asked this question:

"The majority of developers don't like testing and quite a few business analysts (BAs) don't like it either. It is considered a low-level task. Any comments? Thoughts?"

We are revisiting that question since we’ve heard it again lately. Here are our thoughts:

In the days of old, and not so very long ago, that statement was true in many organizations. Testing was considered a task that anyone could do: give a person a script to follow, have them check off the steps that behave as expected, and enter a bug for any step that doesn’t – it must be broken.

We still run across a few companies that work that way, but not as many. We also realize that some countries, cities, organizations and even teams are slower to adopt new technologies than others. We have been spreading the concept of the whole team approach to quality and testing for over twenty years, and most people realize that testing is not sitting with a script, following along line by line. These days, the change is faster than ever.

A wide range of skills

Testing is a skill that needs to be learned. Testing requires creative thinking and problem-solving skills. Testing requires knowledge and deep understanding of the whole system to identify impacts of new features. Testing requires the ability to identify and assess risk. Testing requires abstract thinking to ask questions early that will affect the design and testability. Testing requires experience to apply appropriate heuristics to look for faults. Testing requires understanding of regulatory standards that apply to the product.


In addition, testers need some knowledge of programming to help determine at what level to automate tests, and what would be better accomplished as human-centric exploratory tests. They need to be able to communicate and give feedback to all stakeholders, including programmers whose code they are testing. They need to be proactive, helpful and positive when giving feedback, even when the feedback is negative in nature. They need to understand the strengths and weaknesses of AI, and when to use it or test it.

In conclusion: testing is NOT a low-level task. A good tester is worth every penny they earn. In many teams, testers get paid more than programmers, because a good tester is hard to find. These days, many testing and quality professionals are adopting job titles such as Quality Engineer and Quality Coach which encompass the skills and activities we've touched on here.

Taking on new challenges

Every week, we read more books and articles written on testing, because technology is ever changing and testers need to keep up. There are so many specific niches in the testing world that we don’t think testers could ever get bored: security, performance, observability, artificial intelligence (AI), machine learning (ML), the Internet of Things (IoT), data science, and more.

Not all testers have university degrees in Computer Science, but many do. Some have degrees in Liberal arts, Social Sciences, Mathematics, Physics or Animal Husbandry. Some testers take a tester certification exam to ‘prove’ they have the knowledge because that is the only way a company will believe them. Others have a programming background and moved to testing because it is more fun for them - both of us come from that background.

You can tell we are passionate about this subject. Testing is an activity, and yes – anyone can learn, but people who test are often testers in every part of their life.

Mon, 16 Jun 2025 15:14:25 +0000 https://agiletestingfellow.com/blog/post/testing-a-low-level-task https://agiletestingfellow.com/blog/post/testing-a-low-level-task
Who does the 'validation'?

A few years ago, Lisa’s team used an online tracking tool to visualize and manage their work. The team practiced kanban, though they had regular work sessions such as story readiness workshops and retrospectives on a two-week iteration schedule. The project board’s columns included Sized, In Progress, Ready to Merge, Validation, Ready to Release, and Done. Over a few months, the team had ongoing discussions about the Validation column. It regularly filled up with stories, becoming a bottleneck that got in the way of the team’s desire to deploy small changes to production frequently.

 

The team practiced ensemble (aka software teaming or mob) programming. Each ensemble, which usually included a tester, used a test-driven approach and incorporated testing activities alongside coding. They did human-centric activities such as exploratory testing, and wrote automated tests at appropriate levels. The team expected that stories that reached the Ready to Merge column had, in fact, been adequately tested at the story level. So why did stories end up waiting to be validated?

 

Why a validation column?

 

One reason was that the product owner wanted to check some of the stories himself. The ensembles sometimes asked him to join them to validate a story before they moved it out of In Progress. However, he was often in meetings and unavailable. Another reason was simply historical. There was a time when the tester was expected to validate those stories. The team embraced testing as part of the development work, but the tester who had been on the team a long time sometimes liked to take an extra look at stories they felt might be higher risk.

 

Some stories also needed testing at a feature level, to ensure a feature worked end-to-end. That could be addressed by having separate stories for workflow testing as a “Test this feature” type of story. Lisa also introduced stories that contain exploratory testing charters. Still, the team couldn’t agree to try eliminating the validation column.
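For teams writing exploratory testing charters for the first time, a widely used charter template is the one Elisabeth Hendrickson popularized in her book Explore It!; the filled-in example below is our own hypothetical:

```
Explore <target area>
  with <resources, tools, or conditions>
  to discover <the kind of information you seek>

Example (hypothetical):
Explore the checkout workflow
  with expired and soon-to-expire discount codes
  to discover how pricing errors surface to the user
```

A charter like this fits naturally as the body of an exploratory testing story, keeping the work visible on the board without turning it into a script.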

 

So, what happened? The ensembles often ended up spending half a day or more validating stories that were already tested so they could move them from the Validation column. It interrupted their workflow.

 

Have a conversation

 

Back in 2018, Janet blogged about whether teams need a testing column on their board. Her conversation with another testing practitioner shows the value of taking a step back to consider more effective ways to organize work and enable more collaboration. She believes a Validation (or To Test) column is a red flag for teams and would rather see testing activities as tasks within the story. Janet has recommended to many teams adding a To Review column, which almost every task could benefit from – even testing tasks.


 

Lisa's team kept revisiting the validation bottleneck at their biweekly retrospectives. Over time, they found that adding individual stories to the backlog for extra testing efforts, such as feature-level testing, was a better way to make that work visible. It also helped with planning, since it was clear that extra time would be needed. They did keep the Validation column but only used it for special cases. For example, if a story depended on work from another team, they put it in the Validation column to coordinate integration testing with the other team.

 

Make it visible

 

Different teams represent and track their testing work in a variety of ways. For some, all the testing is encapsulated in the user story. For others, a story’s testing tasks are represented in separate cards or items in their online tracking board.

 

It’s important to visualize your work and make progress easy to see. Your tracking board can do even more. It can help your team find good ways to weave coding and testing activities together to build quality into your product from the start. As a team, talk about your columns (or stages) to see if they add value to your process, and try experiments to see if your cycle time can be shortened by doing a better job of testing together.

]]>
Tue, 29 Apr 2025 16:25:18 +0000 https://agiletestingfellow.com/blog/post/who-does-the-validation https://agiletestingfellow.com/blog/post/who-does-the-validation
Quality Attributes – those pesky “non-functional” requirements We commonly hear people talking about “non-functional” requirements. By “non-functional”, they mean quality attributes such as performance, security, accessibility, usability, operability, installability, the list goes on and on. However, we believe non-functional isn’t an appropriate name for these attributes. How “functional” would a web-based e-commerce application be if it wasn’t secure, if it was difficult to use, if its pages took minutes to load, if it wasn’t supported by monitoring and observability?

Have the conversations

Download the chapter on models about the Agile Testing Quadrants. We talk about and plan testing for all the quality attributes that may be important in the context of our team’s products. These mostly fall into the right side of the quadrants – test activities that critique the product. That doesn’t mean teams won’t start testing for them even before the production code is written. For example, a team may do a spike: write throw-away code that is unsupported by tests. They may do this to run load tests against it and verify that the architecture scales appropriately for the expected number of users.

It’s crucial to talk about all these quality attributes as soon as a team starts to talk about a new feature. Frameworks for structured conversations, such as story mapping and the 7 Product Quality Dimensions, help teams consider workflows and identify the “non-functional” testing activities they may need. Even with shared understanding built on concrete examples and business rules, teams miss things. For example, who expected a beaver to chew through cable buried three feet underground and bring down the internet? Teams also need to consider quality attributes like recoverability and observability to help fix unexpected problems quickly.


Start having those conversations by examining some of the risks the team might identify. Look at both business and technical risks. Ask those “what if” questions, or “What’s the worst thing that can happen? What keeps you up at night?” to get the conversations started. In her blog post about quality management, Janet shared some ideas about using quality sliders to determine what are the most important quality attributes a team should consider.

Risk vs. value

Risk is about trade-offs. For example, when Lisa needed eye surgery, the doctor was very specific about the risks and the impact of those risks if they should materialize. He also explained the consequences if she didn’t have the surgery. Risk vs. value. It is a conversation to have with your customers.

Perhaps the internet provider planning to bury that cable had thought about their nightmare scenarios. They might have considered risks such as permafrost, animals chewing through the cable, or someone accidentally stepping on the cable. Burying it inside conduit three feet down may have seemed like enough to mitigate the risks. Given that the town lost connection for two days and that the story about the beavers made headlines internationally, a plan for backup service might have been wise. No one can predict everything that will happen in production. But teams can think about the range of quality attributes, and plan for measures to detect unexpected problems and recover quickly.

Quality – in your customer’s opinion

Testing for quality attributes is about the fit and finish of your product. These attributes are becoming more important as time passes. Customers expect more: reliability, security, safety of personal data, performance, accessibility, and so on. Start that conversation with your team if you are not already having it.

]]>
Mon, 17 Mar 2025 17:37:26 +0000 https://agiletestingfellow.com/blog/post/quality-attributes-those-pesky-non-functional-requirements https://agiletestingfellow.com/blog/post/quality-attributes-those-pesky-non-functional-requirements
User Acceptance Testing (UAT) A question we get once in a while (though less often than in the past) is about UAT (user acceptance testing). People wonder how it fits into an agile cadence – especially when the feature spans more than one or two iterations.

User Acceptance Testing (UAT) is important in large customized applications, as well as internal applications. It’s performed by all affected business departments to verify the usability of the system and to confirm existing and new (emphasis on new) business functionality. Your customers are the ones who must live with the application, so they need to make sure it works on their system and with their data.

Teams that still perform UAT likely do not have continuous delivery in place. However, that is not a hard rule, and some clients still want some form of final say before they accept a new release.

On one team Lisa worked with, prior to a deploy to production, they first installed the release candidate in the pre-production environment. Lisa met with the client’s test manager and explained all the testing already completed – automated tests at the unit and UI level, acceptance testing, and exploratory testing. This was a totally new idea to the client, and they were thrilled that they could cut way down on the UAT. Since several different teams also delivered changes to the system, any issue found was reported during a conference call and the appropriate team took charge of it. They had working agreements with the customer about what would constitute a critical show-stopper bug that they had to fix, and what could be turned into a user story and put off to the next release.

Janet often tells a story of a team that had three-month release cycles (quarterly). The day she joined, they had just put the release into the customer’s test environment for UAT. She was told it would take six weeks. What?  How could that be? It’s half-way through the next release cycle. Crazy. Slowly she started introducing the customer representative to features earlier and earlier, and on the second release cycle, when the team delivered the candidate for UAT, the customer quoted one day to test in her environment. Mission accomplished.

Fitting UAT into the Holistic Testing Model

The key is to think about how to get testing moved as early in the cycle as possible. Consider what are the constraints, and how can you remove or mitigate them. Have conversations with your customers in advance to see how much UAT will be necessary and how any bugs found will be handled.

We have not specified UAT as an example of a test in our Holistic Testing Model. Many teams no longer have the idea of a separate UAT if they are collaborating very closely with their customers. Or, they have a less formal UAT if they are practicing continuous delivery and using a release strategy such as feature flags to hide new changes until they have been tested by their customers.

If we added UAT as an example in the Holistic Testing Model, it would fit into the Deploy stage, but that doesn’t mean you have to wait until right before a production release to have customers test the system.

Embracing today’s leading practices for deploying, monitoring, and observing the system in production can mitigate risk by allowing instant response to production issues. Teams have many options. Collaborate with your customers to see what fits best for your situation.

]]>
Sun, 12 Jan 2025 22:30:14 +0000 https://agiletestingfellow.com/blog/post/user-acceptance-testing-uat https://agiletestingfellow.com/blog/post/user-acceptance-testing-uat
Service Level Language Service level language is a way an organization can set targets for different aspects of product quality, especially areas like availability and performance. Both of us have worked in organizations that have used SLAs (service-level agreements), but we’ve learned more detail from Abby Bangser, who says, “Operations engineers improved how they measure quality as complexity and expectations have grown.”

 

Three high-level definitions that are important to know are:

  * Service-level indicator (SLI): ways to measure success and make informed decisions

  * Service-level objective (SLO): targets and priorities for quality and performance

  * Service-level agreement (SLA): customer agreement, along with associated penalties

 

Service-Level Indicator (SLI)

An SLI is a service-level indicator: a carefully defined quantitative measure of some aspect of the level of service provided.

  * Feedback from production and test environments

  * Monitor thresholds for each service’s availability; investigate anomalies

  * Typical metrics include availability, latency, throughput, error rate, durability (long-term data retention). Examples:

        - successful requests as a % of all requests  

        - ratio of home page requests that loaded in < 100 ms.

  * Testers can ask questions and get examples to identify meaningful SLIs
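The two example SLIs above can be computed from a window of request records. Here is a minimal sketch; the record fields (path, status, duration_ms) are assumptions for illustration, not any particular monitoring tool’s schema:

```python
# Hypothetical request records collected over one measurement window
requests = [
    {"path": "/", "status": 200, "duration_ms": 80},
    {"path": "/", "status": 200, "duration_ms": 150},
    {"path": "/checkout", "status": 500, "duration_ms": 90},
    {"path": "/", "status": 200, "duration_ms": 60},
]

# SLI 1: successful requests as a % of all requests (treating 5xx as failures)
success_rate = 100 * sum(r["status"] < 500 for r in requests) / len(requests)

# SLI 2: ratio of home page requests that loaded in < 100 ms
home = [r for r in requests if r["path"] == "/"]
fast_home_ratio = sum(r["duration_ms"] < 100 for r in home) / len(home)

print(f"availability SLI: {success_rate:.1f}%")         # 75.0%
print(f"home page latency SLI: {fast_home_ratio:.2f}")  # 0.67
```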

 

Latency is the amount of time to get through a pipeline, including any introduced delays.

Most services consider request latency—how long it takes to return a response to a request—as a key SLI.

 

Throughput is the number and size of items that can be sent at any one time through a pipeline and is typically measured in requests per second.

 

Other common SLIs include the error rate, often expressed as a fraction of all requests received. The measurements are often aggregated: i.e., raw data is collected over a measurement window and then turned into a rate, average, or percentile.
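The aggregation step described above – raw samples in a window turned into an average or percentile – can be sketched like this (nearest-rank percentile; the sample values are invented):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample covering pct% of the data."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# One measurement window of request latencies in milliseconds (made up)
latencies_ms = [12, 15, 18, 22, 25, 30, 35, 40, 120, 450]

average = sum(latencies_ms) / len(latencies_ms)  # 76.7 ms - skewed by two outliers
p50 = percentile(latencies_ms, 50)               # 25 ms - the typical request
p95 = percentile(latencies_ms, 95)               # 450 ms - what the slowest users see
```

This is why services track percentiles rather than only averages: two slow outliers pull the average well above what most users experience, while the p95 shows the pain the tail actually feels.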

 

Another kind of SLI important to SREs is availability, or the fraction of the time that a service is usable. It is often defined in terms of the fraction of well-formed requests that succeed, sometimes called yield.

 

Service-Level Objective (SLO)

 

SLOs are the objectives your organization sets for itself, and they are a big part of perceived product quality. The acceptable level of availability for internal use is usually stricter than what is promised externally.

 

  * A target that sets expectations for how a service will perform

        - 99.99% of requests got a successful response (availability)

        - 95% of requests were faster than <some threshold> (latency) 

  * Uses SLIs to catch problems before customers feel pain

  * Generates conversations about how services are performing and what to prioritize

  * Testers can participate in setting SLOs; get involved with monitoring and alerting
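Comparing measured SLIs against SLO targets can be sketched as a small script. The targets, measured values, and error-budget numbers below are invented for illustration:

```python
# Invented targets and measurements - not real service data
slos = {
    "availability": 99.99,  # % of requests with a successful response
    "latency": 95.0,        # % of requests faster than the agreed threshold
}
measured = {"availability": 99.995, "latency": 94.2}

for name, target in slos.items():
    status = "meeting SLO" if measured[name] >= target else "SLO at risk"
    print(f"{name}: target {target}%, measured {measured[name]}% -> {status}")

# Error budget view: a 99.99% availability SLO allows 0.01% of requests to fail
budget = 100 - slos["availability"]
consumed = 100 - measured["availability"]
remaining = budget - consumed  # roughly 0.005 percentage points still unspent
```

The error-budget framing turns an SLO into a planning conversation: a team that has spent most of its budget may prioritize reliability work over new features.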

 

Teams should consider how to build their product to meet these objectives, and how they can instrument their code so that they know the current levels.

 

Service-Level Agreement (SLA)

SLAs are what your organization agrees to with its customers. Not meeting these agreements may cost the business money and will likely cause pain for your customers. The customer agreement levels should be more easily attained than the SLOs, which are for internal use. SLAs are tied to business and product decisions.

 

Examples:

  * The system is available 99.9% of the time

  * Financial penalties of $$$, if the agreement is not met (A few hours of downtime can be expensive)

 

SLAs are built on service-level objectives and service-level indicators.
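Availability percentages become easier to discuss when translated into a downtime budget. A small sketch, assuming a 30-day month:

```python
MINUTES_PER_MONTH = 30 * 24 * 60  # assume a 30-day month: 43,200 minutes

def downtime_budget_minutes(availability_pct, period_minutes=MINUTES_PER_MONTH):
    """Minutes of downtime the availability target leaves room for in one period."""
    return period_minutes * (100 - availability_pct) / 100

for target in (99.0, 99.9, 99.99):
    print(f"{target}% available -> {downtime_budget_minutes(target):.1f} min downtime/month")
# 99.0% available -> 432.0 min downtime/month
# 99.9% available -> 43.2 min downtime/month
# 99.99% available -> 4.3 min downtime/month
```

Seeing that “three nines” means about 43 minutes a month makes it concrete why a few hours of downtime can blow through an agreement and trigger penalties.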


This next diagram shows some examples.


Customer experience

 

  * SLIs reflect what customers experience - how they perceive the app is working

  * Quantifying customer happiness metrics drives conversations across the organization

  * Do SLIs drop but customers still are satisfied? Are you measuring the right things?

  * Customer happiness may be measured by customer complaints, social media, or the number of completed transactions

 

A visible drop in the SLIs indicates problems that are affecting customers; they are feeling pain. If customers complain – “Pages are loading so slowly” or “We are seeing a lot of error pages” – but the SLIs don’t show any drop in level, there is something wrong with how the indicators are defined or how the SLIs are measured.

 

How does this relate to testing?

 

  * Service level measurements reflect quality attributes

  * Trends and anomalies guide future testing & coding

  * Ask questions to drive service level conversations

 

As new features are planned, ask good questions to make sure the changes don’t adversely impact service levels, to understand the expected load on the system, to understand the risks, and to be prepared for unexpected problems. Testers are good at asking questions; here are some they can ask to help the team:

 

  * What risks can we mitigate with tests?

  * What risks may not be known until we’re in production?

  * How much data do we need to store and how do we make sure it is secure?

 

If your organization doesn’t have any of these measures, try to get something in place, even one measure. Start small and pick one aspect that’s easy to measure. Identify people who care. Then iterate and set up a feedback loop so you can improve.

 

Reference: https://sre.google/sre-book/service-level-objectives/

 

]]>
Mon, 21 Oct 2024 15:44:53 +0000 https://agiletestingfellow.com/blog/post/service-level-language https://agiletestingfellow.com/blog/post/service-level-language
Go / No Go! Who decides? Back in the exciting internet startup days of the late ‘90s, Lisa had the title of “Quality Boss” at the travel website startup where she worked. She held the keys to production: she decided when the release candidate was good enough to release, and deployed to production herself. Looking back, she finds both the title and the idea that she owned quality truly terrible. Sometimes the new changes to the product exactly matched the requirements documents and there were no critical bugs, yet the delivery team failed to notice what was missing. For example, the team overlooked important quality attributes such as page load time, so customers didn’t value new features delivered on pages that took too long to load.


It is a bit of a rush to think you have that kind of power, but in reality, testers do not have the full picture. We agree with the Modern Testing Principle #5 that says our customers are the best judge of product quality based on user needs*.  Our teams can use today’s monitoring, analytics and observability technology to learn exactly how people use our product in production. We can use that data to pinpoint what would be the most valuable change or addition we can deliver next.

 

In our experience, testers add value by helping business stakeholders think about the problems they want the software product to solve and how best to make that happen. When testers are involved in early design brainstorming meetings and ask good questions, the value added is uncovering hidden assumptions. Techniques like story mapping and impact mapping help identify the next change to deliver, and example mapping helps build shared understanding of a feature.

 

Testers can collaborate with the whole team to guide development with business-facing tests. As coding proceeds, team members / testers can complete testing activities such as exploratory testing and test automation, and use continuous integration and deployment pipelines to get fast feedback. Our teams can take advantage of modern techniques like release feature toggles to hide or limit exposure of new changes until we are confident in their quality.
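The release feature toggle technique mentioned above can be sketched in a few lines. The flag name, flag store, and checkout functions below are hypothetical stand-ins:

```python
# Hypothetical in-memory flag store; real teams typically use a
# feature-flag service or configuration system
flags = {"new_checkout_flow": False}

def legacy_checkout(cart):
    return {"flow": "legacy", "total": sum(cart)}

def new_checkout(cart):
    # New code path, deployed to production but hidden until trusted
    return {"flow": "new", "total": sum(cart)}

def checkout(cart):
    """Route to the new flow only when its flag is on."""
    if flags.get("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)

print(checkout([10, 5])["flow"])   # legacy - customers still see the old flow
flags["new_checkout_flow"] = True  # e.g. enabled for internal testers only
print(checkout([10, 5])["flow"])   # new - the change is live for that cohort
```

Because the new path ships dark, the team can deploy continuously and test the change in production with a limited audience before exposing it to everyone.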

 


Testers add value by helping the team and stakeholders identify the most important quality attributes, and can help the team make sure those attributes are built in. We can apply our customer perspective and user personas, and give feedback.

 

That said, the customers are not the product’s only stakeholders.

 

When it’s time to make Go/No Go decisions, testers and their teams can provide input about risks and tell the testing story. They can even express a confidence level as a percentage. What testers should NOT do is act as the gatekeeper who makes the deployment decisions. So, if you find yourself in that position, tell the team that is not your job – that the whole team owns quality and can make the recommendation together (if the need is there). In the end, the people who should make the decision are the business partners who understand the big picture and can make informed decisions based on input from the team (including testers).

 

*Note: you can read more on Janet’s thoughts on quality in her four-part blog post on testing and quality

 

 

]]>
Wed, 21 Aug 2024 19:33:12 +0000 https://agiletestingfellow.com/blog/post/go-no-go-who-decides https://agiletestingfellow.com/blog/post/go-no-go-who-decides
Fostering collaboration with ensemble testing In our Holistic Testing courses, we talk a lot about the importance of collaboration. If your organization has more of a solo work culture, it can be hard to encourage collaborative practices such as pairing and working in an ensemble (the newer, gentler name for “mobbing”, also called “teaming” nowadays). Let’s look at more ways we can get team members working together more often.

 

Exploring a feature together in an ensemble format is a great way to spot those hard-to-find issues and learn what might be missing. In an ensemble, people rotate through the roles of navigator and driver: the navigator tells the driver what to type, and everyone in the ensemble is free to contribute ideas. A short time box is set, and roles rotate at the end of each one.

The first time you try something new like ensemble testing (or even pairing), it will feel awkward and contrived. The key to getting better at it is deliberate practice. Start small, perhaps with only three people, and rotate so each person gets familiar with the roles. Practice what it feels like. The more you practice, the more comfortable you’ll be. At that point you’ll know the benefits, can share what you’ve learned, and can expand the sessions to include others.

 

In various organizations where Lisa worked, she and her fellow testers/quality engineers hosted a weekly ensemble testing session, open to anyone who wanted to join. Sometimes these were focused 30-minute sessions, sometimes an hour or 90 minutes for deeper exploration. Programmers on all the feature teams were invited to book a session to have the group explore their latest new features.

 

One way to organize these sessions is to have at least one programmer from the feature team join in to answer questions and benefit from the quick feedback. The programmer and the quality engineer who is facilitating the session collaborate to prepare a test charter in advance. This charter guides the ensemble on what to test and provides links to the build to be tested.

Test charter example to use for security testing

During the session, each driver shares their screen so the others can follow along. These sessions have proven so useful that additional sessions were scheduled and some engineering teams organized their own ensemble testing sessions.

 

Another ensemble testing technique Lisa’s teams have used follows a similar format, aimed at acceptance testing stories. As soon as a team had several stories “finished”, they scheduled a 30-minute ensemble testing session that included the product owner, the designer, a tester, the programmers who worked on the stories, and someone from customer support. Rather than switching roles on a time schedule, they switched for each story. Whatever the question – design, functionality, or impact on customers – someone in the session could answer it.

 

Ensemble testing brings many advantages. People with different specialized skills working together in this type of group format can overcome unconscious biases and blind spots, and achieve more in a short amount of time than if they worked individually. The obvious benefits encourage everyone to be more willing to collaborate in real time in pairs or groups. So, if you are feeling timid about bringing this idea up with your team, practice deliberately to understand how it works, and you will be more confident bringing this powerful technique to your team.

 

]]>
Tue, 02 Jul 2024 20:00:35 +0000 https://agiletestingfellow.com/blog/post/fostering-collaboration-with-ensemble-testing https://agiletestingfellow.com/blog/post/fostering-collaboration-with-ensemble-testing
Sharing ideas to spark conversations When we work in isolation, we risk losing the best part of an idea. By sharing our ideas – even the half-baked ones – we can spark conversations that grow them into something valuable for ourselves and our community. We’ve heard from leading practitioners that incorporating others’ ideas into your own and working with a diverse group of people spurs innovation.


Our work together is a case in point. After we joined our first Extreme Programming teams, we shared ideas and practices from each of our teams and evolved new ideas together. We also shared these with other practitioners, who added their own insights.  

For example, back in 2003 Janet had a conversation with Brian Marick and Bret Pettichord at the end of a long day of a conference. Brian was explaining this idea he had for a taxonomy of testing types for agile development. After many questions and much conversation, this taxonomy grew into Brian’s agile testing matrix.

We both used the matrix with our own teams and evolved our own version, which we used (with Brian’s permission) in our first book, Agile Testing: A Practical Guide for Testers and Agile Teams. Not only have we continued to evolve our quadrants model, but other people have created their own adaptations (you can see some of them in Chapter 8 of More Agile Testing, which you can download from our website).

More recently, Dan Ashby created a new model he’d evolved for “left” and “right” testing cycles. He said it was inspired by a shift left – shift right testing diagram Janet created. Janet’s model (below) was inspired by Dan’s original continuous testing blog post and model.


The resulting Twitter conversation spawned a meeting between Janet, Dan, and Rob Meaney to clarify and challenge these ideas to create better ones. The outcomes of that discussion have been shared in Janet’s blog posts, and went on to become the Holistic Testing model that we talk about in our downloadable mini book Holistic Testing: Weaving quality into your product. No doubt it will trigger more ideas from other practitioners.

Anne-Marie Charrett has built upon it with her product excellence model. This represents a holistic view of quality across discovery, delivery, and support. It focuses more on product quality than on process quality.

These are examples showing the power of sharing ideas with others. When a person has the courage to put their thoughts out in public, inviting conversations, one idea can become something bigger.

Even small conversations can have a huge impact in a team or even on an organization. It starts with one person being brave and sharing their idea – making it visible and inviting others in. Make it grow!

 

Note: If you want to know more about continuous testing models, you can check out Lisa’s blog post: https://lisacrispin.com/2020/11/01/shifting-left-right-in-our-continuous-world/

]]>
Mon, 29 Apr 2024 19:27:54 +0000 https://agiletestingfellow.com/blog/post/sharing-ideas-to-spark-conversations https://agiletestingfellow.com/blog/post/sharing-ideas-to-spark-conversations
Flaky tests

We’re not sure if you started singing along with Marit, but we certainly did. This tweet captured the essence of what we think about flaky (sometimes spelled flakey) tests.

What are “flaky tests”?

 

To us, flaky tests are ones that pass when you run them locally and pass most of the time when run as part of an automated test suite in the CI pipeline, but fail sometimes for no understandable reason. Another case might be that they fail on one run, but pass when you rerun them. Your team is rolling the dice with every run of your automated test suite.

These tests add no value. We can’t trust them, and we don’t know if they are testing the right thing. They give people a false sense of security. Your team thinks you have a test that covers a particular functionality, but really you don’t.

Identifying and addressing flaky tests

The teams that Lisa worked with had a sophisticated system to identify flaky tests based on pass/fail rates. This visibility was helpful, and the reports helped teams see how big a problem they had. Still, analyzing the failures was a headache. Flaky tests got “quarantined” so the team could get a “green” build. Then, if a test passed locally, they put it back into the CI and waited for a failure. Once a test failed, finding the problem often meant hopping from one machine to another, trying to find the right log file with the right information. So many of us have felt this pain that eats up so much time and kills our confidence for deploying changes to production.

These days, there are lots of tools to help your team identify flaky tests and prioritize which ones to investigate first. Modern continuous integration tools such as Semaphore CI and Circle CI gather data about inconsistent results across runs. They have features that let you find the tests causing the biggest problems. These might use AI or LLMs, or they may be using statistical analysis. We asked Gemini about using AI and LLM-based tools to help identify flaky tests. It agreed that these can help, and there are other ways to address flakiness. Our favorite part of the response:

Combine AI's analytical power with human expertise. Developers and testers can use AI-generated reports to pinpoint flaky tests and work together to resolve the underlying issues.
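One simple statistical heuristic – a sketch, not any particular vendor’s algorithm – is to flag a test as flaky when its recorded history shows both a pass and a failure for the same commit, meaning the outcome changed with no code change. The run-history format below is an assumption:

```python
from collections import defaultdict

# Hypothetical CI run history: (test name, commit SHA, passed?)
runs = [
    ("test_login", "abc123", True),
    ("test_login", "abc123", False),     # same commit, different outcome: flaky
    ("test_checkout", "abc123", True),
    ("test_checkout", "def456", False),  # failed only after a code change: not flaky
]

# Group outcomes per (test, commit) pair
outcomes = defaultdict(set)
for test, commit, passed in runs:
    outcomes[(test, commit)].add(passed)

# A test is flagged if any single commit produced both a pass and a failure
flaky = sorted({test for (test, _), results in outcomes.items() if len(results) == 2})
print(flaky)  # ['test_login']
```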

Our advice

What’s our advice for dealing with flaky tests? If you can’t fix a flaky test so that it only fails for legitimate reasons, throw it away. (We know, it’s hard to delete something you spent time creating, so perhaps archive it somewhere for now). If there is value in having a test to do that particular check, write a new one. Have one clear purpose for each test, so that if it fails, you know exactly what was being tested. Use good test and code design practices so that your test code is easy to understand and maintain. We’re fans of having coders and testers collaborate to automate tests, making the most of both code and test design skills.

Flakiness is generally most common in the end-to-end or “workflow” tests. Collaborate with your delivery teammates to see if any of these can be split into smaller, less brittle tests. Are these tests checking business logic that could be verified at a lower level, such as the API level? An easy check is to look for tests with more than one assert.
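The more-than-one-assert check can be illustrated with a hypothetical order workflow; `place_order` here is a made-up stand-in for real application code:

```python
def place_order(items):
    """Made-up stand-in for the system under test."""
    return {"status": "confirmed", "total": sum(items.values()), "emails_sent": 1}

# Before: three asserts in one test, so a red test doesn't say which behavior broke
def test_order_workflow():
    order = place_order({"book": 20, "pen": 2})
    assert order["status"] == "confirmed"
    assert order["total"] == 22
    assert order["emails_sent"] == 1

# After: one clear purpose per test; a failure names the broken behavior directly
def test_order_is_confirmed():
    assert place_order({"book": 20})["status"] == "confirmed"

def test_order_total_sums_item_prices():
    assert place_order({"book": 20, "pen": 2})["total"] == 22

def test_confirmation_email_is_sent_once():
    assert place_order({"book": 20})["emails_sent"] == 1
```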

Think back to the complicated process Lisa’s teammates went through to diagnose test failures. These flaky test failures are in the realm of “unknown unknowns”; we can’t predict all the failures. What the teams could have done is log every event that occurs before, during, and after each test run, in a central location. Then they would have only one place to research test failures, with all the data they might need easily accessible. Observability – the ability to ask questions of our systems to explore and solve totally unpredictable problems – applies to our automated tests as well.

Finally, investing the time to address flaky tests is important because flaky tests are often indications of flaky production code! Lisa has often paired with a developer to fix flaky tests, only to find that the failure occurred due to a real bug that simply didn’t happen often due to timing or some other reason. You might want to check out Gojko Adzic’s book Fifty Quick Ideas to Improve your Tests, and its section on how to deal with flaky tests.

Don't gamble with your tests. If it’s flaky and you know it – investigate, and fix the problem, whether it’s in the test code or the production code! Please don’t comment it out or quarantine and forget about it.

 

 

]]>
Wed, 20 Mar 2024 21:01:07 +0000 https://agiletestingfellow.com/blog/post/flaky-tests https://agiletestingfellow.com/blog/post/flaky-tests
Quality Coaches In the last few years, we’ve been hearing “buzz” about quality coaches. Everyone seems to have their own take on the subject, and we’ll share some links from others expressing their opinions.

As testers/quality advocates, we care about the quality of our product – at least we hope that is the case. That doesn’t mean the developers or product owners don’t care about it. As testing specialists, we seem to be closer to the subject, since we are often the ones finding and helping prevent bugs daily.

One definition we found for quality coach is: a person who assists the team in setting up the quality culture, quality mindset and quality process in the team to improve the quality of the process and the outcome.

We like this definition because it doesn’t say the quality coach is responsible for quality but is someone who assists the team in learning more about what quality means. It means being able to facilitate conversations about quality, not only for the team, but for the organization as well. Quality is about so much more than the number of bugs in the system. A quality coach needs to think about it more holistically – from a process point of view: how well do we create our products, and also from a product point of view: how well are we meeting our customer’s needs. They guide the team’s efforts to build quality in from the first discussions about potential new features to understanding what people experience when using the product.

The general rules about what makes a good coach also apply to a quality coach: asking questions and listening carefully to the answers, observing, guiding (not directing), and helping the team identify problems, experiment with solutions, and continue to improve. Anne-Marie Charrett introduces a model on this subject in a blog post that we encourage you to read.

Janet began her coaching career a very long time ago, coaching gymnastics. She finds many of the same concepts apply to coaching software teams: being positive, encouraging, and sometimes helping people find a new skill.

If your team is discussing quality, and you have few (or no) quality issues in production, then you likely don’t need a quality coach because your team is doing the right ‘stuff’. We find that when testers are overwhelmed trying to test everything themselves, maybe it’s time to step back and take a coaching role, or maybe even a consultant role. You can check out our blog post on being a quality consultant.

For more in-depth information on quality coaching, we recommend Anne-Marie Charrett’s Quality Coach Book, which she’s currently writing and sharing in monthly installments. You can read posts on her website and access the full content with a subscription; check it out at https://www.annemariecharrett.com/tag/book/. Her Quality Coaching Roadshow podcast with Margaret Dineen is another excellent resource, including interviews with a wide range of quality coaches and leaders. Also, listen to her extensive interview with Alan Page and Brent Jensen on their AB Testing Podcast.

]]>
Mon, 26 Feb 2024 17:30:02 +0000 https://agiletestingfellow.com/blog/post/quality-coaches https://agiletestingfellow.com/blog/post/quality-coaches
Meet our instructors: Gáspár Nagy This is the second in our series introducing some of the highly experienced practitioners who teach our Holistic Testing courses. Some of you are already familiar with Gáspár Nagy. Janet and Lisa had the good fortune to meet Gáspár at testing conferences many years ago. He’s been in the software development business for more than 20 years as an architect and agile developer coach. He’s an author, test tool developer, and a leader in the open-source community. We’re honored to have him on board as an ATF training provider and instructor since 2018.

 

 

Q: Gáspár, please tell us your "origin story", how did you get into the software profession?

 

Like many things, it started with a coincidence. I had to do home schooling when I was 14 because of surgery. Without even knowing me, my informatics teacher thought that it must be very boring at home and lent me one of the school computers. This was a big thing in Hungary in 1992. There were no games on the computer (and no internet at that time), so I started coding and have never stopped since. Sometimes even a small bit of help can cause fundamental changes, and I'm very grateful to my teacher for what she did. Now it's my turn and my responsibility to pay it forward.

 

Q: You're a leader for behavior-driven development (BDD), including being the lead developer for SpecFlow, the SpecSync synchronization tool, and writing the BDD book series with Seb Rose. When did you first try BDD? What made you want to try it? Did you have good results right away?

 

In 2009 we had projects with massive integration-level test automation. We were stunned by the real costs of creating and maintaining those tests, so we were seeking ways to increase the value provided by the tests and reduce the maintenance costs. (Sound familiar?) Our idea was that a better separation of the intent of the tests, and the actions performed by the tests, from the technical solution would be beneficial. We had already tried different approaches when we found Cucumber, which seemed to do exactly what we wanted. The concept of BDD worked for us right from the beginning, or at least we always saw the light at the end of the tunnel, but the new concept brought other challenges. In the end it took us some time until we could say that we had a working model.

 

Q: What motivated you to help teams learn about BDD, SpecFlow, test-driven development (TDD), and Extreme Programming (XP) by providing your own range of training courses?

 

I like to do training. The feeling when you find the right example that illustrates a concept, and can explain it in a way that the other person understands, compares to nothing else. It complements the engineering work I would otherwise do. Through the courses I can connect with many people, learning about their perspectives and challenges. It is also useful for validating my own thoughts. It is very exhausting, but it is worth it.

 

Q: What made you decide to add our holistic testing courses to your training offerings?

 

My journey to agile software development started with project management (Scrum), agile engineering techniques and agile requirement management. But I felt that something was missing. When I read the Agile Testing book, I realized that this was the missing link. So basically, your book filled in a blank spot on my map. For me, the holistic testing courses are a good way to pass on this "yay, I've found it" feeling. And I feel honored to work with you on this.

 

 

Q: We’re so glad you found our book and our courses valuable! You helped us refine and improve the new course, "Holistic Testing for Continuous Delivery". Who do you think can benefit most from it? 

 

Continuous Delivery (CD) is a good starting point. The goals of CD are easy to understand, and the CD pipeline is a concrete asset that makes the concept visible to everyone. But to be able to build up a working CD process, you really need everyone to collaborate, and in the end you will learn all the important concepts of the thing we call holistic testing. So I think this is beneficial for everyone who is on this journey, regardless of whether you will ever configure a pipeline yourself. Continuous Delivery (just like holistic testing) is not only for the big online companies. The concept works for projects of every size and nature.

 

Q: When you build code yourself, such as custom extensions for clients using SpecFlow, do you use a holistic approach? Do you learn things from your own development work that you share in the classes you teach?

 

I think the short answer would be: yes. But this is a journey and a continuous responsibility. Like many others, I tend to trust myself too much, and sometimes I feel that "I'm good enough already, I might not need this or that." You can call this laziness or even a risk-based approach, but in the end it is always the same: it works indeed… mostly. However, circumstances arise that cause a problem which could have been avoided by doing things "right". "Mostly" is not a good quality measure. But I learn from these incidents, and they make the best stories to share with my course attendees.

 

Q: Is there anything else you'd like to share about yourself and your interests?

 

I got into testing from software development, so I am generally interested in how devs and testers can better collaborate with each other. This is not easy though. Just putting devs and testers into the same team is not enough. We need to keep seeking opportunities and concepts that enable collaboration, so we can really see quality in a holistic way.

 

Q: Please share any upcoming events and classes you have planned. Also please share links to your website, any writing or videos you have available so people can learn more about your work.

 

I have some upcoming courses in the first quarter of 2024. In the "BDD Vitals" course you gain essential knowledge and skills for writing better BDD scenarios, which enable you to become a strong member of a BDD team (https://www.specsolutions.eu/events/courses/202403bddvitalsonline/). If you are also interested in using SpecFlow and test automation, I strongly recommend my "BDD with SpecFlow" course (https://www.specsolutions.eu/events/courses/202403specflowonline/). In both courses, attendees can count on plenty of exercises and discussions.

 

And last but not least, in April I am awaiting attendees for the “Holistic Testing For Continuous Delivery for a quality DevOps culture” course, where you can learn ways to apply the infinite loop Holistic Testing model to testing activities that help your team succeed with continuous delivery (https://www.specsolutions.eu/events/courses/202404holistictesting-cd/).

 

All these courses are offered in remote form. Further information is available at our company website (www.specsolutions.eu).

 

 

 

]]>
Wed, 17 Jan 2024 17:13:30 +0000 https://agiletestingfellow.com/blog/post/meet-our-instructors-gaspar-nagy https://agiletestingfellow.com/blog/post/meet-our-instructors-gaspar-nagy
Success Factor #1: The Whole Team Approach What a great way to start off the new year talking about the number one success factor from Agile Testing: A Practical Guide for Testers and Agile Teams - the Whole Team. That’s it… the number one factor in being successful with testing and quality is to include the whole team.

Both of us have experienced the unicorn magic of being part of a cross-functional team where people with a wide range of specialties work closely together to continually build and deliver new changes. For over 20 years, we have shared our experiences and given presentations, courses, and workshops centered around this idea. And yet, we still see many ‘agile’ teams with silos.

The silos manifest themselves in different ways. Sometimes it’s by function – for example, testers are not included in all the early conversations about new changes to the software product. Sometimes it’s by teams – for example, teams don’t talk with each other, even though they know they have mutual dependencies. Then, they each blame the other for the misunderstandings.


During the COVID-19 pandemic, many teams learned to work remotely, but remote work makes it even easier to become siloed. Your delivery team may be working quite well together, but it can be a struggle to include important people outside the team in the conversations.

A while ago, Janet read a blog post about being a QA leader. It was a good piece, but the author kept talking about her QA team as if it were separate from the rest of the people building the software. We think it is wording like this that keeps the silos in place. How can we collaborate effectively if even one team member has conflicting priorities?

One suggestion we have is to identify the people and systems who use or are affected by your product. They may be business stakeholders, or customers, or perhaps the customer support team. Talk about how they are affected, or how their absence would affect your product. Decide how to include them and what you need from them to ensure you deliver valuable changes at the right time.


As testing specialists on a delivery team, we do what needs to be done to help our team successfully deliver product changes frequently, at a sustainable pace. Much of the time that may be actually testing. Often, it is using our critical thinking and question-asking skills with the customer or product owner to understand their needs. It might be using our communication skills to talk with other teams to understand dependencies. It might be using our skill of reading code to peer review a programmer’s code with them. It might even be using our facilitation skills to help non-testers learn some testing skills.


Our first priority is to our teams, but we also have to interact with our extended families – perhaps another team, or the operations folks (if they’re on a separate team) to make sure we are considering their needs, or the database experts or the UX people, or … the list goes on. Generally, high-performing delivery teams include people in all these different specialties. One of our superpowers as testing specialists, is knowing when to get people together to talk – and knowing whom to include in that conversation. We can keep breaking down those silos and discover the magic of the whole team approach.

 

]]>
Thu, 11 Jan 2024 18:13:31 +0000 https://agiletestingfellow.com/blog/post/success-factor-1-the-whole-team-approach https://agiletestingfellow.com/blog/post/success-factor-1-the-whole-team-approach
Success Factor #2: Adopt an agile mindset In our series on the 7 key factors for agile and holistic testing success, we’ve worked our way up to number 2: Adopt an agile testing mindset. In our global testing community, some people do not know what we mean by ‘mindset’. It’s all about attitude and the desire to learn and experiment. Carol Dweck has studied what she calls ‘fixed mindset’ versus ‘growth mindset’. People who believe they have only the innate abilities they were born with are said to have a fixed mindset. Individuals with a growth mindset are continually experimenting and know that they can learn as much from failure as from success. They build on whatever abilities they have.

 

In software organizations that use a phased-and-gated, siloed process, testers tend to focus on finding bugs after the code has been written. In agile development, our goal is to do whatever we can to help deliver valuable software frequently, solving our customers’ problems. Our deep skills may be in testing, and as testers on an agile team, we’re willing to take on any task to help the team, often collaborating with others. We’re not afraid to join in activities on both sides of the infinite loop of software development. Our Holistic Testing model visualizes this idea. We think of shift left and shift right as parts of the infinite loop rather than points on a linear timeline. There are so many places where we need to think about testing.

Having an agile testing mindset means getting out of your comfort zone often. Several years ago, Lisa became interested in observability, the ability to ask questions and learn about your system in production without having to ship new code. Having learned the basic concepts, but without yet having in-depth expertise and experience, she saw how important it was for testers to get involved. She took a job helping to build an observability practice – without already having the skills. Lisa was way outside her comfort zone in this new job. She built relationships with developers and site reliability engineers who were also keen to adopt observability. They told her that she added value by asking questions and bringing in new ideas. They appreciated her perspective as a testing and quality specialist. That’s what a growth mindset is all about – being willing to fail, willing to learn something new, and helping the team in the process.

 

Janet recalls a time when one of her grandchildren, Jo, was learning to surf behind a boat. She was able to stand up on the board, but every time she looked forward, all she saw was the waves coming towards her and she would fall. Finally, Janet and her sister told her, “Look at us instead.” The next time Jo stood up, her aunt said… “Look in my eyes.  Just keep looking at me.” Because she wasn’t concentrating on all the problems, she surfed for four minutes.

 

The next time you don’t know how to do something, we encourage you to think before you say ‘no’. Is there an opportunity for you to stretch yourself and learn something new that will help build your own skills, and also contribute to your team’s success?

]]>
Thu, 30 Nov 2023 13:29:42 +0000 https://agiletestingfellow.com/blog/post/success-factor-2-adopt-an-agile-mindset https://agiletestingfellow.com/blog/post/success-factor-2-adopt-an-agile-mindset
Meet our instructors: Prathan Dansakulcharoenkit Meet the Agile Testing Fellowship instructors  - a blog post series!

Janet Gregory, Lisa Crispin and José Diaz co-founded the Agile Testing Fellowship to help teams and practitioners around the world learn to succeed with testing in agile environments. We agreed from the start to be highly selective about who we would choose as instructors for our courses. All of our trainers are experienced testing practitioners, who are also excellent teachers and facilitators. 

 

We'd love for more people to know about our awesome instructors and training providers. We'll interview one of them for each post in this new series.

 

Prathan teaching a Holistic Testing class in person to participants

Our first featured instructor is Prathan Dansakulcharoenkit. He started teaching our course, "Holistic Testing: Strategies for Agile Teams", in 2018. He's helped more than 160 students along their agile testing learning journey. We hope you enjoy learning more about Prathan. Many thanks to Prathan for sharing his journey with us!

 

Q. Hi Prathan, please tell us a little about yourself.

A. My name is Prathan Dansakulcharoenkit. I live in Bangkok, Thailand, where I run three companies that provide agile software development services. I have been working in software testing for 20 years, since 2003.

 

Q. How did you get started in the software industry? What attracted you to the testing and quality side of things?

A. I started working in software development in 2003 as a system administrator, and also helped the development team test the software before delivering it to the customer. From 2005 to 2010, I worked as a full-time software tester at the number one portal website company in Thailand, where I set up the software testing process and tools and recruited the team members. In 2008, I moved the internal team knowledge base to a public website, WeLoveBug.com. In 2009, I was promoted to service and operation manager, while still managing the portal website's software testing process and team.

 

In 2010, I changed jobs to become the software development manager at a Thai ecommerce website that had joined with Rakuten, the number one ecommerce company in Japan. I set up the software testing process, tools and team. That was our first step in adopting the test-first development practice, along with the Scrum framework and some practices from Extreme Programming.

 

In 2012, my colleague and I set up a company, Siam Chamnankit Co., Ltd., where we help organizations change their software development process and practices from sequential phases (e.g. the waterfall model) to agile software development with the Scrum framework, Extreme Programming and related practices. Our other service is workshops; software testing is the main workshop we provide for people working in software development, covering both functional and non-functional tests.

 

In 2019, I split the software testing services from Siam Chamnankit Co., Ltd. into a new company, We Love Bug Co., Ltd., which provides all-in-one software testing services for any software development process.

 

Q: What’s the biggest benefit you see to a holistic and agile approach to software development and testing? 

A: Preventing defects before development starts, and the whole-team approach, are the two biggest benefits I have seen from my personal experience. I have shared these experiences with my colleagues, customers, universities, and the software testing and software development communities in Thailand.

 

Q: When did you become interested in helping others learn ways to succeed with testing and build quality into software products?

A: My first move toward helping others was in 2008, when I set up a website to share knowledge and experiences through blog posts. The second step was training and workshops, both inside the company and publicly for anyone interested. The third step was providing services to lead the change from test-last to test-first development for any software development process.

 

Q: What do you enjoy most about facilitating training courses?

A: As someone who shares experience, the most enjoyable moment is when the participants get the AHA moment, nod their heads, and reflect back what they have 'stolen' from me every hour.

 

Q: How did you decide to become a training provider and instructor for Agile Testing Fellowship’s Holistic Testing course?

A: I've been providing software testing workshops since 2012, and my main sources of knowledge and experience have been the Agile Testing: A Practical Guide for Testers and Agile Teams book and the slides by Janet and Lisa that are shared on the internet. My colleagues and I practiced what we learned from the book and slides in project after project before bringing it into the workshops, to prove to participants that these are not just words in a book and slides, but real-world experience that can be adapted in Thailand, too.

 

In February 2016, my colleague and I attended the Agile Testing workshop in Singapore, which was the first time we met Janet in real life. On the last day of the workshop, I asked Janet about the loyalty program because I wanted to bring the workshop to Thailand. That was the beginning of my becoming a training provider and instructor.

 

Q: Please share any upcoming events you’re participating in. Also please share links to your website, any writing or videos you have available so people can learn more about your work.

A: For 2024, I’m going to provide both Holistic Testing courses, Strategies for Agile Teams and Continuous Delivery, every quarter in Thailand. The other sources of knowledge sharing are

 

 

 

]]>
Tue, 28 Nov 2023 16:45:28 +0000 https://agiletestingfellow.com/blog/post/meet-our-instructors-prathan-dansakulcharoenkit https://agiletestingfellow.com/blog/post/meet-our-instructors-prathan-dansakulcharoenkit
Success Factor #3: Automation Moving up the list in our series of blog posts about key success factors for agile testing, we get to automation as #3. We include automation in testing as one of our success factors because we believe you cannot deliver changes frequently, at a sustainable pace, without it. Your regression test suite (when automated) is a change detector. If a test fails, it signifies something is wrong: it could be the test itself, or the code may have changed and you forgot to update the test. Or perhaps another part of the system was affected, and you didn’t realize it was even impacted by the change. Whatever the reason, team members need to analyze the failure and determine the cause.

Automation has many flavors. As more teams work towards continuous delivery, it has become even more important.

Expanding your strategy

Most of you will have seen some variation of the test automation pyramid, originally created by Mike Cohn in the early 2000s. This model still works well as a thinking tool – especially if you are new to automation. We use it as a starting point to talk with the team about the level at which we should automate specific tests. The conversation is the important part – the automation becomes a secondary but necessary task.

A conversation might sound like this:

Scrum master: Let’s talk about what testing we might need for this feature, and what kind of automation we need to plan for.

Tester: Well, if we look at the different scenarios from the flow diagram we drew, we will definitely need some full work-flow tests. Let’s draw the pyramid and see what else we need.

Programmer: We can test x, y and z at the unit test level, and all the variations.

Tester: What about the error conditions? Where would they fall? I do need to see they are on the screen but don’t want to test every variation.

Programmer: We can help there by …….
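To make that conversation concrete, here is a purely hypothetical Python sketch of where such a discussion might land. The validation rule, field names and messages are invented for illustration: the rule gets exhaustive coverage at the fast unit level, so the UI level only needs one thin check that an error actually appears on screen.

```python
from typing import Optional

# Hypothetical validation rule the team discussed, pushed down to the
# unit level so every variation is tested cheaply and quickly.
def validate_quantity(value: str) -> Optional[str]:
    """Return an error message, or None if the input is valid."""
    if not value.strip():
        return "Quantity is required"
    if not value.isdigit():
        return "Quantity must be a whole number"
    if int(value) == 0:
        return "Quantity must be at least 1"
    return None

# Fast unit tests cover every variation of the rule...
assert validate_quantity("3") is None
assert validate_quantity("") == "Quantity is required"
assert validate_quantity("abc") == "Quantity must be a whole number"
assert validate_quantity("0") == "Quantity must be at least 1"
# ...so a single UI-level workflow test only needs to confirm that *an*
# error message is displayed, not re-test every variation.
```

The design choice is the point of the pyramid conversation: the cheap, fast tests absorb the combinatorial variations, leaving the slow UI layer with one representative check.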


The variation of the pyramid shown here relates only to functional tests. In both More Agile Testing and Agile Testing Condensed, we included other variations. Lisa covered some more in a blog post series on modeling your test automation strategy. We think it is important for teams to experiment and find the visual model that helps them design an automation strategy that fits their context.

There are many other kinds of automation that need to be considered as well. To succeed with continuous delivery, teams need to automate as many stages as possible in the path their code takes to get to production. This includes continuous integration (CI), creating test environments in the cloud, and deploying artifacts to various environments. In addition, automation is needed to develop useful information for monitoring dashboards and alerts.
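The idea of automated stages can be sketched without reference to any particular CI tool. In this illustrative Python model (all stage names are invented), a pipeline is simply an ordered list of stages that halts at the first failure, so a red build can never reach the deploy stage:

```python
from typing import Callable, List, Tuple

Stage = Callable[[], bool]  # a stage returns True on success

def run_pipeline(stages: List[Tuple[str, Stage]]) -> List[str]:
    """Run stages in order; stop at the first failure. Returns a stage log."""
    log = []
    for name, stage in stages:
        ok = stage()
        log.append(f"{name}: {'ok' if ok else 'FAILED'}")
        if not ok:
            break  # later stages (e.g. deploy) never run on a red build
    return log

# Hypothetical stages standing in for real CI steps:
pipeline = [
    ("build", lambda: True),
    ("unit tests", lambda: True),
    ("spin up test environment", lambda: True),
    ("workflow tests", lambda: False),  # a failing change detector
    ("deploy artifact", lambda: True),  # unreachable until tests pass
]
print(run_pipeline(pipeline))
```

The halt-on-failure behavior is exactly the "change detector" property: a failing test stops the change from travelling any further toward production.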

Our automation toolbox is ever-expanding. One team Lisa worked on discussed the possibility of using Slack APIs to trigger creation of complex test environments from Slack, as other teams have done. Modern test automation frameworks and libraries have features like “auto-healing” UI-level tests that adapt to minor UI changes. Many tools use machine learning to detect visual and performance anomalies. New technology helps us keep pace with the increased complexity of today’s systems.
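As a thought experiment, the Slack idea might start with something like the parser below. The slash command, its options, and the request fields are all invented for illustration; a real integration would wire a parser like this into Slack's slash-command and webhook machinery, and into whatever provisioning automation the team already has.

```python
# Hypothetical: text arriving from a slash command such as
# "/create-env staging --ttl 2h" is turned into an environment request
# that downstream provisioning automation could act on.
def parse_env_command(text: str) -> dict:
    """Turn slash-command text into an environment request."""
    parts = text.split()
    request = {"env": parts[0] if parts else "default", "ttl": "1h"}
    if "--ttl" in parts:
        request["ttl"] = parts[parts.index("--ttl") + 1]
    return request

print(parse_env_command("staging --ttl 2h"))
# → {'env': 'staging', 'ttl': '2h'}
```

Keeping the command-parsing logic as a pure function like this also makes it trivially unit-testable, independent of any Slack wiring.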

Enabling the whole team to mitigate risks

Perhaps most importantly, automating everything that can be automated frees our time to focus on human-centric testing activities, where we need our brains, problem-solving skills and creativity. We can take time to ’think outside the box’ and discover the ‘unknown unknowns’ we can’t identify as we plan and build new features.

Several years of State of DevOps Survey results, reported in the book Accelerate by Dr. Nicole Forsgren, Jez Humble and Gene Kim, back up what we have observed for a couple of decades now: having fast, reliable automated tests correlates with high team performance. The automated tests are fast and reliable when the developers own them, run them locally, and investigate the failures in the CI test suites. The book makes an important point – this does not mean we don’t need testers! In these high-performing teams, testers and developers work together to automate the tests, and the testers lead human-centric testing activities like exploratory testing. We were happy to learn that science backs up what we have known all along – the whole-team approach to testing and quality really does work.

]]>
Thu, 19 Oct 2023 20:00:00 +0000 https://agiletestingfellow.com/blog/post/success-factor-3-automation-in-testing https://agiletestingfellow.com/blog/post/success-factor-3-automation-in-testing
Provide and Obtain Feedback - Success Factor #4 This month, we’re continuing our exploration of the seven success factors with #4 – Provide and obtain feedback.

Our 2009 Agile Testing book includes a sidebar by Bret Pettichord about the importance of feedback. He summarized: “Agile practices build a technical and organizational infrastructure to facilitate getting and acting on feedback.” Testing is all about feedback and testers excel at providing feedback.

 

Giving feedback starts from the first time an idea is presented, by asking questions and getting clarifying answers, then maybe asking more questions. It continues as you work through examples or create tests and collaborate within your team. Even talking about bugs, or potential bugs, is a form of feedback. These mechanisms have been available for as long as we’ve been testing.


 

In 2023, we have many additional feedback tools in our toolbox. Thanks to affordable cloud storage and new technology, we have lots of ways to safely test in production and get feedback about what customers are experiencing as it happens. This enables successful continuous delivery and deployment. Chapter 8 in Agile Testing Condensed gives an overview of how testing fits into DevOps culture and practices.

 

One team Lisa worked with helped develop a new feature that everyone thought would be extremely valuable to customers. The designers prototyped a UI page for it and got feedback from some existing and potential customers. It involved complex functionality, so even building and testing the “learning release”, or minimum viable product version of it, was time-consuming. The new page was used internally, and everyone in the company thought it would delight the customers.

 

Once the new UI page was in production, the team used a user experience monitoring tool to watch how people used it. They were dismayed to learn that the few users who navigated to the new page spent very little time on the page, clicked a couple of fields, and quickly navigated away. Within a week of releasing the new page, the team realized it needed either to get a major overhaul or be abandoned. That is one way to get quick and valuable feedback from actual users, not internal users whose biases might get in the way.

 

The best way we’ve found to give feedback effectively is to make it as visible as possible. We can test feature ideas with paper prototypes. Automated test results can be sent to a team Slack channel. Debrief your exploratory test sessions by walking through your notes with another person. Lisa’s team could watch the journeys customers took through the UI. Monitoring and observability dashboards let you see production problems at a glance.

 

We’ll leave you with this final thought and would love to hear your responses. Drop us an email, tell us on LinkedIn, or if you belong to the Agile Testing Fellowship, tell us in the Slack channel. What feedback loops do you have in your team and organization?

]]>
Wed, 30 Aug 2023 19:06:56 +0000 https://agiletestingfellow.com/blog/post/provide-and-obtain-feedback https://agiletestingfellow.com/blog/post/provide-and-obtain-feedback
Figuring out the next steps in your team's quality journey with QPAM Since you’re reading this blog post, we hope you and your team are getting engaged in testing activities throughout the holistic testing loop. (And if you aren’t familiar with it yet, please check out our posts on the holistic testing model.) We often hear from practitioners who want to improve their quality practices, but aren’t sure where to start. How’s your team doing now? What should you try to improve next?

 

A good next step is to do a quality practices assessment using the QPAM model. Check out the latest episode of our Donkeys & Dragons video chat to find out more. In this episode, Janet switches from co-host to guest! She and Selena Delesie answer Lisa’s questions about their Quality Practices Assessment Model (QPAM).

 

When you watch this 18-minute episode, you’ll learn how they came up with this new model. They explain how teams can use QPAM to understand where they are in their quality journey. The conversation covers some of the many benefits of QPAM, as well as an overview of their books about it.

 

The “Assessing Agile Practices with QPAM: Enabling Teams to Improve” ebook is available now on LeanPub.

]]>
Wed, 16 Aug 2023 16:18:59 +0000 https://agiletestingfellow.com/blog/post/figuring-out-the-next-steps-in-your-team-s-quality-journey-with-qpam https://agiletestingfellow.com/blog/post/figuring-out-the-next-steps-in-your-team-s-quality-journey-with-qpam
Success Factor #5: Build a foundation of core practices In 2009 when we published Agile Testing: A Practical Guide for Testers and Agile Teams, the fifth success factor in our list was “Build a foundation of core practices”. While every team should adopt practices that work for its situation, a team without these core agile practices is unlikely to benefit much from agile values and principles.

The practices we highlighted then were: test automation, test environments, manage technical debt, work incrementally, coding and testing part of one process, and synergy between the practices.

We think they are still important, but we put a slightly different spin on them given today’s environment.

Test automation and test environments

Test automation and working test environments have become intertwined in the DevOps world; test environments can be reframed in the form of build pipelines. We see that teams using build pipelines have mastered the stable test environment. They have pipeline stages to spin up environments for the automated test suites. When the pipeline finishes with no errors, it automatically spins up a “preview environment” on a cloud platform and deploys a build artifact for further exploring. Unfortunately, many companies still have not embraced these practices and technologies, and struggle to create adequate test or staging environments.


Teams that embrace creating their automation as they develop new features, and make it part of their definition of Done, have significantly better success with their releases. One reason for this is the fast feedback it provides (see the section on working incrementally).

Many teams are building observability (o11y) into their product to proactively fix unanticipated production problems and get detailed data on production use. This is a form of automation at its finest, giving fast feedback to teams directly from customer usage.

Observability also helps with keeping automated tests reliable and trustworthy. One of Lisa’s teams had “flaky” tests and could not figure out the causes. They added some instrumentation to the production and test code to capture more log data while automated tests ran. The additional data helped them analyze and fix the “flakiness”.
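A minimal sketch of that kind of instrumentation, with invented names: wrap a test step so that both successes and intermittent failures leave behind timing, inputs, and a full traceback in the logs, instead of just a bare assertion error.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("test-instrumentation")

def instrumented(step_name: str, action, *args):
    """Run a step, logging timing and context for success and failure."""
    start = time.monotonic()
    try:
        result = action(*args)
        log.info("%s ok in %.3fs args=%r",
                 step_name, time.monotonic() - start, args)
        return result
    except Exception:
        # On failure, capture the context that is usually missing when a
        # "flaky" test fails: inputs, timing, and the full traceback.
        log.exception("%s FAILED after %.3fs args=%r",
                      step_name, time.monotonic() - start, args)
        raise

# Usage in an automated test (hypothetical step):
total = instrumented("add line items", sum, [19.99, 5.00])
```

When an intermittent failure does occur, the log line carries enough evidence (inputs and elapsed time) to start correlating failures with, say, slow environment start-up or a particular data shape.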

Managing technical debt

Teams may consciously take a shortcut to meet an urgent business need. They might skip a refactoring or choose not to automate a test right away. As long as they plan and do that refactoring or automation soon after, they can still enjoy good product quality and, if desired, successful continuous delivery.

Making bad code design decisions, accumulating a huge backlog of serious bugs, and living with “flaky” automated tests are more like “technical guilt”. You must be able to trust the code you write to succeed in the long term. If your team is aiming towards continuous delivery or deployment, your first step might be to manage your technical debt.

Working incrementally

Many teams are not able to deploy changes to production frequently, whether that’s for business reasons or because they’re working with older, monolithic code that makes it more difficult. Even in this context, we can get fast feedback. For example, when we first start looking at a feature, we can ask questions that uncover hidden assumptions and build shared understanding across roles.

As we have these early conversations, we can visualize behavior of the new feature, and how customers will use it with techniques like flow diagrams. This helps us identify the simplest thing we can deliver, and what could be added later. We can slice the feature into small, consistently sized stories. Frameworks like example mapping, which we talked about last month, help us dig into each story to understand its value and behavior. Programmers and testers can collaborate to turn these examples and scenarios into executable tests to guide development. That’s where the magic happens – step by step.
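To illustrate how concrete examples become executable tests, here is a small sketch in plain Python. The “free shipping over $50” rule and the `shipping_cost` function are hypothetical stand-ins for whatever rules and code your own story involves:

```python
# Hypothetical business rule captured in an example-mapping session:
# "Orders of $50 or more ship free; below that, shipping costs $5."
def shipping_cost(order_total):
    return 0 if order_total >= 50 else 5

# Each concrete example from the story card becomes one executable check.
examples = [
    (49.99, 5),   # just under the threshold
    (50.00, 0),   # exactly at the threshold
    (120.00, 0),  # well over it
]

for order_total, expected in examples:
    assert shipping_cost(order_total) == expected
```

A real team would typically express these examples in their test framework of choice, but the idea is the same: each row on the story card becomes a check that guides and then verifies the implementation.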

Coding and testing as one process

When teams work well together, understand their contribution to the quality of the product, and treat coding and testing as one process, it feels like magic. Good practices like test-driven development make other testing activities easier to accomplish because they build testability into the product. This includes practices like programmers performing exploratory testing on their own code, doing a desk check or 'show me' immediately after code is finished, and exploratory testing on those stories that need it as soon as possible.

Janet wrote a blog post on this so we won’t go into more detail here.

Synergy between practices

You may have noticed, as we talked about specific practices above, how one practice leads into another and how closely the practices are tied together. Taking one practice in isolation may not have the desired effect. Look at the big picture and consider how all the practices fit together.

 

 

]]>
Wed, 19 Jul 2023 17:37:50 +0000 https://agiletestingfellow.com/blog/post/build-a-foundation-of-core-practices-success-factor-5 https://agiletestingfellow.com/blog/post/build-a-foundation-of-core-practices-success-factor-5
Collaborate with Customers: Success factor #6

As we mentioned last month, we’re doing a series based on our seven success factors from our first book, Agile Testing: A Practical Guide for Testers and Agile Teams. We’re bringing in some new experiences and showing why we still think they are important.

Success Factor #6 – Collaborate with customers

In the summary chapter of Agile Testing, we noted that one of our biggest contributions as testers is helping customers clarify their requirements using concrete examples of desired and undesired behavior. The team can turn those examples into executable tests to guide development from a business-facing perspective.

Many testers excel at learning the business domain (remember what we said last month about “Looking at the big picture”). They also communicate well on a more technical level with the rest of the delivery team. In Agile Testing, we talked about using “Power of Three” conversations – better known these days by George Dinwiddie’s term “Three Amigos” – to build shared understanding among programmers, testers, and product owners (POs). These conversations are especially helpful before team-wide iteration planning meetings (IPMs). Fleshing out user stories with the story goal, business rules, and examples to illustrate those rules helps everyone on the team start out “on the same page” as they discuss each story. The planning meeting goes faster, and there are other benefits, such as ensuring all stories are a similar small size.

Today, there are multiple frameworks available to make these story readiness discussions more fruitful. Structured conversations and the 7 Product Dimensions from Ellen Gottesdiener and Mary Gorman’s Discover to Deliver book promote lateral thinking and help ensure no quality attributes are overlooked. Having the product owner take part in these discussions means you can ask about risks and mitigate them early. Tried-and-true techniques such as flow diagrams, state diagrams and context diagrams are still effective. Of course, today we may need to draw those together on a virtual whiteboard in a remote video meeting!

An example would be handy right now…

Lisa was part of a team that was struggling with a high “rejection rate” – they delivered stories to the product owner, only to have them rejected with the comment, “That isn’t what I wanted.” This resulted in long cycle times between when the team started on a new capability and when customers could start using it. They had pre-IPM meetings, but these were just hand-wavy discussions that did not increase shared understanding.

                                     

When Lisa learned about Matt Wynne’s Example Mapping technique at a conference, she showed it to her team and proposed experimenting with it at the next story readiness (pre-IPM) meeting. Two engineers, a tester, a PO, a designer, and other specialists as needed met to go over the stories planned for the next iteration. The structure provided by example mapping enhanced communication among the different team members. The combination of business rules and concrete examples, noted in each story in their online project tracking tool, helped everyone share the same understanding of what should be delivered for each story. When the product owner wasn’t entirely sure of the answers, the examples could be taken to the client or end users for their opinion.

At the IPM, the whole team discussed each story. Thanks to the example mapping, everyone quickly understood the goal of each story, along with its business rules. More questions came up, of course, but the discussions went much more quickly than in previous IPM meetings. The stories were already “right sized,” which saved a lot of discussion time.

Rejection rate for these stories was half the usual rate. The team continued using example mapping and found cycle time also went down by about half, thanks to this collaboration.

Imagine trying to achieve the same results with written requirements provided by a PO, with little or no discussion with the delivery team. Real-time collaboration is key, and techniques that help structure the conversations save time and get everyone on the same page.

Collaborating in our new all-remote world

Over the past decade, we’ve seen more and more teams with remote team members, and all-remote teams. Pre-planning and planning meetings are (hopefully) held in video conferences. Using techniques to structure conversations and visualize examples is even more important. Use online tools that let everyone collaborate in real time. These can be simple. For example, Lisa’s team used a Google Slide with pre-populated text boxes so that each team member could type in ideas, then group similar ones together. Google Drawings, Google Jamboard, online mind-mapping tools, and Google Sheets are just a few tools accessible to many teams, and there are more sophisticated ones like Miro and Mural for teams that can afford them.

Summary

We’re in the continuous world, and we need continuous collaboration with customers. We talked about building shared understanding before starting work on new features. We need to continue that collaboration as we test and code the features, showing our customers the new changes early and often. Business stakeholders collaborate with us as we deploy and release new changes, analyze production usage data, learn about problems our end users are having, and feed that information back to decide what new changes to make next.

]]>
Thu, 18 May 2023 18:51:23 +0000 https://agiletestingfellow.com/blog/post/collaborate-with-customers-success-factor-6 https://agiletestingfellow.com/blog/post/collaborate-with-customers-success-factor-6
The power of conferences

NOTE: This post is a special edition copied from our May 2023 newsletter. Normally our newsletters contain content that is exclusively for our subscribers. We had a special request to make this issue available to a wider audience so we are making an exception. Please subscribe to our newsletter at the bottom of the page.

=============

 

We often address the importance of learning. After all, our second book is subtitled, “Learning Journeys for the Whole Team”. Those journeys involve many different paths. With “conference season” gearing up, let’s look at some of the ways conferences can aid our learning.

 

New ideas for you and your team to try

 

Conferences are a good place to find out about the “leading edge” of software development. Talks and even hands-on workshops using new tools and techniques provide a great opportunity to “try before you buy”. At most conferences, you can browse the expo and see demos and talks by various vendor sponsors. If you look at the programs for upcoming testing conferences, you’re likely to see lots of sessions about the hot topics of today, like machine learning and AI.

 

Sessions where participants set the agenda are also fertile ground for inspiration. Lean Coffee and Open Space sessions attract diverse groups of participants. Some years ago, Lisa’s team was struggling with an overly-long continuous integration build. She proposed an open space session at a conference to share ideas on speeding up the build and getting faster feedback. She took home a list of proven techniques that her team implemented right away, with great results.

 

You’ll get some of the most valuable conference takeaways from the “hallway track”. Informal conversations with people during breaks, at meals, and at social events can turn into big light-bulb moments. Look for conferences that provide space for and encourage those conversations.

 

A career boost

 

Obviously, everything you learn from official conference sessions will help you gain new skills and further your career. Just as importantly, you will meet new people and hopefully (depending on the conference) become part of a community. These new friends will be there to support you long after the conference. The very first agile conference that Janet attended was very different from the testing conferences she had been to before. She met many new people who encouraged her to keep in touch. To this day, she still runs ideas by some of the folks she met there.

 

Remember that the conference speakers are generally friendly, approachable people. Don’t be afraid to introduce yourself and ask questions, whether it’s after their session, during a break or at a social event. Lisa worked up the courage to get to know presenters and kept in touch with many of them. Many times, she’d learn they were coming to her area, and she’d offer to pick them up at the airport or meet them for coffee. And many times, they would be willing to visit her team or speak at her local meetup group. The learning keeps going!

 

Your growing network of friends will be a great resource for future job opportunities. Lisa has found several new jobs through people she met at conferences. Personal recommendations are the best.

 

A great way to increase your visibility in the community and put something special on your CV is to become a conference speaker yourself. Presenting at a conference is also a great way to learn, because you put effort into making sure your thoughts are clear and you know your subject. If you don’t have an employer who can pay your way to a conference, speaking at it can get you there for free. Many conferences cover speakers’ travel and pay them to speak, although it may not be much. The idea of speaking is intimidating, but the benefits are endless. Look for conferences with a “New Voices” track, where experienced speakers may be available to help you.

 

Volunteering is another way to get to a conference on a tight budget. It’s a great way to meet people and get more out of the experience.

 

Getting the most benefit from a conference

 

You’ll invest a lot of time and possibly money to participate in a conference. You can take some steps to get the most out of it.

 

Choose your conference carefully. Ask us and other people for recommendations. Check out what people say about them on social media and in blog posts. Look for conferences with a diverse group of speakers, conferences that enable a diverse range of participants to attend. Of course, you’ll want one whose program covers topics you want to learn. You can find a list of upcoming testing conferences at https://testingconferences.org. DevOps Days are held all over the world and the Agile Alliance maintains a calendar of upcoming agile-related events.

 

Once a conference gets going, the time zips by! Since connecting with people is so important, you may want to reach out to one or more people in advance. Some conferences have their own Slack workspaces where you can chat with people. Make plans to meet up with them at the conference, at a specific time like breakfast or lunch. If the conference starts off with an icebreaker party or activity, take advantage of it to meet new people. They will be familiar faces for the rest of the conference. Early morning Lean Coffee sessions are another great place to get to know new folks – and get inspiration to help you choose sessions for the rest of the day. We love hosting these sessions at Agile Testing Days conferences, so if you have the chance, join us in Chicago later this month.

 

Conferences have a variety of networking opportunities. Lisa likes to join conference-organized walks outside the venue. And if the conference hasn’t organized them, she often organizes her own and gets a few people to go along. Janet tends more to find someone and chat in a corner somewhere. One time, she did some pairing with another attendee sitting on the floor in a hallway.

 

After the conference, keep up with at least a couple of the people you met. If you need help with something you learned at the conference, or your team is struggling with some blocker, reach out to those people. The people in your network can support your continued learning. When Lisa attends a workshop and wants to try out what she learned, she sometimes asks another participant to be her “accountability buddy”. They each set a goal to try something they learned within a month and check in with each other at the end of the month.

 

As you can tell, we are both fans of conferences. They offer so many possibilities for learning, professional growth, and fun.

 

]]>
Thu, 04 May 2023 19:44:54 +0000 https://agiletestingfellow.com/blog/post/the-power-of-conferences https://agiletestingfellow.com/blog/post/the-power-of-conferences
Don't forget the big picture

We’ve decided to do a series based on our seven success factors from our first book, Agile Testing: A Practical Guide for Testers and Agile Teams. We’ll bring in some new experiences, show why we still think they are important, and note what we’d change if we rewrote them.

The seven success factors are: 

      1. Use the whole-team approach

      2. Adopt an agile testing mind-set

      3. Automate regression testing

      4. Provide and obtain feedback

      5. Build a foundation of core agile practices

      6. Collaborate with customers

      7. Look at the big picture

We start the series looking at #7.

Success Factor #7 – Look at the big picture

In the summary chapter of Agile Testing, we talked about looking at a product from a customer’s perspective and thinking about all the different types of testing that are needed. In our second book, More Agile Testing, we introduced the idea of thinking in layers. The system has many features; a feature has many (user) stories. The stories are testable, but teams often forget that features are really the capabilities the customer wants. We need to test at the feature level as well as the story level, considering how people will use that functionality in production.

When we think about building quality in, we need to start by understanding the feature, asking questions like: “What problem are we trying to solve for the customer? For the business?” Have you ever created a great new feature that you were sure the customer would love, only to have them say, “This isn’t what I need”?

A story Lisa likes to share is about one of her teams that worked hard to deliver the first “learning release” of a feature many customers had requested. They tested extensively to make sure the information appearing in the UI was correct. However, once the feature was released, only a handful of customers even gave it a try. From production usage data, it was clear that people couldn’t even understand what the new UI page was supposed to do for them. The team had not adequately tested the usability of the page design. They focused on the functionality and forgot the bigger picture.

                               

Testing activities should also include looking at the system as a whole. Before your team starts working on a new feature, discuss questions such as: What impacts might this feature have on the rest of the system? Do we have existing functionality that we need to change? What quality attributes do we need to consider – accessibility, security, or any other constraints your application may have? Start thinking about how you will create your test automation at this point. Will you have the right tools? Is there a good way to layer the automation so that much of it can be done by the programmers? Also consider what telemetry you might need: What information do you want to collect when your application is in production? What events need to be logged for monitoring and observability?

Once you understand the feature, then you can slice it into small testable stories, thinking about the constraints with every story. Practice acceptance test-driven development (ATDD) or behaviour-driven development (BDD), guiding your development with examples and testing each story as it is implemented. The developers will be testing as well, using practices like test-driven development (TDD).

After all the stories for a particular feature are complete, don’t forget to go back and perform exploratory testing on the whole feature. Use tools like the agile testing quadrants to remind you what tests you may have forgotten.

Janet had an experience where designers created mock-ups for a website for her to give feedback on. She looked at the content, how the page looked and tried hard to understand how it linked together. Some things were fixed early, but when they were implemented, she realized there were a lot of other issues that she didn’t find until she could touch the different pages.

Once those were fixed, the team thought they were ready for production - until the developer asked her a question about how the users would do a particular action. Janet realized that she had been testing only from her own point of view. She explored the new design some more, using the different customer personas, and found some very interesting bugs.

Summary

Each story is part of a feature, and each feature is part of your product, with potential impacts on other features. Test the story, test the feature, and explore how that feature works in the context of the entire product. Remember the big picture! This news story is a great example of what happens when you only test the small chunks: https://www.engadget.com/2020/02/29/boeing-starliner-failed-first-flight-report/

]]>
Wed, 12 Apr 2023 17:24:41 +0000 https://agiletestingfellow.com/blog/post/don-t-forget-the-big-picture https://agiletestingfellow.com/blog/post/don-t-forget-the-big-picture
Visualizing quality, Part 2: Mitigating risks

Let’s look at a couple of visualizations that help teams identify and mitigate risk. In last month’s blog post, we described Rob Meaney’s visual for making problems visible and sparking discussions about them. Rob has another useful model that his teams use for identifying and mitigating risk. We’re obviously fans of models with quadrants and triangles, and this has both! The larger quadrants relate to the agile testing quadrants. The lower left quadrant is about mitigating risks around regression failures. These tests need to be fast and reliable, and they are less expensive to write and maintain.

                  Rob Meaney's risk quadrants

The top left quadrant has tests that confirm that features meet acceptance criteria. The focus should be on fast, reliable feedback at a minimum cost. For mitigating risks of problems that automated tests can’t find, the top right quadrant gets humans involved in the testing. The bottom right quadrant is concerned with making sure features work after they’re released. This is the contemporary approach to “testing in production”, monitoring and observing production use to catch problems as quickly as possible.

The small triangle at the bottom of the quadrants represents mean time to recovery. How fast can we detect problems in production and recover from them? We need to instrument our code for observability, so that we can quickly diagnose and fix failures.

We’ve found several of Dan Ashby’s models extremely effective as well. We encourage you to read some of his ideas. The one we share today is from his blog post: https://danashby.co.uk/2022/07/22/8-perspectives-of-quality-a-model/

Dan looks at quality from different perspectives – a similar approach to the one we described last month, but from a different angle. He specifies four perspectives:

     1.  External quality – looking at the product from the perspective of users and customers

     2.  Internal quality – the product perspective as seen by the organization and team

     3.  Team quality – a process perspective on how well the team can deliver

     4.  Organizational quality – a more social perspective on how well the organization can support the team and the delivery of the product

Dan defines three stages of the product as columns in his model: the perspectives of ideation, implementation, and support and maintenance.

 

               Dan Ashby's quality model

This model enables teams to talk about how the different stakeholders of their products value several aspects of quality. It illustrates that quality must begin at an organizational level. A company that doesn’t understand the value of quality, and why it should invest in it, isn’t likely to deliver the level of quality that customers value. We encourage you to read Dan’s blog post for all the details. He also provides a visual that overlays different testing activities onto the model.

Last month, we shared two excellent examples of visual models from Rob Meaney that help us talk about many aspects of testing and quality with our teams: Where are the biggest risks? Where should we focus our testing? Asking core questions like “How will we test this?” leads to proactive risk mitigation and happier customers. Dan Ashby’s model gives us a way to look at quality from different perspectives and plan our testing activities accordingly.

Looking at quality from many perspectives and creating a form of visualization enables teams to have those necessary conversations to continue to improve.

]]>
Tue, 28 Feb 2023 14:46:56 +0000 https://agiletestingfellow.com/blog/post/visualizing-quality-part-2-mitigating-risks https://agiletestingfellow.com/blog/post/visualizing-quality-part-2-mitigating-risks
Visualizing quality, Part 1: Making problems visible

We find so much benefit in visual models that help us think about agile testing and spur conversations with our teams. Over the years, we have come across many that have helped us start those conversations. In Part One of this two-part blog series, we will share some ways to visualize quality that we learned from Rob Meaney and Dan Ashby.

The first visualization we show you is one that Rob tweeted about a few years ago. Each time his team saw a test fail or found a bug, they wrote it on a sticky note and mapped it against a Y axis that indicated their level of understanding of the cause, and an X axis that indicated whom it impacted most.

                    white board with sticky notes

For example, when a unit test failed, the test was specific to one small area of the code, so the cause was easily understood. It impacted the programmers who committed the change that caused the failure – they needed to fix the test. That placed unit tests at the top left of the map.

An example at the other end of the impact-and-understanding scale: the team discovered, via the production analytics techniques on the bottom right of the map, that the rate of free-trial customers converting to paid subscriptions had decreased by a large percentage. They didn’t immediately understand the cause or its impact on the business. The team saw increased error rates in their analytics. Their release flagging technique, in the lower left of the diagram, could give clues as to which change might have started causing problems. They could look at logs, using the logging techniques represented at the top middle of the diagram, to see what the errors looked like.

All these techniques helped them zero in on the area of the code where the problem was, so they could investigate and fix. They moved from knowing the impact of an issue to identifying its cause. Shortening this process meant they could recover quickly from failures. 

Rob explained that the big takeaway from this diagram was that the team focused their efforts in the top right: building techniques that allow them to understand both impact and cause at the same time. They used techniques such as high-cardinality observability – for example, having full data about a specific event performed by a specific user.
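As a small illustration of what a high-cardinality, wide event might look like – the event name, field names, and identifiers below are all hypothetical – each event carries enough specific detail to drill down to one user's single action:

```python
import json
import logging

logging.basicConfig(format="%(message)s")
log = logging.getLogger("events")
log.setLevel(logging.INFO)

def emit_event(name, **fields):
    """Emit one wide, structured event. High-cardinality fields such
    as user_id make it possible to query for a single user's activity."""
    event = {"event": name, **fields}
    log.info(json.dumps(event))
    return event

emit_event("subscription_upgrade",
           user_id="u-42831",          # hypothetical identifier
           plan="pro",
           duration_ms=187,
           trial_days_remaining=3)
```

In a real system these events would be shipped to an observability backend rather than just logged, but the principle is the same: rich, queryable context attached to every event.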
 
As the testing coach, Rob asked the team questions to help them deepen their understanding of problems and find ways to address them. What should always be true? What should never be true? Thinking about these helped developers evaluate whether the new change they were about to check in would behave as desired, and whether it might break some other part of the app.

The team explored quality attributes like data integrity, and how these contribute to their ability to understand problems and assess their impact. They found new ways to monitor data about production use and trigger appropriate alerts. They experimented with techniques to shorten feedback loops to avoid negative impacts on customers or the business.

The visual map doesn’t solve problems, but it helps the team find ways to better understand failures and their impacts. We have found that the first step to identifying and solving a problem is to make it visible! We encourage teams to use this idea to develop their own way of visualizing problems as a way to start improving.

]]>
Mon, 30 Jan 2023 20:59:00 +0000 https://agiletestingfellow.com/blog/post/visualizing-quality-part-1-making-problems-visible https://agiletestingfellow.com/blog/post/visualizing-quality-part-1-making-problems-visible
Quality - Different Perspectives

Over the years Janet has done many talks and keynotes about quality, and what it means – from a process perspective as well as a product perspective. Often, teams and organizations think about quality with a very narrow view.

Jerry Weinberg described quality as “value to some person”. While this is true, it doesn’t help us evaluate our product quality to be able to release it to our customers with confidence.

5 quality approaches – David A. Garvin

Isabel Evans introduced Janet to the idea of five different approaches to quality from David A. Garvin (1984). Looking at the different approaches made us realize that perhaps we can expand our views and look at quality differently. The five approaches to quality he talks about are:

  • Manufacturing-based – think practices to supply / develop the product
  • Product-based – it’s about the features, the quality of the attributes
  • User-based – this aligns with Jerry Weinberg’s definition, is the user satisfied?
  • Value-based – defined in terms of cost and price; think trust or safety
  • Transcendent – universally recognized, but hard to quantify; it’s about the emotion a product evokes

                                 

                                       Garvin's 5 quality approaches

Not every product we build needs to satisfy each one of these approaches, but if we start to think about quality a bit differently, we can learn to measure it better. We will start to understand what is “good enough”. 

Contemporary quality engineering – Anne-Marie Charrett

Another viewpoint on quality comes from Anne-Marie Charrett. She talks about “contemporary quality engineering”, by which she means building quality into our products, rather than the older view of “quality engineering” as adherence to a specific process.

Anne-Marie helped us start looking at quality from a different perspective by coining a new term: Qualitability. She uses it to describe how easy or hard it is to know (and see) the state of quality of your product at any given time. One way to gauge the state of quality is with metrics. It can be difficult to come up with metrics that really measure whether your team is achieving its quality goals. Anne-Marie advised that the delivery team should choose the metrics, based on business outcomes and on what they want to learn about their product. Make quality visible.

                                    

                                  Emergent Quality diagram by Anne-Marie Charrett

Quality metrics

On one team Lisa worked with, they chose metrics based on how new customers used their product (a test automation tool). For example, an effective automated regression test needs assertions, so they looked at the average number of assertions customers put into their tests. If tests lack assertions, it may mean that the way to create them is not obvious enough to people using the tool, which reduces the level of quality for everyone. This metric is probably not one that many other teams would use, but it worked for their context.
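As a rough sketch of how such a metric could be computed – assuming, hypothetically, that customers' tests are Python source and using only the standard library's `ast` module – you could count `assert` statements per test function:

```python
import ast

def average_assertions(test_sources):
    """Average number of assert statements per test function across
    source files -- a rough proxy for how meaningful the tests are."""
    counts = []
    for source in test_sources:
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef) and node.name.startswith("test"):
                asserts = sum(isinstance(n, ast.Assert) for n in ast.walk(node))
                counts.append(asserts)
    return sum(counts) / len(counts) if counts else 0.0

sample = """
def test_login():
    assert True
    assert 1 + 1 == 2

def test_logout():
    assert True
"""
# average over two tests: (2 + 1) / 2 = 1.5
```

The actual metric would of course depend on the tool's own test format; the point is that the team picked something measurable that reflected real customer success with the product.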

We may need to rethink the list of quality attributes we use. For example:

Delivery frequency: How do we break features down into smaller pieces, so that we can get small changes to the customer more often without negatively impacting the business?

Value: Is what we deliver valuable to the customer?

Feedback: Does our process give us quick feedback on the quality and value of changes?

Recoverability: This is not new, but for many contexts its meaning changes from “Can we recover to a known state?” to “How can we get a fix out very quickly without impacting the business?” Feature toggles, blue-green deployment, and robust, fast deploy pipelines all contribute to working towards this attribute.
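A minimal sketch of a feature toggle, assuming flags live in an in-memory dict (a real system would back this with a config service or flag platform, and the flow names here are hypothetical). Flipping the flag off routes traffic back to the known-good path without a rollback or redeploy:

```python
# Flags held in a plain dict here; a real system would read them
# from a config service so they can be flipped without a deploy.
FLAGS = {"new_checkout": False}

def is_enabled(flag_name):
    return FLAGS.get(flag_name, False)

def legacy_checkout_flow(cart_total):
    return {"total": cart_total, "flow": "legacy"}   # known-good path

def new_checkout_flow(cart_total):
    return {"total": cart_total, "flow": "new"}      # risky new path

def checkout(cart_total):
    # Route each request through the flag; turning the flag off is an
    # immediate recovery that needs no code change.
    if is_enabled("new_checkout"):
        return new_checkout_flow(cart_total)
    return legacy_checkout_flow(cart_total)
```

This is what turns "recoverability" into a minutes-not-hours property: the fix is a configuration change rather than a build-and-deploy cycle.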

Quality expectations

When Lisa’s team experienced more regression failures in production than they wanted, they brought in a tester on a consulting basis who spent several weeks improving the automated UI regression test suite. At the end of his engagement, he gave a terrific presentation to everyone in the company that included the most critical bugs he’d found, as well as his ideas for new features that would improve the product. That immediate visibility led to the team implementing some of the features quickly.

Expectations of quality change as you experience different levels of quality. It’s like buying a new car: if you’ve never had a feature, you won’t miss it. For example, when Janet bought a new Mini-Cooper, it came with a heads-up display, and now she wouldn’t think of buying a car without one. If we look at Lisa’s story, although her teammates had suggested some of the same features as the consultant, hearing those ideas from a new perspective, from someone who was also a target customer of the product, was more compelling.

Software testers are often viewed as a safety net, the last line of defense to spot quality issues before new features are deployed to production. For a long time, the focus was on trying to find all the bugs after coding was complete. With agile development, focus shifted to bug prevention, but we still try to detect bugs before release. Now that many teams deploy new changes to production so frequently, it becomes more important to find ways to speedily detect production issues and deploy fixes within a matter of minutes. Old challenges don’t go away, and we have new ones to embrace.         

Ask the hard questions

We really like Anne-Marie’s metaphor that “Software testing is the headlights, quality is the journey, and business outcomes are the destination.” We need to tie what we build to business outcomes, and that starts by thinking quality first. Ask the hard questions at the very beginning of the cycle to be sure we are building the right thing and that it solves a business problem. We also need to recognize that quality isn’t only about product quality - we must consider everything that goes into making our product: the pipeline, processes, practices and the people.

There are many different ways of looking at quality, and the discussion is becoming more mainstream as teams and organizations struggle with what it means to them and their products.

]]>
Tue, 03 Jan 2023 18:36:37 +0000 https://agiletestingfellow.com/blog/post/quality-different-perspectives https://agiletestingfellow.com/blog/post/quality-different-perspectives
Sustainable Pace To paraphrase Elisabeth Hendrickson’s “agile acid test”, modern software development is about delivering small chunks of value to customers frequently, at a sustainable pace. That “sustainable pace” encompasses all the good development practices that allow teams to perform well consistently. Practices like test-driven development, pair and ensemble programming, continuous integration, acceptance test-driven development – just a few of the practices seen as part of “agile” since the days of eXtreme Programming.

Some organizations have embraced agile practices and enjoyed a sustainable pace for more than two decades. And yet, we still see many software organizations fail to understand that working at a sustainable pace is key to delivering valuable changes to customers frequently and predictably.

The slippery slope

Recently, Janet was working with an organization whose master backlog was long, and the clients all wanted their features to be built – you’ve got it, by yesterday.

Everyone in this organization means well. The product people were trying to prioritize. But there was still more coming into the teams than they could handle without sacrificing quality. One of the team’s choices was to work longer hours to meet their “commitments”. (It’s worth noting that Scrum leaders abandoned the idea of “commitment” in favor of “forecasting” years ago.)

Our question to you is: how long do you think this team can keep going? How long can they maintain the long hours and the stress levels?

There is no easy answer to this quandary. Company leaders want their development organization to crank out features so the business can grow. They often pressure teams to meet unrealistic deadlines. Sometimes the teams put this pressure on themselves, keen to prove their value and make business stakeholders happy. Teams often think they have no choice but to keep working that hard.

Step back

Is your team falling into the trap of working extra hours, and skimping on good practices to “save” time, only to face more impossible deadlines while accumulating technical debt? It’s a good idea to take a step back and look at your process and observe what is happening.

                                            

People on overworked and stressed-out teams may feel powerless to push back on unrealistic demands. Yet, often we have more influence than we think we do. For example, the team could simply agree that they will not work more than 40 hours (or whatever your standard work week is). This can highlight how much the team has been overworked. Often, organization management does not realize how much extra teams are having to work to get things done.

In one team Janet consulted with, management told the teams to put 40 hours into the timesheet no matter how many hours they actually worked. They were trying to simplify things for the teams but didn’t realize they were hiding a problem.

Finding a path to sustainable pace

If you find yourself in a similar situation, look for ways to make the overtime visible. You could talk to the “powers that be” and suggest an experiment to capture “real” hours worked and make it very visible to everyone. Maybe you could visualize it on a chart, like those thermometers that capture how much money is raised towards a goal, showing how many hours the team has put in each week.
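As a minimal sketch of the “thermometer” idea, a team could render weekly hours as a simple text chart that separates standard hours from overtime. The weekly figures here are invented example data, and the 40-hour standard week is an assumption to adjust for your context.

```python
# A minimal sketch of making overtime visible, with invented weekly data.
# The 40-hour standard work week is an assumption; adjust for your context.

STANDARD_WEEK = 40

def overtime_bar(hours_worked):
    """Render '=' for standard hours and '!' for overtime hours."""
    standard = min(hours_worked, STANDARD_WEEK)
    overtime = max(hours_worked - STANDARD_WEEK, 0)
    return "=" * standard + "!" * overtime

weekly_hours = {"week 1": 42, "week 2": 48, "week 3": 55}
for week, hours in weekly_hours.items():
    print(f"{week}: {hours:3d}h {overtime_bar(hours)}")
```

A growing tail of `!` characters week over week is hard for anyone to ignore, which is exactly the point of making the hours visible.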

Retrospectives and other working sessions could be an opportunity to design experiments to try and address non-sustainable pace. One approach that has worked for our teams includes learning to slice stories into small, consistently-sized increments – where each story can be finished, including testing, in two days. That way, if a story “blows up” and takes twice as much time as expected, it’s still not a lot of time. Along with this, the teams agree to be disciplined in limiting work in progress to, say, two or three stories at a time. This helps the team achieve a predictable, consistent cadence, which makes the business stakeholders happy.

If you’re a manager whose team is often working nights and weekends, find ways to educate the business stakeholders that their software organization is in a death spiral that will end with projects grinding to a halt. Show them how technical debt and opportunity cost are hurting the bottom line. Budget time for the team to learn and adopt good development practices that will allow them to avoid waste and re-work and deliver small changes at a frequent cadence.

Nurture a learning culture

Lisa worked on a team that had been failing to release anything to production for months. The company leadership hired an expert agile coach and allowed the team time to determine the best way forward to learn the skills they needed. As they learned basics like continuous integration and test-driven development, they only delivered small changes every two weeks – but they did deliver something! Allowed to invest the time in continuous learning and using each retrospective to plan the next step forward, they soon became a high-performing team. The business could depend on their cadence of releasing new changes and had a highly competitive product to sell.

If you’re part of a team that is headed for trouble trying to keep up with an impossible workload, talk about it in your next retrospective. As with any obstacle, brainstorm small experiments to try as “baby steps” to allow for a more sustainable approach. See if your team can start avoiding overtime and watch for the benefits of following a sustainable pace. Organizations cannot afford the burnout and turnover that come from an unsustainable pace: a very expensive result for everyone.

                                                              

]]>
Thu, 20 Oct 2022 20:38:14 +0000 https://agiletestingfellow.com/blog/post/sustainable-pace https://agiletestingfellow.com/blog/post/sustainable-pace
Measuring to improve quality Quality is hard to define and hard to measure in a meaningful way. And there are many aspects of quality. Process quality is about how well a team creates and delivers their product. Product quality is what the customer cares about. It’s all important.

 

Organizations that continually improve their product and process quality need data. Business leadership wants data about how their teams are performing. They also want to know if their customers are happy. Measuring quality and performance is tricky. Metrics like bug counts or lines of code written are often meaningless and can be gamed. For example, we’ve seen teams say that they cannot put code into production with critical or high severity bugs, so someone decides that the bugs really are medium, because they have workarounds.

 

A trend we’ve experienced in the past few years is that organizations are adopting the metrics identified by the Accelerate State of DevOps survey report by DORA that correlate with team performance. The Accelerate State of DevOps survey has provided academically rigorous research that gives us reliable data on the most effective ways to deliver software.  Using the results and comparing them from year to year, Google’s DevOps Research and Assessment team (DORA) identified five key software delivery performance metrics to help teams continuously improve. These metrics mostly relate to process quality but can influence product quality as well.

                                       

 

Lead time for changes:

As defined in the Accelerate report, this is the amount of time that elapses from when a new change (code or configuration) is committed to the repository, to when it is successfully running in production.

 

Lead time reflects how fast teams can get feedback from production use. The survey also showed that tester-developer collaboration reduces lead time for changes. This metric reflects process quality – shortening the feedback loop. Continuous integration shortens this time. Continuous delivery and deployment shorten it even more.
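The metric itself is straightforward to compute once you have the two timestamps. This is a hedged sketch using invented example data; a real pipeline would pull the commit time from version control and the deploy time from the deployment tooling.

```python
# A sketch of computing lead time for changes as defined above: the time
# from when a change is committed to when it runs in production.
# Timestamps are invented example data.
from datetime import datetime, timedelta

def lead_time(committed_at, deployed_at):
    """Return the lead time for a single change."""
    return deployed_at - committed_at

commit = datetime(2022, 8, 1, 9, 30)
deploy = datetime(2022, 8, 1, 15, 45)
print(lead_time(commit, deploy))
```

Tracking the distribution of these values over many changes, rather than a single average, shows whether the feedback loop is genuinely shortening.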

 

Deployment frequency:

The deployment frequency measures how frequently and consistently an organization deploys to production. Practicing continuous deployment means deploying small changes frequently to production. Elite-performing teams deploy multiple times per day.

 

In some business domains, for example, safety-critical domains like medical or transportation, customers don’t want frequent changes. Even with safe release strategies such as feature flagging, the perceived risk is too high. Organizations can still practice continuous delivery by producing deploy-ready artifacts that are not deployed to production. Focusing on finishing small changes with a quick cadence reduces risk.

 

While this measures an organization’s process quality, product quality is directly impacted by it. For example, if new code changes continually cause regression failures, the time to identify and fix them will affect deployment frequency, which can affect a customer’s perception of product quality.

 

Time to restore service:

Time to restore service is the average time a team or organization needs to restore service after a severe incident or a defect that impacts users, such as an unplanned outage. Longer times to restore service mean unhappy customers, who perceive poor product quality. In July 2022, one of the main cell phone and internet providers in Canada had a major outage caused by a software update. Major businesses were unable to do their work, and Janet was inconvenienced because she couldn’t use Google Maps to find a location. Customers of all kinds suffered. It took the company over 12 hours just to restore the most basic cell service.

 

This metric can affect a wide range of organizational practices. The application or service needs appropriate telemetry so that failures are identified quickly. The code base must be easy to understand and update and be protected by a good safety net of automated regression tests. The deployment pipeline or delivery workflow must finish quickly to get fixes out to production. The team needs good working agreements to respond to production issues. Many different factors go into preventing customer pain.
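Computationally, the metric is just an average over incident durations. This sketch uses invented incident data; in practice the durations would come from the team's incident-tracking system.

```python
# A minimal sketch of the time-to-restore metric: the average time needed
# to restore service across incidents. Durations are invented examples.
from datetime import timedelta

def mean_time_to_restore(incident_durations):
    """Average restoration time across a list of incident durations."""
    if not incident_durations:
        return timedelta(0)
    return sum(incident_durations, timedelta(0)) / len(incident_durations)

incidents = [
    timedelta(minutes=30),  # quick rollback
    timedelta(hours=2),     # hotfix through the pipeline
    timedelta(minutes=90),  # longer diagnosis needed
]
print(mean_time_to_restore(incidents))
```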

 

Change Failure Rate:

The percentage of changes to production that result in degraded service, e.g., service impairment or outage that require remediation such as a hotfix or rollback, is the change failure rate. Teams that deploy frequently may have a higher number of failures, but if their change failure rate is low, their overall success is greater. For example, if they deploy 5 changes a day, that means 25 changes in a 5-day week. If 5 of those changes fail, the rate is 20%. If a team deploys only once a week, and that change fails, they have a 100% failure rate.
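The arithmetic above can be captured in a couple of lines. This is a minimal sketch of the calculation, nothing more.

```python
# A minimal sketch of the change failure rate arithmetic described above.

def change_failure_rate(failed_changes, total_changes):
    """Percentage of production changes that required remediation."""
    if total_changes == 0:
        return 0.0
    return 100 * failed_changes / total_changes

# Five failures out of 25 weekly deploys is a 20% failure rate,
# while a single weekly deploy that fails is a 100% failure rate.
print(change_failure_rate(5, 25))
print(change_failure_rate(1, 1))
```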

 

This metric reflects both product and process quality. The synergy of combining change failure rate with time to restore service is powerful. Teams that spend a lot of time fixing problems have less time to devote to new features. All the leading development practices that help teams produce maintainable, testable, operable code, building quality in and testing effectively, lower the change failure rate.

 

Reliability:

Reliability is a measure of operational performance and of modern operational practices. It includes quality attributes such as availability, latency, performance, and scalability. The DORA State of DevOps survey asked respondents to rate their ability to meet or exceed their reliability targets. Teams see better outcomes when they prioritize operational performance. They achieve their service level objectives, they use automation appropriately, and they are prepared to respond to production problems quickly. The survey found that teams doing well with site reliability engineering practices performed better in other areas and had better business outcomes. These benefits apply to all levels of team performance.

 

Using the DORA metrics

Elite-performing teams have a competitive advantage, and the survey results show the number of teams in the elite category growing fast each year. Teams can use these metrics to get a baseline of their current performance and identify what they want to improve next. The data helps them see what’s holding them back and helps them measure experiments to overcome those constraints.

 

As with any kind of measurement, consider your context. Look at the big picture drawn by all five metrics. They can help with finding the right balance of speed and reliability. Use these metrics together with others to guide setting goals and designing small, iterative experiments to work towards them. 

 

Sources:

 

2021 State of DevOps report: https://cloud.google.com/devops/state-of-devops

 

“How DORA metrics can measure and improve performance” by Ganesh Datta, DevOps.com,  https://devops.com/how-dora-metrics-can-measure-and-improve-performance

 

 

 

]]>
Wed, 10 Aug 2022 19:07:03 +0000 https://agiletestingfellow.com/blog/post/measuring-to-improve-quality https://agiletestingfellow.com/blog/post/measuring-to-improve-quality
Learning and Adapting in the Holistic Testing Model Agile teams continually learn and apply those learnings to adapt their product and process. There’s often a tendency for teams to release a new feature and then move on to building the next new feature. This means missing a huge opportunity to learn from how customers experience that new feature in production, and whether it solves their problems.

                             

                           holistic testing model with learning emphasized

As the holistic testing infinite loop cycles around the right side and back towards the left, we use the information about production usage to drive changes that will solve customer problems. A significant production outage might be followed up with a retrospective (some people call these postmortems, but we hope nobody died). We recommend that teams use visual collaboration tools as they explore issues like this. Root cause analysis tools such as Ishikawa diagrams (also called fishbone diagrams) may be helpful. Learning outcomes can be used in the discovery stage to come up with ideas that address the root problems.

 

We can learn much more from production usage observations besides outages and system unavailability. For example, analytics can tell us how many customers tried out a new feature. Today’s tools can even show us where customers struggled with a user interface. We can analyze data for our service level indicators to see if objectives and service level agreements were met. Use retrospectives and brainstorming meetings to understand these better and prioritize the most important challenges to address.

 

A technique that has worked well for Lisa’s team is using “small, frugal experiments”, something Lisa learned from Linda Rising’s talk at Agile 2015 (you can see the slides if you are an Agile Alliance member). Once your team has identified the challenge you want to address and dug into the associated issues to understand the problem better, design one or more small experiments to make progress towards that goal or make that problem smaller. These experiments should only last up to a few weeks, so that if they don’t work, you haven’t invested much time and you’ve learned something from them in any case.

 

We like to use this format for a hypothesis:

                     

      We believe that <an action that we are going to take>

      Will result in <some aspect of progress towards addressing a challenge or achieving a goal>

      We’ll know we have succeeded when <a concrete measurement that will show progress towards the goal>

 

For example, let’s say our team has seen a high number of 500 errors in production on our web-based application. A 500 error means reduced availability, so our system availability is 99.7% instead of our objective of 99.9%. We investigate further to be sure that the 500 errors were a significant cause of some downtime. We might have a hypothesis such as:
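To see how much that 0.2% gap matters, we can translate each availability percentage into a downtime budget. This is a rough sketch of the arithmetic; the 30-day period is an assumption for illustration.

```python
# A rough sketch of the availability arithmetic in the example above:
# translating an availability percentage into a monthly downtime budget.
# The 30-day measurement period is an assumption for illustration.

MINUTES_PER_30_DAYS = 30 * 24 * 60  # 43,200 minutes

def downtime_minutes(availability_percent, period=MINUTES_PER_30_DAYS):
    """Minutes of downtime implied by an availability percentage."""
    return period * (100 - availability_percent) / 100

# 99.9% allows roughly 43 minutes of downtime per 30 days;
# 99.7% means roughly 130 minutes, about three times the budget.
print(downtime_minutes(99.9))
print(downtime_minutes(99.7))
```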

 

           We believe that instrumenting our code to capture all events leading up to 500 errors

            Will lead to faster diagnosing and fixing 500 errors in production

            We’ll know we have succeeded when our availability metric is up to 99.8% within three weeks

 

If our availability metric wasn’t increasing within three weeks, then we’d know there might be another cause to our downtime, or we need to focus on preventing the 500 errors instead. We can try more small experiments.

 

Another example: our analytics show that customers are slow to try out new features that we release, or do not use them at all. Since we can’t get access to our customers to ask why they aren’t using the new features, the whole team, including designers and the product owner, decides to incorporate usability testing before releasing new features. Our hypothesis:

 

          We believe that collaborating with designers to do usability testing of each new feature before release

           Will lead to more customers trying out new features right away

           We’ll know we have succeeded when 20% of customers try our next new feature within 48 hours of release

 

There are many ways to learn from production usage and guide future product changes. Getting the whole team, including testers, programmers, designers, product owners and operations specialists, collaborating to learn is key. Whatever method you choose to continually improve, it is important that you have some way to measure progress and know when you have succeeded.

 

 

 

]]>
Fri, 01 Jul 2022 17:36:36 +0000 https://agiletestingfellow.com/blog/post/learning-and-adapting-in-the-holistic-testing-model https://agiletestingfellow.com/blog/post/learning-and-adapting-in-the-holistic-testing-model
The difference between monitoring and observability We’ve had people ask us, “Is observability really about monitoring?”, so we decided that was a good topic for this month. This goes along with our previous blog posts about the different stages in the holistic testing model and how testing fits in the cycle.

                

Both monitoring and observability use telemetry (measurements for data collection) from an application’s code to understand production system behavior. However, there are important distinctions between the two.

Monitoring is about looking for behavior that we expect. We collect log data, and then use monitoring tools to aggregate it, analyze it, and produce dashboards and alerts. We compare the actual data with what we expected. Monitoring is about predictable failures; it lets us see deviation from expected behavior.

Observability is about the behavior that we didn’t expect or couldn’t anticipate. Today’s complex systems fail in complex ways. Stuff happens – and we need to ask questions that we didn’t expect to ask. One of the best ways to know if you have observability is to answer this question: “Do you have to add new instrumentation to the code and redeploy it to diagnose a problem?” Practicing observability means being able to diagnose a problem without that step.
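One common way to support that kind of ad-hoc questioning is to emit wide, structured events, capturing rich context for each unit of work up front, so any field can be queried later. This is a hedged illustration only; all the field names are invented, and real systems would typically use a structured-logging or tracing library rather than raw JSON.

```python
# A hedged illustration of observability-style instrumentation: emitting
# one wide, structured event per unit of work so that new questions can
# be answered later without redeploying. All field names are invented.
import json

def emit_event(**fields):
    """Serialize one wide event as a JSON line (for a log pipeline)."""
    return json.dumps(fields, sort_keys=True)

event = emit_event(
    route="/checkout",
    status=500,
    duration_ms=812,
    user_region="eu-west",
    feature_flags=["new_cart"],
)
# Any captured field can support a question we didn't anticipate,
# such as "which regions saw 500 errors after the last deploy?"
print(event)
```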

We’re borrowing a visual from James Lyndsay with a slight nuance switch to show the difference.

              Venn diagram showing imagination (observability) with implementation (monitoring)

Note: The original Venn diagram is available as a download from https://www.workroom-productions.com/why-exploration-has-a-place-in-any-strategy/. We encourage you to read it.

Many teams have used monitoring tools for many years but find that their customers may still feel pain even as their monitoring dashboards show a healthy picture. Hopefully, we did all the important testing before releasing a change, anticipating what might happen in production, and we might even have done chaos engineering to explore different types of failures in a controlled way.

From experience, we know we cannot think of everything, but we have to be ready for anything. That is, we capture all the information we possibly can and use specially designed tools to analyze it, so we can diagnose problems quickly. We like to think of it as an early warning system. That is what we call observability.

In her new Test Automation University course, Introduction to Observability for Test Automation, Abby Bangser includes a terrific summary of the monitoring vs. observability distinction. Some important highlights are:

  • With monitoring, we have a way to track, identify and lock in behavior across a wide diversity of systems and we can do this in a fairly standard way.
  • Monitoring can consistently alert on changes to the baseline.
  • Observability aims at supporting the unknown.
  • An observable system is one that can answer new, unique and complex questions without delay.
  • Observability provides specifics to triage and creatively explore even highly specific impact bugs.

You can get in-depth information about observability in Abby’s course, including the origin of the term. Observability is one important tool among many to learn how our customers use our product, what pain points they have, what features they might still need. When we build our software features, we need to build in the telemetry for monitoring, observability, and analytics. All of this enables us to respond quickly to production issues, no matter how many or how few users feel the impact.

 

]]>
Mon, 23 May 2022 23:20:35 +0000 https://agiletestingfellow.com/blog/post/the-difference-between-monitoring-and-observability https://agiletestingfellow.com/blog/post/the-difference-between-monitoring-and-observability
Build quality in with shared understanding, using the Holistic Testing model In June, July and August of 2021, our blog posts were about engaging the whole team early in the discovery and planning stages of the loop. We had several suggestions for practices that can help uncover hidden assumptions.

In this blog post, we look at quality within the ‘understanding’ stage of our holistic testing loop.

Lisa has often had this experience: The development team discusses a story during the iteration planning meeting, gives it an estimate, and starts working on it. They do a great job of coding and testing the story using good practices and deliver it to the product owner for acceptance. The product owner rejects it with a puzzled comment: “This isn’t what I asked for.” Grrr! So frustrating! The development team understood the story one way – the product owner understood it a totally different way. They failed to share the same view of the outcomes.

When teams work together with product folks and others in the discovery and planning activities, they start to build shared understanding. Once the high-level picture is established, it’s time to encapsulate that shared understanding in artifacts that will help build the right thing. In our experience, there are a few factors to succeed with this effort.

Conversations among people with diverse perspectives

In a recent episode of the “Making Tech Better” podcast, Lou Downe referred to Melvin Conway’s work, including this striking thought: “The quality of software is directly related to the quality of conversations we have”. Lou said collaboration is a privilege. Teams need to find ways to have conversations that often circumvent their organizational structures.

For example, the delivery team may use different terminology than people in other teams such as marketing, data and design, so the delivery team may need to collaborate with them, making sure the team knows how a particular feature is valuable to the customer, what it should look like, and how it should behave. There needs to be a safe space to talk with people in other parts of the organization.

Structuring the conversations

Once the team has committed to enabling cross-team collaboration, they can take advantage of the many frameworks and tools that enable them to get the most value from these conversations. The business rules and concrete examples that define how a capability should work can be turned into business-facing tests that guide development.

One practice that forms a great foundation for shared understanding is what we call Power of Three, or as George Dinwiddie dubbed it, Three Amigos. Before each team story planning workshop, a product person, a programmer, and a tester may meet to go over the proposed stories. These days, with our complex systems, we often need more folks, maybe up to four to six – a designer, a data expert, a marketing specialist, an operations specialist – whomever has valuable insight to the stories being discussed.

Keep these discussions quick and focused with a framework such as example mapping by Matt Wynne. Other visual collaboration tools such as mind mapping or simply using virtual sticky notes also help.

It’s important to ask questions. We like to use the “7 Product Dimensions” from Ellen Gottesdiener and Mary Gorman to think of good questions related to different quality attributes – along the lines of, “Where will customers be when they use the product? What interface or device will they use? Do we have test data? How many concurrent users do we need to support?” Asking these types of questions helps to minimize the “unknown unknowns”.
         

Teams should also discuss risks that they can’t fully mitigate through testing. What events should be logged in the new workflow, to allow identifying and responding to issues once the feature is in production? What alerts and dashboards would be helpful? The answers add to shared understanding of the technical implementation.

The key is agreeing on the purpose of each story, its main value to customers, and specifying the business rules, with each rule illustrated with concrete examples. As shared understanding grows, the conversation will also uncover missing stories or ones that are too large.

These conversations may also produce low-fidelity prototypes, flow diagrams and other visuals that can be used to explain the desired outcomes to other team members and stakeholders outside the team. Last week, during a course that Janet facilitated, some of the participants were amazed at how much information they were each missing until they did such a visual exercise. Share the outcomes with the rest of the team so everyone starts from the same level of knowledge when you have your team-wide planning discussion.

Guiding development together

Working together, programmers, testers and product owners can turn the business rules and examples from workshops into executable tests, using a domain-specific language that people outside the technology team can also understand and review. These tests form the basis for acceptance-test or behavior-driven development, where the business-facing tests guide development. They help to build the right thing the first time we try.
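As a small illustration of this ATDD/BDD style, one business rule plus a concrete example can become an executable, business-readable test. The discount rule and its threshold here are invented purely for illustration; real teams would often express such examples in a tool like Cucumber or FitNesse, but the same structure works in plain test code.

```python
# A sketch, in the ATDD/BDD spirit described above, of turning one
# business rule plus concrete examples into executable, readable tests.
# The discount rule and its threshold are invented for illustration.

def order_discount(order_total):
    """Business rule (invented): orders of 100 or more earn a 10% discount."""
    return round(order_total * 0.10, 2) if order_total >= 100 else 0.0

def test_order_at_threshold_earns_discount():
    # Given an order totalling exactly 100
    # When the discount is calculated
    # Then the customer receives a discount of 10
    assert order_discount(100) == 10.0

def test_order_below_threshold_earns_no_discount():
    # Given an order just under the threshold
    # Then no discount is applied
    assert order_discount(99.99) == 0.0

test_order_at_threshold_earns_discount()
test_order_below_threshold_earns_no_discount()
```

Because the examples read as given/when/then statements, product people can review the tests directly and catch misunderstandings before any production code is written.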

Using this holistic process of getting a cross-section of perspectives together to build shared understanding and produce artifacts that let us know what to code is a proven way to avoid re-work and waste due to stories getting rejected by the product owner or customer. Teams with strong shared understanding of what they’re going to deliver enjoy shorter cycle time. Looking further along the holistic testing loop, we can see that changes will get to production sooner, allowing for much faster feedback.

]]>
Sat, 09 Apr 2022 15:40:34 +0000 https://agiletestingfellow.com/blog/post/build-quality-in-with-shared-understanding-using-the-holistic-testing-model https://agiletestingfellow.com/blog/post/build-quality-in-with-shared-understanding-using-the-holistic-testing-model
Holistic testing: What it means for agile teams Since Janet introduced her holistic testing model, we’ve had several practitioners tell us, “This describes what we do so well! Thank you for explaining this – I can use this model to explain to others!” This approach resonates with people on teams who have learned ways to deliver small chunks of value to their customers frequently, at a sustainable pace.

 

Software products are growing more complicated and more complex, embracing technologies that let us understand and solve our customers’ problems. We want to do as much testing as we can when we build new capabilities. We also need to take advantage of newer technologies that let us learn about the problems our customers encounter and respond with new solutions quickly.

 

This is our definition from a few years ago, crafted with input from many in the testing community, and we value what we labelled “agile testing”.

 

Collaborative testing practices that occur continuously, from inception to delivery and beyond, supporting frequent delivery of value for our customers. Testing activities focus on building quality into the product, using fast feedback loops to validate our understanding. The practices strengthen and support the idea of whole team responsibility for quality.

 

We believe in the same ideas and have continued to learn over the years, and share those ideas. There are many teams who practice agile, DevOps, or whatever name you want to call it, and don’t really think about testing. Many folks start their first job and have never experienced waterfall – only agile, so it becomes less important to differentiate between agile and waterfall.

 

“Holistic testing” is a more comprehensive term to encompass the feedback loops. The “whole team” today can include UX designers, programmers, testers, as well as site reliability engineers. All members of a delivery team are thinking about testing from the beginning of the cycle, including how we should instrument our code to provide information about how it’s really behaving in production. Many teams are watching dashboards, alerts, digging into huge amounts of data to identify and quickly resolve issues. We’re not only concerned with the “average” customer experience. We want to ensure that all customers are having a good experience.  

 

This holistic testing model is a way to think about testing throughout the development cycle, and was inspired by Dan Ashby’s “We test here” model.

                             

The left side of the loop is about building quality into our products. The right side is testing to see that we got it right and adapting if we didn’t. The examples used in this diagram are just that – examples. We feel that this model balances testing early with testing after code is built.  

 

The conversation starts with what level of quality we need, and then moves to what kinds of testing we need to support that level of quality.

 

We continue to adapt the model, but for now, it encompasses testing activities as we see them. We’d love to hear how you test differently, and how this model might help you visualize the types of testing you do in your product. Every product team has a slightly different context, so choosing what types of testing you do, and how much, will be very specific to your team.

 

With this model in mind, we are rebranding our course - from "Agile Testing for the Whole Team" to "Holistic Testing: Strategies for agile teams". We're hoping the shift will help people think more about building quality in, and how testing supports that effort. 

 

Download our free mini ebook, Holistic Testing: Weave Quality into Your Product, to learn how your team can apply the model to build an effective testing strategy. (Just scroll down a bit on the home page and click on the Download button - no personal information required!) We've received positive feedback from many organizations who've used a holistic testing approach to build quality into their software products. 

]]>
Sat, 26 Feb 2022 17:22:18 +0000 https://agiletestingfellow.com/blog/post/holistic-testing-what-it-means-for-agile-teams https://agiletestingfellow.com/blog/post/holistic-testing-what-it-means-for-agile-teams
Testing in DevOps Testing is the heart of DevOps. In our last blog post, we talked about DevOps – vocabulary, building relationships, and the deployment pipeline. This month we get more into the testing aspect of DevOps.

Test automation is a big part of being able to release consistently and often, but there is so much more to delivering small chunks of value frequently. When a change is committed to your team’s source code repository, think about what steps it goes through to get to production. These steps form your deployment pipeline. Some may not yet be automated, and some may not be appropriate for automation. In the simple diagram below, we show some of the tasks that might require human intervention.

                                  

We like to use the term “human-centric” for the testing activities that a person performs. Some of these could potentially be fully automated, such as deploying to a test environment. A team may prefer to leave some decisions, such as pushing the deploy button, up to a human. Some testing activities, such as exploratory testing, need our human brains, senses and intuition, although we may use tools such as recorders or data generation scripts to assist us. It’s important to visualize the whole deployment pipeline, including the human-centric activities.

The holistic testing loop

Testing in DevOps starts at the very beginning of a new feature – when it’s first identified. You can check out Janet’s blog post on holistic testing to see how she’s adapted the infinite loop for specific testing activities. In this post, we concentrate on the right-hand side of the loop below – the part in yellow.

                   

                                     

Test Suite Canvas

When we automate our tests to speed up the deployment pipeline, we should consider the best way to split up the tests so they run effectively. Ashley Hunsberger has developed a Test Suite Canvas which we have found extremely useful when looking at different test suites to understand what they are used for, their benefits, who takes responsibility for investigating failures and maintaining tests, and more. Ashley has written about how to use this canvas in her article about feedback in deployment pipelines. One of the most overlooked areas is the data. Do you know how and where you are getting it, and how it is managed?

                       

Once you know what your automated test suites are, decide the environments where they should run. For example, unit tests should run on every developer’s local machine before check-in, and then run automatically on each build after each new commit. They likely won’t run again until the next check-in. A smoke test suite will run on the first deployment to a development environment to make sure that the most important features work correctly – perhaps even that someone can log into the system. The smoke test suite might run on any test environment with a new build, or run regularly on a staging environment, or even on production as a health check if it can run safely. Take care to ensure that each automated test uses reliable test data that isn’t changed by other testing.
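As a sketch of how this split might look, here is a hypothetical mapping of pipeline stages to the suites that run in each. All stage and suite names here are examples we made up, not a prescription; your pipeline will have its own.

```python
# Hypothetical mapping of pipeline stages to the automated suites that run there.
# Stage and suite names are illustrative examples only.
SUITES_BY_STAGE = {
    "local": ["unit"],                   # every developer machine, before check-in
    "ci-build": ["unit", "component"],   # automatically on each new commit
    "dev-deploy": ["smoke"],             # first deployment to a dev environment
    "staging": ["smoke", "regression"],  # fuller feedback before release
    "production": ["smoke"],             # only checks that are safe to run live
}

def suites_for(stage: str) -> list[str]:
    """Return the test suites that should run for a given pipeline stage."""
    return SUITES_BY_STAGE.get(stage, [])
```

Making the mapping explicit like this (whether in code, CI configuration, or just on a whiteboard) helps the team see where a suite runs, and notice suites that never run anywhere.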

Testing quality attributes

At the beginning of the holistic testing loop, the team identifies risks and quality attributes that need to be tested. During development, programmers design the code for ease of testing. For example, they may put in hooks to help test the loading time of a page. However, testing quality attributes often cannot be completed until after the code has been written and deployed to a test environment. Accessibility testing, for example, can have some automated components, but ultimately it means a human being making sure they can access the information.
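As a small illustration of such a hook, here is a sketch (in Python, with hypothetical names) of a decorator that records how long each call takes, so a test or monitoring probe can compare it against a performance budget:

```python
import functools
import time

def timed(fn):
    """Testability hook (a sketch): record the duration of each call so a
    test or monitoring probe can check it against a performance budget."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        wrapper.last_duration = time.perf_counter() - start
        return result
    wrapper.last_duration = None
    return wrapper

@timed
def load_page():
    # Stand-in for real page-rendering work.
    time.sleep(0.01)
    return "page contents"
```

A test can then call `load_page()` and assert that `load_page.last_duration` stays under an agreed budget, turning a vague quality attribute into a checkable number.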

Your team likely has (or will have, as you proceed along your automation journey) other automated test suites which might include performance, load, and security tests. There also might be human-centric exploratory testing that supplements these automation efforts.

Your team may also have determined that recoverability is important, so you’ll have a playbook to try many different scenarios. Some scenarios may be automated, but many will need human interaction.

There are too many quality attributes to address in a blog post, and each team’s context is different. The important thing is to remember to have the conversation at the beginning to determine your constraints, and to talk about how to test each and every story and every feature.

Testing in production

Testing in production is becoming more common now that it can often be done safely, using techniques such as feature toggles, canary releases, and blue/green deploys. If this isn’t an option (some business domains, for example, do not permit any of these techniques for security reasons), teams can closely monitor and observe production usage and use that information to guide development and testing.
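To make the feature-toggle idea concrete, here is a minimal sketch, assuming a simple in-memory flag store and a made-up flag name, of a percentage-based canary-style rollout:

```python
import hashlib

# Toy in-memory feature-toggle store; the flag name and percentage are
# hypothetical examples, not any real toggle framework.
FLAGS = {
    "new-checkout": {"enabled": True, "rollout_percent": 10},
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Decide whether a given user sees a toggled feature."""
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    # Hash the flag+user pair into a stable bucket (0-99), so a user's
    # decision doesn't flicker between requests during a partial rollout.
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < cfg["rollout_percent"]
```

The deterministic hash is the important design choice: it lets you expose a new feature to, say, 10% of users in production, watch the monitoring, and widen the percentage only when the feedback looks healthy.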

Some companies practice chaos engineering, which was pioneered by Netflix. Wikipedia describes it as the discipline of experimenting on a software system in production to build confidence in the system's capability to withstand turbulent and unexpected conditions. This type of testing in production requires a robust infrastructure and fail-safes. Chaos engineering can also be conducted on staging environments, providing useful information with lower risk.
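As a toy illustration of the kind of fault injection a chaos experiment performs (this is not any real chaos tooling; names and parameters are invented), one might wrap a downstream call with injected latency and failures, then verify that the caller’s retries and fallbacks still deliver a response:

```python
import random
import time

def with_chaos(fn, failure_rate=0.1, max_delay=0.0, rng=None):
    """Wrap a call with injected latency and failures - a toy version of the
    fault injection a chaos experiment performs."""
    rng = rng or random.Random()

    def wrapper(*args, **kwargs):
        if max_delay:
            time.sleep(rng.uniform(0, max_delay))    # injected latency
        if rng.random() < failure_rate:
            raise ConnectionError("injected fault")  # injected failure
        return fn(*args, **kwargs)

    return wrapper
```

Running your existing test suite against a service wrapped this way on a staging environment is one low-risk way to learn whether timeouts, retries, and fallbacks actually behave as designed.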

We want to do all the testing activities that we can in our test and staging environments, although we can never replicate production usage in those environments. Testing safely in production, and learning from close study of production use, are key for the complicated, multi-service, distributed systems that so many organizations need today. One of our go-to resources for testing in DevOps is Katrina Clokie’s book “A Practical Guide to Testing in DevOps”. Lisa’s website has a list of additional books, videos, blog posts and more for testing in DevOps. With skills that enable us to ask good questions, identify and analyze risks, and explore, we testers add so much value on the right side of the DevOps loop. We can help our teams take full advantage of these vital testing activities.

 

]]>
Sat, 08 Jan 2022 18:14:28 +0000 https://agiletestingfellow.com/blog/post/testing-in-devops https://agiletestingfellow.com/blog/post/testing-in-devops
Demystifying DevOps  Every time we think a term is commonly understood, we realize there are many misconceptions about it – DevOps is a good example. The name sounds like it shuts out testing, and discussions about it often involve only technological aspects like continuous integration tools and deploy pipelines. In reality, testing is at the heart of DevOps, and testers are often in a key role to help your team build a DevOps culture.

                       

Learn the language

The term "DevOps" wasn't coined until around 2009. It sounds like it's only about people who write code and people who maintain the production application working together. But it's really about everyone on a software development team working together with everyone involved in system administration, production monitoring and support, as well as business stakeholders. It's based on agile values, principles, and practices.

DevOps is a culture, in which everyone on the delivery team collaborates to create and maintain the infrastructure for continuous integration (CI), test and production environments, as well as the deployments to those environments. It isn't a role or a team, though an organization may have DevOps specialists or a platform team to help people grow the skills and learn to work together. It's all about supporting more consistent delivery to production, as well as continuous delivery (CD) and testing.

Learn the terminology around DevOps and continuous delivery. The book A Practical Guide to Testing in DevOps by Katrina Clokie is a great place to start.

Build relationships

If you have testers on your team, one of their super powers is the ability to bring the right people together for a conversation. To promote a DevOps culture, think about the people on your team or others who are involved in delivering your product, but that you don't work with every day. Get to know them. Ask them to lunch, engage them in casual social conversation, put candy on your desk and encourage people to come enjoy it while they chat with you. Ask someone on the operations team to come give your team an overview of the continuous integration tool. Offer to share some new testing techniques with a database expert.

For a lot of us, this is way outside our comfort zone. But we're humans, we need to get to know each other so we can work together well. Build those bridges and help people who aren't testing specialists learn ways to build quality into the product.

Use visuals

Visualizing a process is a great way to learn about it. As you have conversations with people in various roles, draw on whiteboards, use sticky notes.

                                         

Ask a cross-functional group of people to get together and draw your application's architecture on a whiteboard or use sticky notes/index cards on a table so the whole team can see it easily. Talk about the riskiest areas and whether those are covered adequately with testing. If you are working remotely like most of us these days, use an on-line tool to write/draw the steps of your team's production deploy pipeline. Look for ways to shorten feedback loops or strengthen fragile test suites.

Encourage a joint retrospective between members of your delivery team and those in the operations team. Identify the biggest problem that might be affecting your efforts to move to continuous delivery (CD). Even if your team is not ready for CD, you can do many things to make it work in a more effective manner.

Think of ways to make a problem visible. A team Lisa worked on put up a flashing police light that was triggered whenever the build pipeline failed, to get the team into a good habit of noticing failures and fixing them right away. Set a measurable improvement goal for that problem, and design experiments to move towards that goal.

Check our website for public course offerings

Remember to visit our website's training page to see upcoming public courses around the world. These courses are taught by the highly experienced practitioners who have joined our Fellowship as trainers.

If you're interested in an in-house training course for your own team, please check the Training Providers in your area for more information.
   

]]>
Mon, 29 Nov 2021 22:55:00 +0000 https://agiletestingfellow.com/blog/post/demystifying-devops https://agiletestingfellow.com/blog/post/demystifying-devops
Asking for help Opening the hood

The story starts with Janet at a Studebaker meet… for those who don’t know what a Studebaker is, it’s an antique car, and her husband, Jack, has a ’62 GT Hawk (picture below). During the meet, they took part in a run – a drive through a city usually ending at some cool place, this time an excellent car museum. The museum tour ended, and they got in their car to drive back to the hotel, but the car wouldn’t start. After several attempts, Jack got out and opened the hood (bonnet) to look inside.

                           

Several other Studebaker enthusiasts immediately gathered round. They offered helpful suggestions, then got down to really troubleshooting the problem and offering constructive advice. The car ended up spending the night at the museum, but they had an action plan for the next morning.

While watching the troubleshooting experience, Janet realized that there were similarities to how we solve problems within a development team. She also noticed some differences that we could incorporate as well.

Asking and offering help - visibly

For example, opening the car hood was a very visible way of displaying “I’m in trouble, I may need help.” In our development teams, that may translate to someone expressing their need in a daily stand-up. Or perhaps putting some other visible clue up, maybe a big colored card on the team project board or a stuffed mascot on top of their monitor – teams could be quite inventive with this idea.

There is another side to this equation -- people have to offer to help. Each Studebaker member knows that it might be them stranded the next time, so helping someone in need is a way to pay it forward. They have a vested interest in making sure that the cooperation (mutual self-preservation) exists.

Think about that. In our development teams, if every team member felt that way about helping and sharing their knowledge freely, and each person receiving that help took the time to learn and appreciate, how much trust would be gained within a team?

                                 

We have seen people afraid to accept help … for many reasons. They may feel it makes them look foolish, or perhaps they feel it’s a criticism of their work. It takes a lot of work for new teams to get small working agreements (spoken or unspoken) in place to prevent that attitude, but we think it is something we need to strive for.

The next morning, Jack went to the swap meet (where they buy and sell used parts at the car show) and bought the parts he needed. In the meantime, one of the other members called to offer his tools – that showed a lot of trust. Have you freely shared your tools with your teammates? Ones that maybe you feel make you special?

Going outside your job description

Lisa had a similar experience on her epic move from Colorado to Vermont a few years ago. She and her husband rented a large motor home to transport their cats and dogs. The second day on the road, the coach battery failed. The rental company advised them to go to an AutoZone auto parts store to get a replacement. AutoZone employees only sell parts – they do not work on vehicles. Yet they went to a lot of trouble to find special tools needed to remove and replace the battery. They couldn’t be paid for their labor – they have no way to charge for it! They simply wanted to help someone in need get back on the road.

If you can help someone, maybe even outside your own team, with something that you don’t even get paid to do, you can help them overcome a huge roadblock. Have the courage to ask for help, and don’t be afraid to offer help even if it’s not your responsibility. That’s how teams keep moving along so they can provide value for their customers.

There is a TED talk by Margaret Heffernan about this type of social connectedness that we encourage you to watch. She explains how really productive teams learn to work together and have a culture of helpfulness. https://www.ted.com/talks/margaret_heffernan_forget_the_pecking_order_at_work

 

]]>
Thu, 16 Sep 2021 21:07:45 +0000 https://agiletestingfellow.com/blog/post/asking-for-help https://agiletestingfellow.com/blog/post/asking-for-help
Explore more than your software Last month, we talked about visualizing dependencies. This month we want to go into more detail about using exploratory testing techniques for more than just exploring new features in your product.

 

In her book Explore It!, Elisabeth Hendrickson defines exploratory testing as “simultaneously designing and executing tests to learn about the system, using your insights from the last experiment to inform the next.” 

 

We like that explanation since it helps us find surprises, or implications of interactions that no one ever considered, or perhaps even the misunderstandings about what the software is supposed to do. To us, exploratory testing is the difference between wandering at random (lost?) and exploring thoughtfully (to gain insight).

 

                           

 

Can we use these techniques to explore ideas instead of an existing product? For example, if you are coming into a team as a new member and don’t know anything about the product, instead of jumping right into exploring the product, first try exploring the ecosystem it belongs to. Maybe create a context diagram exploring the inputs to and outputs from pieces of the system, and who (or what) is using them. Janet finds that a context diagram lets you design better exploratory test charters to question the system.

 

Elisabeth’s format for creating charters is:

 

              Explore [the target]

              With [what resources are available]

              To Discover [information]

 

This type of format lets us focus on what we want to concentrate on – it’s a form of deliberate discovery. For example, pretend we need to build an app to help us organize a move from one house to another. Lisa moved from Colorado to Vermont a while ago, so we’ll think of her as the customer in this example. One of the features needed was the ability to create an inventory of everything to be moved and then assign each item to a box. Before we start building this feature, we need to think a bit more about it.

 

             Explore [the inventory feature]

             With [possible items]

             To Discover [if all items can be labeled in this way]

 

We have now limited our scope to something quite specific. The questions we now ask the product owner, or perhaps other users, will focus on that aspect. When Janet questioned Lisa on what types of items she had to pack, Lisa started with her house … which made sense. Janet then asked about other buildings – was there anything there that needed to move? We quickly discovered that donkeys don’t fit into boxes, so there was a fundamental flaw in our inventory process. We learned something by focusing on something very specific that was critical to our feature and exploring those ideas.

 

Another technique to explore your requirements, which we talked about in our last video chat, is example mapping. We also suggest you read Matt Wynne’s article on the subject: https://cucumber.io/blog/2015/12/08/example-mapping-introduction

 

Put on your explorer hat, gather some of these tools to help you, and see what you can learn about your team’s new product and feature ideas!

]]>
Thu, 12 Aug 2021 17:44:10 +0000 https://agiletestingfellow.com/blog/post/exploring-more-than-your-software https://agiletestingfellow.com/blog/post/exploring-more-than-your-software
Visualizing Dependencies Last month, we started talking about testing early: identifying assumptions and risks, creating testable stories using flow diagrams. There are many other types of visualizing tools such as state diagrams, empathy maps or user role maps.

Janet believes one of the biggest causes of agile teams slowing down is not recognizing dependencies early. Often a team doesn’t realize until they start to code or test that they need another piece. We know you can’t identify everything up front, but there are some tools we can use to help us.

In this post, we’ll cover visualizing dependencies using context diagrams and dependency mapping.

Context diagrams

We’re pretty sure most of you use context diagrams, whether you call them by that name or not. A context diagram shows how your application interfaces with others – humans, machines, APIs, or even other systems. You start by identifying the external entities with which the system interacts. Next, determine the flow of information between your system or application and those entities, looking to see whether the information flows in one direction or is bi-directional. For example, in this diagram for school administration, the focus was on bussing children to and from school. You can easily see the relationships, ask questions, and determine what is missing.

                                                     

Dependency Mapping

Lisa has done a few workshops where participants did simulations of different organizational structures. They used different mapping techniques to visualize those organizations and how they helped or hindered delivering the value customers wanted. Participants said they found dependency mapping the most useful.

To start thinking about dependencies, choose the core piece of your product and draw it on a whiteboard or flip chart (virtual if needed). Then, represent all the pieces of the system that make that core piece work as circles, overlapping each other like a Venn diagram to show touch points. Circle size represents the impact of each piece. Color code them to show which team “owns” each one.  Visualizing the dependencies often reveals bottlenecks that need to be addressed and helps identify the key people who can help. The diagram below shows a simplified map.

                                     

Dependencies can be represented within your application in different ways. This diagram shows some of the major components of our Agile Testing Fellowship (ATF) website and how they interact.

                          

You can also look at individual features to see what impacts they may have on other parts of the system. If you can identify that a particular feature touches more than one team, that is a signal that you need to work with those other teams early to mitigate the risk.

Here is a link that might help get you started talking about dependencies.

https://www.planview.com/resources/guide/what-is-agile-program-management/agile-teams-dependency-management-visualization/

 

 

]]>
Wed, 21 Jul 2021 19:21:00 +0000 https://agiletestingfellow.com/blog/post/visualizing-dependencies https://agiletestingfellow.com/blog/post/visualizing-dependencies
Engage the Whole Team Early! We’re going to use Janet’s latest model of holistic testing to talk about specific practices that we use in different stages of the lifecycle. In this post, we extract the top left part of the infinite loop and talk about testing early – in discovery and planning.

 

                               

 

There are many different techniques to bring diverse team members together to talk about testing and find ways to deliver better outcomes for our customers. Because every organization and team has a different context, we hope this look at testing can help you think about your own team’s test ecosystem.

 

Product managers engage in discovery activities to determine what value a particular feature offers to their customers and the organization, and what that might look like. This works better when delivery team members get involved, asking questions, and offering suggestions based on their own experiences.

 

Visual tools like mind maps can help the team visualize how big the feature might be and what it consists of. Keeping the implementation details out of this discussion helps focus it on business value. Once these aspects of a feature are visible, the team and the product manager can make decisions about what is immediately needed and what might be pushed out to a later release. We can test this artifact by questioning some of the assumptions behind the ideas.

 

In the planning stage, we can get into more detail. Identifying risks is a big part of the value added here. If you want to hear more, we talk about this in detail in video chat #4 on our Agile Testing Fellowship Donkeys & Dragons YouTube channel.

 

Testing early includes testing assumptions. We all have biases based on our own experiences and make assumptions. Janet wasted about an hour last week based on an incorrect assumption. Fortunately, she realized her error and sent her client a quick email to check that assumption. She could have wasted a lot more without that simple test.  If you think you have a different impression of a feature or story, ask the question. Be brave.

 

In planning, teams break features into stories – smaller bits that are easier to digest. However, often these stories are not testable on their own: they need to wait for other stories to be complete before they can be tested, which results in delays, bottlenecks, and slow feedback. Janet worked with a team that identified all these issues in one of their retrospectives and wanted to address these very real problems. Janet suggested they take their next feature and try using a flow diagram to visualize what it might look like. The flow diagram showed how many complexities they would need to address, so they identified the core – a slice through the application (or as we like to call it, the steel thread). Often this is the happy path. As they continued to identify the added complexities (new stories), Janet got some pushback from the programmers.

 

                                   

 

They said, “But that’s not how we code. We always do the configuration first, and now you are asking us to go into that file for every story.” Janet brought them back to why they were doing this experiment – the retrospective issues – and after some discussion (some of it heated), they realized that this practice enabled the testers to give them faster feedback, so they agreed to try.

 

Small testable stories are the key to being able to complete the stories in a timely way. In the story above, the team never went back to component level stories. They quickly realized the importance of working with the whole team to slice the stories.

 

Another practice that is useful during planning is to visualize dependencies using dependency mapping – we’ll write a separate blog post about that next month. 

 

Every testing activity we do is for the benefit of our delivery team and our customers, whether it’s automating repetitive tasks or creating a test strategy. Experiment with techniques that bring your whole team (or a representative cross-section, if your team is too large) together to learn what valuable feature you can build for your customer in small increments, getting fast feedback as you go.  Visualization activities like dependency mapping and structured ways to explore requirements such as mind maps or flow diagrams are just two examples. The key is getting people with diverse viewpoints and skill sets talking, drawing and experimenting together.

 

]]>
Thu, 24 Jun 2021 20:49:59 +0000 https://agiletestingfellow.com/blog/post/engage-the-whole-team-early https://agiletestingfellow.com/blog/post/engage-the-whole-team-early
Testers as Consultants Over the years, we’ve given talks and listened to others talk about the subject of testers being coaches or consultants. We believe there is a difference between the two.

 

A coach works with a single person, or a team, helping them to recognize new ways of working by listening and asking questions. A coach will never advise you, or tell you what you should do or how you should do it. Toby Sinclair has shared a lot of information about coaching in general.

 

A consultant works with leadership, assesses a situation, recommends different ways of working, shares information, works with more than one team, and sometimes acts as a trainer. Janet consults with organizations that want to improve their testing and quality processes and get help with transitioning to agile. Lisa acts as a test consultant on her own team, helping developers, designers, product owners and customer support people grow their testing skills. Sometimes she works as a coach helping someone explore new ideas.

 

                         

 

Most people we know work on teams that don’t have enough testing specialists. When a tester is spread too thin, they can’t add as much value if they only do what we might consider traditional “tester” activities. When a team believes that the whole team is responsible for quality, its members will learn the skills necessary to help with the testing activities. 

 

Acting as a test consultant may help you contribute in different ways. You’re promoting the whole team approach to testing and quality by helping everyone learn useful testing skills and practices. You can transfer your testing skills to others. You can design experiments to improve quality and bring in new techniques such as example mapping or tools to help the team.

 

Some testing skills you can teach to other team members are exploratory testing skills (read about Lisa’s experience), test planning, creating scenarios, looking at test coverage, or even creating test data. Take the time to learn facilitation skills, experiment, and see what works for your team. A consultant supports the teams and helps them stand on their own after the consultant leaves.

 

--------

All public classes for “Agile Testing for the Whole Team” are listed on the training page. You can find when and where (or if it is remote), as well as the language in which it is being taught, and how to register.

You can also contact us if you want an on-site course or check to see the Training Providers closest to you.

We also provide a monthly newsletter with new material – please subscribe if you are interested.

]]>
Wed, 19 May 2021 02:14:30 +0000 https://agiletestingfellow.com/blog/post/testers-as-consultants https://agiletestingfellow.com/blog/post/testers-as-consultants
Share your stories! Janet has taught our agile testing course all over the world.  Although she has a timeline and knows approximately how much time each section should take, Janet finds that most classes don’t stick to the same times. Each class is different because the people are different. They come from different backgrounds, have different life experiences, and approach things from different perspectives.

One class (in-person, pre-Covid) seemed to take longer than normal in every single section, so Janet felt a bit rushed at the end. One of the things Janet likes to do is retrospect after each class, trying to decide what went well and what could be improved.

This class had participants who were hoping to become instructors, so overall they had more ‘experience’ than participants in the normal classes she teaches. As a result, there was more experience to share and more stories to tell. However, Janet was worried about the participants feeling rushed at the end of the third day, so she sent an email asking for more feedback from them.

One comment from a participant was:

“In my opinion the additional stories were more valuable than the exercises because they demonstrate that I have the same problems in my projects as other people, and they give me an insight in additional solutions.”


Each person learns differently, but sharing stories enables us to remember the lessons learned. Janet likes to start her stories with phrases like “In my experience” or “This one time in …” or anything else that helps frame the context. People can then extract what is meaningful for them.

Lisa shares the stories she’s experienced in her own work. If you check out her blog posts, you can read about how she facilitated remote retrospectives, or the time she helped developers learn to do exploratory testing, or even things she’s learned from her donkeys. Lisa recently started a new job, and will soon be telling stories about doing ensemble (aka mob) programming, and how that’s helping her to revive her coding skills as well as sharing her testing expertise.

You too have stories. You have your own unique experiences to share. Each story you tell might help someone else. You may think people have already heard similar stories, but you might explain something in a way that really gets through to some people. One thing to remember is that it is your experience, and it may not be true for everyone.

-------

Check the website for scheduled classes near you! Or register for our newsletter for original content, which comes out monthly.


Thu, 22 Apr 2021 16:24:13 +0000 https://agiletestingfellow.com/blog/post/share-your-stories https://agiletestingfellow.com/blog/post/share-your-stories
Face to Face Collaboration Our last blog post was about the importance of asking questions when moving towards a continuous delivery environment.

Asking questions is important, but sometimes how you ask them is equally important. Janet shared one experience in a blog post when she thought she had all the answers. After all, she used email and instant messaging to talk with the team, so they must have exchanged enough information.

If you don’t feel like reading the whole post, the short story is that Janet assumed one thing and thought she had understood the problem – her confirmation bias showed up very clearly (in hindsight). Only when Janet, the customer and the developer were on a video call, and the customer walked them through the example and explained her concerns, did they really understand the whole issue.

Distributed teams are challenging, and even more so when members are in different time zones. It is important to get the whole team together to build a shared understanding of the problem – whether it’s creating a new feature or diagnosing a bug. Don’t be afraid to ask for a conference or video call when you have questions, or when you think you didn’t understand the answer. Sometimes, even if you think you know the answer, it pays to have a quick face-to-face conversation, showing examples and sharing your screen.

Face-to-face conversations don't happen naturally in remote teams, so make the effort. It's worth it.

 

Check the website for scheduled classes!


Fri, 19 Mar 2021 15:52:21 +0000 https://agiletestingfellow.com/blog/post/face-to-face-collaboration https://agiletestingfellow.com/blog/post/face-to-face-collaboration
More questions testers can ask about delivering features In our January blog post, we talked about the importance of asking questions about new features and building shared understanding among the team. Shared understanding should extend beyond feature behavior to other concerns. For example, once we have built a feature, how do we get it into the hands of our customers?

 

Towards continuous delivery

 

Many agile teams are practicing continuous delivery (CD) or are working towards implementing it. The goal is to get changes and new features into production safely, quickly and sustainably. The backbone of CD is the deployment pipeline: all the steps a change needs to go through from the time it is committed to the code repository trunk or master, to the time it is deployed to production. There are many questions we can ask about our deployment pipeline to make it better.

One question testers or business stakeholders might ask is, "When we need to fix a critical production problem, how long does it take from when the programmers check in the code for the fix, to the time it's actually in production?" This is a great time to start drawing on a whiteboard or lining up cards on a table with team members who understand the process. Does the ‘hot fix’ go through the same automated tests as a normal code change? What about exploratory testing? What pieces are automated and what requires human intervention? Do we skip some steps to save time?

 

Understanding your team's pipeline to production

 

Of course, you also want to learn about the normal steps each code change goes through in your team's pipeline. Ask a programmer or operations specialist on your team to walk you through the process. The pipeline can consist of both manual and automated steps. For example, when a code change is checked in, the first step might be static code analysis to see if the code meets team standards. The next step might be running automated unit tests. Manual steps might include exploratory testing. Deployments to test environments could be automated or require human intervention. 
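The walkthrough above can be sketched as a simple model of your pipeline. The step names, order, and durations below are hypothetical stand-ins; the point is that once the steps are written down, questions like "what is our end-to-end lead time?" and "where do we need a human?" have concrete answers:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    automated: bool
    minutes: int  # typical duration of this step

# Hypothetical example steps; map your team's real pipeline here.
pipeline = [
    Step("static code analysis", automated=True, minutes=2),
    Step("unit tests", automated=True, minutes=5),
    Step("deploy to test environment", automated=True, minutes=10),
    Step("exploratory testing", automated=False, minutes=60),
    Step("deploy to production", automated=True, minutes=10),
]

lead_time = sum(step.minutes for step in pipeline)
manual_steps = [step.name for step in pipeline if not step.automated]
print(f"End-to-end lead time: {lead_time} minutes")   # 87 minutes
print(f"Needs human intervention: {manual_steps}")
```

Doing this on a whiteboard works just as well; the value is in making every step, and who performs it, visible to the whole team.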

 

(Image: testing your deployment pipeline, Agile Testing Fellowship)

Fast feedback is important, so ask what your team is doing to speed up the automated regression tests and deployments to test environments. Can you run functional, performance and security testing in parallel? What can you do in a virtual environment, and what does that require? If your process includes user acceptance testing, do the users have their own environment, and is that testing done in a timely manner? Are there tests being done by testers or developers that are candidates for automation? You can help your team prioritize efforts to speed up those feedback loops.
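To see why running suites in parallel shortens the feedback loop, here is a minimal sketch using Python's standard library. The three functions are invented stand-ins that simulate suite run times with short sleeps:

```python
import concurrent.futures
import time

# Stand-ins for real test suites; each sleep simulates run time.
def functional_tests():
    time.sleep(0.2)
    return "functional: pass"

def performance_tests():
    time.sleep(0.2)
    return "performance: pass"

def security_tests():
    time.sleep(0.2)
    return "security: pass"

suites = [functional_tests, performance_tests, security_tests]

start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as pool:
    futures = [pool.submit(suite) for suite in suites]
    results = [future.result() for future in futures]
elapsed = time.perf_counter() - start

# Run in parallel, wall-clock time is close to the slowest suite
# (about 0.2s here), not the sum of all three (0.6s).
print(results, f"{elapsed:.2f}s")
```

Real CI tools do the same thing with parallel jobs or stages; the feedback-loop math is identical.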

 

Testing the pipeline

 

Another important question is "How has the pipeline been tested?" A common misconception is that pipelines are simple and execute the testing, but don't need testing themselves. In fact, automated portions of pipelines include configuration files and scripts that determine when and how one step triggers another step, how team members are notified of results, and how artifacts are stored or deployed. Like any software, a pipeline needs testing to make sure it does what we want.
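As one sketch of what "testing the pipeline" can mean in practice, the check below parses a pipeline definition and asserts rules about step ordering and failure notifications. The step names and config format here are invented for illustration; your CI tool will have its own:

```python
import json

# A hypothetical pipeline definition, as a CI tool might store it.
config = json.loads("""
{
  "steps": ["static-analysis", "unit-tests", "deploy-test", "deploy-prod"],
  "notify_on_failure": ["team-channel"]
}
""")

def check_pipeline(cfg):
    """Return a list of problems found in the pipeline definition."""
    steps = cfg["steps"]
    problems = []
    if steps.index("unit-tests") > steps.index("deploy-test"):
        problems.append("unit tests must run before deploying to test")
    if steps.index("deploy-test") > steps.index("deploy-prod"):
        problems.append("the test deploy must precede the production deploy")
    if not cfg.get("notify_on_failure"):
        problems.append("no one is notified when the pipeline fails")
    return problems

print(check_pipeline(config))  # prints [] when all checks pass
```

Running checks like these on every change to the pipeline configuration catches mistakes in the pipeline itself before they silently skip a testing step.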

We may hear statements like "DevOps teams are in charge of pipelines," but don’t forget that testing is an integral part of DevOps. Teams benefit when all roles, including testing, engage in improving pipelines to production. Just like our feature code, pipelines require analysis and testing to validate and optimize them. It's one more area where, as testers, we can work with our teams to mitigate risk.

 


 

Tue, 16 Feb 2021 17:48:21 +0000 https://agiletestingfellow.com/blog/post/more-questions-testers-can-ask-about-delivering-features https://agiletestingfellow.com/blog/post/more-questions-testers-can-ask-about-delivering-features
Ask questions for understanding! Welcome to our first Agile Testing Fellowship blog post! Here we will share ideas from us (Janet and Lisa), from our instructors, and from featured guests. We hope to cover a wide variety of topics that will interest you. We decided to kick off this site with a post about asking questions – a skill we can all keep improving.

Ask questions for understanding!

Many teams kick off new projects or new feature sets as they kick off a new year. It’s often a time for re-configuring organizations and trying new experiments. These new beginnings are a great opportunity to help your team build shared understanding about the quality you want to deliver to your stakeholders.

Picture yourself in a planning meeting for a new epic or feature set. After your product owner or manager describes the new capabilities your team will need to deliver, ask yourself whether you understand the business goal this new project will attain. If it’s not clear, ask: “What is the goal of this new capability? Is it solving a problem for our business, for our customers?” Other great questions to ask are “How will we know if this new feature is successful? What can we measure? How soon after releasing can we evaluate whether it’s meeting our goal?” The resulting discussion will help your team focus and plan.

 


Asking questions is a big part of testing. When you have questions, it’s likely that other people in the room have the same questions, but for some reason they’re reluctant to ask. If someone uses a term you don’t know, it’s likely others don’t know it either, so ask. Don’t be afraid. For example, Lisa’s teammates were reporting experiments with different languages for a new front end and kept referring to static and dynamic typing. She asked for clarification, and probably wasn’t the only person in the room glad to get it.

Never be afraid to ask questions. Your team members will most likely be happy to explain. You may even trigger a conversation that reveals hidden assumptions and helps the team get on the right track.

Mon, 25 Jan 2021 00:09:17 +0000 https://agiletestingfellow.com/blog/post/ask-questions-for-understanding https://agiletestingfellow.com/blog/post/ask-questions-for-understanding