In Pursuit of Quality: Shifting the Tester mindset

Last time, I wrote a book review on Lean Analytics. Towards the end of that post, I lamented that I see a lot of testers in my neck of the woods trying to map their old way of thinking onto what’s coming next. Several folks (individual contributors and their managers) have come to me wondering why Test should move into this world of “data crap” and why the way they have been operating is suddenly so wrong. It is my hope today to explain this.

But before continuing, I’d like to try something new and offer you a poll.

Please consider the following:

So which did you pick? Over time, it will be interesting to me to track how people view this simple comparison. I have been doing this example for almost a year now. When I first started it, about 1 in 2 testers polled would select the bug-free code. Whereas with testers I talk to lately, about 1 in 3 will select it. I definitely view this as a good sign that folks are starting to reflect on these changes and adapt. My ideal world is that, 1 year from now, the ratio is closer to 1 in 10.

Why is this poll so hard for folks?

Primarily, it is due to our training. Test was the last line of defense – a safety net – needed to assure we didn’t issue a recall after we released the product to manufacturing. When I first started in the software development world, 1.44 MB floppy disks were the prevailing way customers installed new software onto their systems. Windows NT 3.1, as an example, required 22 of them. It was horrible. Installation on a new machine would take the better part of a day, disks would be asked for out of order, and, lastly, people would often get to the end of the install only to discover that a setting they were asked for at the very beginning was wrong and that it was easier to just redo the install than to hunt through the manual to figure out how to fix it afterwards.

Customers who got their system up and running successfully and found a major bug afterwards would be quite sore with us. Thankfully, I have not heard of this in quite some time, but back then, Microsoft had the reputation of shipping quality in version 3.0. There was a strong and successful push within the company to get our testers trained with a singular mission: find the bugs before our customers do and push to get them fixed. I was proud to state back then that Microsoft was the best in the world at doing this.

The problem I am attempting to address is the perceived loss of value in Test’s innate ability to prevent bugs from hitting the customer. A couple of months ago I presented to a group of testers, and one of the questions asked was, “All of this reacting-to-customer stuff is great, but how can we prevent bugs in the first place?” Thankfully, someone else answered that question more helpfully, as my initial response would’ve been “Stop trying to do so.”

The core of the issue, in my opinion, is that we have continued to view our efforts as statically valuable – that our efforts to find bugs up front (assuring code correctness) will always be highly regarded. Unfortunately, we neglected to notice that the world was changing. That, in fact, the value was dynamic: our need to get correctness right before shipping was actually tied to another variable – our ability to react to bugs found by customers after shipping. The longer it takes us to react, the more we need to prevent correctness issues up front.

“Quality redefinition” – from correctness to customer value

A couple of years ago, I wrote a blog post, Quality is a 4 letter word. Unfortunately, it seems that I wrote it well before its time. I have received feedback recently from folks stating that that series of posts is quite helpful to them now. One such person had read it then and had a violent allergic reaction to the post:

“Brent, you can’t redefine quality”.

“I’m not!”, I replied, “We’ve *always* had it wrong! But up until now, it’s been okay. Now we need to journey in a different direction.”

While I now refer to the 4 pillars of Quality differently, their essence remains the same. I encourage you to read that post.

The wholeness of Quality should now be evaluated on 4 fronts:

  • Features that customers use to create value
  • The correctness of those features
  • The extent to which those features feel finished/polished
  • The context in which those features should be used for maximum value.

Certainly, correctness is an important aspect of quality, but usage is a significantly greater one. If you take anything away from today’s post, please take this:

Fixing correctness issues on a piece of code that no one is using is a waste of time & resources.

We need to change

In today’s world, with services lighting up left and right, we need to shift to a model that allows us to identify and improve Quality faster. This is a market differentiator.

It is my belief that in the short term, the best way to do this is to focus on the following strategy:

    • Pre-production
      • Train your testers to rewrite their automation such that Pass/Fail is determined not by the automation itself, but by leveraging the instrumentation and data exhaust emitted by the system. Automation becomes a user simulator, while testers grow muscle in using product logs to evaluate the truth. This set of measurements can then be applied directly to production traffic when the code ships live.
      • Train your testers to be comfortable with tweaking and adding instrumentation to enable measurement of the above.
    • Next, move to Post-production
      • Leverage their correctness skillset and their new measurement muscle to understand the system’s behavior under actual usage load
      • This is an evaluation of QoS, Quality of Service. What you want Testers learning is what the system does under production traffic, and why.
      • You can start here in order to grow their muscle in statistical analysis.
    • Then, focus their attention on Customer Behavior
      • Teach them to look for patterns in the data that show:
        • Places in the code where customers are trying to achieve some goal but encountering pain (errors, crashes, etc.) or friction (latency issues, convoluted paths to the goal, etc.). Generally, this is very easy to find.
        • Places in the code where customers are succeeding in achieving their goal and walking away delighted. These are patterns that create entertainment or freedom for the customer. Unlike the above, this is much harder to find; it will require hypothesis testing, flighting, and experimentation, but it is significantly more valuable to the business at hand.
      • Being stronger in stats muscle will be key here. Since Quality is a subjective point of view, this will force Test away from a world of absolutes (pass/fail) and into one of probabilities (likelihood of adding value to customers vs. not). It is definitely wise to befriend your local Data Scientist and get them to share the magic. This will help you and your team scale sustainably.
      • This is an evaluation of QoE, Quality of Experience. What you want Testers learning is what the Customers do, and why.
    • You will then want to form up a dynamic set of metrics and KPIs that capture the up-to-date learnings and help the organization quickly operationalize its goal of taking action towards adding customer value. This will generate Quality!
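The pre-production step above can be sketched in code. This is a minimal illustration, not a real harness: the log format, event names, and latency field are all hypothetical. The key idea is that the automation only drives the scenario; Pass/Fail comes from the product’s own telemetry.

```python
import json

# Hypothetical structured log lines ("data exhaust") the product emitted
# while an automated user simulator drove a checkout scenario.
LOG_LINES = [
    '{"event": "cart.add", "session": "s1", "latency_ms": 42}',
    '{"event": "checkout.start", "session": "s1", "latency_ms": 120}',
    '{"event": "checkout.complete", "session": "s1", "latency_ms": 310}',
]

def verdict_from_logs(log_lines, required_events, latency_budget_ms=1000):
    """Derive Pass/Fail from the product's telemetry, not from the driver."""
    events = [json.loads(line) for line in log_lines]
    seen = {e["event"] for e in events}
    missing = set(required_events) - seen          # goal milestones never logged
    too_slow = [e for e in events if e["latency_ms"] > latency_budget_ms]
    return (not missing and not too_slow), missing, too_slow

passed, missing, too_slow = verdict_from_logs(
    LOG_LINES,
    required_events=["cart.add", "checkout.start", "checkout.complete"],
)
print(passed)  # True: every milestone was logged within the latency budget
```

Because the verdict reads only the event stream, the very same measurement can be pointed at production logs once the code ships live.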
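The shift from absolutes to probabilities can also be sketched. Below is a two-proportion z-test comparing task-success rates between a control group and a flighted treatment; the numbers are purely illustrative, and in practice your local Data Scientist will have sturdier machinery for this.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Pooled z statistic for the difference between two success rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative flight: did the new flow help more customers reach their goal?
z = two_proportion_z(success_a=840, n_a=1000, success_b=885, n_b=1000)
print(round(z, 2))  # 2.92: well past the usual 1.96 threshold for chance
```

The output is a likelihood statement (“this difference is very unlikely to be noise”) rather than a pass/fail verdict, which is exactly the mental shift being asked of Test.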

Lastly, while executing on these mindset shifts, it will be paramount to remain balanced. The message of this blog is NOT that we should stop preventing bugs (despite my visceral response above). Bugs, in my world view, fall into 2 camps: Catastrophes and Other. In order to keep Quality high, it is critical that we continue to work to prevent Catastrophe-class bugs from hitting our customers. At the same time, we need to build infrastructure that will enable us to react very quickly.

I simply ask you to consider that:

    As the speed with which we can react to our customers INCREASES, the number of equivalence classes of bugs that fall into the Catastrophe class DECREASES. Sacrificing speed of delivery in the name of quality makes delivering actual Quality so much harder. Usage now defines Quality better than correctness.

Ken Johnston – a friend, former manager, and co-author of the book How We Test Software at Microsoft – recently published a blog post on something he calls “MVQ”. Ken is still quite active on the Test conference scene (he has one next month), but if you ever get the chance, ask him: “If you were to start writing the second edition of your book, how much of the content would still be important?” His response is quite interesting, but I’ll not steal that thunder here.

Here’s a graphic from his post for your consideration. I think it presents a very nice balance:

Thank you for reading.

Book Review: Lean Analytics – great primer for moving to DDE world

 

So O’Reilly has this great program for bloggers, called the Reader Review Program. They will let me pick out a book of my choosing and read it for free, as long as I write an honest review of the book here on my blog. Because I know that I will eventually be posting reviews here, I will be picking books that I think might have value to the audience following me. This is my first foray into this model. Right now, I think it’s an “everybody wins” thing, but I will pay heightened attention to how this affects the integrity and reputation of the site. Since I am generally reading 5-10 books at a time, I highly doubt that I will post reviews like this more than once or twice a year. Your feedback is welcome.

 

Lean Analytics by Alistair Croll & Benjamin Yoskovitz; O’Reilly Media

    The title above will take you to O’Reilly’s site so you can delve further if you choose.

 

Review

    As the title suggests, Lean Analytics is a solid combination of two powerful movements in the Software Engineering world: Lean Agile and Data/Business Analytics. While there are several books out there discussing the need for data science and growth in statistics, this book really covers the What, How, and Why of using data to drive decision making in your specific business. Without being too technical or academic, it introduces readers to the techniques, metrics, and visualizations needed for several common business start-up models in operation in today’s world.

    

    I am ***REALLY*** fond of the Head First series of books, and that treatment is just about the only thing that could make this book better. After The Lean Startup, this is probably the most useful book for those trying to iterate fast in today’s software engineering world. I found the information to be very straightforward and easy to follow. While I think the authors really tried to cram everything they could into the book (at times making it read awkwardly), they introduce practical examples of how and when to use the material.

 

Several sections of the book are quite good, looking at lightweight case studies of startups and the analytics they used to navigate muddy waters. The book tries to make all types of software business accessible, ranging from how to categorize the growth phase of your company, to what things to use during your value phase, to which analytics are appropriate for various types of companies (mobile apps versus SaaS, e.g.), and even how to operate within enterprises. As a result, the depth can at times be lacking, but if you are looking for a breadth book that covers all of the basics, this one might be good for you. Reading it is one of the reasons I decided to start my Masters in Analytics. I would have liked more information in the case studies, more examples of actual data to look at, and more suggestions on how to avoid false metrics and guidance on what to look for.

 

One of the struggles I am seeing at my place of employ is that Test is shifting away from automation roles and into data pipeline roles. This means we are just changing the way in which we deliver information so that others can analyze it and make the “adult” decisions. This, imho, is not good. But it falls within the Test wheelhouse, so it is safe. Please, please, please: instead, grab this book and take a leadership role. This book will help us start the discipline’s move into a direction-setting role instead of just a measurement one.

 

This will likely be the topic of my next post. Thanks for reading…

 

What would a Leader do?

Just over 2 years ago, I wrote my most viewed blog post to date: The Tester’s Job. As 2013 comes to a close, there is much hullaballoo happening in the Microsoft Test community. After a series of non-trivial reorgs, it is clear Microsoft is taking a huge step away from the traditional role of Testers, and even the word Test is being stricken from the vernacular, with our fearless leaders’ titles being replaced with a new one: Director of Quality. Followers of this blog know this is a change I have felt coming for a while now, as well as aggressively supported.

Last May, my own team underwent this change, with some great successes and some even greater learnings, which I presented recently to a set of Test leaders. This presentation has become the closest thing to viral that I’ve ever experienced – mostly, I believe, because folks are anxious about the change and are eager to use whatever data they can find to help predict their own future. The deck mentions some pretty significant changes needed to make the paradigm shift happen, the most controversial being a very large change in the Dev-to-Test ratio (a number that was historically used to determine the “right size” of test teams during reorgs). In my experience, some folks are more comfortable with innovating and being a starter, whereas other folks are superb at executing and getting work closed. Between the two, I have always been much more interested in being on the frontlines of change. Accordingly, I’ve never been much afraid of change, and view it as an opportunity to explore something new. I thoroughly enjoy seeing how these new learnings can be used to grow myself, my team, and whatever product I happen to be working on. As a result, my love of the New and of Learning has made me quite adaptable over the years. However, I understand that not everyone is built this way, and even for those who are, the changes coming might be more than they can tolerate.

One colleague sent me the following in email after seeing my presentation:

“Sobering. This is a lot of where we are headed in [my group], but without (so far) shifting any resources. This may be really hard on Test from the sounds of it. Are there any suggestions on how to lessen the pain? Do we just rip the band-aid and give people retraining?”

Since this is something I am getting a lot of questions about lately, I felt it would be a good topic for a post. As I mentioned last time, Spider organizations don’t scale to the degree we will need them to, so we need to build up Starfish muscle. While this does mean a move towards headless organizations, it by no means describes leaderless ones. In fact, leaders become critical. One of the challenges with this shift is that people are so accustomed to living in Spider organizations that they forget how, nay, are afraid, to lead. This is the first change I think people need to make.

Here’s a very simple strategy I have found that helps me when times are troubling:

  • Ask and answer: What would a Leader do? – If there were an actual leader right here and now, what would they be doing? Why? What goal would they be trying to achieve? How would they go about it?
  • Be that leader – Why not you? Everyone has the ability to lead. It’s just easier not to. Choose to lead. Bravery is doing the right thing even though you are afraid.

So what would a leader do in these times? Here’s what I think:

  1. Keep their head on straight. There are 4 key things people need from their leaders: Trust, Hope, Compassion, and Stability. You cannot provide *any* of these if you join the panic. Imagine the state of mind of the fireman who is going UP the stairs of a burning building.
  2. Manage the Change
    1. Explain what, why, and how the change is occurring. All three! Many times I see leaders leave one of them out. Folks need all three in order to triangulate on the correct direction.
    2. Explain the goal and new direction. Telling folks where to head is easier and more beneficial than telling them what to avoid. “We need to ship weekly” is better than “We need to stop shipping every 3 years” as examples.
    3. Enlist others in making the change happen. People are more likely to follow something that they contributed to creating. I’ve always been a fan of enlisting the team to come up with the logistics and dates and to place themselves into the positions that they are the most passionate about.
    4. Pull the trigger. Be the tipping point to get the momentum going.
  3. Train Themselves – This is probably the single most important item. You cannot help others if you have not helped yourself. You need to learn more about the new world you are heading towards. Dive in. Then and only then will you be in a position to guide others. Seek out internal experts. Change jobs. Go back to school. Head to conferences. I read recently that if you just read 1 hour a day in the field of your choosing, you would be an international expert on that topic in just 7 years. Those investments add up very quickly. Do not underestimate it. (I, myself, will be starting my Masters in Analytics on January 6th! (very excited about it))
  4. Train Others – You need to distribute what you have learned. A couple of things I have been doing lately:
    1. When someone asks me to talk to them on a topic, I assume they will not be the last, so I organize it as a presentation.
    2. Record it, so it can be shared.
    3. Create a local community and ask each interested person if they’d like to join it. I can no longer find the reference, but something like 80% of people will join if they are simply asked. Try to drive participation in the community to make it self-sustaining.
  5. Get out of the way – Remain as the bottleneck to the change for as little time as possible. Someone may need to stay at the helm in order to make sure the momentum continues in the right direction, but once it is clear that it has, get out of the decision making process and let the team be empowered.

Rip the Band-Aid?

    To be honest, I am a proponent of Band-Aid ripping in these situations. People are afraid to make changes due to the unknown consequences. As Brad Pitt’s character asked in Moneyball, “Which would you prefer: a clean shot to the head, or 5 shots to the body and bleed to death?” The longer you wait, the worse the pain will be for those involved. But DO NOT rip the Band-Aid without a plan.

One last note: one very popular question people have been asking me lately is whether I think Test is dying. I believe Test (as we know it) is like a chicken with its head cut off: it’s dead, but the rest of the body is still flapping about and doesn’t quite know it yet. I have now been at the company for 20 years and, in that time, have seen a number of these big transitions occur in the Test discipline. I find it wise to remember: each time these transitions occurred, a fairly large number of people were affected, but as a whole, we improved and became more valuable to the company. I think this time around will be no different. My view is that, given our innate ability to code and test, coupled with our passionate pursuit of quality, our staff is well suited to being the engineers of the future – perhaps better suited than any of the other disciplines. However, whichever way the wind blows, it’s clear we will need to change. My New Year’s Resolution is to help anyone and everyone I can to make this migration. After all, it’s what a leader would do.

HAPPY NEW YEAR!

Irresponsible Accountability

How familiar is this narrative to you?

<Setting: Triage/shiproom discussion not so far away and not so long ago… Key contributors and decision makers are all in the room. They are trying to collaborate towards determining what bugs need to be fixed before they can ship. The planned ship date is looming ominously just around the corner….>

Dev Manager (of Managers): We should fix these bugs…

Dev: What’s the bar? I thought we were recall class only as of today.

Dev Manager: They are easy… We should do them… You guys should just be able to get them out of the way.

….

Sometime later

Dev: There’s one last thing I’d like to discuss. It’s clear that from the bugs we accepted today, we are ignoring the recall class only bar.

Dev Manager: I don’t care about the bar; we need to do the right thing.

Dev: But if we keep doing that, how are we going to land this project in time?

Dev Manager: I’m sorry. I don’t understand your question or concern.

Program Manager: Um, Mr. DM, sir, what Mr. Dev means is we’d like to understand Leadership’s plan to get to the quality goals within the dates specified.

Dev Manager: What?!?! That’s *your* job. You want me to do that too?!?

When I heard this story from one of my mentees, I had already been thinking on the nature of Accountability Cultures. His team is in a bit of a mess right now as they quickly try to align reality with promises made to customers... They have bitten off far more than they can chew. This is made worse by the severe lack of leadership on his team, as evidenced by the Dev Manager in the story above, who is clearly a JeDI (and no, that is not good!).

So what do I mean by an accountability culture? I mean those workplaces where management is focused on assigning owners to tasks/work for the purpose of “knowing who is responsible”. These organizations are generally hierarchical in nature, and this ownership model is intended to both “empower” and “control” how work is being done. However, far too often, the result it achieves is simply the knowledge of who, precisely, to blame when work fails. In other words, it does not optimize for helping teams succeed, but rather for helping them fail. Teams who want to succeed figure out ways to work around their management’s policies, either by “going dark” and doing work in secret, or by working much harder than needed in order to satisfy both management’s quest for owners and the desire to move the business forward.

My litmus test for the accountable person for anything: S/he is the one who apologizes when things go astray and is the one looking to make things right. This is significantly different than “the one who is to blame”.

Thankfully, I have worked in only 2 such teams where I felt the culture was simply too toxic to be fixed. Management usually does not want to hear, nor understand, that their behavior is causing a negative downstream effect on their staff.

How do teams get to this point?

Here’s how I think this happens.

  • Spider is beating the Starfish: Human society is constantly fighting it out over the best way to organize in order to produce the best results. Ori Brafman writes about this in his book The Starfish and the Spider.
    • Starfish models are headless. Teams of people work together to achieve goals. The individual is paramount and bands together with others towards complementary goals. If you have played or observed World of Warcraft or similar games, you’ve seen this model in action. They are tribal in nature, fast, and very flexible, but they don’t efficiently utilize resources in a single direction. Instead, they are effective at changing direction at any moment in time, and collaboration amongst the team’s members thrives. Decisions can be made quickly because the framework is principled, not tactical (e.g., “do the features in your team’s backlog that add the most fiscal ROI” vs. “add the ability to bold text in a help file”).
    • Spider models have a head. They scale to a greater number of people, but have a critical flaw: kill the head and the body dies with it. If the head has a bad plan, the body goes along for the ride. This is also known as “command and control”. Competition thrives in this model; folks quickly understand that to “win” you don’t need to convince others... only the head. Spider models are more effective when you know the objective to be achieved and how it needs to happen, but they are not effective when decisions need to be made quickly and repeatedly, as is common in the software services world.
  • Peter Principle is alive and well (it’s proven!)
    • People are getting promoted to where they are no longer competent.
      • They are not taking training to understand how to scale to their new role, nor are they learning the state of the art techniques
      • They have to make more and more decisions with less and less data, which is harder and harder. So the validity of their decision becomes more and more unstable.
  • Jensen’s Law of Politics – For years, I have been teaching folks this insight: “He who is defensive, loses... Always...”
    • One cannot win *any* game by playing defense only.

So what happens is: Spider models get propagated (for a variety of reasons: control, $$, clarity of purpose, etc.). People then get promoted to the point where they can no longer scale to efficiently provide good decisions to their subordinates. Since Spider models are competitive by nature, people quickly (subconsciously) realize that the way *they* continue to win and keep the head position is by taking the offensive.

    They blame others.

This then gets sustained in a vicious loop:

  1. First off, Leaders hate getting blamed.
  2. But Leaders don’t have the time to learn how to do things differently.
  3. And this process, as a decision-making framework, is too slow. Leaders don’t (can’t?) take the time to understand the root causes of failure or the dependencies needed for success. So they keep plodding on, essentially “hoping” they succeed.
  4. But since they don’t, they resort to finger-pointing and brute-force tactics (“30 hr” workdays) to set things straight again.

Is this fixable?

I think so, but teams *must* learn Starfish techniques, and the environment must support them.

  1. In my mind, this means any form of adaptive project style (Scrum, Kanban, XP, etc.). But teams need to make sure they are *adapting* and not just iterating... Teams should be encouraged to act, but to validate everything with actual learning.
  2. Create an environment where people commit to goals, not one where they are told to commit. Don’t fool yourself: people cannot be told to commit to anything. In order for them to commit, it must fulfill some important purpose for them, it must be achievable, and they must understand the risks, the rewards, and the degrees of freedom they have if the project goes out into the weeds.
  3. Teams need to be taught to work together to achieve goals. Taught to trust and (more importantly) rely on their peers to move forward.
  4. There is an important distinction between solving problems and owning solutions. Owning a solution doesn’t guarantee that it actually solves any problem that is important. For the past 3 teams I have led, I have had a team motto that I adore: “My team’s job is to solve problems, not write tools or code. If I can leverage another team’s component, I will. NIH kicks ass!” (NIH: not invented here)
  5. Give folks guidance and principles to make decisions on their own, and enforce this. This one is probably the hardest to do. Folks get used to not owning their own decisions; it’s uncomfortable for them. “Are you going to blame me if I fail?” My style is to let individuals make the decision, but reinforce that the team will own the cleanup if the decision fails. I try to create a spirit on the team that individuals are shepherding work, but the whole team is accountable for all work in progress (see earlier post). This helps people feel empowered and forces the team to have each other’s backs.

Lastly, I recently got certified as a Program Consultant for the Scaled Agile Framework, which means that for the next year I am allowed to teach it and certify others. One of the really great things I think those folks have figured out is that, in order to truly scale, you need to find an efficient way to decouple the business strategy from the implementation tactics. I’m over-simplifying, but in essence:

  • Management owns setting strategic direction, measurement of success, cadence, and funding
  • Team owns creating an implementation they commit to.
  • Management owns creating the decision framework that is principled based. Teams are pre-informed with the constraints.
  • Team owns making the decisions, staying within the constraints.  Team owns correctly setting expectations with Management.
  • When things go wrong, both sit in a room as peers to work out how to adapt. Both have an important and distinct role to serve. When they are working together to thrive, the business does too.

Special Bonuses:

  • Here’s one of my favorite videos of the famous All Blacks rugby team, showing what committed teamwork looks like.
  • The US military defines Command and Control as: “The Exercise of authority and direction by a properly designated commander over assigned forces in the accomplishment of the mission. Command and control functions are performed through an arrangement of personnel, equipment, communications, facilities, and procedures which are employed by a commander in planning, directing, coordinating, and controlling forces and operations in the accomplishment of the mission”

    Their most recent investigations show that C2 doesn’t scale due to:

    • The essence of command lying in the cognitive processes of a single commander
    • The influx of data now available
    • The speed at which operations must execute

Automation doesn’t kill productivity. People do.

Shortly after I wrote Forgive me, Father, for I have sinned, I received the following email from a colleague of mine:

Professore!

I read your most recent blog. Your blog is actually dangerously close to sinning as well. In principle I agree with your sentiment, but be aware of violent pendulum swings. There is still a lot of value in the type of automation systems we have built, but it has to be tempered with a self-enforcing quality of quality, and quality of developer code measures. Good test teams actually do enable bad developer behavior. We become like a cheap compiler. Test will catch any issues, and quickly too. Developers are perfectly capable of writing solid (not bug free) code. They are just not always incentivized to do so. With a good test team, they don’t have to. At [my company], they don’t get rewarded to do so. The test team carries the burden, and the blame, for quality. There are many factors that play into the subject you have chosen. You are only tackling one facet.

Also, you are not really presenting a fix in your “how to fix” section, but rather pointing out a possible end result of the automation effort.

H

I really appreciate this sort of feedback, as it really helps me to understand where I communicated well and where I did so poorly. That blog post can be read as though written by someone who was newly “enlightened”, and automation was not invited to the renaissance. This was not my intent, and it is not the case. (Aside: I am very nearly at that point when it comes to UI automation... I get a visceral, nauseous feeling lately when I hear of folks investing in it...) When used properly, automation becomes one of the most important tools in a software engineer’s arsenal. That is the crux of it, though: it must be used properly. The point of my story is that I had not done so, and it led to some bad outcomes: thoughtlessness and poor code quality. I had done a really great job at something the business wanted me to do, but in retrospect, it was not the right way to solve the problem. In fact, perhaps it was solving the wrong problem...

Damned if you do

My eyes really began to be opened about 10 years ago. I had changed teams and become a middle manager on a product I used every day and loved. I quickly learned they had 2 big problems. First, they could not get their Build Verification Tests (BVTs) to pass 100%. I later learned that this had been the case for 6 years in a row. This by itself was interesting to me: in my experience, no team kept moving forward when BVTs failed; they stopped and fixed the problem. When I asked about it, they mentioned they had tried several things, but none of them worked. Second, the test team did not have the headcount they needed to keep up with Dev. This was the first wave of Agile development at Microsoft, and this team had decided to experiment with it. Dev believed documentation was overhead and that velocity was all that mattered. As a consequence, Dev would move *really* fast and ask for “smoke tests” – testing done by the test team before check-in. When the product still failed BVTs the next day, they would rally around the need for even deeper smoke testing. I saw a vicious loop and asked to own the solution. My manager readily agreed... The problem had gotten so bad that he was seriously considering banning all automation. He dreamed of the untrained Test Engineer world that had dominated Microsoft only a few years earlier. He felt automation killed productivity.

To solve the problem, I first measured it. I learned my teams were spending 50% of their time doing smoke testing and another 20% fixing automation. I was also able to show that these efforts were not *in any way* helping the BVTs to pass. The more things failed, the more time the teams spent trying to fix them, without success. It was depressing. Once I got to the bottom of the problem, it was fairly easy to fix. The hardest part was getting people to let go of sacred principles that they held to be true without proof. This team refused to recognize that their automation program, as implemented, was never going to work. In a nutshell, they were stuck in a vicious loop. They had super complex automation running in their simplest suite (no unit testing existed in those days) and they were using it to validate the build. Since they had not pre-validated the individual components, they *always* failed when integration occurred. This high-level automation was hard to debug. As a result, the Test team kept slowly losing more and more resources to maintenance. Bigger than that, the team was so overloaded, they did not notice that they were not fixing the problem, but rather making it worse.

Once I realized how much it was costing the project, we did three things: 1) banned E2E automation in that suite, 2) limited smoke requests to 8 hours per week per feature team, and 3) built a tool for dev to run the new BVT suite themselves on their desktops. Once this was done, the automation began to work consistently and correctly. The dysfunctional bottleneck was removed from the system.

I would come to believe that I had learned the true point of automation:

To reduce the overall cost of development.

I concluded: Automation that didn’t do this, should be stopped. I would later learn this was wrong.

Damned if you don’t

Years later, I would join another team that had the opposite problem. Their system at that time was “not automatable” (or so I heard over and over). Really what this meant was that automation was very hard and expensive, and no one had created the hooks to make it possible. Because of this, they had a small army of vendor testers who would do manual testing every day. The team (including me) thought this was super expensive, so we looked into starting an automation program (after all, that would make it cheaper, right?)

Our constraints:

1) They did (yet another) variant of “agile” where they planned out their 2-week sprints based on dev capacity only. As a result, time for automation was very scarce.

2) There were far too few unit tests. As a result, dev “needed” test to work night and day to validate the new code in time for sprint end.

3) As I mentioned above, test hooks were missing and/or unstable.

4) The vendor team was only able to keep running the same tests… They did not have the ability to absorb more tests into their runs. As a result, monthly test passes had to be funded by the sprinting testers. This starved the sprint teams for roughly half of each month.

Lack of automation was killing productivity.

My manager and I worked on this over and over and finally came up with a solution. I would take a few of my team and create a new team responsible for curating the automation.

Their goal would be to understand and optimize the execution of test cases for the division.

NOTE: this following part is not really needed for this story, but I am including it mostly because I think it was a nifty process invention. You can skip ahead to “THE POINT” if you like.

Here’s how we started:

1) The Optimization team started by getting all teams to document and handoff their tests, automated or not. Teams were motivated: a team that handed off their tests would no longer be responsible for running their tests during the monthly test pass.

2) The Optimization team would own these passes instead.

3) The Sprint teams were required to write whatever automation they needed in order to get to Done and exit the sprint. This largely meant sparse unit tests at best, but it enabled the sprint teams to have higher confidence that the code worked as expected each sprint. This by itself was a massive improvement.

4) The Sprint teams were also required to write the test hooks needed for that automation.

5) After the initial handoff, sprint teams were required to handoff again at the end of each sprint.

Once tests were handed off, the Optimization team owned the following work:

1) Establish SLA: Adjust the priorities on the test cases into 4 different SLA buckets: Daily, Sprintly, Monthly, Quarterly. (Aside: this team shipped every 4-6 months.)

2) Drive getting these tests executed using the Vendor team

3) Prune: Length of time ignored was used to determine a test’s importance. Any test case that had been consistently failing for “too long” (initially set to 3 months) would be moved to an ‘archive’ folder (essentially deleting it) and mail would be sent to the team that owned the relevant area.

4) Categorize and Automate: Go through each test case and categorize by the type of automation problem that test represented. UI? Stress? Backend storage issue? API? Etc. There were eventually around 15-20 categories. They would then automate whole categories based on their ROI. This was considerably more efficient than automating all of the P1’s across all of the categories.

5) Maintenance: Frontline investigation on any test automation failure when the vendor team reported it and either fix the problem or move it to the sprint team’s backlog.

It took a good while to get the priorities right based on business need and the team’s desire/ability to react to a failure, but once we did, we had an efficient model for funding the execution of the manual suite.
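The pruning rule (step 3 above) is mechanical enough to sketch in code. This is my own minimal illustration, not the team's actual tooling; the field names are hypothetical, and the 90-day threshold stands in for the "3 months" mentioned above:

```python
from datetime import date, timedelta

PRUNE_AFTER = timedelta(days=90)  # "too long": initially set to 3 months

def prune(test_cases, today):
    """Move consistently failing tests to the archive; collect teams to notify.

    Each test case is a dict with a 'failing_since' date (None if passing)
    and an 'owner_team' to mail when the test gets archived.
    """
    archived, notify = [], set()
    for tc in test_cases:
        failing_since = tc.get("failing_since")
        if failing_since is not None and today - failing_since > PRUNE_AFTER:
            tc["folder"] = "archive"  # essentially deleting it
            archived.append(tc)
            notify.add(tc["owner_team"])
    return archived, notify
```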

Every day the vendor team would get a backlog of tests to run (see fig 1):

  • 2/3 of the vendor team’s time would be spent on running the daily tests… all of them.
  • 2/3 of the remaining time would be spent on the sprint tests. A small chunk would be executed each day, so that all would be executed at least once each sprint.
  • 2/3 of the then-remaining time would be spent on the monthly tests.
  • The rest would be spent on the remaining (quarterly) tests.

Fig 1: Capacity allocation plan for test execution

This allocation meant we could predict and control our payroll costs for manual test execution. If the number of tests in a category exceeded its funding level, some other test got demoted. A test being demoted out of the quarterly runs meant a conversation: either 1) the test no longer represented a risk we cared about, or 2) more resources were needed on that team.
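The 2/3 cascade can be sketched as a tiny model of fig 1, assuming the pool is expressed in hours (the function name and units are mine):

```python
def allocate(total_hours):
    """Split vendor capacity by successive 2/3 cuts: daily, sprint, monthly, rest."""
    daily = total_hours * 2 / 3          # all daily tests, every day
    remaining = total_hours - daily
    sprint = remaining * 2 / 3           # a chunk of the sprint tests each day
    remaining -= sprint
    monthly = remaining * 2 / 3
    quarterly = remaining - monthly      # whatever is left
    return {"daily": daily, "sprint": sprint,
            "monthly": monthly, "quarterly": quarterly}
```

For example, allocate(90) yields 60 daily, 20 sprint, about 6.7 monthly, and about 3.3 quarterly hours. The fixed cascade is what made payroll predictable: the capacity per bucket never changed, only which tests occupied it.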

THE POINT

Once we had done all of this work and socialized it, we were able to reduce the vendor team by almost one half. In addition, the rest of the test team loved us. We had enabled them to focus on their sprint work and taken the tiresome test pass off of their shoulders. “WooHoo!” I thought, “Look how we reduced the cost, mitigated the risk, and boosted team morale…” That had saved a TON of payroll money. Greedily, I went to the manager I had put in charge of the Optimization team and asked how we could reduce the cost even more (we were still 80% or so manual, so I assumed we could use automation to make this super cheap!)

He then pointed out that, in general, for every 1000 test cases we automated or pruned from here on, we would be able to get rid of 1 of these vendors.

“That’s fantastic”, I said, “That doesn’t seem like very many tests to have to automate. Do you know the breakeven point? What’s the max we can pay for the automation in order for it to pay off?”

“$50 per test case per year”, he replied.

“What?!? $50 per test case?!? That’s impossible! That’s essentially 1 hour per test per year. I’m not certain we can even develop the automation at that pace.”

The really great thing was that we had built a system that made it easy to see the numbers and make the call. Though I am drastically simplifying things for this post, he could show me the math readily… It was all true. Over time, the automation system would improve and its price tag would lessen, but not to the degree necessary. At the time, this news was shocking. It turned out manual testers were very effective and a lot cheaper than the automated equivalent for our product.
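The breakeven arithmetic he showed me reduces to a couple of lines. The per-vendor cost below is my back-calculation from the $50-per-test figure, not a number from the original conversation:

```python
# Assumption (back-calculated): one vendor tester costs ~$50,000/year fully loaded
vendor_cost_per_year = 50_000

# From the story: automating or pruning ~1000 tests frees up one vendor
tests_retired_per_vendor = 1_000

# The most you can spend per test per year on automation and still break even
breakeven_per_test = vendor_cost_per_year / tests_retired_per_vendor
print(breakeven_per_test)  # 50.0 dollars per test case per year
```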

Automation on this team, clearly, was not reducing the cost of development.

Cost savings was not the reason to automate. Automation was a tax.

The moral of the story is that automation’s purpose is not about saving money. It’s about saving time. It’s about accelerating the product to shippable quality.

My colleague, H, is right, of course. There is “a lot of value in the type of automation systems we have built”. We have built great tools, but any tool can be abused. I believe the fix lies in transparency and measurement: understanding that the goal is accelerating the product to shippable quality, not accelerating the intellectual laziness of its Dev and Test teams. A dev team that is leveraging the automation system test built as a safety net might be making choices that contribute to slower releases and greater expense. Please send these folks to ATDD/TDD classes to start them in a better direction.

Ultimately, it comes down to choices. What do we choose to measure and what do we choose to believe? Automation is a tool; how we use it is a decision.

All your Leftover are belong to Test

Today in the United States, we are observing Labor Day, which celebrates a strong history of economic and social benefits wrought by the American worker. So for you testers out there, can you find the bug in this code? (Being from the US is not required.)
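(The code snippet from the original post has not survived in this copy. Based on the discussion that follows, it was presumably something in this spirit; this reconstruction is entirely hypothetical:)

```python
def assign_work(item_kind):
    """Route an engineering work item to a discipline."""
    if item_kind == "feature":
        return "Dev"
    elif item_kind == "requirements":
        return "PM"
    else:
        return "Test"  # everything left over lands on Test by default
```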

Notice Test as default? Some may not even view this as a bug. That is the case if you believe a colleague of mine who stated a few weeks ago, “The definition of Test’s job is easy… We are the remainder. We pick up the work that’s left over after the other disciplines do their part.” Is that right? Is Test the cleanup crew of software engineering? I can see where that viewpoint is coming from. In TEST is a four letter word, I point out that the word TEST is meaningless because of all of the different types of work that can land on our shoulders. However, even given this, I don’t view Test as essentially a technical janitor. As I wrote in The Tester’s Job, we have a valuable and important job to do. While how we do it may take on a diverse set of actions, I don’t think it is those actions that define us, but rather the goal we are striving to achieve… the acceleration of the product to shippable quality. I do think that if you are on a Test team and the work you are doing does not help accelerate the product to shippable quality, then you are doing the job of some other discipline (which may or may not be ok… it’s just not Test work).

About a year ago, I predicted my work environment was heading for a tipping point: the roles and responsibilities for Test would change in a dramatic fashion. I predicted it wouldn’t land for another 1 to 2 years. I was wrong. I believe the tipping point has already happened. There are a number of indicators pointing in that direction – the biggest being a large shift towards service orientation. I bring this up because I think Test is fundamentally changing in a predictable fashion, and those who understand the job of testing as well as the direction Test is heading can shape their own careers according to their preferences. If you view the value-add of Test as the actions Testers do, then you may be heading for a surprise when the reformation finally lands on your doorstep.

So what are the options? This is the number one question I get from folks when I bring up this topic. I see several options available to Testers; most will likely land in one of the following:

1) Development – I think the bulk of test will shift into development roles. Testers who are strong coders and can’t shake the view that finding bugs is all that matters will go into this role, which optimizes for same.

2) PM – Those strong in customer outreach will head this way.

3) Dev Ops – Integration experts of the machine topology and software…

4) Toolsmiths – Title will likely be Developer, but they will be developing tools that accelerate the rest of the business.

5) NFR Specialists – Non Functional Requirements (E2E, Stress, Performance, Load, etc) are a needed and deep technical specialization. Once tooling makes doing this simple and friction-free, these jobs will likely minimize, but today, it is often more work for Dev to own this than they can afford.

6) Data Scientists/Analysts – Beancounters of the world unite.

7) New Career/ Company

I think when the world stabilizes again after this shake up, we will find the role of “the remainder” gone. If that was what you liked about test, then you may be heading for a surprise.

Why not instead take a moment and think through how you really like to accelerate quality and then make a conscious decision to pursue it?

I, myself, am aggressively heading down the #6 path. And loving it… Unlike prior times in my career when my job was to point out flaws in other people’s code, I truly can state I am accelerating the product towards quality.

Forgive me, father, for I have sinned.

I have sinned. In the pursuit of silly and stupid goals and beliefs, I have committed crimes against quality, delivery dates, and the businesses I have worked in.

Goals like:

  • We need to move quality upstream
  • We need to automate everything
  • We need to automate in order to scale
  • Friction-free automation is the key to unlocking limitless potential

What did I do?

I (along with my accomplices) created a hugely powerful, popular, and flexible automation system. It’s even patented (though not available publicly).

I created this system with 3 primary goals in mind:

  • Freedom – I wanted an automation system that let feature teams decide how to automate without becoming a burden to execution staff. Oftentimes, I encountered folks trying to “standardize” automation efforts in order to “avoid duplication”. But these standardization efforts often entailed teams having to rewrite their functioning automation suites just to toe the line, often at greater long-term cost than maintaining the already stabilized, but no longer in favor, suite. In addition, these efforts often took a least-common-denominator approach, where the system solved 80% of the problem for everyone but could not solve the P1 issue for a feature team, making it a non-starter or an excessive burden.
  • Friction-Free – I wanted a system that a “brainless monkey could use while sleeping”… I argued that by making it so easy to use, we could hire much cheaper staff to get work done, saving money.
  • Cheaper – by doing the above, the system would enable teams to create more automation, thereby making test passes cheaper.

If only I could go back 15 yrs and slap myself in the back of the head…. Hard…

Um, Brent, those goals seem awesome. What gives?

First off, I succeeded in all 3 goals. But here’s what happened over time:

  • By enabling team freedom, I enabled the execution team to scale and handle more and more automation frameworks, test cases, etc.
  • Because of this, teams got more invested in their automation infrastructure and created more and more collateral, faster and faster.
  • As more and more collateral got added, the system remained “friction-free”, so there was no reason to think deeper about improvements. The reduction in payroll cost justified the increase in machine execution cost (though neither was measured).
  • So more and more test passes got scheduled; this, in turn, increased automation maintenance cost, total payroll, total machine cost, etc.

In short, I helped to create Test Zombies, and not only that, I created Dev Zombies too. By trying to move quality upstream, I did the exact opposite.

 

I had enabled brainlessness to thrive.

How to fix?

As I learned from my Agile training, Systems Thinking helps to see the way. Lean Agile has an important principle that says we need to “Optimize the Whole”. This means one should consider the final goal and determine the right things to do in the immediate context to achieve it. In my case, by making test passes cheaper and encouraging simplicity, I essentially helped the team optimize for *more* test passes using cheaper resources. Ultimately this cost the business more money: we needed more and more machine resources, and while we could hire cheaper people, the test-pass load was such that we needed more and more of them to maintain the machines, schedules, etc., and lastly, we had enabled the dev team to produce arguably sub-standard code and still succeed. And what’s worse: almost all of the participants, when they think of their own sub-goals, will today *praise* me for what I have done…

By changing the “Cheaper” principle to be about decreasing the price tag for regression testing for each milestone and/or release and increasing the efficiency of the program, the implementation strategy, and therefore the automation strategy, changes in very important ways.

But that, my friends, is a story for another day.

Improve Team Collaboration with Strengths

It was about a year ago when my boss transitioned to a new team and a new person was hired to take his place. This new guy kept hearing from folks who had been directly interacting with my team that there was something special about it. He decided to investigate, so he called in my right-hand person. I was not at this meeting, but my report swore it went very close to this:

Boss: What’s so special about this team?

RightHand: Well for one, we actually collaborate.

Boss: What do you mean? You send status reports, set up meetings, or something?

RightHand: We do those things, but that is not what I mean. I mean the team works together to achieve goals. The team understands each person’s strengths and weaknesses and we use that knowledge to effectively move team goals forward. We will hand off work to whoever’s best able to do it based on time, skills, and interest… sometimes even in the middle of the work being done.

The environment was set up to allow the collective knowledge and skills of each person to flourish in a way that directly contributes to the team goals. In this way, everyone feels like they are using what they value in themselves towards goals that are valuable to our customers.

I changed jobs recently. It is now review time at Microsoft and I’ve already been given a sense of what the team’s feedback is for me. Team-based Strengths discovery and focus is a win. For me, this means I am on the path to elevating Teamwork on my new team to the same level as on my last team.

I want to develop Rockstar Teams more than I want a Team of Rockstars. I had succeeded in doing this once, and was well on my way to doing it again. I’d like to share my technique with you today.

In this post, I am going to focus on the usage of Strengths, but there are a couple of other necessary ingredients needed to make this fit together and work sustainably. This technique works best when coupled with an environment focused on small, tight, and frequent feedback loops. I introduced both teams to the concepts of Kanban and Lean Agile, which fit the bill nicely.

In general my mantra was: Anyone can select whatever work they would like to do [from the ready column], but once they have, the whole team is responsible for getting it to Done. In a sense, the person selecting work was accountable for shepherding the work through, but the team was accountable for getting the work to its goal.

In particular, several techniques were folded into developing a sense of whole-team accountability:

1) Kanban has an aspect of pulling work instead of pushing. I used this (amongst other things) to get idle folks to aggressively seek ways to help close current, in-flight work before picking up any new work. As a result, team members got adept at collaborating on work already in flight.

2) Centers of Gravity, not area owners. “Everyone on this team owns all code on this team.” While it was often true that one person may have done the lion’s share of effort on some architectural layer, they did not own it. They simply knew the context better than anyone else. I actively encouraged folks to do at least 1 work ticket in every area on a frequent enough basis to stay familiar with how the system works and how the design is transforming. If a person wanted to work on an area of the product they didn’t know, they would typically first pair-program it with the Center of Gravity for that area. As a result, team members never acquired the bottleneck mentality of “This is my code. I am the only one who should change it.” Instead, they saw knowledge bottlenecks as a problem and created successful strategies to slowly eliminate them.

3) Weekly Retrospectives are a must. I don’t think the style of retrospective you do is important as long as you are focused on learning something valuable.

One style I use *a lot* is to:

  • sort the tickets by total calendar time
  • starting with the longest one first, ask the person who worked on it the following:

“What could the team have done differently to get this ticket done sooner?”

  • I try to encourage the team to think of late tickets as cases where the team failed the ticket’s shepherd, NOT the other way around. When people feel accountable for the success of others, teamwork thrives.
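The mechanics of that retrospective style are simple enough to sketch (the ticket fields here are hypothetical):

```python
def retro_order(tickets):
    """Sort the iteration's tickets by total calendar time, longest first."""
    return sorted(tickets, key=lambda t: t["calendar_days"], reverse=True)

def retro_prompt(ticket):
    """The question asked of whoever shepherded the ticket."""
    return ("What could the team have done differently to get "
            f"'{ticket['title']}' done sooner?")
```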

4) The StrengthsFinder framework surfaces interpersonal and self-awareness insights like no other I have found.

Strengths

The StrengthsFinder framework was developed by Gallup as the result of decades of psychological research. While its quiz is similar to many psycho-typing frameworks, like MBTI, I have found its application in the workplace to be a lot more practical than those other frameworks. Strengths are a reflection of your psychology and as such rarely change in your adult life. They are important because your strengths are generally not only something you are good at and something you enjoy; studies have shown that if you apply your strengths to achieving your goals, you will have a much greater achievement rate than if you applied the talents you are weak in. E.g., if you are an introvert, you are more likely to succeed in convincing others via a series of 1:1 discussions than via one large meeting. These same studies show you are happier when you get to do what you are strong in as well.

If you’d like to take the quiz, you can look at the resources I list at the bottom of this post. Depending on the test you take, you get a result that tells you your top 5 strengths (out of a list of 34) and a good deal of verbiage around what those strengths mean. One variant is the Strengths-Based Leadership test. I prefer this one as it shows the breakdown of your strengths into operational pillars.

For example, my strengths as reflected from the Strengths-Based Leadership test:

| Name  | Rank | Executing   | Influencing | Relationship Building | Strategic Thinking |
|-------|------|-------------|-------------|-----------------------|--------------------|
| Brent | 1    |             |             |                       | Ideation           |
|       | 2    |             |             | Relator               |                    |
|       | 3    | Restorative |             |                       |                    |
|       | 4    |             |             | Individualization     |                    |
|       | 5    |             |             |                       | Analytical         |
This calls out 4 important pillars:

1) Executing – basically, this is the drive to get work done.

2) Influencing – the ability to sell your team’s ideas to others

3) Relationship Building – think Team building – creates connections between the members of the team.

4) Strategic Thinking – the ability to think through and about the work needed.

It also shows that my #1 strength is Ideation, my #5 is Analytical, etc. This, in short, shows *how* I do the things in the pillars I am strong in.

If you’d like, you can think of the A-Team when thinking about the pillars:

  • Hannibal Smith is the Strategic thinker
  • BA Baracus is the Executor
  • Faceman is the Influencer
  • Murdock is the glue that keeps the team together

Together they are a super effective team and partner with one another to achieve objectives. I wonder…. Does the A stand for Agile?

Ok, so why strengths-based focus?

  • Creates transparency around the most important part of getting work done – the members of your team
  • Exposes actionable ways for your team to collaborate optimally.
  • People are more productive when utilizing their strengths
  • Creates a more effective and enjoyable work environment

The magic really happens due to the transparency of the team sharing and understanding each other’s strengths.

If you are familiar with Agile techniques, then you are already aware of how transparency into the system makes the throughput much higher. This is *very* true when that system is your team of people.

Ok, enough of the explanation. My recommendation:

Step #1 – Get the data

  1. (optional) Get yourself a Strengths Coach. I happen to know one where I work and he has been very helpful in working with me and my teams on creating the strengths- and team-based culture.
  2. Get your team to take the quiz (see resources below) and have them send the results to someone to aggregate
  3. Use the format above and simply list out the whole team.
  4. Get the team prepared to meet by doing some homework first. Tell them to:
  • Go through the list of 34 strengths and see if any jump out as additional strengths (1 or 2).  Write them down and bring them to the meeting.
  • Go through the list and see which ones jump out as definitely *not* on your list  (1 or 2).  Write them down and bring them to the meeting.
  • Think about *you* and be prepared to answer what each one of your strengths means to *you*.     Which of your strengths are the most exciting to you and why?
  • Think through the other members of the team and identify 2 people on the team that you suspect have strengths that are low on your own list.  One indicator:  they achieve results in very different ways than you do.
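The aggregation of the team's results can be sketched as follows. The pillar mapping shown covers only the five strengths named in this post (Gallup publishes the full mapping of all 34 themes to the four domains):

```python
from collections import Counter

# Gallup's four domains; mapping shown only for the strengths named in this post
PILLAR = {
    "Ideation": "Strategic Thinking",
    "Analytical": "Strategic Thinking",
    "Relator": "Relationship Building",
    "Individualization": "Relationship Building",
    "Restorative": "Executing",
}

def aggregate(team_results):
    """team_results: {person: [their top-5 strengths]} -> team-wide pillar counts."""
    counts = Counter()
    for strengths in team_results.values():
        for s in strengths:
            counts[PILLAR.get(s, "(unmapped)")] += 1
    return counts
```

A Counter dominated by one pillar is exactly the “is the team mostly strategic thinkers?” signal the discussion meeting looks for.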

Step #2 – Meet and discuss

  1. Have each team member talk about themselves and share what strengths they are really proud of and why. Have them list out what strengths they think they also have (not in the top 5) as well as strengths they think are very low in their list. [aside: the purpose of this is to get folks familiar with each team member as well as the strengths themselves]
  2. Show the aggregation for the whole team. Ask the team to look at the data and see what insights they gain from it. Is the team mostly strategic thinkers? Is there a particular strength that is dominant on the team? Etc.
  3. Go around the room again. This time have each team member name the 2 people they thought achieved results very differently from them, and ask them a new, spontaneous question:

Who are the two people, and what is something that you hate doing that you think they 1) would love to do and 2) have a strength in doing?

Step #3 – Follow through

  1. Continue to discuss Strengths to remind folks in real time when they are using them. Encourage your team to point out when they see other members using their strengths. This helps to reinforce the new knowledge and lets everyone see and acknowledge the strengths others are bringing to bear.
  2. I make this a part of all my discussions with my employees.
    1. I use 1:1’s and strengths to target career development
    2. I use strengths at standup to help guide decisions around who should pick up what tickets.
    3. It’s a great way to set up partnering relationships (pair programming is fantastic with this data)

This last step is absolutely the most critical. This is where the theory turns into application and you gain important insight. For example, it shouldn’t be a surprise if the person with all Strategic Thinking strengths is the slowest to execute on your team. I can pretty much guarantee that their work is *well thought-out*, though. This person would probably be happier if they were involved in the design phases of your work. Whereas a person who is all Executing is going to be happiest working hard to get things done (whether or not their Done is the “right” Done).

So that’s it. There are several moving parts, but once you go through it once, it’s not all that complex. I can state that I have received super high reviews from my employees as a direct result of learning and then teaching this framework. People feel understood and valued, and know better how to fit into the team process in order to maximize productivity.

Resources:

1) Want to take the test yourself? You can go to Gallup’s site to take the simple test for $10 and 30 minutes.

2) Want to try to lead a team yourself? I recommend buying the Strengths-Based Leadership book as a start. I use mine as a reference book *all* of the time.

User Stories and 5 Why Analysis

 

Today I stumbled upon a very useful technique I felt warranted sharing. I am in the process of drafting a blog around the frailty of introducing Agile techniques to a waterfall team. I’m confident I’ll post that either this weekend or next. In the meantime, one thing I have learned from you, the blog reader, is that you tend to appreciate and forward on useful techniques to others when they are practical solutions to present day problems. May this help you.

 

My love/hate relationship with User Stories

 

There are a lot of great User Story resources. My absolute favorite is Mike Cohn’s site, here. The term “User Stories” is often overloaded, so I recommend visiting Mike’s site to make sense of the rest of the post. My view is compatible with his (if not a blatant overlap).

One of the first things I try to do when I am helping a team shift to Agile is to teach them to stop scheduling workitems and, instead, start delivering outcomes. This approach enables teams to decouple the problem they are trying to solve from the solution they currently favor. There are a lot of benefits to doing this, not least the ability to define DONE in terms of value added for the customer. There are other techniques for defining Done up front (such as ATDD), but I have found that User Stories are really palatable to newbies trying out Agile.

But even User Stories have problems. They can be hard to construct. When I know I need, say, a new report created, it is just easier to write “Create new report” on the ticket and place it on the task board in the backlog. They can also be rather verbose and hard to communicate succinctly: “As a decision maker for the release, I want to see fresh execution reports online, so that I can weigh in on readiness armed with the right data to make an informed choice.“ The friction created by the longer format makes the shorter format very alluring, especially for those who haven’t yet experienced the value of the longer version.

 

The problem

Folks from the waterfall world get the workitem approach: “Create new report” seems easy to understand and execute on. Today’s problem came from one of the testers on the team who, quite honestly, had gotten tired of me complaining about scope creep in their “stories”. As the Agile Master for the team, I push hard on making sure we are maintaining a high, consistent, and predictable velocity. Scope creep makes this very difficult, causes delays, and creates the potential for significant wasted effort within the system. My team is using Lean and Kanban. We do not timebox our iterations, but each story has a 2-week SLA. The story this tester was working on was about some tooling we are creating. They were coming to me to let me know that “the design has changed again” and wanted to know what to do about it. The ticket was already past its 2-week SLA.
In addition, the ticket was something like “enable performance thresholds”. I.e., it was ambiguously worded and it was entirely unclear when we would be done. I had warned of this before, but my style of Agile Mastering is to let teams make the decision and the mistake in order to enable learning, so I let it stick.

The solution

It is insufficient to point out what not to do. If you want folks to learn, tell them what to do instead. Here I suggested that the problem with the ticket was that Done was not clear, and said to use a User Story instead, both now and in the future. This particular tester had a hard time with that. Even after I explained Stories, they could not pivot the workitem into a story. It was just unnatural for them to think in terms of the outcome they needed. Unfortunately, this is a far too common experience for me.

However, in a rare flash of insight, I fixed this for them by pulling in Six Sigma’s 5 Whys technique. Primarily used to determine the root causes of problems, it works like this: you simply ask “why” five times.

The dialogue

I started off: “Ok, let’s try a different thought process. What if I were to tell you I see no value in this “enable performance thresholds” task, so I am going to cut it. How do you feel about that?”

Tester: “I hate that.”

Me: “Why?”

Tester: “Because we need it”

Me: “Why?”

Tester: “Because dev needs it”

Me: “Why?”

Tester: “So they can decide if the product is good or not”

Me: “so what you are saying is that ‘as a dev on this team, you want performance thresholds enabled, so that you can decide if the product is good or not’?”

Tester: “yes”

Me: <blank stare>

Tester: “ooooohhhhh!!!!!”


We then talked about how each additional “why” adds precision to the desired outcome and clarity around Done, while, in most cases, keeping the implementation decoupled from the outcome. We also talked about how to determine when a story is too vague (likely an Epic) and needs to be broken down into smaller stories.
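To make the mechanics concrete, here is a minimal sketch in Python (the function name and data shapes are my own invention, not part of any real tool) of how a chain of “why” answers can be folded into a user story, with the deepest answer becoming the outcome that defines Done:

```python
# Illustrative sketch: fold a bare workitem plus its chain of "why" answers
# into a user story. The deepest "why" answer becomes the outcome clause,
# which is what gives the story a clear definition of Done.

def to_user_story(role, task, why_answers):
    outcome = why_answers[-1]  # the last, most precise answer
    return f"As a {role}, I want {task}, so that {outcome}."

# The dialogue above, replayed:
story = to_user_story(
    "dev on this team",
    "performance thresholds enabled",
    ["we need it",
     "dev needs it",
     "they can decide if the product is good or not"],
)
print(story)
```

Each extra “why” just appends a deeper answer to the list; asking again tightens the outcome without ever touching the implementation.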

I will see how well it plays out over the coming weeks, but the tester, at least, believed they would be able to confidently break work into smaller outcome-based stories and, with that, defend against scope creep while still handling undiscovered stories in an Agile fashion.

 

 

Kanban for Chores

Today’s post is a lot more practical than most. It’s fun to mix things up now and again. I really enjoy it when work-related activities can improve the home; I feel like the time invested in learning pays off doubly in those cases.

Today I am going to share a new chore system that has been rolled out in the Jensen family. So far, I’m very happy with the results. To give credit where credit is due: months ago, Alan Page told a story about how he introduced his family to Kanban. His kids, in particular, really seemed to dig it. I would share out the link, but, sadly, I cannot find it; I thought it was in one of Alan’s older posts. If it shows up, I will post an update here. (EDIT: as pointed out in the comments, it was a series of tweets.) As I recall, Alan’s system was to put the chores up on a kanban-style taskboard, and everyone works together to get the chores done on the weekend before heading out for family fun.

As you may recall from my last post, I recently moved. The new house is much bigger than the old one, and we have an active after-work/school life, so chores were falling by the wayside at times. This, to me, was annoying. I remembered Alan’s system and said, “hmm, that actually sounds like fun, and with a few tweaks it should work great for our household”.

Here’s how I did mine.

First, I acquired several supplies:

  1. A 4′ Magnetic Dry Erase Board
  2. A set of Dry Erase Pens (Although, I will probably change this to Painter’s Tape later… It looks cleaner.)
  3. 4 sets of Planning Poker cards by using the PDF available here.
  4. A set of Ink Jet Magnetic Business Cards by Avery

The board will serve as the base for the taskboard. Since both the cards and the board are magnetic, the cards make a perfect medium for task tickets. The dry erase pens will mark out the columns, WIP limits, and flow.

Next, I sat with my wife and we worked through the chores that we felt should/could be done by anyone in the household. Things like doing the dishes, taking out the trash, and cleaning your room are not on the list; those chores we felt the kids should just do anyway… Family Tax. I then took each of the remaining chores and printed it on its own business card. These ended up looking really sharp. Had I the patience or the time, I would have decorated the cards with pictures of each chore. One of my sons really likes that type of work, so I may ask him to do it when these cards wear out. Plus, I find that when folks help build the system, they feel more invested in its success. Can’t hurt.

At dinner, I brought out the magnetic cards. This was the first time the kids had seen them, and since all of the cards had work items on them that they recognized, they were immediately suspicious. I trudged forward fearlessly and walked them and my wife through a variant of planning poker (without them realizing that what I was doing had that name).

Planning poker process we followed:

  • Ordered the chore list from easiest to do to hardest
  • I picked the one in the middle “Vacuum Downstairs” and announced it was worth 5 points. (My eldest, still suspicious, protested the use of points, claiming that it further proved that doom and gloom was coming his way… Teenagers!)
  • I then took all of the chores and put them into a single randomized stack.
  • I wrote a “5” on “Vacuum Downstairs” and kept it out of the stack
  • Then starting at the top, I asked “if Vacuum Downstairs is worth 5 points of difficulty, how many points is ___________________?” (for example, “Clean Refrigerator Door” or “Weed Front Yard”)
  • I told them how to use the Planning Poker cards I had made and iterated through all of the chores.
  • After we agreed on a point total for the chore (as well as got clear agreement on what Done meant), I wrote the final number on the card and went to the next card.

There was some discussion from the kids regarding the available point values, but I stuck to Fibonacci numbers. (They wanted a 4 and a 7.)
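The rule I was enforcing at the table can be sketched in a few lines of Python. This is purely an illustration of our house scale (the function name is mine, not from any planning poker tool):

```python
# The estimation scale from our planning poker session: Fibonacci only.
FIB_POINTS = {1, 2, 3, 5, 8, 13, 21}

def vote(points):
    """Accept an estimate only if it is on the Fibonacci scale."""
    if points not in FIB_POINTS:
        raise ValueError(
            f"{points} is not on the scale; pick from {sorted(FIB_POINTS)}")
    return points

vote(5)   # "Vacuum Downstairs" -- fine
# vote(4) # raises ValueError: the kids' requested 4 (and 7) get rejected
```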


Now that I had all of the chores with points assigned, I made the simplest of taskboards (see below). Since I wanted the board to replace our current (and totally failing) allowance process, I then figured out the point-to-dollar conversion by deciding how much I would be willing to pay out weekly and dividing that by the total of all of the chore points.
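The arithmetic is simple enough to show. The $20 weekly budget and 60 total points below are made-up numbers for illustration; the real figures stay between me and my wallet:

```python
# Point-to-dollar conversion: weekly budget divided by total chore points.
weekly_budget = 20.00   # hypothetical total I'm willing to pay out per week
total_points = 60       # hypothetical sum of points across all chore cards

rate = weekly_budget / total_points   # dollars per point

# A kid who finishes chores worth 18 points that week earns:
payout = round(18 * rate, 2)
print(payout)  # 6.0
```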

Then I told the kids (and my wife) the rules:

  • Only mom and dad can move tickets from the Backlog column to the Ready column. We will do so when we think that chore needs to be done.
  • Anybody can grab a chore they want from the Ready column, if they have available capacity.
  • Each person can have, at most, 2 tickets in the Doing column. They are not required to have any.
  • Once a ticket is claimed, no one else can take it as long as the owner completes it within 2 days. If they do not, someone else can take it (with notification).
  • At the end of the week, each kid will get paid according to the sum of their points.
  • Then the Done tickets will be moved to Backlog and my wife and I will re-fill the Ready column as needed.
  • Also at the end of the week, we will talk as a family about the chores and adjust points up or down. I will adjust the point conversion rate as necessary, so that I am paying out a constant amount each week.

The Results

Truly amazing. For those who have deployed similar systems at work, it really shouldn’t surprise you, but it surprised me. My kids picked it up excitedly. On the first day, half of the chores got done. It was kind of a whirlwind. Both kids viewed it as a game and had a blast just moving a ticket to Done. They are both eager to grab the big-ticket items, but when one figured out that he could finish the smaller tickets faster than the big ones, he quickly surpassed his brother. Right now, it is definitely a situation where everyone wins: the kids are in charge of earning what they want and when, Mom and Dad are in charge of prioritizing the work, and the house is staying clean and spiffy. As another positive note, an unexpected “bonus” has popped up: the kids do not want Mom and Dad doing tickets. Why? Because then *they* don’t get paid. Brilliant, I say! Might just give me more time for blog writing!