It’s not often enough that I see teams experimenting. Well, that’s not entirely true. Most teams experiment by trying new techniques, by adjusting their process, or simply by trying something different. I’d call these anecdotal experiments, and they’re very valuable. However, it’s not these kinds of experiments that I want to talk about today. I’m focused on quantifiable experiments. Consider it a data-driven approach to analyzing team dynamics or individual behaviors. For me, this was born of my affinity for the burn down chart. If used as intended, the burn down is simple, it’s useful, and it inspires a team to ask intelligent, focused questions. If my experiments do the same, I’ve succeeded.
Before I begin, I have a confession. I’ve rewritten this blog post several times now. Each time, it gets away from me, so I scrap what I wrote and begin again. Why? Simplicity. I kept losing sight of it. Because of this, I’ve adjusted my approach. I intend to write this blog post in some rather broad strokes, while my next will contain more context and examples of experiments I’ve run in my teams.
With my first broad stroke, let’s talk about some of the advantages of data-driven experiments:
- The act of observing can often create the behavior you intend. It’s called the Hawthorne effect. When teams realize the question they’re trying to answer or the problem they’re trying to solve, they become more aware of it. This awareness alone can sometimes be enough to solve a problem.
- It can defuse unhealthy conflict. Having conversations about numbers fosters logic. By setting up and ultimately analyzing results as a team, the conversations become about the numbers and not about the emotions.
- Use curiosity as a motivator. Engage your team as you begin crafting the experiment. Let them know your hypothesis and let them create their own. If you put value in their hypotheses, you’re bound to generate team interest. This will foster some rather riveting team discussions even before the experiment begins, and it will generate curiosity as to what the data will say.
However, with great power comes great responsibility. Be careful, and here’s why:
- People don’t do what you expect; they do what you inspect. Be wary of unintended consequences that can come from analyzing the wrong data or analyzing data in the wrong ways. Say the team is analyzing story points, and we create the illusion that completing more story points over the next few sprints defines a successful experiment. Intentionally or not, the team may begin inflating their estimates, giving a “successful” yet artificial result.
- Data must be as unbiased as possible. Numbers can be made to tell any story. For an experiment to be successful, be sure to measure the right things in the right way. Otherwise, the team may not trust the results they see.
- Data is only as valuable as the questions it inspires you to ask. Data is a tool, just as a hammer is a tool; it’s not going to swing itself. Moreover, data rarely contains your answers. Instead, it’s a tool to help you ask intelligent, more informed questions.
- Isolate your variable. Before starting an experiment, know exactly what question you’re trying to answer. One question is ideal; limit yourself to three at most, and only collect data that directly relates to answering those questions. Further, maintain a dogged focus on what you’re attempting to measure. Otherwise, you risk overwhelming yourself or the team. Worse yet, you risk the data being interpreted in numerous, conflicting ways.
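To make “isolate your variable” concrete, here’s a minimal sketch of what such an experiment might track. The story data, field names, and dates below are all hypothetical; it computes exactly one metric, the percentage of stories still open on each day of a sprint, and nothing else. The same could just as easily live in a spreadsheet.

```python
from datetime import date, timedelta

# Hypothetical sprint data: each story records when it entered the sprint
# and when it was closed. A closed_on of None means it's still open.
stories = [
    {"id": "S-1", "opened_on": date(2023, 5, 1), "closed_on": date(2023, 5, 4)},
    {"id": "S-2", "opened_on": date(2023, 5, 1), "closed_on": date(2023, 5, 9)},
    {"id": "S-3", "opened_on": date(2023, 5, 2), "closed_on": None},
]

def percent_open_per_day(stories, sprint_start, sprint_days):
    """For each sprint day, compute the percentage of stories opened by
    that day and not yet closed -- the single variable being isolated."""
    results = []
    for offset in range(sprint_days):
        day = sprint_start + timedelta(days=offset)
        opened = [s for s in stories if s["opened_on"] <= day]
        still_open = [s for s in opened
                      if s["closed_on"] is None or s["closed_on"] > day]
        pct = 100.0 * len(still_open) / len(opened) if opened else 0.0
        results.append((day, round(pct, 1)))
    return results

for day, pct in percent_open_per_day(stories, date(2023, 5, 1), 10):
    print(day, f"{pct}% of stories open")
```

Note that the function answers one question only. The temptation is to also tally story points, cycle time, and so on in the same pass; resisting that is the whole point of isolating your variable.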
That’s all for now. Stay tuned for my next blog where I’ll share some experiments I’ve run in my teams over the years. I hope to see you all again soon.
Update: Here’s a link to my follow-up blog on this topic.
16 thoughts on “A Data-Driven Approach To Team Evolution”
I’m a fan of transparency, and I hear what you’re saying about the Hawthorne effect being a tool, but I’d also be interested in hearing if you’ve run experiments where you didn’t disclose all (or any) of the details about the experiment in order to get more accurate data.
Transparency is vital. Agreed. However, there have been times when I delayed providing teams with data if I feared it would bias the experiment. I haven’t yet withheld a whole experiment from a team, but I suppose I can imagine circumstances where it may be helpful. Just like transparency, trust is important. If you wish to withhold info because of some level of distrust, I’d challenge you to tackle that problem first.
More to follow in my next blog post.
“When teams realize the question you’re trying to answer or the problem you’re trying to solve, they become more aware of it”.
The “you” and “they” bother me here very much. In Agile, there should only be “we”. I believe that team evolution should be guided by the team’s own self-improvement program, consisting of experiments driven by the problems they encounter and data-driven evaluation of the solutions they try.
So, does the Hawthorne effect occur when the team realizes the question it is trying to answer or the problem it is trying to solve? I would think not, because the team is already aware of what it is trying to do.
Maybe your approach does not apply to Agile teams, just teams existing in command-and-control cultures where improvement is driven by experiments run by people outside the team?
Thanks for the feedback, Dr. Gordon. The approach in my blog post works very well with agile teams. I’ve used it on many occasions with great success with all of my teams. The “you” and “they” you mention are simply to distinguish between the person who crafted the experiment (usually the scrum master) and those who are part of the experiment. In my humble opinion, the most gifted scrum masters are those who know how to harness the intelligence of the team, and experiments are a way to do so. With respect to your comment about the Hawthorne effect, I fear you assume that team members’ intellect (i.e. logic) and their emotions are aligned. They’re often not. A data-driven experiment can help tap into their logical side to highlight an issue that the team hasn’t yet realized or is overlooking.
Maybe my follow-up blog on experimenting can help highlight what I’m up to. I posted it earlier today.
The Scrum Master is a facilitator, not a manipulator.
If there is a problem that calls for an experiment, the Scrum Master should be facilitating the team to:
* look at the problem in the retrospective,
* root-cause the problem,
* decide what to try to address that problem, and
* decide how to measure how well that solution addresses the problem.
The team decides what experiment to run; the Scrum Master facilitates that happening.
Suggesting that the Scrum Master actually makes any of those decisions, let alone keeps any of the rationale temporarily hidden from the team to avoid raising the Scrum Master’s goals, is completely foreign to Scrum. Such an approach will erode both the team’s self-improvement program and the team’s trust in Scrum and the Scrum Master.
Please, either divorce your approach from Scrum, or refactor it to be how the team itself can create experiments that provide data that is useful for driving the team’s continuous self-improvement.
Steven Gordon, PhD
somehow my spell checker changed “biasing” into “raising”
Suggesting experimenting is manipulating is like suggesting parenting is manipulation. Teams don’t come pre-baked and self-organized. If they did, the role of the scrum master would be largely unnecessary. They require a great deal of work to reach such a state. Experiments work great, especially with new teams.
Finally, I encourage scrum masters to fully engage their teams in the creation of any experiment. I write as much in the blog post: “Use curiosity as a motivator. Engage your team as you begin crafting your experiment. Let them know your hypothesis and let them create their own. If you put value in their hypotheses, you’re bound to generate team interest. This will foster some rather riveting team discussions even before the experiment begins, and it will generate curiosity as to what the data will say.”
Haha how would you feel if the team started experimenting on the Scrum Master? Kind of needling, but kind of curious what you will say 😉
Lol I’m game. What kind of experiments do you have in mind?
Actually, more like the PMO deciding to experiment on the Scrum Masters without telling them. Scrum Mastering is not like parenting – it is a peer-to-peer relationship.
I also noticed the follow-on article has more of the “me” (the Scrum Master) running experiments on “them” (the Team). I still fail to see how facilitating the team to design its own experiments to run on themselves cannot accomplish absolutely everything you are accomplishing and even more.
Ironically perhaps, I completely support true objective researchers running experiments to analyze and better understand Agile Software Development. Those researchers cannot have any other relationship with the teams they are analyzing without biasing the objectivity that makes that work true research.
Steven Gordon, PhD
> Scrum Mastering is not like parenting – it is a peer-to-peer relationship.
I agree, and I never said that scrum mastering is like parenting. My quote: “Suggesting experimenting is manipulating is like suggesting parenting is manipulation.”
> I still fail to see how facilitating the team to design its own experiments to run on themselves cannot accomplish absolutely everything you are accomplishing and even more.
I agree. I feel we’ve already covered such territory in a previous comment, so I’m unsure why we’re circling back. Here are several quotes from my blogs as support:
* “Never lose sight of what matters. That’s not you. That’s not data. It’s your teams.”
* “It’s worth noting that with this experiment we weren’t attempting to adjust their behavior whatsoever. We simply wanted to highlight what percentage of stories were open for every day of the sprint.”
* “When teams realize the question they’re trying to answer or the problem they’re trying to solve, they become more aware of it.” [Cleaned up previously based on your feedback. Thanks for that.]
* “Engage your team as you begin crafting the experiment. Let them know your hypothesis and let them create their own.”
My use of “me” and “team” is to distinguish the person who is:
* Facilitating the team to determine the experiment.
* At a computer with Excel open, creating the experiment based on team feedback.
* Inputting the data in Excel to provide back to the team.
A single person would have to be responsible for it, and for the examples I provided, that was me. It could have just as easily been a member of the team.
It seems our disagreements may lie in inference and not in fact.
Why does a single person have to be responsible for “it”? Do you believe the same is true for software, or only experiments?
Do you have a single example of a team member suggesting and executing an experiment, perhaps inspired by your example? Did they also choose to hide aspects of the experiment from their coworkers?
By the way, I have several times facilitated a team that was having more than one incomplete story at the end of sprints to try limiting WIP. This is a well-known example of a way a team tries changing its behavior to address a visible problem (multiple incomplete stories at the end of sprints) and then examines the result to see that it worked.
There are no good reasons for a team to try an experiment except to address a visible problem. Just trying stuff out of curiosity is like gold-plating – wasting effort on something that nobody needs. However, such intuitive experiments are a useful objective research technique (for those studying the subject). This is the case where the Hawthorne effect and other research issues are valid concerns.
You are confounding research and continuous improvement. They are similar, but distinct. Not understanding the difference clouds both ideas.
Then, please, phrase your blogs like the team is in on the experiments, not just the subject of them. More “we” and less “you” and “they”.
Newbies will get the wrong idea.
I’ve changed the language around a bit to make that more clear. Thank you kindly for the feedback, Dr. Gordon.
Hey Tanner — thanks for posting about this! I’d like to point out what seems to me to be a fallacy in your logic.
You say “The act of observing can often create the behavior you intend.”
Having been a part of one of the experiments you cite as an example, I observed the team creating the appearance of behavior change. With some actual behavior change too. But most of it was appearance.
In our team, once people realized what factor you were studying (# of open stories), they started making it seem like metrics were better (not maliciously, but just by keeping house a little better, and closing stories more conscientiously because they saw that someone was watching it).
So I’d be wary of drawing conclusions that behavior change actually happened. You might have only changed the way behavior is recorded (especially since the behavior you are measuring is self-reported).
Amen! I totally agree, and this mostly means that I have a lot to learn myself. If the best we could do was change behavior only temporarily, then as the person reporting the information to the team, I overemphasized the numbers. It’s evidence that I still have a long way to go toward being a great scrum master.
There’s so much more teams could be teaching me. I just have to be willing to listen.