Agile team practices that showed us immediate results

There are times when the culture and the practices that a team follows (or does not follow) start affecting its delivery work, particularly in Agile teams. Non-essential meetings, unplanned or unwanted catch-ups, and a lack of safety that forces people to keep records of things in emails are often part of a bigger problem.

As a delivery professional (lead, manager, Scrum Master and, in some cases, even product owner), your responsibility is to ensure that your teams deliver valuable outcomes, at frequent intervals and at a sustainable pace. This responsibility also includes protecting them from non-productive tasks and mentoring them to de-prioritise non-value-adding work.

Unfortunately, teams sometimes do not even realise that some of their practices are slowing them down. This often happens because of a rigid mindset ingrained in their organisations and cultures. And as you might already know, it takes time to change behaviours and attitudes that have become norms.

The good news is that you can possibly change that mindset.

How do you do that?

The answer lies in the teams. Teams are made of people, and people tend to stick to social rules. In fact, it is often hard for people to oppose or resist the norms of a group they are part of. As an Agile delivery leader, it is in your and the organisation’s interest to encourage your teams to develop a social contract, or working agreement. These contracts define how the members of a team prefer to work together and what they consider acceptable behaviour. Since everyone in the team contributes and agrees to the principles of the social contract, it is also easier to improve the practices that affect them most.

Just to be clear, these social contracts are not legally binding agreements. They exist to reinforce the values and principles that the people in a team believe in and agree to adhere to.

I was once working as a delivery manager with a team of nearly fifty people. The team consisted of four product streams, with interdependencies between them. We were facing some common challenges in the day-to-day functioning of the team. Through retrospectives, I learned that there were a few common patterns across those streams where the teams were feeling the pinch.

I was able to group those patterns into four themes:

Productivity (and process improvement)

Culture (behaviours & attitudes)

Development & growth 

Clarity of Mission

All of these themes can be summarised in a few key challenges:

– Too many meetings, many of them unplanned and running too long

– Lack of agenda and expected outcomes in meeting requests

– Lack of a medium for team members to appreciate their colleagues across the program

– A need for a common platform for the whole team to come together and share ideas

– Finding time for training or capability uplift

– A need to know each other better, which was a challenge due to the size of the group

– Clarity of roles at program level

 

Here’s how we solved the problems:

Productivity + process improvement:

There were too many meetings, and the sheer number of them was not only hurting productivity but also impacting the delivery schedule. Team members were getting visibly annoyed.

This is what the team decided to do:  

  • No one within the team was allowed to schedule meetings after 3 PM. That meant people got uninterrupted productive time after 3 PM.
  • Everyone in the team agreed that each meeting request will have a clear agenda and if possible, the outcome they were looking for. 
  • We also limited meeting times: any 30-minute meeting was planned for 25 minutes and any one-hour meeting for 45 minutes.
  • We also asked people to come prepared so that their messages were concise and crisp.
  • We emphasised face-to-face conversations, whether people were onsite or working remotely.

We kept an eye on adherence to these practices and ensured that everyone, especially the senior team members, behaved like role models.

Culture:

The program director and I were closely aligned on the values that everyone in the team adhered to. Having a shared understanding at the management layer certainly shapes how each member of the team behaves.

  • We encouraged team members to be completely open and honest, in retrospectives and elsewhere. We also made sure that our own behaviour reflected what we were advocating.
  • I attended and at times facilitated team retrospectives. What was motivating for me was that team members often approached me after the retrospectives to show their appreciation for making them feel safe in those sessions. Team retrospectives were purely for the team members, and neither product owners nor anyone from the program team was allowed to attend unless required.
  • We made sure that the program management team accepted the improvement areas and took action. We circulated those actions to the team members so that they could see the congruency.
  • We also ensured that we provided regular updates on the status of the action items.    
  • To encourage peer-to-peer appreciation, we introduced “Kudos cards” in the program. When we introduced the cards, we also explained why there was more value in appreciating those who deserve it, and why a reward system run by management does not work.
  • Monthly retrospectives: Due to the interdependence between the streams, it seemed vital that the whole team had a common retrospective. Team retros also reflected that need. Hence, we decided to schedule a monthly retro for the whole team. In these sessions everyone came together to share ideas openly and respectfully.

Development & growth:

Many project or product teams do not have additional budgets to train their team members. Depending on the structure of the organisation, team members might belong to different specialist groups that are responsible for their professional development. Sometimes People and Culture (a.k.a. HR) teams are responsible for arranging training. Dealing with some of these processes can be cumbersome due to bureaucracy. So what do you do then? Well, see how you might get your team some training and mentoring anyway. We found the answer in some unconventional ways. We agreed that you don’t always have to separate training from ‘business as usual’. Therefore:

  • We built small pockets of training time into each team event. We had a quarterly team events calendar, and we specified training time in it. For example, to enhance team collaboration, we added a ‘personal mapping’ exercise to a monthly showcase. Easy!
  • We placed a flipchart in the project room with suggestions as well as space for other ideas. Team members could either opt for an existing training session or suggest a new one. There was enough autonomy to add anything, and we did see some funny suggestions too.
  • We made sure that there were options for lunch-time sessions, brown-bags, separate training and even ad-hoc sessions. Once we organised a session on user stories by extending our stand-up by an extra 30 minutes. What you might find hilarious is that we expected 5 attendees, but nearly 25 people showed up.

Clarity on roles:

Sometimes within large teams, where you are adding new members regularly, it is difficult for team members and leads to explain the various roles (just to be clear, I’m not talking about titles here). In our case, we had data architects, data engineers, CX designers, UX designers, developers, testers, business analysts, solution designers, legal counsels, procurement people, scrum masters, product managers, domain architects and others. With so many roles, not only can explaining them be hard, but so can keeping them distinct enough to avoid overlaps. You don’t want people unknowingly stepping on each other’s toes. Nor do you want any friction among team members, no matter how psychologically safe you believe your team is. We’re all humans after all!

  • We created a team structure chart that clearly showed who was working on what. We also made sure that the chart shows which stream people belonged to. We circulated this chart to everyone in the team and also added it to our induction pack that we shared with all new team members. 
  • Although I am not a big fan of the RACI matrix (responsible, accountable, consulted, and informed), it seemed necessary for us to create one. Of course, we consulted our team about what they thought of it.
    Why am I not a big fan of the RACI matrix? Because these matrices define linear responsibilities; that is, one person = one specific role. In a self-organising team, you may have team members playing different roles. However, having this chart helped us bring more clarity about roles, and it also reduced overlaps and duplicate channels of similar conversations. Additionally, the team was clear that their self-organisation wasn’t at risk.
  • Skill matrix: Now this is something interesting. I had used a skill matrix before to create cohesive, well-designed, self-supporting teams. What a skill matrix does is show the availability of, gaps in, and level of expertise for a particular skill. For example, you may have someone very experienced in C++ and you may also have someone willing to learn it. As a lead, you can take action to balance the skill shortage. For me, this experiment failed, as no one ever willingly added their skills. I also had trouble validating why that happened. Anyway, this was added to our ‘Failure log’.
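To make the idea concrete, here is a minimal sketch of how such a matrix might be represented and queried. The names, skills and proficiency scale are purely illustrative assumptions, not the actual data from our team.

```python
# Hypothetical skill matrix: 0 = no knowledge, 3 = expert.
# Names and skills below are illustrative only.
skill_matrix = {
    "Asha":   {"C++": 3, "SQL": 1},
    "Ben":    {"C++": 0, "SQL": 2},
    "Chitra": {"C++": 1, "SQL": 0},
}

def skill_gaps(matrix, required_skills, min_level=2):
    """Return the required skills that no team member covers at min_level or above."""
    return [
        skill for skill in required_skills
        if not any(levels.get(skill, 0) >= min_level for levels in matrix.values())
    ]

# "Kafka" is a gap: no one has it at level 2 or above.
print(skill_gaps(skill_matrix, ["C++", "SQL", "Kafka"]))  # ['Kafka']
```

A lead could use such a view to pair a willing learner with an expert (here, Chitra learning C++ from Asha), which is exactly the balancing action a skill matrix is meant to enable.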

As you might have noticed, we solved many of our problems by bringing people together. Creating common understanding was our first step in ironing out some of the challenges we faced. We also ran a number of experiments, and not all of them were successful. However, running short experiments, validating them through feedback loops, and not losing focus on your primary goal can do wonders for your team.

Did you do something similar in your organisation? How did that go? Let me know in the comments.

I didn’t know who Adam Gilchrist was and how that’s about career

You’ll possibly find it funny that I didn’t know that Adam Gilchrist was a famous cricketer (read the full story below). I had little interest in cricket and hardly knew any Indian or Australian players except the most famous ones. That created no problems for me, other than funny remarks from colleagues about not knowing the game that my own countries (India and Australia) were so passionate about.

How successful someone is in their career is often reflected in the passion they show for it. Passionate people deal with challenges more easily than those who do something just for the sake of it.

But what if you are not passionate about something and it’s just an interest? Well, the good news is that you can develop an interest and get better. In this post, though, I will focus more on how passion helps you grow in your career.

Let me start with a story first!

A few years ago I received a phone call from an acquaintance who told me that he had a breakfast invitation for me with Adam Gilchrist.

“Well, a breakfast sounds great. But who is Adam Gilchrist and what does he do?”, I replied to my connection.

There was a pause on the other end of the line. The silence broke and the person on the line informed me, in a disappointed tone, that Adam was a famous Australian cricketer. Then he asked me whether I watched any cricket.

In fact I never had any interest in cricket. And therefore, I politely declined that invitation. I knew that if I had attended, people would have talked about cricket and my contribution (or the lack of that) could have got a bit awkward for Mr. Gilchrist and others.

That story ended there. However, the theme didn’t seem to.

Later in the year I attended a training on Agile processes organised by my employer. During lunch I started chatting with the trainer.

As practitioners, you mostly talk about your common area of expertise. So I asked the trainer, “Melbourne has many good Agile meetups going. Do you attend any?”

The trainer replied that he was often busy and didn’t get time to attend meetups and events. Fair enough!

In order not to sound rude, I acknowledged that response with a neutral statement and said that there was a conference coming up where a few famous Agile practitioners were speaking. Of course, I mentioned a few names, which our trainer didn’t seem to recognise. And the next moment, he disappeared into oblivion.

Actually, he didn’t return to continue the conversation and avoided any other discussion where any such references were used. I have a feeling that he only knew what the training material referred to.

The first scenario above demonstrates my ignorance about the game of cricket and a well-known player. Since I never had any interest in cricket, not recognising Adam Gilchrist is hopefully pardonable.

I believe that it is okay not to know someone from a field that has little or no impact on you or your profession. But I think it is not okay to not know people who have made significant impact on the particular craft that you belong to.

WHY?

I would be surprised to meet a physicist who doesn’t know Richard Feynman, Stephen Hawking or Neil deGrasse Tyson. How would you feel if you asked an aspiring (Hollywood) actor what she thinks of George Clooney and she showed her ignorance of George Clooney’s existence? I would be surprised too, even shocked!

Yet it seems to me that it is the tech industry that appears to be losing it.

I come across programmers writing applications in Ruby on Rails who have never thought of learning about its origins. I come across a multitude of testers who know nothing about testing beyond writing test scripts.

Does it really matter?

Not knowing the originators, or those who have spent years practising and developing the craft you work in, may not affect you in your job if it is only a job that pays your bills. That is, you are in a 9-to-5 job and don’t really care as long as it provides ‘job safety’ and pays on time.

Where it might affect your growth is in the community and at employers who are passionate about their crafts. Why? Because that shows that you are not passionate enough.

Technology has made this world much smaller than you may think. It is far easier now to know about individuals and their skills.

Don’t let a lack of interest be the enemy of your next job!

How Management and Tech people fool themselves with Measurement

Are you obsessed with metrics and measurement?

The software industry seems to have an obsession with metrics and measurement. We want to quantify everything. Once upon a time everything was about counting lines of code (KLOC). Managers ran around asking, “How many lines of code have you written? How many bugs per KLOC are there? What is the size of the project in KLOC?” and so on. Then we started counting everything else: “How many requirements are there? What is the number of test cases? How many bugs did you find? What is the defect density? How many test cases have passed? What is the requirement coverage?”

The obsession with quantification is often an influence from the manufacturing industry, where you can count things that are physical and visible to the eye. However, counting things in the software industry appears mainly to have helped consultants who sell the premise that charts, graphs and measures based on invalid constructs are meaningful. The problem is often the misinterpretation of these metrics.

At one of my former workplaces, the testing team used to generate a report which had a metric called “Quality Index” (QI). The objective of this measure, as the team explained it to me, was to have some indication of quality and performance. The managers needed an indicator to assess development performance (for example, are there any issues with understanding, communication, requirements, process, and so forth?). The QI measure was considered a yardstick. That is, every time a new build was tested, QI could tell managers how good (or bad) the build was.

However, there is a problem with using yardsticks. They can’t be used for measuring something that is subject to interpretation or is subjective in nature. They are often good heuristics and can be useful as a first-order metric, but mostly for physical measurements only.

 

A metric like the quality index may be used as an indicator to ask, “Is there a problem here?” In that case it becomes a heuristic, because it is fallible. It may help you find a solution, but it can never guarantee one. Using heuristics to assess your key employees’ performance is dangerous. You may lose their trust and respect, and they may eventually leave (unless that is what you actually want).

Metrics are a powerful tool, but they can always be misinterpreted and can be skewed to show favourable (or otherwise) results. Without context they are largely meaningless. Quantitative measurement can lead to a false sense of control. It creates an illusion that we can understand and control something because we can count it. Someone mentioned in an online forum about quality that if you can’t measure it, you can’t have it. I guess this person was referring to Tom DeMarco, who wrote in his book Controlling Software Projects: Management, Measurement & Estimation (1982), p. 3, that you can’t control what you can’t measure. In a recent discussion Michael Bolton reminded me that a few years ago Tom renounced the opinion he had held. His more recent views can be read here.

As I mentioned earlier, metrics are often used as a sales tool by consultants to gain more business. I was once invited for dinner at a 5-star hotel by the testing group of a large bank in Australia. I wasn’t aware that the dinner was actually hosted by the bank’s testing vendor, a huge I.T. outsourcing firm. This vendor’s introductory presentation included the data of efficiency they brought to their banking client. These details included automating 3500 test cases, reducing test preparation time by 70% and so forth. As James Christie said in his blog post, “100 is bigger than 10. 10,000 is pretty impressive, and 100,000 is satisfyingly humongous. You might not really understand what’s happening, but when you face up to senior management and tell them that you’re managing thousands of things, well, they’ve got to be impressed.” This vendor certainly impressed their naïve client.

What is this beast called Quality index?

The QI index that was used by the teams I worked with had this definition:

The QI has been defined as a measure of defect density, such that the percentage of defects as a proportion of the total number of test cases executed is defined.

This is a measure of company’s software quality delivery to testing as opposed to company production quality.

Lower QI is better.

The report used to have statements like:

171 test cases executed successfully and 93 defects detected, providing a Quality Index (QI) = 54% (this is within the 1 – Unsatisfactory level).

The report also contained graphs that purported to explain what the quality of the build had been.

“So what’s the problem here?”, you may ask.

This seems to be a valid question, especially when you have been told about these indices and presented with data that seemed accurate. We see such indices on TV every day, where some eminent economist presents his view on the economy and predicts which way the markets will go, and later convincingly explains why the markets did not go the way he predicted. The simplest answer is that no one, including Nobel Prize-winning economists, can predict the future. Humans simply do not have the ability to predict. You may say that you can predict that you will read the next word of this post – but even that is unpredictable. What you would actually mean is, “I predict that I might be able to read the next word, provided the boss doesn’t call right at that moment, or the monitor doesn’t lose power, or the sky doesn’t fall, or..!” The list can go on.

I studied statistics as one of the subjects during my Masters degree. While that study did not make me an expert in statistics, it did improve my knowledge of the subject. And I think it will also help us examine what the problem with this quality index is. Let’s start by looking at definitions.

What is “quality”?

Jerry Weinberg defines quality as “value to some person(s)”. James Bach and Michael Bolton added ‘…who matter.’ to this definition. So the definition that I like is, “Quality is value to some person(s) who matter”.

Michael Bolton suggests that decisions about quality are always political and emotional; made by people with the power to make them; made with the desire to appear rational and yet ultimately based on how those people feel.

Let’s have a look at the overall definition once again. “The QI at Company X has been defined as a measure of defect density, such that the percentage of defects as a proportion of the total number of test cases executed is defined.”

What catches our attention is the term “defect”. Although I prefer calling them bugs.

Is there a point in defining defect density?

James Bach says that a bug is anything that threatens the value of a product – something that bugs someone whose opinion matters (this last part was added by Michael Bolton). This definition automatically makes the benefits that someone may be seeking from a quality index highly questionable. As with predicting the future, humans do not have the ability to find and explore all the bugs that might be in a system. Michael Bolton notes that “the idea of a “bug” is subject to the Relative Rule, meaning a bug is not a thing that exists in the world; it doesn’t have a tangible form. However, a bug is a relationship between the product and some person. A bug is a threat to the value of the product to some person. The notion of a bug might be shared among many people, or it might be exclusive to some person.” So, there may be little point in quantifying something that does not have a physical existence. Once people start counting bugs, they start falling in love with them. They feel that they own these conceptual, non-physical things. Psychology calls this reification: the perception of an object as having more spatial information than is actually present. Is it worthwhile defining the density of an abstract concept, or of something that is subject to the relative rule?

 

The wild world of test cases

The definition of QI also talks about deriving a percentage as a ratio of the number of test cases. What is your definition of a test case? My team stopped writing lengthy, step-by-step test cases or scripts in a deliberate move away from the idea that testers should develop test cases based on a requirements document. What I have observed is that many testers take a requirements document and create a number of test cases for each requirement – positive, negative, sedative, nonsense-itive and so on. These testers wrongly believe that these test cases provide complete coverage1 of the requirements. They create a Requirements Traceability Matrix (RTM), which is usually a table with test cases on one axis and requirements on the other. If all requirements have a mapped test case, then coverage is complete. The managers believe that these detailed test cases help their testers perform complete testing. Managers get upset when coverage metrics are not there or the RTM is missing.

What they don’t realise is that when they say test, what they really mean is check. Furthermore, a single requirement may contain more than one assumption, proposition or assertion. A business analyst who writes down business stakeholders’ requirements may interpret them entirely differently from the stakeholder herself. A developer may interpret them differently again, and similarly a tester may interpret the requirements in a fashion not understood or agreed by the others. Hence writing one test case per requirement, or multiple test cases per requirement, sounds completely incorrect. If that is incorrect, then a ratio based on an incorrect number would be wrong too. And that makes the concept of the Quality Index meaningless.

So what do you think of this claim now:

This is a measure of Company X’s software quality delivery to testing as opposed to Company X production quality.

Lower QI is better.

The QI as it is defined is not a measure of anything of value about the software or its quality – and it is easily gamed (e.g. just split test cases into smaller test cases and immediately reduce the QI for exactly the same piece of software under test with exactly the same number of defects!). The QI even tells you that the testing it relates to (and the way it is being measured) is not great, by saying “Lower QI is better” – this means you are striving to write test cases that don’t find defects. Why would you do that? Ethically, we should not.
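The gaming is easy to demonstrate with the report’s own numbers. This small sketch (illustrative only, not the team’s actual tooling) computes the QI as the report defined it, then shows the same 93 defects producing half the QI merely because each test case was split in two:

```python
def quality_index(defects, test_cases_executed):
    # QI as the report defined it: defects as a percentage
    # of the total number of test cases executed.
    return round(100 * defects / test_cases_executed)

# The report's own numbers: 93 defects across 171 executed test cases.
print(quality_index(93, 171))   # 54  -> the "Unsatisfactory" band

# Now split every test case in two: same software, same 93 defects,
# yet the QI "improves" dramatically.
print(quality_index(93, 342))   # 27
```

Nothing about the software changed between the two calls; only the accounting did. A metric that can be halved by renaming its denominator measures the bookkeeping, not the quality.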

As skilled craftsmen, we should not waste time counting and showing percentages when we could better spend our time talking to our stakeholders about the things we see, the things that interest us, the things that look suspicious, the risks we observe, and the overall quality as we perceive it.

So, the next time you are counting test cases or bugs, working on an RTM or looking at percentages in reports, ask yourself, “Am I simply counting and giving statistics, or am I helping our organisation deliver a quality product with the lowest risk of failure?”

 

1 Further reading:

 

kaner.com/pdfs/impossible.pdf

http://developsense.com/articles/2008-09-GotYouCovered.pdf

http://developsense.com/articles/2008-10-CoverOrDiscover.pdf

http://developsense.com/articles/2008-11-AMapByAnyOtherName.pdf

It’s not Agile that fails organisations. It’s the organisations that fail at agility

Agile has been touted by people (who understand it) as an approach, a value centre, a mindset and a philosophy. Those who understand it have been observing some interesting posts and discussions on social media claiming that Agile has failed their organisation.
If we regard Agile as a tool that a team or organisation might choose to use, then perhaps we can understand the failure of Agile for that organisation. That would be similar to any other tool an organisation might use. Sometimes tools work, sometimes they don’t! Understood! A hammer can certainly fail a carpenter if it breaks during carpentry work. But if the carpenter does not know how to use a hammer, it is not the hammer’s fault, is it? (Just to be clear, this analogy does not suggest that Agile is a tool.)
Let’s not jump too prematurely to any conclusions. Instead, let’s try to cognitively analyse if there is a problem here. Jerry Weinberg’s Rule of Three* states that if you can’t think of at least three different interpretations of what you have received, you haven’t really thought enough about what it might mean. Another version of this rule that my friend Jari Laakso suggested was, “If you can’t think of three things that might go wrong with your plans, then there’s something wrong with your thinking.”

When someone says that Agile has failed them (in other words, their Agile way of working was not successful), the actual problem might have been:

  • They don’t know enough about Agile and tried to “do Agile” rather than “be Agile”.
  • They thought that they knew about Agile and implemented it the way they knew it. What they did didn’t work. (Rajesh’s note: you don’t implement Agile, in the same way you don’t implement truth.)
  • They thought Agile was predominantly about specific practices and conventions: using post-it notes, having daily standups, having sprints and not much else. Despite those, they couldn’t deliver anything.

In some contexts, any (or all) of these cases may have been a key contributor to the failure of Agile.
What troubles me is that many people who blame an approach or a methodology do not in fact try to first understand that approach or methodology.** There was a mention of the waterfall methodology somewhere, and most people in the discussion did not know about waterfall’s origin. Someone mentioned Winston Royce, and disappointingly it turned out that even that person had a selective take on the paper and had conveniently forgotten the last few sections of Royce’s paper, which are very important.
More often than not, Agile methodologies are implemented incorrectly. Some implementers don’t realise that there are Agile values and principles (Jari reminded me about ScrumButs). Some have not taken the time to look at and understand the Agile Manifesto. Many Scrum Masters have never looked at The Scrum Guide; some didn’t even know it existed. I have run the experiment of asking anyone who mentions Agile whether they have actually read the Manifesto. A large number had not. Many of those who had read the Manifesto had not tried to understand it well. Sadly, those who did understand it could not implement the approach outlined by the Manifesto, because their organisations weren’t ready.
It is indeed often easier to blame a methodology or an approach. Agile adoption and the implementation of related frameworks can fail for many reasons. What is important is to investigate what went wrong and whether it could have been avoided. Even more important is to understand an organisation’s culture and whether the organisation and the approach are a good fit for each other. Jerry says in his second rule of consulting, “No matter how it looks at first, it’s always a people problem.” A good Agile coach might be able to help bring about a mindset change, if not a culture change.

So, as often the case may be, Agile hasn’t failed you, you may have failed Agile.

 

* The Secrets of Consulting: A Guide to Giving and Getting Advice Successfully

 

** http://www.slideshare.net/EmielVanEst/did-toyota-fool-the-lean-community-for-decades

Why innovation almost always results from unconventional methods


If you look closely, the process that Henry Ford used to devise the assembly line (the process that introduced mass production to the world) was itself a Lean and Agile process.

When Ford was working and competing with other niche car manufacturers, he didn’t copy the model that everyone else was using. That is, he didn’t rely completely on suppliers that manufactured vehicle parts as individual units. The parts these suppliers produced were not always identical; assemblers at the factories had to mold them with hammers or mallets so that they would fit the vehicle. Instead, Ford kept experimenting to achieve the outcomes he was looking for. His experimentation included changing almost every aspect of the process, often one at a time. His whole approach was iterative and incremental.

Here is an example of what he did and how.

Assembly station related experiment:

In order to reduce the time taken to build cars, Ford designed assembly stations.

At first, both the assembly station and the assembler were stationary.

Cycle time: 514 minutes.

Ford wanted to improve on this time and decided to make changes. As a result, the assemblers now moved from station to station within the factory, while the stations remained stationary.

The cycle time dropped to 2.4 minutes.

Isn’t that astonishing? A lot of people would be elated with such a feat and would stop pursuing any further improvement. In fact, most would not even agree that any more improvement was possible, let alone keep going.

The next change Ford made was to keep the assemblers stationary and turn the assembly stations into moving platforms, which became known as the assembly line.

The cycle time was now 1.3 minutes.

The point is, whatever you are attempting to do, check if that encompasses experimentation, exploration, critical thinking, pivoting, learning (even from failures) and iterating.

Setting up innovation hubs and centres of excellence, or sloppily copying and applying labels, has never made any organisation innovative or progressive. If you’re in a position of power, empower others and build an innovation culture. If you’re not in a position to make large-scale changes, at least change the way you do things.