Role of testing in DevOps

One question that I get asked frequently is whether DevOps implies significantly higher costs for testing.

Before answering that question, I would like to spend a few minutes on the types of testing that are usually recommended and what is actually done.

Teams following the V-model would focus on Unit, Integration, System and Acceptance [also known as business function] tests.

This is good.

But it has the limitations inherent in a Dev-influenced life cycle.

Non-functional testing is usually considered a job for specialists and is, in many cases, taken up as an afterthought.

Even with all this, aspects related to security and vulnerability are rarely considered as part of the testing.

But then, as anyone who has developed software knows, the non-functional attributes need to be ‘baked in’, not bolted on.

This can happen only when the architecture takes these requirements also into account.

The most popular development approaches among DevOps teams – Agile and Lean – do not define a clear role for an architect.

By now, you would be getting the drift of what I am hinting at.

Even the most critical applications today may not be subjected to the breadth and depth of testing needed to make them production quality: reliable and available enough to meet business goals.

So, the first point to consider when computing the cost of testing is the baseline: what is the desirable level of testing?

Next come the rigor of testing and the stage at which it is done, which is usually close to the release event.

By then, code freezes would have happened, and the Dev teams would be busy updating documentation or creating Easter eggs.

In a DevOps model, non-functional requirements will include the considerations for successful deployment in production.

This will mean that the application – or enterprise – architecture should take this into account as well, right from the beginning and on a continuing basis.

What continuous delivery [or deployment] necessitates is that every increment that goes to production is fully tested.

Obviously this will mean that testing also has to be continuous.

Thankfully, modern approaches, techniques and enabling technologies make this a little easier, particularly when tests need to be repeated.

Another significant approach that has proven beneficial is the shift-left model.

Moving testing to as early in the life cycle as possible – either by including these criteria as part of the definition of DONE in earlier stages of development, or by running automated tests on every build – not only reduces the cost of testing, but also ensures that critical components are tested multiple times, thereby reducing the risk of failure.
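As a small, hypothetical illustration (the business rule and names here are my own, not from any particular project), a shift-left team would keep a fast unit suite next to the code and run it on every build, so a defect in a core rule fails the pipeline within minutes of the commit rather than at the release event:

```python
import unittest

def apply_discount(price, percent):
    """Business rule under test: percentage discount, never out of range."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountTests(unittest.TestCase):
    """Fast checks that run on every build, not just before release."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_no_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)
```

In CI, a step such as `python -m unittest` exits non-zero on any failure, which is what turns these tests into a build gate: the increment cannot move forward until they pass.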

Another requirement of a successful DevOps implementation is the ability to roll back gracefully.

Even with a lot of care in testing, incompatibilities in the application's environment sometimes lead to instability. Rollbacks help minimize the damage to production stability.
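To make the idea concrete, here is a minimal sketch (the `Deployer` class and health-check hook are illustrative assumptions, not any specific tool's API) of the pattern behind graceful rollback: always retain the last known-good release so that a failed post-deploy health check can restore it automatically.

```python
class Deployer:
    """Minimal sketch: retain the last known-good release for rollback."""

    def __init__(self):
        self.current = None      # release currently serving traffic
        self.last_good = None    # most recent release that passed its checks

    def deploy(self, release, health_check):
        """Deploy `release`; roll back to the previous release if unhealthy."""
        previous = self.current
        self.current = release
        if health_check(release):
            self.last_good = release
            return True
        # Graceful rollback: restore the release that was serving traffic.
        self.current = previous
        return False
```

With this shape, deploying a healthy "v1" and then an unhealthy "v2" leaves "v1" serving traffic; real-world equivalents include versioned artifacts, blue-green environments, or a platform's rollout-undo facility.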

Now, to the question that triggered this post.

Yes, there are costs one needs to consider – for automating testing, for adopting testable architecture as a way of development, and so on.

But a fairer comparison would be to look at the cost/benefit ratio.

As teams aim to deliver continuous business value, delivering it right every time becomes an extremely important requirement.

Based on the case studies documented and freely available on the web, paying attention to testing from the beginning and taking the Ops requirements into the architecture as well have paid satisfying dividends.

When one considers the diversity of devices used to access applications, the complexity of testing becomes even higher.

The session on testing and mobility at the Decoding DevOps conference will touch upon many of these aspects and also be a platform for some of the practitioners to share their experiences.

What do you think?

2 Responses

  1. Velocity of each individual iteration will be a different figure. There are many ways velocity gets impacted. Apart from planned absence (planned leave, training etc.) and holidays, there could be unplanned absences caused by illness, personal emergency etc. which impact velocity. User stories that do not get completed in an iteration get moved to the next iteration. This brings down the velocity of the iteration where the story was started and bumps up the velocity of the iteration where it got completed. This being the situation, a good practice is to take the average of the last five or six iterations as the velocity of the team. Team stability is another factor that impacts velocity. Teams that have higher churn will see higher volatility in velocity. Other factors such as a change in technology, adoption of new tools, or an increase in automation will also impact velocity, either positively or negatively! However, if the team is stable and has reached the “performing” stage, a steady rise in average velocity will be seen over a period of time until any of the factors mentioned above comes into play and impacts it.

    1. Thanks Milind, fully agree with your comment.
      Finally, irrespective of the increasing trend in velocity, there is improvement for sure. This cannot be missed, if observed. One of the intents of my blog is to encourage this observation, by taking a mildly provocative stand.
