
‘Test Case Immunity’ – Optimise testing

Test cases, and the defects they uncover, are central to what we do. We focus on uncovering more now, with the fervent hope of finding fewer later. As the system matures over numerous test cycles, test cases stop discovering defects.

We are also constantly challenged to do this with less effort and time. Automated tests and intelligent regression are the typical solutions to this challenge: ‘doing faster’ and ‘doing less’.

In this webinar we examine an interesting idea: measuring “Test Case Immunity” to logically assess which test cases to drop, so that we can ‘do none’.
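The webinar itself defines the metric; as a minimal sketch, assuming immunity is simply the number of consecutive recent test cycles in which a test case found no defect, the idea could look like this (all names and the threshold are illustrative, not the webinar’s actual formula):

```python
def immunity(defect_history):
    """defect_history: defects found per cycle, oldest first.
    Immunity = number of consecutive trailing cycles with zero defects."""
    score = 0
    for defects in reversed(defect_history):
        if defects > 0:
            break
        score += 1
    return score

def candidates_to_drop(history_by_test, threshold=5):
    """Test cases whose immunity meets the threshold are candidates to drop."""
    return [tc for tc, hist in history_by_test.items()
            if immunity(hist) >= threshold]

history = {
    "TC-01": [2, 1, 0, 0, 0, 0, 0],   # immune for the last 5 cycles
    "TC-02": [1, 0, 0, 1, 0, 0, 0],   # found a defect fairly recently
}
print(candidates_to_drop(history))     # ['TC-01']
```

A test that keeps finding defects stays; one that has been “immune” long enough becomes a candidate for dropping, which is the ‘do none’ idea in miniature.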

Here are the slides with notes/transcripts of the webinar.

The recording of the webinar is available here.

You can view the webinar slide deck below:

Design Scientifically

Here are the slides with notes/transcripts of the webinar ‘Design Scientifically’, the third in the tri-series webinar ‘How to test a user story’.

The audio recording of the webinar is available here.

You can view the webinar slide deck below:

You can see the Q&A session of this webinar below. Click on each question to see the answer.

1. Why does it become ‘stimulate to uncover’ in agile projects, while in the waterfall model it was ‘discover bugs/defects’?

We normally think of test cases as stimuli that have to be executed on the System Under Test (SUT) to uncover issues. Our entire notion of test design has always been to “come up with the list of test cases”, whether it is agile or waterfall.

What I meant is that it is not only the act of design; it is the process of digging deeper to understand the intended behavior and then finally validating whether the intended behavior is met.

In a waterfall model, we make a kind of assumption that everything that needs to be known about the behavior is given in the form of a document, which does not apply in the case of agile.

Here ‘stimulate to uncover’ means we are in the process of evaluation; that is, we ‘kind of know’ what is supposed to be done and evaluated.

In the agile world, we are in a process of constant and continual discovery. Hence it is not only about stimulating to uncover; it is also about ‘knowing the unknown’.

2. You explained E1, E2, E3 and E4. It is clear that E1 is the smallest, and E2, E3 and E4 become bigger and more complex. Should we list E1–E4 mixed together, or can we separate each entity from E1 to E4? I think it is very difficult to list them in a mixed mode of E1–E2–E3–E4; it is easier for us to list each entity separately. Can we compare Ex (E1–E4) at one time? It must be very complex and difficult.

When we talk about user stories, we always think of these smaller elemental pieces (E1).

When you have a set of test cases to evaluate, what are we tracing these test cases to: a user story (E1), a collection of user stories (E2), or a usage flow that constitutes a set of user stories (E3)? Having these four categorisations will help you trace those test cases to the different types of entities.

It probably is not ‘complex and difficult’, because constant and continual integration is in progress. As long as we know that there are, say, TEN major flows implemented in THREE sprints using say 50 user stories, the list is there. It is not available at the beginning; it is constructed as we go.

3. In the agile model the user story may keep changing until it gets delivered, so how can we prevent a bug/fault in advance?

Please note that what we build today could be modified rapidly; that is the whole idea of being agile/responsive. So when you build something like a user story and try to evaluate it, if you have a standardised set of defects or a standardised set of scenarios, then we know these are the probable defects that may occur. Knowing that, we become more sensitive to them and will prevent such defects automatically. So you will not 'make the defect' and then 'uncover the defect'.

The whole approach shifts: we produce 'less waste' and higher value, which is exactly what the philosophy of agile says.

The notion of 'Think &amp; Prove' is a process of a higher degree of sensitisation, so that we prevent issues rather than find issues.

4. Whatever we have discussed, does it hold good only for BDD, or for TDD also?

The fundamental notion for both of these is to prevent defects.

BDD – Behavior Driven Development – prevents defects by using behaviors rather than guesswork. That is, list the series of conditions (behaviors) and use them to develop the software.

TDD – Test Driven Development – comes up with various potential scenarios of use/abuse and then uses them to prevent some of these issues.

So whatever we have discussed is very much part of BDD/TDD.
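As a minimal TDD-flavoured sketch of the “scenarios of use/abuse first” idea, here an abuse scenario is written as a test before the implementation, and the code is then written to satisfy it (the function and its rule are hypothetical examples, not from the webinar):

```python
# Abuse scenario written first, before any implementation exists.
def test_rejects_negative_quantity():
    try:
        add_to_cart(quantity=-1)
        assert False, "expected a ValueError for negative quantity"
    except ValueError:
        pass  # the defect is prevented by design, not found later

# Implementation written to make the test above pass.
def add_to_cart(quantity):
    if quantity < 1:
        raise ValueError("quantity must be at least 1")
    return {"quantity": quantity}

test_rejects_negative_quantity()
print("ok")
```

The defect (accepting a negative quantity) never enters the code, because the scenario that would expose it existed before the code did.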

5. Is there any certification that HBT provides?

At this point of time we have only internal HBT certification and have not done anything in terms of public certification. We work with organisations to go beyond the training; we call it “indoctrination”. As part of indoctrination we work with organisations to help them understand HBT, and subsequently work with them to deploy HBT within their process. Typically this starts with a baseline (this is what we are doing currently, these are the existing challenges), and post the indoctrination and deployment we would like to see significant improvement.

6. If we need to explore these frameworks further, where do we get the information?

We are in the process of building a portal which will contain more information on HBT. It is in some sense available now, but is still being constructed. In a few weeks it will be updated with reasonably good information, and an email will be sent to all the participants about this. The URL of the portal is . There is also information available in the form of a blog with various articles; the URL of the blog is .

7. What kind of classes are available for HBT training? Is it possible to get that training from Japan?

At this point of time what we do is classroom training. There is ongoing work, which will be completed in a few more months, to provide online educational materials for HBT.

8. Are there any constraints to use this framework?

Please understand that it is a methodology, a system of thinking that provides you a set of tools for thinking. We constantly learn to adapt them meaningfully to the situation. There is no rigidity in any of it, no 'write like this, follow these steps'. These are a simple set of thinking tools which we learn to adapt to a variety of situations, whether it is the technology, the process model, or the kind of application.


Setting a clear baseline

Here are the slides with notes/transcripts of the webinar ‘Setting a clear baseline’, the second in the tri-series ‘How to test a user story’.

The audio recording of the webinar is available here.

You can view the webinar slide deck below:

You can see the Q&A session of this webinar below. Click on each question to see the answer.

1. Is this kind of HBT approach followed both in agile and waterfall systems? Is there any change between them?

HBT is a methodology that is adapted to different software development methodologies. Here we need to look at:

  • What the elements to test are and how we adapt to them: here we are not talking about features and flows, but about user stories and their connections.
  • The connection between the notion of acceptance criteria and cleanliness criteria: how they relate and how we can apply them.

These are the adaptations and adjustments made to suit the rapidly evolving model of agile development.

2. Is 'Think and Prove' only to decide between manual and automation?

No, it is not something to decide between manual or automation.

We have always thought of, and mostly practised, testing as a dynamic activity where we inject the system with a set of stimuli and examine the behavior for its correctness or lack of it. Quite often we can do this mentally.

We could ask: what if I give these kinds of inputs? What if this system is down? And we would know what will happen.

In the agile context there is no such role as developer or tester; we are all software engineers, part of a team that is constructing and testing in an iterative mode in rapid cycles.

So the whole idea here is: we know what is “going IN”, and if we can do a process of intellectual evaluation, it will sometimes be faster. We don’t always have to come up with test cases and then decide whether to execute them manually or automate them.

3. The difference that I mainly see is that rather than looking for formal functional requirements, this journey starts with the user's expectations and needs. Is that right?

There isn’t a detailed set of requirements that comes along as a large document in the context of agile. What we get are short user stories which evolve and change quite rapidly, based on what we have learnt and what the user wants.

It is quite natural that the start of the journey is really about the needs expressed as user stories, with the expectations set out as acceptance criteria.

4. If we have so many levels of testing, wouldn't the HBT methodology take a longer time to test, considering agile?

Please understand that we did not talk about “levels of testing”; it is about 'levels of quality'.

We don’t have to do all these tests always. What we are saying is: if we have to conduct, say, THREE types of tests, is there a simple, natural order so that I can be efficient? The first one may take 30 minutes, the second 20 minutes and the third 15 minutes. We are talking about the tests that need to be conducted on an ENTITY in 'a sprint', not in the entire life cycle.

The whole idea is to clarify mentally, to allow us to “be effective and be rapid” and learn to adjust as we do.

We are not talking about the standard notion of unit test/integration test etc. What we are saying is: given a particular entity under test (a user story, or a collection of user stories), we want to ensure that it rejects bad data from the variety of interfaces where it accepts data. Are there any structural issues to worry about (maybe it is connected to different systems for other services; what if they have an issue)? Then we look at the individual small behaviors of the user story, and then go on to security, performance and so on.

So the whole point here is NOT more levels; it is a natural order.
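The natural-order idea above can be sketched as a cheapest-first pipeline: run the earlier, quicker stages first and stop when one fails, so the later, costlier stages are not wasted on a broken entity (the stage names and durations below are illustrative, taken loosely from the answer, not an official HBT list):

```python
test_types = [
    {"name": "input cleansing (reject bad data)", "minutes": 30},
    {"name": "structural / external dependencies", "minutes": 20},
    {"name": "behaviour of the user story",        "minutes": 15},
]

def run_in_order(types, passed):
    """Run stages in their natural order; stop at the first failure.
    `passed` is a callback reporting whether a stage succeeded."""
    executed = []
    for t in types:
        executed.append(t["name"])
        if not passed(t["name"]):
            break
    return executed

# Example: the structural stage fails, so behaviour tests never run.
order = run_in_order(test_types, passed=lambda name: "structural" not in name)
print(order)
```

The ordering is the efficiency gain: a defect caught at the cleansing stage costs 30 minutes, not the full 65.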

5. Testing is believed to be one of the early candidates for Automation amongst the SDLC stages. Would testing a user story lend itself to automation quite easily? Or do you think this brings to the fore the art and science of test effectiveness more than test efficiency?

The whole idea of breaking down the BIG system into smaller entities is that we want to code faster and correctly, test effectively and easily, and evolve rapidly.

There is definitely an advantage, because each entity is much smaller and therefore lends itself to automation.

Definitely the smaller user stories are excellent candidates for quick and easy automation, especially if they need to be regressed periodically. A user story by itself is small, but a combination of user stories, as they become a flow, becomes interestingly complex.

Automation does allow us to do things more speedily and helps us repeat things quite frequently to ensure that we have not broken anything else. But at no point should we believe that automation necessarily makes the quality of the system significantly better; that is not necessarily true.

6. In an agile system the baseline needs to be changed many times after it is made. Is this correct? How many times does it need to change?

Yes, that is right. There is no such thing as a uniform baseline; we are talking in the context of a sprint. As the system evolves, the baseline adjusts rapidly.

So you want a baseline that evolves in consonance with the evolution of the user story.

You change it as many times as required!

Remember we are talking of individual, smaller user stories that are constantly evolving, with a growing appreciation of what we are building and more feedback from the possible customer personas.

7. If a baseline is prepared for each and every sprint, what if there is connectivity between the stories from one sprint to another? How should we go about this?

Remember we have talked about entities of four types: 1) individual user stories, 2) a collection of user stories, 3) a collection of user stories across an epic, 4) a collection of user stories within an epic and across sprints.

We have a baseline for each of these FOUR variations of entity, and keep adjusting as you go across the various sprints; these elements become progressively more interesting and bigger. So there will be connectivity: obviously the user stories have to be used in a meaningfully collaborative manner to accomplish the value of the entire system.

Please keep in mind that we are not always testing a user story alone; we also test combinations of user stories. For whatever we are evaluating (a user story or a flow), we figure out the applicable tests, come up with the test cases, and proceed with the standard way of evaluation.

Remember that what we have shown in tabular form is not to be considered a set of templates; rather, think of these as tools that help us be clear about what we need to do in a particular sprint.