
Design checklists to “Do, Sync & Act”

This article is the second one on checklists, the first one being “The power of checklist”.

Checklists seem to defend everyone against failure, a kind of cognitive net designed to catch flaws of memory and attention.

Atul Gawande in his book “ The Checklist Manifesto” states that there are three kinds of problems in the world:

  1. SIMPLE: Individualistic in nature, solvable by the application of simple techniques, e.g. baking a cake.
  2. COMPLICATED: Can be broken into a set of simple problems; requires multiple people, often multiple specialized teams. Timing and coordination become serious concerns, e.g. launching a rocket.
  3. COMPLEX: These are problems where the solution applied to two similar problems may not result in the same outcomes. e.g. Raising a child. Expertise is valuable, but most certainly not sufficient.

He goes on to say that, in the case of simple problems, checklists can provide protection against the elementary errors we are all prone to. This can be accomplished with a simple activity/task checklist.

In the case of complex problems that require multiple specialists to coordinate and be in sync, a simple activity/task checklist won’t do; what is needed is a checklist with communication tasks, to ensure that the experts discuss the matter jointly and take appropriate action.

“Man is fallible, but maybe men are less so.” This is belief in the wisdom of the group: the wisdom of having multiple pairs of eyes on the problem and letting the watchers decide what to do.

So how can checklists help in solving simple and complex problems? By using simple activity/task checklists to ensure simple steps are not missed or skipped, and checklists with communication tasks to ensure that everyone talks through and resolves the hard and unexpected problems.

Building tall buildings is a complex problem and the success rate of the construction industry’s checklist process has been astonishing. Building failure is less than 0.00002%, where building failure is defined as the partial or full collapse of a functioning structure. (from a sample size of a few million buildings!).

Now let us turn our attention to complex problems. How does one deal with complex, non-routine problems that are fundamentally difficult, potentially dangerous and unanticipated? In these situations, the knowledge required exceeds that of any individual and unpredictability reigns. To solve these it is necessary to push the power of decision making from a central authority to the periphery, allowing people to make decisions and take responsibility.

So the checklist needs to allow for judgment to be used in the tasks rather than enforce compliance, so that actions may be taken responsibly. It needs a set of checks to ensure the stupid but critical stuff is not overlooked, and a set of checks to ensure coordination and enable responsible actions to be taken without having to ask for authority. There must be room for judgment, but judgment aided and even enhanced by a procedure. Note that in COMPLEX situations, checklists not only help, they are *required* for success.

So, how does one make checklists that work? The aviation industry thrives on checklists, both in normal times and to tackle emergencies. The advice from Boorman of Boeing’s “checklist factory” for making checklists that work is:

  1. Good checklists are precise and easy to use, even in the most difficult situations. They do not try to spell out everything; they provide reminders of only the most critical and important steps, the ones that even highly skilled professionals could miss.
  2. Bad checklists are too long and hard to use; they are impractical. They treat people as dumb and try to spell out every step, and so they turn people’s brains off rather than turning them on.

“The power of checklists is limited, they help experts remember how to manage a complex process or machine, make priorities clearer and prompt people to function well as a team. By themselves, however, checklists cannot make anyone follow them.” (Boorman, Boeing)

So what should a good checklist look like?

  1. Keep the length of the checklist between five and nine items, the limit of human working memory.
  2. Decide whether you need a DO-CONFIRM checklist or a READ-DO checklist. With a DO-CONFIRM checklist, individuals perform their jobs from memory and experience and pause to CONFIRM that everything that was supposed to be done was done. With a READ-DO checklist, individuals carry out the tasks as they tick them off, like a recipe. So choose the right type of checklist for the situation: DO-CONFIRM gives people greater flexibility in performing the tasks while nonetheless having to stop and confirm at key points (a minimal sketch of both styles follows this list).
  3. Define clear pause points at which a checklist should be used.
  4. The look of the checklist matters: it should be free of clutter and fit on a page.
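To make the two styles concrete, here is a minimal sketch in Python (the checklist items, pause points and function names are hypothetical examples of mine, not from Gawande’s book) of how a READ-DO checklist is worked through versus how a DO-CONFIRM checklist is used at a pause point:

```python
# Illustrative sketch only: two checklist styles, READ-DO vs DO-CONFIRM.
# The items and pause points below are made-up examples.

READ_DO = {                         # read each step and do it, like a recipe
    "pause_point": "restoring a database from backup",
    "items": [
        "Notify stakeholders and freeze writes",
        "Verify the latest backup checksum",
        "Restore to staging and run smoke checks",
        "Switch traffic and watch error rates",
    ],
}

DO_CONFIRM = {                      # work from memory, then pause and confirm
    "pause_point": "before merging a change",
    "items": [                      # five to nine items: within the limit of memory
        "Unit tests run and pass locally",
        "Error and boundary inputs handled",
        "New code paths covered by at least one negative test",
        "Logs updated for new failure modes",
        "Peer review comments addressed",
    ],
}

def run_read_do(checklist):
    """Tick off each step as it is carried out."""
    for step in checklist["items"]:
        print(f"[ ] {step}")
        # ... perform the step here ...
        print(f"[x] {step}")

def confirm(checklist, done_steps):
    """At the pause point, confirm nothing that was supposed to be done was skipped."""
    return [s for s in checklist["items"] if s not in done_steps]

if __name__ == "__main__":
    run_read_do(READ_DO)
    missed = confirm(DO_CONFIRM, {"Unit tests run and pass locally"})
    print("Still to confirm:", missed)
```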

In summary

  • Ticking boxes is not the ultimate goal
  • A checklist is not a formula; it enables one to be as smart as possible
  • It improves outcomes with no increase in skill
  • Checklists aid efficient working

As smart individuals, we don’t like checklists. It somehow feels beneath us to use a checklist, an embarrassment. The fear is that checklists are about mindless adherence to protocol.

Hey, the checklist should get the dumb stuff out of the way so as to let you focus on the hard stuff. We are all plagued by failures – by missed subtleties, overlooked knowledge, and outright errors. Just working harder won’t cut it. Accept our fallibilities. Recognise the simplicity and power of the checklist.

Try a checklist. It works.


If you find the article interesting, please ‘like’, ‘share’ or leave a comment below.


The power of checklist

Recently I read the book “The Checklist Manifesto” by Atul Gawande. 

“An essential primer on complexity in medicine” is what The New York Times says about the book, while The Hindu describes it as “an unusual exploration of the power of the to-do list”.

As an individual committed to perfection, in constant search of scientific and smart ways to test and prevent, and as an architect of Hypothesis Based Testing, I was spellbound reading this brilliantly written book, which makes the lowly checklist the kingpin for tackling complexity and establishing a standard of higher baseline performance.

The problem of extreme complexity

The field of medicine has become the art of managing extreme complexity. It is a test of whether such complexity can be humanly mastered: 13000+ diseases, syndromes and types of injury (13000 ways a body can fail), 6000 drugs, 4000 medical and surgical procedures, each with different requirements, risks and considerations. Phew, a lot to get right.

So what has been done to handle this? Split knowledge into various specializations; in fact, we have super-specialization today. But it is not just the breadth and quantity of knowledge that has made medicine complicated, it is also the execution of it. In an ICU, an average patient requires 178 individual interactions per day!

So to save a desperately sick patient it is necessary to: (1) Get the knowledge right (2) Do the 178 daily tasks right.

Let us look at some facts: 50M operations/year, 150K deaths following surgery/year (this is 3x the number of road fatalities), at least half of them avoidable. Knowledge exists in supremely specialized doctors, yet mistakes occur.

So what do you do when specialists fail? Well, the answer comes from an unexpected source, one that has nothing to do with medicine.

The answer is: THE CHECKLIST

On Oct 30, 1935, a massive plane that could carry five times as many bombs as the army had requested roared down the runway at the airport in Dayton, Ohio, lifted off, and then crashed. The reason cited was “pilot error”. A newspaper reported that “this was too much airplane for one man to fly”. Boeing, the maker of this plane, nearly went bankrupt.

So, how did they fix this issue? By creating a pilot’s checklist, as flying this new plane was too complicated to be left to the memory of any one person, however expert. The result: 1.8 million miles flown without one accident!

In a complex environment, experts are up against two main difficulties: (1) the fallibility of human memory, especially when it comes to mundane, routine matters which are easily overlooked when you are strained by other pressing matters at hand; (2) skipping steps even when you remember them, because we convince ourselves that certain steps in a complex process don’t always matter.

Checklists seem to provide protection against such failures and instill a kind of discipline of higher performance.

Peter Pronovost in 2001 decided to give a doctor’s checklist a try to tackle central line infections in the ICU. So what was the result after one year of usage? The checklist prevented 43 infections and 8 deaths and saved USD 2M! In another experiment, it was noticed that the proportion of patients not receiving recommended care dipped from 70% to 4%, the occurrence of pneumonia fell by a quarter, and 21 fewer patients died.

In a bigger implementation titled the “Keystone Initiative” (2006), involving more hospitals over an 18-month duration, the results were stunning: USD 175M saved, 1500+ lives saved!

ALL BECAUSE OF A STUPID CHECKLIST!

So where am I heading? As a Test Practitioner, I am always amazed at how we behave like cowboys and miss simple issues, causing great consternation to customers and users. Here again, it is not about a lack of knowledge; it is more often about carelessness. Some of the issues are so silly that they could be prevented by the developer while coding, and certainly do not demand testing by a professional. This is where a checklist turns out to be very useful.

In an engagement with a product company, I noticed that one of the products had a product backlog of ~1000 issues, found both internally and by the customer. Doing a HyBIST level-wise analysis, we found that ~50% of the issues could have been caught or prevented by the developer, avoiding the vicious cycle of fix and re-test. A simple checklist used in a disciplined manner can fix this.

So how did the checklists help in the field of medicine or aviation? They helped in memory recall of clearly set out minimum necessary steps of the process. They established a standard for higher baseline performance.

Yes! HIGHER BASELINE PERFORMANCE. Yes, this is what a STUPID CHECKLIST CAN DO!

So how can test practitioners become smarter and deliver more with less? One way is to instill discipline and deliver a higher baseline performance. I am sure we all use some checklist or other, but still find the results falling a little short.

So how can I make an effective checklist and see higher performance, especially in this rapid Agile software world?

This will be the focus of the second part of this article, to follow. Checklists can be used in many areas of software testing; in my next article I will focus on how to prevent the simple issues that plague developers and make the tester a sacrificial goat for customer ire, by using a simple, shall we say, “unit testing checklist”.

Related article: Design checklists to “Do, Sync & Act”


If you find the article interesting, please ‘like’, ‘share’ or leave a comment below.


Frictionless development testing

Very often in discussions with senior technical folks, the topic of developer testing and early-stage quality pops up. It is always about ‘we do not do good enough developer testing’ and how that has increased post-release support. They are keen on knowing ‘how to make developers test better and more diligently’, and they outline their solution approach via automation and stricter process. The philosophy is always “more early testing”, which has typically been harder to implement.

Should we really test more? Well, it is necessary to dig into the basics now. Let me share my view of what they probably mean by testing. My understanding is that they see testing as dynamic evaluation to ascertain correctness: come up with test cases that will be executed using a tool or a human, and check correctness by examining the results. Therefore good developer testing is always about designing test cases and executing them.

And that is where the problem is. Already under immense time pressure, the developer faces a serious time crunch to design test cases and execute them (possibly after automating them). When it does happen, they all pass! (Not that you would know if they fail!) And the reason I have observed for the ‘high pass rate’ is that the test cases are most often conformance oriented. When non-conforming data hits the system, Oops happens!

So should we continue to test harder? What if we changed our views? (1) That testing need not be limited to dynamic evaluation, but could also be done via static proving. That is, ascertaining correctness not only via execution of test cases but by thinking through what can happen with the data sets. (2) That instead of commencing evaluation with conformance test cases, we start in reverse, with non-conforming data sets first. Prove that the system rejects bad inputs before we evaluate for conformance correctness. (3) That instead of designing test cases for every entity, we use a potential defect type (PDT) catalog as a base to check for non-conformances first: the PDT catalog is the base for the non-conformance check, preferably via static proving, while entity-specific positive data sets are devised for conformance correctness.

So how do these views shift us toward better developer testing at an early stage? Well, the biggest shift is about doing less by being friction-less. Enable smooth evaluation by using the PDT catalog to reduce design effort, apply static proving to think better and reduce or prevent defects rather than executing rotely, and finally focus on issues (i.e. PDTs) first, complementing the typical ‘constructive mentality’ that we as developers have. Rather than doing more with stricter process, let us loosen and simplify, to enable ‘friction-less evaluation’.

Think & prove vs Execute & evaluate

Picking up a PDT from the catalog and applying a mental model of the entity’s behaviour can enable us to rapidly find potential holes in the implementation. To make this idea easy to apply, let us group the PDTs into three levels. The first deals with incorrect inputs only, the second with incorrect ways of accepting these inputs, and the last with potentially incorrect internal aspects related to code structure and the external environment. Let the act of proving robustness to non-conformances proceed from level 1 through 3, thinking through (1) what may happen when incorrect inputs are injected, (2) how the interface handles incorrect order/relationships of these inputs, and finally (3) how the entity handles (incorrect) internal aspects of structure like resource allocation, exception handling, multi-way exits, timing/synchronisation, or a misconfigured/starved external environment.
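As a concrete illustration of the level-wise grouping (the catalog entries and the example entity below are hypothetical placeholders of mine, not an actual PDT catalog), a minimal sketch in Python could look like this:

```python
# Illustrative sketch of a three-level PDT (potential defect type) catalog.
# The entries below are hypothetical examples of defect patterns; the point
# is the level-wise grouping and the level 1 -> 3 walk order.

PDT_CATALOG = {
    1: [  # Level 1: incorrect inputs
        "empty or missing value",
        "value outside the allowed range",
        "wrong type or malformed format",
    ],
    2: [  # Level 2: incorrect ways of accepting inputs
        "inputs supplied in the wrong order",
        "inconsistent relationship between two inputs (e.g. start > end)",
        "duplicate or repeated submission",
    ],
    3: [  # Level 3: incorrect internal/structural and environmental aspects
        "resource acquired but not released on an error path",
        "exception swallowed on a multi-way exit",
        "timing/synchronisation issue under concurrent calls",
        "misconfigured or starved external environment (disk, DB, network)",
    ],
}

def prove_robustness(entity_name, notes_by_level):
    """Walk levels 1..3, recording how the entity is argued (statically)
    or shown (dynamically) to handle each defect pattern."""
    for level in (1, 2, 3):
        print(f"== {entity_name}: level {level} ==")
        for pdt in PDT_CATALOG[level]:
            verdict = notes_by_level.get((level, pdt), "NOT YET ARGUED")
            print(f"  {pdt}: {verdict}")

# Example usage for a hypothetical 'date range filter' entity:
prove_robustness("date range filter", {
    (1, "empty or missing value"): "rejected with a validation error",
    (2, "inconsistent relationship between two inputs (e.g. start > end)"):
        "rejected before query construction",
})
```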

Non-conformance first

Recently a senior executive stated that his organisation’s policy for developer testing was based on ‘minimal acceptance’, i.e. ascertain whether the entity works with the right inputs. As a result the test cases were more ‘positive’ and would pass. Post-release was a pain, as failures due to basic non-conforming inputs made customers very irritated. And the reason cited for the ‘minimal acceptance’ criterion was the lack of time to test the corner cases. Here the evaluation was done primarily dynamically, i.e. by executing test cases. When we get into the ‘Think & Prove’ mode, it makes far better sense to commence by thinking through how the entity will handle non-conformance, looking at each error injection and its potential fault propagation. As developers, we are familiar with the code implementation, and therefore running the mental model with a PDT is far easier. This provides a good balance to code construction.

PDTs instead of test cases

Commencing with non-conformance is best done by using patterns of non-conformance, and that is what a PDT is all about. It is not an exact instantiation of incorrect values at any of the levels (1-3); it is rather a set of values satisfying a condition violation. This kind of thinking lends itself to generalisation and therefore simplifies test design, reducing friction and optimising time.
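To contrast a PDT with a specific test case, here is a minimal sketch (in Python; the ‘age’ field, its valid range and the entity are hypothetical) of a PDT expressed as a condition violation together with a few representative non-conforming values:

```python
# Illustrative sketch: a PDT as a pattern (a condition violation), not a
# single hard-coded bad value. The 'age' field and its valid range are
# hypothetical; the pattern generalises to any bounded numeric input.

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class PDT:
    name: str
    violates: Callable[[object], bool]  # does this value violate the condition?
    samples: Iterable                   # a few representative non-conforming values

# Conformance condition: 0 <= age <= 120 and age is an integer
out_of_range_age = PDT(
    name="age outside the allowed range",
    violates=lambda v: not (isinstance(v, int) and 0 <= v <= 120),
    samples=[-1, 121, 10**9, None, "forty"],
)

def accept_age(value):
    """Entity under development (hypothetical): should reject bad ages."""
    if not isinstance(value, int) or not (0 <= value <= 120):
        raise ValueError("invalid age")
    return value

# Sanity-check the PDT itself: every sample must indeed violate the condition
assert all(out_of_range_age.violates(v) for v in out_of_range_age.samples)

# Non-conformance first: every sample of the PDT must be rejected
for bad in out_of_range_age.samples:
    try:
        accept_age(bad)
        print(f"HOLE: accepted non-conforming value {bad!r}")
    except ValueError:
        pass  # rejected as expected
```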

To summarise, the goal is to enable building high-quality entity code at an early stage, and we approached this by being ‘friction-less’. By changing our views and doing less. By static evaluation rather than resorting only to dynamic evaluation. By focusing on robustness first and then conformance. By using a PDT catalog rather than specific test cases.

Once the entity under development has gone through levels 1-3 quickly, it is necessary to come up with specific conformance test cases and then dynamically evaluate them if the entity is non-trivial. If the entity under development is not a new one, but one that is being modified, then think through the interactions with other entities and how they may enable propagation of PDTs, before regressing.

So if you want to improve early-stage quality, smoothen the surface for developer testing. Make it friction-less. Do less and let the entities shine. It is not about doing more testing; it is about being more sensitised and doing less. Let evaluation by a developer weave in naturally and not be another burdensome task.

What are your views on this?

“Roadmap to Quality” – Panel discussion at SofTec 2012 Conference

SofTec 2012, Bangalore, July 14, 2012

The panel discussion on “Roadmap to Quality” was brilliant due to the cross-pollination of interesting ideas from non-software domains. Three of the four panelists were from non-software domains – Mehul@Arvind Retail, Soumen@GM and Raghavendra@Trellborg – with the lone exception being Murthy from Samsung; the discussion was moderated by Ashok@STAG.

The key takeaways from the panel discussion are:

  1. Continuous monitoring helps greatly, as it is like a mirror that constantly reflects what you do; this is what Mehul@Arvind highlighted as being important in his domain of the apparel/retail business. Ashok connected this to the dashboards that are becoming vogue in our workplace, more so in the Agile context.
  2. Soumen@GM stated the importance of early-stage validation, such as simulation and behaviour modelling, in the automotive industry, as the cost of a fix at a later stage is very expensive. The moderator connected this to “Shift Left”, the new term in our SW industry: how can we move validation to earlier stage(s)?
  3. Raghav@Trellborg, a component manufacturer of high-technology sealing systems, stated that understanding the final context of usage of the component is very important to ensuring high quality. He also stated that testing is deeply integrated into the “shop floor”, i.e. daily work, and that the most important aspect of quality is not QA or QC but the underlying quality systems in place. How do Q systems ensure that quality is deeply entrenched in daily life? The moderator highlighted the fact that in the software industry we have implemented systems, but these are still at an organizational level; the need of the hour in the SW industry is to institutionalize these at a personal level.
  4. Finally, Murthy stated that the level of quality needed is not the same in all domains; in certain domains (like mobile) that have disruptive innovation and short life cycles, “we need just enough quality”. He highlighted the need to understand the “technical debt” we can tolerate as a driver for deciding “how much to test”.

You can also read the special news on the panel discussion on Silicon India website.

Relevant topics:
a. Software testing lacking serious effort

You are only as good as your team

A semiconductor company, considered a pioneer in 4G-WiMAX, dreamt of being among the first companies to launch WiMAX solutions. On the verge of launching their product, the only challenge on the untrodden path was imagination.
Their QA requirements were as unique as the product being developed. They were looking for a partner who would be as spirited as they were. Could STAG prove its mettle? Could we be the team they were hoping for?

One question we are asked almost immediately after saying hello is “Do you have the domain expertise?”, and then we speak about HyBIST. That couldn’t happen this time. Pioneers can’t ask for experience. Soon we were working on conformance validation (of what later became the IEEE standard). Within a few weeks we understood why they were looking for someone beyond ‘I-can-provide-testing-resources-too’.

BuildBot is a system to automate the compile/test cycle required by most software projects to validate code changes. BuildBot watches a source code repository (CVS or another version control system) for interesting changes to occur, then triggers builds with various steps (checkout, compile, test, etc.).

STAG set up a system to automate the build, compile and validation of code changes in the source code repository. The builds are run on a variety of slave machines, to allow testing on different architectures, compilation against different libraries, kernel versions, etc. The results of the builds are collected and analyzed (compile succeeded/failed/had warnings, which tests passed or failed, memory footprint of generated executables, total tree size, etc.) and are displayed on a central web page. The entire system was around 6000 lines of code in Python.
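As a rough illustration of the shape of such a system (this is not the actual 6000-line implementation; the repository commands, paths and result fields are hypothetical placeholders), a minimal watch-and-build loop might look like:

```python
# Minimal, illustrative watch-and-build loop: poll a repository, run build
# steps, collect results for a dashboard. This is a sketch of the idea, not
# the BuildBot-based system described above. Paths/commands are placeholders.

import subprocess, time

REPO_DIR = "/path/to/checkout"          # hypothetical working copy
STEPS = [
    ("checkout", ["cvs", "update", "-d"]),
    ("compile",  ["make", "all"]),
    ("test",     ["make", "test"]),
]

def current_revision():
    """Return something that changes when the repository changes (placeholder)."""
    out = subprocess.run(["cvs", "status"], cwd=REPO_DIR,
                         capture_output=True, text=True)
    return hash(out.stdout)

def run_build():
    results = {}
    for name, cmd in STEPS:
        proc = subprocess.run(cmd, cwd=REPO_DIR, capture_output=True, text=True)
        results[name] = {"rc": proc.returncode,
                         "warnings": proc.stdout.count("warning")}
        if proc.returncode != 0:
            break                        # stop the chain on the first failure
    return results                       # would be published to a central web page

if __name__ == "__main__":
    last = None
    while True:
        rev = current_revision()
        if rev != last:                  # an "interesting change" occurred
            print(run_build())
            last = rev
        time.sleep(60)                   # poll interval
```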

This resulted in quick validation of code changes in the repository, leading to reduced rework time and thus increased productivity for distributed development teams.

Surprises are always around the corner

An industry leader providing end-to-end solutions to automate foreign exchange trading, our customer provides innovative solutions for financial institutions. Their flagship product, an online FOREX trader, connects to various trading gateways across the world using adapters. That is no small task. We’re talking millions of transactions at sub-200 ms response times.
When we were called in to develop an automation suite for one of its components, we didn’t expect anything challenging. Boy, were we in for a surprise or what?

An important middleware component called the Adapter links FX Inside to the provider. Different providers have their own adapters. The real work of the Adapter is to direct the homogeneous data sent by the client while trading into the heterogeneous environment of the provider, and vice versa. These adapters have to be tested for every release of the core application. They are backend, non-UI programs, which requires scripts to be written to test the functionality at the API level.

The objective was to develop an automation suite which could be used to test multiple adapters on both simulator and live setups. The automation suite had to be flexible enough to cater to testing new adapters added in the future, with minimal changes.

For that, we interacted with the developers to understand the functionality of the adapters, and finally we developed a framework which would cater to automating multiple adapters and also adding new adapters in the future.

The team took an incremental approach towards automation of the adapters, first interacting with the development and QA teams and gathering the necessary information, from which the common scenarios across the adapters were identified. The critical part of the automation was to develop scripts that could automatically restart the adapters residing on a remote Linux box, send trading messages to the adapter component, receive them by listening to the messaging broker, and parse the necessary information.
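A minimal sketch of that critical flow (illustrative only: the host names, restart script, broker endpoint and message format are hypothetical placeholders, and the real suite’s structure is not reproduced here) could look like:

```python
# Illustrative sketch of the three critical steps: restart a remote adapter,
# send a trading message, and listen for the response from the broker.
# Host names, paths, ports and the message format are hypothetical.

import socket, subprocess

ADAPTER_HOST = "adapter-box.example.com"              # remote Linux box (placeholder)
ADAPTER_RESTART_CMD = "/opt/adapter/bin/restart.sh"   # placeholder script
BROKER = ("broker.example.com", 6000)                 # placeholder broker endpoint

def restart_adapter():
    """Restart the adapter on the remote Linux box over ssh."""
    subprocess.run(["ssh", ADAPTER_HOST, ADAPTER_RESTART_CMD], check=True)

def send_trade_message(message: str) -> str:
    """Send one trading message and return the raw response read back
    from the broker endpoint (a stand-in for the real messaging layer)."""
    with socket.create_connection(BROKER, timeout=10) as conn:
        conn.sendall(message.encode())
        return conn.recv(65536).decode()

def parse_response(raw: str) -> dict:
    """Parse 'key=value|key=value' style fields (hypothetical wire format)."""
    return dict(field.split("=", 1) for field in raw.strip().split("|") if "=" in field)

if __name__ == "__main__":
    restart_adapter()
    raw = send_trade_message("type=QUOTE_REQUEST|pair=EURUSD|qty=1000000")
    fields = parse_response(raw)
    assert fields.get("status") == "ACK", fields   # scenario-level check
```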

The result was much better than we anticipated. The execution time of the test scenarios for one adapter, which earlier took two days, was reduced to thirty minutes across both live and simulator environments, which was phenomenal for the client.

STAG developed a test suite to automate tests for every adapter at the API level, thus bringing down system testing effort by 40%.

Smart Test Automation to check product functionality cuts test execution time enabling faster market release

STAG Software was working on a dashboard product aimed at the mobile telecommunications industry. It was being developed on the LAMP platform, a solution stack of free, open-source software comprising Linux (operating system), Apache HTTP Server, MySQL (database software), and Perl, PHP or Python.

The major user interface (UI) component of the product, the management UI, provided facilities to configure key components, configure handsets, manage users (create, modify and delete), upload audio/video clips for video-on-demand and live viewing, pin channels for streaming, display the status of streaming servers, streaming sessions and assets, and generate reports for asset inventory and streaming activity.

The scope of the project and range of features dictated that the project would not only be development intensive, but post-development there would also be an equally intensive testing and debugging stage.

STAG automated the execution of a number of product feature test cases.

400 FUNCTIONALITY TEST CASES AUTOMATED

As some of the product features reached stability, STAG automated the execution of their test cases. Validation of UI-based features was automated using IBM Rational Functional Tester (RFT). The non-UI, server-side features and the validation of the product installation process were automated using Perl.

RFT enabled the automation of 400 functionality test cases out of a total of 600 test cases for the management UI. A data-driven framework was developed with the ability to take input data for test cases from an Excel sheet. The 400+ test cases were managed by developing a catalog of around 40 reusable library functions and 22 main test scripts. The same test scripts could be executed on multiple browsers, i.e. Internet Explorer and Mozilla Firefox, which also enabled considerable time and effort savings. Moreover, some of the libraries developed could be reused as project assets.
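The data-driven idea itself is tool-agnostic. As a hypothetical sketch (written in Python rather than in RFT/Java, with invented column names and library functions) of how a spreadsheet row drives a reusable library function:

```python
# Hypothetical sketch of a data-driven test framework: test data lives in a
# spreadsheet, each row names a reusable library function plus its inputs and
# expected outcome. Illustrative only; the real framework used IBM RFT.

import csv

# Stand-ins for the reusable library functions described above
def login(username, password):
    return username == "admin" and password == "secret"

def create_user(username, role):
    return bool(username) and role in {"viewer", "operator"}

LIBRARY = {"login": login, "create_user": create_user}

def run_sheet(path):
    """Each row: action, arg1, arg2, expected ('pass'/'fail')."""
    results = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            func = LIBRARY[row["action"]]
            outcome = func(row["arg1"], row["arg2"])
            expected = row["expected"] == "pass"
            results.append((row["action"], outcome == expected))
    return results

# Example sheet contents (CSV stands in for the Excel file):
# action,arg1,arg2,expected
# login,admin,secret,pass
# login,admin,wrong,fail
# create_user,alice,viewer,pass
```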

FASTER TIME-TO-MARKET, COST SAVINGS

Benefits of automating the test cases were:

  • Test execution effort was brought down from 17 person/machine hours to 7 machine hours
  • 42 person-days of effort were taken to design, develop and test the scripts, which was considerably shorter than anticipated
  • The testing team could focus more on other components/test cases, where manual intervention was essential
  • Cost savings
  • Faster time-to-market

——————
This case study was published in IBM’s “The Great Mind Challenge for Business, Vol 2, 2011”. The book recognizes visionary clients who have successfully implemented IBM software (RFT) solutions to create exceptional business value.

Setting a new paradigm in Software Testing

Below is the article that was published in the October 2011 online edition of The SmartTechie.

—–
Although software testing has undergone significant progress in recent years, organizations have been demanding something new. Global market pressures are pushing organizations to get more for less. Innovation is not just for the principals, and they expect the same from their QA partners too. “The typical activity-based testing model has overworked itself too long. What the industry needed was a scientific approach” explains T Ashok, CEO of STAG Software, a boutique test engineering company headquartered in Bangalore. STAG is the pioneer of a scientific approach to testing – Hypothesis-Based Immersive Session Testing (HyBIST).

“Today, there is severe pressure on the captives to cut costs, and over the years HyBIST has proven itself to significantly cut costs,” says Ashok. In addition to standard offerings like outsourced QA, functional test automation, performance assessment, etc., STAG works with various organizations to deploy HyBIST. “STAG engages with its clients and restructures the entire approach of testing. The fundamental change has to come from the way a QA professional thinks and approaches a problem,” explains Ashok. While traditionally a process is put together and people are forced to comply with it, STAG’s approach differs. It leverages the organization’s available intellect first, before installing the process. A consultant typically engages with the client over a period of three to six months in which the testing team is made to unlearn old techniques and start looking at problems with a logical bent of mind, and only then is the process set. The company has also licensed its methodology to system integrators and service-oriented companies.

Apart from deploying HyBIST in organizations, this year STAG is looking forward to releasing the HyBIST toolkit that has been under development over the last couple of years. The tool acts as a coach for the test professional and provides ‘guidance’ to test the right way.

Ashok feels that there is a much larger picture which needs addressing: “It is not just the IT companies that need a change in approach, but also the way colleges teach software testing to students”. In order to make sure students do not fall into the old routine, the company has started several programs through its newly formed CleanSoft Academy. CleanSoft Academy is also actively seeking to increase its network amongst educational institutions to make software test engineering part of their academic programs. CleanSoft Academy is working to close the growing gap between industry requirements and the available pool of skills. Instead of generic programs, four streams of test professional programs aligned to market requirements are available as choices for specialization.

HyBIST is a goal-centered methodology wherein the goal of software cleanliness is set up, the potential defect types that can impede the cleanliness criteria are identified, and then activities are performed to ensure purposeful testing that is indeed effective and efficient. HyBIST has been applied by STAG in various domains like Mobile, Healthcare, ERP/SCM, Media, eLearning and more over the last ten years, in various process models including Agile. This has resulted in lowered defect escapes (up to 10x lower), increased test coverage (at least 3x), better RoI on automation, and lower support costs (by 30 percent), with no increase in effort, time or cost.
—–

You can also read the online version here or download a PDF copy of the article.

HyBIST enables agility in understanding

A Fortune 100 healthcare company building applications for the next generation of body scanners uses many tools, including operating systems, compilers, web servers, diagnostic tools, editors, SDKs, databases, networking tools, browsers, device drivers, project management tools and development libraries. The healthcare domain means compliance with various government regulations, including those of the FDA. One such compliance requirement states that every tool used in production should be validated for ‘fitness of use’. This meant as many as 30 tools. How could one possibly test the entire range of applications before they are used? And considering the diverse range of applications, how could one team do it all?

STAG was the chosen partner not because we had expertise in healthcare applications, but because HyBIST enables test teams to turn around rapidly. For this job, STAG put together a team with sound knowledge of HyBIST.

The team relied on one of the most important stages of the six-staged HyBIST, “Understand Expectations”: a scientific approach to the act of understanding intentions or expectations, by identifying the key elements in a requirement/specification and setting up a rapid personal process, powered by scientific concepts, to ensure that we quickly understand the intentions and identify missing information. We look at each requirement, partition it into functional and non-functional aspects, and probe into the key attributes to be satisfied for that requirement. We use a core concept, Landscaping, that enables us to understand the marketplace, end users, business flows, architecture and other attributes and information elements.

Once a tool is identified, the team gathers more information from the public domain. This ensures the demo from the customer (of around 45 minutes) is easily absorbed. During the demo, the customer also shares the key features they intend to use. This information eventually morphs into requirements. The team then explores the application for around 2 days. During this period they come up with a list of good questions, clarify the missing elements and understand the intended behavior. Thus the effort spent to understand and learn the application is as little as 16 hours.

“Never look down” – not the best suggestion for a startup

A talent management company delivering end-to-end learning solutions was on a rapid growth path. The customer base was growing, and they catered to every possible segment. With international awards and mentions in every possible listing, it was dream growth. Each customer was special and of high priority. The sales team filled order books enough to keep engineering busy with customization. Within a short period, it became increasingly difficult to meet schedules, and then instances of customers reporting defects started coming in. The management smartly decided to act on the signs before things got out of hand. It is wise to check whether the rest of the team is keeping up with you when you are climbing high.

After a detailed analysis we put down a list of things that needed attention: with no formal QA practice in place, a makeshift testing team of a few developers and product managers assessed the applications before they were released to customers; requirement documents for their products did not exist; and there was no tracking of defects, which eventually resulted in delayed releases to clients.

The team applying HyBIST hypothesized what could possibly go wrong in the product (applying the HyBIST core concepts of ‘Negative Thinking’ and the ‘EFF model’) and staged these over multiple quality levels. The test scenarios and test cases designed were unique to each of the quality levels formulated, as the focus on the defects to be detected at each level is unique and different (the HyBIST core concept of the Box model was applied to understand the various behaviors of each requirement/feature/sub-feature and hence derive the test scenarios for each feature/sub-feature). With close support from the management, we put together a net so tight that no defects slip through.

A clear mapping of the requirements, potential defects, test scenarios and test cases was done after completing the test design activity, to prove the adequacy of the test cases.
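As a rough illustration of what such a mapping looks like (the requirement, defect types and IDs below are invented for illustration, not taken from the engagement), the adequacy argument is essentially a traceability chain from requirement through potential defects and scenarios to test cases:

```python
# Hypothetical sketch of a requirement -> potential defects -> scenarios ->
# test cases mapping, used to argue the adequacy of the test design.
# All names and IDs are invented for illustration.

TRACEABILITY = {
    "REQ-12 course enrolment": {
        "potential_defects": {
            "duplicate enrolment accepted": ["SC-12.1"],
            "enrolment allowed past course capacity": ["SC-12.2"],
        },
        "scenarios": {
            "SC-12.1": ["TC-101", "TC-102"],
            "SC-12.2": ["TC-103"],
        },
    },
}

def uncovered(mapping):
    """Report potential defects that no scenario (and hence no test case) covers."""
    gaps = []
    for req, detail in mapping.items():
        for defect, scenario_ids in detail["potential_defects"].items():
            has_cases = any(detail["scenarios"].get(s) for s in scenario_ids)
            if not has_cases:
                gaps.append((req, defect))
    return gaps

print(uncovered(TRACEABILITY))   # empty list => every hypothesized defect is covered
```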

The robust test design ensured the product quality. The percentage of high-priority defects was significantly high (65%), and these were detected in the earlier test cycles. The test scenarios and test cases were adequate, as defect escapes were brought down from 25% to 2% and the regression test cycles were reduced from 30 to 12. More importantly, the schedule variance dropped to normalcy.