In the era of digital transformation and ever-increasing customer expectations, companies are continuously re-evaluating their digital strategy to stay competitive in the marketplace.
Adopting Agile and DevOps to improve efficiency, agility, customer experience, and profitability has become the need of the hour for companies that want to remain relevant and thrive in current market dynamics.
Faster, faster, and faster is the mantra of digital strategy, which translates into faster development, faster deployment, and faster delivery. This means shorter testing cycles that must still cover all the risk areas.
The existing ways of assuring product quality are no longer adequate today, when software applications are expected to be available anywhere, on any device, on every screen size, on every browser, and still provide an excellent experience to their users.
Sharpen Digital Assurance further with AI
We all know that finding and fixing a defect in software comes at a cost we all want to minimize. Worse, if defects remain undetected and pass into production, they can severely impact project delivery cost, not to mention dent the brand image and drive customer attrition. The problem at hand is to optimize the testing process so that risk mitigation is balanced against the cost, effort, and timeline incurred.
The software quality assurance domain has seen tremendous improvements in the last decade, such as test automation, shift-left, and lean testing. At the same time, application complexity, the range of supported devices, and the speed of delivery have also increased manifold, widening the gap between the current state and the target state of software quality assurance.
The key approach here is the use of AI/ML in software testing: an AI/ML algorithm trains a model to predict the areas of maximum risk, so that effort is optimally aligned to achieve maximum ROI.
Training the AI/ML model is itself tricky, as it needs a large volume of data, and any error in the data, if not identified in time, is amplified in the AI's output, doing more harm than good. Careful calibration of the model and the availability of correct test data are essential to enable AI in the testing process.
In the target ("to be") state, the model provides a probability of defects for each area of the code, so automation testing (static and dynamic) can focus on specific areas and achieve results in a shorter time.
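As a minimal sketch of that target state, the snippet below ranks code areas by a blended risk score so test effort goes to the riskiest modules first. The module names, defect counts, churn figures, and weights are all hypothetical illustrations, not data from any real project or a prescribed formula.

```python
# Hypothetical sketch: rank code areas by defect risk so test effort
# targets the riskiest modules first. All names and numbers are illustrative.

def risk_score(defects, lines_changed, weight_defects=0.7, weight_churn=0.3):
    """Blend historical defect count and recent code churn into one score."""
    return weight_defects * defects + weight_churn * lines_changed / 100

# Made-up history: defects found in the last releases, lines changed recently.
modules = {
    "payment":     {"defects": 14, "churn": 1200},
    "search":      {"defects": 3,  "churn": 300},
    "reservation": {"defects": 9,  "churn": 800},
}

# Highest-risk module first; automation suites would run in this order.
ranked = sorted(
    modules,
    key=lambda m: risk_score(modules[m]["defects"], modules[m]["churn"]),
    reverse=True,
)
print(ranked)
```

In practice the weights would come from the trained model rather than being fixed by hand; the point is only that a per-module risk estimate directly yields a test prioritization order.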
Why the Digital Assurance function is best suited for AI modelling
Everyone is fascinated by the results promised by Artificial Intelligence (AI) and there is a lot of buzz in the media too. Let’s go a bit deeper and take a look at the four fundamental elements of AI:
Categorization: Categorization involves creating metrics specific to the problem, for example:
- There is a huge rework cost for defect identification, fixing, and revalidation after deployment to production. Example metric: rework cost is 20% of overall project delivery cost.
- Customers find it difficult to use the application and to locate the right section in the browser / WebApp, driving them to a competitor's product. Example metric: customer satisfaction surveys average 3.1 on a scale of 1 to 5.
- The organization's delivery cycle is 4 weeks, while several companies in the same domain/market have adopted a two-week delivery cycle, giving them the advantage of delivering to market faster. Example metric: the SDLC cycle takes 4 weeks from planning to production deployment.
Quality Assurance: Quality Assurance provides the data needed to build the foundations of AI/ML applications, such as historical defect data categorized by module, impacted code area, release, developer, and type of issue. QA also provides details on test cases executed/passed/failed, first-pass rate, etc. This helps join the dots and create problem statements.
Classification: Once a problem is categorized into various areas, the next step is to identify classifiers for each category to direct the user toward analysis and conclusions. For example, in the airline travel domain, if the identified problem has to do with making a booking, the team needs to start classifying the possible causes: Web Application, Mobile App, Authentication, Authorization, Calendar, Pricing, Payment, Reservation, and so on.
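The classification step above can be sketched as a simple keyword-based triage routine that maps a booking-failure report to candidate cause areas. The cause areas follow the airline example in the text, but the keyword lists and the report string are hypothetical; a real system would use a trained text classifier rather than hand-picked keywords.

```python
# Illustrative sketch: route a defect report to candidate cause areas
# using keyword matching. Keywords and the sample report are made up.

CAUSE_KEYWORDS = {
    "Authentication": ["login", "password", "sign in"],
    "Payment":        ["card", "charge", "payment", "declined"],
    "Calendar":       ["date", "calendar", "departure"],
    "Pricing":        ["fare", "price", "total"],
}

def classify(report):
    """Return every cause area whose keywords appear in the report text."""
    text = report.lower()
    return [area for area, words in CAUSE_KEYWORDS.items()
            if any(w in text for w in words)]

print(classify("Card was declined when paying for the selected departure date"))
```

Even this naive classifier shows the value of the step: once reports carry cause-area labels, defect counts per area become training data for the ML stage that follows.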
Machine Learning: Now that the problem is categorized and classified in domain-specific terms, the team can start feeding this data to machine learning. There are various algorithms and techniques, broadly divided into supervised and unsupervised learning; supervised learning with neural networks is becoming popular. A few other applications of machine learning are feature discovery, event correlation, and time-series anomaly detection.
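One of the applications named above, time-series anomaly detection, can be illustrated with a minimal sketch: flag any day whose test-failure count deviates sharply from the mean using a z-score. The data series and the threshold are assumptions for illustration; production systems would use more robust methods.

```python
# Minimal sketch of time-series anomaly detection on QA metrics:
# flag points whose z-score exceeds a threshold. Data is made up.
import statistics

def anomalies(series, threshold=2.0):
    """Return indices of points deviating > threshold standard deviations."""
    mean = statistics.mean(series)
    stdev = statistics.stdev(series)
    return [i for i, x in enumerate(series)
            if abs(x - mean) / stdev > threshold]

daily_failures = [4, 5, 3, 6, 4, 5, 31, 4, 5, 3]  # spike on day 7
print(anomalies(daily_failures))
```

A flagged day would prompt the team to investigate what changed in that build, closing the loop between QA telemetry and action.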
As the quality assurance function generates a huge volume of data, such as test cases, code reviews, defect data, and test execution results, it makes sense to use this data on code quality and testing outcomes to train the model.
Collaborative Filtering: Collaborative filtering is used to sort through large volumes of data once AI-based solutions are in place, helping turn data collection and analysis into meaningful insight or action.
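A hedged sketch of collaborative filtering on QA data: test suites that historically fail together are likely related, so a failure in one suggests also running its nearest neighbour. The suite names and failure-count matrix are hypothetical; cosine similarity is one common choice of similarity measure, not the only one.

```python
# Illustrative collaborative-filtering sketch: find the test suite whose
# historical failure pattern is most similar to a given suite. Data is made up.
import math

# Rows: test suites; columns: failure counts across four past releases.
failures = {
    "checkout": [3, 0, 4, 1],
    "payment":  [2, 0, 5, 1],
    "search":   [0, 3, 0, 2],
}

def cosine(a, b):
    """Cosine similarity between two failure-count vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(suite):
    """Suite whose failure pattern best matches the given suite's."""
    return max((s for s in failures if s != suite),
               key=lambda s: cosine(failures[suite], failures[s]))

print(nearest("checkout"))
```

Here a failure in the checkout suite would recommend re-running its most similar neighbour, turning raw failure history into an actionable test selection.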
Challenges of using AI/ML in Quality Assurance
The key requirements for an AI system are:
- Enormous sets of data
- Validity of testing data collected from various sources
- Integrity of data
The challenge lies in the availability of a large amount of verifiable data. Outliers in the test data should be handled while the data is massaged for training.
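The outlier handling mentioned above can be sketched with the interquartile-range (IQR) rule, which drops extreme readings before the data is fed to model training. The cycle-time figures and the "logging glitch" value are illustrative assumptions.

```python
# Minimal sketch, assuming test-cycle-time data with one bad reading:
# the IQR rule drops outliers before model training. Numbers are made up.
import statistics

def remove_outliers(data, k=1.5):
    """Keep values within k * IQR of the first and third quartiles."""
    q1, _, q3 = statistics.quantiles(data, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in data if lo <= x <= hi]

cycle_times = [12, 14, 13, 15, 14, 13, 95, 12]  # 95 is a logging glitch
print(remove_outliers(cycle_times))
```

The cost of skipping this step is exactly the amplification problem described earlier: a single corrupted reading skews the statistics the model learns from.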
Another challenge is the non-availability of a continuous stream of data, as most testing is done on a discrete basis. In such a scenario, it is difficult to find patterns across the QA data of different releases.
Since many attributes are involved in training ML models, and QA data from different types of industries and programs may have different outcomes, it is difficult to build a single ML model for all projects.
The future belongs to AI/ML, and we should be ready to embrace the changes. At the same time, we need to ensure the authenticity, integrity, and availability of correct data. If the above challenges are addressed, AI used alongside digital assurance can be very beneficial in delivering solutions, up to and including predictive threat modeling. This will provide benefits such as shorter delivery cycles, improved risk management, and cost optimization.
Yatender has 20+ years of experience in software test engineering. As the head of the Testing Practice at IGT Solutions, he is actively involved in innovations related to test engineering, covering new tools, technologies, and solutions, and enabling IGT's clients to achieve faster time to market, quality improvement, and optimization of developer effort across the SDLC. A result-oriented leader, he is proficient in delivering high customer value and achieving excellence in service delivery management, with proven skills in consulting and managing large, complex test programs. When away from work, he enjoys reading on a variety of topics and spending time with his kids.