Quality requirements in software development are constantly evolving. Companies need to control performance, security, and usability, run interface tests, and oversee code quality. The growing workload requires QA teams to change their testing methodologies. Our teams have been embracing trends like automation, exploratory testing, and Agile methodologies for a while now, and we are happy to see them become common practice.
In this guide, we’ll share our experience of implementing QA methodologies and optimizing testing processes.
What is Quality Assurance?
Quality assurance is the process of optimizing the development and testing processes, selecting approaches that minimize the number of errors in the end code and help detect bugs quickly. QA is a process-oriented field: the goal is not to clean bugs out of the product directly, but rather to make sure that the practices that caused those bugs won’t be repeated. The issues themselves are fixed by testers and quality control teams.
Are testing and QA the same?
In this guide, we will be talking a lot about testing methodologies together with QA. While QA and testing constantly cooperate, it’s worth remembering that they are not the same. QA teams define product requirements, set up deliverables, and automate processes. They are not actually launching the product or looking for bugs.
Testing and quality control, time-wise, follow QA. Quality Assurance begins early in the project, whereas testing is carried out during the development stage or even later.
What is a QA methodology?
Quality assurance methodologies describe actions that teams take to organize and optimize the process of QA planning, design, monitoring, and optimization. QA, software testing, and development methodologies are often the same – teams use similar approaches for all engineering processes.
We’ll be examining methodologies that are commonly used for software development, testing, as well as Quality Assurance. We already described some of those in our guide to software development methodologies, but this time, we’ll be looking at them specifically from the QA perspective.
Waterfall
Waterfall is a standard software development strategy – the project is broken down into stages, and the team moves to the next phase only after the previous one has been finalized. Once a stage is completed, the team members can’t come back to it anymore. Let’s see what position QA holds at different stages of a Waterfall project.
Stage 1 – Requirements
For QA: participating in creating functional and non-functional requirements, security assessment, and acceptance criteria creation.
The task of a Quality Assurance team is to describe the ideal version of the product. QA experts set the deliverables for QA engineers and developers, define the criteria for evaluating code quality, and find methods for its assessment. Requirements, created by a QA team, will be used throughout the entire testing and development process. Product requirements are specified in Software Requirements Specifications.
Stage 2 – Design
Although QA experts don’t participate in design directly, they are constantly kept in the loop. QA teams oversee the process and make sure that the product corresponds to initial requirements. It’s easy for designers to lose track of the bigger picture as they solve day-to-day issues – so QA keeps the team in check by always prioritizing the code quality.
Active participation of a QA team in the design process gives product owners assurance that the team always keeps their work in accordance with high standards.
Stage 3 – Implementation
The implementation stage consists of development and deployment. This is where the team creates the functionality of the product. At this stage, the role of a QA team includes overseeing the development process, detecting architectural issues, and fundamental problems with the development approach.
For instance, a QA expert might spot that a framework chosen by the development team will make it difficult to uphold certain performance requirements. If quality assurance experts catch these issues early on, the number of bugs and fixes drops significantly.
Stage 4 – Verification
In Waterfall, verification is the main stage for introducing QA test methodologies. QA engineers take over the product at the final stage of completion and check whether it corresponds to the set requirements. The team checks if testing and development processes were completed on time and delivered the promised results, and seeks ways for improvement.
Stage 5 – Maintenance
QA analyzes feedback from users and seeks a long-term way to remove development and testing issues. In this quality assurance methodology, teams report quality improvements and the number of fixed bugs and escaped defects, automate test cases, and provide feedback for testing and quality control specialists.
Agile
Agile is a QA and software development methodology that’s focused on maintaining a flexible process rather than separating it into defined stages. The team can return to previous tasks at any time if that improves product quality. Instead of batching changes into big increments, Agile prefers frequent updates. The product goes through multiple iterations, each of which is released to the end user.
Quality Assurance in Agile
Agile Quality Assurance prioritizes a user-driven approach and code quality over strict organization. Teams release iterations to users, collect feedback, and keep improving the product. If quality requirements change, the team can easily come back to previous stages, like design or planning. Any process modification is fine, as long as quality is the key motivation.
Planning: definition of ready and done
In Agile planning, QC and QA need to define the criteria by which a process is considered complete. Since Agile doesn’t impose strict time limits, it’s easy to get stuck on a single stage. Having clear acceptance criteria helps QA engineers avoid unnecessary perfectionism and balance quality against cost-efficiency.
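One way to keep acceptance criteria from drifting is to make them machine-checkable. The sketch below is purely illustrative – the criteria names and the 80% coverage threshold are assumptions, not a real project's checklist – but it shows the idea of a "definition of done" that a script, rather than a meeting, can verify.

```python
# Hypothetical sketch: a "definition of done" encoded as automated checks.
# All field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class StoryMetrics:
    test_coverage: float      # fraction of lines covered by tests
    open_defects: int         # known unresolved bugs
    docs_updated: bool        # product wiki/spec reflects the change

def is_done(m: StoryMetrics) -> bool:
    """A story counts as 'done' only when every criterion passes."""
    criteria = [
        m.test_coverage >= 0.80,  # assumed coverage threshold
        m.open_defects == 0,
        m.docs_updated,
    ]
    return all(criteria)

print(is_done(StoryMetrics(0.85, 0, True)))   # meets all criteria -> True
print(is_done(StoryMetrics(0.85, 2, True)))   # open defects block "done" -> False
```

Because the criteria are explicit, the team can tighten or relax them per iteration instead of debating "done" case by case.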
Design: documentation and communication
QA engineers should start documenting the processes and the results of all project tasks. They should cooperate with designers, making sure that everyone is on the same page. The key goal for this stage is creating a comprehensive product wiki that clearly states functionality, purpose, positioning, and intent.
Agile is focused on automation more than any other methodology. Even if automation takes a lot of time at the initial stages, Agile welcomes it, as it allows achieving better quality in the long run. One of the main principles is prioritizing quality and innovation over short-term goals.
If the Waterfall quality assurance methodology requires a lot of planning and estimation in the first stages, Agile emphasizes the importance of measurement throughout the project. The QA team doesn’t have strict constraints, which is why it can easily get sidetracked if there are no metrics that show the real picture. We made a guide to our favorite Agile metrics, so if you’re interested, take a look.
Iterations are always brought to the end users, and they provide feedback – short loops allow teams to pick the right direction for product growth. The work on quality assurance and improvement never stops – QA lies at the core of Agile.
Iterative development
Iterative development is a mix of Agile and Waterfall. On the one hand, the test methodology takes a flexible approach to revisiting product requirements and publishing small releases rather than big incremental changes. On the other hand, the project organization still follows the Waterfall logic.
Iterative development and QA
Teams break the project down into smaller chunks and check features one by one. They also release frequent updates to shorten the feedback loop from users. This allows checking whether the team’s expectations comply with the users’ needs and standards.
The stages and the approaches to structuring them are the same as in Waterfall – once you are done with one stage, you don’t come back to it. However, each stage is much shorter than in Waterfall, because the objectives are much smaller in scope.
Similarities with Waterfall
- A lot of planning at the initial stage;
- Once the stage is completed, the team doesn’t come back to it anymore;
- The team sets the time and budget constraints for each stage of the project early on;
- Reliability and predictability are the core values for iterative QA.
Similarities with Agile
- User-driven QA is the key: teams receive feedback from real users and fix bugs immediately;
- Small updates are better than big releases;
- QA testers cooperate with developers and designers through the entire project and are slightly more flexible than in regular Waterfall;
- Fewer risks than in Waterfall, as the team can notice small defects before they scale into larger problems.
Extreme programming
Extreme programming is a combination of Agile and iterative development. Like iterative development, this test methodology favors short release cycles – but where iterative development uses Waterfall for the organization, Extreme Programming prefers Agile instead. Although Extreme programming is focused on quality, efficiency is even more important. Developers start by building essential functionality and getting it into a working state.
Using extreme programming in QA
Extreme programming takes Agile practices and iterative testing and pushes them to the extreme:
- Code reviews are replaced by pair programming and collective QA;
- Testing is broken down into units;
- The team sets refactoring goals along with design and development objectives;
- The team schedules refactoring days and planned activities for going through customer feedback;
- The design and functionality are broken down into even simpler components.
All iterations are planned out in detail, but flexibility is still a priority. For us, extreme QA is a way to quickly deliver high-quality minimalist products. It’s an ideal methodology for MVP QA, refactoring, and redesigns.
Which QA Management Methodology to Choose
Below, we compare the examined QA test methodologies – their advantages, disadvantages, and use cases.
Software testing methodologies
Quality assurance heavily relies on testing and on cooperation with code quality teams. This is why we can’t talk about QA methodologies without discussing test principles, design techniques, and optimization methods. Let’s take a brief look at the most common types and approaches, all of which are essential for QA test methodologies.
Test design techniques
All testing in QA is divided into two categories: static and dynamic.
- Static testing is a set of activities for monitoring, inspecting, and reviewing software quality without executing the software. The team can look through documentation, team reports, and metrics to understand whether the workflow is successful. The software doesn’t have to be ready to run at this point.
- Dynamic testing requires a team to run the product and check its changing behavior. This testing checks dynamic variables like CPU consumption, response time, page load speed, and others. These factors depend not only on functionality but also on software compatibility with hardware, OSs, ability to work with low bandwidth, etc. Dynamic testing should provide a realistic picture of how the product will behave on users’ devices.
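To make the contrast concrete, here is a minimal, purely illustrative sketch: the static check inspects source code without running it (using Python's `ast` module), while the dynamic check executes the same code and observes its behavior and timing. The snippet of source code is invented for the example.

```python
# Illustrative contrast: static analysis vs. dynamic testing.
import ast
import time

source = '''
def greet(name):
    return "Hello, " + name
'''

# Static check: inspect the code WITHOUT executing it.
tree = ast.parse(source)
functions = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
print("functions found statically:", functions)

# Dynamic check: execute the code and observe runtime behavior.
namespace = {}
exec(source, namespace)
start = time.perf_counter()
result = namespace["greet"]("QA")
elapsed = time.perf_counter() - start
print("dynamic result:", result)
assert elapsed < 1.0  # a (very loose) response-time expectation
```

Real static tooling (linters, reviews, metrics dashboards) and real dynamic tooling (performance monitors, end-to-end runners) are far richer, but the division of labor is the same: one reads the artifact, the other runs it.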
The most common static software QA methodologies are audits, walkthroughs, analysis of metrics, technical reviews, management evaluation, inspection, and others. They describe processes, without focusing too much on the product.
Dynamic testing, in its turn, can be broken down into two parts:
- Whitebox testing: a type of testing that examines the code in depth. Testers should understand the motivations behind developers’ actions, know the details of the functionality and interface, and be able to interpret the entire codebase.
- Blackbox testing: the tester puts the application under test conditions and looks at it from a user perspective. This type of testing doesn’t require inside knowledge of the codebase. The scope is evaluating the performance from a third-party perspective.
QA experts usually perform black-box testing. They are interested in the end result rather than in the detailed inner structure. The most common types of dynamic testing are system, acceptance, unit, and integration testing.
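The difference is easiest to see with both kinds of tests aimed at the same function. The function below (a discount calculator) is invented for the example: the black-box checks only exercise its documented input/output contract, while the white-box checks are written with knowledge of the internal cap branch and deliberately probe its boundary.

```python
# Illustrative black-box vs. white-box tests for one hypothetical function.

def discounted_price(price: float, loyalty_years: int) -> float:
    # Internal rule: 5% off per loyalty year, capped at 20%.
    discount = min(loyalty_years * 0.05, 0.20)
    return round(price * (1 - discount), 2)

# Black-box tests: only the documented input/output contract is checked.
assert discounted_price(100.0, 0) == 100.0
assert discounted_price(100.0, 2) == 90.0

# White-box tests: written with knowledge of the internal cap branch,
# exercising the exact boundary where the 20% cap takes effect.
assert discounted_price(100.0, 4) == 80.0    # exactly at the cap
assert discounted_price(100.0, 10) == 80.0   # far past the cap
print("black-box and white-box checks passed")
```

Notice that a pure black-box tester might never think to try 4 vs. 10 loyalty years, because nothing in the external contract hints that those inputs share a code path.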
Unit testing vs integration testing
The testing and QA teams should be aware of how each individual feature of the software functions and see the bigger picture at the same time.
Unit testing is a modular approach, where the application is broken down into components (one or several features), and each one of those is assessed individually.
Integration testing: the process of assessing the software as a whole to see its overall performance. It’s typically performed after unit testing – the team combines all tested modules into a bigger picture.
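A toy example, with both functions invented for illustration: each unit is tested alone, and only then is the combined flow tested end to end. In practice these would live in a test runner like pytest; plain asserts keep the sketch self-contained.

```python
# Hypothetical sketch: unit tests per module, then one integration test.

def parse_amount(text: str) -> float:
    """Unit 1: convert user input like '19.99' to a number."""
    return float(text.strip())

def apply_tax(amount: float, rate: float = 0.10) -> float:
    """Unit 2: add tax (the 10% default rate is an assumption)."""
    return round(amount * (1 + rate), 2)

# Unit tests: each component is assessed individually.
assert parse_amount(" 19.99 ") == 19.99
assert apply_tax(100.0) == 110.0

# Integration test: the tested modules are combined into a bigger picture.
def checkout_total(text: str) -> float:
    return apply_tax(parse_amount(text))

assert checkout_total("19.99") == 21.99
print("unit and integration tests passed")
```

The integration test can still fail even when both unit tests pass – for example, if the modules disagree about rounding or input format – which is exactly why both levels are needed.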
Functional vs. Non-functional Testing
Functional testing is performed on every function of the software to check its compliance with product requirements. Usually, this is black-box testing – QA engineers are concerned with the outward behavior without referring to the internal architecture.
QA engineers define the conditions under which a feature is supposed to produce a particular output – usually the ones that occur during actual use. The priorities are to check the performance of the database, interface, and integrations, and the connection between client and server.
Non-functional testing assesses general characteristics that are relevant to the entire product. The main criteria are performance, usability, reliability, and efficiency. If functional tests check compliance with functional requirements, non-functional testing, obviously, relates to the non-functional ones.
Performance testing
QA teams perform functional and non-functional tests to ensure that the product complies with the SRS document, and then they optimize further development and testing processes.
The scope of performance testing is to assess the program’s speed, scalability, and reliability, and to detect technical errors while the software is running.
Performance testing types
- Stress testing: the application is examined under extreme conditions (high workloads, low bandwidth, high-frequency requests);
- Load testing: QA engineers check the product’s efficiency under different workloads, gradually increasing the number of active users and concurrent requests;
- Volume testing: checks how well the application is capable of processing large volumes of data and measures average output time;
- Endurance testing: the team checks if the performance of the software is consistently efficient over a prolonged period;
- Scalability testing: the team checks the system’s ability to automatically adapt to increased and decreased workload;
- Spike testing: the high number of users and requests are introduced not gradually, like in load testing, but in sudden spikes. The goal is to check how the software adapts to momentary extreme changes in working conditions.
To perform performance testing, the QA team starts by analyzing the test environment and defining realistic performance conditions and criteria. With these settings in mind, QA engineers and developers design test cases, prepare a suitable environment, run the tests, and analyze the results.
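The load-testing idea above can be sketched in a few lines. This is a toy, in-process model – the "endpoint" is a fake function with simulated latency, and the 0.5-second budget is an assumed criterion – whereas a real load test would target a deployed service with a dedicated tool such as JMeter or Locust. Still, it shows the core loop: gradually increase concurrent users and check response time against a predefined limit.

```python
# Toy load-test sketch: ramp up concurrency, measure average latency.
# The endpoint, latency, and budget are all illustrative assumptions.
import time
from concurrent.futures import ThreadPoolExecutor

def fake_endpoint(payload: int) -> int:
    time.sleep(0.01)          # simulated processing latency
    return payload * 2

def measure(concurrent_users: int) -> float:
    """Average response time under a given number of concurrent users."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        list(pool.map(fake_endpoint, range(concurrent_users)))
    return (time.perf_counter() - start) / concurrent_users

# Load testing: gradually increase the number of active users,
# failing fast if the assumed latency budget is exceeded.
for users in (1, 5, 10):
    avg = measure(users)
    assert avg < 0.5, f"latency budget exceeded at {users} users"
print("load ramp completed within the latency budget")
```

Swapping the gradual ramp `(1, 5, 10)` for a sudden jump like `(1, 100)` would turn this same skeleton into a crude spike test.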
Security testing is focused on detecting safety vulnerabilities, suspicious patterns, and checking software response to critical situations. The goal is to prevent data leaks, unauthorized access, malware penetration, etc.
Types of security testing
- Detecting vulnerabilities: the QA team sets up automated software that scans the functionality, connections, and databases, and detects problematic areas in the codebase and architecture;
- Penetration testing: QA engineers perform a controlled attack on the system to verify its responses;
- Risk evaluation: the QA team predicts what attacks the software is most vulnerable to, assesses the costs of resolving the problem, and builds prevention strategies;
- Network scanning: the software checks all the networks and connections in the system, making sure there are no “entrances” for cybercriminals;
- Ethical hacking: the team can hire a hacker who will explore the system with the company’s permission. This is what happened to Equifax: a research agency detected critical flaws in their database, but those went unresolved and resulted in a breach.
Security testers deliberately put the software in potentially dangerous but controlled situations to get an idea of how the solution might perform under an actual attack. QA experts are responsible for the safety of the functionality and attack prevention, as well as for setting up the infrastructure for handling a threat if it does penetrate the product.
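One small, self-contained slice of this practice is probing an input handler with hostile payloads, the way an automated vulnerability scan would. The handler and the payload list below are invented for the example; real scanners use far larger payload corpora and test the deployed system, not a single function.

```python
# Illustrative sketch: feeding hostile payloads to an input validator.
# The validator and payloads are assumptions made up for this example.
import re

def safe_username(value: str) -> bool:
    """Accept only conservative usernames (letters, digits, underscore)."""
    return bool(re.fullmatch(r"[A-Za-z0-9_]{3,20}", value))

hostile_payloads = [
    "admin'; DROP TABLE users;--",   # SQL-injection shape
    "<script>alert(1)</script>",     # XSS shape
    "../../etc/passwd",              # path-traversal shape
]

# Every hostile payload must be rejected. A pass here is one small piece
# of evidence, not proof that the system as a whole is secure.
for payload in hostile_payloads:
    assert not safe_username(payload), f"accepted hostile input: {payload}"

assert safe_username("alice_01")     # a legitimate input still passes
print("input-validation probes passed")
```

The allow-list approach shown here (match what is permitted, reject everything else) is generally considered safer than trying to blocklist every known attack string.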
Read about the difference between Quality Assurance and Quality Control and their product- or process-oriented scopes.
QA best practices: checklist
Choosing between software QA methodologies and making a list of must-run testing types is at the core of building a sustainable QA strategy. To execute your theoretical framework, you also need to understand your next actions. We prepared a practical checklist of best QA practices that should be introduced along with described methodologies and testing strategies.
- Flexible roles: the Quality Assurance team should participate in Quality Control activities and sometimes even help out in white-box testing. Although working directly with a codebase isn’t their core responsibility, having high transparency definitely improves the software.
- Clear release criteria: regardless of your methodology, be sure to update release criteria after every published iteration. Even if you are using less flexible methods, like Waterfall, revisiting product requirements is a must.
- Fixing is a priority: schedule days for code clean-up and tech debt removal. Implementing new techniques and automating test cases is fun, but you need to get short-term tasks done as well.
- Early automation: don’t put automation off – the earlier you start, the more time and effort you’ll save. When you think about your test cases with automation in mind, the transition from manual testing will come naturally.
- A dedicated security team: security testing should be handled by experts. Make sure that your QA team features experts whose main specialty is safety.
- Assigning responsibility for performance quality: the performance team should be dedicated solely to monitoring performance issues. It’s better to have at least a few QA experts whose only task is overseeing speed, scalability, and stability.
- Short feedback loops: analyze the work performed by testing and quality control teams as fast as possible and communicate quickly. Obtain feedback from users and integrate their suggestions into your plans for the next iterations.
An efficient QA team is transparent, flexible, and collaborative. The goal of Quality Assurance is to help developers become more efficient, taking full responsibility for process audit, analysis, and optimization. QA experts should free testers and developers from administrative work – so they can focus on improving code quality.
Adopting QA methodologies takes time and thoughtful planning. It’s a group effort – all team members should be on board with the changes, although management always has to take the lead. The first step to optimizing your quality assurance process is being open-minded about changing your workflow and revisiting test design approaches.
Start by researching and evaluating your current situation – this will give you insights on where to go next. The next step is to consult experts who already have well-established methods that work for them. We can help you with this one – you can contact our team and onboard experienced QA engineers. We will gladly share our best practices, implement methodologies in your team, and deliver a high-quality product. You’ll be able to take pointers from our methods and later implement those methodologies independently.