A PRO tale: diving into the QA realm
Hi everyone. My name is Victor, and I’m a QA Engineer at EBS-Integrator. If this ain’t telling you much, let me have another go at it. No QA role is the same.
Some professionals have a very deep technical understanding. They spot bugs and errors at a glance and are ready to step into a developer position when needed – true masters of their craft.
Others have a business-oriented mind. They can identify process errors that result in “conceptual bugs” when the code seems quite clean. After about 5 years of boiling this QA stew, and a few more in development, I check both of those boxes. There’s also enough scar tissue to prove this isn’t a smooth journey.
Reaching this milestone requires a deep understanding of the roles, bells, whistles and anatomy of any project.
You need to develop a palate for visions and the mechanics that turn them into digital products. All of this, in the name of productive feedback that results in:
a) service-related optimizations and b) viable vision corrections. Preferably before you’re deploying into production.
Today I will assault our blog with a very specific purpose:
building a case for QA engineering and unveiling inner workings of Quality Assurance (QA).
I often find myself approached by colleagues on coffee breaks, quizzing me about what QA engineers do. There is a common misconception that they are “second class” developers or worse: a frivolous perk for software engineers. No wonder diligent QA testers are somewhat of a “Mr Cellophane” to everyone but programmers.
At some level, they’re quite like DevOps or System Administrator roles: you only remember about them when the sewage hits the venting pipes.
Hence, it’s quite easy to point fingers towards a QA engineer when your production goes down, or things catch fire. To put it bluntly: QA victories are but a silent whimper, while even the tiniest mistake is, in most cases, a nuke-launching matter.
We don’t even get a fancy title like “Data Scientist“, “UI/UX Designer“, “Vision Executive Guru”, or even “Senior External Relations Consultant”. No wonder people think we’re nothing but “Q” & “A” button pushers.
The industry hasn’t been kind to us either: most of the time, testing positions are entry-level jobs for people working in IT. This is quite a big “no-no”. Companies that practice this attract disposable talent.
These juniors aspire to become high-end developers via chewing bubble gum, but the quality damage can be catastrophic.
Big QA “no-no”-s and their consequences
Two things are bound to happen here:
- you get an ever-lasting dreamer who cannot reach their potential, stuck at the “junior” level for about 4 years; or
- this ever-lasting dreamer gets promoted despite their “progress”. The result? Low service performance and a great deal of unjustified, preventable quality debt (not related to technical debt).
This means that most testing jobs tend to hire entry-level people who haven’t “burned out” yet. Unfortunately, this reinforces the “button-pushers” stereotype.
Of course, in a few, I’d even dare to say, unique cases, that “snowflake” resource might get somewhere. Nonetheless, this is no excuse for recruiting personnel with few or no technical skills for QA positions. So please, please, please – stop doing all the above, before each piece of web-enabled software ends up being nothing but an Edsel engine.
Now, since I’m done venting, let’s focus on the most important matter of this post: the main value of a QA resource in a development cycle. So, should I just say “we test software” or “we ensure service/product quality” and call it a day!? (editor’s gun clicks, pointing at my cerebral cortex: no you shouldn’t!) – that would be quite like saying an Architect draws lines or a Doctor wears a stethoscope – plain, bland and technically inaccurate. I mean, you can’t see the entire picture without going back a few steps.
So – what is QA?
To frame this in a more common language, let’s flex your imagination muscles for just a bit. Think of any service provider as a government of sorts. Project Management divisions act as a legislative branch, drafting laws and procedures that account for constraints and mediation.
The IT provisioning divisions would be similar to an Executive Office. They carry out their day-to-day labour according to legislative instructions. Of course, that happens with some wiggle room on specific decision making. Think of finding the best-fit stack for a particular project and the best approach to delivering a service or product.
In this analogy, Business Development and Finance divisions ensure billing & basic issue resolution. They also carry out satisfaction assessments. In essence, they make sure the executive office satisfies their constituents and no over-billing takes place.
Consequently, QA would fill the wide shoes of a Court. This division will make sure all those legislative instructions take their course. “The court” will also be keeping an eye on how efficient the executive office is and what solution rolls out to you aka “the people”.
As you might’ve realized, testers are those who ensure any piece of software is:
“Of the people, for the people, by the people” – ticking each “constitutional” box (aka their test plans).
“The Court” has the power of issuing a “motion to dismiss” in favour of the people. They would block a release if the quality standards do not fit initial requirements.
If an issue gets reported, “the court” is the first in the guillotine line. Hence the requirement of possessing higher domain & technical knowledge. These judges (in ideal circumstances) tend to know the overall functionality of the product. It is only fair, especially when some citizens aren’t that familiar with “the law”.
By analogy with “the law”, there are many types of “legislative issues” (bugs) and monitoring mechanics (test types).
Let’s break those down from the QA’s perspective and check on the issues resolved by this “judicial” branch:
- Business logic issues (when something isn’t right according to business requirements – “the law” got broken);
- Security-related bugs (The code is vulnerable to some security exploits. Somewhat like certain espionage acts of warfare.);
- Regression Issues (some code updates caused existing features to break – like executive orders that defy “the law”. In recent years alone, there were quite a few that the Supreme Court ruled against – so you get the idea);
- Performance Issues (The code is slow or some actions execute extra functions. Think of bureaucratic processes that cause public access congestion.);
- Accessibility issues (The code doesn’t meet the ARIA spec for accessibility. Think of classifying public information when “the word of law” requires open access to it.);
- UI bugs (The user interface doesn’t meet the design specification. Similar to not releasing any tax returns when “the law” requires it, obstructing public access to a politician’s record.);
- Integration issues (Two or more components don’t work together as expected. Like executive orders that break public service delivery).
As with running a functioning country, maintaining order doesn’t happen by default. To perform any action, you need to run a process first. Here, we’ll have to drift a bit away from our analogy and get back to a more formal display of our investigative mechanics.
THE TOURNESOL STRIP
Ensuring QA in a CI/CD project (or even a traditional one) is not defined as “a phase” of product development. On the contrary, QA is a continuous process following each development cycle – you can’t do it at the end. It is somewhat like cooking a slow-boiling stew.
When adding to it, you must taste each of the ingredients and make sure they are fresh. At the same time, you’ll have to stir carefully to check on consistency, and taste it at various times to understand when it’s ready. Hence, it is safe to assume that quality activities address two types of problems: revealing bugs and assessing the “readiness” of the product.
Sometimes, identifying and removing faults throughout the development process is not enough. A “stellar performance” requires extra measures to eradicate persistent faults post-delivery. Boy I love those… Almost as much as running mid-development assessments.
This is what makes QA processes original, creative and of course, challenging. Needless to say: if you’re not a challenge-facing persona, this career path is simply not for you.
The entire QA journey is a collection of tasks, carefully arranged in one’s toolbox waiting for the right time. Some focus on understanding the business purpose and ways to optimize it. Others focus on disaster scenarios. There are even approaches designed to investigate misuse. In general though, QA assessments split “religiously” according to their goal, the type of product, the preferred software delivery cycle, etc.
Just to give you an outline, here’s a high-level overview, that divides these practices into “tests”:
QA – Unit tests
Unit tests take place at a very low level, close to the source of your application. They consist of testing individual units, components, modules and used software methods. Unit tests are in general quite cheap to automate and can run as a default. These “strips” are usually written and run by developers locally, before integration.
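To make the “very low level, close to the source” part concrete, here’s a minimal sketch of a unit test in Python, written in pytest style. The `apply_discount` function and its rules are hypothetical, purely for illustration:

```python
# A minimal unit-test sketch (pytest conventions). `apply_discount` is a
# hypothetical function, not taken from any real project.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_happy_path():
    # One unit, one behaviour: no database, no network, no UI involved.
    assert apply_discount(100.0, 25) == 75.0

def test_apply_discount_rejects_bad_input():
    # Unit tests also pin down how the unit fails, not just how it succeeds.
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")
```

Cheap to write, milliseconds to run – which is exactly why developers can run the whole suite locally before every integration.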
QA – Integration tests
These are quite similar to a preschool teacher that makes sure children play together. They check if different modules, services, or microservices used by your application “get along”.
For instance, this practice could focus on verifying the interaction with the database or make sure that microservices work together as expected and are “in perfect health”. Of course, as any nanny during this pandemic, integration tests are more expensive to run and require interactive monitoring in a highly-active ecosystem.
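A sketch of the database-interaction case mentioned above: instead of mocking the storage layer, the test runs against a real (in-memory) SQLite engine to verify the components “get along”. The `users` table and the two functions are illustrative, not from a real system:

```python
import sqlite3

# Integration-test sketch: exercise data-access code against a real
# (in-memory) SQLite database. Schema and functions are illustrative.

def save_user(conn: sqlite3.Connection, name: str) -> int:
    cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid

def load_user(conn: sqlite3.Connection, user_id: int) -> str:
    row = conn.execute(
        "SELECT name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return row[0]

def test_user_roundtrip():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    # The integration point under test: write through one component,
    # read back through another, against a real database engine.
    user_id = save_user(conn, "Victor")
    assert load_user(conn, user_id) == "Victor"
```

Notice what makes it an *integration* test: neither function is interesting alone – the assertion only holds if the pair and the database cooperate.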
QA – Functional tests
Functional tests rely on business requirements, referring to activities that focus on checking a specific action or function of the code, in the context of certain objectives and service strategy. The essence of these tests is usually based on “Business Requirement Documentation” or well documented technical requirements. Depending on the project delivery method, use cases or user stories could be used as well. Functional tests tend to answer the question of “can the user do this” or “does this particular feature work”.
A quick note here: quite often, people tend to confuse integration testing with functional testing. This is because both of them focus on monitoring component interaction.
The difference here is that an integration test may simply verify the mechanics of database queries, while functional tests focus on specific input-output values defined in the technical requirements (aka verifying mechanics and a specific value generated by a given query).
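To illustrate that distinction, here’s a functional-test sketch: it checks the *specific values* a business requirement demands, not merely that the mechanics run. The VAT rule below is a made-up requirement standing in for a BRD clause:

```python
# Functional-test sketch: assert exact input-output pairs dictated by
# (imaginary) business requirements. The 20% VAT rule is hypothetical.

def gross_price(net: float) -> float:
    """BRD-style rule (hypothetical): add 20% VAT, round to 2 decimals."""
    return round(net * 1.20, 2)

def test_gross_price_matches_requirements():
    # Values lifted straight from the (imaginary) BRD, not chosen by us.
    assert gross_price(100.00) == 120.00
    assert gross_price(9.99) == 11.99
```

An integration test would be content that the pricing component talks to its data source; the functional test above fails the moment the business rule itself drifts.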
QA – End-to-end tests

End-to-end testing replicates user behaviour within a complete application environment. This practice focuses on testing various user workflows.
The assessment can relate to simple aspects (like page load or page speed insights) as well as more complex mechanics (like authentication, authorization, processing a shopping cart or validating a transaction across multiple payment processors). In essential terms, these tests simulate activities a real user would perform within the app.
End-to-end tests are crucial. The only drawback: they might cost you a kidney, especially when maintenance and automation efforts kick in. As a rule of thumb, if you’re looking to cut down on QA here, set up a few key end-to-end tests and rely heavily on lower-level types of testing (like Unit and Integration testing) to ensure you have a quick way to identify what, when and where the sewage pipes crack.
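Real end-to-end suites drive a browser through the full stack with tools such as Selenium or Playwright. To keep things self-contained, here’s the shape of one in miniature: a toy `Shop` class stands in for the whole application, and the test walks a complete user journey through it:

```python
# E2E sketch in miniature: a toy application plus one test that walks a
# full user workflow (log in -> add to cart -> checkout). Everything here
# is illustrative; a real suite would drive the actual deployed app.

class Shop:
    def __init__(self):
        self.logged_in = False
        self.cart = []
        self.orders = []

    def log_in(self, user: str, password: str) -> bool:
        self.logged_in = (user == "victor" and password == "s3cret")
        return self.logged_in

    def add_to_cart(self, item: str) -> None:
        if not self.logged_in:
            raise PermissionError("log in first")
        self.cart.append(item)

    def checkout(self) -> list:
        order = list(self.cart)
        self.orders.append(order)
        self.cart.clear()
        return order

def test_full_purchase_workflow():
    # The point of E2E: each step only makes sense after the previous one,
    # exactly like a real user session.
    shop = Shop()
    assert shop.log_in("victor", "s3cret")
    shop.add_to_cart("rubber duck")
    order = shop.checkout()
    assert order == ["rubber duck"] and shop.cart == []
```

Note how one failing step (say, login) sinks the whole scenario – that’s both the power of E2E tests and the reason they’re expensive to diagnose without the lower-level tests underneath.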
QA – Acceptance testing
This practice is quite straightforward. Usually, it involves running several “Acceptance tests” that define when a specific business goal gets achieved, following a specific “Exit Criteria” within the Business Requirement Documentation (BRD).
Their purpose is to enforce “the definition of done” from a business perspective. Usually, to perform such testing, you need an up and running environment that focuses on replicating essential user behaviours, defined by the BRD, without accounting for unpredictable workflows a user could perform. In some instances, acceptance tests run to the extent of measuring system performance; to ensure objectives such as resource consumption per usage or traffic capacity, are met.
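One way to picture “enforcing the definition of done” is a release gate that maps each assertion to an Exit Criteria bullet. The criteria and thresholds below are invented for illustration, not taken from any real BRD:

```python
# Acceptance-test sketch: a go/no-go gate where each check mirrors an
# "Exit Criteria" item from an imaginary BRD. All names and thresholds
# here are hypothetical.

MEASURED_RESULTS = {
    "user_can_register": True,       # verified by a registration scenario
    "user_can_reset_password": True, # verified by a reset scenario
    "max_response_ms": 800,          # measured on the staging environment
}

def acceptance_gate(results: dict) -> bool:
    """Release is a 'go' only if every exit criterion is met."""
    return (results["user_can_register"]
            and results["user_can_reset_password"]
            and results["max_response_ms"] <= 1000)  # BRD ceiling: 1000 ms
```

The performance-flavoured criterion at the end reflects the point above: acceptance testing sometimes stretches into measuring resource or traffic objectives, as long as the BRD names them.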
QA – Performance testing
These practices focus on non-functional testing, monitoring system behaviour under a significant load. Such tests deliver a better understanding of an application’s reliability, availability and “force majeure” behaviour within highly unlikely usage scenarios, undocumented within the BRD as objectives. Some basic performance tests focus on observing response times when executing a high number of requests, or try to indicate the maximum system stretch within specific hardware limits.
Performance testing is a must for services with high conversion potential: services that are likely to grow their user base in a short timeframe, or data-processing pipelines that generate BigData analytics within environments built to collect IoT metrics and ingest “heavy” information flows. Despite their cost, these tests are crucial for disaster planning and fail-over strategies.
QA – Smoke testing
If you’ve ever plugged a 120V stereo into a 220V power outlet, you already have an idea. These practices focus on delivering the assurance that major features of your system are working as expected, at the speed of a switch flip. Smoke tests are quite common when a new build is released and business owners must decide whether to invest in lower-level investigations. In essence, they ensure major features work as expected post-build and deliver an overall idea of system stability after deploying a build.
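A smoke suite is little more than a short list of cheap checks over the critical paths, run right after a build. The checks below are placeholders (in a real suite they’d be an HTTP 200 probe, a scripted login, a trivial `SELECT 1`, and so on):

```python
# Smoke-test sketch: a handful of fast checks over critical paths.
# Each lambda stands in for a real probe against a deployed build.

SMOKE_CHECKS = {
    "homepage_renders": lambda: True,  # would be an HTTP 200 check
    "login_works": lambda: True,       # would be a scripted login
    "db_reachable": lambda: True,      # would be a trivial SELECT 1
}

def run_smoke_suite(checks=SMOKE_CHECKS) -> dict:
    """Return a pass/fail map; any failure means 'stop, don't dig deeper'."""
    return {name: bool(check()) for name, check in checks.items()}
```

The whole suite should finish in seconds – if any entry comes back `False`, there’s smoke, and deeper (more expensive) testing of that build is pointless until it’s fixed.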
QA – Security testing
To be fair, Security Testing deserves its own blog (venting and all) – I will elaborate more on this in another PRO instalment. To give you an idea, though: security testing focuses on negative inputs and whether these inputs are likely to create significant failures. This ensures basic risk management and helps avoid potential vulnerabilities of your system that hijackers could take advantage of, especially when the application’s scope is prone to multiple points of failure.
Security testing starts with risk assessment. Its main purpose is to validate that the system is exploit-proof against cyber-attacks or misuse that could result in data theft, denial of service or irreversible system damage. Security testing should never stop. When releasing software within a continuous development environment, changes are certain, new bugs arise and the risk of human error is always there. Not to mention that “Black Hats” never sleep – you either adapt, patch and mitigate vulnerabilities, or risk breaches and service disruption.
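The “negative inputs” idea can be sketched with a classic: feed a SQL-injection payload to a user lookup and assert that it achieves nothing. The schema and function are illustrative; the point is that the test encodes an attacker’s input and expects failure:

```python
import sqlite3

# Security-test sketch: assert that a textbook SQL-injection payload is
# treated as inert data. Schema and function are illustrative only.

def find_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: user input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchone()

def test_injection_payload_is_inert():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('victor')")
    # Negative input: an attacker trying to match every row in the table.
    assert find_user(conn, "' OR '1'='1") is None
    # Sanity check: legitimate lookups still work.
    assert find_user(conn, "victor") is not None
```

A suite of such tests is only the floor, of course – real security work layers on risk assessment, dependency scanning and penetration testing – but it shows how “expect the exploit to fail” becomes an automated, repeatable check.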
A QA’s Mic Drop
Finding bugs ain’t the main challenge – knowing where to look is. Now that we’ve cleared up the mystery behind QA responsibilities, I bet you’ll think twice before recruiting an “ever-lasting” junior tester with subpar business or technical understanding.
In a fast-paced physical and virtual world, continuous deployment and delivery is the norm, and heavily investing in QA is no luxury. Software companies that see testing as a low priority, or as a gimmick add-on, will learn a sour lesson. Let’s put it this way: if your project were a cuisine, releasing software without proper testing would be quite like cooking “Coq au Vin” in a rat-infested kitchen. Sooner or later, that health inspector will get you and boy – don’t even dream of a Michelin star.
Cooking aside, it’s high noon to end this post. If you’re into finding out more about testing, make sure to bookmark our blog.