Blog
A PRO tale: diving into the QA realm
Sep 21, 2020

Hi everyone. My name is Victor, and I’m a QA Engineer at EBS-Integrator. If this ain’t telling you much, let me have another go at it. No QA role is the same. Some professionals have a very deep technical understanding, spotting bugs and errors at a glance, ready to step into a developer position when needed.

Others have a business-oriented mind and can identify process errors that result in “conceptual bugs” even when the code seems quite clean. After about five years of boiling this QA stew, and a few more in development, I check both of those boxes and have enough scar tissue to prove this isn’t a smooth journey.

Reaching this milestone requires a deep understanding of the roles, bells, whistles and anatomy of any particular project. You need to develop a palate for visions and the mechanics that turn them into digital products. All of this, to provide productive feedback that results in service-related optimisations and viable vision corrections. Preferably before you deploy to production.

Today I will assault our blog with a very specific purpose: building a case for QA engineering and unveiling the inner workings of Quality Assurance (QA). Colleagues often approach me on coffee breaks to ask what QA engineers actually do. There is a common misconception that they are “second class” developers or, worse, a frivolous perk for software engineers. No wonder due-diligent QA testers are somewhat of a “Mr Cellophane” to everyone but programmers.

At some level, they’re quite similar to DevOps or System Administrator roles: you only remember them when the sewage hits the venting pipes. Hence, it’s quite easy to point fingers at a QA engineer when your production goes down or things simply catch fire. To put it bluntly: QA victories are but a silent whimper, while even the tiniest mistake is, in most cases, a nuke-launching matter.

We don’t even get a fancy title like “Data Scientist”, “UI/UX Designer”, “Vision Executive Guru”, or even “Senior External Relations Consultant” – no wonder people think we’re nothing but “Q” & “A” button pushers.

The industry hasn’t been kind to us either: most of the time, testing positions are treated as entry-level jobs for people working in IT. This is quite a big “no-no”, and companies that practise such an approach attract disposable talent that aspires to become high-end developers while chewing bubble gum.

Two things are bound to happen here: 

  1. you get an ever-lasting dreamer who cannot reach their potential, stuck at the “junior” level for about four years; or
  2. this ever-lasting dreamer gets promoted despite their “progress”, resulting in low service performance and a great deal of unjustified, preventable quality debt (not to be confused with technical debt).

This means that most testing jobs tend to get staffed with entry-level people who haven’t burned out yet, which unfortunately reinforces the “button-pushers” stereotype.

Of course, in a few – I’d even dare to say unique – cases, that “snowflake” resource might get somewhere; however, this is no excuse for recruiting personnel with few or no technical skills for QA positions. So please, please, please – stop doing all of the above, before each piece of web-enabled software ends up being nothing but an Edsel engine.

Now, since I’m done venting, let’s focus on the most important matter of this post: the main value of a QA resource in a development cycle. So, should I just say “we test software” or “we ensure service/product quality” and call it a day!? (editor’s gun clicks, pointing at my cerebral cortex: no you shouldn’t!) – that would be quite similar to saying an Architect draws lines or a Doctor wears a stethoscope – plain, bland and technically inaccurate. I mean, you can’t see the entire picture without going back a few steps.

So – what is QA?

To frame this in more common language, let’s flex your imagination muscles for just a bit. Think of any service provider as a government of sorts. Project Management divisions act as a legislative branch, drafting laws and procedures that account for constraints and mediation.

The IT provisioning divisions would be similar to an Executive Office that carries on its day-to-day labour according to legislative instructions, with some wiggle room on specific decision-making (aka finding the best-fit stack for a particular project and the best approach to delivering a service or product).

Business Development and Finance divisions ensure accurate billing, some basic issue resolution and public-opinion measurement efforts, making sure the executive office satisfies its constituents and no over-billing takes place. These are your Treasury and International Relations branches.

Consequently, QA would fill the wide shoes of a Court, making sure all those legislative instructions are followed, keeping an eye on how efficient the executive office is and on what solution is delivered to you, aka “the people”. As you might’ve realised, testers are those who ensure any piece of software is “of the people, by the people, for the people” – ticking each “constitutional” box (aka their test plans).

“The Court” has the power of issuing a “motion to dismiss” in favour of the people, blocking a release if the quality standards do not fit initial requirements or if that particular sprint is a swarming bug nest. 

If an issue is reported, “the court” is the first in the guillotine line – hence the requirement of possessing higher domain and technical knowledge. These judges (in ideal circumstances) tend to know the overall functionality of the product as well as, or in some cases even better than, the Product Owner (the people). It is only fair, especially when some citizens aren’t that familiar with “the law”. This configuration allows “executives” (aka developers) to focus on delivering outstanding modules and features for “the people”.

By analogy with “the law”, there are many types of “legislative issues”, or bugs, and of monitoring mechanics (test types).

Let’s break those down from the QA’s perspective and check what issues are handled by this “judicial” branch:

  • Business logic issues (when something isn’t right according to the business requirements – “the law” has been abused);
  • Security-related bugs (the code is vulnerable to security exploits – similar to certain espionage acts of warfare);
  • Regression issues (a code update breaks existing features – like executive orders that defy “the law”; in recent years alone, the Supreme Court has ruled against quite a few of them, so you get the idea);
  • Performance issues (the code is slow or some actions execute extra functions – quite similar to bureaucratic processes that cause public-access congestion);
  • Accessibility issues (the code doesn’t meet the ARIA spec for accessibility – similar to classifying public information when access to it is guaranteed by “the law”);
  • UI bugs (the user interface doesn’t meet the design specification – like not releasing tax returns when “the law” requires it, obstructing public access to a politician’s record);
  • Integration issues (two or more components don’t work together as expected – similar to executive orders that break public service delivery).

As in the case of running a prosperous country, maintaining order is not a default. To perform any action, you need to run a process first, and here we’ll have to drift a bit away from our analogy and get back to a more formal display of our investigative mechanics.

THE TOURNESOL STRIP

Ensuring QA for a CI/CD project (or even a traditional one) cannot be defined as “a phase” of product development. On the contrary, QA is a continuous process following each development cycle – you can’t just do it at the end. It is somewhat similar to cooking a slow-boiling stew: when adding to it, you must taste each of the ingredients and make sure they are fresh; stir carefully to check on consistency and taste it repeatedly at various times, to understand when it’s ready. Hence, it is safe to assume that quality activities address two types of problems: revealing bugs and assessing the “readiness” of the product.

Now, while identifying and removing faults throughout the development process will certainly help increase the quality of service, this might not be enough. For a “stellar performance”, additional measures are required to ensure the absence of persistent faults post-delivery; and boy – I love those as much as running mid-development assessments.

This is what makes QA processes original, creative and, of course, challenging. Needless to say: if you’re not a challenge-facing persona, this career path is simply not for you.

The entire QA journey is a collection of tasks, carefully arranged in one’s toolbox, waiting for the right time. Some are focused on understanding the business purpose and ways to optimise it, others focus on disaster scenarios, and there are even approaches designed to investigate misuse – but “religiously”, QA assessments are divided according to their goal, type of product, preferred software delivery cycle, etc. Just to give you an outline, here’s a high-level overview that divides these practices into “tests”:

Unit tests

Unit tests are performed at a very low level, close to the source of your application. They consist of testing individual units, components, modules and methods in isolation. Unit tests are in general quite cheap to automate and can run by default. These “strips” are usually written and run by developers locally, before integration.
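To make that concrete, here’s a minimal sketch in Python with pytest; `calculate_discount` is a hypothetical helper, not code from any real project:

```python
# A minimal unit-test sketch with pytest; `calculate_discount` is a
# hypothetical helper, invented purely for illustration.
import pytest

def calculate_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_applies_discount():
    assert calculate_discount(100.0, 25) == 75.0

def test_rejects_invalid_percent():
    with pytest.raises(ValueError):
        calculate_discount(100.0, 150)
```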

Integration tests

These are quite similar to a preschool teacher who makes sure the children play together. They check whether the different modules, services or microservices used by your application “get along”. For instance, this practice could focus on verifying the interaction with the database, or on making sure that microservices work together as expected and are “in perfect health”. Of course, like any nanny during this pandemic, integration tests are more expensive to run and require interactive monitoring in a highly active ecosystem.
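Here’s a rough sketch of the idea, using an in-memory SQLite database as a stand-in for the real one; `save_user` and `find_user` are hypothetical components:

```python
# An integration-test sketch: nothing is mocked – two components meet a
# real (if disposable, in-memory) database to prove they "get along".
import sqlite3

def save_user(conn, name):
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

def find_user(conn, name):
    row = conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchone()
    return row[0] if row else None

def test_save_and_find_user_roundtrip():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    save_user(conn, "victor")
    assert find_user(conn, "victor") == "victor"
```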

Functional tests

Functional tests rely on business requirements, referring to activities that focus on checking a specific action or function of the code, in the context of certain objectives and service strategy. The essence of these tests is usually based on the Business Requirement Documentation or on well-documented technical requirements. Depending on the project delivery method, use cases or user stories could be used as well. Functional tests tend to answer the questions “can the user do this?” or “does this particular feature work?”.
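For illustration, here’s a sketch of a functional test whose expected values come straight from an invented business requirement (“orders of 500 EUR and above ship free”):

```python
# A functional-test sketch: the expected values are dictated by a
# hypothetical business requirement, not by the code's internals.
def shipping_cost(order_total: float) -> float:
    """Hypothetical feature under test."""
    return 0.0 if order_total >= 500 else 25.0

def test_orders_at_threshold_ship_free():
    assert shipping_cost(500.0) == 0.0     # boundary value from the "BRD"

def test_orders_under_threshold_pay_flat_fee():
    assert shipping_cost(499.99) == 25.0
```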

A Software tester walks into a bar and then
orders a beer
orders 0 beers
orders 99999999999 beers
orders a lizard
orders -1 beers
orders a ueicbksjdhd. Everything is fine.
//after a while
First real customer walks in and asks where the bathroom is. The bar bursts into flames, killing everyone.

A quick note here: people quite often confuse integration testing with functional testing, because both focus on monitoring component interaction. The difference is that an integration test may simply verify the mechanics of a database query, while a functional test checks for the specific input-output values defined in the technical requirements – compare the two sketches above: one verifies the mechanics, the other pins down the exact values the “BRD” demands.

End-to-end tests

End-to-end testing replicates user behaviour within a complete application environment. This practice focuses on testing various user workflows. The assessment can relate to simple aspects (like page load or page speed insights) as well as more complex mechanics (like authentication, authorisation, processing a shopping cart or validating a transaction across multiple payment processors). In essence, these tests simulate the activities a real user would perform within the app.

End-to-end tests are crucial. The only drawback: they might cost you a kidney, especially when maintenance and automation efforts kick in. As a rule of thumb, if you’re looking to cut down on QA costs here, set up a few key end-to-end tests and rely heavily on lower-level types of testing (like unit and integration testing) to ensure you have a quick way to identify what, when and where sewage pipes crack.
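To give you a flavour, here’s a rough end-to-end sketch using Playwright’s Python API (one browser-automation option among many); the URL, selectors and credentials are all placeholders:

```python
# An end-to-end sketch driving a real browser; the URL, selectors and
# credentials below are placeholders, not a real service.
from playwright.sync_api import sync_playwright

def test_user_can_log_in():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/login")     # placeholder URL
        page.fill("#email", "qa@example.com")      # placeholder selectors
        page.fill("#password", "correct-horse")
        page.click("button[type=submit]")
        # A real user should land on the dashboard after logging in.
        assert "dashboard" in page.url
        browser.close()
```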

Acceptance testing

This practice is quite straightforward. Usually, it involves running several “acceptance tests” that define when a specific business goal is achieved, according to a specific “Exit Criteria” section within the Business Requirement Documentation (BRD).

Their purpose is to enforce “the definition of done” from a business perspective. Usually, to perform such testing, you need an up-and-running environment that focuses on replicating the essential user behaviours defined by the BRD, without accounting for every unpredictable workflow a user could perform. In some instances, acceptance tests extend to measuring system performance, to ensure objectives such as resource consumption per usage or traffic capacity are met.
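Here’s a sketch of how acceptance tests can map one-to-one to exit criteria; the environment URL, endpoints and criteria are all invented for illustration:

```python
# An acceptance-test sketch: each test mirrors one (hypothetical) "Exit
# Criteria" item from the BRD, so a green suite literally means "the
# definition of done is met". URL and endpoints are placeholders.
import requests

BASE = "https://staging.example.com"

def test_exit_criterion_1_catalogue_is_reachable():
    # BRD exit criterion: "the product catalogue is publicly reachable"
    assert requests.get(f"{BASE}/catalogue", timeout=10).status_code == 200

def test_exit_criterion_2_search_returns_results():
    # BRD exit criterion: "a search for an in-stock item returns matches"
    resp = requests.get(f"{BASE}/search", params={"q": "stew"}, timeout=10)
    assert resp.status_code == 200
    assert len(resp.json()) > 0
```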

Performance testing

These practices focus on non-functional testing, monitoring system behaviour under significant load. Such tests deliver a better understanding of an application’s reliability, availability and “force majeure” behaviour within highly unlikely usage scenarios, undocumented as objectives within the BRD. Some basic performance tests focus on observing response times when executing a high number of requests, or try to indicate the maximum system stretch within specific hardware limits.

Performance testing is a must for services with high conversion potential – specifically, for services that are quite likely to grow their user base in a short timeframe, or for data-processing practices that involve generating Big Data analytics within environments that collect IoT metrics and ingest “heavy” information flows. Despite their cost, these tests are crucial for disaster planning and fail-over strategies.
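As a taste, here’s a minimal load-test sketch using Locust; the endpoints, task weights and think times are placeholders:

```python
# A load-test sketch with Locust; endpoints and timings are placeholders.
# Run with:
#   locust -f load_sketch.py --host https://staging.example.com
from locust import HttpUser, task, between

class ShopVisitor(HttpUser):
    wait_time = between(1, 3)  # simulated "think time" between actions

    @task(3)  # browsing happens three times as often as searching
    def browse_catalogue(self):
        self.client.get("/catalogue")

    @task(1)
    def search(self):
        self.client.get("/search", params={"q": "stew"})
```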

Smoke testing

If you’ve ever plugged a 120V stereo into a 220V power outlet, you already have an idea. These practices deliver the assurance that the major features of your system are working as expected, at the speed of a switch flip. Smoke tests are quite common when a new build is released and business owners must decide whether to invest in lower-level investigations. In essence, they confirm that major features work post-build and give an overall idea of system stability after a deployment.
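A smoke suite can be as small as a handful of fast checks against the endpoints a build simply cannot ship without; this sketch assumes hypothetical /health and /login paths:

```python
# A smoke-test sketch: a few seconds-fast checks that gate deeper testing.
# Base URL and paths are placeholders, invented for illustration.
import requests

BASE = "https://staging.example.com"

def test_service_is_up():
    assert requests.get(f"{BASE}/health", timeout=5).status_code == 200

def test_login_page_renders():
    assert requests.get(f"{BASE}/login", timeout=5).status_code == 200
```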

Security Testing

To be fair, Security Testing deserves its own blog post (venting and all) – I will elaborate more on this in another PRO instalment. But to give you an idea: security testing is focused on negative inputs and on whether these inputs are likely to create significant failures. This is done to achieve basic risk management and to avoid potential vulnerabilities of your system that hijackers could take advantage of, especially when the application’s scope is prone to multiple points of failure.

Security testing starts with risk assessment and is performed to validate that the system cannot be exploited via cyber attacks or misuse that could result in data theft, denial of service or irreversible system damage. Security testing should never stop: when releasing software within a continuous development environment, changes are certain, new bugs arise and the risk of human error is always there. Not to mention that “BlackHats” never sleep – you either adapt, patch and mitigate vulnerabilities, or risk breaches and service disruption.
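In that “negative input” spirit, here’s a sketch that throws a few classic hostile payloads at an invented search endpoint and insists the service fails gracefully:

```python
# A security-test sketch: classic hostile payloads must be rejected
# cleanly, never crash the service or come back reflected raw. The
# endpoint and payloads are illustrative only.
import pytest
import requests

BASE = "https://staging.example.com"
HOSTILE_INPUTS = [
    "' OR '1'='1",                       # SQL-injection attempt
    "<script>alert('xss')</script>",     # XSS attempt
    "../../etc/passwd",                  # path-traversal attempt
]

@pytest.mark.parametrize("payload", HOSTILE_INPUTS)
def test_hostile_input_fails_gracefully(payload):
    resp = requests.get(f"{BASE}/search", params={"q": payload}, timeout=10)
    assert resp.status_code in (200, 400)  # never a 500: no crashes allowed
    assert payload not in resp.text        # never reflect the raw payload back
```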

A QA’s Mic Drop

Finding bugs ain’t the main challenge – knowing where to look is. Now that we’ve cleared up the mystery behind QA responsibilities, I bet you’ll think twice before recruiting an “ever-lasting” junior tester with subpar business or technical understanding.

In a fast-paced physical and virtual world, continuous deployment and delivery are the norm, and heavily investing in QA is no luxury. Software companies that see testing as a low priority, or as a gimmicky add-on, will learn a sour lesson. Let’s put it this way: if your projects could be compared to cuisine, releasing software without proper testing is quite similar to cooking “Coq au Vin” in a rat-infested kitchen – sooner or later that health inspector will get you, and boy – don’t even dream of a Michelin star.

Cooking aside, it’s high noon to end this post. If you’re into finding out more about testing, make sure to bookmark our blog.