When I started my life in the Indian software industry, it was a relatively uncrowded, easy affair to begin as a QE. Although I was born as a C++/MFC developer, I could see that QE folks were relatively few compared to devs. Some companies did not even believe that software testing was a separate, desirable skill in itself, but that is a different discussion topic.
However, as the industry grew, so did the QE fraternity. Now we are in 2022, when software testers are abundant in number, and maybe the DEV:TEST ratio is skewed. We frequently come across posts everywhere saying that manual testers are not recruited or desired as much as automation testers.
Software testing is not about pointing the mouse, clicking on the UI all day long, and filing bugs like "login name exceeds 25 characters". It is much more than that. It is about understanding the business use cases and complying with them. It is closer to the end customer.
Identifying test scenarios from a well-written product use case document is, in my opinion, the most underrated skill. That is what separates a great QE from an ordinary point-and-click QE. I believe this skill needs to be sharpened just as rigorously as devs or automation QEs sharpen their programming skills. Reading between the lines matters here too.
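As a sketch of what "reading between the lines" can look like, consider one hypothetical requirement line: "login name must not exceed 25 characters". The validator below is a toy stand-in for real product code, not anything from an actual product; the point is the list of scenarios a careful QE derives beyond the obvious one:

```python
# Hypothetical requirement: "login name must not exceed 25 characters".
MAX_LOGIN_LEN = 25

def is_valid_login(name: str) -> bool:
    """Toy validator standing in for the real product code under test."""
    return 0 < len(name) <= MAX_LOGIN_LEN and name.isalnum()

# One requirement line, read between the lines, yields several scenarios,
# not just the obvious "too long" bug:
scenarios = [
    ("a" * 25, True),     # boundary: exactly at the limit
    ("a" * 26, False),    # boundary: one character past the limit
    ("a", True),          # boundary: minimal valid input
    ("", False),          # implicit: is an empty login meaningful?
    ("user name", False), # implicit: what about whitespace?
]

for name, expected in scenarios:
    assert is_valid_login(name) == expected, repr(name)
print("all scenarios pass")
```

The requirement said nothing about empty strings or whitespace; surfacing those questions back to the product team is exactly the kind of work that distinguishes a great QE.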
Having started with C++, then moving to Perl and Java with a brief stint in Ruby, and now Python, I am comfortable stating that learning the language of your choice is very easy. What you need to learn are the design skills that make you choose subtle language and technology stack features the right way. Along the same lines, learning to test something is easy, but you need to quickly graduate to the level where developers consult you before writing any new code or fixing any P0. That, in my opinion, is the "enlightenment" for QEs. And that is exactly what is ignored, not only by the QEs themselves but by management too. Occasionally I have seen some great QE people go through the git MRs for every bug they test and try to identify corner cases from them, but that is really rare; it is an exception. At the other extreme, I have seen companies deny QEs git access to dev code.
No one wants hundreds of manual QEs on board, but everyone surely wants their software tested: software written by hundreds of developers with varied cultural, technical, and skill backgrounds, all converging in that product code.
Then why do we ignore manual testing?
Nowadays it has become a status symbol to say "we release code to production every second, millisecond, nanosecond, picosecond, and we have a cool set of a billion tests which take a few milliseconds to run, so we do not need a formal manual QE team." But is that the reality? Here is a set of questions every QE manager needs to ask himself or herself before firing the manual QEs and recruiting automation QEs:
1. How fit are your original manual tests, the ones you automated as a regression suite once the product stabilized?
2. How fit are your dev stories, which expose to the QEs what is being implemented and for which business use case?
3. How many times have you encouraged (or set examples for) your manual QE teams to work closely with product teams and establish an airtight process of compliance with business requirements?
4. How many times have you empowered your team to say NO to a feature that is messed up, instead of succumbing to sales pressure and releasing it half-cooked?
I am not against recruiting automation QEs at all, and I am not against automation either. But before developing that (n+1)th fancy automation framework, we should introspect on some important points as well:
1. Do your automation engineers understand the product from a functional as well as a deployment perspective?
   a. Are they willing to understand the product by first testing it manually?
2. Are you sure you want to write a billion Selenium tests and then spend hours fixing locator and timing issues, only to conclude that you have lost out on the RoI front?
3. What are you really investing resources into: developing the automation framework, or ensuring the quality of the product under test?
4. Are your devs writing good unit tests? If yes, can you not share the test automation burden with them by simply following the test pyramid (https://martinfowler.com/articles/practical-test-pyramid.html)?
   a. If not, can you not make them aware of the fact that "quality begins at home"?
5. Before coding repetitive manual tests as automation tests, have you analyzed how many external interfaces those tests touch?
   a. If your product talks to external systems frequently, then your testing is only as automated as your end-to-end automation allows.
6. Lastly, apart from perhaps the select MAANG-like companies, it is worth researching how many software product companies really certify their production code solely on the basis of automated tests.
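To make the test-pyramid point above concrete, here is a minimal sketch of the pyramid's base: a fast, browser-free unit test that a developer can own, freeing QEs to focus on scenario-level testing. The `apply_discount` function is a hypothetical stand-in for product code, not taken from any real product:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical product code: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Tests like these run in milliseconds with no locators or waits in
    # sight, which is why they belong at the base of the pyramid.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_out_of_range_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

When developers carry this layer, the expensive Selenium layer can shrink to a handful of genuine end-to-end journeys instead of a billion brittle scripts.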
Not all software products need a new automation framework, and no single automation framework can test all software products. So the tradeoff between writing a new framework and reusing an existing one is crucial!
On a different note, there are companies that do not have separate formal manual and automation QE teams; there, the developers are committed, play the role of testers when needed, and write their own tests.