Tuesday, March 26, 2019

Common Myths of Test Automation

It is not difficult to imagine the benefits of having automated testing alongside product development – faster releases, increased test coverage, frequent test execution and faster feedback to the development team, to name a few – yet many organizations have not made the move or are resistant to investing in test automation.
In this article, we shall examine some of the most common myths of test automation and how they prevent organizations from succeeding with it.

Setting Realistic Expectations

Possibly the most difficult and challenging aspect of any test automation endeavor is understanding the limitations of automated testing and setting realistic goals and expectations to avoid disappointment. With that in mind, let's look at some of the most common misunderstandings and myths about test automation:

Automated Testing is Better than Manual Testing

Referring to Michael Bolton’s blog post “Testing vs. Checking”, automated testing is not really testing; it is the checking of facts. When we have an understanding of the system, we can encode that understanding in the form of checks, and by running those automated checks we confirm our understanding. Testing, on the other hand, is an investigative exercise in which we aim to obtain new information about the system under test through exploration.
Testing requires a human to make a sound judgment on the usability of the system. We can spot anomalies we were not anticipating. We should not lean towards one approach or the other, as both are required to gain insight into the quality of the application.

Achieving 100% Automated Testing

Just as there is no practical way of achieving 100% test coverage (due to the endless possible permutations), the same applies to test automation. We can increase test coverage by running automated tests with more data, more configurations, and a variety of operating systems and browsers, but achieving 100% is still an unrealistic goal. When it comes to automated testing, more tests do not necessarily mean better quality or more confidence; it all depends on how well each test is designed. Rather than aiming for full coverage, focus on the areas of functionality that are crucial to the business.

Quick ROI

When implementing a test automation solution, there are more interrelated development activities than just scripting test cases. Normally a framework needs to be developed that supports bespoke operations which are useful and meaningful for the business, such as test case selection, reporting, data-driven testing, etc.
The development of the framework is a project in its own right; it requires skilled developers and takes time to build. Even when a fully functional framework is in place, scripting an automated check initially takes longer than executing the same test manually. Therefore, when we need quick feedback on a feature that has just been developed, checking it manually is usually quicker than automating the test. The ROI comes in the long run, when we need to execute the same tests at regular intervals.

Higher Rate of Defect Detection through Automated Checks

Although many vendor-supplied and home-brewed test automation solutions are very sophisticated and highly capable of performing complex operations, they will never be able to compete with the intelligence of a human tester who can spot unexpected anomalies in the application while exploring it or executing a set of scripted tests against the system under test. Ironically, people expect automated testing to find lots of bugs because of the allegedly increased test coverage, but in reality this is not the case.
True, automated tests are good at catching regression issues – after a new feature has been added to an existing code base, we need to ensure that we haven’t broken current functionality, and we need that information fast – but the number of regression issues, in most cases, tends to be far lower than the number of defects in the new functionality being developed.
Another point to bear in mind is that automated checks only check what they have been programmed to check by the person who wrote the script; they are only as good as that person. All automated checks can happily pass while major flaws go unnoticed, giving a false impression of the quality of the product. In essence, checking can prove the existence of defects, but it cannot prove their absence.

We Only Require Unit Test Automation

So, if the likelihood of finding defects is greater in testing new features, why aren’t we running our automated tests against the new functionality as it is being developed? Well, this is somewhat the case for teams that practice TDD.
The developers write a unit test first, watch it fail, then write just enough code to make the unit test pass, and the cycle is repeated until the intended functionality is delivered. In essence, these automated unit tests check new functionality, and over time they form the unit regression pack which is executed repeatedly as new functionality is delivered.
But there is a caveat to this. While TDD is highly encouraged and is a strong development practice for building quality in from the ground up, unit tests are only good at finding programmer errors, not failures. There is a much larger aspect of testing that happens when all the components are tied together to form a system.
In fact, many organizations have the majority of their automated checks at the system UI layer. However, scripting automated checks for the UI or the system while the features are being developed is at best a daunting task, as new functionality tends to be volatile (subject to many changes) during development. Also, the expected behaviour might not be known until later, so spending time automating a changing functionality is not encouraged.

We Only Require System UI Automation

There is value in running automated checks at the UI and system level. We get to see what the user experiences when interacting with the application; we can test end-to-end flows and third-party integrations that we could not test otherwise; we can also demo the tests to clients and end-users so they can get a feel for the test coverage. However, relying solely on automated checks at the UI layer has its own problems.
The UI is constantly changing to enhance visual design and usability, and automated checks that fail because of UI changes rather than changes in functionality can give a false impression of the state of the application.
UI automated checks are also much slower to execute than checks at the unit or API layer, and because of this the feedback loop to the team is slow. It might take a few hours before a defect is spotted and reported back to the developers. And when something does go wrong, root cause analysis takes longer because it is not immediately apparent where the bug is.
Understanding the context of each test and at which layer it should be automated is important. Test automation should be part of the development activity, so the whole team is responsible for it, with developers writing and executing unit tests, and Software Developers in Test writing, executing and maintaining acceptance tests at the API and/or UI layer.

Losing Faith and Trust in Test Automation

This last one is not a myth about test automation, but a side effect of test automation going wrong. You can spend many hours developing a perfect test automation solution, using the best tools and best practices, but if the automated checks do not help the team, it is all worthless.
If the team has no visibility of, or knowledge about, what is automated and executing, they either release with fear of the unknown or duplicate their regression testing efforts. If the automated checks are flaky, slow, or give intermittent results, they confuse the team more than they provide a safety net and a confidence booster.
Don’t be afraid of removing automated checks that constantly fail or give inconsistent results. Instead, aim for a clean and reliable suite of tests that gives a correct indication of the health of the application.

Conclusion

Test automation is a long-term investment. It takes time and expertise to develop and maintain test automation frameworks and automated scripts. Test automation is not a one-off effort where you deliver a solution and let it run; it needs constant monitoring and updating.
Rather than aiming to replace manual QAs or expecting the automated checks to find lots of defects, we should embrace the advantages test automation brings to the team, such as freeing up QA’s time for more exploratory testing, where the chances of revealing defects are maximized, or using automated scripts to create test data that can be used for manual testing.
Understanding the limitations and setting realistic expectations is important in overcoming these myths of test automation.

Git Cheat Sheet – Git Commands Testers Should Know

This post is a Git Cheat Sheet with the most common Git commands you will likely use on a daily basis.
If you are a technical tester working alongside developers, you should be familiar with the basic Git commands.
This post contains enough Git knowledge to get you going on a day to day basis as a QA.

Initial Git Setup

Initialize a repo

Create an empty git repo or re-initialize an existing one
$ git init

Clone a repo

Clone the foo repo into a new directory called foo:
$ git clone https://github.com/<username>/foo.git foo

Git Branch

How to Create a New Branch in Git

When you want to work on a new feature, you typically create a new branch in Git. As such, you generally want to stay off the master branch and work on your own feature branches so that master is always clean and you can create new branches from it.
To create a new branch use:
$ git checkout -b <branch_name>

How to List Branches in Git

If you want to know what branches are available in your working directory, then use:
$ git branch
Example output
develop
my_feature
master

How to Switch Branches in Git

When you create a new branch with git checkout -b, Git automatically switches to the new branch.
If you have multiple branches, then you can easily switch between branches with git checkout:
$ git checkout master
$ git checkout develop
$ git checkout my_feature

How to Delete Branches in Git

To delete a local branch:
$ git branch -d <branch_name>
Use the -D option flag to force it.
To delete a remote branch on origin:
$ git push origin :<branch_name>
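Git also offers a more explicit form for deleting a remote branch, which does the same thing as the colon syntax above (with <branch_name> again standing in for the branch you want to delete):
$ git push origin --delete <branch_name>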

Git Staging

To stage a file is simply to prepare it for a commit. When you add or modify files, you need to stage those changes into “the staging area.” Think of staging as a box where you put things before shoving it under your bed, where your bed is a repository of boxes you have previously shoved under it.

Git Stage Files

To stage (or simply add) files, you use the git add command. You can stage individual files:
$ git add foo.js
or all files at once:
$ git add .

Git Unstage Changes

If you want to remove a certain file from the stage:
$ git reset HEAD foo.js
Or remove all staged files:
$ git reset HEAD .
You can also create an alias for a command and then use it with Git:
$ git config --global alias.unstage 'reset HEAD'
$ git unstage .

Git Status

If you want to see which files have been created, modified or deleted, git status will show you a report.
$ git status

Git Commits

It is a good practice to commit often. You can always squash down your commits before a push. Before you commit your changes, you need to stage them.
The commit command takes a -m option which specifies the commit message; if you omit it, Git opens an editor so you can write one.
You can commit your changes like:
$ git commit -m "Updated README"

Undoing Commits

The following command will undo your most recent commit and put those changes back into staging, so you don’t lose any work:
$ git reset --soft HEAD~1
To completely delete the commit and throw away any changes use:
$ git reset --hard HEAD~1

Squashing Commits

Let’s say you have 4 commits, but you haven’t pushed anything yet and you want to put everything into one commit, then you can use:
$ git rebase -i HEAD~4
The HEAD~4 refers to the last four commits.
The -i option opens an interactive text file.
You’ll see the word “pick” to the left of each commit. Leave the one at the top alone and replace the word on all the others with “s” (for squash), then save and close the file.
Another interactive file then opens where you can combine your commit messages into one new commit message.
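For example, with four commits the interactive file might look something like this (the hashes and commit messages below are just placeholders):
pick a1b2c3d Add login page
s e4f5g6h Fix login page layout
s i7j8k9l Add login validation
s m0n1o2p Add login tests
After saving and closing, Git squashes the last three commits into the first one.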

Git Push

After you have committed your changes, the next step is to push them to a remote repository.

First Push

Push a local branch for the first time:
$ git push --set-upstream origin <branch_name>
After that, you can just use
$ git push

Push local branch to different remote branch

To push a local branch to a different remote branch, you can use:
$ git push origin <local_branch>:<remote_branch>

Undo Last Push

If you have to undo your last push, you can use:
$ git reset --hard HEAD~1 && git push -f origin master

Git Fetch

When you use git fetch, Git downloads the new commits but does not merge them with your current branch. This is particularly useful if you need to keep your repository up to date, but are working on something that might break if you update your files.
To integrate the fetched commits into your branch, you use merge.

Fetch changes from upstream:

$ git fetch upstream
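For example, to bring your local master up to date with what was just fetched (assuming, as above, that you have a remote named upstream with a master branch), you could then run:
$ git checkout master
$ git merge upstream/master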

Git Pull

Pulling is simply doing a fetch followed by a merge. When you use git pull, Git automatically merges the fetched commits without letting you review them first. If you don’t closely manage your branches, you may run into frequent conflicts.

Pull a branch

If you have a branch called my_feature and you want to pull that branch, you can use:
$ git pull origin my_feature

Pull everything

Or, if you want to fetch everything from the remote and merge the branch you are currently on:
$ git pull

Git Merging and Rebasing

When you run git merge, Git creates a new merge commit on your current (HEAD) branch, preserving the full history of both branches.
A rebase instead replays the commits of one branch on top of another, rewriting their history and avoiding a merge commit.

Merge Master Branch to Feature Branch

$ git checkout my_feature
$ git merge master
Or, with rebase, you use:
$ git checkout my_feature
$ git rebase master

Merge Feature Branch to Master Branch

$ git checkout master
$ git merge my_feature

Git Stash

Sometimes you make changes on a branch, and you want to switch to another branch, but you don’t want to lose your changes.
You can stash your changes. Here’s how you do a stash in Git:
$ git stash
Now, if you want to unstash those changes and bring them back into your working directory use:
$ git stash pop
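A typical flow, combining the two commands (the branch names here are just examples), might look like this:
$ git stash
$ git checkout master
# do whatever you need on master
$ git checkout my_feature
$ git stash pop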
