Selenium and similar tools are the de-facto standard when it comes to GUI testing. But do they go far enough?
Selenium tests are not infallible, as we found to our cost late one Friday evening when the support team got an urgent call. A few days after a system update, the customer could no longer use the application because the send button was not on the screen. The support engineer immediately checked the Selenium automation tests for failures, but was confused to find that every test had passed… what had happened?
How good are you at spot-the-difference?
The send button had rendered on the form, so the Selenium tests all passed. However, a CSS change had been made, and the send button had become hidden behind one of the input text areas. There was no way that Selenium could pick up this regression bug: Selenium only cared that the button appeared on screen, not where it appeared. Readability, usability and the appearance of a web application - all the things CSS controls - are completely invisible to traditional GUI test automation.
But what if you could simply assert that your application must look like a baseline image of the website? Or even that each element of a page matches a base image, to prevent exactly this kind of regression?
A Sea of Bloatware
When looking at visual testing with my current employer, we were unable to simply adopt an existing offering. Many options didn’t work straight out of the box, or contained so much bloat that they would add an extra overhead to the maintainability of the tests. We had already invested heavily in our automation frameworks and in upskilling developers and testers in C# and Selenium, so moving to new languages or frameworks could mean a steep learning curve. Above all, we needed the tool to fit in with our current automation framework, and so we had to create our own visual testing framework.
Whilst developing our framework, it quickly became apparent that rendering differences between browsers would often cause tests to fail. For example, if we took a base image of our homepage in Chrome and then ran the test in Firefox to ensure the homepage hadn’t changed, the test would fail because Firefox added additional padding to some elements. We needed a different set of base images for each browser.
A major plus was the impact on our CI pipeline. Compared to our traditional Selenium tests, where we asserted each element’s text, the visual tests executed far quicker. The framework simply navigates a web driver to the desired page, takes a snapshot which is held in memory, and then compares the image byte by byte with the base image stored locally (or wherever you’d prefer).
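The core comparison step can be sketched in a few lines. Our framework is written in C#, so this is an illustrative Python version under assumed names (`images_match`, a snapshot already held in memory as bytes, and a baseline file on disk), not the actual implementation:

```python
from pathlib import Path

def images_match(snapshot: bytes, baseline_path: Path) -> bool:
    """Compare an in-memory snapshot byte by byte with a stored
    baseline image. Returns True only on an exact match."""
    baseline = baseline_path.read_bytes()
    return snapshot == baseline
```

Because the snapshot never needs to be written to disk before the compare, the per-test overhead stays low, which is where the CI speed-up comes from.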
Easy to Maintain: A Dramatic Reduction in Code
Compared to Selenium, the tests were also much easier to maintain. A visual approach can dramatically reduce the number of lines of code an automated test requires. As an example, a previous test I’d been asked to write was 500+ lines; rewritten with the visual framework, it was just four lines of code: one line to specify the base image, one to specify the URL to compare, one to do the compare, and one to assert - and that’s it!
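The shape of such a four-line test might look like the sketch below. The real framework is C# on top of Selenium; here the function names (`capture`, `compare`) and the page capture itself are stubbed purely to show the structure:

```python
def capture(url: str) -> bytes:
    # Stand-in for "navigate the web driver to the URL and take a
    # snapshot held in memory"; a real version would drive a browser.
    return b"fake-homepage-bytes"

def compare(snapshot: bytes, baseline: bytes) -> bool:
    # Stand-in for the framework's byte-by-byte comparison.
    return snapshot == baseline

# The test itself: base image, URL, compare, assert - four lines.
baseline = b"fake-homepage-bytes"
snapshot = capture("https://example.com/")
result = compare(snapshot, baseline)
assert result
```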
One of the fantastic things with the visual testing framework, and specifically the one we designed, is what happens when a test fails: a copy of the original image is created and, wherever differences are found, a pink box is drawn around them, highlighting the areas of change and making them very quick and easy to identify.
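One way to produce that highlight is to find the bounding rectangle of all differing pixels and draw a box around it. This is an illustrative Python sketch operating on images represented as grids of RGB tuples, not our C# implementation:

```python
PINK = (255, 105, 180)

def highlight_differences(baseline, comparison):
    """Copy the comparison image (a grid of RGB tuples) and draw a
    pink box around the smallest rectangle containing every pixel
    that differs from the baseline."""
    diff = [[px for px in row] for row in comparison]
    changed = [(y, x) for y, row in enumerate(baseline)
               for x, px in enumerate(row) if px != comparison[y][x]]
    if not changed:
        return diff  # nothing to highlight
    top = min(y for y, _ in changed)
    bottom = max(y for y, _ in changed)
    left = min(x for _, x in changed)
    right = max(x for _, x in changed)
    for x in range(left, right + 1):   # horizontal edges of the box
        diff[top][x] = diff[bottom][x] = PINK
    for y in range(top, bottom + 1):   # vertical edges of the box
        diff[y][left] = diff[y][right] = PINK
    return diff
```

A real implementation might draw one box per contiguous region of change rather than a single bounding box, but the idea is the same.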
Accuracy: Finding the Right Level of Tolerance
During development of the testing framework, and whilst running spikes with various other visual testing tools, every comparison was pixel perfect: the frameworks could detect a single pixel of difference. However, after a few months of testing, and after asking other team members to help out, the tests started to fail. We found that our visual framework had to allow a certain amount of tolerance during comparison. A common issue was that colours would sometimes be slightly off (a slightly different shade) depending on which machine took the base image or the comparison image. Whilst this wasn’t a problem on the machine that always ran the tests, it caused issues when developers ran the tests locally. So, we had to introduce a tolerance across the 256 intensity levels of each colour channel.
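A channel-level tolerance can be expressed very simply: two pixels are treated as equal when every channel is within some number of intensity levels of its counterpart. Again, this is a hedged Python sketch of the idea (the names and the default tolerance value are assumptions, not our production values):

```python
def pixels_match(p1, p2, tolerance=10):
    """Treat two RGB pixels as equal if every channel differs by at
    most `tolerance` intensity levels (out of 256). A tolerance of 0
    is a pixel-perfect compare."""
    return all(abs(a - b) <= tolerance for a, b in zip(p1, p2))

def images_match_with_tolerance(img1, img2, tolerance=10):
    # Compare two same-sized grids of RGB tuples pixel by pixel,
    # absorbing slight shade differences between machines.
    return all(pixels_match(px1, px2, tolerance)
               for row1, row2 in zip(img1, img2)
               for px1, px2 in zip(row1, row2))
```

Tuning the tolerance is a trade-off: too low and developer machines produce false failures; too high and genuine shade regressions slip through.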
Whilst the comparison was no longer pixel perfect, we found it accurate enough to assert that layouts were as expected, that wording was correct, and that elements rendered as expected.
One of the big challenges for us has been dynamic content: our web pages often display a logged-on client name or perhaps a news feed. One of the ways we’ve been able to get around the issue is to create a helper which simply blankets over the dynamic elements.
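The blanket helper boils down to painting a solid rectangle over the dynamic region in both the baseline and the comparison image, so the region is identical on each side of the compare. A minimal Python sketch, with hypothetical names and inclusive row/column coordinates:

```python
MASK = (0, 0, 0)

def blanket(image, top, left, bottom, right):
    """Paint a solid rectangle over a dynamic region (e.g. a
    logged-on client name or a news feed) so it compares equal
    in both baseline and comparison images."""
    masked = [[px for px in row] for row in image]
    for y in range(top, bottom + 1):
        for x in range(left, right + 1):
            masked[y][x] = MASK
    return masked
```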
Even with the ability to cover dynamic content, it’s not always desirable to compare a whole page. What if you only want to check a small portion of it - perhaps just a button, to ensure it still matches the customer’s set branding?
With visual testing you can not only check that a whole page matches what you expect, but also check an individual element. By locating an element via a CssSelector or ID, you can take a base image of that individual element and then run tests to check it for changes.
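One way to get an element-level image is to take a full-page snapshot and crop to the element's bounding rectangle as reported by the driver. The sketch below illustrates the cropping step in Python on a grid-of-pixels image; the `rect` dict mirrors the x/y/width/height shape a web driver typically reports for a located element, but the names here are assumptions for illustration:

```python
def crop_element(page_image, rect):
    """Cut an individual element out of a full-page snapshot given
    its bounding rectangle: a dict with x, y, width and height."""
    x, y = rect["x"], rect["y"]
    w, h = rect["width"], rect["height"]
    return [row[x:x + w] for row in page_image[y:y + h]]
```

The cropped image can then be compared against a base image of that single element with the same tolerance-aware compare used for full pages.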
Our Automated Visual Testing tool has shown a great deal of promise. It has enough advantages over Selenium and 'spot the difference' manual checks for CSS issues that what started as an interesting experiment has grown into full-blown tool development. It still has its challenges (I didn't even get into storage), but there are clear ways to overcome them. It is not a magic bullet either, but with time, and once you know the right tolerances for your application, you will start to wonder how you lived without it.
If you're interested in learning more, pick up a free copy of the framework we are developing over on GitHub and let us know what you think.
We'd love to hear your thoughts!