Automated web testing is not new at Bonnier Broadcasting, but it is an area that has improved lately: the automation framework has been replaced and completely rebuilt.
Cypress end-to-end tests interact with the website's UI elements and then verify the expected behaviour, e.g. “click the button and verify the image is displayed”. This is standard practice for automated web testing. A very simple example of the syntax can look like:
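A minimal sketch of such a spec, assuming a hypothetical page URL and selectors (Cypress specs run under the Cypress test runner, which provides the `cy`, `describe` and `it` globals):

```javascript
// Hypothetical Cypress spec: the URL and selectors are illustrative only.
describe('image reveal', () => {
  it('displays the image after the button is clicked', () => {
    // Open the page under test.
    cy.visit('https://example.com/gallery');
    // Click the button...
    cy.get('button.show-image').click();
    // ...and verify that the image is displayed.
    cy.get('img.hero').should('be.visible');
  });
});
```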
To identify elements, specific test attributes such as data-test are preferred, as this improves test stability. In practice this means that when writing tests, the attributes often need to be added to the elements in the web application code as well. Allowing QA engineers to add these attributes to the application code themselves speeds up the work and deepens their understanding of the application.
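As an illustration (the attribute value and helper name are hypothetical, not from the actual codebase), a small helper can keep these selectors consistent across specs:

```javascript
// Hypothetical helper: builds a selector string from a data-test id.
// In the application markup, the QA would add the attribute, e.g.:
//   <button data-test="submit-order">Place order</button>
function byTest(id) {
  return `[data-test="${id}"]`;
}

// In a Cypress spec this would then be used as:
//   cy.get(byTest('submit-order')).click();
console.log(byTest('submit-order')); // → [data-test="submit-order"]
```

Selecting on a dedicated test attribute means the test keeps working even when CSS classes or button text change for styling or copy reasons.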
A limitation of Cypress is that not all bugs can be captured. Misalignments and other graphical errors, for example, will typically not be detected, so automated tests can never completely replace manual testing.
There is a separate repository of test code for each supported web application, with CI integration to a Travis environment. The test runs are scheduled as well as triggered on check-ins to both the application and test code repos. Once triggered, the runs use parallelization to reduce execution time. Detailed results, including video recordings, are available through the Cypress Dashboard, while high-level results from Travis are shared via Slack notifications.
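A sketch of what such a Travis setup could look like — the Node version, job count, and script are assumptions rather than the actual configuration:

```yaml
# Hypothetical .travis.yml sketch. Scheduled (cron) runs are configured
# in the Travis settings UI rather than in this file.
language: node_js
node_js:
  - 12
# Three identical jobs; with --record --parallel the Cypress Dashboard
# load-balances the spec files across them.
env:
  - CI_NODE=1
  - CI_NODE=2
  - CI_NODE=3
script:
  # CYPRESS_RECORD_KEY is assumed to be set as a secure Travis variable.
  - npx cypress run --record --parallel --ci-build-id $TRAVIS_BUILD_ID
```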
Automated test suites save time and quickly give you confidence that functionality is intact, but they also require good ways of working and recurring maintenance. Results must be monitored continuously, and the framework needs updates whenever application behaviour changes or new features are added. Test cases also need refining to improve coverage; e.g. whenever a bug occurs, it should be examined to see whether a test case update would catch it earlier next time.