Automated web testing is not new at Bonnier Broadcasting, but it is an area that has improved lately: the automation framework has been replaced and completely rebuilt.

The story behind it is that the previous framework was built on Ruby, which for many became a hurdle to getting committed and involved. Since all web developers are comfortable with JavaScript, a decision was made to replace the existing framework and start from scratch. After a quick evaluation of the available tools, Cypress was selected as the weapon of choice. For more information about the tool itself and its functionality, please visit the excellent website www.cypress.io, which includes all the setup details needed as well as descriptions and examples to get started writing test cases.

Cypress end-to-end tests interact with website UI elements and then confirm the expected behaviour, e.g. “click the button and verify that the image is displayed”. This is common practice for automated web testing and nothing strange in itself. A very simple example of the syntax can look like this:

cy.get("[data-test='main-button']").click()

cy.get("[data-test='main-image']").should('be.visible')

To identify elements, the use of dedicated test attributes such as data-test is preferred, as it improves test stability. This means that when writing tests, these attributes often need to be added to the elements in the web application code as well. Allowing QAs to add the attributes to the application code themselves speeds up the work and deepens the QAs' understanding of the application.
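
One common pattern, shown here only as a sketch (getByTestId is a hypothetical command name, not necessarily what is used in these repositories), is to wrap the attribute lookup in a Cypress custom command so that all tests select elements the same way:

// cypress/support/commands.js — hypothetical helper wrapping the data-test selector
Cypress.Commands.add('getByTestId', (id) => cy.get(`[data-test=${id}]`))

// Usage in a test
cy.getByTestId('main-button').click()
cy.getByTestId('main-image').should('be.visible')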

A limitation of Cypress is that not all bugs can be captured. Things like misalignments or other graphical errors will typically not be detected, so automated tests can never completely replace manual testing.

One of the strengths of Cypress is that it also fails tests when the web application itself throws errors, e.g. uncaught JavaScript exceptions. In such cases the functionality may appear intact and the error invisible to the user; nonetheless, it can be an important bug worth finding.
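
Failing on uncaught application exceptions is Cypress's default behaviour. As a sketch, the uncaught:exception event can be used to make an exception for a known, accepted error (the error message below is only an illustration):

// By default Cypress fails the test on any uncaught exception from the application.
// A handler can opt out for a specific, known error (hypothetical example):
Cypress.on('uncaught:exception', (err) => {
  if (err.message.includes('ResizeObserver loop limit exceeded')) {
    return false // returning false tells Cypress not to fail the test
  }
  // all other exceptions still fail the test
})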

There is a separate repository of test code for each supported web application, with CI integration to a Travis environment. The test runs are both scheduled and triggered on check-ins, for the application repos as well as the test code repos. Once triggered, the runs use parallelization to reduce execution time. Detailed results, including video, are available through the Cypress dashboard, and high-level results from Travis are shared via Slack notifications.
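
As a rough sketch of what such a CI step can look like (the Node version and group name are assumptions, and --record/--parallel require a Cypress Dashboard record key set as a secure environment variable):

# .travis.yml (sketch)
language: node_js
node_js:
  - "12"
install:
  - npm ci
script:
  - npx cypress run --record --parallel --group web-app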

Automated test suites save time and quickly give you confidence that the functionality is intact, but they also require good ways of working and recurring maintenance. Results must be monitored continuously, and the framework needs updates whenever application behaviour changes or new features are added. Refining test cases to improve coverage is also required; e.g. whenever a bug occurs, it should be examined to see whether an updated test case would catch it earlier next time.
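
As an illustration of that way of working, a regression test added after a hypothetical bug (the route, element name and viewport size are made up) could look like this:

// Hypothetical regression test: the main image disappeared on narrow screens
it('shows the main image on small screens', () => {
  cy.viewport(320, 568) // simulate a small phone viewport
  cy.visit('/')
  cy.get('[data-test=main-image]').should('be.visible')
})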

The development team’s involvement is crucial for success. Coming back to the initial idea behind switching to a framework that supports JavaScript, we can see that more team members are contributing. This hopefully means that it really is easier to get started, just as it should be.