This is Part II of a series of blog posts discussing the Convergence of Test Automation Platforms, Services, Frameworks and Tools. You can read Part I here.
Visual comparison tools are gaining traction for desktop, mobile web and native apps. The fundamental thing to understand is that this is not a replacement for functional automation. The tools range from Wraith, for manual checks at different breakpoints, through other open-source projects for building your own visual comparison pipeline, to vendor solutions with full git and pipeline integration. The biggest advantage of the vendor solutions is that it is very difficult to get this right if you implement it locally, because the result depends on things like the graphics driver, the machine the base image was taken on versus where it is being compared, the integration with git and the scaling factor.
One of the key aspects of visual testing is the effectiveness of the test results. Unless the tool can isolate true issues from expected visual changes, it will generate a lot of false failures and reduce the effectiveness of automated visual testing. AI capabilities give a visual testing tool an edge in identifying true failures.
Features like specific-area testing and layout testing are also important, as they give the tester options to include or exclude the dynamic content of a page from visual testing.
Some visual testing tools also help you Shift Left by running visual tests on every code check-in.
The key thing to understand here is that visual comparison cannot completely verify your functionality, but it can augment it.
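To make the idea concrete, here is a toy sketch of the core visual comparison step: diffing two "screenshots" (represented here as plain 2D grayscale arrays) while masking out dynamic regions, as the exclusion feature above does. The function name, the array representation and the region format are all illustrative assumptions; real tools work on PNG images and additionally handle anti-aliasing and scaling.

```python
# Toy sketch of a visual comparison step: diff two "screenshots"
# (plain 2D grayscale arrays here) while masking out dynamic regions.
# All names and the array representation are illustrative assumptions;
# real tools operate on image files and handle scaling/anti-aliasing.

def diff_ratio(baseline, candidate, ignore_regions=(), tolerance=0):
    """Return the fraction of compared pixels that differ beyond tolerance.

    ignore_regions: iterable of (top, left, bottom, right) boxes to skip,
    e.g. areas holding timestamps, ads or other dynamic content.
    """
    def ignored(y, x):
        return any(t <= y < b and l <= x < r for (t, l, b, r) in ignore_regions)

    differing = compared = 0
    for y, row in enumerate(baseline):
        for x, pixel in enumerate(row):
            if ignored(y, x):
                continue
            compared += 1
            if abs(pixel - candidate[y][x]) > tolerance:
                differing += 1
    return differing / compared if compared else 0.0


base = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
new  = [[0, 0, 9], [0, 0, 0], [0, 0, 0]]

# Without a mask, the changed pixel at (0, 2) counts as a difference.
full = diff_ratio(base, new)
# Excluding the top-right cell (a "dynamic" region) hides the change.
masked = diff_ratio(base, new, ignore_regions=[(0, 2, 1, 3)])
print(full, masked)
```

Note how the same pair of images passes or fails depending purely on the exclusion mask, which is why the include/exclude feature matters so much for keeping false failures down.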
The performance testing spectrum ranges from UI rendering performance to waterfall charts to API performance testing to profiling to static code analysers.
Typical performance testing starts with JMeter, but a complete solution goes far beyond it. For example, to performance test a micro-frontend app, stub the API calls so you can focus purely on rendering delays, removing the network latency caused by the APIs altogether. Microservices, on the other hand, need a different approach, i.e. without stubbing. The main challenges are integrating performance testing into CI/CD, performance testing at different throughputs, performance testing without simulation, and mimicking real user experience on real browsers.
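The stubbing approach above can be sketched in a few lines: a local HTTP stub returns a canned payload instantly, so any time measured in the browser is rendering time, not backend latency. The endpoint path and payload below are made-up examples.

```python
# Minimal sketch of API stubbing for front-end performance tests:
# a local HTTP stub returns a canned payload instantly, so measured
# UI time is rendering time rather than backend latency.
# Endpoint path and payload are invented examples.

import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = {"products": [{"id": 1, "name": "widget"}]}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(CANNED).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/api/products"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
server.shutdown()
print(data)
```

In a real setup you would point the micro-frontend's API base URL at the stub and drive the page with a browser automation tool while measuring render timings.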
There are hidden gems: tools which allow us to Shift Left performance testing. Use profilers for component-level fine tuning, and tools like Sonograph to measure the complexity of an application and continuously keep tabs on technical debt. Another perspective is to use tools like AppDynamics during performance testing to measure the impact, and get better insight into how your application behaves under stress.
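As a small illustration of the profiling side of Shift Left, here is a sketch using Python's built-in cProfile: run a hot function under the profiler during development (or in a CI job) and inspect cumulative timings before the code ever reaches a load-test environment. The `build_report` function is just a stand-in for real application work.

```python
# Sketch of "shift-left" profiling with Python's stdlib cProfile:
# profile a hot function locally and inspect cumulative timings
# long before a formal load test. build_report is a stand-in
# for real application work.

import cProfile
import io
import pstats

def build_report(n):
    # stand-in for application work worth profiling
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
result = build_report(100_000)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
stats_text = stream.getvalue()
print(stats_text)
```

The same pattern works as a CI gate: fail the build if a profiled hot path regresses beyond a threshold.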
A complete Perf/Load/Stress platform would be something that could do all of the following in one place, in a truly 360 Automation way:

- help Shift Left and bring performance testing close to the code, e.g. profiling
- continuously reduce architectural debt using powerful static code analysers
- stub dependencies, and use real or simulated browsers from external or internal cross-browser providers for UI-based performance measurement
- support microservices for API-based performance testing
- generate synthetic traffic, or generate traffic using enterprise digital platforms
- visualise results
- integrate easily with CI/CD
- cover desktop, mobile web and native apps
In this day and age, Crowd Sourcing as a concept is rather unique. Most, if not all, players do not go beyond some form of manual testing. For example, crowd-sourced automation of test cases, a crowd-sourced hackathon or crowd-sourced app development are unheard of, though they could blur the line between crowd sourcing and service providers. Crowd Sourcing is here to stay, and it has a peculiar and powerful feature that no other type of vendor can compete with or challenge: the multitude of people across different landscapes, devices, OSes, combinations of apps running on their devices, connection speeds, differently aged devices, age groups and usage habits. This is the closest you can get to testing your product in the real world. But if you use this service in the wrong place, for example every time you release your software, it is a complete waste of money, because you could better invest that money in automation.
Intelligent reporting is something that could save teams a lot of money through a huge productivity increase. Imagine this scenario.
When you have many tests running on your CI/CD pipeline, 10 to 20 times a day, based on the number of commits, branches, Pull Requests and engineers, it is difficult to keep track of the trend of your automation test failures for every run, by everyone, for every change. Imagine you would like to:
No Jenkins plugin can do justice to use cases like the above. Intelligent Reporting involves storing all automation artifacts, such as test case descriptions, screenshots, test durations, HTML reports, trace logging URLs, unique identifiers linking test failures to logging systems, failure error messages, etc., in a storage system like Elasticsearch or MongoDB, plus a front-end app to visualise the data from that storage system. The visualisation helps reduce the mean time to find the root cause of any automation failure.
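To illustrate, here is a plausible shape for such a test-result document as it might be stored in Elasticsearch or MongoDB. The in-memory list below stands in for the real store, and every field name is an assumption, not a prescribed schema.

```python
# Illustrative shape of a test-result document for a store like
# Elasticsearch or MongoDB. The in-memory list stands in for the
# real storage system; all field names are assumptions.

import datetime

def make_result_doc(test_name, status, duration_s, error=None,
                    screenshot_url=None, trace_url=None, run_id=None):
    return {
        "test": test_name,
        "status": status,                 # "passed" / "failed"
        "duration_s": duration_s,
        "error": error,                   # failure message, if any
        "screenshot": screenshot_url,     # link to stored artifact
        "trace": trace_url,               # link into the logging system
        "run_id": run_id,                 # ties the result to a CI run
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

store = []  # stand-in for the storage system
store.append(make_result_doc("checkout_happy_path", "passed", 12.4,
                             run_id="build-101"))
store.append(make_result_doc("checkout_declined_card", "failed", 9.8,
                             error="TimeoutError: #pay-button not found",
                             run_id="build-101"))

failures = [d for d in store if d["status"] == "failed"]
print(len(store), len(failures))
```

Once every run writes documents like these, the front-end app can slice failures by run, branch, engineer or error message over time.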
Over a period of time, you will be sitting on a gold mine of test automation error messages, which can be split into categories like automation issues, defects, etc. and converted into an ML model. This ML model can then be fed back into your automation framework, where you predict the failure type as you store results into the storage system.
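As a toy stand-in for such a model, the sketch below buckets error messages with keyword rules. A real setup would train a text classifier on the historical labels; the categories and keywords here are invented purely for illustration.

```python
# Toy stand-in for the failure-classification model: keyword rules
# that bucket error messages into "automation issue" vs "defect".
# A real system would train a text classifier on historical labels;
# categories and keywords here are invented for illustration.

RULES = {
    "automation_issue": ("stale element", "no such element", "timeout"),
    "defect": ("assertion", "expected", "500 internal server error"),
}

def classify(error_message):
    text = error_message.lower()
    for label, keywords in RULES.items():
        if any(k in text for k in keywords):
            return label
    return "unknown"

label_a = classify("TimeoutError: #pay-button not found")
label_b = classify("AssertionError: expected total 20.00, got 0.00")
print(label_a, label_b)
```

The point is the feedback loop: whatever the classifier is, it runs at the moment results are written to the store, so each new failure arrives pre-labelled.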
If your ML model is consistent and reliable, you could go one step further and use the Jira API to create a defect on the fly, as you store test results, whenever the model says so. This is the moonshot, this is everyone's dream, and it is possible!
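A sketch of that final step: if the classifier labels a failure a defect with high confidence, build a Jira issue payload. The payload shape follows Jira's REST "create issue" API; the project key, confidence threshold and model output are assumptions, and the actual HTTP call is deliberately left out of this sketch.

```python
# Sketch of auto-filing a defect: build a Jira "create issue" payload
# when the classifier is confident enough. Project key, threshold and
# the pretend model output are assumptions; the HTTP POST to
# /rest/api/2/issue is intentionally omitted here.

import json

CONFIDENCE_THRESHOLD = 0.95  # only file issues the model is very sure about

def jira_payload(test_name, error_message, project_key="QA"):
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": f"Automated test failure: {test_name}",
            "description": error_message,
            "issuetype": {"name": "Bug"},
        }
    }

label, confidence = "defect", 0.97  # pretend model output
payload = None
if label == "defect" and confidence >= CONFIDENCE_THRESHOLD:
    payload = jira_payload("checkout_declined_card",
                           "AssertionError: expected total 20.00, got 0.00")
    # In a real pipeline: POST json.dumps(payload) to Jira's create-issue endpoint
print(json.dumps(payload, indent=2) if payload else "no issue filed")
```

The confidence threshold is the safety valve: below it, the failure is merely stored and labelled for a human to triage, not filed.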
Performance, Intelligent Reporting and Visual Regression are key parts of 360 Automation, especially Intelligent Reporting; without it, no solution is complete. The takeaway here is that a complete solution is possible only by having the ability to plug in different tools to build a 360 Automation solution, as One Platform, like AWS.
In Part III of this series, we will look into the following topics: