In this article, we highlight the milestones of user interfaces and software testing to show how testing has evolved over time.
Introduction: It is complicated!
There are dozens of articles and blog posts that try to identify the first “software tests”. For the purposes of this article, we will stick to Hetzel and Gelperin, who divided the history of testing up to the 2000s into five significant eras, described as follows.
Debugging Oriented Era
During this phase in the early 1950s, there was hardly any distinction between testing and debugging; they were essentially the same activity. A developer would write some code, try it out and, when an error surfaced, analyse and debug the problem. There was no systematic approach or concept behind this process. A distinction between debugging and program testing did not even emerge until the late 1950s.
Demonstration Oriented Era
In the late 1950s, debugging and testing began to separate: from then on, debugging meant eliminating known errors, while testing meant finding (possible) errors. The major goal of this era was to make sure that software requirements were satisfied. At the time, however, many types of testing had not yet been discovered or even thought about. Negative testing, for example, the deliberate attempt to break an application, was not practiced. Considering how expensive, scarce and time-consuming computing power was back then, it is understandable that nobody wanted to push the machines too hard.
Destruction Oriented Era
The well-known computer scientist Glenford Myers changed this in the late 1970s and early 1980s and initiated the destruction-oriented era. In this era, breaking the code and finding errors in it were the main goals.
Myers popularized concepts such as “a successful test case is one that detects an as-yet-undiscovered error”. Debugging and verification were separated even further, to the delight of developers. The first dedicated testers were hired; they would be handed a piece of software and try to break it wherever they could, from simple errors such as typing letters into (supposedly) numeric text fields all the way to the complete collapse of the software.
The problem was that software could practically never be released, because there was always another bug to be found, and fixing one bug often made another appear somewhere else. Something had to change.
Evaluation Oriented Era
In the mid-1980s, software testing took a new direction towards the evaluation and measurement of software quality. At some point it had to be accepted that not all errors could ever be found, so the purpose of testing had to be redefined. From then on, testing was seen as a way of building confidence in how well a piece of software worked. Testing continued until an acceptable level was reached, a point at which the major bugs and crucial problems were fixed.
Prevention Oriented Era
The last era before specialised disciplines such as user interface testing emerged was the prevention-oriented era. In the late 1980s and 1990s, computers “came home”, literally: they became affordable for regular consumers, and the requirements for software testing changed once more. Tests now focused on demonstrating that software met its specifications and that defects could be prevented. Code was divided into testable and non-testable code, and new techniques for software testing appeared. The most popular of these in the 1990s was exploratory testing, which took the sheer “will to destroy” of the 1970s and combined it with a deeper understanding of the system in order to find more complex bugs.
History of User Interface Testing
Finally, let’s narrow all this information down and look at how user interface testing emerged from the history of user interfaces and software testing.
First of all, let’s get back to the scenario we just opened: the “homecoming” of the computer. What did that mean for software and especially for user interface testing? Compared to today, testers will most likely smile at how much simpler the environment side of UI testing was back then. There were very few computer models on the market, which is why Windows 95 is iconic to this day: basically every family that had a computer in the late 1990s ran Windows 95, with a user interface everyone can still remember vividly. With the shift of the main customer base from big companies and governmental organizations towards private customers, UI testing really took off, as the requirements became so much more diverse.
Today, there are thousands of different devices and user interfaces. Responsive web design was something that couldn’t even be thought of, back when everyone had the same computer and the same monitor.
The freeware scripting language AutoIt for Microsoft Windows was one of the first ways to do some test automation on your own computer at home. It was primarily intended for writing automation scripts and was very popular among computer enthusiasts in the early 2000s.
Agile Development
The early 2000s led to a whole new awareness and appreciation of software quality. The agile manifesto can be seen as the driving force behind this change. Today, “agile” is an umbrella term for the methods, concepts and techniques that build on the manifesto. As a reminder, its four key values are:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
Now why is this so crucial for UI testing? In his book Succeeding with Agile, Mike Cohn introduced the concept of the test pyramid.
The test pyramid is a simple test metaphor used to visualize the different layers of testing and how much testing should be done on each layer. Later, single-page application frameworks such as React or Angular became hugely popular, and much of the UI logic written in these frameworks was covered by unit tests. Looking at today’s huge variety of display sizes and responsive web design, the test pyramid may be as good a metaphor as ever: unit tests alone cannot fully cover a UI today, but they were popular and cleared the way for the first automated tests.
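To make the pyramid’s broad base concrete, here is a minimal sketch of a fast unit test covering a bit of UI logic without rendering any interface. The `validate_username` helper and its rules are hypothetical, purely for illustration:

```python
# Sketch: the base of the test pyramid is made of many small, fast unit tests.
# The helper below is a hypothetical piece of UI logic (an input-validation
# rule) that can be tested without starting a browser or rendering anything.

def validate_username(name: str) -> bool:
    """UI-level rule: 3-20 characters, alphanumeric plus underscores."""
    return 3 <= len(name) <= 20 and all(c.isalnum() or c == "_" for c in name)

# Unit tests are cheap to run in bulk, which is why the pyramid puts so many here.
assert validate_username("alice_42") is True
assert validate_username("ab") is False          # too short
assert validate_username("bad name!") is False   # disallowed characters
```

Tests like these run in milliseconds, which is exactly why the pyramid places them at the bottom and the slow, full-UI tests at the top.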
Automated testing
The first framework that really hit it big in UI testing was Selenium. First published in 2004, Selenium offers a portable framework for testing web applications, and its record-and-playback tool (Selenium IDE) lets testers create functional tests without having to learn a test scripting language.
The new awareness of software quality coming from agile development led to a more in-depth understanding of, and approach to, testing. User interface testing profited from this trend, receiving more attention after being overlooked for a long time because of how slow and expensive it is for test departments. To this day, most smaller companies still test their user interfaces manually, which shows how difficult the switch to UI automation is considered to be.
While automated testing has become the norm for most other types of software testing, automated UI testing can be considered the showcase discipline of test automation. Yet most tools break down under today’s requirement of covering every single display size and format. Only two decades later are the first completely visual UI automation tools emerging to solve this problem.
Visual Test Automation in the 2020s
One of the most promising approaches to solving the struggles of UI testing is a human-centric one. This approach has profited the most from scientific advances in computer vision: it enables the humanisation of UI testing.
Many robots and automations are supposed to work in as human-like a way as possible. But how do you train an AI to detect UI elements the way humans do? You teach it: by feeding an AI as much labelled information about UI elements as possible, it will eventually learn the job.
Why is this such a huge leap for test automation and why will it shape the future of UI testing?
As mentioned before, most UI testing tools rely solely on code, which is why they struggle with varying display sizes, formats and even the tiniest UI changes. An AI that detects elements visually is independent of the test environment and context, just like a manual tester. The technique is largely independent of an application’s specific visual appearance: for example, almost every login page can be exercised with the exact same script and the same test step descriptions.
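To illustrate the idea of one script driving any login page, here is a hypothetical sketch of “semantic” test steps such as a vision-based tool might consume. Every name in it is invented for illustration; the point is that steps refer to element meanings, not to IDs or XPaths:

```python
# Hypothetical sketch: test steps expressed as semantics ("username field"),
# not as DOM selectors. A vision-based tool would map each label to whatever
# element its detector finds on the current screen, so the same script could
# in principle drive any login page. All names here are illustrative.

LOGIN_SCRIPT = [
    ("type", "username field", "alice"),
    ("type", "password field", "secret"),
    ("click", "login button", None),
]

def run(script, detected_elements):
    """Replay the script against whatever elements the detector found."""
    log = []
    for action, label, value in script:
        target = detected_elements[label]  # matched by meaning, not by id/xpath
        log.append((action, target, value))
    return log

# Two visually different pages only differ in where the detector finds things;
# the script itself never changes.
page_a = {"username field": (120, 80), "password field": (120, 120), "login button": (120, 160)}
assert run(LOGIN_SCRIPT, page_a)[0] == ("type", (120, 80), "alice")
```

The design choice being sketched is the separation of intent (the script) from location (the detector’s output), which is what makes such tests robust to layout changes.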
The possibility to teach an AI the same semantics that manual testers use will shape the future of UI testing.
With this peek into the future, our article about the history of (UI) software testing ends. Let us know which milestones you think should have been mentioned, and why.
If you are interested in (test)automation you should join our Discord community!