History of software testing

Debugging period (1947–1956)

In 1947, the terms “bug” and “debugging” were coined. Grace Murray Hopper, a scientist working at Harvard University with the Mark II computer, found that a moth had got stuck in a relay, causing it not to make contact. She detailed the incident in the work log, taping the moth in as evidence and referring to the moth as the “bug” causing the error, and to the action of eliminating the error as “debugging”.

At that time, testing focused on the hardware, which was far less mature than it is today and whose reliability was essential for the software to run properly. Debugging was understood as applying a patch for a particular bug within the development process, so the tests that were performed were purely corrective: measures were taken only to make the program work.

It was in 1949 that Alan Turing wrote his first article on carrying out checks on a program. Then, in 1950, in the article that introduced the Turing test, he described how software must conform to the requirements of a project and how the behaviour of a machine must be indistinguishable from that of a reference system (human logic).

Demonstration period (1957–1978)

In 1957, Charles Baker explained the need to develop tests that ensure the software meets its pre-designed requirements (testing), as distinct from verifying that the program runs (debugging).

Test development grew in importance as applications became more expensive and complex: the cost of fixing all these deficiencies posed a clear risk to the profitability of a project, and experience in this new field kept accumulating.

A special focus was placed on increasing the number and quality of tests, and for the first time the quality of a product began to be linked to the state of the testing phase.

The aim was to demonstrate that the program did what it had been specified to do, using expected and recognisable parameters.
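As a modern illustration of that mindset, the minimal sketch below (using Python’s unittest; the apply_discount function and its expected values are hypothetical examples, not anything from the period) encodes the expected parameters of a specification and demonstrates that the program produces the promised results:

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    # Each case pairs known inputs with the result the specification expects.
    def test_regular_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 120)


if __name__ == "__main__":
    unittest.main()
```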

Evaluation period (1983–1987)

In 1983, a methodology was proposed that integrated analysis, review and testing activities throughout the software life cycle in order to evaluate the product as development progressed.

The testing phase was recognised as an integral part of the development of a product, acquiring special importance with the appearance of tools for building automated tests, which notably improved efficiency.

Prevention period (1988–present)

In 1988, David Gelperin and William Hetzel published “The Growth of Software Testing”, where they redefined the concept of testing as the planning, design, construction, maintenance and execution of tests and test environments. This period is chiefly characterised by testing appearing at the earliest stage in the development of a product: planning.

It is worth visualising how the software testing stage has evolved, from being absent to being continuously present throughout the life cycle.

If we imagine the entire development process as a finite line, where the beginning is planning and the end is monitoring the shipped product, we can see how the testing phase has moved to the left. It started as a post-production stage, later it became a pre-production stage, and now it is present throughout the whole process. This practice is known as Shift-Left, and it is occurring in many other fields, as we will discuss in other posts.

Through these 70 years of history we can learn a lot about the changes still to come, not only in the field of software quality, but also in others such as Security or DevOps.