There is a plethora of tools available for engineers to test their code these days. However, frameworks like Mocha.js only came around in 2011, with the ascent of NPM. So I wonder: what were the practices around software testing a few decades ago?
Programmers had to define some input data, hand-calculate the expected output, and then check it by running the program. See, for example, the paper "The FORTRAN Automatic Coding System" by Backus et al. (1957):
He then programmed the job in four hours, using 47 FORTRAN statements. These were compiled by the 704 in six minutes, producing about 1000 instructions. He ran the program and found the output incorrect. He studied the output ... and was able to localize his error in a FORTRAN statement he had written. He rewrote the offending statement, recompiled, and found that the resulting program was correct.
In the really early days (mainframe and batch-job days), testing was accomplished by the systems engineers writing a set of test data, hand-calculating the expected results, and giving only a small subset of them to the programmers. The programmers wrote the code, the test suites were run against the program, and the output was passed back to the systems engineers, who returned pass/fail results.
Of course, we were also using static analysis tools such as lint (in Unix from 1979) to spot areas needing specific attention.
A little later, when programmers had direct access to some hardware, we would write our own test harnesses and stubs to test our own code. Some teams had a process whereby developers tested each other's code. On projects where many sub-processes had common interfaces, a generic test harness might be implemented to standardise testing and save every developer the time of creating their own test skeleton for each process; I have done this myself on more than one team.
In the late 19xxs we were using test frameworks such as LDRA Testbed (founded 1975) and IBM Rational Test RealTime for safety-critical C code.