Normally, when planning software development, you have to divide the project into tasks and sub-tasks and then try to estimate how long each of those tasks will take.
The problem is that each new project is unique in its own way, and many of the pieces of a project can only be tested further down the road.
To take an example, in my first job out of college, at Avantron, I was tasked with creating multiple APIs: a database access API, a communication layer API, a logging API, and so on.
One of my co-workers would consume those API calls in the UI he was in charge of creating, but had I waited for him to test my API calls, a bug could have sat waiting for weeks or even months before it was discovered, by which time new features might have been built on that faulty code, requiring months of refactoring (rewriting).
I couldn't risk it, so I wrote my own test modules which would query my API calls and make sure they functioned properly. For the record, this was a simple application called "TestComm", which was initially made just to test the communication layer API calls, but which later could exercise all of my API calls.
TestComm was so simple to use that our QA department began using it to trigger errors in our systems by sending calls in the wrong order!
And yet, it had only taken me a few hours to build the prototype, and over the 3 years I worked at Avantron, TestComm must have received less than 40 hours of coding, yet it could test every API call I ever wrote in seconds.
That's the leverage of automated tests...
But it wasn't test-driven development
Test-driven development is when you write the test first, watch it fail because the code it exercises doesn't exist yet, and then write as little code as possible to make the test pass.
I modified TestComm after I made each new API call, but I could have started with TestComm instead, and made it call a new message type on the Avantron device I was communicating with.
Imagine I were to make a new "poll" message which queries a unit to know if it's available to talk.
Here is how test-driven development would work:
- I add a "Poll" button in TestComm, which takes about 20 seconds in the GUI
- I hook the button to a new function called "poll", which takes about 30 seconds in the code
- From that new "poll" function, I call the "poll" function of the communication layer, which takes about 30 seconds
- I add the result of the "poll" call to the TestComm log text box (the wiring is sketched below).
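Roughly, that wiring might look like the following. This is only a quick sketch in Python (the real code wasn't Python), and the names here, such as CommLayer, on_poll_clicked and log_box, are stand-ins I'm making up for illustration, not the actual Avantron API:

```python
# Hypothetical sketch of the TestComm side of the wiring.
class TestComm:
    def __init__(self, comm_layer, log_box):
        self.comm = comm_layer   # the communication layer API under test
        self.log = log_box       # the log text box in the GUI

    def on_poll_clicked(self):
        """Handler for the new "Poll" button."""
        result = self.poll()
        self.log.append(f"Poll -> {result!r}")   # show the result in the log

    def poll(self):
        # Delegate to the communication layer's poll() call,
        # which is the thing the test is actually exercising.
        return self.comm.poll()
```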
In less than 2 minutes, I have a button I can press to test the poll message. So how do I make it work?
Simple: I run my test and try to fix whatever problem I encounter, until the test works.
Test #1
When I start TestComm and connect to the IP of a device, I can now press the Poll button, which raises an exception: the poll function doesn't exist in the communication layer API!
Solution: I create a new empty function called "poll", which takes no parameters, and recompile everything.
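In the same hypothetical sketch, the fix is about as small as a fix can be:

```python
class CommLayer:
    # ... existing connect/send/receive code ...

    def poll(self):
        """Step 1: an empty poll() so TestComm's call no longer raises."""
        pass   # does nothing and returns nothing, for now
```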
Test #2
I start TestComm, connect to the IP of a device, and press the Poll button, which does nothing and returns nothing! The poll function is empty, so nothing occurs.
Solution: I know how to send a message, and let's say I was told that the "Poll" message type is 88, with no data in it. As it turns out, I already have a function which sends a message on the current TCP/IP socket, given a type and a byte array.
I call the function with type 88, and send an empty byte array.
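Sketched in the same hypothetical Python, and assuming the existing helper is called send_message(msg_type, payload) (the name is mine, not the real API), that looks like this:

```python
POLL_MSG_TYPE = 88   # the "Poll" message type, per the protocol documentation

class CommLayer:
    def poll(self):
        # Step 2: actually send a Poll message (type 88) with an empty payload.
        self.send_message(POLL_MSG_TYPE, b"")
```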
Test #3
I start TestComm, connect to the IP of a device, and press the Poll button, which returns nothing! The poll function did send the message, but I didn't bother returning anything, so TestComm gets nothing back.
Solution: the send-message function returns whether or not the message was sent, so I simply return that value.
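In the sketch, that's a one-word change:

```python
class CommLayer:
    def poll(self):
        # Step 3: send_message() reports whether the message went out,
        # so hand that result back to the caller (TestComm).
        return self.send_message(POLL_MSG_TYPE, b"")
```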
Test #4
I start TestComm, connect to the IP of a device, and press the Poll button, which returns true! The poll function did send the message, and it reported that it did. This is a success!!!
Except that the device I am connected to doesn't support the Poll message yet, and I still received true. That's odd, isn't it?
No, it's not, because we returned that we SENT the message and not that we received a response.
Solution: I read the documentation I have (in reality, I was in charge of the communication protocol, so it would be documentation I wrote), and it mentions that the device will return a Poll Response, type 89.
Well, I have a function that waits for the next message, or optionally for the next message of a certain type, with a timeout value.
So I wait for a message of type 89, with the default timeout value. The function returns false if nothing is received before the timeout expires, or the content of the type 89 message if one is received.
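Putting it together, the poll function in my hypothetical sketch ends up looking something like this, with wait_for_message() standing in for that wait-for-the-next-message-of-a-type helper (again, a made-up name, not the real API):

```python
POLL_MSG_TYPE = 88            # "Poll" request
POLL_RESPONSE_MSG_TYPE = 89   # "Poll Response" sent back by the device

class CommLayer:
    def poll(self):
        # Step 4: sending isn't enough; wait for the device's Poll Response.
        if not self.send_message(POLL_MSG_TYPE, b""):
            return False   # the request never even went out
        # Returns False on timeout, or the content of the first
        # type-89 message received before the timeout expires.
        return self.wait_for_message(POLL_RESPONSE_MSG_TYPE)
```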
Test #5
I start TestComm, connect to the IP of a device, and press the Poll button, which returns false after a small delay. The poll function did send the message, but the device didn't respond, so it returned false.
Solution: update the firmware of my device to the new version, which does support the Poll message.
Test #6
I start TestComm, connect to the IP of a device, and press the Poll button, which returns the content we got from the device after a small delay. The poll function sent the message, and the device responded.
Why is that better?
By breaking down the development into test steps, we eliminate a lot of complexity in the design. In fact, the one thing missing from the above example is that after each test, we should refactor to simplify our code.
We also ensure that each time we make a step forward, the previous step is fully tested... and will keep being tested at every future step.
This is important: by doing continuous automated testing, we avoid regression bugs, that is, previously working code that gets broken by its interaction with new code.
Test-driven development is a proven method to reduce the complexity of a project while improving code quality.