When we want to add new features to our CMS, we want to be 100% certain they will run smoothly for our clients. Despite our workloads being eased with the help of computers and automated testing, there’s a lot of manual work that has to go into updates and upgrades too.
Our software is split into many different components that interact, much like the parts of a car engine. Over time we modify various existing components or make new ones to give the CMS new features for our customers and their end users.
Whenever a component is changed or added, we have to test our software both at the component level and as a whole. The primary reason for testing is to find where our software does something different from what is expected (often referred to as a ‘bug’) and then to fix it.
The following is a short list of some of the manual testing we do.
All of the Intergage CMS software code is reviewed at least once by another member of the team. This means that at least two people (the developer and the reviewer) have looked at and agreed on every line of code. Where the reviewer has feedback, the review process repeats until both the developer and reviewer agree.
Once code has been written, every line of added or changed code is debugged. This means stepping through the code very slowly (pressing a key to advance one command at a time) and watching what really happens when the computer executes each line.
The test process may need repeating many times with slightly different inputs or configurations to ensure every route through the new code has been taken and debugged. Sometimes we even have to inject false errors into the testing environment to make sure that the software code deals with planned and unplanned situations.
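As a rough illustration of injecting a false error to exercise both the planned and unplanned routes through a piece of code, here is a minimal sketch in Python. The `save_item` function and the storage layer are hypothetical stand-ins, not real Intergage CMS code; the fault injection uses Python's standard `unittest.mock`.

```python
from unittest import mock

# Hypothetical component under test (names are illustrative only):
# saves an item and reports success or failure.
def save_item(storage, item):
    try:
        storage.write(item)
        return True
    except IOError:
        return False  # graceful handling of an unplanned storage failure

# Normal route through the code: the storage layer works.
working = mock.Mock()
assert save_item(working, {"id": 1}) is True

# Inject a false error so the failure route is also exercised.
failing = mock.Mock()
failing.write.side_effect = IOError("disk full")
assert save_item(failing, {"id": 1}) is False
```

Repeating this with different injected faults and inputs is how every route through the new code gets taken at least once.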
When you add an item then immediately delete it, you would expect the database to be exactly the same afterward as it was before. We manually check this, for example adding multiple products to a shopping basket, and then removing them all and ensuring we get back to the same state.
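The add-then-remove round trip described above can be sketched as a simple state check. The in-memory `Basket` class below is a hypothetical stand-in for the real shopping basket, used only to show the invariant: after adding items and removing them all, the state should match the snapshot taken beforehand.

```python
# Minimal sketch of the round-trip check, assuming a toy in-memory
# basket rather than the real CMS data model.
class Basket:
    def __init__(self):
        self.items = {}

    def add(self, product, qty=1):
        self.items[product] = self.items.get(product, 0) + qty

    def remove(self, product):
        self.items.pop(product, None)

basket = Basket()
before = dict(basket.items)  # snapshot of the starting state

for product in ["hat", "scarf", "gloves"]:
    basket.add(product)
for product in ["hat", "scarf", "gloves"]:
    basket.remove(product)

# Adding then removing everything should restore the original state.
assert basket.items == before
```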
There are also a vast number of database references to be checked. This means watching the contents of the database tables or viewing log files so that we can check that the software is correctly adding, updating or removing data. We also check that any database upgrades that are required for new features execute exactly as expected.
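The kind of table-watching described above can be pictured with a small example. This uses Python's built-in `sqlite3` with an in-memory database and an invented `products` table purely for illustration; it is not the CMS's actual database or schema.

```python
import sqlite3

# Illustrative only: inspecting table contents after each operation,
# the way we watch real database tables during testing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")

conn.execute("INSERT INTO products (name) VALUES ('Widget')")
assert conn.execute("SELECT name FROM products").fetchall() == [("Widget",)]

conn.execute("UPDATE products SET name = 'Gadget' WHERE id = 1")
assert conn.execute("SELECT name FROM products").fetchone() == ("Gadget",)

conn.execute("DELETE FROM products WHERE id = 1")
assert conn.execute("SELECT COUNT(*) FROM products").fetchone() == (0,)

conn.close()
```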
A new feature is also tested as exactly that: a feature, rather than a collection of code. This means using the feature from the point of view of a real user, often across a variety of different configurations, and making sure everything works the way you would expect.
By the end of our exhaustive testing we have a high degree of confidence that our software is working correctly, before we dare release the new code to our live servers. This is followed by paranoid manual double-checking on carefully chosen sites just to make sure, whilst keeping a close eye on our monitoring alerts and transactional reports.
The whole process is designed to be entirely seamless and without any risk of breakage so that our customers don’t even know that an upgrade has happened. It’s only when we tell our customers that they have the benefit of a new feature that they realise!
Copyright © 2016 Intergage Ltd | All Rights Reserved | Registered in England | Company No. 03989761 | VAT No. 754 8431 12