This post is not about the sort of testing people talk about when nearing a release and deciding whether it’s done. I have another word for that. I call it “testing,” or sometimes final testing or release testing. Many projects perform that testing in such a perfunctory way that it is better described as checking, according to the distinction between testing and checking I have previously written about on this blog. As Michael Bolton points out, that checking may better be described as rejection checking, since a “fail” supposedly establishes a basis for saying the product is not done, whereas no amount of “passes” can show that it is done.
Acceptance testing can be defined in various ways. This post is about what I consider real acceptance testing, which I define as testing by a potential acceptor (a customer), performed for the purpose of informing a decision to accept (to purchase or rely upon) a product.
Do we need acceptance testing?
Whenever a business decides to purchase and rely upon a component or service, there is a danger that the product will fail and the business will suffer. One approach to dealing with that problem is to adopt the herd solution: follow the thickest part of the swarm; choose a popular product that is advertised or reputed to do what you want it to do and you will probably be okay. I have done that with smartphones, ruggedized laptops, file-sharing services, etc., with good results, though sometimes I am disappointed.
My business is small. I am nimble compared to almost every other company in the world. My acceptance testing usually takes the form of getting a trial subscription to a service, or downloading the “basic” version of a product. Then I do some work with it and see how I feel. In this way I learned to love Dropbox, despite its troubling security situation (I can’t lock up my Dropbox files) and the fact that there is a significant chance it will corrupt very large files. (I no longer trust it with anything over half a gig.)
But what if I were advising a large company about whether to adopt a service or product that it will rely upon across dozens or hundreds or thousands of employees? What if the product has been customized or custom built specifically for them? That’s when acceptance testing becomes important.
Doesn’t the Service Level Agreement guarantee that the product will work?
There are a couple of problems with relying on vendor promises. First, the vendor probably isn’t promising total satisfaction. The service “levels” in the contract are probably narrowly and specifically drawn. That means if you don’t think of everything that matters and put that in the contract, it’s not covered. Testing is a process that helps reveal the dimensions of the service that matter.
Second, there’s an issue with timing. By the time you discover a problem with the vendor’s product, you may already be relying on it. You may already have deployed it widely. It may be too late to back out or switch to a different solution. Perhaps your company negotiated remedies in that case, but there are practical limitations to any remedy. If your vendor is very small, they may not be able to afford to fix their product quickly. If your vendor is very large, they may be able to afford to drag their feet on the fixes.
Acceptance testing protects you and makes the vendor take quality more seriously.
Acceptance testing should never be handled by the vendor. I was once hired by a vendor to do penetration testing on their product in order to appease a customer. But the vendor had no incentive to help me succeed in my assignment, nor to faithfully report the vulnerabilities I discovered. It would have been far better if the customer had hired me.
Only the accepting party has the incentive to test well. Acceptance testing should not be pre-agreed or pre-planned in any detail; otherwise the vendor will make sure that the product passes those specific tests. It should be unpredictable, so that the vendor has an incentive to make the product truly meet its requirements in a general sense. It should be adaptive (exploratory) so that any weakness you find can be examined and exploited.
The vendor wants your money. If your company is large enough, and the vendor is hungry, they will move mountains to make the product work well if they know you are paying attention. Acceptance testing, done creatively by skilled testers on a mission, keeps the vendor on its toes.
By explicitly testing in advance of your decision to accept the product, you have a fighting chance to avoid the disaster of discovering too late that the product is a lemon.
My management doesn’t think acceptance testing matters. What do I do?
Warning: This is mostly a narcissistic post that will add little value to the testing community.
I’ve been pretty depressed about my proposal not getting picked for Let’s Test 2014. Each of my proposals had been picked for STPCon and STAR over the past three years; I guess I was getting cocky. I put all my eggs in one basket and only proposed to Let’s Test. My wife and I were planning to make a vacation out of it…our first trip to Scandinavia together.
Despite my rejection, my VP graciously offered to send me as an attendee, but I wallowed in my own self-pity and turned her down. In fact, I decided not to attend any test conferences in 2014. Pretty bitter, huh?
I know I could have pulled off a kick-ass talk with the fairly original and edgy topic I submitted. I dropped names. I got referrals from the right people. My topic fit the conference theme perfectly, IMO. So why didn’t I make the cut?
The Let’s Test program chairs have not responded to my request for “what I could have done differently to get picked.” Lee Copeland, the STAR program chair, was always helpful in that respect. But I don’t blame the Let’s Test program chairs. Apparently program chairs have an exhausting job, and they get requests for feedback from hundreds of rejected speakers.
Fortunately, my mentor and friend, Michael Bolton, read my proposal and gave me some good, honest feedback on why I didn’t get picked. He summarized his feedback into three points, which I’ll paraphrase:
So there it is. One of my big testing-related failure stories. Wish me luck next year when I give it another go, for Let’s Test 2015…man, that seems a long way off.
Some people say they “don’t have enough requirements to start testing,” or that the requirements are unclear or incomplete or contradictory or out of date. First, those people usually mean requirements documents; don’t confuse that with requirements, which may be explicit or tacit. There are plenty of sources of requirements-related information, and it’s a tester’s job to discover them and make inferences about them. Second, insufficient clarity about requirements is both a test result and a project risk, and awareness of them may be crucially important project information. (If the testers aren’t clear on the requirements, are the programmers? And if they’re not, how is it that they’re building the product?) Finally, if there is uncertainty about requirements, one great way around that problem is to start testing and reporting on what you find. “Insufficient requirements” may be a problem for testing—but it’s also precisely a problem that testing can help to solve.
Too often testing is focused on getting the right answers, rather than asking worthwhile questions and helping to get them answered. There are at least two overarching questions that a tester must ask. While looking directly at the product, at its customers, at the project, at the business, and at the relationships between them, the tester’s first question is “Is there a problem here?” When the tester observes anything that looks like it might be a problem, the tester’s next question is for the testing client and for the rest of the project community: “Are we okay with this?”