I have not failed. I’ve just found 10,000 ways that won’t work.
Thomas A. Edison

Three things cannot be long hidden: the sun, the moon, and the truth.

The only true wisdom is in knowing you know nothing.

Let me first tell you what negative testing means to me. It is any kind of test execution activity based on improper or inconvenient scenarios. It may show that a system is working properly or not, so from this perspective it is no different from any other test activity, but the actions the tester takes are negatively driven.

I observe some misinterpretations of the definition of negative testing. Surely, I respect every opinion, but the negativity of this activity does not come from an aim of showing that a system is not functioning; it comes from the way we approach the system. In this case, the testers are negative, not the systems: they enter invalid inputs or execute unexpected user behavior.

Hopefully, we are on common ground now. (If not, you can still read my ideas. At least they prove that your ideas are better!)

Let me move on to the next topic. People say that testers are pessimistic by nature, yet their scenarios are positive by default! How can this happen?

The answer is clear: it is all because of the test basis. To be more precise, the requirements, analysis deliverables, business process diagrams, and functional documents we work from are all written and prepared in a positive manner. This sounds logical, because we do not expect requirements to contain directly negative statements. In other words, requirements generally tell us what a system will do rather than what it won't do.
As a consequence, test scenarios tend to be positive by the very nature of the basis from which we write them. What, then, can we do?

In my work environment, I encourage people to write at least 20–25 percent of their test cases negatively. To be honest, I don't like such arbitrary numbers, but I feel they are necessary to ensure that sufficient attention is given to negative testing.

Let me relate an experience I had:

In a very complex transformation project, we were about to finish functional testing. Lots of bugs were found and fixed, and some modules seemed to be mature enough for user acceptance tests (UATs). Just before we officially announced the UAT phase, some colleagues told me that they had found many defects while they were trying some unwritten negative test scenarios.

Until that moment, we were neither concentrating on nor encouraging any negative testing activity. Just after that, I did some research into negative testing, and at first, I was not thrilled. I said to myself, this cannot be that effective.

When I returned to the office, I asked the guys to show me their work. They showed me some negative scenarios, like entering five characters into a field that requires a minimum of eight, typing letters into numeric fields, concurrently logging into the system from several different browsers and checking the system's authentication/authorization behavior, checking data limitations and attachment size controls, and many other invalid actions.
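To make the shape of such scenarios concrete, here is a minimal sketch in Python. The validator functions and their rules (an eight-character password minimum, a digits-only account-number field) are hypothetical stand-ins for the kinds of checks described above, not the project's actual code. The point is that a negative test case passes when the system rejects the invalid input:

```python
# A minimal sketch of negative test cases (hypothetical validators,
# not the project's actual code). The rules mirror the scenarios above:
# a password field with an eight-character minimum and a digits-only
# account-number field.

def validate_password(value: str) -> bool:
    """Accept only passwords of at least eight characters."""
    return len(value) >= 8

def validate_account_number(value: str) -> bool:
    """Accept only non-empty, purely numeric input."""
    return value.isdigit()

# Positive case: valid input is accepted.
assert validate_password("hunter2!") is True

# Negative cases: each input is deliberately invalid, and the test
# passes only when the system rejects it.
assert validate_password("abcde") is False          # five characters, minimum is eight
assert validate_account_number("12a45") is False    # letters in a numeric field
assert validate_account_number("") is False         # boundary case: empty input
```

Notice that the negative cases assert rejection, not success; each one encodes an expectation about how the system should behave when the user misbehaves.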

Frankly speaking, all the test cases failed! It was really impressive to see such intense negativity in a single run, and what I saw completely changed my mind. From that moment on, I was a fan of negative testing.

And if you wonder what happened to the project, it was postponed for three months…