
What is Testing?

I figured for the first post on this blog, I’d start with a deceptively simple question. “What is testing?” sounds easy enough on the surface, but the answer relies largely on intuitive understanding. I’ve worked in and studied software testing for a few years now, and I commonly see even experienced testers fail to give a solid answer. I find it fascinating that so many people can work in a vital part of the software development cycle and yet be unable to define it; those who can give a definition often contradict their peers. So perhaps it’s foolish of me to try to define it myself, but I think at least discussing it brings me closer to understanding, and surely understanding testing is a step towards mastery.

Testing is, simply put, an effort to measure something against specific attributes of quality. You'll notice this definition is vague almost to the point of uselessness, and I think that's an important feature of testing. Without defining your quality attributes and your means of measuring them, testing is useless. This is something I find lacking in a lot of newer software testers: the recognition that they first need to articulate what they're trying to measure. "Make sure the software is correct" seems like a simple enough task, but what is correct? How do you know something is correct? These questions require at least some thought, and ideally those thoughts should be documented.

For some software, the bar for one quality attribute might be much lower, usually in exchange for a higher bar on another attribute. Maybe you accept weak performance because your security needs to be top notch. Maybe the functionality itself can be lacking because you have a pressing need for availability (i.e., the product has to ship now). Different areas of the same software can have different quality goals too. I plan to write a longer blog post on this topic later, but for now it's sufficient to say that the first step of testing is identifying these goals.

This is all very high-concept, so I'd like to bring it down to earth with some example software. Let's take the classic To-Do app, every software developer's starting project. The software is simple: you can add items to the list and remove them, and they get persisted in a database and displayed on the screen. What do we suppose our key quality attributes are here? In an ideal world we would say "It's all important! Test it all!", but good luck finding a company that will fund that effort. So, what matters most?

  • Functionality
    • Almost¹ always one of the most important quality attributes. When most people think of testing, functional testing is the first thing that comes to mind. It can be further broken down into correctness, capability, security, reusability, etc., but that's a topic for another day.
  • Accessibility
    • This one is often important, but for a small, simple app we likely don't need a huge amount of effort here.
  • Performance
    • This is a hard attribute to lock down; it really depends on our expected load and uptime requirements. If we expect millions of users with 99.999% uptime, this becomes a huge priority. Otherwise, it's probably pretty low. In our example, we probably won't spend long on this attribute.
  • Compatibility
    • This one is easy for us: we're in the browser and we aren't doing anything fancy, so we should get this nearly for free. Just some basic checks to ensure we aren't using JS features that aren't supported in all major browsers. This gets much harder with standalone software, or embedded software 😱.

This is not an exhaustive list, nor a full accounting of the reasoning behind each attribute. It's simply a handful of attributes that I think will be useful for demonstration.


Once we have our quality attributes defined, we can start to assess how to actually test the software. Each quality attribute is different, and each may need its own approach. In general, we have two categories of testing approach: automated testing and manual testing. I won't go into detail about those in this post (each is a deep topic in its own right); I'll simply use them to illustrate the point.

Let's take our quality attributes again, along with how much effort we expect to spend on them:

  • Functionality
    • Effort: high
    • Style: automated
      • We can cover this easily with unit and integration tests (see the sketch just after this list). Some E2E workflows might be overkill for an app of this size, but they can give us benefits in other areas.
  • Accessibility
    • Effort: medium
    • Style: manual
      • Automated accessibility testing is certainly possible, but it's complex and inconsistent. For an app of this size, manual passes should be sufficient.
  • Performance
    • Effort: low
    • Style: automated and manual
      • To assess behavior under heavy load we will almost certainly need some automation; however, we can also measure many browser performance indicators manually with Google Lighthouse.
  • Compatibility
    • Effort: low or medium
    • Style: automated or manual
      • If we created functional E2E tests, we can reuse them across different browsers to verify compatibility (a config sketch follows below). If not, we can do this manually; the software is small enough to test in all major browsers in an hour or two.
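
To make the "functional, automated" row concrete, here's a minimal sketch of what one of those unit tests could look like. Everything here is hypothetical: `TodoList` is a made-up in-memory stand-in for our app's model, and I'm assuming Vitest as the test runner (any xUnit-style framework reads much the same).

```ts
// todo.test.ts: a sketch, not a real implementation.
// TodoList is a hypothetical in-memory model; Vitest is an assumed test runner.
import { describe, it, expect } from "vitest";

class TodoList {
  private items: string[] = [];

  add(item: string): void {
    this.items.push(item);
  }

  remove(item: string): void {
    this.items = this.items.filter((i) => i !== item);
  }

  all(): string[] {
    return [...this.items]; // return a copy so callers can't mutate state
  }
}

describe("TodoList", () => {
  it("adds items to the list", () => {
    const list = new TodoList();
    list.add("buy milk");
    expect(list.all()).toEqual(["buy milk"]);
  });

  it("removes items from the list", () => {
    const list = new TodoList();
    list.add("buy milk");
    list.remove("buy milk");
    expect(list.all()).toEqual([]);
  });
});
```

Notice that each test encodes one specific, answerable version of "is it correct?", which is exactly the articulation I was asking for earlier.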
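
Similarly for the compatibility row: if the E2E tests happen to be written with a cross-browser tool (I'm assuming Playwright here purely for illustration), rerunning the same suite against each engine is mostly configuration:

```ts
// playwright.config.ts: a sketch assuming Playwright is our E2E tool.
// The whole test suite runs once per project, i.e. once per browser engine.
import { defineConfig, devices } from "@playwright/test";

export default defineConfig({
  projects: [
    { name: "chromium", use: { ...devices["Desktop Chrome"] } },
    { name: "firefox", use: { ...devices["Desktop Firefox"] } },
    { name: "webkit", use: { ...devices["Desktop Safari"] } },
  ],
});
```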

That's what testing is! Of course, there's still the implementation to go (which I will cover in another post), but once you've gotten here you've built an important foundation: this exercise helps guide testing for the entire duration of the project. The plan might change as business needs evolve, but only by keeping a documented record of how your quality attributes have shifted can you adequately assess the impact of those changes.
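
As for what that documented record looks like: whatever your team will actually keep up to date. As one hypothetical sketch (the file name and shape are invented for illustration), you could version it right alongside the code, so that a diff to this file is a visible shift in the quality bar:

```ts
// quality-goals.ts: a hypothetical, versioned record of our quality priorities.
// Reviewing changes to this file is reviewing how the quality bar has moved.
export const qualityGoals = {
  functionality: { effort: "high", style: "automated" },
  accessibility: { effort: "medium", style: "manual" },
  performance: { effort: "low", style: "automated and manual" },
  compatibility: { effort: "low to medium", style: "automated or manual" },
} as const;
```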

This is also not specific to software! Although I've used software as my example, the process fundamentally works for anything you need to test. As long as you can identify what quality means (i.e., what your quality attributes are), you can - and should - create tests to verify it.


¹ - I have been on several projects where functional quality gets hamstrung so the software can ship sooner. It's a very common occurrence, and I prefer to see it as sacrificing functionality for another attribute (often stability, demonstrability, or affordability).