During a training session on the fundamentals of testing, a participant asks me the following question: what does software testing mean for “Data”?
Let’s start with a simple case study: investigating a movie dataset. Through it, I highlight some of the test types and levels that apply to “Data.” Indeed, the term “Data” is generic and can have several meanings. Moreover, we test to reduce the risk of software failure in operation, in a given context and for specific use cases. Therefore, clarifying testing for Data requires a well-defined use case.
Investigating the dataset
Using the data analysis process, I investigate a movie dataset from TMDb with Python. It contains information about ten thousand movies. I am looking for answers to business questions such as: what kinds of properties are associated with high-revenue movies? Do unpopular movies have common properties? The dataset is a CSV file. I acquire, load, transform, and report on it via a Jupyter Notebook, where I write and run Python scripts. All of that represents the system to test. Be aware that Jupyter Notebook itself – as software – is an existing subsystem, out of scope for testing. Thus, my approach to answering the business questions is exploratory analysis in the Notebook, following a data processing and transformation phase.
To understand the meaning of testing here, let’s view this system through different working domains:
- Data at the source,
- Loading and transformation,
- Data visualization.
The data at the source
The CSV dataset is a crucial part of the system under test. Indeed, its quality impacts the relevance of the analysis results, so that quality is often specified as a customer requirement. Thus, checking the dataset’s quality is part of the functional tests. I do this, for example, in the “Data Assessing and Inspection” part of the Notebook.
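Such a quality check can be written directly as a notebook cell. Here is a minimal sketch: the function, the column names (`id`, `revenue`, `release_year`), and the checks themselves are illustrative assumptions, not TMDb’s actual schema.

```python
import pandas as pd

# Hypothetical data-quality checks for a movie dataset; the column
# names and rules are assumptions for illustration, not TMDb's schema.
def assess_movie_dataset(df: pd.DataFrame) -> list:
    """Return a list of data-quality findings (empty means all checks pass)."""
    findings = []
    if df["id"].duplicated().any():
        findings.append("duplicate movie ids")
    if df["revenue"].lt(0).any():
        findings.append("negative revenue values")
    if df["release_year"].isna().any():
        findings.append("missing release years")
    return findings

movies = pd.DataFrame({
    "id": [1, 2, 2],
    "revenue": [1_000_000, -5, 250_000],
    "release_year": [1999, 2005, None],
})
print(assess_movie_dataset(movies))
```

Each finding maps back to a requirement on the source data, which is what makes this a functional test rather than a mere sanity check.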
Data loading and transformation
One perspective on the system would show:
- A source system: the one presented above, but without the Python scripts, the transformation results, and the visualizations.
- A target system: still the same system, but this time with all the Python scripts and results – including the visualizations.
For “Data”, integration tests usually apply to the data flow from the source system to the target system. Here, we create the target system simply by running the Python script blocks written in the Notebook, so those integration tests are not relevant. However, the Notebook’s Python scripts sometimes call external software components (via a web service, for example) – say, for specific data processing or for access to external data sources. In that case, we perform integration tests between the two components as usual, for example with the web service as the interface under test.
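Such an interface-level test can be kept deterministic by stubbing the remote side. The sketch below assumes a hypothetical helper that enriches a movie record through an external web service; the URL and payload shape are invented for illustration.

```python
import json
import urllib.request
from unittest import mock

# Hypothetical helper: the Notebook fetches a movie rating from an
# external web service. URL and payload shape are assumptions.
def fetch_movie_rating(movie_id: int) -> float:
    url = f"https://example.invalid/ratings/{movie_id}"
    with urllib.request.urlopen(url) as resp:
        payload = json.loads(resp.read().decode("utf-8"))
    return payload["rating"]

# Integration-style test of the interface contract, with the remote
# component replaced by a stub so the test is repeatable offline.
def test_fetch_movie_rating():
    fake_resp = mock.MagicMock()
    fake_resp.read.return_value = json.dumps({"rating": 7.8}).encode("utf-8")
    fake_resp.__enter__.return_value = fake_resp
    with mock.patch("urllib.request.urlopen", return_value=fake_resp):
        assert fetch_movie_rating(42) == 7.8

test_fetch_movie_rating()
print("interface contract test passed")
```

Against the real service, the same test would exercise the actual data flow between the two components.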
From another point of view, the data transformation could be covered by component tests (or unit tests). Individual components – blocks of Python scripts – are developed and executed in a Notebook for the data transformation, and the development team creates the tests for those components. For example, I use “assert” blocks in this “Identify Customer Segments” Notebook (using unsupervised learning techniques to identify segments in a population).
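A component test of a transformation block can look like the following sketch. The transformation (`add_profit_columns`), the column names, and the 100M revenue threshold are assumptions for illustration:

```python
import pandas as pd

# Hypothetical transformation step: compute profit and flag
# high-revenue movies. Columns and threshold are illustrative.
def add_profit_columns(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["profit"] = out["revenue"] - out["budget"]
    out["high_revenue"] = out["revenue"] >= 100_000_000
    return out

# Notebook-style "assert" blocks acting as the component test.
sample = pd.DataFrame({"revenue": [150_000_000, 20_000_000],
                       "budget": [50_000_000, 30_000_000]})
result = add_profit_columns(sample)

assert list(result["profit"]) == [100_000_000, -10_000_000]
assert list(result["high_revenue"]) == [True, False]
```

Running the cell re-runs the test, so a regression in the transformation fails fast, before any downstream analysis.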
I perform a system test when I run the entire Notebook. Indeed, the system goes from the raw data – through the different transformations – to presenting the answers to the initial business questions.
For significant data processing, one can have a separate test team to validate the functional data transformation rules at the system test level.
As a developer or tester, I do one or more successive runs of the entire Notebook with a client who is waiting for the results of the data analysis. We would then be in an acceptance test situation: the client validates the results and answers obtained – usually via visualizations – against the requirements gathered beforehand.
Data visualization
For data visualization, the tests consist of observing or interacting with the created visual elements (graphs or dashboards) as an end user would. We run these “reporting” tests against the requirement specifications for the visualizations; we would then be doing functional tests, first as system tests and then as acceptance tests. Checking non-interactive visualizations against requirement specifications is a review – in other words, static testing.
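Parts of these reporting checks can even be automated by inspecting the chart object itself. A minimal sketch, assuming a hypothetical revenue-by-year bar chart and an invented specification for its title and axis labels:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: render without a display
import matplotlib.pyplot as plt

# Hypothetical reporting step producing a bar chart.
def revenue_by_year_chart(years, revenues):
    fig, ax = plt.subplots()
    ax.bar(years, revenues)
    ax.set_title("Revenue by release year")
    ax.set_xlabel("Release year")
    ax.set_ylabel("Revenue (USD)")
    return fig

fig = revenue_by_year_chart([2000, 2001], [1.2e8, 9.5e7])
ax = fig.axes[0]

# "Reporting" checks against the (assumed) visualization specification.
assert ax.get_title() == "Revenue by release year"
assert ax.get_xlabel() == "Release year"
assert len(ax.patches) == 2  # one bar per year
```

Interactive dashboards would still need a human (or a UI automation tool) for the end-user perspective; this only covers what the chart object exposes.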
To all this, I would add a few ideas for non-functional tests we may see when it comes to “Data”:
- Performance tests: using a load testing tool to generate multiple reports and visualizations simultaneously. These tests help check the generation lead time of visualizations and detect memory leaks.
- Security tests: in some situations, transformations or visualizations may only be generated after specific processing that protects the source data, also known as anonymization. The tester ensures – based on specifications or not – that the anonymization holds.
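The performance idea above can be sketched without a full load testing tool by firing concurrent report generations and checking the lead time. Everything here – the `generate_report` stand-in, the concurrency level, and the 5-second budget – is an assumption for illustration:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the real reporting step; the workload is a placeholder.
def generate_report(movie_count: int) -> dict:
    total = sum(range(movie_count))
    return {"movies": movie_count, "revenue_sum": total}

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    reports = list(pool.map(generate_report, [10_000] * 8))
elapsed = time.perf_counter() - start

assert len(reports) == 8
assert elapsed < 5.0  # lead-time budget (an assumed requirement)
print(f"8 reports in {elapsed:.3f}s")
```

A real load test would use a dedicated tool and also watch memory over long runs, which a one-shot script like this cannot show.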
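Likewise, the security check can be made concrete by asserting on the output of an anonymization step. The step below (dropping identifiers and salting-then-hashing a user id) and the column names are illustrative assumptions, not a reference to the toolbox linked in the notes:

```python
import hashlib
import pandas as pd

SALT = "demo-salt"  # in practice, kept secret and out of the notebook

# Hypothetical anonymization: drop direct identifiers and replace the
# user id with a truncated salted hash before any transformation.
def anonymize(df: pd.DataFrame) -> pd.DataFrame:
    out = df.drop(columns=["name", "email"])
    out["user_id"] = out["user_id"].map(
        lambda v: hashlib.sha256(f"{SALT}{v}".encode()).hexdigest()[:16]
    )
    return out

raw = pd.DataFrame({
    "user_id": [101, 102],
    "name": ["Alice", "Bob"],
    "email": ["a@example.com", "b@example.com"],
    "rating": [4.5, 3.0],
})
safe = anonymize(raw)

# Security checks: no direct identifiers survive, analysis data stays usable.
assert {"name", "email"}.isdisjoint(safe.columns)
assert safe["rating"].tolist() == [4.5, 3.0]
assert not safe["user_id"].isin(raw["user_id"]).any()
```

The tester’s job is exactly these last three lines: confirming that protected fields cannot leak into the transformations or visualizations downstream.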
Testing for Data differs from an ordinary testing project on several points:
- Mastery of Data-specific terminology
- A good perspective on the object under test
- Knowledge of working domains such as data visualization and transformation
The rising priority of data calls for cutting-edge quality techniques and practices in both supplying and using that data. That’s why more and more professionals are learning software quality for data.
Contact us for more about mastering quality for Data and Data Analytics projects.
Notes:
- The test types: Functional Testing, Non-Functional Testing, White-box Testing, and Change-Related Testing.
- The test levels: Component Testing, Integration Testing, System Testing, and Acceptance Testing.
- On running a Jupyter Notebook: https://docs.jupyter.org/en/latest/running.html
- A specific tool for anonymization: https://www.ihsn.org/software/disclosure-control-toolbox