Blog

How Should Your CMT Lab Store Test Data?

The way a CMT lab stores data has a direct impact on the accuracy of its test results and, ultimately, its reputation. In this post, we highlight four approaches to data storage and how each one affects the construction materials testing environment.

One of the most important things a lab must be able to do is retrieve and reference past test results. If it can’t do this, it won’t be able to do equally important things like pass an audit, analyze trends or evaluate results bias. And if it can’t do those things, it can’t be in business.

This is what good data storage and documentation control are all about.

Now, all construction materials testing labs know how critical data storage is. Results must be retained and retrievable for at least three years for auditors. Because of that, you’d think there would be one standard approach to managing it. But in reality, there are four.

  • Filing Cabinets

Results are printed, divided into sets, organized by job and filed into cabinets. If a lab breaks 500 cylinders in a day, they will need to track 500 pieces of data and file each break into the appropriate sample report. This manual data entry and physical storage process is extremely vulnerable to errors and data loss.

  • Flat Files on a Server

Results are typed into a computer application – like Excel – and stored in digital flat files on a shared server. There’s less paper involved but labs still have to manually add test data to the system, as well as move documents throughout the entire process.

  • Relational Database

Test data is entered into a structured database, such as a homegrown Access file. Data points can be identified, organized and accessed in relation to other pieces of data. But, again, the workflow is still burdened by manual data entry.

  • Integrated Platform

Data entry is truly automated. Results are synced and can move freely between multiple integrated systems such as the database, testing machine, LIMS and more.

Four different methods. Do they produce equal outcomes? Of course not. In fact, each one represents an incremental improvement over the one preceding it. To understand why, let’s break down how each storage method works in the lab.

Onboarding Samples

In a filing cabinet system, technicians will fill out a stack of papers with the project, location, date, time and other key pieces of information about the sample. Then, they'll copy that information across seven pieces of paper, which will be sent to:

  1. The engineer
  2. The wet concrete test results folder
  3. The 7-day test folder
  4. The 14-day test folder
  5. Three 28-day test folders

Every time tests are conducted, the technician has to return to the folder, retrieve the file, record the results and push the paper to the next step. And after the three 28-day tests, someone has to pull the test files from their respective folders and compile all of the results into a final sample report.

If you use a server to store data, every time you run a test, it will publish results in a flat file format. You then have to manually file the results into the set, sample and project folders where they belong. While a server is more secure, it doesn't address the tedium of the process. The person managing or accessing the data still has to know exactly what your hierarchy is in order to find the right results. And any further reporting would require someone to manually recompile data into a presentable form. It is, essentially, a digital version of the filing cabinet process.

At this stage, a relational database provides some value. For example, when you run a test and add results to a sample report, every other file where that data is relevant will automatically update, so you never have to question whether results are up to date or dig through a hierarchy of files to make changes. However, the problem of having to manually copy and paste results from the test machine to the database still exists.
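To make the relational idea concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table and column names are purely illustrative, not a real lab schema: because each result is stored once and joined to its sample by ID, every report that queries the sample sees the same current data, with nothing copied between files.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE samples (
    sample_id    INTEGER PRIMARY KEY,
    project      TEXT,
    mix_design   TEXT,
    cast_date    TEXT
);
CREATE TABLE test_results (
    result_id    INTEGER PRIMARY KEY,
    sample_id    INTEGER REFERENCES samples(sample_id),
    age_days     INTEGER,
    strength_psi REAL
);
""")

# One sample, tested at multiple ages.
conn.execute("INSERT INTO samples VALUES (1, 'Bridge 42', 'MD-1', '2024-03-01')")
conn.execute("INSERT INTO test_results (sample_id, age_days, strength_psi) VALUES (1, 7, 3200)")
conn.execute("INSERT INTO test_results (sample_id, age_days, strength_psi) VALUES (1, 28, 4650)")

# Every query that joins on sample_id sees the same, current results --
# there are no duplicate copies to keep in sync.
rows = conn.execute("""
    SELECT s.project, r.age_days, r.strength_psi
    FROM samples s JOIN test_results r ON r.sample_id = s.sample_id
    WHERE s.sample_id = 1
    ORDER BY r.age_days
""").fetchall()
print(rows)
```

The key contrast with flat files: adding a new 28-day break is a single INSERT, and any report built on this query reflects it immediately.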

With an integrated platform, tests are assigned to a unique ID number. After a test runs, results are automatically sent to a cloud database (and other systems of record connected to it) and organized relationally. Results are then accessible and searchable any time, forever.

Testing Machine Functionality

Many testing machines have barcode functionality. In a filing cabinet or server system, the machine reads the barcode and records that ID number. This creates a flat file of results with an ID as its file name. But the file name is just a number and does not contain any other information on the specimen.

If you have a relational Access database, the specimen ID number can be put into context. After the results are generated and transferred to the database, you can see the test results in relation to other aspects of the sample. But, again, this isn’t an automatic process. The machine creates a separate file that you have to integrate into your existing Access database. Plus, it doesn’t tell the machine anything about the specimen before the test – it only provides the context after results are generated.

An integrated platform can actually make the testing machine “smart”. If the machine is preloaded with key specimen data points like sample date, expected strength, dimensions and weight, it can scan a specimen ID and identify the contextual information associated with it. So, it “knows” what it’s testing. When the test is complete, results tied to the ID are automatically synced to the searchable database.
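The "smart machine" workflow above can be sketched in a few lines of Python. Everything here is hypothetical, including the field names and the in-memory "database"; a real platform would sync to a cloud system of record. The point is that a barcode scan resolves to full specimen context before the test runs, and the result is stored against that same unique ID.

```python
# Hypothetical preloaded specimen metadata, keyed by the ID on the barcode.
preloaded_specimens = {
    "SPC-00123": {
        "project": "Bridge 42",
        "cast_date": "2024-03-01",
        "expected_strength_psi": 4000,
        "diameter_in": 4.0,
        "weight_lb": 8.2,
    },
}

results_db = {}  # stands in for the cloud database results sync to


def scan_and_test(specimen_id: str, measured_strength_psi: float) -> dict:
    """Resolve the specimen's context by ID, then record the result against it."""
    context = preloaded_specimens[specimen_id]  # the machine "knows" what it's testing
    record = {
        **context,
        "specimen_id": specimen_id,
        "strength_psi": measured_strength_psi,
        "passed": measured_strength_psi >= context["expected_strength_psi"],
    }
    results_db[specimen_id] = record  # automatic sync, tied to the unique ID
    return record


record = scan_and_test("SPC-00123", 4650.0)
print(record["passed"])
```

Contrast this with the flat-file case, where the machine emits a file named only with the ID number and someone has to marry it to the specimen's context by hand.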

Data Analysis

To generate a sample report in a filing cabinet or server system, a technician must pull results and add them to one common document. For example, let's say you want to run a mix design analysis. You likely have hundreds of documents on hand that you have collected, sorted and filed. To run the analysis, you have to retrieve and sort through these documents, looking for every relevant test. You can't just say, "give me every test I ran on mix design one."

Equipped with an integrated platform, you can say that. And once you have instant access to all of the tests, you can run the analysis automatically. While a homegrown Access database will help you see every test in context, streamlining analysis efforts, it’s not instant, and you typically can’t run the analysis at the click of a button.
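"Give me every test I ran on mix design one" really is a one-line request once results live in a queryable store. Here is an illustrative sketch with sqlite3, assuming a simplified schema where each result row carries its mix design; the names are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tests (
    specimen_id TEXT, mix_design TEXT, age_days INTEGER, strength_psi REAL)""")
conn.executemany(
    "INSERT INTO tests VALUES (?, ?, ?, ?)",
    [("SPC-1", "MD-1", 7, 3100),
     ("SPC-2", "MD-1", 28, 4700),
     ("SPC-3", "MD-2", 28, 5050)],
)

# One declarative request replaces hours of pulling and sorting documents.
md1_tests = conn.execute(
    "SELECT specimen_id, age_days, strength_psi FROM tests WHERE mix_design = ?",
    ("MD-1",),
).fetchall()
print(md1_tests)
```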

Data Sharing

Projects with multiple stakeholders on different platforms require an efficient way to share data – but without having to share everything. Unfortunately, sharing everything is the only option with flat file systems and even a robust relational database. At the very least, you will need a person to manually divvy out information to the appropriate parties.

Integrated platforms can provide multiple stakeholders with filtered connectivity to test information through an API. Everyone can access the platform and get what they need, without requiring remote access into your LIMS or ERP.
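The "filtered connectivity" idea can be illustrated with a small access-policy sketch. The roles, projects and fields below are invented for the example, and a real platform would enforce this server-side behind its API; the point is that each stakeholder receives only the rows and fields they are entitled to, rather than a dump of everything.

```python
ALL_RESULTS = [
    {"project": "Bridge 42", "specimen_id": "SPC-1",
     "strength_psi": 4650, "technician": "J. Doe"},
    {"project": "Plant 7", "specimen_id": "SPC-2",
     "strength_psi": 5050, "technician": "A. Lee"},
]

# Hypothetical access policy: which projects and fields each role may see.
POLICY = {
    "engineer": {
        "projects": {"Bridge 42"},
        "fields": {"project", "specimen_id", "strength_psi"},
    },
}


def get_results(role: str) -> list[dict]:
    """Return only the rows and fields the caller's role is permitted to see."""
    policy = POLICY[role]
    return [
        {k: v for k, v in row.items() if k in policy["fields"]}
        for row in ALL_RESULTS
        if row["project"] in policy["projects"]
    ]


print(get_results("engineer"))
```

Here the engineer sees Bridge 42 results without technician names and without any Plant 7 data – no remote access to the lab's LIMS or ERP required.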

Conclusion

An integrated CMT platform is the clear winner here. It cleans up field and lab test results, upgrades testing machine functionality, unlocks powerful data analysis and allows for transparent data sharing. Best of all, technicians don't have to spend time transcribing numbers, matching results with specimens, deciphering handwriting or searching through filing cabinets for missing data. This means they're not contending with the potential for human error. Plus, it's simple to set up – unlike a relational database that requires a robust initial build-out and is heavily reliant on accurate manual data entry.

If you’re still using a filing cabinet, server system or homegrown Access database, that’s okay. But don’t ignore the effect it’s having on your CMT environment.

Learn more about the data storage benefits of an integrated CMT platform. See how.