
Are Your CMT Results Biased? Top 4 Factors

Construction materials testing data is only as accurate and reliable as the workflow that produces it. But what if your workflow is biased? How would you know? The answer depends on how you manage these four factors.

ACI 214R-11 identifies eight principal causes, or sources, of variation in strength testing. In other words, the elements that can add bias to your CMT workflow. They include fluctuations in the characteristics and proportions of ingredients, concrete temperature and curing, fabrication techniques and specimen testing.

When you boil everything down, these causes fall into four categories: operators, machines, processes and documentation control.

Above all else, these four factors affect the “correctness” of your test performance and, thus, the quality of your results. So, if you want to make sure your results aren’t biased, you first need to ensure these four factors are managed the right way.

Here’s what that looks like.

#1. Operators

The chief factor in test performance comes down to the people prepping, running, recording and analyzing tests. People make mistakes and unknowingly add bias. Natural inconsistencies exist between the way one technician operates a machine and the way another one does it.

To help mitigate these weak points, ASTM International standards mandate that labs keep records on testing personnel, ensuring that proper work experience, education and on-the-job training requirements are met. But despite all this, most of the time five different technicians mean five different tests.

One technician might be more careful when prepping a specimen. One might change pad caps religiously while another doesn’t. One selects the right cap, one selects the wrong one. One sees a high edge and cuts it. Another one doesn’t.

Or consider tests run on a manual compression machine. Manual machines have levers: full advance, metered advance and hold. If a technician full-advances too far, the specimen is preloaded too much, violating the test spec. Then there’s the metering valve, which the technician must adjust to hold the load rate as the test runs. Often, technicians will monitor the metering valve for the first few tests of the day, get as close as they can to the right load rate and then never touch it again.

Because they’re busy, many operators tend to let manual tests run on “autopilot” like this, even though the process isn’t automatic and requires human involvement.

Conclusion

Operators are the top influencer of test results. A workflow loaded with manual tasks puts your results at risk. Reduce errors and variance with more automation.

#2. Machines

The second biggest factor is the type of machine used to run the tests. A manual machine and an automatic machine will produce two very different tests. If your lab relies on the former, there are a few characteristics impacting test performance that must be carefully controlled.

Manual machines open the door to bias from operators. As we mentioned above, operators like to set the manual load rate and forget it for the rest of the day. Setting and forgetting is poor practice on the technician’s part, but manual machines make it easy to get away with it.

Manual machines also run hot. Once a technician hits the ‘on’ switch, the pump operates at full speed all day. This means the machine continuously pumps all of its oil, dumping the excess over a relief valve. The resulting friction heats up the oil, and the higher temperature reduces viscosity. This leads to poorer load rate control and inconsistent tests (not to mention shorter component life).

This doesn’t happen on an automatic machine. An automatic machine builds the load slowly and automatically detects and adjusts the rate as needed. This is because its pump only runs as fast as is necessary to complete the work – there’s no dumping over a relief valve at any point. Also, the motor on an automatic machine typically runs at 10 percent of full speed, which means pumps run cooler and smoother, and live a longer life.
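The “detects and adjusts” behavior described above is closed-loop control: the machine compares the measured load rate against the target and nudges pump speed accordingly. Here’s a quick sketch of the idea. The function name, the simple proportional correction and the numbers are illustrative assumptions, not any particular machine’s control algorithm.

```python
# Illustrative closed-loop load-rate control (assumed names and gain;
# not a real machine's algorithm).

def adjust_pump_speed(current_rate_psi_s, target_rate_psi_s, pump_speed, gain=0.05):
    """Nudge pump speed (0.0-1.0) toward the target load rate in psi/s."""
    error = target_rate_psi_s - current_rate_psi_s
    # Proportional correction, clamped to the pump's working range.
    new_speed = pump_speed + gain * error
    return max(0.0, min(1.0, new_speed))

speed = 0.10  # e.g. motor starting near 10 percent of full speed
for measured in [28.0, 31.0, 34.5, 35.2]:  # simulated rate readings
    speed = adjust_pump_speed(measured, 35.0, speed)
```

The key contrast with a manual machine is that this loop runs continuously for every test, so the rate is corrected whether or not a technician is watching.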

Conclusion

If you want the time-saving benefits of a ‘set it and forget it’ approach, without the impact on results, get an automatic machine.

#3. Processes

Processes, from the way specimens are prepped to the way data is analyzed and stored, affect the variance and quality of testing. All are guided by the policies a given lab follows to mitigate the kind of issues we’ve covered so far.

For example, when a technician puts concrete in a mold, the bottom is as close as possible to level. The top, which is uncapped, is not. Labs should have a policy that lays out the proper procedure for correcting the imperfection. Many labs strike off the excess material to even it out. Others rely on pad caps to fill in the missing piece. But pad caps can handle only minor imperfections, not severe ones.

Caps also wear out over time. ASTM permits pad caps only for a certain number of tests and a certain strength range; push past those limits and the caps degrade and stop performing as intended. Labs that have policies for inspecting and disposing of pad caps are able to assure proper test performance.
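A policy like that can be reduced to a simple check before each test. This is only a sketch: the 100-use ceiling is a placeholder, and your limits should come from the governing ASTM standard for your cap’s durometer and strength range.

```python
# Illustrative pad-cap check (the 100-use limit is a placeholder, not
# a quoted ASTM value; look up the limit for your cap and strength range).

MAX_USES = 100  # assumed reuse ceiling for illustration

def cap_ok(use_count, expected_strength_psi, rated_min_psi, rated_max_psi):
    """Flag caps past their reuse limit or outside their rated strength range."""
    within_rating = rated_min_psi <= expected_strength_psi <= rated_max_psi
    return use_count < MAX_USES and within_rating
```

A lab that logs cap usage can run this check automatically instead of relying on each technician to remember.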

Then you have the processes relating to results recording and data transfer. Handwritten results and manual data entry are susceptible to errors. Specimen barcodes, and machines that read those barcodes to automatically connect specimens with test results in a database, improve data accuracy. That automation also creates a consistent, repeatable testing process that lends itself to better test performance.
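The barcode workflow boils down to one key: the specimen ID links the field record to the machine result, with no retyping in between. Here’s a minimal sketch using an in-memory database; the table and field names are illustrative assumptions, not a real system’s schema.

```python
# Illustrative barcode-to-result linking (assumed schema, not a real
# lab system's database design).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE specimens (barcode TEXT PRIMARY KEY, project TEXT)")
db.execute("""CREATE TABLE results (
    barcode TEXT REFERENCES specimens(barcode),
    strength_psi REAL,
    tested_at TEXT)""")

# Field side: the specimen is registered once, with its barcode.
db.execute("INSERT INTO specimens VALUES ('SPC-0001', 'Bridge 12')")

def record_result(barcode, strength_psi, tested_at):
    """Machine side: attach a test result to the scanned specimen."""
    db.execute("INSERT INTO results VALUES (?, ?, ?)",
               (barcode, strength_psi, tested_at))

record_result('SPC-0001', 4350.0, '2024-05-01T09:30')
row = db.execute("""SELECT s.project, r.strength_psi FROM results r
                    JOIN specimens s USING (barcode)""").fetchone()
```

Because the machine writes the result against the scanned barcode, there is no point in the chain where a technician can transpose digits or attach a result to the wrong specimen.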

Conclusion

Establish policies that put trust in your process, not your people.

#4. Document Control

How do you acquire, organize and store test data?

Whether you record results by hand or by keyboard, you still have to file away the data somewhere. Many labs are required to print out the results of every test. If you’re breaking 500 cylinders a day, that’s 500 pieces of paper. Paper that you have to divide into sets, organize by job and file into cabinets. Paper that you have to retain for at least three years (per C1077-17, 9.6). Paper you have to sort through when an audit or inspection pops up.

Documentation control is critical to your ability to pass an inspection and also analyze long-term trends. You can’t determine true test performance if you can’t (easily) retrieve and reference past results. Integrated databases offer a way to store and recall data instantly, without the manual burden. If you have to prove test results from a year ago, you can access what you need in seconds.

And there’s another level of documentation that’s typically ignored. Unless you’re printing every test as described above, there’s a good chance you’re not saving the data required to generate the X-Y graph of a test.

This is important because if there’s ever a question about test results, that question typically starts with the numbers that are stored – the specimen stress or strength. If the strength is in question, and all you have is the strength value, how do you prove that the value is correct? The easiest way to do this is to show the X-Y graph of load or stress versus time. That graph is visual proof of how the test was performed, and how the specimen behaved during testing.

Most systems can display this data during and immediately after a single test, and some store it for a limited time, but almost none can easily produce it six months or even six days later.
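Keeping that graph reproducible mostly means saving the raw points, not just the peak value. Here’s a sketch of the idea; the JSON layout and function names are illustrative assumptions, not any particular system’s storage format.

```python
# Illustrative persistence of the load-vs-time curve alongside the
# headline strength value (assumed file layout, for illustration only).
import json

def save_test_record(path, specimen_id, strength_psi, samples):
    """samples: list of (elapsed_seconds, load_lbf) pairs captured during the test."""
    record = {
        "specimen": specimen_id,
        "strength_psi": strength_psi,
        "curve": [{"t": t, "load": load} for t, load in samples],
    }
    with open(path, "w") as f:
        json.dump(record, f)

def load_curve(path):
    """Recover the raw points needed to redraw the X-Y graph later."""
    with open(path) as f:
        record = json.load(f)
    return [(pt["t"], pt["load"]) for pt in record["curve"]]
```

Stored this way, the curve behind any strength value can be pulled up and replotted six months later, not just in the minutes after the break.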

Conclusion

Lose the filing cabinets and store verifiable test data forever in a connected database.

Overall Conclusion

Accurate, unbiased results demand a truly automated testing workflow. In this kind of environment, operators need not intervene in the middle of the process. Machines live longer and produce repeatable, consistent results. Processes don’t need to be questioned, altered or ignored. Document control is suddenly more manageable and thorough.

A truly automated testing workflow will drastically enhance test performance – and ForneyVault® can get you there. Get in touch today or request a demo to learn more.
