by Casey Ball

Preparing a resinous flooring system specification for any facility requires some number crunching, and not just on the budget side. Specifiers need to review numerous performance data points to select the optimal flooring system for the given environment. Any ‘accounting’ mistake made during the review may sacrifice performance in the final floor coating selection.
A major specification error could mean years of lost durability, for example. This could easily happen when comparing the Shore D hardness of different resinous flooring systems. Simple logic would predict that the harder the floor, the better its durability. However, harder coatings actually abrade more easily than softer ones. By selecting a system with a lower hardness value, such as a thin-film urethane, a specifier can enhance the flooring’s abrasion resistance and esthetic performance. Of course, the flooring should not be so soft that it cannot meet the demands of its service environment, which may include handling heavy vehicle traffic. The specifier, therefore, needs to select a system striking the right balance to enable long-term performance.
To determine the optimal balance of flooring performance properties for a given service environment, specifiers need to understand how the various data points influencing product selection translate into real-world performance. Data sheet figures can be meaningless if they are not considered in the context of the actual service environment. Additionally, such figures may be misleading, as different labs may produce varied testing results, making it difficult to perform direct product comparisons. Further, certain performance data may lack relevance because the concrete substrate supporting the flooring system likely has lower performance characteristics than the flooring itself. Thus, the concrete will fail at lower applied forces than the flooring system’s ratings, making the data less relevant to actual service conditions.
By understanding testing protocols that influence product data reporting, specifiers will be able to optimize flooring system recommendations for specific environments.
Pay attention to data precision, bias, and variances
When developing facility flooring guidelines, specifiers need to understand the limitations of some testing methods used to generate data for product comparisons, including their precision, bias, and allowable variances. Some ASTM tests and data points enable reasonably accurate comparisons, while others are less useful due to potential discrepancies between labs and the personnel running the tests. This is why ASTM does not intend for its tests to provide numerical comparisons of standalone data. Specifiers should only make direct product performance comparisons when similar systems are tested at the same time, in the same lab, by the same technician.
The overall relevance of using ASTM tests to evaluate physical and chemical characteristics of flooring systems relates to the precision—or repeatability and reproducibility—of each testing method, as well as the resulting acceptable levels of variance. Tests conducted mechanically, such as one using a pneumatic testing device, have lower allowable variances, while tests relying on human factors and subjective observations, such as manual adhesion tests, have greater allowable variances.
Per ASTM, the repeatability of a test “addresses variability between independent test results gathered from within a single laboratory” (intralaboratory testing). Reported repeatability figures measure the maximum difference calculated between multiple tests run on the same piece of equipment, by the same technician, in the same lab. Reproducibility, by contrast, “addresses variability among single test results gathered from different laboratories” (interlaboratory testing). For more information, read Pat Picariello’s “Fact vs. Fiction: The Truth about Precision and Bias,” published in ASTM Standardization News, March 2000.
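To make the distinction concrete, the following minimal sketch uses hypothetical round-robin numbers (not actual ASTM precision statistics) to show how repeatability captures within-lab spread while reproducibility captures between-lab spread.

```python
# Illustrative sketch of the intralab vs. interlab distinction.
# The readings below are made-up round-robin data, for illustration only.

# Three Shore D readings per lab (hypothetical values)
results = {
    "lab_a": [79, 81, 80],
    "lab_b": [72, 74, 73],
    "lab_c": [77, 76, 78],
}

# Repeatability: spread within each lab (same equipment, same technician).
repeatability = {lab: max(vals) - min(vals) for lab, vals in results.items()}

# Reproducibility: spread across labs, here taken as the range of lab means.
lab_means = {lab: sum(vals) / len(vals) for lab, vals in results.items()}
reproducibility = max(lab_means.values()) - min(lab_means.values())

print(repeatability)    # {'lab_a': 2, 'lab_b': 2, 'lab_c': 2}
print(reproducibility)  # 7.0 -> interlab spread is wider than intralab spread
```

Even when each lab’s internal spread is small and identical, the gap between labs can be several times larger, which is why interlaboratory variances are reported separately.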
Labs should perform round robin testing on samples to determine repeatability and reproducibility values. Coatings suppliers can then publish an average, using the highest and lowest results to establish an allowable variance. This variance itself can create some confusion when comparing products because it may be broad. For example, when comparing the Shore D hardness of flooring systems, ASTM D2240-15e1, Standard Test Method for Rubber Property—Durometer Hardness, allows for a 16-unit reproducibility variance between labs. Therefore, a Shore D value of 80 in one lab would be considered equivalent to a Shore D value of 64 reported in a different lab. This is a large gap that may reduce a specifier’s confidence when making a product selection.
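As a simple illustration of how that published variance could be applied when screening data sheets, the sketch below treats two readings as equivalent whenever their difference falls within the reproducibility limit. The function name is illustrative only; the 16-unit limit is the D2240 figure cited above.

```python
# Illustrative sketch: checking whether two Shore D readings from different
# labs can be treated as equivalent under a given reproducibility limit.

REPRODUCIBILITY_LIMIT_SHORE_D = 16  # maximum interlab difference, per the figure cited above

def readings_equivalent(value_lab_a: float, value_lab_b: float,
                        limit: float = REPRODUCIBILITY_LIMIT_SHORE_D) -> bool:
    """Return True if the two readings fall within the allowable variance."""
    return abs(value_lab_a - value_lab_b) <= limit

# A Shore D of 80 in one lab and 64 in another differ by exactly 16 units,
# so they must be treated as equivalent, not as proof that one floor is harder.
print(readings_equivalent(80, 64))  # True
print(readings_equivalent(80, 63))  # False -> difference exceeds the limit
```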
When considering the precision of testing methods, intralab repeatability results are typically accurate since the conditions and test operator are the same. Thus, specifiers can make accurate comparisons when two or more products are tested in the same lab under identical conditions. However, variances between labs can make interlab reproducibility results suspect, especially if a coatings manufacturer’s lab has greater accreditation than a third-party lab performing the same test. The third-party lab’s results, while unbiased, may not be as accurate.
Speaking of bias, specifiers should review any bias statements listed in ASTM standards to assess the general accuracy of tests. Bias occurs when a systematic error exists “contributing to the difference between the mean of a large number of test results and an accepted reference value” (see Picariello’s article, referenced earlier). Bias statements describe how labs correct test results to provide accurate figures for comparison.
Finally, specifiers need to carefully review reported results when comparing data, as manufacturers do not follow set reporting standards. For example, one manufacturer may use ASTM C579-18, Standard Test Methods for Compressive Strength of Chemical-Resistant Mortars, Grouts, Monolithic Surfacings, and Polymer Concretes, while another uses ASTM D695-15, Standard Test Method for Compressive Properties of Rigid Plastics. The standards are similar, but reported results may not be directly comparable. It is also possible that suppliers report data in different units of measurement, which can create confusion. For instance, one supplier may express abrasion resistance from ASTM D4060-14, Standard Test Method for Abrasion Resistance of Organic Coatings by the Taber Abraser, for its product as 30 mg (0.0011 oz) of material lost, while another supplier reports its product as 0.04 grams (0.0014 oz) lost. At a quick glance, the figure reported in grams may look superior, even though it actually represents more material lost: 0.04 g equals 40 mg, compared with 30 mg for the first product.
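A quick normalization step avoids this trap. The short sketch below, using values that mirror the hypothetical example above, converts both figures to milligrams before comparing, so the lower mass loss (and therefore the better abrasion resistance) is obvious.

```python
# Illustrative sketch: normalizing Taber abrasion results (ASTM D4060) to a
# single unit before comparing. The values mirror the example above and are
# hypothetical, not published product data.

UNIT_TO_MG = {"mg": 1.0, "g": 1000.0}

def to_milligrams(value: float, unit: str) -> float:
    """Convert a reported mass loss to milligrams."""
    return value * UNIT_TO_MG[unit]

product_a = to_milligrams(30, "mg")   # reported as 30 mg lost
product_b = to_milligrams(0.04, "g")  # reported as 0.04 g lost

# Lower mass loss indicates better abrasion resistance: 30 mg beats 40 mg,
# even though "0.04" looks like the smaller number at a glance.
print(product_a, product_b)  # 30.0 40.0
```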