Validation of a Laboratory Test (and why you probably mean verification)
- Bryan Knowles
- Aug 31, 2024
- 7 min read
What is validation? What is verification? How do I bring a new laboratory test on board? Hopefully we can simplify this for you.

Let’s clear something up. Usually when someone talks about test “validation”, they mean “verification”.
“Validation” is done when a new kind of test is developed by a lab or manufacturer, or a new use of a test is desired. It requires large studies to determine sensitivity and specificity and patient studies to determine reference ranges.
“Verification” is done when an analyzer or new test is delivered to a lab, and the lab wants to get it up and running. The user is verifying what the manufacturer has already validated.
Test verification, in the context of clinical laboratory science, refers to the process of confirming that a test method produces results that are accurate, reliable, and appropriate for its intended clinical use. Unlike test validation, which is typically performed by manufacturers before a test is brought to market, verification is conducted by clinical laboratories to ensure that the test performs as claimed under actual working conditions. This involves evaluating the test’s performance characteristics, such as accuracy, precision, linearity, and reportable range.
Components of Test Verification
The key components of test verification are accuracy, precision, linearity (the reportable range), and the reference range. Each of these components addresses a different aspect of a test’s performance, contributing to the overall reliability of laboratory results. The following are the four pillars of a verification study:
Accuracy: Accuracy refers to how close the test results are to the true or accepted reference value. A test with good accuracy consistently produces results that are very close to the actual or known value of the analyte being measured. In practice, accuracy is typically evaluated by comparing the results of the test method in question with those obtained using a gold-standard reference method or with established reference materials.
A test is considered to have good accuracy if its results fall within an acceptable range or percentage of the true value. For example, in clinical chemistry, accuracy may be defined as within ±5% to ±10% of the reference value, depending on the analyte and the clinical context. High accuracy minimizes systematic errors or bias, ensuring that the test provides true measurements that are clinically useful for diagnosis and management.
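As a rough sketch of that accuracy check, the percent bias of replicate results against an assigned reference value can be computed like this (the analyte, values, and ±5% limit are hypothetical examples, not a universal standard):

```python
from statistics import mean

def percent_bias(measurements, reference_value):
    """Mean percent difference between test results and the assigned value."""
    return 100 * (mean(measurements) - reference_value) / reference_value

# Hypothetical glucose control material with an assigned value of 100 mg/dL
runs = [98.5, 101.2, 99.8, 100.4, 97.9]
bias = percent_bias(runs, 100.0)
print(f"Bias: {bias:+.2f}%  (within a ±5% limit: {abs(bias) <= 5.0})")
```

The acceptable bias limit should come from the manufacturer's claims or your lab's quality goals, not from this example.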
Precision: Precision evaluates the consistency and reproducibility of test results under identical conditions. It is often assessed by running multiple replicates of the same sample over different days, with different operators, or using different equipment. Precision is typically expressed as the coefficient of variation (CV), which indicates the extent of variability in relation to the mean value. High precision means that repeated measurements produce similar results, which is crucial for monitoring chronic conditions or managing therapies where small changes in test results are clinically significant.
A test is considered to have good precision if the CV is within a low acceptable range, typically less than 5% for many routine clinical chemistry tests. For more sensitive assays, such as those used in immunology or molecular diagnostics, a CV of less than 10% might be acceptable. High precision ensures that the test produces reliable results that are consistent across repeated measurements, reducing the risk of random errors.
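The CV calculation itself is simple; here is a minimal sketch using a hypothetical 20-replicate QC run (the analyte and the 5% limit are illustrative assumptions):

```python
from statistics import mean, stdev

def cv_percent(replicates):
    """Coefficient of variation: (SD / mean) * 100."""
    return 100 * stdev(replicates) / mean(replicates)

# Hypothetical 20 replicates of a single QC material
qc = [4.1, 4.0, 4.2, 4.1, 3.9, 4.0, 4.1, 4.2, 4.0, 4.1,
      4.0, 4.1, 3.9, 4.2, 4.0, 4.1, 4.0, 4.0, 4.1, 4.2]
cv = cv_percent(qc)
print(f"CV = {cv:.2f}%  (passes a 5% limit: {cv < 5.0})")
```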
Linearity: Linearity, or the assay’s reportable range, assesses whether the test results are directly proportional to the concentration of the analyte across a specified range. To verify linearity, laboratories measure a series of dilutions or standards that cover the expected range of analyte concentrations. This step ensures that the test can accurately quantify the analyte over the entire range, from the lowest to the highest concentrations.
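A linearity check boils down to regressing measured against expected concentrations and confirming the slope is near 1 with a high R². A minimal least-squares sketch, using a hypothetical five-level dilution series:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope, intercept, and R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical dilution series: expected vs. measured concentrations
expected = [10, 50, 100, 250, 500]
measured = [10.4, 49.1, 101.8, 247.5, 503.2]
slope, intercept, r2 = linear_fit(expected, measured)
print(f"slope={slope:.3f}  intercept={intercept:.2f}  R^2={r2:.4f}")
```

Acceptance criteria (e.g., slope within 1.00 ± 0.05) should follow the manufacturer's claims or the kit insert, not this sketch.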
Reference Range: To verify a reference range for a laboratory test, a clinical laboratory must collect and analyze data from a representative sample of the healthy population. This process involves selecting a group of individuals who are free from the condition the test is designed to detect and meet specific inclusion criteria, such as age, sex, and ethnicity, to ensure the sample reflects the intended patient population. The laboratory then performs the test on these individuals and statistically analyzes the results to establish the range within which 95% of healthy individuals fall, typically using methods such as the mean ± 2 standard deviations for normally distributed data. Any significant deviations or anomalies in the data should be reviewed, and the reference range may be adjusted accordingly. Verification of the reference range ensures that test results are interpreted accurately, supporting reliable clinical decision-making and patient care.
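A common transference approach (along the lines of CLSI EP28) is to test about 20 healthy donors against the manufacturer's stated range and accept the range if no more than 2 results fall outside it. A sketch of that tally, with hypothetical potassium results and range:

```python
def verify_reference_range(results, low, high, n_allowed_outside=2):
    """Transference check: accept the manufacturer's range if no more than
    n_allowed_outside of the donor results fall outside it."""
    outside = sum(1 for r in results if r < low or r > high)
    return outside <= n_allowed_outside, outside

# Hypothetical potassium results from 20 healthy donors vs. a 3.5-5.1 mmol/L range
donors = [3.8, 4.1, 4.4, 3.9, 4.7, 4.2, 4.0, 4.5, 3.6, 4.3,
          4.9, 4.1, 4.6, 3.7, 4.2, 4.8, 4.0, 5.0, 4.4, 5.2]
ok, n_out = verify_reference_range(donors, 3.5, 5.1)
print(f"{n_out} of 20 outside the range; range verified: {ok}")
```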
What does this all mean in practice?
At the very least, this means 20 or more replicates of quality control material to assess accuracy and precision, 20 patient samples to verify the reference range, and known materials (such as calibration verification samples) to verify the reportable range (linearity). You can accomplish much of this data collection through a method comparison.
What is a method comparison?
If possible, a lab should perform a method comparison. This may be impossible in practice for certain tests and analytes (just try sending a blood gas sample to a reference laboratory and see what happens to the values). But a method comparison is a great way to gather useful data.
To conduct a method comparison, the laboratory first selects a set of patient samples that cover the full range of expected analyte concentrations, including both normal and pathological levels. These samples are then tested using both the new and the reference methods under similar conditions. The results are compared using statistical techniques such as linear regression analysis, Bland-Altman plots, or Passing-Bablok regression, which assess the degree of agreement and identify any systematic biases or proportional errors between the methods.
The goal is to demonstrate that the new method’s results are within clinically acceptable limits of variation compared to the reference method. Any significant discrepancies should be investigated and resolved before the new method is implemented. This comparison is crucial for ensuring continuity in patient care, as it minimizes the risk of misinterpretation of results due to method differences, thereby supporting accurate diagnosis and treatment decisions.
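The Bland-Altman portion of that comparison is easy to sketch: compute the paired differences, the mean bias, and the 95% limits of agreement. The paired sodium values below are hypothetical:

```python
from statistics import mean, stdev

def bland_altman(new, ref):
    """Mean bias and 95% limits of agreement between paired methods."""
    diffs = [a - b for a, b in zip(new, ref)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired sodium results: new analyzer vs. current method (mmol/L)
new_method = [138, 142, 135, 147, 140, 133, 145, 139, 141, 136]
reference  = [137, 143, 134, 146, 141, 132, 144, 139, 142, 135]
bias, (lo, hi) = bland_altman(new_method, reference)
print(f"bias={bias:+.2f} mmol/L, limits of agreement [{lo:.2f}, {hi:.2f}]")
```

In practice you would also plot each difference against the pair's mean to look for proportional error across the range, and judge the bias against clinically acceptable limits for the analyte.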
Other things to remember before you go live
The Laboratory Medical Director must sign off on the verification studies for the testing to “go live”.
The lab must also create a procedure for the testing and subscribe to proficiency testing for that test.
Check that your reference range is correct in the LIS and that high/low values are reporting as high/low. Include critical values if needed (some tests may not have critical values).
If an IQCP is desired, begin collecting daily QC data to support your quality control plan (part of the IQCP). Often 30 days of data are desired when creating an IQCP. The test manufacturer often has a risk assessment available to help develop your IQCP.
Always check with the manufacturer to see if they will perform any or all of the verification testing for you. This is super important when the lab is short-staffed. The manufacturer should also have software to analyze the data, so even if you do the testing yourself, you may send the data to an application specialist at the manufacturer.

Validation testing
Keep in mind, most labs will NOT be doing this. But here is what it would entail in addition to the studies above.
Analytical sensitivity: Also known as the detection limit, analytical sensitivity refers to the smallest concentration of a substance that can be reliably detected by the test. It is particularly important for tests designed to detect low levels of analytes, such as tumor markers or hormones. Establishing analytical sensitivity ensures that the test can accurately identify patients who have the analyte of interest, even at low concentrations.
Analytical specificity: Analytical specificity measures the test’s ability to exclusively detect the target analyte without cross-reacting with other substances present in the sample. This is critical for tests that may be affected by interfering substances, such as hemoglobin, lipids, or other proteins. Ensuring high specificity helps minimize false-positive results, which can lead to unnecessary further testing or treatments.
Large patient studies: A large number of patient samples should be run in comparison with a reference method to establish a reference range. Recruit a sufficient number of healthy volunteers, typically between 120 to 200 individuals, to ensure statistical validity. Screen these participants using clinical assessments and questionnaires to confirm they meet the inclusion criteria and are free from conditions that could influence the test results. Analyze the test results using statistical methods to determine the central tendency (mean or median) and dispersion (standard deviation or percentiles). For normally distributed data, the reference range is typically set as the mean ± 2 standard deviations, which includes approximately 95% of the healthy population. For non-normally distributed data, non-parametric methods, such as the 2.5th to 97.5th percentiles, are used to establish the range.
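The non-parametric version of that calculation is just the 2.5th and 97.5th percentiles of the donor results. A sketch using simulated data for 120 hypothetical healthy volunteers:

```python
from statistics import quantiles
import random

# Simulated results for 120 hypothetical healthy volunteers
random.seed(1)
results = [random.gauss(100, 10) for _ in range(120)]

# Non-parametric reference range: central 95% (2.5th to 97.5th percentiles).
# quantiles(..., n=40) returns 39 cut points at 2.5%, 5%, ..., 97.5%.
cuts = quantiles(results, n=40)
low, high = cuts[0], cuts[-1]
print(f"Reference range: {low:.1f} - {high:.1f}")
```

With real data you would use the measured donor results in place of the simulated values, and review outliers before finalizing the range.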
Conclusion
Test verification is a vital component of quality assurance in clinical laboratories. By thoroughly evaluating a test’s performance characteristics—such as accuracy, precision, and reportable range—laboratories can ensure that their tests provide reliable and accurate results. This process not only supports high standards of patient care but also helps maintain regulatory compliance and operational efficiency. As the field of laboratory medicine continues to evolve with new technologies and testing methods, the importance of rigorous test verification remains paramount in ensuring the highest standards of diagnostic accuracy and patient safety. At the very least, you’ll keep inspectors happy!
To review, do these things to bring a lab test on board:
Speak with the manufacturer to determine what they recommend and if they can help you.
Verify accuracy, precision, reportable range, and reference range by the methods above.
Have your laboratory medical director approve (sign) the studies. Sometimes, the medical director may want more samples run.
Make sure your LIS is reporting the reference range correctly, that results are crossing correctly, that high/low values flag as high/low, and that any critical values (if desired) report as such.
Write a procedure.
Initiate proficiency testing.
Inform any medical staff or other personnel who will be affected by the new testing.
GO LIVE!