Calibration Services
This article provides a detailed look at calibration services. Read further to answer questions like:

- What is a calibration service, and why is it important?
- How does measurement traceability work?
- What types of calibration equipment and services are available?
- What information does a calibration certificate contain?
A calibration service focuses on identifying inaccuracies and uncertainties in measuring instruments or equipment. During calibration, the device under test (DUT) is compared to a known reference standard to assess how much the measurements deviate from the true value. This deviation is referred to as error. Once the error is identified, the DUT can be adjusted to improve measurement accuracy, although this adjustment process is separate and known as tuning or trimming.
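To make the comparison concrete, here is a minimal sketch in Python; the test points and readings are invented for illustration:

```python
# Comparing DUT readings against a reference standard at several test points.
# All values below are invented for illustration.
reference_values = [0.0, 25.0, 50.0, 75.0, 100.0]  # known values from the standard
dut_readings     = [0.1, 25.2, 50.1, 75.4, 100.3]  # what the device under test reported

for ref, dut in zip(reference_values, dut_readings):
    error = dut - ref  # deviation from the reference value: the calibration error
    print(f"reference={ref:6.1f}  DUT={dut:6.1f}  error={error:+.1f}")
```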
Calibration is a formal process carried out in a controlled and accredited calibration laboratory. Some service providers offer on-site calibration to avoid disrupting operations by calibrating the equipment where it is used.
After calibration, the service provider issues a certificate to the requesting organization. This certificate details the calibration results and is specific to the calibrated instrument for documentation and traceability. Additionally, a calibration sticker may be affixed to the equipment to easily identify that it has been calibrated.
The International System of Units (SI) is a standardized measurement system. The abbreviation "SI" comes from the French term "Système International d'Unités," commonly known as the metric system. The purpose of the SI system is to provide a precise and consistent method for expressing measurements of physical quantities. It acts as a universal language for measurement, widely adopted across organizations, businesses, and research institutions globally.
The SI system was established in 1960 by the 11th General Conference on Weights and Measures (CGPM, Conférence Générale des Poids et Mesures). It operates under the authority of the Bureau International des Poids et Mesures (BIPM), an international organization based in Paris, France, responsible for maintaining global consistency in physical measurements.
The SI system comprises seven base units, which are combined to define the 22 derived units with special names. These base units represent seven fundamental quantities. The meter, for example, is defined as the distance light travels in a vacuum in 1/299,792,458 of a second; tying the definition to a constant of nature provides a highly stable, reproducible standard for measuring length. The base unit definitions are updated to reflect the latest international agreements and scientific advancements. The following table outlines the seven fundamental quantities:
| Name of Physical Quantity | SI Unit |
| --- | --- |
| Length (L) | meter (m) |
| Mass (M) | kilogram (kg) |
| Time (T) | second (s) |
| Electric Current (I) | ampere (A) |
| Thermodynamic Temperature (Θ) | kelvin (K) |
| Amount of Substance (N) | mole (mol) |
| Luminous Intensity (J) | candela (cd) |
Prefixes are added to base units to denote their magnitude, with each prefix representing a power of ten. For example, kilo- denotes 10³, so 1 kilometer equals 1,000 meters. This system facilitates the easy conveyance and comparison of quantities, as it allows for straightforward conversion between units.
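As a small illustration of prefix-based conversion (the prefix values are standard SI; the `convert` helper is a hypothetical name, not a library function):

```python
# Powers of ten for a few common SI prefixes (standard SI values).
SI_PREFIXES = {"k": 1e3, "h": 1e2, "da": 1e1, "": 1e0, "d": 1e-1, "c": 1e-2, "m": 1e-3}

def convert(value: float, from_prefix: str, to_prefix: str) -> float:
    """Convert a value between two prefixed forms of the same base unit."""
    return value * SI_PREFIXES[from_prefix] / SI_PREFIXES[to_prefix]

print(convert(2.5, "k", ""))    # 2.5 km -> 2500.0 m
print(convert(750.0, "m", ""))  # 750 mm -> 0.75 m
```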
Metrological traceability, also known as measurement traceability, is defined by the International Vocabulary of Metrology as the "property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty." This concept is crucial for ensuring that measurements and calibrations meet international standards. This chain of calibrations is often illustrated as a measurement traceability pyramid, described below.
The calibration chain begins with working standards or high-accuracy process calibrators used to calibrate the device under test (DUT). These working standards and calibrators are considered the most accurate within a plant or site. Before being used on DUTs, these instruments are sent to an accredited calibration laboratory for calibration against a higher-accuracy standard or calibrator. The standards and calibrators in these laboratories are then sent to national metrology institutes (NMIs) of the respective countries. Finally, NMIs collaborate with international metrological agencies to ensure that SI units and calibration practices align with global definitions and standards. Each level of calibration must declare its associated uncertainties.
SI units are the cornerstone of all measurement standards and occupy the top position in the measurement traceability pyramid. They represent the ultimate standard of accuracy and "true value" for all measurements.
The BIPM and the NMIs of participating countries play a crucial role in maintaining the accuracy of SI units as the calibration process descends through the hierarchy, from primary and secondary standards down to the process DUTs at the base of the traceability pyramid. As the calibration lineage moves downward from NMI-level standards, the "true value" is propagated to each level. While accuracy approaches the true value dictated by the SI system as you move up the pyramid, the cost of the standards increases; conversely, moving down the pyramid amplifies measurement errors and uncertainties.
In summary, for a measurement to be traceable, it must be linked to a higher-accuracy reference within the hierarchy. A measurement is considered traceable if it meets these three conditions:

- The result is linked to a stated reference, ultimately the SI units, through an unbroken chain of calibrations.
- Every calibration in the chain is documented.
- Every calibration in the chain declares its associated measurement uncertainty.
Note: In the United States, the National Institute of Standards and Technology (NIST), a division of the U.S. Department of Commerce, serves as the national metrology institute (NMI).
There's a saying: "If you can't measure it, you can't improve it." Accurate measurement is fundamental to quality, safety, efficiency, and overall progress. Many industries depend on precise measuring instruments to enhance quality and ensure optimal performance. The primary objective of calibration services is to reduce measurement errors and increase confidence in the accuracy of measurements.
Factors such as the environment, frequency of use, and handling can increase measurement uncertainty and error. Therefore, timely calibration of measuring devices is essential. Calibration enhances the repeatability and reproducibility of the data produced.
The accuracy of measuring instruments does not inherently deteriorate over time. Instruments that remain stable over many years can serve as excellent standards. Their long history of stability and well-documented uncertainty makes them valuable, even if they may not represent the true value exactly.
Calibration is crucial when a measuring instrument significantly impacts the accuracy and validity of testing procedures, which is vital for the credibility of data from laboratories and testing facilities. This importance is particularly evident in fields like medicine and legal testing.
The ISO/IEC 17025 standard, known as the General Requirements for the Competence of Testing and Calibration Laboratories, outlines the competencies required for accreditation. It includes measurement uncertainty analysis and traceability, and many laboratories face challenges in these areas. Adherence to this standard also provides numerous intangible benefits and facilitates smoother operations for accredited testing facilities.
Calibration is also necessary if the measuring instrument is critical for detecting variations in processes that could impact product quality, health, and safety. Reliable measurements enable engineers to identify and mitigate assignable causes of variation, thus improving prevention and early detection.
Minimizing variation is essential for meeting product specifications. Large variations can pose risks to safety, particularly in manufacturing sectors such as aerospace, automotive, and pharmaceuticals, where deviations can jeopardize health and safety.
Finally, calibration ensures geographical consistency in measurements and aligns with international standards and agreements. It is crucial for international and domestic trade, where accurate measurement of goods impacts revenues. For instance, a cubic meter of gasoline exported must be accurately measured as a cubic meter when received in the importing country.
The BIPM defines a calibrator as a "measurement standard used in calibration." A calibrator may be a high-accuracy instrument against which a device under test (DUT) is compared, a source that generates a known output, or a Certified Reference Material (CRM).
A source is an instrument that generates a known and precise output. The DUT measures this output, and the settings of the source equipment are considered the true values.
A Certified Reference Material (CRM) is a type of standard characterized by a metrologically validated procedure with a known exact measurement value. Each CRM sample must be stable and accompanied by a certificate verifying its authenticity. CRMs are commonly used in analytical and clinical chemistry.
The typical procedure for calibrating a process parameter involves the following steps:

1. Apply a known input to the device under test (DUT), or measure the same quantity with both the DUT and the reference standard or calibrator.
2. Compare the DUT's reading to the value of the reference at each test point.
3. Record the deviation (error) at each test point.
4. Determine whether the DUT is within its specified tolerance.
5. Adjust (tune or trim) the DUT if necessary, and document the results in a calibration certificate.
When comparing measurements from the DUT and the calibrator, there are two possible outcomes:

- The DUT's readings fall within the specified tolerance, and the instrument passes calibration.
- The DUT's readings fall outside the tolerance, and the instrument fails, requiring adjustment (tuning or trimming), repair, or removal from service.
Common metrics used to determine calibration results include the Test Accuracy Ratio (TAR) and Test Uncertainty Ratio (TUR). Many calibration laboratories aim for a TAR or TUR of at least 4:1, meaning the reference standard is at least four times more accurate than the DUT; equivalently, the reference standard's tolerance or uncertainty is no more than 25% of the DUT's tolerance.
Test Accuracy Ratio (TAR): TAR is the ratio of the DUT tolerance to the tolerance of the reference standard. While it provides a basic pass-or-fail assessment in calibration, it does not account for measurement uncertainties associated with the calibration process.
Test Uncertainty Ratio (TUR): TUR is the ratio of the DUT tolerance to the estimated calibration uncertainty. Unlike TAR, TUR considers various factors that can impact measurement accuracy, such as environmental conditions, process variations, technician errors, and the instruments used. The estimated calibration uncertainty is usually expressed with a confidence level of 95% or 99%.
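A minimal sketch of both ratios as defined above, with invented tolerance and uncertainty figures:

```python
# Test Accuracy Ratio (TAR) and Test Uncertainty Ratio (TUR).
# All numbers are invented for illustration.
dut_tolerance = 0.40    # +/- tolerance of the device under test
ref_tolerance = 0.08    # +/- tolerance of the reference standard
cal_uncertainty = 0.09  # expanded calibration uncertainty (e.g., k=2, ~95%)

tar = dut_tolerance / ref_tolerance    # 5.0:1 -> meets the common 4:1 target
tur = dut_tolerance / cal_uncertainty  # ~4.4:1 -> also meets 4:1

print(f"TAR = {tar:.1f}:1, TUR = {tur:.1f}:1")
```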
Source calibrators are devices that generate a known and precise output for the parameter being calibrated. These instruments are commonly used in calibration and metrology laboratories to assist personnel in verifying the accuracy of measuring instruments.
Electrical calibrators are devices designed to provide or measure electrical signals such as current, voltage, frequency, pulses, and resistance. Examples of electrical calibrators include multifunction process calibrators, oscilloscope calibrators, and power calibrators.
Dry block calibrators are employed for calibrating temperature measuring devices. They feature a metal block housed in an insulated container, which is accurately heated or cooled to a specified temperature. Once the temperature is stable, it is maintained, and the temperature readings are taken by inserting the probes into the dry block calibrator's vessel.
Calibration baths are another method for calibrating measuring devices. While similar in concept to dry block calibrators, calibration baths use an insulated container filled with a liquid that is precisely heated or cooled to a specific temperature. Once the temperature within the liquid has stabilized, probes are inserted into the bath to obtain readings.
Calibration baths provide greater temperature stability and precision, making them suitable for calibrating devices that require high sensitivity.
Pressure calibrators are instruments used to measure, apply, and regulate pressure to a device under test (DUT). The DUT then measures the pressure generated. Examples of pressure calibrators include digital pressure controllers and pressure comparators.
A deadweight tester is a specialized pressure calibrator that uses calibrated, traceable weights along with a piston and cylinder assembly to apply known pressures to a device under test (DUT). The DUT then measures and records the applied pressure.
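The underlying relationship is pressure = force / area, with the force supplied by the calibrated weights. A minimal sketch with invented values; real deadweight testers also apply corrections for local gravity, air buoyancy, and piston temperature:

```python
import math

# Deadweight tester principle: P = (m * g) / A_piston.
# Values are invented for illustration; corrections for local gravity,
# buoyancy, and temperature are omitted.
mass_kg = 10.0            # calibrated, traceable weights on the piston
g = 9.80665               # standard gravity, m/s^2
piston_diameter_m = 0.01  # piston diameter, m
piston_area_m2 = math.pi * (piston_diameter_m / 2) ** 2

pressure_pa = (mass_kg * g) / piston_area_m2
print(f"applied pressure ≈ {pressure_pa / 1000:.1f} kPa")  # ≈ 1248.6 kPa
```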
Humidity calibrators feature a chamber that is precisely set and maintained at a specific relative humidity or dew point. The device under test (DUT) then measures the relative humidity and dew point within the chamber.
Flow calibrators are designed to regulate the flow rate to a known and precise value, allowing a device under test (DUT) to measure it. They are commonly used to calibrate flow meters and flow controllers in liquid and gas distribution systems.
Laser interferometry employs a laser and electronic controls to assess machine components for straightness, parallelism, and flatness. This method can measure extremely small dimensions and is frequently used to calibrate machine tables, slides, and axis movements. It relies on the interference of light waves and their interaction with various materials to perform measurements.
The interferometer process involves splitting a single light beam into two separate beams that then create an interference pattern when recombined. Due to the short wavelengths of light, even minute differences in the path lengths of these beams can be detected with high precision. Although the basic technique has been around for over a century, the advent of laser interferometers has significantly improved the accuracy of this calibration method.
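In a displacement-measuring interferometer, each full interference fringe corresponds to half a wavelength of travel along the beam, so displacement ≈ N·λ/2. A minimal sketch, assuming a helium-neon laser (λ ≈ 632.8 nm) and an invented fringe count:

```python
# Displacement from fringe counting: each fringe = lambda / 2 of travel.
# Assumes a helium-neon laser; the fringe count is invented for illustration.
wavelength_nm = 632.8  # He-Ne laser wavelength
fringe_count = 15_803  # interference fringes counted during the move

displacement_mm = fringe_count * (wavelength_nm / 2) * 1e-6  # nm -> mm
print(f"measured displacement ≈ {displacement_mm:.4f} mm")   # ≈ 5.0001 mm
```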
The choice of calibration equipment depends on the specific type of service being performed. There are numerous calibration services available, including:
Pressure calibration services focus on calibrating devices that measure pressure, including pressure switches, pressure transmitters, relief valves, and barometers used in gas and liquid systems operating at various pressures, whether above or below atmospheric levels.
Temperature calibration services are designed to calibrate devices that measure temperature, such as thermocouples, RTDs, thermistors, PRTs, bi-metal thermometers, thermal cameras, and infrared meters. This calibration is carried out in a controlled environment to ensure accuracy.
Humidity calibration services involve calibrating instruments that measure humidity, such as humidity recorders, probes, sensors, psychrometers, and thermohygrographs. During humidity calibration, parameters like relative humidity and dew point are assessed, similar to temperature calibration, in a controlled environment.
Flow calibration services are intended to calibrate volumetric and mass flow meters as well as flow controllers used in gas and liquid distribution systems. Regular calibration is crucial as it directly affects the quality and safety of the fluids flowing through process equipment and pipelines.
For calibrating machines that handle helium or hydrogen, the leak standard should be traceable to NIST, with calibration ranges of 2 × 10⁻¹⁰ atm·cc/sec or higher for helium; other gas leak standards should be 1 × 10⁻⁸ atm·cc/sec or higher.
Pipette calibration services are used to ensure the accuracy of single-channel, multi-channel, and electronic pipettes in dispensing precise liquid volumes. These pipettes are commonly used in analytical laboratories.
Pipette calibration is performed by weighing a liquid dispensed by the pipette at a known temperature. The volume dispensed is calculated by dividing the weight of the liquid by its density, and this value is compared to the theoretical volume to verify accuracy.
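A minimal sketch of this gravimetric check, assuming distilled water at about 20 °C; the balance reading is invented for illustration:

```python
# Gravimetric pipette check: dispensed volume = measured mass / liquid density.
# Assumes distilled water near 20 °C; the balance reading is invented.
water_density_g_per_ml = 0.9982  # approximate density of water at 20 °C
nominal_volume_ul = 100.0        # pipette setting being verified
measured_mass_g = 0.0995         # balance reading for one dispense

volume_ul = measured_mass_g / water_density_g_per_ml * 1000.0  # mL -> µL
error_pct = (volume_ul - nominal_volume_ul) / nominal_volume_ul * 100.0
print(f"dispensed ≈ {volume_ul:.2f} µL ({error_pct:+.2f}% vs nominal)")
```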
Electrical calibration services are designed to calibrate instruments that measure electrical parameters, including voltage, resistance, current, inductance, and capacitance. This service typically covers devices such as oscilloscopes, multimeters, data loggers, and clamp meters.
Dimensional calibration services focus on calibrating instruments that measure dimensional properties like length, volume, flatness, and angle. This includes devices such as micrometers, calipers, and height gauges, among others.
Force calibration services are conducted to calibrate devices that measure force-related parameters such as weight, torque, and both tensile and compressive forces. This involves comparing the force measurements of the device under test (DUT) to a calibration standard. Adapters are used during calibration to ensure that the applied force is accurately centered on the DUT, minimizing measurement errors.
Traceable deadweights serve as the standards for force calibration, which is carried out in a controlled environment. Instruments commonly calibrated in this service include tensiometers, load cells, scales and balances, force gauges, compression and tensile testers, force dynamometers, hardness testers, and proving rings.
Once calibration is completed by an accredited service provider, a calibration certificate is issued. This certificate provides a detailed summary of the calibration process, including the procedure followed and the results obtained. Essential information included in the certificate typically consists of:

- Identification of the calibrated instrument, such as its serial number
- The date of calibration and the date the next calibration is due
- The procedure followed and the results obtained
- The reference standards used and their traceability
- The measurement uncertainty, coverage factor, and any calibration corrections
A calibration sticker is affixed to the equipment to indicate the status and validity of its calibration. It typically displays the equipment's serial number and the date when the next calibration is due. While the sticker provides a convenient visual reference for checking the calibration status, it does not substitute for the authenticity and traceability provided by a calibration certificate.
Unaccredited calibration refers to methods used by owners or in-house teams to calibrate instruments. This is often called commercial calibration, standard calibration, quality assurance calibration, or NIST traceable calibration. Such calibration is conducted following the standards set by a calibration laboratory but lacks formal endorsement by an official accrediting body. The accompanying calibration certificate outlines the basic equipment and standard traceability used, but it does not provide official accreditation or legal documentation.
While unaccredited calibration services may be more affordable or convenient, they lack adherence to formal calibration regulations, are not subject to audits, and generally do not meet the rigorous standards of accredited services. This can result in inaccurate measurements, poor quality assurance, and unreliable results, potentially leading to process errors, fines, or product recalls.
When evaluating a calibration certificate, consider the following key details:
Calibration correction refers to the discrepancy between the measurements obtained by the DUT during calibration and the exact value of the reference standard. This information is detailed on the calibration certificate. The calibration correction value is applied to future measurements taken by the DUT to adjust its readings. This adjustment helps the DUT approach the true value more closely, thereby enhancing its accuracy.
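Applying the correction is simple arithmetic: add the certificate's correction value to the raw reading. A minimal sketch with invented values:

```python
# Applying a calibration correction from the certificate.
# Here correction = reference value - DUT reading at calibration time.
# Both values are invented for illustration.
correction = -0.3    # from the calibration certificate
raw_reading = 101.6  # what the DUT reports in service

corrected_reading = raw_reading + correction
print(f"corrected reading = {corrected_reading}")  # 101.3, closer to the true value
```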
Expanded uncertainty represents the range specified in the calibration report within which the true values are expected to fall with a certain level of confidence. This value is calculated statistically and encompasses all sources of uncertainty. A lower expanded uncertainty indicates higher precision in the measurements of the DUT.
The coverage factor, or K-factor, reflects the confidence level associated with the expanded uncertainty. Commonly used K-factors are 2 or 3 in most industries.
A K-factor of 2 corresponds to a 95.45% confidence level, meaning that 95.45% of the time, the measurements are expected to fall within the expanded uncertainty range. Similarly, a K-factor of 3 corresponds to a 99.73% confidence interval. Higher K-factors are used for devices performing critical measurements, where measurement errors can be costly and potentially hazardous.
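In symbols, the expanded uncertainty U is the combined standard uncertainty u_c scaled by the coverage factor k. A worked example with invented numbers:

```latex
% Expanded uncertainty: combined standard uncertainty scaled by the coverage factor.
% The numeric values are invented for illustration.
U = k \, u_c , \qquad u_c = 0.05\ \mathrm{N},\quad k = 2 \;\Rightarrow\; U = 0.10\ \mathrm{N}
```

A force of 49.90 N would then be reported as (49.90 ± 0.10) N at k = 2, i.e., with approximately 95.45% confidence.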
A crucial aspect of calibration acceptance is measurement decision risk, which is assessed through metrics such as false accept risk and false reject risk. These metrics are used to evaluate the quality of the calibration process.
False accept risk is categorized into two types: unconditional and conditional. Unconditional false accept risk is the overall probability that a piece of equipment is out of tolerance yet classified as within tolerance. Conditional false accept risk is the probability that equipment is out of tolerance given that it has been accepted, i.e., evaluated only over the items that pass calibration. High false accept risk can lead to significant negative consequences for the performance of the equipment.
False reject risk occurs when equipment that is actually within tolerance is incorrectly deemed out of tolerance. This can result in unnecessary costs from adjustments, repairs, re-calibrations, and more frequent recalibrations, potentially impacting operational efficiency and increasing overall expenses.
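These risks can be estimated numerically. Below is a minimal Monte Carlo sketch, assuming, purely for illustration, that a unit's true error and the calibration measurement noise are both normally distributed:

```python
import random

# Monte Carlo estimate of measurement decision risk.
# Assumptions (illustrative only): true error ~ N(0, 0.5), measurement
# noise ~ N(0, 0.125), and the acceptance limit equals the tolerance of +/- 1.0.
random.seed(42)
TOL, SIGMA_TRUE, SIGMA_MEAS, N = 1.0, 0.5, 0.125, 200_000

false_accept = false_reject = accepted = 0
for _ in range(N):
    true_err = random.gauss(0.0, SIGMA_TRUE)             # actual deviation of the unit
    measured = true_err + random.gauss(0.0, SIGMA_MEAS)  # what calibration observes
    in_tol, passed = abs(true_err) <= TOL, abs(measured) <= TOL
    accepted += passed
    false_accept += (not in_tol) and passed  # out of tolerance but accepted
    false_reject += in_tol and (not passed)  # in tolerance but rejected

print(f"unconditional false accept risk ≈ {false_accept / N:.3%}")
print(f"conditional false accept risk   ≈ {false_accept / accepted:.3%}")
print(f"false reject risk               ≈ {false_reject / N:.3%}")
```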