You’re serious about your electrical test instruments. You buy top
brands, and you expect them to be accurate. You know some people send
their digital instruments to a metrology lab for calibration, and you
wonder why. After all, these are all electronic — there’s no meter
movement to go out of balance. What do those calibration folks do,
anyhow — just change the battery?
These are valid concerns, especially since you can’t use your
instrument while it’s out for calibration. But, let’s consider some
other valid concerns. For example, what if an event rendered your
instrument less accurate, or maybe even unsafe? What if you are
working with tight tolerances and accurate measurement is key to
proper operation of expensive processes or safety systems? What if you
are trending data for maintenance purposes, and two meters used for
the same measurement significantly disagree?
Many people do a field comparison check of two meters, and call them
“calibrated” if they give the same reading. This isn’t calibration.
It’s simply a field check. It can show you if there’s a problem, but
it can’t show you which meter is right. If both meters are out of
calibration by the same amount and in the same direction, it won’t
show you anything. Nor will it show you any trending — you won’t know
your instrument is headed for an “out of cal” condition.
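Trending becomes straightforward once you have real calibration data to work from. Below is a minimal sketch in Python (the dates, error values, and single 100 V test point are invented purely for illustration) of how a history of as-found errors could be projected forward to estimate when a meter will drift past its tolerance:

```python
# Hypothetical illustration: project meter drift from "as-found" calibration errors.
# The dates and error values below are invented; only one test point is trended.

from datetime import date

# (calibration date, as-found error in volts at the 100 V test point)
history = [
    (date(2021, 1, 15), 0.10),
    (date(2022, 1, 20), 0.25),
    (date(2023, 1, 18), 0.42),
]

tolerance = 1.0  # +/- 1 V allowed at 100 V for a 1 % meter

# Simple least-squares line: volts of error gained per day since the first calibration.
days = [(d - history[0][0]).days for d, _ in history]
errors = [e for _, e in history]
n = len(days)
slope = (n * sum(x * y for x, y in zip(days, errors)) - sum(days) * sum(errors)) / \
        (n * sum(x * x for x in days) - sum(days) ** 2)

days_left = (tolerance - errors[-1]) / slope if slope > 0 else float("inf")
print(f"Estimated drift rate: {slope * 365:.2f} V per year")
print(f"Estimated days until out of tolerance: {days_left:.0f}")
```

A metrology lab does essentially this across many test points, with a proper uncertainty analysis rather than a simple straight-line fit.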
For an effective calibration, the calibration standard must be more
accurate than the instrument under test. Most of us have a microwave
oven or other appliance that displays the time in hours and minutes.
Most of us live in places where we change the clocks at least twice a
year, plus again after a power outage. When you set the time on that
appliance, what do you use as your reference timepiece? Do you use a
clock that displays seconds? You probably set the time on the
“digits-challenged” appliance when the reference clock is at the “top”
of a minute (e.g., zero seconds). A metrology lab follows the same
philosophy. They see how closely your “whole minutes” track the
correct number of seconds. And they do this at multiple points on the
measurement scales.
Calibration typically requires a standard that has at least 10
times the accuracy of the instrument under test. Otherwise, you are calibrating within overlapping tolerances, and the tolerance of your standard can render an “in cal” instrument “out of cal,” or vice-versa.
Let’s look at how that works.
Suppose two instruments, A and B, each measure 100 V within 1 %. At 100 V input, A reads 99.1 V and B reads 100.9 V; both are within tolerance. But if you use B as your standard, A will appear to be out of
tolerance. However, if B is accurate to 0.1 %, then the most B will
read at 100 V is 100.1 V. Now if you compare A to B, A is in
tolerance. You can also see that A is at the low end of the tolerance
range. Adjusting A to bring that reading up will presumably keep A
from giving a false reading as it experiences normal drift between
calibrations.
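The arithmetic is easy to verify. Here is a short Python sketch using the same numbers as the example above; the pass/fail logic is simplified for illustration and is not a full uncertainty analysis:

```python
# Recreate the A-versus-B example from the text. The numbers match the article;
# the pass/fail logic is a simplified illustration, not a full uncertainty analysis.

nominal = 100.0     # volts actually applied
meter_tol = 1.0     # the meters under test are 1 % instruments: +/- 1 V at 100 V
reading_A = 99.1    # within 1 V of 100 V, so A is genuinely in tolerance

# Case 1: use B, another 1 % meter, as the "standard." B happens to read 100.9 V.
reading_B = 100.9
diff = round(abs(reading_A - reading_B), 9)
print("Judged against B:", "pass" if diff <= meter_tol else "FAIL")
# -> FAIL: A sits 1.8 V from B's reading even though both meters are within spec.

# Case 2: use a 0.1 % reference, roughly a 10:1 accuracy ratio.
# Its own reading can be off by at most 0.1 V, so it shows between 99.9 V and 100.1 V.
worst_case_ref = nominal + 0.1   # 100.1 V, the reference reading least favorable to A
diff = round(abs(reading_A - worst_case_ref), 9)  # rounded to sidestep float artifacts
print("Judged against 0.1 % reference:", "pass" if diff <= meter_tol else "FAIL")
# -> pass: even in the worst case, A's 99.1 V is within 1 V of the reference reading.
```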
Calibration, in its purest sense, is the comparison of an
instrument to a known standard. Proper calibration involves use of a
NIST-traceable standard — one that has paperwork showing it compares
correctly to a chain of standards going back to a master standard
maintained by the National Institute of Standards and Technology.
In practice, calibration includes correction. Usually when you send
an instrument for calibration, you authorize repair to bring the
instrument back into calibration if it was “out of cal.” You’ll get a
report showing how far out of calibration the instrument was before,
and how far out it is after. In the minutes and seconds scenario,
you’d find the calibration error required a correction to keep the
device “dead on,” but the error was well within the tolerances
required for the measurements you made since the last calibration.
If the report shows gross calibration errors, you may need to go
back to the work you did with that instrument and take new
measurements until no errors are evident. You would start with the
latest measurements and work your way toward the earliest ones. In
nuclear safety-related work, you would have to redo all the
measurements made since the previous calibration.
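In practice, that decision comes down to comparing the as-found error on the report against the tolerance each past measurement actually required. Here is a minimal Python sketch of that screening step, with invented numbers standing in for a real calibration report and job history:

```python
# Hypothetical illustration: use the as-found error from a calibration report
# to decide which past measurements may need to be repeated.

as_found_error_pct = 0.8  # meter came in reading 0.8 % high (invented value)

# (description, tolerance the measurement actually required, in percent) -- invented
past_measurements = [
    ("relay pickup voltage check", 0.5),
    ("bus voltage trend reading",  2.0),
    ("shunt trip coil voltage",    1.0),
]

for name, required_tol_pct in past_measurements:
    if as_found_error_pct > required_tol_pct:
        print(f"{name}: REPEAT, as-found error exceeds the required tolerance")
    else:
        print(f"{name}: still usable")
```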
What knocks a digital instrument “out of cal”? First, the major
components of test instruments (e.g., voltage references, input
dividers, current shunts) can simply shift over time. This shifting is
minor and usually harmless if you keep a good calibration schedule,
and this shifting is typically what calibration finds and corrects.
But, suppose you drop a current clamp — hard. How do you know that clamp will still measure accurately? You don’t. It may well have gross
calibration errors. Similarly, exposing a DMM to an overload can throw
it off. Some people think this has little effect, because the inputs
are fused or breaker-protected. But, those protection devices may not
trip on a transient. Also, a large enough voltage input can jump
across the input protection device entirely. This is far less likely
with higher quality DMMs, which is one reason they are more
cost-effective than the less expensive imports.
The question isn’t whether to calibrate — we can see that’s a given.
The question is when to calibrate. There is no “one size fits all”
answer. Consider these calibration frequencies:
- Manufacturer-recommended calibration interval.
Manufacturers’ specifications will indicate how often to calibrate
their tools, but critical measurements may require different
intervals.
- Before a major critical measuring project. Suppose you
are taking a plant down for testing that requires highly accurate
measurements. Decide which instruments you will use for that
testing. Send them out for calibration, then “lock them down” in
storage so they are unused before that test.
- After a major critical measuring project. If you reserved
calibrated test instruments for a particular testing operation, send
that same equipment for calibration after the testing. When the
calibration results come back, you will know whether you can
consider that testing complete and reliable.
- After an event. If your instrument took a hit — something
knocked out the internal overload or the unit absorbed a
particularly sharp impact — send it out for calibration and have the
safety integrity checked, as well.
- Per requirements. Some measurement jobs require
calibrated, certified test equipment — regardless of the project
size. Note that this requirement may not be explicitly stated but
simply expected — review the specs before the test.
- Monthly, quarterly, or semiannually. If you do mostly
critical measurements and do them often, a shorter time span between
calibrations means less chance of questionable test results.
- Annually. If you do a mix of critical and non-critical
measurements, annual calibration tends to strike the right balance
between prudence and cost.
- Every two years. If you seldom do critical measurements and
don’t expose your meter to an event, calibration at a longer interval
can be cost-effective.
- Never. If your work requires just gross voltage checks
(e.g., “Yep, that’s 480V”), calibration seems like overkill. But
what if your instrument is exposed to an event? Calibration allows
you to use the instrument with confidence.