Process variables and the art of calibrating instruments



Jim McCarty is a systems developer in Minneapolis for Optimation, a system integrator headquartered in Rush, New York. He is a certified LabVIEW developer and a National Instruments certified professional instructor. McCarty has more than eight years of experience in application areas such as the medical-device, solar, semiconductor, defense, transportation and mining industries. Contact him at [email protected].

Let’s start with the basics of calibration: the output of an instrument is measured under one or more known conditions (for example, the current output of a pressure transducer may be measured at 0 and 100 psig), and then a function of the sensor output (typically a linear function, so just a slope and an intercept) is generated that calculates the measured value anywhere in that measurement range from the output of the sensor.
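
To make the slope-and-intercept idea concrete, here’s a minimal sketch in Python. The readings are made up for illustration (a hypothetical 4-20 mA transducer reading 4.02 mA at 0 psig and 19.95 mA at 100 psig), not any particular device’s specs:

```python
# Two-point linear calibration sketch with made-up readings:
# map a 4-20 mA pressure transducer output back to psig.

def two_point_cal(x1, y1, x2, y2):
    """Return (slope, intercept) so that y = slope * x + intercept."""
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    return slope, intercept

# Measured: 4.02 mA at 0 psig, 19.95 mA at 100 psig (example data)
slope, intercept = two_point_cal(4.02, 0.0, 19.95, 100.0)

def pressure_psig(current_mA):
    # apply the calibration function to any reading in range
    return slope * current_mA + intercept

print(round(pressure_psig(12.0), 2))  # a mid-range reading, about 50.09 psig
```

Note that the exact measured currents (4.02 and 19.95 mA, not the nominal 4 and 20) go into the fit; that point comes up again below.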

Theoretically, in the example of a pressure transducer, this means that you can throw a digital multimeter (DMM) on a transducer and measure it sitting in the open, connect it to a shop air line, measure it with the DMM again, and be done after calculating the slope and intercept of the line drawn through those two points on a pressure vs. output current plot. However, there are critical points for not only performing a good calibration, but also making sure that it remains good:

• The stimulus (input) is very accurately known.

• The calibration procedure yields repeatable and reproducible results.

• Conditions and stimuli are stable.

Let’s step through each of these and discuss how they are relevant to a cleaning skid, so that we can build an effective and efficient calibration procedure. I’ll spend the most time on the first two topics, as they are arguably the most important, as well as a good chunk of time on the last topic, as I feel that it’s often overlooked. I’ll be using flow meters, pressure transducers, and thermocouples as examples.

Before diving into the details, first determine what your measurement accuracy must be, as calibration accuracy has a huge influence on measurement accuracy.

The stimulus (input) is very accurately known

Accurately knowing what you’re measuring is possibly the single most important point when it comes to calibration, because the instrument being calibrated will carry whatever this error is (the difference between what you think you’re measuring during calibration and what you’re really measuring) into the test system. Simply stated, if you think you’re measuring 100 psig in calibration, but the pressure is really 98 psig, you’ll report 100 psig whenever the pressure in your system is really only 98 psig, even if you’ve done everything else perfectly.

I will also mention here that, in your analysis, you should always use the exact measured stimulus and sensor-output values in your calibration calculations and analysis. Where I often see this ignored is when the procedure calls for nominal data points every so many psig, sccm or degrees at some sensor output current. You only need to hit these points approximately, but if you’re going through the trouble of measuring exact quantities, use the exact measured quantities, not some nominal value a procedure calls for.

In practice, this typically means that a known, accurate reference instrument (often informally called a “golden unit”) must be included in the calibration setup to be measured alongside the sensor you’re calibrating, the device under calibration (DUC), or at least be used somewhere in your overall calibration process. Some sensors can be calibrated at some values without one. For example, static torque can be applied using a fixture arm and a weight and then measured by a torque cell, and there are some “stakes in the ground,” such as the density of water; in many industries, however, these gems are few and far between. Measurements like these are also how primary standards are generated.

Golden units are typically only used for calibration in order to minimize usage and therefore decrease the probability of breaking them, and furthermore are stored in a safe place when not in use. As you can probably imagine, a faulty golden unit can cause quite a stir, as it serves as your “sanity check,” and for this reason many labs calibrate reference sensors (think of them as clones of your golden unit) off of a single golden unit. Typically, these references are used when either running a test that requires a reference instrument or to calibrate test systems, and the golden unit is only used to calibrate the references monthly, or less frequently.

You may hear the terms “primary,” “secondary” and “working” standard. A primary standard is a sensor that is calibrated against some known stimulus, boiling down to manipulating quantities like mass, time and length, if possible, instead of another sensor; rather than minimizing the calibration error due to the stimulus being unknown, this error is removed entirely. The terms “secondary standard” and “working standard” are occasionally, but rarely, used interchangeably; a secondary standard usually implies a much closer approximation to the primary standard, as more care is taken to keep it in consistently the same working condition, often implying far less usage than a working standard.

Having a single golden unit is common because it’s usually either costly, time-consuming or both to maintain a sensor’s golden status. “Golden” may also imply that it’s a primary standard, but not always. A typical process may be to send it to the manufacturer or another third-party calibration lab annually (you typically pay for this, and it’s often not cheap), with the device being gone for about six weeks. Whenever the golden unit is recalibrated, the references should be recalibrated against it. The best way to do this, to avoid tolerance stacking, is to calibrate all references against the golden unit in the same calibration run, as opposed to calibrating Reference A against the golden unit, then Reference B against Reference A, then Reference C against Reference B, D against C, and so on. In the latter case, the error accumulates run to run.
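
To see why the chained approach is worse, here’s a back-of-the-envelope sketch. The 0.1% per-transfer error is an assumed number for illustration, and independent errors are taken to add in quadrature:

```python
# Sketch of tolerance stacking (illustrative numbers, not real specs):
# each calibration transfer adds an independent error, assumed sigma = 0.1 %.
# Calibrating every reference directly against the golden unit gives each
# reference one transfer; daisy-chaining A -> B -> C -> D accumulates them.

import math

SIGMA_TRANSFER = 0.1  # % of reading per calibration transfer (assumption)

def chained_sigma(n_links):
    # independent errors add in quadrature: sigma * sqrt(n)
    return math.sqrt(n_links) * SIGMA_TRANSFER

for ref, links in [("A", 1), ("B", 2), ("C", 3), ("D", 4)]:
    print(f"Reference {ref}: ~{chained_sigma(links):.3f} % expected error")
```

With direct calibration, every reference stays at one transfer’s worth of error; chained, Reference D is already at twice the error of Reference A.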

A golden sensor doesn’t necessarily have to be a more accurate or precise model than the sensors it is used to calibrate, but you need to be assured, somehow, that its reading is accurate (that said, more accuracy and precision don’t hurt). If the sensors you are calibrating are known to be not so great when it comes to accuracy or precision, then you probably need a nicer-model sensor for the golden unit. It may be worth it, for example, if your pressure-measurement accuracy requirement is tight and your boss’ or customer’s pockets are deep enough, to shell out for a few very nice pressure gauges or transducers.

I swear by Ashcroft for gauges and Druck for transducers, if you’re able to find them, as they’re not commercially available in the United States anymore.

So, armed with a golden reference sensor, how do you induce some meaningful condition for calibration?

Establishing some stimuli, such as pressure, is relatively easy—simply measure the DUC on a vessel with a vent or other outlet open, or with the transducer off of the vessel at ambient pressure, then attach the DUC and golden transducer, pressurize the vessel, wait for stabilization and measure. To get your in-between points, use a pressure regulator or pump attached to the vessel.

Establishing some stimuli, such as a stable, accurately known flow, can be difficult. However, establishing a known volume or mass and a known period of time is significantly easier. By definition, flow is how much mass or volume is passed in how much time, so most flow meters are calibrated by passing a known volume of fluid through the meter over a known period of time, and then sensitivity is calculated, assuming volume, as:

sensitivity = (V / t) / (I_avg − I_0)

where V is the known volume passed through the meter, t is the measurement period, I_avg is the average meter output during the flow and I_0 is the average output from the null (zero-flow) measurement.

Think simple, like pouring water from a beaker or bucket into a funnel and through a flow meter, or into a system with a flow meter. If your fluid is a gas, you can use a similar approach with a known volume or mass over time, but the implementation isn’t as simple, of course. Using this method, maintain a steady flow over the course of the measurement period, and use a high sampling rate, if possible. Both will minimize the effect of transients—spikes and dips in the real flow—not being measured.

With flow meters, it’s important to use the same or a similar fluid to what will be measured in the test system, as viscosity, density and temperature all affect a flow measurement. The measurement itself may be a matter of starting the data acquisition at, say, 20 Hz, pouring exactly 1 gal of water into a funnel and through the meter, and then terminating the acquisition. If you’re using a 4-20 mA output sensor, there’s some non-zero nominal offset at zero flow, so you should also perform a null measurement, in which you measure the output of the flow meter at zero flow for some period of time and then take the average, which is the zero-flow offset term in the equation above.
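
Here is that calculation as a sketch, with invented sample data (the 1 gal over 30 s and the mA readings are mine, purely for illustration):

```python
# Sketch: flow-meter sensitivity from a known volume over a known time,
# with a null (zero-flow) measurement for the 4-20 mA offset.
# All values below are made up for illustration.

def flow_sensitivity(volume, period_s, flow_samples_mA, null_samples_mA):
    """Sensitivity in (volume units per second) per mA above the null offset."""
    i_flow = sum(flow_samples_mA) / len(flow_samples_mA)  # avg during flow
    i_null = sum(null_samples_mA) / len(null_samples_mA)  # avg at zero flow
    true_flow = volume / period_s                         # known V / known t
    return true_flow / (i_flow - i_null)

# 1 gal poured through the meter over 30 s; outputs sampled in mA
sens = flow_sensitivity(1.0, 30.0,
                        flow_samples_mA=[12.1, 12.0, 11.9, 12.0],
                        null_samples_mA=[4.01, 3.99, 4.0])
print(sens)  # gal/s per mA
```

Averaging many samples during the pour, rather than grabbing one reading, is what minimizes the effect of the flow transients mentioned earlier.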

Some conditions, like temperature, can be very difficult to induce at all, depending on the setup, let alone inducing them in a known way. Luckily, for temperature calibration, we don’t care so much that we’re hitting exactly some temperature in calibration; we only care to know exactly what we’re hitting in calibration, and, of course, that we’re not melting, vaporizing or letting the smoke out of anything in the process. For some temperature sensors, you only need to correct for some offset in the sensor or the system, and this will be sufficient. This simply means that you measure the DUC output at any single known temperature. Can you get away with this?

Find out—measure the DUC output at some known temperature. To find the temperature, use your golden temperature sensor; if the sensor you’re calibrating is removable, a water bath or a metal plate is simple and will stabilize your DUC and golden unit to the same temperature, assuming you’re not doing anything funny like heating one end of the bath or plate. If not, you still need to get the golden unit to the same temperature as your DUC somehow, so be creative—thermally conductive paste or putty will most likely be very useful. Now heat or cool both the DUC and golden unit to approximately the upper or lower temperature extreme. Again, think simple: hot or cold water, a hot plate or even your thumb or a piece of ice may do, as we’re just trying to get it hot or cold at this point.

If your DUC isn’t removable, remember that an electric heater is just current flowing through something that has non-zero resistance, so heaters can come in small, simple, portable, convenient packages, such as cartridge heaters. Using the offset calculated from the ambient-temperature case before you heated or cooled the DUC, how far off is your hot or cold DUC measurement from your golden unit? If it’s acceptable, then all you need to do is the first ambient measurement. If not, then you’ll need to repeat what you just did—the ambient and the hot or cold measurement—for routine calibration. Again, the important parts are that the DUC reaches a temperature close to one of the limits (far enough away from ambient) and that the DUC and golden unit are at the same temperature, so it doesn’t have to be pretty, nor does it have to be at the exact same temperature for every calibration you run; of course, it does have to be safe and probably not terribly messy, so don’t get too crazy.
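
The arithmetic of that check is simple enough to sketch; the temperature readings below are invented for illustration:

```python
# Sketch of the one-point offset check described above (made-up readings).
# Step 1: at ambient, compute the DUC's offset against the golden unit.
# Step 2: at an elevated temperature, see how well that offset still holds.

def offset(duc_reading, golden_reading):
    return golden_reading - duc_reading

# Ambient measurement, both sensors stabilized together (hypothetical data)
amb_offset = offset(duc_reading=22.8, golden_reading=23.1)  # about +0.3 deg

# Hot measurement, ambient offset applied to the DUC
duc_hot, golden_hot = 79.5, 80.1
residual = golden_hot - (duc_hot + amb_offset)

print(round(amb_offset, 2), round(residual, 2))
# If the residual is within your measurement tolerance, a single ambient
# measurement suffices for routine calibration; otherwise measure at both.
```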

Recap: Have one very accurate golden unit for each sensor type, and clone references from it whenever it’s recalibrated. When creating these clones, avoid tolerance/error stacking whenever possible. Induce your stimulus in such a way that it’s accurately known. If you can’t accurately know the stimulus while inducing it the way the sensor sees it in normal operation, be creative and induce it in a way that lets you accurately know some other quantity that correlates to the quantity of interest. Above all else, at this point, focus not so much on how you induce a stimulus, but instead on how you’ll most accurately know exactly what you’re inducing on your DUC.

The calibration procedure yields repeatable and reproducible results

Let’s quickly recap what “repeatable” and “reproducible” mean.

• If a process is repeatable, that means that you can go back into the same lab with the same part and, using the same test station, get the same results. You can repeat what you did.

• If a process is reproducible, that means that I can follow that same procedure in a different lab with a different part (same model, though, of course) and, on a presumably different test station, get the same results you did. I can reproduce what you did.

This means that any measures or processes that force us to do the same things in the same ways from calibration to calibration are good.

Software repeatability and reproducibility

Putting the calibration into the software is probably the single best thing you can do for repeatability and reproducibility (R&R). This also helps to enforce good laboratory practices when it comes to data collection, and most software can also force the physical calibration procedure to be executed the same way.

Without calibration being part of your software, most software can be operated just by entering sensitivity and offset values, and you effectively lose any enforcement of traceability, as well. Think about the user just entering values on a user interface or in a text file and saving it. If you suspect that you got bad test results because of a bad calibration, that suspicion is all you have—your information trail ends there. You have no record of what reference sensor was used (maybe a specific reference is suspected of going bad); what the calibration data looked like (maybe there were outliers in the data that threw off your sensitivity and/or offset); what range it was calibrated over (maybe whoever did it was in a rush and measured at two convenient points at the lower end); or if the sensitivity and/or offset were even calculated correctly (did somebody fat-finger an extra digit in one of the values?). This, of course, assumes that things like DUC serial numbers, reference serial numbers and the raw data itself are all stored in some log file for each calibration.

Of course, the question "but is it worth it?" will come up, with "it" being the effort required to add this into your software or build another application to handle calibration. Personally, as somebody who's writing a lot of software lately, I think this is a no-brainer—the math, the most complicated part being the least squares fit, is relatively straightforward, and any submodules required to at least set up and read your DUCs are most likely already written. Within the medical-device environment, for example, where, in my experience, you need to go through a fairly exhaustive validation process whenever you breathe on previously validated code, the calibration functionality can be pretty well isolated from other parts of the code, so you likely won’t need to re-validate much of the existing code.
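
As a sketch of what calibration in software might look like, here is an ordinary least-squares fit plus a record of everything the traceability argument above calls for. The function names, log fields and JSON-lines file format are my own illustration, not any particular product’s:

```python
# Sketch: in-software calibration with a traceable record (illustrative).

import json
import time

def least_squares(xs, ys):
    """Ordinary least-squares line fit: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def run_calibration(duc_serial, ref_serial, stimulus, duc_output, log_path):
    # fit engineering value (stimulus) as a function of raw sensor output
    slope, offset = least_squares(duc_output, stimulus)
    record = {
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
        "duc_serial": duc_serial,
        "reference_serial": ref_serial,
        "stimulus": stimulus,        # exact measured values, not nominals
        "duc_output": duc_output,    # raw data kept for later review
        "sensitivity": slope,
        "offset": offset,
    }
    with open(log_path, "a") as f:  # one JSON record per calibration
        f.write(json.dumps(record) + "\n")
    return slope, offset

# Hypothetical three-point pressure calibration (psig vs. mA)
slope, offset = run_calibration("DUC-0042", "REF-7",
                                stimulus=[0.0, 49.8, 100.2],
                                duc_output=[4.01, 11.96, 20.02],
                                log_path="cal_log.jsonl")
print(slope, offset)
```

With a record like this, the questions above (which reference was used, what the raw data looked like, what range was covered) all have answers.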

Hardware repeatability and reproducibility

Are your DUCs, or DUCs-to-be, removable from the test system? If so, then it is preferable to remove them and place them all in one fixture for calibration; fixtures are key to repeatable calibration measurements. The emphasis of the designs I’ll describe is on the references and DUCs seeing the exact same stimulus—that is, the same pressure or temperature.

For pressure calibrations, a simple, inexpensive and small fixture I’ve used is what I call a transducer tree. It’s made up of several female quick-disconnect (QD) fittings piped together; of course, this assumes that your transducer fittings are male QDs, so use whatever mates with your DUCs. Two fittings are reserved for your reference and regulator, and the rest are for your DUCs, although experience has taught me that it would behoove you to plumb a ball valve into the tree to use as a vent, too. It doesn’t matter if you don’t have enough DUCs to fill the slots on the tree, as unconnected female QDs don’t bleed air—unless you’re doing this considerably hotter or colder than room temperature, in which case they may leak as badly as a screen door on a submarine, depending on the brand. Because of the small total volume of the QDs with piping, stabilization wait times are minimized compared to using a big tank or other reservoir.

For temperature calibrations, think of something, maybe somewhat large, that will have a uniform temperature inside of it or on the surface of it that you can attach all DUCs and the golden unit to, and, if necessary (if you determined from the last section that you need to measure at two temperatures), something that you can make hot or cold enough without its breaking or causing other bad things to happen. Again, the goal is just to get the DUCs to a temperature that is known, so this can be as simple and as unpretty as a metal plate or water bath.

Flow meters typically can’t be removed from a setup too easily if they’re already installed, so putting together a fixture may only be worthwhile if you’re calibrating uninstalled sensors. If this is the case, the connections of the DUCs will drive a lot of the design. Try to use as few adapters as possible, and keep the whole thing straight. Adapters and turns in the piping create disturbances that affect flow measurements, so keep the flow through the entire fixture as laminar/straight/undisturbed as possible. You could use a reference for this and attach it to the fixture, or you could use the same approach from the previous section—known volume over a known period defines the flow, not a reference measurement—and save some money and most likely get better accuracy, too. Another seemingly obvious pitfall to avoid is putting meters on branched paths. Make sure that the flow through each meter is identical.

Are your DUCs permanently mounted in your test system, or would it be a complete pain to get them out? If so, you’ll need to be creative in delivering the stimulus to the DUCs.

For pressure measurements, can you essentially short all DUCs together so that they’re all at the same pressure? For example, a freight-train braking system is designed so that, when a leak or a rapid pressure decrease is detected in the pneumatic signal line (the brake pipe, which is also the system supply and runs the length of the train), various paths open between the brake cylinder, starting at some relatively lower pressure or at atmosphere, and the various reservoirs, starting at some high pressure, typically 90 psig, and they all equilibrate at around 70 psig.

Is it possible to short all the pressure DUCs together like this? If you can’t get the whole system at one pressure at one time, can you open a series of valves or relays that would allow you to do it in parts, where some valve sequence shorts the reference to DUCs A, B and C, and another sequence shorts the reference to DUCs D, E, and F? This may be your next best option. And is there a place—quick-disconnect or other fitting—on your test system for your reference? If not, you’ll have to either make/attach one or move it to multiple spots during the calibration.

Can you do something similar with the system flow? If each flow meter is positioned such that all fluid flows serially through it and no DUCs are on parallel/branched paths, you’re in luck. And, if the system also lends itself nicely to having a known volume pass through it in a known period, then take the same approach as outlined in the previous section. Just make sure that fluid in = fluid out—that you don’t accumulate the fluid you pour in or start with an empty system.

If not, look for any way you can induce a known flow and measure the DUC via some unique system feature—for example, if you can allow fluid at a known pressure to flow through a choke of a known size. If you’re still out of luck, then your best options may be to bite the bullet and either pull the DUC out of the system or measure the flow with some reference unit next to the DUC. Both will likely involve breaking connections, unlike a third option of using an ultrasonic flow sensor, which attaches to the outside of the pipe but can cost between $2,000 and $10,000. This cost can be pretty easily justified by calculating the cost of labor associated with making and breaking connections, if calibration is done frequently enough.

If a temperature sensor is stuck on a system, then thermal paste or grease is very helpful. Apply some to your reference and you can put it next to the DUC and get an accurate reading. Need to make a second measurement at an elevated temperature? A cartridge heater with a thermocouple on the end of it may do the trick. If your DUC is stuck in some component of the system, then you need to do some head scratching. The reality is that you can’t measure temperature at a point in space that isn’t accessible, so there is no easy way out of validating a method for this one. However, there are close approximations that can be proven with data. For example, if a thermocouple is embedded in a ½-in plate, mount reference thermocouples on both sides of the plate and heat one side. How far off are the plate temperatures from each other? If they are within some acceptable tolerance, then you can put your reference on the surface of the plate and use its reading to calibrate the DUC. From here on out, without getting into thermal modeling of the system, it boils down to trying things like this and seeing what will be a close-enough approximation.

The “brute force” approach to temperature calibration, which I usually equate to extra time and effort, would be to heat up the entire system while it’s idle, perhaps in an environmental chamber, and wait for thermal stabilization before making your elevated measurement. Establishing this wait time requires some investigation, for the sake of repeatability and reproducibility. A good place to start would be taking a test system at ambient temperature, putting it in an environmental chamber and recording the sensor temperatures vs. time. Uncalibrated signals are fine; you’re just looking for the signals to plateau. If your lab is set up for it, to get a worst-case stabilization time, try first stabilizing the test system at some considerably lower temperature, then put it in the hotter environmental chamber and determine the stabilization time. Just be aware of the possibility of thermal shock the hotter and colder you go when you try this. Because accuracy is fairly well guaranteed, all sensors will be at the same temperature simultaneously, and not much, if any, re-wiring or attaching/re-attaching is required, this certainly isn’t a terrible idea; nonetheless, it requires the most time for thermal stabilization, and I highly doubt you’ll find an environmental chamber at the dollar store.
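
One way to establish that wait time from recorded data is a simple plateau check. This is a sketch; the window length, threshold and simulated trace are all assumptions you’d tune and replace for your own system:

```python
# Sketch: estimate thermal stabilization time from a recorded temperature
# trace by finding when a rolling window of readings stops changing.

def stabilization_index(samples, window=5, threshold=0.25):
    """Return the index of the first sample where the last `window`
    readings all lie within `threshold` of each other, or None."""
    for i in range(window, len(samples) + 1):
        chunk = samples[i - window:i]
        if max(chunk) - min(chunk) <= threshold:
            return i - 1
    return None

# Simulated warm-up trace: rises, then plateaus around 60.0 (made up)
trace = [22.0, 35.0, 45.0, 52.0, 56.0, 58.5,
         59.5, 59.9, 60.0, 60.0, 60.1, 60.0]
idx = stabilization_index(trace)
print(idx)  # sample index where the trace is judged stable
```

Multiply the returned index by your sampling interval to get the stabilization time; if the function returns None, the system never plateaued within the recording.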

With the proper fixturing and the process being controlled in your software, you’ll be guaranteed good repeatability and reproducibility for your calibrations, which typically means the same for your measurements.

Stable conditions and stimuli

You need to be patient and aware of the nuances that can affect particular measurements. However, there are some important general pointers. It’s a good idea to at least be able to measure the output of a sensor continuously at a relatively high sampling rate for an indefinite amount of time in order to study your system.

Short-term stability: The high sampling rate will give you a good indication of the nature of your measurement noise, and measuring for a long time will yield information on stability. Noise is sometimes well understood (for example, 50/60 Hz common mode), and sometimes not. The key to understanding the nature of the noise unique to your test environment and sensors is to select a high enough sampling rate. If you zoom in on the time axis and autoscale the amplitude axis of just about any idle sensor (measuring a constant value) and the points appear disconnected, with no flow from one to the next, then the sampling rate is probably too low for the goals of this pseudo-study. Increase the sampling rate until the points form a more natural-looking line, even if the line appears to go up and down at random. Next, look at the periodicity in the noise, imagining the dips and spikes as half cycles of a sine wave. If you were to take a guess at the minimum and maximum frequencies of these imaginary, superimposed half-sine cycles, what would they be?

Your raw sampling rate should be, at the very least, twice the maximum frequency of the noise, per the Nyquist Theorem. Your reporting sampling rate will be much lower—half of the minimum noise frequency, at most. One reported point will be the average of multiple raw points. This way, you’ll be assured that you’re taking the average across at least one “noise cycle,” and the data that you report won’t be so noisy. If you need to report data as quickly as possible, repeat this for measurements under a variety of conditions to find worst-case noise scenarios. You never know when or how noise will affect your measurement. For example, are there any semiconductor devices in your instruments or that affect your instrument readings? Noise in semiconductor circuits generally goes up with temperature, so repeating this at the upper end of the operating temperature range is normally a good idea.
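
The raw-versus-reported idea can be sketched as a simple block average. The rates and the alternating “noise” below are contrived for illustration (a 1 kHz raw rate decimated by 20 to a 50 Hz reported rate):

```python
# Sketch: sample fast, then report each point as the average of a block of
# raw samples spanning at least one full "noise cycle".

def decimate_by_mean(raw, factor):
    """Average consecutive groups of `factor` raw samples into one point."""
    return [sum(raw[i:i + factor]) / factor
            for i in range(0, len(raw) - factor + 1, factor)]

# 100 raw samples of a constant 10.0 signal with alternating +/-1 "noise"
raw = [10.0 + (1.0 if i % 2 == 0 else -1.0) for i in range(100)]
reported = decimate_by_mean(raw, 20)
print(reported)  # prints [10.0, 10.0, 10.0, 10.0, 10.0]
```

Because each reported point spans whole noise cycles, the alternating component averages out exactly; real noise won’t cancel this cleanly, but the reported data will be far quieter than the raw stream.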

Long-term stability: Sampling indefinitely will give you an idea as to what kinds of stabilization times are required. Think of slowly varying things that could possibly affect your measurements and experimentally determine how much it actually does. Write down anything you can think of, even if it sounds stupid, that could possibly have an effect on the measurement.

For example, do you need to let your pressure transducers warm up in order for them to be accurate? Find out by electrically connecting to one that isn’t pressurized and waiting about 20 minutes while recording data. Look for any changes in the data—does the noise get better, get worse or stay about the same? Does the transducer output rise or fall and then stabilize at some value? What about if you repeat this with a transducer that’s attached to a pressurized vessel? If you’re using the transducer tree from the previous section and, starting at atmospheric pressure, pressurize the tree to 100 psig and record data the whole time, how long does the current signal take to stabilize? What if you used your system to tie all the transducers together at one pressure instead of the tree? Would you have to wait longer for stabilization because of the increased vessel volume? If you just tried that with a transducer that was off just before you started, try it again—does the behavior change if you’re using a transducer that’s already warmed up? What about doing everything with a flow meter? With a thermocouple?

Ask more questions like this, and then go out and get the answers. Theory can answer some questions that may pop up—for example, the answer to “do you have to wait longer for stabilization because of the increased vessel volume?” is a definite yes because of the laws of compressible flow—and guide you to some factors that are somewhat likely to have an effect. Remember what I said about semiconductors and temperature? Knowing this, wouldn’t you think you should at least look into instrument stabilization time after power-up? This will also weed out things that don’t matter. If you’re wondering whether thermocouple warm-up time has any effect, a little research on the Seebeck effect will show you that the thermocouple itself is never powered. However, if it’s converted to a 4-20 mA signal, the signal conditioners may require some warm-up time.

Stepping through these experiments will determine your instrument warm-up times; identify the stabilization period required at a condition such as pressure, flow or temperature to measure a data point; and also shed light on things that should be avoided during calibration, as well as during the measurement itself.