Change in Calibration Factor due to Filters or Leads

A community of technical professionals involved in the calibration of RF power sensors.
abrush1
Posts: 57
Joined: Fri Feb 29, 2008 2:13 pm
Real Name: Andy Brush

Change in Calibration Factor due to Filters or Leads

Post by abrush1 »

This article discusses the potential for finding a different calibration factor after changing the resistance of the bolometer leads in the part of the circuit where the connection is two-wire. Examples of such cases are the insertion of a filter such as the TEGAM 1000027 common-mode choke, or the use of long leads in a two-wire hookup as with the TEGAM 1806A.

A quick review of the background: the bridge balancers, whether Type II, Type IV, or 1830A, all use low-offset, high-gain servo loops to maintain the resistance between the points where the source and sense leads connect at the set level, typically 200 Ohms. We assume that the user connects source and sense together (the "Kelvin" point) as close, electrically, to the actual thermistor as possible. With the 1806 or 1830A balancer, this connection is normally at the thermistor mount. With an 1806A or a Type II bridge such as the 1805B or H-P 432, the Kelvin point is in the meter.

Any resistance between the Kelvin point and the thermistors effectively reduces the resistance that the thermistors will settle at. For an extreme example, 1 Ohm of line resistance between the Kelvin point and the thermistors would settle the thermistors at 199 Ohms, so that the entire circuit is 200 Ohms.
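To make the arithmetic concrete, here is a minimal sketch of that settling behavior. The function name and the 200 Ohm set point are assumptions for illustration only; the servo holds the total resistance at the set point, so the thermistors absorb the difference:

```python
SET_POINT = 200.0  # resistance (Ohms) the servo maintains at the Kelvin point

def thermistor_resistance(line_resistance):
    """Resistance (Ohms) the thermistors settle at when line resistance
    sits between the Kelvin point and the thermistors."""
    return SET_POINT - line_resistance

print(thermistor_resistance(1.0))  # 1 Ohm of lead resistance -> 199.0
```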

It is complicated to calculate the impact of the added resistance exactly, because it takes slightly more power to settle the thermistors at a lower resistance. However, by assuming a small added resistance, and therefore that the power change is negligible, we find that the resulting error in reading is very nearly (again, for small resistance) the ratio of the added resistance to the desired balanced resistance. For example, a 1 Ohm insertion causes a 0.5% error in a 200 Ohm circuit. The attached spreadsheet gives an example of calculating the error of a power reading based on user-supplied conditions.
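The small-resistance approximation above can be sketched as follows. This is not the attached spreadsheet; the function name is a hypothetical stand-in for the ratio described in the text:

```python
def fractional_error(added_resistance, balanced_resistance=200.0):
    """Approximate fractional reading error caused by series resistance
    added between the Kelvin point and the thermistors.
    Valid only when added_resistance << balanced_resistance."""
    return added_resistance / balanced_resistance

print(fractional_error(1.0))    # 1 Ohm in a 200 Ohm circuit -> 0.005 (0.5%)
print(fractional_error(0.020))  # 20 mOhm total -> 0.0001 (0.01%)
```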

In the case of the TEGAM 1000027 Common-Mode Filter, the filter itself adds approximately 10 mOhm in each leg, or 20 mOhm total, to the circuit. In a typical power reading using a 200 Ohm sensor, this changes the reading by 0.01%, which is negligible for most measurements. Even when used for 50 MHz port calibrations, this shift will not change the uncertainty by a material amount; however, since the impact is a systematic bias rather than random, it could be removed by calculation.

This discussion applies to the difference between readings made with and without filters. Therefore, it also applies to the use of standards that were calibrated without filters when used with filters, but NOT to the use of standards that are both calibrated and used with filters. If a standard has had its cal factors charted prior to adding a filter, then it is reasonable to expect a shift in the charted values, as described above, for calibrations after adding the filter.