Thermocouple


Leslie,

Your worksheet is superb.

I have added the second method. It goes well with
Robert's SQUARE ROOT technique for the index of
refraction. In the case of the T/C, the Y/X technique
is extraordinarily effective.
We use it routinely when looking for best approximations.
Usually, adding one more term to an approximation
improves accuracy by a factor of about 10 (typical of polynomials).
Here, we don't add a term, only one extra operation,
and accuracy improves by a factor of roughly 200 ! ... remarkable.
I have known this for over 15 years.
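
As a rough illustration of the Y/X idea (going by the later posts, where the reduced set is T/mV and one returns to T by multiplying back by mV), here is a minimal Python sketch on a made-up curve. The data, the stand-in np.polyfit in place of the worksheet's rational fit, and the handling of mV = 0 are all assumptions, not the worksheet's actual method:

import numpy as np

# Hypothetical thermocouple-like table (NOT the real type-T values).
mV = np.linspace(0.2, 20.0, 50)
T = 24.0 * mV - 0.55 * mV**2 + 0.018 * mV**3 + 0.5 * mV * np.log1p(mV)

# The "one extra operation": form the reduced set u = T/mV (skip any mV = 0
# point), fit the reduced curve, then multiply back by mV to recover T.
u = T / mV
fit_u = np.poly1d(np.polyfit(mV, u, 4))   # stand-in fit of the reduced data
T_from_reduced = mV * fit_u(mV)

# Direct fit of T with the same number of coefficients, for comparison.
fit_T = np.poly1d(np.polyfit(mV, T, 4))
print(np.max(np.abs(fit_T(mV) - T)), np.max(np.abs(T_from_reduced - T)))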

I hope you will add it to your worksheet and show NI.

jmG

I'm not seeing the spikes in the data when I use the ThieleCoeff and ThielePoly functions in the last RationalFunctions dll I posted. I'm not sure where the issue lies, but it merits further investigation into things such as data ordering and the Padé approximation step.
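
For anyone without the dll, here is a minimal, generic Thiele continued-fraction sketch in Python: reciprocal differences plus bottom-up evaluation. It is the textbook construction with my own function names (thiele_coeffs, thiele_eval), not the ThieleCoeff/ThielePoly code in the RationalFunctions dll, and it has no guard against divide-by-zero breakdowns:

import numpy as np

def thiele_coeffs(x, y):
    # Reciprocal differences for a Thiele continued-fraction interpolant.
    # Returns a[0..n-1] such that
    #   f(t) ~ a[0] + (t - x[0])/(a[1] + (t - x[1])/(a[2] + ...))
    n = len(x)
    rho = np.zeros((n, n))
    rho[:, 0] = y
    for k in range(1, n):
        for i in range(n - k):
            prev = rho[i + 1, k - 2] if k >= 2 else 0.0
            rho[i, k] = (x[i] - x[i + k]) / (rho[i, k - 1] - rho[i + 1, k - 1]) + prev
    a = np.empty(n)
    a[0] = rho[0, 0]
    for k in range(1, n):
        a[k] = rho[0, k] - (rho[0, k - 2] if k >= 2 else 0.0)
    return a

def thiele_eval(x, a, t):
    # Evaluate the continued fraction bottom-up at the point t.
    val = a[-1]
    for k in range(len(a) - 2, -1, -1):
        val = a[k] + (t - x[k]) / val
    return val

# Quick self-check on a curve with 1/x character.
xs = np.linspace(1.0, 5.0, 7)
ys = np.exp(xs) / xs
a = thiele_coeffs(xs, ys)
print(thiele_eval(xs, a, 2.3), np.exp(2.3) / 2.3)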

Robert

On 8/24/2003 7:34:45 PM, study wrote:
>Usually spikes don't happen
>except when the point of
>interpolation is extremely
>close to one of the roots in
>the rational fraction
>expression. When the one in
>the numerator & the one in the
>denominator are close in value
>to each other, not much of a
>spike takes place. Spikes
>commonly occur when a point is
>close to a root of the
>denominator.
>
>It is good to use the common
>rational fraction algorithm
>that doesn't post the coeffs.
>& a vast search is needed with
>it to find any spikes.
>

I assume by this you want to use the rational fraction algorithm that returns polynomial coefficients. That's fine, but it fails catastrophically more often than the continued fraction evaluation.

>From a practical engineering &
>lab standpoint, spikes in
>smooth data can just be
>ignored.

Spikes can cause major problems if the data is used in a numerical integration. Smooth data can also produce spikes far less sharp than the ones in this example, and those can still corrupt interpolated values in practical engineering and lab use.
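
To make the integration point concrete, here is a toy Python illustration (invented numbers, not the thermocouple worksheet): a spurious pole sitting very close to one quadrature node is enough to wreck a trapezoid integral even though the underlying curve is smooth:

import numpy as np

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

xg = np.linspace(0.0, 1.0, 2001)                # quadrature grid, spacing 5e-4
smooth = np.sin(xg)
spiked = np.sin(xg) + 1e-3 / (xg - 0.500001)    # interpolant with a spurious pole

print(trapezoid(smooth, xg))   # ~ 0.46, the answer the smooth data should give
print(trapezoid(spiked, xg))   # badly corrupted by the node nearest the pole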

>It seemed that exporting the
>coeffs. instead of a massive
>worksheet would be better.
>

I'm not sure what you want. The coefficients are exported to the worksheet. I suppose I could just list them without using them, but what would be the point? I only did the comparison with large numbers of points to see if there are algorithm issues. It is common to test for errors at the design stage.

>However, I cannot use MathCad
>to write my BASIC worksheet.

Was the dll implementation of the Basic program I did wrong? ThielePoly returns the polynomial coefficients. The first column is P and the second column is Q. I normalized to the largest value of Q, so one of the coefficients of Q is 1. The current version of the dll uses a different algorithm for Thiele than the Basic program because it seems to fail catastrophically slightly less often, so you need to grab one of the earlier posts for the Basic implementation.
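
For comparison with ThielePoly's two-column output, the same continued fraction can be unrolled into explicit numerator and denominator polynomials with the usual convergent recurrence. The function name thiele_to_poly and the guess that the normalisation (largest denominator coefficient set to 1) matches the dll are mine:

import numpy as np
from numpy.polynomial import Polynomial

def thiele_to_poly(x, a):
    # Unroll continued-fraction coefficients a[0..n-1] (nodes x) into explicit
    # polynomials N, D with f(t) ~ N(t)/D(t), via the convergent recurrence
    #   C_k = a_k*C_{k-1} + (t - x_{k-1})*C_{k-2}.
    # Normalised so the largest-magnitude denominator coefficient equals 1.
    num_prev, num = Polynomial([1.0]), Polynomial([a[0]])
    den_prev, den = Polynomial([0.0]), Polynomial([1.0])
    for k in range(1, len(a)):
        lin = Polynomial([-x[k - 1], 1.0])   # the factor (t - x_{k-1})
        num, num_prev = a[k] * num + lin * num_prev, num
        den, den_prev = a[k] * den + lin * den_prev, den
    scale = den.coef[np.argmax(np.abs(den.coef))]
    return num / scale, den / scale

# Typical use, with x and a from a thiele_coeffs-style construction:
#   N, D = thiele_to_poly(x, a);  value = N(t) / D(t)
# Evaluating the expanded N/D is generally less stable than evaluating the
# continued fraction directly, which matches the failure mode described
# earlier in the thread.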

Robert

Theodore,

We want the simplest possible formula for export.
We don't want to see splines.
For good use, go back a few days and keep the
last algorithm (the reverse-engineer method).
Your algorithm must calculate °C from the mV input.
Conformity of ± 0.01 °C is more than enough.
You would then need to recalculate the coefficients,
but there is no point because it has already been done.

jmG

Theodore,

Base your work on the "thermoII(030824).mcd"
just attached in the previous posting.
Start from scratch: mV is the input variable from which you organise the algorithm and calculate the coefficients that will reproduce T.

Forget cubic splines: at their best they are equivalent to 3rd-order polynomials, and this data set cannot be approximated by such low-order polynomials. Therefore a cubic spline cannot be introduced for any purpose in this worksheet.

jmG

Hello Theodore,

You got it wrong again!

The column mV (millivolts) is the input (x axis).
The project objective was to define an algorithm that converts and linearises the mV input into T in °C.
The top part of the worksheet gives a very good algorithm, but an extra twist does better: below the first Marlett, we treat the original data set for greater accuracy, which gets rid of what could happen near 0. Then, for your program to be of interest in comparing results, take the reduced data set (below the first Marlett) and calculate the coefficients with the same orders as we did, 6, 6.

Note that below the second Marlett we check the segments to their full digit content. We have to distribute the points to accommodate the maximum number of plot points.
As you can see, there are no spikes, bumps, or glitches ... (no unexpected results).
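
For anyone reproducing the comparison outside Mathcad, here is a minimal one-pass linearised least-squares rational fit of orders (6, 6) in Python. The helper names (ratfit_linear, rat_eval), the toy curve, and the absence of any refinement pass are assumptions; this is a stand-in for the worksheet's 6 x 6 fit, not a copy of it:

import numpy as np

def ratfit_linear(x, y, m=6, n=6):
    # One-pass linearised least-squares rational fit of orders (m, n):
    # solve  y*(1 + b1*x + ... + bn*x^n) ~ a0 + a1*x + ... + am*x^m
    # in the least-squares sense.  No reweighting or refinement step.
    X = np.vander(x, m + 1, increasing=True)                       # 1, x, ..., x^m
    Y = -y[:, None] * np.vander(x, n + 1, increasing=True)[:, 1:]  # -y*x, ..., -y*x^n
    c, *_ = np.linalg.lstsq(np.hstack([X, Y]), y, rcond=None)
    return c[:m + 1], np.concatenate(([1.0], c[m + 1:]))           # a, b

def rat_eval(a, b, t):
    return np.polyval(a[::-1], t) / np.polyval(b[::-1], t)

# Hypothetical smooth curve standing in for the reduced (T/mV vs mV) data set.
mV = np.linspace(0.2, 20.0, 60)
u = 24.0 - 0.55 * mV + 0.018 * mV**2 + 0.5 * np.log1p(mV)
a, b = ratfit_linear(mV, u)
print(np.max(np.abs(rat_eval(a, b, mV) - u)))   # conformity of the fit on the nodes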

jmG




On 8/26/2003 1:59:06 AM, study wrote:
>I get it OK now. I have
>re-worked everything &
>searched for glitches/spikes,
>etc. This worksheet replaces
>the previous one.
>
>Thanks for setting me
>straight.
____________________________
Very nice work, Theodore.
The problem encountered is caused by the tabulation not being exact and by using methods that pass exactly through the points. We could still use Thiele, but truncating the mV data by one digit.
No problem: the least-squares-based 6 x 6 fit is foolproof.

Thanks Theodore, what's your version? I suppose 11.1.

jmG



I did the "J" the same way as the "T".
For consistency, we should keep the well-proven
method, i.e. the reverse-engineer method,
the same as in the last part of the T worksheet.

jmG

Theodore,

I didn't look, but there is no need: de Prony is of no use here.
The thermodynamic points are not equally spaced,
so from the start de Prony can't be applied.
Very high accuracy is pointless too, because not all
points (on the thermodynamic scale) are known
with equal accuracy; some have 3 digits, some only 2.
A fit at ± 0.01 °C is the best we can expect.

jmG


>Very high accuracy is pointless
>too, because not all points
>(on the thermodynamic scale)
>are known with equal accuracy;
>some have 3 digits, some only 2.
>A fit at ± 0.01 °C is the best
>we can expect.
>
>jmG

This raises a point I actually hadn't looked at closely in the Thermocouple data. If the reported calibration values are accurate to the last significant figure, then the calibration function shouldn't interpolate them, because they are off by up to half a digit (digital noise). The calibration function should only need to round to the calibration values rather than pass exactly through them, so something like a parametric B-spline using the Spline2 statistics might be a good way to go. The statistics help to ensure that the noise in the final data is random rather than exhibiting structure associated with the fit points.
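
Spline2 is a Mathcad add-in, but the idea of smoothing to within the quantisation rather than interpolating can be sketched with an ordinary smoothing spline. Everything below (the grid, the made-up response, the use of scipy's UnivariateSpline with a target residual of about N*delta^2/12) is an illustrative assumption, not the Spline2 procedure:

import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical calibration table, quantised to 0.001 mV in the last digit.
T = np.linspace(-200.0, 400.0, 61)             # deg C grid (made up)
true_mV = 0.04 * T + 3.0e-5 * T**2             # made-up smooth response
mV = np.round(true_mV, 3)                      # reported values: off by up to half a digit

# Quantisation noise has variance delta^2/12 per point, so ask the smoothing
# spline for a residual sum of squares near N*delta^2/12 instead of forcing
# the curve through every rounded value.
delta = 1.0e-3
spline = UnivariateSpline(T, mV, k=3, s=len(T) * delta**2 / 12.0)

print(np.max(np.abs(mV - true_mV)))        # error left in the rounded table itself
print(np.max(np.abs(spline(T) - true_mV))) # error of the smoothed calibration curve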

Robert

On 8/23/2003 10:41:13 AM, jmG wrote:
>I have added the second method. It goes
>well with Robert's SQUARE ROOT technique
>for the index of refraction. In the case
>of the T/C, the Y/X technique is
>extraordinarily effective. We use it
>routinely when looking for best
>approximations. Usually, adding one more
>term to an approximation improves
>accuracy by a factor of about 10
>(typical of polynomials). Here, we don't
>add a term, only one extra operation,
>and accuracy improves by a factor of
>roughly 200 ! ... remarkable.

Remarkable indeed. A few things about this - first, why does this work?

Second, although it's not your problem, I used the transformed data with rationalfit and I couldn't get the same accuracy out, which is very disturbing indeed. Supposedly the rationalfit function does exactly what Robert's "magic genfit" thing does, but with a "better" solver, only it's not. I've reported it to the developers, but it's deeply disturbing to me...

On 9/4/2003 2:12:47 PM, lbond wrote:
>On 8/23/2003 10:41:13 AM, jmG wrote:
>>[jmG's Y/X post of 8/23/2003, quoted in full above]
___________________________

> Leslie replied:

>Remarkable indeed. A few things about
>this - first, why does this work?
>
>Second, although it's not your problem,
>I used the transformed data with
>rationalfit and I couldn't get the same
>accuracy out, which is very disturbing
>indeed. Supposedly the rationalfit
>function does exactly what Robert's
>"magic genfit" thing does, but with a
>"better" solver, only it's not. I've
>reported it to the developers, but it's
>deeply disturbing to me...
>____________________________

Leslie,

I think you will be a grandmother before we finish the DataAnalysis pack!
You are right, there is a failure in rationalfit: it does not completely implement Robert's "magic genfit". If it did, we would get about the same fit on the transformed data set with the same NUM/DENOM order. Like you say: disturbing.
Check this one too: in theory we could increase the order up to rows(data) - 1, yet at order 10 it starts reporting "undetermined fit"???

For your first question, why does it work: rational interpolation is the ideal method for curves that exhibit 1/x behaviour, which is also the form Thiele handles best. Polynomials do not approximate that form well, so the Y/X transformation, in this particular case, produces a very good new data set for a rational fit.
The remaining problem is that the algorithm is unclear compared with Robert's method; this is demonstrated in the attached worksheet.
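
A small Python illustration of that 1/x point, on an invented curve with a pole just outside the data range at x = -0.3; the fit orders and the simple linearised solve are arbitrary choices, not the worksheet's:

import numpy as np

# A curve with 1/x character: pole just outside the data range, at x = -0.3.
x = np.linspace(0.0, 5.0, 200)
y = 1.0 / (x + 0.3)

# Degree-8 polynomial least-squares fit.
p = np.polyfit(x, y, 8)
err_poly = np.max(np.abs(np.polyval(p, x) - y))

# [1/1] rational fit, y ~ (a0 + a1*x)/(1 + b1*x), linearised least squares.
A = np.column_stack([np.ones_like(x), x, -y * x])
a0, a1, b1 = np.linalg.lstsq(A, y, rcond=None)[0]
err_rat = np.max(np.abs((a0 + a1 * x) / (1 + b1 * x) - y))

print(err_poly, err_rat)   # the low-order rational wins by many digits here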

Hope it will work some day.

jmG






I just thought I'd comment that the square root option isn't a magic method I apply to get better results. It's the method that best matches the physics model. Any filtering method that biases the data towards a straight line before doing the rational function fit will considerably speed up convergence of the rational function fit and may have the side benefit of getting rid of unwanted poles.
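
One way to read "bias the data towards a straight line first", sketched on invented numbers (an analogue of the square-root trick, not Robert's refraction worksheet): if y is expected to behave like the square root of something nearly linear, fit y squared and transform back:

import numpy as np

x = np.linspace(0.0, 4.0, 40)
y = np.sqrt(1.0 + 2.0 * x + 0.02 * x**2)     # hypothetical, not refraction data

# Direct straight-line fit of y.
direct = np.polyval(np.polyfit(x, y, 1), x)

# Straighten first: y**2 is nearly linear, so fit it and transform back.
straightened = np.sqrt(np.polyval(np.polyfit(x, y**2, 1), x))

print(np.max(np.abs(direct - y)), np.max(np.abs(straightened - y)))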

Robert

Jean, I hooked up the reduced data set calculation in the Thermocouple document, but here's the thing that seems odd to me: the chi-squared does indeed improve for the reduced data, from 0.1 to about 0.009. However, if you go back to the original data by multiplying by x, you wind up with larger residuals, a larger chi-squared, and a pole in the fit function. All of these seem undesirable to me, since you're trying to calculate new values in the original data, not in the reduced data, right? It seems as though for any particular point you've actually reduced the accuracy you get in the temperature measurement. So I'm not sure what the real advantage is here. Can you clarify?
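
To pin down why the two chi-squared values are not directly comparable, here is a tiny Python sketch (all numbers invented): residuals on the reduced data u = T/mV map back to temperature residuals only after multiplying by mV, so the two sums of squares differ by x^2 weights:

import numpy as np

mV = np.linspace(0.5, 20.0, 50)
T = 24.0 * mV - 0.5 * mV**2            # hypothetical curve
u = T / mV                             # reduced data actually being fitted

# Pretend the fit leaves an error of about 1e-3 on the reduced data.
rng = np.random.default_rng(0)
resid_u = 1e-3 * rng.standard_normal(u.size)

chi2_reduced = np.sum(resid_u**2)            # what the reduced-data fit reports
chi2_original = np.sum((mV * resid_u)**2)    # what matters for the temperature
print(chi2_reduced, chi2_original)

So a smaller chi-squared on the reduced set does not by itself mean smaller temperature residuals; the x^2 weighting has to be carried back before comparing.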

Leslie,

Please attach your work sheet,
that's the best we can do, at the moment.

Jean

On 9/7/2003 2:29:15 PM, jmG wrote:
>Leslie,
>
>Please attach your work sheet,
>that's the best we can do, at
>the moment.
>
>Jean
_____________________________
Type_T_Mathcad RationalFit.mcd:

Below the Marlett is what I just did a few minutes ago.
It is true that RationalFit does not implement Robert's genfit method; if it did, the two would match. Maybe RationalFit does not include the second coefficient-refinement step?

RationalFit with parameters then has no chance of working.
It seems that the parameters act on the data set (or somewhere in the process) as noise;
in other words, they perturb the convergents.
That is not admissible in this type of interpolation, and consequently not in the construction of the rational function.

Worse is the no-pole claim (shown in red after the RationalFit with parameters).

If a data set is not too noisy, there is no possibility of poles. Poles can only be generated by rounding effects or, equivalently, by perturbed points or convergents. Granted, poles could exist outside the limits of the data set, but I have not seen that in any of the rational approximations of functions in the Hart collection.

Step 1: get Robert's genfit programmed correctly.

jmG




