Smoothing Data with an imposed condition

ClaudioPedrazzi

Hi everyone,

Here is my next challenge. The physical/mathematical problem behind it is to correct an artificially generated, spectrum-compatible accelerogram for earthquake structural simulation. I need to smooth (fit) the data in order to remove the drift that inherently appears when the data are integrated to obtain a velocity, while the initial velocity has to be zero.

So I want to use the sophisticated smoothing capabilities of MathCAD, for example loess() or kgltt(), but I would also like to be able to impose a condition such as y(0) = 0. I see no way to do that, so I decided to just fit an appropriate polynomial (without the constant term). That works, of course, but it is not exactly what I wanted (loess, for example, fits many local polynomials, giving a piecewise quadratic approximation).
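
For readers who want to try the same idea outside MathCAD, here is a minimal Python sketch of a least-squares polynomial fit without the constant term, which forces the fitted curve through the origin. The data and the cubic degree are invented for illustration and are not taken from the attached worksheet.

    import numpy as np

    # Invented example data: a "velocity" record with an artificial drift.
    t = np.linspace(0.0, 10.0, 200)
    v = 0.05 * t**2 - 0.3 * t + 0.4 * np.sin(3.0 * t)

    # Least-squares fit of a cubic WITHOUT the constant term, v(t) ~ c1*t + c2*t^2 + c3*t^3,
    # so the fitted baseline is exactly zero at t = 0.
    A = np.column_stack([t, t**2, t**3])           # design matrix with no constant column
    coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)
    baseline = A @ coeffs

    v_corrected = v - baseline                     # corrected velocity, still zero at t = 0
    print(baseline[0], v_corrected[0])             # both print 0.0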

Attached you find a small "playground" for testing what I mean.

Thanks a lot for any hints.

Claudio

Maybe the attachment tells you something.

Not bad, Valery 🙂, not bad!

That is: you just added a lot of dummy points at (0, 0) in order to "push" the resulting smoothed function towards the origin.

It's a good idea; if nothing else comes up I'll implement it. I was just hoping for a real imposed condition, like one does in a solve block; your example does not "impose" anything (in fact the smoothed function is not exactly 0 at x = 0).

On the other hand, with a greater number of dummy points at (0, 0) I can practically force the interpolating function to go as near to (0, 0) as I like.
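
As a rough illustration of this padding idea (not the attached MathCAD playground), here is a Python sketch using statsmodels' lowess as a stand-in for MathCAD's loess; the data and the pad count of 500 are invented.

    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    # Invented noisy data that does not pass through the origin.
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 300)
    y = 0.2 * x + 0.5 + 0.3 * rng.standard_normal(x.size)

    # Pad with dummy (0, 0) points to pull the smoothed curve towards the origin.
    n_pad = 500
    x_pad = np.concatenate([np.zeros(n_pad), x])
    y_pad = np.concatenate([np.zeros(n_pad), y])

    smoothed = lowess(y_pad, x_pad, frac=0.3, return_sorted=True)  # columns: x, fitted y
    print("smoothed value near x = 0:", smoothed[0, 1])            # close to, but not exactly, 0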

Thanks a lot and have a nice day

Claudio

with a greater number of dummy points at (0, 0) I can practically force the interpolating function to go as near to (0, 0) as I like.

No, that's an illusion. Increasing the number of (0, 0) dummies sometimes even results in moving the smoothed curve away from the origin. I don't know why, but see the file I posted below.

Also consider the effect that heavy padding might have on the overall appearance of the curve. It may make the whole smoothing action useless as the calculated function has a very bad fit with respect to the valid, real data points.

Hello Werner,

Werner Exinger wrote:

Also consider the effect that heavy padding might have on the overall appearance of the curve. It may make the whole smoothing action useless as the calculated function has a very bad fit with respect to the valid, real data points.

I absolutely agree... I believe the reason for the changing shape of the curve with more "dummy" points lies in the internal numerical fitting method. We are probably giving more and more weight to the zero, which is not what I want. Finally I decided to implement your morphing function (thanks a lot) with the cosine transfer function inside, but I morphed between the "loess"-interpolated curve and a carefully chosen linear function, which in my case gives better results than using a shifted (your "Fred") one.

[image: Unbenannt.PNG]

thanks a lot

Regards

Claudio

We are probably giving more and more weight to the zero, which is not what I want.

Obviously loess() is not a localized smoothing operator. I'm not sure how it works.

Finally I decided to implement your morphing function (thanks a lot) with the cosine transfer function inside,

The cosine sure was a quick hack and is by far not the best transfer function we can think of. Ideally we would need something whose derivatives of all orders are zero at 0 and 1. Maybe a clothoid or a cumulative distribution function would do better.
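
A minimal Python sketch of one such candidate, the classic exp(-1/x) based transition, whose derivatives of every order vanish at both ends; this is just an illustration, not the function used in the attached worksheet.

    import numpy as np

    def bump(x):
        # C-infinity building block: 0 for x <= 0, exp(-1/x) for x > 0.
        out = np.zeros_like(x, dtype=float)
        pos = x > 0
        out[pos] = np.exp(-1.0 / x[pos])
        return out

    def transfer(x):
        # Smooth 0 -> 1 transition on [0, 1]; ALL derivatives are zero at x = 0 and x = 1.
        return bump(x) / (bump(x) + bump(1.0 - x))

    x = np.linspace(0.0, 1.0, 11)
    print(np.round(transfer(x), 4))   # starts at 0.0, is 0.5 at x = 0.5, ends at 1.0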

You could smooth, then force zero. If your example is representative, the error would be quite small.

If you don't like the idea of shifting the whole smoothed function as fred suggested, you might consider a sort of morphing from the shifted to the non-shifted function just at the beginning. See attached.

You might combine this with Valery's suggestion of padding the data with a lot of (0, 0) points.

But you have to be aware that padding with too many zeros may affect the result in an undesirable way.

See here the result of 1, 1000 and 2000 padded zeros on the same dataset. I guess a value of 500 might be a good compromise.
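
To make the shift-and-morph idea concrete outside MathCAD, here is a rough Python sketch; the Savitzky-Golay smoother stands in for loess, and the cosine weight and the 2-second morphing window are arbitrary assumptions, not values from the attached worksheet.

    import numpy as np
    from scipy.signal import savgol_filter

    # Invented drifting data.
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10.0, 500)
    v = 0.1 * t + 0.4 + 0.2 * rng.standard_normal(t.size)

    smooth = savgol_filter(v, window_length=51, polyorder=2)  # smoothed curve, not 0 at t = 0
    shifted = smooth - smooth[0]                              # the "shift" idea: start at 0

    # Cosine transfer weight: 0 at t = 0, 1 from t = T_morph onward.
    T_morph = 2.0
    w = np.where(t < T_morph, 0.5 * (1.0 - np.cos(np.pi * t / T_morph)), 1.0)

    morphed = (1.0 - w) * shifted + w * smooth   # starts exactly at 0, equals the smoothed curve later
    print(morphed[0], morphed[-1] - smooth[-1])  # 0.0 and 0.0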

Perhaps the first question to ask is whether there is a good reason to even do this.

First, if your data is only simulated, why do you need the smoothing? Where does the error come from?

Second, smoothing algorithms are typically very localized, and imposing a constraint at 0 will have negligible effect except close to 0. In fact, if your data errors are as large as those you show in the fake data, then I would argue that even the correction at 0 is not significant. So what improvement in your end result do you expect to get by doing this?

Hi everyone,

I'm only now reading all your kind and useful answers. A few answers from my side:

1) the sample data are just sample data; I invented them and they are not particularly representative

2) the real underlying physical/mathematical problem is to generate a spectrum-compatible earthquake motion (for structural analysis), and a typical fitting curve may look like this:

[image: Unbenannt2.PNG]

(I did not provide the real data because the method is long and cumbersome and I wanted my problem exposed clearly and simply.) The curve shown is the integral of an artificially generated accelerogram, that is, it represents ground velocity. These artificially generated accelerograms do respect the spectrum, but they tend to "drift away" in time when you integrate them in order to get the velocity and then the displacement of the ground (you do not want the final simulated ground motion to end up 10 meters away from the initial point :-)).

The standard correction method is to fit the first integral (velocity), finding what is called the "baseline", subtract it from the uncorrected velocity, then differentiate the result back to get a "corrected acceleration", and integrate it to get a "corrected displacement".
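
Sketched in Python (SciPy assumed), the chain might look like this; the synthetic accelerogram and the cubic baseline through the origin are purely illustrative, not the procedure from the referenced eBook.

    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    # Toy "accelerogram" (random, decaying) -- invented for illustration only.
    rng = np.random.default_rng(2)
    dt = 0.01
    t = np.arange(0.0, 20.0, dt)
    acc = rng.standard_normal(t.size) * np.exp(-0.1 * t)

    vel = cumulative_trapezoid(acc, t, initial=0.0)    # uncorrected velocity; it drifts

    # Fit a baseline to the velocity, forced through (0, 0) by omitting the constant term.
    A = np.column_stack([t, t**2, t**3])
    coeffs, *_ = np.linalg.lstsq(A, vel, rcond=None)
    baseline = A @ coeffs

    vel_corr = vel - baseline                                   # corrected velocity, zero at t = 0
    acc_corr = np.gradient(vel_corr, t)                         # differentiate back -> corrected acceleration
    disp_corr = cumulative_trapezoid(vel_corr, t, initial=0.0)  # integrate -> corrected displacement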

It is a little complicated to explain, I know, but I can add the two following references:

- the eBook "DynamAssist", Version 6.1 ("Spectrum Compatible Time History"), which shows how the accelerogram is generated

- Newmark, "A Study of Vertical and Horizontal Earthquake Spectra", where the "baseline correction method" is used to correct experimentally measured accelerograms (page 131, Appendix A). The idea of correcting the velocity so that it is 0 at the beginning of the simulation is good and, to my knowledge, still in use today for artificially generated accelerograms. Here is an example from the classic Newmark paper, where it can be seen why I would like to use the "loess" MathCAD interpolation, namely because it would use multiple segments.

[image: Unbenannt3.PNG]

So I hope I have satisfied your curiosity, and I thank everybody for the insightful help. I think I can live with the idea of fitting + morphing as Werner Exinger suggested. Or, of course, fitting with a higher-order polynomial with zero constant term (but I already knew that; it was not my purpose).

Best regards

Claudio

To remove the drift, what you need is a high-pass filter. As far as that goes, you can take your pick: without some specific reason to pick one filtering method over another, they are all equal (applying a low-pass filter, i.e. smoothing or polynomial fitting, and subtracting the result from the original is one way to high-pass filter). Any high-pass filter will set the mean of the data to zero, because you have removed the zero-frequency component. What you are asking for is a high-pass filter that not only leaves the zero-frequency component but in fact adjusts it so that the starting point is at zero. I doubt there is any better way to do that than filtering the data and then offsetting the result.

One reason to pick a particular filtering method would be to have better control over the filter cutoff (location and steepness), so that you can more objectively define what is drift and what is not. A Fourier filter would probably work well. I've attached a worksheet that implements such a filter. It's not quite what you are asking for (it's a notch filter, not a high-pass), but it could easily be adapted.
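
This is not the attached notch-filter worksheet, but a minimal FFT-based high-pass sketch in Python that shows the idea; the 0.1 Hz cutoff and the hard (brick-wall) cutoff shape are arbitrary assumptions.

    import numpy as np

    def fft_highpass(y, dt, f_cut):
        # Zero out every Fourier component below f_cut (hard cutoff, illustration only).
        Y = np.fft.rfft(y)
        f = np.fft.rfftfreq(y.size, d=dt)
        Y[f < f_cut] = 0.0
        return np.fft.irfft(Y, n=y.size)

    dt = 0.01
    t = np.arange(0.0, 20.0, dt)
    drifting = np.sin(2.0 * np.pi * 2.0 * t) + 0.05 * t   # 2 Hz signal plus a slow drift
    cleaned = fft_highpass(drifting, dt, f_cut=0.1)       # drift (and the mean) removed

A hard cutoff like this can ring near sharp transitions; a tapered transition band or a time-domain filter behaves more gently.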

Hi Richard,

your answer (and example) is very interesting to me because my background is in structural analysis; I know (almost) nothing about signal processing, and I can see here that I would have needed it. I will have a deep 🙂 look at your filter and example as soon as I can. Just for your information, at the core of the procedure for generating the accelerogram (which I extended based on the example in the DynamAssist e-Book) there is a filter function, exactly for the purpose of reducing drift... but at that point we are still in the acceleration vs. time domain.

[image: Unbenannt.PNG]

I attached the (freely downloadable) example for everybody who is interested in it. The drift elimination done at the end of that example is not really recommended; it is not the way it is usually done (it works on displ(t) and not on veloc(t), which to my knowledge is the common approach). Also, it does not enforce initially zero velocity and displacement, as the physics of the problem suggests (before the earthquake ... everything is at rest!!).

[image: Unbenannt1.PNG]

Best regards and thanks a lot to everyone.

Claudio

Hello Pedrazzi,

Would you mind sending me a function to get rid of long-duration drift, as shown in the plot below?

Your help will be really appreciated and acknowledged.

[image: data with long duration drift.JPG]
