This is slightly math related as well as Igor related. I have a bunch of decaying exponentials that I want to fit to obtain the time constant; however, the decay of the signal doesn't start at the beginning, it starts part way through. To put it another way: the signal sits at a constant value (say 10) for the first second (though the duration isn't always the same, it could be 3 seconds), then begins an exponential decay and falls to some background level B (say 1). So I can't just fit a decaying exponential unless I specify a range, but I have a thousand of these files, so I need to write a procedure to do it, and I can't look at every one to tell it where the decay starts.
Does anyone know of a different way to fit the exponential in my case with Igor? I know there is a way to fit exponentials using an FFT, although I don't know if that would work in my case.
Hi Michael. What I would try to do here is write a procedure that automatically finds the edges for me. I would store these edge locations in a wave (or two, my personal preference). I would then apply my fitting function over a range taken from the values in the storage wave. If you look at this thread, you can find an example of edge finding. Hope this helps. I am a little concerned by your off-hand mention of thousands of files... any way to concatenate your data for a smaller number of analyses?
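Just to sketch what I mean (the function, wave, and variable names here are only placeholders, and the exact edge criterion is up to you), something along these lines finds the first downward crossing of a chosen level and stores its location for later use as the start of the fit range:

	// Sketch only: find where one trace first drops through a chosen level
	// and remember that X location in a storage wave.
	Function StoreEdge(data, edgeX, index, level)
		Wave data         // one measured trace
		Wave edgeX        // storage wave, one edge location per file
		Variable index    // row of edgeX to fill
		Variable level    // Y level that marks the start of the decay

		FindLevel/Q/EDGE=2 data, level      // first decreasing crossing; sets V_flag and V_LevelX
		edgeX[index] = (V_flag == 0) ? V_LevelX : NaN
	End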
I don't see that there is anything wrong with analysing thousands of files; it's quite a common occurrence. One just needs to have a robust analysis that copes with the unexpected.
How about this fit function? It should do the trick. When x is less than the delay, the fit function returns a constant value; when x is greater than the delay time, you get an exponential decay.
You may want to set constraints on the offset, to make sure it's positive. You can do that in the curvefitting dialogue.
Function const_exp(w, tt) : FitFunc
	Wave w
	Variable tt
	//w[0] is the time offset before acquisition. You can make sure it's positive by using constraints
	//w[1] is the intensity of the exponential/constant part
	//w[2] is the decay constant

	if(tt < w[0])
		return w[1]
	else
		return w[1]*exp(-(tt-w[0])/w[2])
	endif
End
You will probably also need to use an epsilon wave to set the epsilon for w[0] to something larger than the X difference between successive points. Otherwise, with a small epsilon, you will probably get zero derivatives, and that causes singular matrix errors.
Another potential problem is that if the start of the decay is a long way into the data set, the part that isn't decaying may swamp the part that is.
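Putting the constraint and epsilon suggestions together, a call from a procedure (rather than the dialog) might look roughly like this; the wave name data and the initial guesses are just placeholders for your own values:

	Make/D/O coefs = {0.5, 10, 2}                    // initial guesses: w[0]=offset, w[1]=amplitude, w[2]=tau
	Make/O/T constraints = {"K0 > 0"}                // K0 is w[0]; keep the offset positive
	Make/D/O epsWave = {2*deltax(data), 1e-6, 1e-6}  // epsilon for w[0] bigger than the X spacing
	FuncFit/Q const_exp, coefs, data /D /C=constraints /E=epsWave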
I like the level-finding solution. You can use the Y values at the start and end of the data set as a guide for choosing the level. Look for a level crossing that is, say, one tenth of the way toward the ending value from the starting value.
Things to watch out for:
1) if the data are noisy compared to the decay, you may need to do the search for a level crossing on smoothed data. Otherwise you may find noise crossings.
2) If some of the data sets have very small decays, you may have to be quite careful how you search for it.
Sometimes a bunch of data sets like this will all be very similar, so if you find the decay in the first, that's a good guide to finding it in the rest. Then the problem of manual intervention becomes one of just finding the decay in the first data set, then let the batch-fitting routine do the rest.
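As a rough sketch of how those pieces could fit together for one trace: the smoothing width, the 10% threshold, and the wave names below are assumptions to adjust, and I've used Igor's built-in exp_XOffset fit over just the decaying range rather than the const_exp function above, since its y0 term also picks up the background level B.

	Function FitOneTrace(data)
		Wave data

		// Search for the level crossing on a smoothed copy so noise doesn't fool FindLevel
		Duplicate/O data, smoothed
		Smooth 11, smoothed

		Variable startLevel = mean(smoothed, pnt2x(smoothed, 0), pnt2x(smoothed, 9))
		Variable endLevel = mean(smoothed, pnt2x(smoothed, numpnts(smoothed)-10), pnt2x(smoothed, numpnts(smoothed)-1))
		Variable threshold = startLevel - 0.1*(startLevel - endLevel)   // one tenth of the way toward the ending value

		FindLevel/Q/EDGE=2 smoothed, threshold
		if (V_flag != 0)
			return NaN                     // no crossing found; flag this file for a manual look
		endif

		// Fit only the decaying part, from the detected crossing to the end of the trace.
		// exp_XOffset fits y0 + A*exp(-(x-x0)/tau), so the background ends up in y0.
		CurveFit/Q exp_XOffset, data(V_LevelX, pnt2x(data, numpnts(data)-1)) /D
		Wave W_coef
		return W_coef[2]                   // tau
	End

A batch routine can then loop over the loaded waves and call something like this for each one, collecting the returned time constants.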
John Weeks
WaveMetrics, Inc.
support@wavemetrics.com
December 2, 2008 at 10:33 am - Permalink