Diagnosing IntegrateODE timestep getting very small
Hi there,
I have a set of simulations where the timestep gets very small and I can't figure out why.
Here is the code, modeled very closely on examples for IntegrateODE in the manual:
Function RSFSim_Dietrich_Shroff(pw, tt, yw, dydt)
	Wave pw		// pw[0] = speed, pw[1] = CantStiff, pw[2] = mu0, pw[3] = a, pw[4] = b, pw[5] = Dc, pw[6] = V0, pw[7] = m
	Variable tt	// time value at which to calculate derivatives
	Wave yw		// yw[0]-yw[2] contain V, TipX, Theta
	Wave dydt	// wave to receive dV/dt, dTipX/dt, dTheta/dt (output)

	dydt[0] = ( (pw[0]*tt-yw[1])*pw[1] - (pw[2]+pw[3]*log(1+yw[0]/pw[6])+pw[4]*log(1+pw[6]*yw[2]/pw[5])) )/pw[7]	// dV/dt
	if (dydt[0] < -1e5 || dydt[0] > 1e5)	// diagnostic: log suspicious accelerations
		String str
		// note: dydt[1] and dydt[2] still hold values from the previous call here
		sprintf str, "dydt %g %g %g, yw %g %g %g tt %g\r", dydt[0], dydt[1], dydt[2], yw[0], yw[1], yw[2], tt
		Notebook Notebook1 text=str
	endif
	if (dydt[0] < 0 && yw[0] <= 0)	// dV/dt < 0 while V <= 0, so the tip can't go backwards
		dydt[0] = 0
	endif
	dydt[1] = yw[0]		// dTipX/dt
	if (dydt[1] < 0)	// prevent the tip from going backwards
		dydt[1] = 0
	endif
	dydt[2] = 1 - yw[0]*yw[2]/pw[5]	// dTheta/dt
	return 0
End
Function Solve_RSF_ODE_Dietrich_Free_Shroff(Points, Scale, Speed, CantStiff, mu0, a, b, Dc, V0, m)
	Variable Points, Scale, Speed, CantStiff, mu0, a, b, Dc, V0, m

	Make/D/O/N=(Points,3) RSFSol
	Make/O/D/N=(Points) FreeRunX			// same length as RSFSol
	FreeRunX = NaN							// prevent display of extra points
	FreeRunX[0] = 0							// initial value of X
	//SetScale/P x 0,Scale,RSFSol			// calculate every Scale s
	RSFSol = NaN							// don't put empty cells in the graph
	SetDimLabel 1,0,V,RSFSol				// set dimension labels to variable names
	SetDimLabel 1,1,TipX,RSFSol
	SetDimLabel 1,2,Theta,RSFSol
	RSFSol[0][%V] = 0						// initial conditions: V[0] = 0
	RSFSol[0][%TipX] = 0					// TipX[0] = 0
	RSFSol[0][%Theta] = 0.0001				// small nonzero Theta[0] to allow evolution in the Ruina case
	Make/D/O Params={Speed,CantStiff,mu0,a,b,Dc,V0,m}	// sim parameters
	//Display RSFSol[][%Theta]				// graph speed profile
	Display RSFSol[][%Theta] vs FreeRunX	// make an XY graph
	ModifyGraph mode=3, marker=19			// plot with dots to show the points
	ModifyGraph msize(RSFSol)=1
	SetAxis left 0,0.0014					// change these to relevant ranges
	SetAxis bottom 0,0.01
	// note: graph made with a subrange of the wave
	Make/O/D errScale={1,1e3,1e4}
	IntegrateODE/M=2/X=FreeRunX/XRUN={1e-6,0.1}/E=1e-6/S=errScale RSFSim_Dietrich_Shroff, Params, RSFSol
	// NEED TO CHANGE XRUN and possibly /E and /M (=1 at 100 nm/s, which worked well)
	Make/D/O/N=(Points) SimLF				// calculate the lateral force
	SetScale/P x 0,Scale,SimLF
	SimLF[] = (FreeRunX[p]*Speed - RSFSol[p][1])*CantStiff
End
I run it with parameters like the following:
Solve_RSF_ODE_Dietrich_Free_Shroff(1e6,1e-6,10000,0.005,0.052,0.025,0.028,5,3,1e-12)
The system is an anchor moving at constant speed, attached through a spring to a slider whose friction coefficient depends on its sliding history. Once the block starts moving, the constraint where I artificially set the acceleration dydt[0] = 0 and the velocity dydt[1] = 0 (so the block can't slide backwards while friction exceeds the spring force) becomes irrelevant, and the system seems like it should be well behaved. But in this regime the simulation bogs down with very small timesteps, or sometimes stops and warns me about too-small timesteps. Here is what I have tried:
- Increasing the global error value with /E= doesn't usually help; often it makes no difference at all until, at something like /E=1e2, the run completes instantly and spits out nonsense.
- Changing the integration method (/M=) has a big effect but doesn't solve the problem in my case.
- Since the output values span a few orders of magnitude, I added an error scaling wave (/S=) and played with the relative error limits, but couldn't speed things up that way.
- The notebook-logging code still visible in the derivative function prints the relevant variables whenever dydt[0] acts strangely, but that portion of the integration turns out not to cause the slowdown; the slowdown comes mostly later, while the block is speeding up but hasn't yet matched the puller's speed.
I'm a little unclear on what is being compared to what when deciding the timestep. My understanding is that the truncation error must stay below my /E= limit, which as far as I can tell means the following (using the names in my code): V, TipX, and Theta are each updated as V + dydt[0]*timestep, TipX + dydt[1]*timestep, and Theta + dydt[2]*timestep; the same values are also updated over the same timestep, but with the dydt's computed by some low-cost, lower-order integration method. Each pair of V, TipX, and Theta values is then subtracted, and that difference is what gets compared to the error level; if the error exceeds the threshold, the timestep is reduced and the step recalculated.
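To make that concrete, here is a sketch of the acceptance test I have in mind (the CheckStepError function and the yHigh/yLow names are purely hypothetical illustrations, not Igor internals):

Function CheckStepError(yHigh, yLow, errScale, errLimit)
	Wave yHigh, yLow	// candidate updates of {V, TipX, Theta} from high- and low-order formulas
	Wave errScale		// per-component scaling, like my /S=errScale wave
	Variable errLimit	// the /E= tolerance

	// truncation error estimate: scaled difference between the two candidate updates
	Make/FREE/D/N=(numpnts(yHigh)) err
	err = abs(yHigh[p] - yLow[p])/errScale[p]
	WaveStats/Q err				// sets V_max to the largest scaled error
	return V_max <= errLimit	// 1 = accept the step; 0 = shrink the timestep and retry
End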
So my question is twofold: is my understanding of how the truncation error is calculated correct, and is there a way to print out, at runtime, the parameters that would help me figure out where the big errors that are slowing things down come from? I think the code I already have, which prints parameters to the notebook, gives me the values I need to calculate the output of each integration step by hand, but I don't think it gives me the lower-order approximation values that go into the truncation error calculation.
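In the meantime, here is the kind of runtime logging I mean, extended to watch how tt advances between calls (a rough sketch for the top of RSFSim_Dietrich_Shroff; the gNCalls/gLastTT globals and the 1000-call cadence are just my choices, and since the integrator evaluates the derivatives several times per accepted step, dtt only shows where it is working hard, not the accepted step size):

	// diagnostic block for the top of RSFSim_Dietrich_Shroff
	Variable/G gNCalls, gLastTT		// Variable/G leaves an existing global's value alone
	gNCalls += 1
	if (mod(gNCalls, 1000) == 0)	// log every 1000th call to keep the notebook manageable
		String s
		sprintf s, "call %d: tt=%g (dtt=%g) V=%g Theta=%g\r", gNCalls, tt, tt - gLastTT, yw[0], yw[2]
		Notebook Notebook1 text=s
	endif
	gLastTT = tt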
I should mention that I have since discovered that one of the Igor example experiments is about exactly my physical system, and that it says BDF is the only good algorithm for a stiff system like this one. But when I try /M=3 (BDF), the time step seems to grow very quickly, leading to gross errors; /M=1 does this too. I have had the best luck with /M=2, especially if I increase the mass of the slider. I assume the extra mass helps by not letting dV/dt change too quickly, but at the masses where the integration completes in a reasonable amount of time (<30 min), I get obvious inertial effects.
December 29, 2020 at 09:01 pm - Permalink
BDF is the only method that has a real hope of doing the job. This is a classic stiff system: if you don't use BDF, then at slow speeds the other methods have to take tiny steps just to compute the fact that the inertial force is zero. Those tiny steps are required by the time constant of the inertial system, which is related to the period of the spring-mass system. If you use a non-stiff method on a stiff system, you take a huge number of tiny steps that accumulate numerical error at every step. That can, indeed, make the solution quite different from the BDF solution.
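To put rough numbers on it, taking the parameters in your call above at face value (m = 1e-12, CantStiff = 0.005):

	Print 2*pi*sqrt(1e-12/0.005)	// spring-mass period: about 8.9e-5

so a non-stiff method is forced to step at some fraction of ~1e-4 across the whole 0.1 run, even where nothing interesting is happening.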
You say that BDF leads to gross errors. What sort of errors? You are doing the solution in free-run X mode. Are the points simply too far apart in time?
January 4, 2021 at 01:08 pm - Permalink
I believe the BDF solution is just wrong (with /E=1e-12, anyway). It gives very sparse points, which is not ideal, but I know I can run it non-free-run to avoid that. I've attached plots of results from the code above, changing only the method (/M=) and the mass of the slider, as noted in the legend. The runs with /M=1 and /M=2 behave as I'd expect for the puller moving at 10,000 nm/s with the slider initially stopped: at the highest mass, inertial effects are obvious, and they go away as I lower the mass, though the simulations take much longer. I used /M=2 for the lowest mass because it was the only method that would give a reasonable result in <15 min or so on my laptop. The BDF simulations don't really evolve at all, as you can see from the graph where they are plotted by themselves. /M=0 gives similar nonsense, if I recall correctly.
I'm also trying to do a massless simulation like the one in the example, but I'm running into problems there as well; I think letting the slider stop completely, as happens in my experimental system, is the culprit. So I'd rather get these simulations with inertia working if possible, since I've gotten pretty far with them already.
January 4, 2021 at 02:46 pm - Permalink
I just tried running BDF with free-run turned off, hoping that forcing a higher density of integration steps would solve the problem, but it makes no difference at all. I attach the comparison. I wouldn't expect the non-free-run trace to look like a simple interpolation between the sparse free-run points, which I assume are somewhat arbitrary and don't represent real kinks in the slider speed curve.
January 4, 2021 at 03:30 pm - Permalink
No, it shouldn't just interpolate; it should make a smooth curve (presumably). If the solution were oscillatory, I'd expect the free-run version to have more points, such that the oscillations showed up, if only at poor resolution. I can't really tell what you have done just from the pictures, so I can't say what's going on there.
My fading recollection of my work 25 years ago is that zero velocity simply wasn't within the regime of the equations we were using (there is a log(v) term in there somewhere). The systems we were modelling were rock friction systems where mu0 (the constant term in the friction equation) had a value around 0.6 and the dynamic effects were just a few percent. That made it possible to display instability (we were interested in earthquakes), but the overall friction was too high for the spring to cause oscillations, and the velocity was always strictly positive. Is that possibly part of the problem?
In my own tests, for not-very-stiff systems BDF gives answers that are quite close to the other methods, just less efficiently. For stiff systems, they agree up to the point where a non-stiff method starts stepping so slowly that it no longer progresses. So I'm not sure what's going wrong here. But I could imagine that if the derivative function returned bad results (NaN or Inf, say), the different methods might go wrong in different ways.
January 4, 2021 at 03:51 pm - Permalink
I think this is likely part of the problem. I was just emailing with a geologist at Sandia who explained that in macroscale rocks the overall friction coefficient never goes negative, because that would require speeds of something like 10^-16 m/s, which is experimentally unresolvable. I'm working at the nanoscale, where the friction coefficient is very low to start with, so the standard RSF law actually gives negative friction coefficients at speeds attainable and measurable in my experiments. I am using a modified RSF law that avoids outright negative friction coefficients by adding a one inside the direct-effect log, i.e. mu = mu0 + a*log(1 + V/V0) + ..., but the simulations still seem to get stuck even though the modified behavior should be pretty smooth, and BDF is fast but gives me nonsense for reasons I can't figure out. Printing the parameters to a notebook every integration step, as I was doing, would help, but I can't figure out how to access the values calculated with the lower-order method that go into the truncation error. If I could calculate the truncation error by hand from the notebook values when the integration slows down, I could probably pin down where the slowdown (or, for BDF, the outright error) is coming from.
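Written out, the full modified law as implemented in my derivative function above is

	mu = mu0 + a*log(1 + V/V0) + b*log(1 + V0*Theta/Dc)

which stays finite as V and Theta go to zero, unlike the standard form with bare logarithms.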
January 4, 2021 at 04:12 pm - Permalink
You really only have access to the info passed to your derivative function. All that truncation error stuff is computed internally in Igor; in the case of the Adams-Moulton method, it is inside the SUNDIALS ODE package, which isn't even my code. If you are getting some "funny" numbers, you might check the inputs to your derivative function using NumType(), looking for a non-zero result (which means NaN or Inf).
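For instance, something like this at the top of your derivative function would flag bad inputs (a sketch only; I'm reusing the Notebook1 name from your code):

	// input check for the top of RSFSim_Dietrich_Shroff
	if (numtype(tt) != 0 || numtype(yw[0]) != 0 || numtype(yw[1]) != 0 || numtype(yw[2]) != 0)
		String s
		sprintf s, "bad input: tt=%g yw %g %g %g\r", tt, yw[0], yw[1], yw[2]
		Notebook Notebook1 text=s
	endif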
I've been out of this for a very long time; what does RSF stand for? I worked with Terry Tullis at Brown U. on a big torsion rig. Lots of fun!
January 4, 2021 at 05:06 pm - Permalink
I actually knew you worked with Terry Tullis. One of my advisors is David Goldsby, and when I told him about another problem you helped me with last year, he knew who you were and told me to say hi. RSF is rate-and-state friction.
Thanks for the insight. I guess I'll keep looking at the parameters passed to and from the derivative function, and maybe have it print them to a notebook every time NumType() is non-zero. I'm also going to rerun the experiments at higher loads, which in atomic force microscope experiments can push the friction coefficient quite a bit higher and perhaps keep my simulations out of trouble.
January 4, 2021 at 05:44 pm - Permalink
Ah, David Goldsby. He was my successor in Terry's lab!
And now that you have refreshed my memory, RSF makes perfect sense. Good luck with it; my bet is that making it possible to oscillate will be tricky mathematically. As I recall, one of Jim Dieterich's experiments was to slide, then stop for varying lengths of time. But that wasn't actually testing zero sliding velocity, because the sample continued to creep while under load. I think you are getting into new territory in this stuff!
What is the loading spring in an AFM?
January 5, 2021 at 11:20 am - Permalink
That's him. David is a professor at UPenn now. The experiment you mention, called slide-hold-slide these days, is pretty standard in rock friction research, and probably the main one we do in the AFM as well. While I think asperity plastic creep is still the primary mechanism geologists use to explain how static (or near-static) friction increases over time via growth in real contact area, my predecessor on the project showed pretty clearly that silica contacts can also strengthen via chemical bond formation across the interface during truly static aging, so my work is still pushing along those lines. I am planning to simulate variations on the slide-hold-slide experiments I have done, to see whether the experimental response violates the empirical rate-and-state friction laws, while some collaborators do atomistic simulations to get at the physical mechanisms.

The spring we use to measure forces in AFM is the cantilever the sharp tip is attached to. By sliding the tip on the surface perpendicular to the long axis of the cantilever, the friction induces a torsion of the cantilever that we measure via a laser beam reflected off the cantilever onto a position-sensitive detector. It's typically a very soft spring, so with the right experimental conditions you can induce the sort of instabilities you used to investigate.
January 5, 2021 at 11:43 am - Permalink
This is cool! It's fun to get a 25-year refresher.
January 5, 2021 at 01:20 pm - Permalink
I'm glad you enjoyed it. I was hoping it might be interesting.
January 5, 2021 at 01:46 pm - Permalink
One final postscript: you actually did solve my problem. I figured out how to modify the massless simulation in your example experiment for the particular rate-and-state law I am using, and now it works like a charm. It integrates very, very fast, and it matches my simulations with inertia at the smallest masses, the ones that took forever to integrate.
January 5, 2021 at 07:54 pm - Permalink
Terrific! My examples were written maybe 20 years ago. I'm glad they were of use.
January 5, 2021 at 10:43 pm - Permalink
By the way, we no longer ship the original RK solver I wrote while I was still at Brown U. It was written entirely in Igor code, perhaps around Igor 2.4. Igor has come a long way since then.
January 5, 2021 at 10:45 pm - Permalink
It's pretty great. I gifted myself a personal copy at the academic price for Christmas a couple of years ago, so I'd always have it if I ever went to a lab or industry job that didn't already have a license for staff. My boss bought me NIDAQ Tools MX last year, and that has let me do customized experiments that are impossible with standard AFM control software, except on the Asylum MFP-3D AFM, where experiments are customizable because the control software is built within Igor. I run basically all my AFM experiments through NIDAQ Tools MX these days.
January 6, 2021 at 07:59 am - Permalink
Wow! That's pretty terrific. You are heavily into code I wrote myself.
January 6, 2021 at 09:44 am - Permalink