How are the constraints applied during fitting?

I did a 5-variable linear regression with positivity constraints on the coefficients, like this:

•T_Constraints[0] = {"K0 > 0","K1 > 0","K2 > 0","K3 > 0","K4 > 0"}
•FuncFit/M=2 MLR_5var W_coef ::Data:yData:b_abs_365 /X={wv0, wv1, wv2, wv3, wv4} /D /C=T_Constraints
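For completeness: T_Constraints is a text wave that has to exist before the assignment above runs. A minimal version of that setup, along the lines of the constraint-wave examples in the manual, would be something like

Make/O/T/N=5 T_Constraints	// one element per constraint expression

followed by the assignment shown above.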

One coefficient is calculated as 0.038 ± 0.16. This seems to imply that negative coefficient values were produced at some point during the fitting process.

As described in the manual (III-204, Estimates of Error):

"The coefficients and their sigma values are estimates (usually remarkably good estimates) of what you would get if you performed the same fit an infinite number of times on the same underlying data (but with different noise each time) and then calculated the mean and standard deviation for each coefficient."

I wonder: are the constraints only checked after the whole fit has finished, or are they applied at each iteration of the fitting process?



Gosh, with that many parameters, you have an elephant wagging its trunk.

https://www.johndcook.com/blog/2011/06/21/how-to-fit-an-elephant/

But with the constraints set, the elephant is probably wagging its trunk only in a plane perpendicular to the page (so you won't see it moving).

Seriously though ...

The high level of uncertainty may be telling you that this particular parameter is perhaps best set to zero as an initial guess.

The quoted statement applies to reasonably well-behaved fit functions, that is, ones that are smooth near the solution point. The coefficient errors are computed from the (linearized) quadratic approximation of your fit function's chi-square surface near the solution point. If you consider coefficient values far from the solution point, the error bars will not apply with accuracy.
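To make that concrete, this is the textbook unconstrained least-squares result (nothing specific to this particular fit): with J the Jacobian of the fit function with respect to the coefficients, evaluated at the solution, the reported sigmas come from the curvature of that quadratic approximation,

\sigma_{K_j} \approx \sqrt{\left[ s^2 \, (J^\top J)^{-1} \right]_{jj}}, \qquad s^2 = \frac{\chi^2_{\min}}{N - p},

where N is the number of data points and p the number of coefficients. The square roots are only as trustworthy as the quadratic approximation itself.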

When you apply constraints, the problem becomes highly non-linear, and error bars computed from the linearized chi-square surface cannot be valid outside the constraint region.

A constrained fit is computed iteratively: from the current solution point, the fit is (effectively) extrapolated to a new solution point using the function's gradient with respect to the coefficients. At that point the constraints are checked; if any are violated, the new solution point is moved back inside the constraint region by solving a simple optimization problem that adjusts the solution to comply with the constraints (provided the constraints are "feasible", that is, they don't conflict with each other). This new, adjusted solution is then used as the starting point for the next iteration of unconstrained extrapolation followed by application of the constraints.
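As a caricature of one iteration (just a picture of the scheme described above, not necessarily the exact algorithm used internally):

\tilde{K}^{(n+1)} = K^{(n)} + \delta^{(n)} \quad \text{(unconstrained step from the local gradient)}

K^{(n+1)} = \operatorname*{arg\,min}_{K \in \mathcal{C}} \left\| K - \tilde{K}^{(n+1)} \right\|^2 \quad \text{(pull the trial step back into the constraint region } \mathcal{C} \text{)}

For simple positivity constraints like yours, that second step essentially clamps any offending coefficient back to the K > 0 boundary before the next iteration starts.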

So, as jjweimer says, the uncertainty is telling you that you can't distinguish this coefficient from zero. The fact that the uncertainty allows negative values simply means that the highly nonlinear constraints invalidate the linearized computation of the uncertainty.
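And if you want to see directly what the quoted manual statement is describing, a rough Monte Carlo check is easy to run. The following is only a sketch: it uses a made-up "true" model, an unconstrained built-in line fit, and invented wave and function names, but the idea carries over to a constrained multivariate fit.

Function MCSigmaCheck()
	Variable i
	// Fabricate "the same underlying data" and refit it many times with fresh noise.
	Make/O/N=100 yTrue = 1 + 2*x		// assumed true model, for illustration only
	Make/O/N=1000 K1dist				// fitted slope from each repetition
	for (i = 0; i < 1000; i += 1)
		Duplicate/O yTrue, yNoisy
		yNoisy += gnoise(0.1)			// different noise each time
		CurveFit/Q line yNoisy			// unconstrained built-in line fit
		Wave W_coef
		K1dist[i] = W_coef[1]
	endfor
	WaveStats/Q K1dist
	Print V_avg, V_sdev					// compare with the K1 ± sigma reported by a single fit
End

The mean and standard deviation printed at the end are what the quoted sigma is estimating. With an active K > 0 constraint the Monte Carlo distribution gets truncated at zero, which is exactly why the linearized sigma stops being a faithful summary.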