Use non-linear least squares to fit a function, f, to data. Assumes ydata = f(xdata, *params) + eps.

Parameters
f : callable
    The model function, f(x, ...). It must take the independent variable as the first argument and the parameters to fit as separate remaining arguments.
xdata : array_like
    The independent variable where the data is measured. Should usually be an M-length sequence or a (k, M)-shaped array for functions with k predictors, but can actually be any object.
ydata : array_like
    The dependent data, a length-M array - nominally f(xdata, ...).
p0 : array_like, optional
    Initial guess for the parameters (length N). If None, the initial values will all be 1 (if the number of parameters for the function can be determined using introspection; otherwise a ValueError is raised).
sigma : None or M-length sequence or MxM array, optional
    Determines the uncertainty in ydata. If we define residuals as r = ydata - f(xdata, *popt), then the interpretation of sigma depends on its number of dimensions:
    - A 1-D sigma should contain values of standard deviations of errors in ydata. In this case, the optimized function is chisq = sum((r / sigma) ** 2).
    - A 2-D sigma should contain the covariance matrix of errors in ydata. In this case, the optimized function is chisq = r.T @ inv(sigma) @ r. New in version 0.19.
    None (default) is equivalent to a 1-D sigma filled with ones. A weighted fit with a 1-D sigma is sketched after this parameter list.
absolute_sigma : bool, optional
    If True, sigma is used in an absolute sense and the estimated parameter covariance pcov reflects these absolute values.
    If False (default), only the relative magnitudes of the sigma values matter. The returned parameter covariance matrix pcov is based on scaling sigma by a constant factor. This constant is set by demanding that the reduced chisq for the optimal parameters popt when using the scaled sigma equals unity. In other words, sigma is scaled to match the sample variance of the residuals after the fit. Mathematically,
    pcov(absolute_sigma=False) = pcov(absolute_sigma=True) * chisq(popt) / (M - N)
    (the sketch after this parameter list checks this relationship numerically).
check_finite : bool, optional
    If True, check that the input arrays do not contain nans or infs, and raise a ValueError if they do. Setting this parameter to False may silently produce nonsensical results if the input arrays do contain nans. Default is True.
bounds : 2-tuple of array_like, optional
    Lower and upper bounds on parameters. Defaults to no bounds. Each element of the tuple must be either an array with the length equal to the number of parameters, or a scalar (in which case the bound is taken to be the same for all parameters). Use np.inf with an appropriate sign to disable bounds on all or some parameters.
    New in version 0.17.
method : {‘lm’, ‘trf’, ‘dogbox’}, optional
    Method to use for optimization. See least_squares for more details. Default is ‘lm’ for unconstrained problems and ‘trf’ if bounds are provided. The method ‘lm’ won’t work when the number of observations is less than the number of variables; use ‘trf’ or ‘dogbox’ in this case.
    New in version 0.17.
jac : callable, string or None, optional
    Function with signature jac(x, ...) which computes the Jacobian matrix of the model function with respect to parameters as a dense array_like structure. It will be scaled according to the provided sigma. If None (default), the Jacobian will be estimated numerically. String keywords for the ‘trf’ and ‘dogbox’ methods can be used to select a finite difference scheme; see least_squares. An analytic Jacobian is sketched after this parameter list.
    New in version 0.18.
full_output : boolean, optional
    If True, this function returns additional information: infodict, mesg, and ier.
    New in version 1.9.
**kwargs
    Keyword arguments passed to leastsq for method='lm' or to least_squares otherwise.
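For concreteness, here is a minimal sketch of the sigma and absolute_sigma behaviour described above. The exponential model, the heteroscedastic error profile, and the fixed seed are illustrative assumptions, not part of the curve_fit API; the last line checks the pcov scaling relation given under absolute_sigma.

>>> import numpy as np
>>> from scipy.optimize import curve_fit
>>> def model(x, a, b):  # illustrative model (assumption)
...     return a * np.exp(-b * x)
>>> rng = np.random.default_rng(0)
>>> x = np.linspace(0, 3, 30)
>>> errors = 0.1 + 0.05 * x  # per-point standard deviations (assumed)
>>> y = model(x, 2.0, 1.5) + errors * rng.normal(size=x.size)
>>> popt, pcov_rel = curve_fit(model, x, y, sigma=errors)  # absolute_sigma=False
>>> _, pcov_abs = curve_fit(model, x, y, sigma=errors, absolute_sigma=True)
>>> r = y - model(x, *popt)
>>> chisq = np.sum((r / errors) ** 2)  # the quantity minimized for 1-D sigma
>>> bool(np.allclose(pcov_rel, pcov_abs * chisq / (x.size - popt.size)))
True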
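And a minimal sketch of supplying an analytic Jacobian via jac; the model and its hand-derived partial derivatives are assumptions for illustration:

>>> import numpy as np
>>> from scipy.optimize import curve_fit
>>> def model(x, a, b, c):  # illustrative model (assumption)
...     return a * np.exp(-b * x) + c
>>> def jac(x, a, b, c):
...     # Columns are d/da, d/db, d/dc of the model, as an (M, N) array.
...     e = np.exp(-b * x)
...     return np.column_stack((e, -a * x * e, np.ones_like(x)))
>>> rng = np.random.default_rng(1)
>>> xdata = np.linspace(0, 4, 50)
>>> ydata = model(xdata, 2.5, 1.3, 0.5) + 0.2 * rng.normal(size=xdata.size)
>>> popt, pcov = curve_fit(model, xdata, ydata, p0=[2.0, 1.0, 0.3], jac=jac)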
Returns
popt : array
    Optimal values for the parameters so that the sum of the squared residuals of f(xdata, *popt) - ydata is minimized.
pcov : 2-D array
    The estimated covariance of popt. The diagonals provide the variance of the parameter estimates. To compute one-standard-deviation errors on the parameters, use perr = np.sqrt(np.diag(pcov)); a sketch follows the Returns entries. How the sigma parameter affects the estimated covariance depends on the absolute_sigma argument, as described above.
    If the Jacobian matrix at the solution does not have full rank, the ‘lm’ method returns a matrix filled with np.inf; the ‘trf’ and ‘dogbox’ methods, on the other hand, use the Moore-Penrose pseudoinverse to compute the covariance matrix.
infodict : dict (returned only if full_output is True)
    a dictionary of optional outputs with the keys:
    nfev
        The number of function calls. Methods ‘trf’ and ‘dogbox’ do not count function calls for numerical Jacobian approximation, as opposed to the ‘lm’ method.
    fvec
        The function values evaluated at the solution.
    fjac
        A permutation of the R matrix of a QR factorization of the final approximate Jacobian matrix, stored column wise. Together with ipvt, the covariance of the estimate can be approximated. Only the ‘lm’ method provides this information.
    ipvt
        An integer array of length N which defines a permutation matrix, p, such that fjac*p = q*r, where r is upper triangular with diagonal elements of nonincreasing magnitude. Column j of p is column ipvt(j) of the identity matrix. Only the ‘lm’ method provides this information.
    qtf
        The vector (transpose(q) * fvec). Only the ‘lm’ method provides this information.
    New in version 1.9.
mesg : str (returned only if full_output is True)
    A string message giving information about the solution.
    New in version 1.9.
ier : int (returned only if full_output is True)
    An integer flag. If it is equal to 1, 2, 3 or 4, the solution was found. Otherwise, the solution was not found. In either case, the optional output variable mesg gives more information.
    New in version 1.9.
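A minimal sketch of retrieving these outputs, assuming an illustrative linear model and synthetic data; perr follows the recipe given under pcov above:

>>> import numpy as np
>>> from scipy.optimize import curve_fit
>>> def line(x, a, b):  # illustrative model (assumption)
...     return a * x + b
>>> rng = np.random.default_rng(2)
>>> xdata = np.linspace(0, 10, 20)
>>> ydata = line(xdata, 1.0, 0.5) + 0.1 * rng.normal(size=xdata.size)
>>> popt, pcov, infodict, mesg, ier = curve_fit(line, xdata, ydata,
...                                             full_output=True)
>>> perr = np.sqrt(np.diag(pcov))  # one-standard-deviation parameter errors
>>> 'nfev' in infodict  # number of function evaluations
True
>>> ier in (1, 2, 3, 4)  # True when a solution was found
True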
Raises
ValueError
    if either ydata or xdata contain NaNs, or if incompatible options are used.
RuntimeError
    if the least-squares minimization fails.
OptimizeWarning
    if covariance of the parameters cannot be estimated.
Notes
Users should ensure that inputs xdata, ydata, and the output of f are float64, or else the optimization may return incorrect results.
With method='lm', the algorithm uses the Levenberg-Marquardt algorithm through leastsq. Note that this algorithm can only deal with unconstrained problems. Box constraints can be handled by methods ‘trf’ and ‘dogbox’. Refer to the docstring of least_squares for more information.
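A minimal sketch of both notes, assuming data that arrived as float32 and a problem with box constraints; the model and values are illustrative:

>>> import numpy as np
>>> from scipy.optimize import curve_fit
>>> def model(x, a, b):  # illustrative model (assumption)
...     return a * np.exp(-b * x)
>>> x_raw = np.linspace(0, 3, 25, dtype=np.float32)  # e.g. data loaded as float32
>>> y_raw = model(x_raw, 2.0, 1.5).astype(np.float32)
>>> xdata = x_raw.astype(np.float64)  # cast to float64 before fitting
>>> ydata = y_raw.astype(np.float64)
>>> popt, pcov = curve_fit(model, xdata, ydata, bounds=(0, np.inf),
...                        method='trf')  # 'lm' cannot handle bounds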
Examples
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.optimize import curve_fit
>>> def func(x, a, b, c):
...     return a * np.exp(-b * x) + c
Define the data to be fit with some noise:
>>> xdata = np.linspace(0, 4, 50)
>>> y = func(xdata, 2.5, 1.3, 0.5)
>>> rng = np.random.default_rng()
>>> y_noise = 0.2 * rng.normal(size=xdata.size)
>>> ydata = y + y_noise
>>> plt.plot(xdata, ydata, 'b-', label='data')
Fit for the parameters a, b, c of the function func:
>>> popt, pcov = curve_fit(func, xdata, ydata)
>>> popt
array([2.56274217, 1.37268521, 0.47427475])
>>> plt.plot(xdata, func(xdata, *popt), 'r-',
...          label='fit: a=%5.3f, b=%5.3f, c=%5.3f' % tuple(popt))
Constrain the optimization to the region of 0 <= a <= 3, 0 <= b <= 1 and 0 <= c <= 0.5:

>>> popt, pcov = curve_fit(func, xdata, ydata, bounds=(0, [3., 1., 0.5]))
>>> popt
array([2.43736712, 1.        , 0.34463856])
>>> plt.plot(xdata, func(xdata, *popt), 'g--',
...          label='fit: a=%5.3f, b=%5.3f, c=%5.3f' % tuple(popt))
>>> plt.xlabel('x')
>>> plt.ylabel('y')
>>> plt.legend()
>>> plt.show()