Mujpy / Gradients

< Mujpy._add_multirun_ | Index | Mujpy.grad >

Why [only] in global fits

Typical global fits, at the time of writing, are multi-run fits (say 10 runs) with a couple of global parameters and, say, 5 local parameters per run. An example is a GPS longitudinal-geometry fit in a transverse field of 200 mT, with one grouping and 5000 bins (20000 bins rebinned by a factor 4), using three Gaussian components as a proxy for the asymmetric static flux-line-lattice broadening of a superconductor plus cryostat muons. This fit requires 60 Minuit parameters and is performed by Mujpy._add_multirun_.

In the end the following analysis proves wrong; it is left here for future memory. Profiling shows that 6% of the time is spent calculating the plain fit functions. Therefore the strategy of computing analytic gradients should provide a gain of roughly a factor 10 (it does not!). The analytic gradient strategy described below takes slightly longer and is much more sensitive to the distance of the initial guess from the minimum. This statement comes after extensive debugging, when the two strategies finally converge to exactly the same result.

How is the analytic gradient implemented

The next page describes the analytic calculation. In short, all components of the present library allow analytic calculations by means of the component itself and of its derivative with respect to its argument (see the next page for examples). Minuit passes an array of parameter values (say 52), representing a point in the domain of the fit cost function. Since asymmetries and errors are stored as 2d numpy arrays of shape (runs, time), both functions (components and their derivatives) can be stored for the entire runs x time set of points, calculated only once for each domain point. Then it is just a matter of book-keeping to reassemble the gradient of the cost function as a sum over runs and time bins.

One detail is however crucial: the user can provide user functions that translate Minuit parameters into component parameters. Likewise, for the gradient we must compute the partial derivatives of the same user functions with respect to some Minuit parameters.
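The book-keeping step above can be sketched as follows. This is a minimal illustration with hypothetical names (a single Gaussian-damped component standing in for a mujpy component library), not the actual mujpy API: component values and their derivatives are evaluated once per domain point on a (runs, time) grid, and the cost-function gradient is then reassembled as a sum over runs and time bins.

```python
import numpy as np

def model_and_grad(t, pars):
    """Toy component: a Gaussian envelope A * exp(-(sigma*t)^2 / 2),
    a stand-in for a mujpy component. Returns the model and its
    partial derivatives w.r.t. (A, sigma) on a (runs, time) grid."""
    A, sigma = pars
    env = np.exp(-0.5 * (sigma * t) ** 2)       # shape (runs, time)
    f = A * env                                 # the component itself
    dfdA = env                                  # d f / d A
    dfdsigma = -A * sigma * t**2 * env          # d f / d sigma
    return f, np.stack([dfdA, dfdsigma])        # (npar, runs, time)

def chi2(t, y, e, pars):
    """chi2 = sum over runs and bins of ((y - f) / e)^2."""
    f, _ = model_and_grad(t, pars)
    return np.sum(((y - f) / e) ** 2)

def chi2_grad(t, y, e, pars):
    """Gradient of chi2, reassembled as a sum over runs and time
    bins (the book-keeping step): one entry per fit parameter."""
    f, df = model_and_grad(t, pars)
    resid = (y - f) / e**2                      # weighted residuals, (runs, time)
    return -2.0 * np.sum(resid * df, axis=(1, 2))

# toy data: 3 runs x 100 time bins, stored as 2d arrays as in the text
t = np.tile(np.linspace(0.0, 10.0, 100), (3, 1))
rng = np.random.default_rng(0)
true = np.array([0.25, 0.4])
y = model_and_grad(t, true)[0] + 0.01 * rng.standard_normal(t.shape)
e = np.full_like(t, 0.01)
g = chi2_grad(t, y, e, true)
```

The essential point is that `model_and_grad` is called once per domain point, and broadcasting over the (runs, time) axes does the rest.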
Thus the partial derivatives of these user functions with respect to the Minuit parameters must be provided as well.
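A minimal sketch of that chain rule, with hypothetical names (not mujpy's actual user-function interface): if a component parameter is a user function p = u(m) of the Minuit parameters m, then d f/d m_k = (d f/d p) * (d u/d m_k), so each user function must come with its own partial derivatives.

```python
import numpy as np

# Suppose a component parameter is derived from two Minuit
# parameters m = (m0, m1) through a user function p = m0 * (1 + m1).
def u(m):
    return m[0] * (1.0 + m[1])

def du_dm(m):
    # partial derivatives of the user function w.r.t. each Minuit parameter
    return np.array([1.0 + m[1], m[0]])

def f(p, t):
    return np.exp(-p * t)          # component as a function of its parameter p

def df_dp(p, t):
    return -t * np.exp(-p * t)     # derivative w.r.t. its own argument

def df_dm(m, t):
    # chain rule: d f / d m_k = (d f / d p) * (d u / d m_k)
    p = u(m)
    return df_dp(p, t)[None, :] * du_dm(m)[:, None]   # shape (npar, ntime)
```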
The iminuit examples described here are a bit obsolete (the Minuit keyword now seems to be grad=..., not grad_fcn=...). The old description was:

grad_fcn: Optional. Provide a function that calculates the gradient analytically and returns an iterable object with one element for each dimension. If None is given, minuit will calculate the gradient numerically. (Default None)

Info in the Minuit User Guide, 4.1.3, FCN function with gradient: by default first derivatives are calculated numerically by MINUIT. In case the user wants to supply his own gradient calculator (e.g. analytical derivatives), he needs to implement the FCNGradientBase interface. The size of the output vector is the same as that of the input one. The same is true for the position of the elements (the first derivative of the function with respect to the n-th variable has index n in the output vector).