class cv::ConjGradSolver

Overview

This class is used to perform non-linear, non-constrained minimization of a function with known gradient.

#include <optim.hpp>

class ConjGradSolver: public cv::MinProblemSolver
{
public:
    // methods

    static
    Ptr<ConjGradSolver>
    create(
        const Ptr<MinProblemSolver::Function>& f = Ptr<ConjGradSolver::Function>(),
        TermCriteria termcrit = TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5000, 0.000001)
        );
};

Inherited Members

public:
    // classes

    class Function;

    // methods

    virtual
    void
    clear();

    virtual
    bool
    empty() const;

    virtual
    String
    getDefaultName() const;

    virtual
    void
    read(const FileNode& fn);

    virtual
    void
    save(const String& filename) const;

    virtual
    void
    write(FileStorage& fs) const;

    template <typename _Tp>
    static
    Ptr<_Tp>
    load(
        const String& filename,
        const String& objname = String()
        );

    template <typename _Tp>
    static
    Ptr<_Tp>
    loadFromString(
        const String& strModel,
        const String& objname = String()
        );

    template <typename _Tp>
    static
    Ptr<_Tp>
    read(const FileNode& fn);

    virtual
    Ptr<Function>
    getFunction() const = 0;

    virtual
    TermCriteria
    getTermCriteria() const = 0;

    virtual
    double
    minimize(InputOutputArray x) = 0;

    virtual
    void
    setFunction(const Ptr<Function>& f) = 0;

    virtual
    void
    setTermCriteria(const TermCriteria& termcrit) = 0;

protected:
    // methods

    void
    writeFormat(FileStorage& fs) const;

Detailed Documentation

This class is used to perform non-linear, non-constrained minimization of a function with known gradient, defined on an n-dimensional Euclidean space, using the Nonlinear Conjugate Gradient method. The implementation is based on the beautifully clear explanatory article [An Introduction to the Conjugate Gradient Method Without the Agonizing Pain](http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf) by Jonathan Richard Shewchuk. The method can be seen as an adaptation of the standard Conjugate Gradient method (see, for example, http://en.wikipedia.org/wiki/Conjugate_gradient_method) for numerically solving systems of linear equations.

It should be noted that this method, although deterministic, is a heuristic and may therefore converge to a local minimum, not necessarily the global one. Worse, most of its behaviour is governed by the gradient, so it essentially cannot distinguish between local minima and maxima: if it starts sufficiently near a local maximum, it may converge to it. Another obvious restriction is that it must be possible to compute the gradient of the function at any point; it is therefore preferable to have an analytic expression for the gradient, and the computational burden of providing it is borne by the user.

The latter responsibility is accomplished via the getGradient method of the MinProblemSolver::Function interface (which represents the function being optimized). This method takes a point in n-dimensional space (the first argument holds the array of coordinates of that point) and computes its gradient (which should be stored in the second argument as an array).

The class ConjGradSolver thus does not add any new methods to the basic MinProblemSolver interface.

The term criteria should meet one of the following conditions:

termcrit.type == (TermCriteria::MAX_ITER + TermCriteria::EPS) && termcrit.epsilon > 0 && termcrit.maxCount > 0
// or
termcrit.type == TermCriteria::MAX_ITER && termcrit.maxCount > 0

Methods

static
Ptr<ConjGradSolver>
create(
    const Ptr<MinProblemSolver::Function>& f = Ptr<ConjGradSolver::Function>(),
    TermCriteria termcrit = TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5000, 0.000001)
    )

This function returns a reference to a ready-to-use ConjGradSolver object.

All the parameters are optional, so this procedure can be called without any arguments at all, in which case the default values are used. Since the default termination criteria are the only sensible ones, MinProblemSolver::setFunction() should be called on the obtained object if the function was not given to create(). Otherwise, the two ways (passing the function to create(), or omitting it and calling MinProblemSolver::setFunction()) are entirely equivalent (and will raise the same errors in the same way, should invalid input be detected).

Parameters:

f Pointer to the function that will be minimized, similar to the one you submit via MinProblemSolver::setFunction.
termcrit Termination criteria for the algorithm, similar to the one you submit via MinProblemSolver::setTermCriteria.