#include "vl/Solve.h"
Functions

| TMReal | SolveOverRelax (const TMat &A, TVec &x, const TVec &b, TMReal epsilon, TMReal omega, Int *steps) |
| | Solves Ax = b via successive over-relaxation (SOR). More... |
| TMReal | SolveOverRelax (const TSparseMat &A, TVec &x, const TVec &b, TMReal epsilon, TMReal omega, Int *steps) |
| | Solves Ax = b via successive over-relaxation for a sparse matrix. More... |
| TMReal | SolveConjGrad (const TMat &A, TVec &x, const TVec &b, TMReal epsilon, Int *steps) |
| | Solves Ax = b via the conjugate-gradient method, for symmetric positive-definite A. More... |
| TMReal | SolveConjGrad (const TSparseMat &A, TVec &x, const TVec &b, TMReal epsilon, Int *steps) |
| | Solves Ax = b via conjugate gradient for a sparse matrix. More... |
TMReal SolveConjGrad (const TSparseMat &A, TVec &x, const TVec &b, TMReal epsilon, Int *steps)
Solves Ax = b via conjugate gradient for a sparse matrix.
See the dense version below for details.
TMReal SolveConjGrad (const TMat &A, TVec &x, const TVec &b, TMReal epsilon, Int *steps)
Solve Ax = b by conjugate gradient method, for symmetric, positive definite A.
x is the initial guess on input, and the solution vector on output.
Returns the squared length of the residual vector, |Ax - b|^2.
If A is not symmetric, this routine instead solves the symmetrized system ((A + A^T)/2) x = b.
[Strang, "Introduction to Applied Mathematics", 1986, p. 422]
TMReal SolveOverRelax (const TSparseMat &A, TVec &x, const TVec &b, TMReal epsilon, TMReal omega, Int *steps)
Solves Ax = b via successive over-relaxation (SOR) for a sparse matrix.
See the dense version below for details.
TMReal SolveOverRelax (const TMat &A, TVec &x, const TVec &b, TMReal epsilon, TMReal omega, Int *steps)
Solves Ax = b via successive over-relaxation (SOR).
Each iteration refines the current approximate solution x. Omega controls the over-relaxation: omega = 1 gives plain Gauss-Seidel, and a value of omega between 1 and 2 typically gives the fastest convergence.
x is the initial guess on input, and solution vector on output.
If steps is zero, the routine iterates until the residual is less than epsilon. Otherwise, it performs at most *steps iterations, and returns the actual number of iterations performed in *steps.
Returns the approximate squared length of the residual vector, |Ax - b|^2.
[Strang, "Introduction to Applied Mathematics", 1986, p. 407]