Rank (linear algebra)

In linear algebra, the rank of a matrix A is the dimension of the vector space generated (or spanned) by its columns.[1][2][3] This corresponds to the maximal number of linearly independent columns of A. This, in turn, is identical to the dimension of the vector space spanned by its rows.

Rank from row echelon forms

A common approach to finding the rank of a matrix is to reduce it to a simpler form, generally row echelon form, by elementary row operations. Any matrix can be put in reduced row echelon form by a sequence of elementary row operations, and these operations change neither the row rank nor the column rank; the rank of the original matrix is therefore the number of nonzero rows (equivalently, of pivots) in the echelon form. A concrete sketch of this computation is given in the code below.

Computation

When applied to floating point computations on computers, basic Gaussian elimination (LU decomposition) can be unreliable, and a rank-revealing decomposition should be used instead. An effective alternative is the singular value decomposition (SVD), but there are other less computationally expensive choices, such as QR decomposition with pivoting (so-called rank-revealing QR factorization), which are still more numerically robust than Gaussian elimination. Numerical determination of rank requires a criterion for deciding when a value, such as a singular value from the SVD, should be treated as zero, a practical choice which depends on both the matrix and the application.
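To make the echelon-form procedure concrete, here is a minimal sketch in Python with NumPy. The function name rank_by_elimination, the example matrix, and the tolerance 1e-12 are illustrative choices, not from the original article; the naive fixed threshold is exactly the kind of fragility the preceding paragraph warns about, so the result is compared against NumPy's SVD-based numpy.linalg.matrix_rank.

```python
import numpy as np

def rank_by_elimination(A, tol=1e-12):
    """Rank via Gauss-Jordan elimination with partial pivoting.

    Reduces a copy of A toward reduced row echelon form and counts
    the pivots.  The tolerance `tol` (an arbitrary choice here)
    decides when a candidate pivot is treated as zero -- the very
    criterion the text says numerical rank determination requires.
    """
    R = np.array(A, dtype=float)
    m, n = R.shape
    rank, row = 0, 0
    for col in range(n):
        if row >= m:
            break
        # Partial pivoting: take the largest entry in this column.
        p = row + np.argmax(np.abs(R[row:, col]))
        if abs(R[p, col]) <= tol:
            continue  # no usable pivot in this column
        R[[row, p]] = R[[p, row]]   # swap rows
        R[row] /= R[row, col]       # scale the pivot row
        for r in range(m):          # eliminate the rest of the column
            if r != row:
                R[r] -= R[r, col] * R[row]
        rank += 1
        row += 1
    return rank

A = np.array([[ 1.0,  2.0, 1.0],
              [-2.0, -3.0, 1.0],
              [ 3.0,  5.0, 0.0]])
print(rank_by_elimination(A))    # 2
print(np.linalg.matrix_rank(A))  # 2, computed from the SVD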
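The tolerance question can be seen directly from the singular values. The sketch below (again assuming NumPy; the matrix is an arbitrary construction) builds a matrix whose third row is exactly the sum of the first two. Floating-point rounding still typically leaves a tiny nonzero third singular value, which the documented default criterion of numpy.linalg.matrix_rank, tol = s.max() * max(m, n) * eps, treats as zero.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 4))
A = np.vstack([A, A[0] + A[1]])   # third row: exact sum of the first two

s = np.linalg.svd(A, compute_uv=False)
# Default criterion used by numpy.linalg.matrix_rank:
tol = s.max() * max(A.shape) * np.finfo(A.dtype).eps
print(s)                # last singular value is rounding noise (possibly exactly 0)
print(np.sum(s > tol))  # numerical rank: 2
```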
Proofs that column rank = row rank

The fact that the column and row ranks of any matrix are equal is fundamental in linear algebra. Many proofs have been given. One of the most elementary ones has been sketched in § Rank from row echelon forms: elementary row operations change neither the row rank nor the column rank, and it is immediate that both the row and column ranks of the resulting echelon matrix equal the number of its nonzero rows.

Two further proofs are outlined here. The first uses only basic properties of linear combinations of vectors, and is valid over any field.[9] The second uses orthogonality and is valid for matrices over the real numbers; it is based upon Mackiw (1995).

For the first proof, let r be the column rank of A and write A = CR, where the r columns of C form a basis of the column space of A and the matrix R records the coefficients expressing each column of A in that basis. Every row of A is then a linear combination of the r rows of R, so the row rank of A is at most r, the column rank; applying this to the transpose of A gives the reverse inequality.

For the second proof, let x1, x2, …, xr be a basis of the row space of A; it suffices to show that the vectors Ax1, Ax2, …, Axr are linearly independent, since then the column rank is at least the row rank r. To see why, consider a linear homogeneous relation involving these vectors with scalar coefficients c1, c2, …, cr:

0 = c1Ax1 + c2Ax2 + … + crAxr = A(c1x1 + c2x2 + … + crxr) = Av,  where v = c1x1 + c2x2 + … + crxr.

The vector v is a linear combination of the xi and so lies in the row space of A; on the other hand, Av = 0 means that v is orthogonal to every row of A, hence to every vector in the row space, hence to itself, so v = 0. But recall that the xi were chosen as a basis of the row space of A and so are linearly independent; therefore every ci is zero, and the Axi are indeed linearly independent. Now apply this result to the transpose of A to get the reverse inequality and conclude as in the previous proof.

Alternative definitions

The rank of A may also be defined as the dimension of the image of the linear map f given by f(x) = Ax. This definition has the advantage that it can be applied to any linear map without need for a specific matrix. Given the same linear mapping f as above, the rank can instead be defined as n minus the dimension of the kernel of f; the rank–nullity theorem states that this definition is equivalent to the preceding one (a numerical check is sketched in the examples below).

Further equivalent characterizations include the following:

- The rank of A is the maximal number of linearly independent columns of A.
- The rank of A equals the number of non-zero singular values, which is the same as the number of non-zero diagonal elements in Σ in the singular value decomposition.
- The rank of A is the largest order of any non-zero minor of A. (The order of a minor is the side-length of the square sub-matrix of which it is the determinant.)

A non-vanishing p-minor (p × p submatrix with non-zero determinant) shows that the rows and columns of that submatrix are linearly independent, and thus those rows and columns of the full matrix are linearly independent, so the row and column rank are at least as large as the determinantal rank; the converse, however, is less straightforward. The equivalence of determinantal rank and column rank is a strengthening of the statement that if the span of n vectors has dimension p, then p of those vectors span the space (equivalently, that one can choose a spanning set that is a subset of the vectors). The equivalence implies that a subset of the rows and a subset of the columns simultaneously define an invertible submatrix (equivalently, if the span of n vectors has dimension p, then p of these vectors span the space and there is a set of p coordinates on which they are linearly independent).

The rank of A is also the smallest number k such that A can be written as a sum of k rank-one matrices, where a matrix has rank one if and only if it can be written as a nonzero product c · r of a column vector c and a row vector r. This notion of rank is called tensor rank; it can be generalized in the separable models interpretation of the singular value decomposition (an SVD-based illustration appears in the examples below).

Properties

We assume that A is an m × n matrix, and we define the linear map f by f(x) = Ax as above. The rank of A is a nonnegative integer and cannot exceed min(m, n); only the zero matrix has rank zero.

Applications

One useful application of calculating the rank of a matrix is the computation of the number of solutions of a system of linear equations. According to the Rouché–Capelli theorem, the system is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, then the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters, where k is the difference between the number of variables and the rank (see the classification sketch in the examples below).

Generalization

Matrix rank should not be confused with tensor order, which is also called tensor rank. Tensor order is the number of indices required to write a tensor, and thus matrices all have tensor order 2. More precisely, matrices are tensors of type (1,1), having one row index and one column index, also called covariant order 1 and contravariant order 1; see Tensor (intrinsic definition) for details.
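Examples

As a sanity check on the statements above — that row rank equals column rank, and that rank plus nullity equals the number of columns — the following sketch (assuming NumPy and SciPy; the example matrix is arbitrary) compares the ranks of a matrix and its transpose and measures the dimension of the null space.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[ 1.0,  2.0, 1.0],
              [-2.0, -3.0, 1.0],
              [ 3.0,  5.0, 0.0]])   # an arbitrary rank-2 example
n = A.shape[1]

r = np.linalg.matrix_rank(A)
print(r == np.linalg.matrix_rank(A.T))   # True: column rank == row rank
print(r + null_space(A).shape[1] == n)   # True: rank + nullity == n
```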
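The characterization of rank as the minimal number of rank-one summands can likewise be illustrated with the SVD: truncating the decomposition to its nonzero singular values rebuilds the matrix from exactly rank(A) outer products. This is a sketch assuming NumPy, with 1e-12 as an arbitrary zero cutoff.

```python
import numpy as np

A = np.array([[ 1.0,  2.0, 1.0],
              [-2.0, -3.0, 1.0],
              [ 3.0,  5.0, 0.0]])
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))   # rank = number of nonzero singular values

# Rebuild A as a sum of r rank-one matrices s[i] * u_i v_i^T.
B = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(r))
print(r)                 # 2
print(np.allclose(A, B)) # True: two rank-one terms suffice
```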
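Finally, the solution-counting rules of the Rouché–Capelli theorem translate directly into a comparison of two ranks. The helper below is a hypothetical illustration (the name classify and the example systems are not from the original article), assuming NumPy.

```python
import numpy as np

def classify(A, b):
    """Classify the system Ax = b by comparing ranks (Rouché-Capelli)."""
    rA = np.linalg.matrix_rank(A)                       # coefficient matrix
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))  # augmented matrix
    n = A.shape[1]
    if rAb > rA:
        return "inconsistent"
    if rA == n:
        return "unique solution"
    return f"infinitely many solutions ({n - rA} free parameters)"

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])                 # rank 1
print(classify(A, np.array([1.0, 3.0])))  # inconsistent
print(classify(A, np.array([1.0, 2.0])))  # infinitely many solutions (1 free parameter)
```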
See also

- Matroid rank
- Nonnegative rank (linear algebra)
- Rank (differential topology)
- Multicollinearity
- Linear dependence

References

- Axler, Sheldon (2015). Linear Algebra Done Right. Undergraduate Texts in Mathematics. Springer.
- Halmos, Paul Richard (1974). Finite-Dimensional Vector Spaces. Undergraduate Texts in Mathematics. Springer.
- Hefferon, Jim. Linear Algebra (open-license textbook).
- Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society.
- Mackiw, G. (1995). "A Note on the Equality of the Column and Row Rank of a Matrix". Mathematics Magazine 68 (4).
- Roman, Steven (2005). Advanced Linear Algebra. Springer.