Kernel (linear algebra)

In mathematics, the kernel of a linear map, also known as the null space or nullspace, is the part of the domain which is mapped to the zero vector of the codomain; the kernel is always a linear subspace of the domain. That is, given a linear map L : V → W between two vector spaces V and W, the kernel of L is the set of all elements v of V for which L(v) = 0, where 0 denotes the zero vector in W; in set-builder notation,

    ker(L) = {v ∈ V : L(v) = 0}.

The image of L is isomorphic to the quotient of V by the kernel, and, when V is finite-dimensional, this yields the rank–nullity theorem: dim ker(L) + dim im(L) = dim V.

The notion of kernel also makes sense for homomorphisms of modules, which are generalizations of vector spaces where the scalars are elements of a ring, rather than a field. The domain of the mapping is a module, with the kernel constituting a submodule.

If V and W are topological vector spaces such that W is finite-dimensional, then a linear operator L : V → W is continuous if and only if the kernel of L is a closed subspace of V.

Kernel of a matrix

Consider a linear map represented as an m × n matrix A with coefficients in a field K (typically the real or complex numbers), operating on column vectors x with n components over K. The kernel of this linear map is the set of solutions to the equation Ax = 0, where 0 is understood as the zero vector; it is a linear subspace of K^n. A vector x lies in the kernel of A if and only if its dot product with each row vector of A is zero, that is, if and only if x is orthogonal to every row of A. Since the row space, or coimage, of A is the span of its row vectors, the kernel of A is the orthogonal complement of the row space.

The kernel also plays a role in the solution to a nonhomogeneous system of linear equations Ax = b: if u and v are two solutions, then A(u − v) = Au − Av = b − b = 0. Thus, the difference of any two solutions to the equation Ax = b lies in the kernel of A, and the solution set, when nonempty, is v + ker(A) for any particular solution v. Geometrically, this says that the solution set to Ax = b is the translation of the kernel of A by the vector v. See also Fredholm alternative and flat (geometry).

Example

Consider the matrix

    A = [  2  3  5 ]
        [ −4  2  3 ].

The kernel of A consists of all vectors (x, y, z) in R^3 for which A(x, y, z)^T = (0, 0)^T, which can be expressed as a homogeneous system of linear equations involving x, y, and z:

    2x + 3y + 5z = 0
    −4x + 2y + 3z = 0.

The same linear equations can also be written in matrix form as the augmented matrix

    [  2  3  5 | 0 ]
    [ −4  2  3 | 0 ],

which Gauss–Jordan elimination reduces to

    [ 1  0   1/16 | 0 ]
    [ 0  1  13/8  | 0 ].

Rewriting the matrix in equation form yields

    x = −(1/16)z
    y = −(13/8)z.

The elements of the kernel can be further expressed in parametric vector form, as follows:

    (x, y, z) = c(−1/16, −13/8, 1), with c ∈ R.

Since c is a free variable ranging over all real numbers, this can be expressed equally well as

    (x, y, z) = c(−1, −26, 16).

The kernel of A is precisely the solution set to these equations, in this case a line through the origin in R^3 spanned by (−1, −26, 16)^T. The two (linearly independent) row vectors of A span the row space of A, a plane orthogonal to the vector (−1, −26, 16)^T.
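The worked example can be checked with a computer algebra system. The following is a minimal sketch using the sympy library (an assumed tool choice; its nullspace method returns a basis of the kernel as column vectors):

    from sympy import Matrix

    # The matrix from the example above.
    A = Matrix([[2, 3, 5],
                [-4, 2, 3]])

    # A basis of the kernel, as computed by sympy.
    basis = A.nullspace()
    print(basis[0].T)        # Matrix([[-1/16, -13/8, 1]])

    # Scaling by 16 recovers the integer form used above.
    v = 16 * basis[0]
    print(v.T)               # Matrix([[-1, -26, 16]])

    # v is orthogonal to each row of A, so A*v is the zero vector.
    print(A * v)             # Matrix([[0], [0]])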
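The translation property of the solution set to Ax = b described above admits a similar check; here the right-hand side b = (1, 2)^T and the use of the pseudoinverse to produce one particular solution are illustrative choices, not taken from the text:

    from sympy import Matrix

    A = Matrix([[2, 3, 5],
                [-4, 2, 3]])
    b = Matrix([1, 2])

    # One particular solution v of Ax = b (the pseudoinverse yields one,
    # since A has full row rank).
    v = A.pinv() * b

    # Shifting v by any element of the kernel gives another solution,
    # so the solution set is the translation of ker(A) by v.
    k = 16 * A.nullspace()[0]      # the kernel vector (-1, -26, 16)
    print(A * (v + 3 * k) - b)     # Matrix([[0], [0]])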
With the rank 2 of A, the nullity 1 of A, and the dimension 3 of the domain of A, we have an illustration of the rank–nullity theorem: 2 + 1 = 3.

Computation by Gaussian elimination

A basis of the kernel of a matrix may be computed by Gaussian elimination. For this purpose, given an m × n matrix A, we first construct the row-augmented matrix

    [ A ]
    [ I ],

where I is the n × n identity matrix. Computing its column echelon form by Gaussian elimination (or any other suitable method), we get a matrix

    [ B ]
    [ C ].

A basis of the kernel of A consists of the nonzero columns of C such that the corresponding column of B is a zero column. In fact, the computation may be stopped as soon as the upper matrix is in column echelon form: the remainder of the computation consists in changing the basis of the vector space generated by the columns whose upper part is zero.

Proof that the method computes the kernel: since column operations correspond to post-multiplication by invertible matrices, the fact that [A; I] reduces to [B; C] means that there exists an invertible matrix P such that AP = B and P = C, hence AC = B. A column of C therefore belongs to the kernel of A exactly when the corresponding column of B is zero; since C is invertible, these columns are linearly independent, and there are n − rank(A) of them, so they form a basis of the kernel.

If the coefficients of the matrix are exactly given numbers, the column echelon form of the matrix may be computed with the Bareiss algorithm more efficiently than with Gaussian elimination. It is even more efficient to use modular arithmetic and the Chinese remainder theorem, which reduces the problem to several similar ones over finite fields (this avoids the overhead induced by the nonlinearity of the computational complexity of integer multiplication).[citation needed]

For coefficients in a finite field, Gaussian elimination works well, but for the large matrices that occur in cryptography and Gröbner basis computation, better algorithms are known, which have roughly the same computational complexity but are faster and behave better with modern computer hardware.[citation needed]

For matrices whose entries are floating-point numbers, the problem of computing the kernel makes sense only for matrices whose number of rows equals their rank: because of rounding errors, a floating-point matrix almost always has full rank, even when it is an approximation of a matrix of much smaller rank.[5] Even for a well-conditioned full-rank matrix, Gaussian elimination does not behave correctly: it introduces rounding errors that are too large for getting a significant result.

As the computation of the kernel of a matrix is a special instance of solving a homogeneous system of linear equations, the kernel may be computed with any of the various algorithms designed to solve homogeneous systems. State-of-the-art software for this purpose is the Lapack library.
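As a concrete sketch of the exact method just described, the following Python function (its name and structure are illustrative, not taken from the text) column-reduces the stacked matrix [A; I] over exact rationals and reads a kernel basis off the identity block:

    from fractions import Fraction

    def kernel_basis(A):
        """Kernel basis via column echelon form of the stacked matrix [A; I]."""
        m, n = len(A), len(A[0])
        # Stack A on top of the n x n identity matrix; use exact rationals
        # so that no rounding occurs during elimination.
        M = [[Fraction(x) for x in row] for row in A]
        M += [[Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
        col = 0
        for row in range(m):
            # Find a pivot for this row among the remaining columns.
            pivot = next((j for j in range(col, n) if M[row][j] != 0), None)
            if pivot is None:
                continue
            for i in range(m + n):            # swap the pivot column into place
                M[i][col], M[i][pivot] = M[i][pivot], M[i][col]
            for j in range(col + 1, n):       # clear this row to the right
                if M[row][j]:
                    f = M[row][j] / M[row][col]
                    for i in range(m + n):
                        M[i][j] -= f * M[i][col]
            col += 1
        # Columns whose upper (A) part is zero: their lower (C) part
        # forms a basis of the kernel.
        return [[M[m + i][j] for i in range(n)] for j in range(col, n)]

    # On the example matrix this returns the single basis vector
    # (-1/16, -13/8, 1), matching the parametric form found above.
    print(kernel_basis([[2, 3, 5], [-4, 2, 3]]))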
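For floating-point matrices, where the text notes that Gaussian elimination is unreliable, a standard remedy (not spelled out above) is to extract a numerical kernel from the singular value decomposition, treating small singular values as zero. A minimal NumPy sketch, with an illustrative tolerance rtol:

    import numpy as np

    def numerical_kernel(A, rtol=1e-10):
        """Approximate kernel of a floating-point matrix via the SVD.

        Singular values below rtol times the largest one are treated as
        zero; the corresponding right singular vectors span the kernel.
        """
        _, s, vh = np.linalg.svd(A)       # LAPACK-backed decomposition
        tol = rtol * (s[0] if s.size else 0.0)
        rank = int(np.sum(s > tol))       # numerical rank
        return vh[rank:].T                # one column per kernel basis vector

    A = np.array([[2.0, 3.0, 5.0],
                  [-4.0, 2.0, 3.0]])
    K = numerical_kernel(A)
    print(np.allclose(A @ K, 0))          # True
    print(K[:, 0] / K[2, 0] * 16)         # approximately [-1, -26, 16]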
[Figure: Kernel and image of a linear map L from V to W.]