Diagonal matrix
In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. That is, the matrix D = (di,j) with n columns and n rows is diagonal if di,j = 0 whenever i ≠ j; the main diagonal entries themselves are unrestricted. The term may also be applied to a rectangular diagonal matrix, an m-by-n matrix whose only nonzero entries have the form di,i. More often, however, diagonal matrix refers to square matrices, which can be specified explicitly as a square diagonal matrix.

A square diagonal matrix D can be constructed from a vector a = (a1, ..., an)ᵀ using the diag operator, D = diag(a1, ..., an) = diag(a). The same operator is also used to represent block diagonal matrices as A = diag(A1, ..., An), where each argument Ai is itself a matrix. The diag operator can be written as diag(a) = (a 1ᵀ) ∘ I, where ∘ represents the Hadamard product and 1 is a constant vector with elements 1. The inverse matrix-to-vector diag operator is sometimes denoted by the identically named diag(D) = (d1,1, ..., dn,n)ᵀ, where the argument is now a matrix and the result is a vector of its diagonal entries.

A diagonal matrix whose diagonal entries are all equal is a scalar matrix, that is, a scalar multiple λI of the identity matrix. Its effect on a vector is scalar multiplication by λ. The scalar matrices are precisely the matrices that commute with every square matrix of the same size. By contrast, a diagonal matrix D = diag(a1, ..., an) with distinct diagonal entries commutes only with diagonal matrices: if M has mi,j ≠ 0 for some i ≠ j, the (i, j) entries of DM and MD are ai mi,j and aj mi,j, and these differ (since one can divide by mi,j), so they do not commute unless the off-diagonal terms are zero.[1] For an abstract vector space V (rather than the concrete vector space Kn), the analog of scalar matrices are scalar transformations. This holds more generally for a module M over a ring R, with the endomorphism algebra End(M) replacing the algebra of matrices: scalar multiplication induces a map R → End(M) (from a scalar λ to its corresponding scalar transformation, multiplication by λ) exhibiting End(M) as an R-algebra. For vector spaces, the scalar transforms are exactly the center of the endomorphism algebra, and, similarly, scalar invertible transforms are the center of the general linear group GL(V). The former is more generally true of free modules M ≅ Rn.

Multiplying a vector by a diagonal matrix multiplies each entry by the corresponding diagonal element: diag(a1, ..., an) (x1, ..., xn)ᵀ = (a1x1, ..., anxn)ᵀ. This can be expressed more compactly by using a vector d = (a1, ..., an)ᵀ instead of a diagonal matrix and taking the Hadamard product (entrywise product) of the vectors, d ∘ x. This is mathematically equivalent, but avoids storing all the zero terms of this sparse matrix. This product is thus used in machine learning, such as computing products of derivatives in backpropagation or multiplying IDF weights in TF-IDF,[2] since some BLAS frameworks, which multiply matrices efficiently, do not include Hadamard product capability directly.

The operations of matrix addition and matrix multiplication are also especially simple for diagonal matrices. Write diag(a1, ..., an) for a diagonal matrix whose diagonal entries starting in the upper left corner are a1, ..., an. Then diag(a1, ..., an) + diag(b1, ..., bn) = diag(a1 + b1, ..., an + bn) and diag(a1, ..., an) diag(b1, ..., bn) = diag(a1b1, ..., anbn). The diagonal matrix diag(a1, ..., an) is invertible if and only if the entries a1, ..., an are all nonzero, in which case its inverse is diag(1/a1, ..., 1/an).
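These entrywise identities are easy to check numerically. The following minimal sketch uses NumPy; the library choice and the example values are assumptions made for illustration, not something the article prescribes.

import numpy as np

a = np.array([2.0, -1.0, 0.5])        # diagonal entries a1, ..., an (all nonzero)
D = np.diag(a)                        # vector-to-matrix diag operator
assert np.array_equal(np.diag(D), a)  # matrix-to-vector diag operator recovers a

# Scaling a vector: the full matrix product D @ x equals the Hadamard product a * x.
x = np.array([3.0, 4.0, 5.0])
assert np.allclose(D @ x, a * x)

# Addition, multiplication, and inversion of diagonal matrices act entrywise on the diagonals.
b = np.array([1.0, 5.0, 4.0])
assert np.allclose(np.diag(a) + np.diag(b), np.diag(a + b))
assert np.allclose(np.diag(a) @ np.diag(b), np.diag(a * b))
assert np.allclose(np.linalg.inv(D), np.diag(1 / a))

In practice the Hadamard form a * x is what one stores and computes with; the dense matrix D is built here only to confirm the equivalence.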
As explained in determining coefficients of operator matrix, there is a special basis, e1, ..., en, for which the matrix A takes the diagonal form. Hence, in the defining equation A ej = ∑i ai,j ei, all coefficients ai,j with i ≠ j are zero, leaving only one term per sum. The surviving diagonal elements, ai,i, are known as eigenvalues and designated with λi in the equation, which reduces to A ei = λi ei.

Diagonal matrices occur in many areas of linear algebra. Because of the simple description of their matrix operations and eigenvalues, it is typically desirable to represent a given matrix or linear map by a diagonal matrix; a square matrix that is similar to a diagonal matrix is called diagonalizable. Over the field of real or complex numbers, more is true: the spectral theorem says that every normal matrix is unitarily similar to a diagonal matrix. Furthermore, the singular value decomposition implies that for any matrix A, there exist unitary matrices U and V such that U∗AV is diagonal with positive entries.

In operator theory, particularly the study of PDEs, operators are particularly easy to understand and PDEs easy to solve if the operator is diagonal with respect to the basis with which one is working; this corresponds to a separable partial differential equation. Therefore, a key technique for understanding operators is a change of coordinates (in the language of operators, an integral transform) that changes the basis to an eigenbasis of eigenfunctions, which makes the equation separable. An important example of this is the Fourier transform, which diagonalizes constant coefficient differentiation operators (or more generally translation invariant operators), such as the Laplacian operator, say, in the heat equation. Especially easy are multiplication operators, which are defined as multiplication by (the values of) a fixed function; the values of the function at each point correspond to the diagonal entries of a matrix.
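As a concrete check of the eigenvalue equation, the spectral theorem, and the singular value decomposition statement above, the following NumPy sketch (again, the library and the random test matrix are illustrative assumptions) verifies them for a small real matrix.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))       # an arbitrary example matrix

# A real symmetric matrix is normal, so the spectral theorem applies:
# its orthonormal eigenvectors U diagonalize it, U^T S U = diag(λ1, ..., λn).
S = A + A.T
w, U = np.linalg.eigh(S)
assert np.allclose(U.T @ S @ U, np.diag(w))

# Each eigenvector satisfies the eigenvalue equation S e_i = λ_i e_i.
for lam, e in zip(w, U.T):
    assert np.allclose(S @ e, lam * e)

# The singular value decomposition gives unitary U, V with U∗AV diagonal and nonnegative entries.
U2, s, Vt = np.linalg.svd(A)
assert np.allclose(U2.T @ A @ Vt.T, np.diag(s))
assert np.all(s >= 0)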