# Application of Matrices in the Analysis of Time-Autonomous Systems

CHAPTER ONE
INTRODUCTION
1.1 BACKGROUND OF THE STUDY

Linear algebra and the concept of a matrix both developed out of the study of determinants, which arose from examining the coefficients of systems of linear equations. A determinant-based method for solving systems of linear equations was introduced by Cramer in 1750, building on work by Leibniz, one of the two founders of calculus. Later, in his work on bilinear forms, Lagrange made explicit use of matrices. Maxima and minima of multivariate functions were important to Lagrange, and the technique of Lagrange multipliers, which he developed, is named after him.
A condition on the matrix of second-order partial derivatives, which Lagrange used without ever employing the term "matrices," is now known as positive or negative definiteness. Gauss created Gaussian elimination to solve least-squares problems arising in astronomical calculations and in the measurements used to determine the shape of the Earth and its surface. Geodesy is the discipline of applied mathematics that deals with such measurements.
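The definiteness condition on the matrix of second-order partial derivatives (the Hessian) can be checked numerically from the signs of its eigenvalues. A minimal sketch using NumPy; the sample function and matrix are illustrative, not drawn from the study:

```python
import numpy as np

# Hessian of f(x, y) = x^2 + x*y + y^2 (constant for this quadratic)
H = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# A symmetric matrix is positive definite iff all its eigenvalues are > 0
eigenvalues = np.linalg.eigvalsh(H)   # returns eigenvalues in ascending order
is_positive_definite = bool(np.all(eigenvalues > 0))

print(eigenvalues)           # [1. 3.]
print(is_positive_definite)  # True
```

Positive definiteness of the Hessian at a critical point is exactly what guarantees a local minimum in Lagrange's setting.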

This method for eliminating variables from systems of linear equations, known as "Gaussian elimination," had been used by Chinese mathematicians for hundreds of years, even though Gauss' name is attached to it. For many years, Gaussian elimination was regarded as part of the development of geodesy rather than of mathematics; Gauss-Jordan elimination appeared for the first time in Wilhelm Jordan's handbook of geodesy.

Matrix algebra and determinants were linked early on by the relationship det(AB) = det(A)det(B). According to Cayley, "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants." Although mathematical efforts were made in this direction, no universally applicable natural definition of the product of two vectors could be found. Hermann Grassmann was the first to present a noncommutative vector product, in 1844 (that is, v × w need not equal w × v). Grassmann's work also introduced the "simple" or "rank-one" matrix, known today as the product of a column matrix and a row matrix.

The American mathematician Willard Gibbs wrote a book on vector analysis in the late nineteenth century. In that work, general matrices were represented as sums of dyads, which Gibbs also called dyadics. Dirac later coined the term "bra-ket" for what we now call the scalar product of a row and a column, and "ket-bra" for the product of a column and a row, which yields what we now call a simple matrix. Physicists adopted column matrices and vectors in the early twentieth century, and they have been widely used ever since. Matrices have always been closely related to linear transformations; by 1900 they were understood as the finite-dimensional case of linear transformations.

In 1888, Peano proposed the modern definition of a vector space, and abstract vector spaces whose members are functions quickly followed. After World War II, the emergence of powerful digital computers sparked fresh interest in matrices, especially in their numerical analysis. In 1947, John von Neumann and Herman Goldstine introduced condition numbers to analyze roundoff errors. Alan Turing and von Neumann were pioneers of the stored-program computer, and in 1948 Turing introduced the LU decomposition of a matrix: L is a lower triangular matrix with 1s on its diagonal, and U is an echelon (upper triangular) matrix. LU decomposition is often used to solve several systems of linear equations that share the same coefficient matrix. A decade later, the advantages of the QR decomposition were recognized: Q has columns that are orthonormal vectors, and R is a square, upper triangular, invertible matrix with positive entries on its diagonal. The QR factorization is used in computer algorithms for various calculations, such as solving equations and finding eigenvalues.
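The two decompositions described above can be computed directly. A brief sketch using NumPy and SciPy; the matrix A is an arbitrary example:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

# LU decomposition (with row pivoting): P is a permutation matrix,
# L is lower triangular with 1s on its diagonal, U is upper triangular
P, L, U = lu(A)
assert np.allclose(P @ L @ U, A)

# The same factorization can be reused to solve several systems
# A x = b that share the coefficient matrix A
b = np.array([10.0, 12.0])
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)

# QR decomposition: Q has orthonormal columns, R is upper triangular
Q, R = np.linalg.qr(A)
assert np.allclose(Q @ R, A)
assert np.allclose(Q.T @ Q, np.eye(2))  # columns of Q are orthonormal
```

In practice, library routines for solving equations and eigenvalue problems are built on exactly these factorizations.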

1.3 PURPOSE AND OBJECTIVES OF THE RESEARCH
The primary goal of this study is the matrix-based analysis of time-autonomous systems. The specific objectives of the study are:

1. to examine what an autonomous system is;
2. to ascertain the laws and algebra of matrices;
3. to use matrices to solve first- and second-order differential equations;
4. to study how matrices may be used to solve time-autonomous systems.
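As an illustration of the objectives concerning differential equations, a linear time-autonomous system x'(t) = Ax(t) can be solved with matrix methods: diagonalize A and form x(t) = V e^(Λt) V⁻¹ x(0). A sketch with NumPy, using an arbitrary 2×2 example rather than a system from the study itself:

```python
import numpy as np

# Time-autonomous linear system x'(t) = A x(t): A does not depend on t
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # eigenvalues -1 and -2, so the system is stable
x0 = np.array([1.0, 0.0])      # initial condition x(0)

# Diagonalize: A = V diag(lam) V^{-1}
lam, V = np.linalg.eig(A)

def solution(t):
    """x(t) = V exp(diag(lam) * t) V^{-1} x0"""
    return (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V) @ x0).real

# At t = 0 the formula reproduces the initial condition
assert np.allclose(solution(0.0), x0)

# Check x'(t) = A x(t) numerically at t = 1 via central differences
h = 1e-6
numeric_derivative = (solution(1.0 + h) - solution(1.0 - h)) / (2 * h)
assert np.allclose(numeric_derivative, A @ solution(1.0), atol=1e-4)
```

Because both eigenvalues are negative, the solution decays to the origin, which is how matrix eigenvalues determine the qualitative behavior of an autonomous system.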
1.4 RESEARCH QUESTIONS
To address the study objectives above, the following research questions were formulated:
1. What is a time-autonomous system?
2. What are the algebraic rules and laws of matrices?
3. Is it beneficial to use matrices to solve first- and second-order differential equations?
4. How may matrices be used to solve time-autonomous systems?
1.5 SIGNIFICANCE OF THE STUDY
Research on the application of matrices in the analysis of time-autonomous systems will benefit mathematics, physics, and related sciences. The study will examine time-independent systems and investigate how matrices may be used to determine their solutions. The research also has potential biological, medical, and radiological applications. Those interested in conducting comparable research on the subject may use this study as a resource. Finally, the research will add to the growing body of work on the application of matrices in the analysis of time-autonomous systems.
