diff --git a/source/linear-algebra/source/01-LE/01.ptx b/source/linear-algebra/source/01-LE/01.ptx
index c57f4e96f..89638228e 100644
--- a/source/linear-algebra/source/01-LE/01.ptx
+++ b/source/linear-algebra/source/01-LE/01.ptx
@@ -76,7 +76,7 @@ with m rows and n columns:
 \left[\begin{array}{cccc} \vec v_1 & \vec v_2 & \cdots & \vec v_n\end{array}\right] .
 Frequently we will use matrices to describe an ordered list of
-its column vectors:
+its column vectorsvector:
 \left[\begin{array}{c}
 a_{11} \\
@@ -127,7 +127,7 @@ When order is irrelevant, we will use set notation:

-A Euclidean vectorEuclidean vectorvectorEuclidean is an ordered
+A Euclidean vectorEuclideanvector is an ordered
 list of real numbers
 \left[\begin{array}{c}
@@ -260,13 +260,13 @@ Following are some examples of addition and scalar multiplication in \mathbb

-A linear equationlinear equation is an equation of the variables x_i of the form
+A linear equationlinearequation is an equation of the variables x_i of the form
 a_1x_1+a_2x_2+\dots+a_nx_n=b .

- A solutionlinear equationsolution for a linear equation is a Euclidean vector
+ A solutionsolutionof a linear equation for a linear equation is a Euclidean vector
 \left[\begin{array}{c}
 s_1 \\
@@ -298,7 +298,7 @@ variables, we will often write our variables as x_i, and assume

 A system of linear equations
- system of linear equationslinear system (or a linear system for short)
+ system of linear equationslinearsystem (or a linear system for short)
 is a collection of one or more linear equations.
@@ -315,7 +315,7 @@ is a collection of one or more linear equations.

- Its solution setsolution set is given by
+ Its solution setsolutionset is given by
 \setBuilder
 {
@@ -340,7 +340,7 @@ is a collection of one or more linear equations.

 When variables in a large linear system are missing, we prefer to
- write the system in one of the following standard forms:
+ write the system in one of the following standard forms:standardform of a linear system

@@ -415,9 +415,9 @@ is equivalent to the system of equations

-A linear system is consistentlinear systemconsistent if its solution set
+A linear system is consistentconsistent linear system if its solution set
 is non-empty (that is, there exists a solution for the
-system). Otherwise it is inconsistent.linear systeminconsistent
+system). Otherwise it is inconsistent.inconsistent linear system

@@ -425,7 +425,7 @@ system). Otherwise it is inconsistent.linear systemi

- All linear systems are one of the following:
+ All linear systems are one of the following:consistent linear systeminconsistent linear system

   1. Consistent with one solution: its solution set contains a single vector, e.g.
@@ -782,7 +782,7 @@ Then use these to describe the solution set
 Sometimes, we will find it useful to refer only to the coefficients
 of the linear system (and ignore its constant terms). We call the
 m\times n array consisting of these coefficients
- a coefficient matrixcoeefficient matrix.
+ a coefficient matrixcoefficient matrix.
 \left[\begin{array}{cccc}
 a_{11} & a_{12} & \cdots & a_{1n}\\
diff --git a/source/linear-algebra/source/01-LE/02.ptx b/source/linear-algebra/source/01-LE/02.ptx
index 2d16ffb70..c7898d19d 100644
--- a/source/linear-algebra/source/01-LE/02.ptx
+++ b/source/linear-algebra/source/01-LE/02.ptx
@@ -360,7 +360,7 @@ solution set?

- The following three row operationsrow operations produce equivalent
+ The following three row operationsrowoperations produce equivalent
 augmented matrices.

    1.
@@ -556,10 +556,10 @@ matrix to the next:

-A matrix is in reduced row echelon form (RREF)Reduced row echelon form if
+A matrix is in reduced row echelon form (RREF)reduced row echelon form if
 pivot

       1. The leftmost nonzero term of each row is 1.
- We call these terms pivots.pivot
+ We call these terms pivots.
      2. Each pivot is to the right of every higher pivot.
       3.
@@ -638,7 +638,7 @@ we use technology to do so. However, it is also important
 to understand the Gauss-Jordan eliminationGauss-Jordan elimination algorithm that a computer
 or calculator uses to convert a matrix (augmented or not) into reduced row
 echelon form. Understanding this algorithm will help us better understand how to interpret the results
-in many applications we use it for in .
+in many applications we use it for in .rowreduction

        @@ -841,7 +841,7 @@ Select "Octave" for the Matlab-compatible syntax used by this text.

 Type rref([1,4,6;2,5,7]) and then press the Evaluate button to compute the \RREF of
-\left[\begin{array}{ccc} 1 & 4 & 6 \\ 2 & 5 & 7 \end{array}\right].
+\left[\begin{array}{ccc} 1 & 4 & 6 \\ 2 & 5 & 7 \end{array}\right].rowreduction

diff --git a/source/linear-algebra/source/01-LE/03.ptx b/source/linear-algebra/source/01-LE/03.ptx
index 5899d5d9f..808f7e746 100644
--- a/source/linear-algebra/source/01-LE/03.ptx
+++ b/source/linear-algebra/source/01-LE/03.ptx
@@ -258,7 +258,7 @@ How many solutions must this system have?

 By finding \RREF(A) from a linear system's corresponding augmented matrix A,
-we can immediately tell how many solutions the system has.
+we can immediately tell how many solutions the system has. consistent linear systeminconsistent linear systemsolutionset

         •
@@ -467,7 +467,7 @@ different solutions. We'll learn how to find such solution sets in
 Mathematical Writing Explorations

-A system of equations with all constants equal to 0 is called homogeneous. These are addressed in detail in section
+A system of equations with all constants equal to 0 is called homogeneoushomogeneous. These are addressed in detail in section

          • Choose three systems of equations from this chapter that you have already solved. Replace the constants with 0 to make the systems homogeneous. Solve the homogeneous systems and make a conjecture about the relationship between the earlier solutions you found and the associated homogeneous systems.
          •
diff --git a/source/linear-algebra/source/01-LE/04.ptx b/source/linear-algebra/source/01-LE/04.ptx
index 7af85e7b3..02645bf35 100644
--- a/source/linear-algebra/source/01-LE/04.ptx
+++ b/source/linear-algebra/source/01-LE/04.ptx
@@ -194,7 +194,7 @@ in the following set-builder notation.

-Don't forget to correctly express the solution set of a linear system.
+Don't forget to correctly express the solution setsolutionset of a linear system.
 Systems with zero or one solutions may be written by listing their elements,
 while systems with infinitely-many solutions may be written using set-builder notation.
diff --git a/source/linear-algebra/source/02-EV/01.ptx b/source/linear-algebra/source/02-EV/01.ptx
index cbcd8876b..64b84d13d 100644
--- a/source/linear-algebra/source/02-EV/01.ptx
+++ b/source/linear-algebra/source/02-EV/01.ptx
@@ -68,7 +68,7 @@ we refer to this real number as a scalar.

- A linear combination linear combinationof a set of vectors
+ A linear combination linearcombinationof a set of vectors
 \{\vec v_1,\vec v_2,\dots,\vec v_n\} is given by
 c_1\vec v_1+c_2\vec v_2+\dots+c_n\vec v_n for any choice of scalar
 multiples c_1,c_2,\dots,c_n.
diff --git a/source/linear-algebra/source/02-EV/02.ptx b/source/linear-algebra/source/02-EV/02.ptx
index 1c9c865b4..af6066a3f 100644
--- a/source/linear-algebra/source/02-EV/02.ptx
+++ b/source/linear-algebra/source/02-EV/02.ptx
@@ -347,7 +347,7 @@ D. The set \{\vec v_1,\dots,\vec v_n\} spans all of \IR^m
 exactly when the vector equation
 x_1 \vec{v}_1 + \cdots + x_n\vec{v}_n = \vec{w}
- is consistent for every vector \vec{w}\in\IR^m.
+ is consistent for every vector \vec{w}\in\IR^m.span

 Likewise, the set \{\vec v_1,\dots,\vec v_n\} fails to span all of \IR^m
diff --git a/source/linear-algebra/source/02-EV/03.ptx b/source/linear-algebra/source/02-EV/03.ptx
index 6b74848a8..fc272a88e 100644
--- a/source/linear-algebra/source/02-EV/03.ptx
+++ b/source/linear-algebra/source/02-EV/03.ptx
@@ -317,12 +317,12 @@

- the subset is closed under additionvector spaceclosed under addition: for any \vec{u},\vec{v} \in W, the sum \vec{u}+\vec{v} is also in W.
+ the subset is closed under additionclosedunder addition: for any \vec{u},\vec{v} \in W, the sum \vec{u}+\vec{v} is also in W.

- the subset is closed under scalar multiplicationvector spaceclosed under scalar multiplication: for any \vec{u} \in W and scalar c \in \IR, the product c\vec{u} is also in W.
+ the subset is closed under scalar multiplicationclosedunder scalar multiplication: for any \vec{u} \in W and scalar c \in \IR, the product c\vec{u} is also in W.

diff --git a/source/linear-algebra/source/02-EV/04.ptx b/source/linear-algebra/source/02-EV/04.ptx
index 1c7032283..8d930b18f 100644
--- a/source/linear-algebra/source/02-EV/04.ptx
+++ b/source/linear-algebra/source/02-EV/04.ptx
@@ -79,9 +79,9 @@

- We say that a set of vectors is linearly dependentlinearly dependent if one vector
+ We say that a set of vectors is linearly dependentlineardependence if one vector
 in the set belongs to the span of the others. Otherwise, we say the set
- is linearly independent.linearly independent
+ is linearly independent.linearindependence

 A linearly dependent set of three vectors
diff --git a/source/linear-algebra/source/02-EV/05.ptx b/source/linear-algebra/source/02-EV/05.ptx
index bdef470e9..a4cb6920f 100644
--- a/source/linear-algebra/source/02-EV/05.ptx
+++ b/source/linear-algebra/source/02-EV/05.ptx
@@ -175,7 +175,7 @@ In this analogy, a recipe was defined to be a list of amounts of each i

- A basis basis of a vector space V is a set of vectors S contained in V for which
+ A basisbasisof a vector space of a vector space V is a set of vectors S contained in V for which

           1.
@@ -244,7 +244,7 @@ that equals the vector

- The standard basisbasisstandard of \IR^n is the set \{\vec{e}_1, \ldots, \vec{e}_n\} where
+ The standard basisstandardbasis of \IR^n is the set \{\vec{e}_1, \ldots, \vec{e}_n\} where
 \vec{e}_1 &= \left[\begin{array}{c}1 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0 \end{array}\right] &
diff --git a/source/linear-algebra/source/02-EV/06.ptx b/source/linear-algebra/source/02-EV/06.ptx
index cfe6e05f2..dfd2d91ed 100644
--- a/source/linear-algebra/source/02-EV/06.ptx
+++ b/source/linear-algebra/source/02-EV/06.ptx
@@ -128,7 +128,7 @@ is now independent.

- Let W be a subspace of a vector space. A basis for
+ Let W be a subspacesubspace of a vector space. A basisbasisof a subspace for
 W is a linearly independent set of vectors that spans W
 (but not necessarily the entire vector space).

            @@ -236,9 +236,9 @@ Thus a given basis for a subspace need not be unique.

- Any non-trivial real vector space has infinitely-many different bases, but all
+ Any nontrivial real vector space has infinitely-many different bases, but all
 the bases for a given vector space are exactly the same size.
- So we say the dimension of a vector space or subspace is equal to the
+ So we say the dimensiondimension of a vector space or subspace is equal to the
 size of any basis for the vector space.

diff --git a/source/linear-algebra/source/02-EV/07.ptx b/source/linear-algebra/source/02-EV/07.ptx
index b97f62ea5..8c27b0226 100644
--- a/source/linear-algebra/source/02-EV/07.ptx
+++ b/source/linear-algebra/source/02-EV/07.ptx
@@ -274,7 +274,7 @@ Find a basis for this solution space.
 \left[\begin{array}{c} 1 \\ \textcolor{blue}{0} \\ -2 \\ -6\\\textcolor{blue}{0}\\\textcolor{blue}{0}\\\textcolor{blue}{1} \end{array}\right]
 }
- is a basis for the solution space.
+is a basis for the solution space.basisof a solution space

@@ -458,7 +458,7 @@ shape of the data that gets projected onto this single point of the screen?
 Mathematical Writing Explorations

-An n \times n matrix M is non-singularnon-singular if the associated homogeneous system with coefficient matrix M is consistent with one solution. Assume the matrices in the writing explorations in this section are all non-singular.
+An n \times n matrix M is non-singularnon-singular matrix if the associated homogeneous system with coefficient matrix M is consistent with one solution. Assume the matrices in the writing explorations in this section are all non-singular.

          • Prove that the reduced row echelon form of M is the identity matrix.
diff --git a/source/linear-algebra/source/03-AT/01.ptx b/source/linear-algebra/source/03-AT/01.ptx
index 55c1f0e69..a8af810ba 100644
--- a/source/linear-algebra/source/03-AT/01.ptx
+++ b/source/linear-algebra/source/03-AT/01.ptx
@@ -51,7 +51,7 @@

-A linear transformation linear transformation(also called a linear map)
+A linear transformation lineartransformation(also called a linear map)
 is a map between vector spaces that preserves the vector
 space operations. More precisely, if V and W are vector spaces, a map
 T:V\rightarrow W is called a linear transformation if
@@ -79,8 +79,8 @@ can be applied before or after the transformation without affecting the result.

 Given a linear transformation T:V\to W,
-V is called the domain of T and
-W is called the co-domain of T.
+V is called the domaindomain of T and
+W is called the co-domainco-domain of T.

 A linear transformation with a domain of \IR^3 and a co-domain of \IR^2
diff --git a/source/linear-algebra/source/03-AT/02.ptx b/source/linear-algebra/source/03-AT/02.ptx
index 6e214ad0b..5d4fd7a04 100644
--- a/source/linear-algebra/source/03-AT/02.ptx
+++ b/source/linear-algebra/source/03-AT/02.ptx
@@ -286,7 +286,7 @@ to compute

-Consider any basis \{\vec b_1,\dots,\vec b_n\} for V. Since every
+Consider any basis \{\vec b_1,\dots,\vec b_n\} for V.basis Since every
 vector \vec v can be written as a linear combination of basis vectors,
 \vec v = x_1\vec b_1+\dots+ x_n\vec b_n, we may compute T(\vec v) as follows:
@@ -298,7 +298,7 @@ Therefore any linear transformation T:V \rightarrow W can be
 defined by just describing the values of T(\vec b_i).

-Put another way, the images of the basis vectors completely determinedetermine the transformation T.
+Put another way, the images of the basis vectors completely determine the transformation T.

@@ -308,7 +308,7 @@ Put another way, the images of the basis vectors completely determine
 Since a linear transformation T:\IR^n\to\IR^m is determined by its action
 on the standard basis \{\vec e_1,\dots,\vec e_n\}, it is convenient to
-store this information in an m\times n matrix, called the standard matrixstandard matrix of T, given by
+store this information in an m\times n matrix, called the standard matrixstandardmatrix of T, given by
 [T(\vec e_1) \,\cdots\, T(\vec e_n)].

diff --git a/source/linear-algebra/source/03-AT/03.ptx b/source/linear-algebra/source/03-AT/03.ptx
index bb0bcb58b..d2a32b378 100644
--- a/source/linear-algebra/source/03-AT/03.ptx
+++ b/source/linear-algebra/source/03-AT/03.ptx
@@ -84,7 +84,7 @@ the set of all vectors that transform into \vec 0?

 Let T: V \rightarrow W be a linear transformation, and let \vec{z} be the additive
-identity (the zero vector) of W. The kernelkernel of T
+identity (the zero vector) of W. The kernelkernelof T
 (also known as the null spacenull space of T)
 is an important subspace of V defined by
@@ -225,7 +225,7 @@ Let T: \IR^3 \rightarrow \IR^2 be the linear transformation with the foll

 In particular, the kernel is a subspace of the transformation's domain,
 and has a basis which may be found as in
- :
+ : basisof a kernel
 \ker T=\left\{\left[\begin{array}{c}3a\\-2a\\a\end{array}\right]\middle| a\in\IR\right\}
 \hspace{2em}
@@ -532,7 +532,7 @@ spans \Im T, we can obtain a basis for \Im T by finding
 = \left[\begin{array}{cccc} 1 & 0 & 1 & -1\\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{array}\right]
-and only using the vectors corresponding to pivot columns:
+and only using the vectors corresponding to pivot columns: basisof an image
 \setList{
 \left[\begin{array}{c}3\\-1\\2\end{array}\right],
@@ -656,7 +656,7 @@ Which of the following is equal to the dimension of the image of T?

-Combining these with the observation that the number of columns is the dimension of the domain of T, we have the rank-nullity theorem:
+Combining these with the observation that the number of columns is the dimension of the domain of T, we have the rank-nullity theoremrank-nullity theorem:

              @@ -664,7 +664,7 @@ The dimension of the domain of T equals \dim(\ker T)+\dim(\Im T).

-The dimension of the image is called the rank of T (or A) and the dimension of the kernel is called the nullity.
+The dimension of the image is called the rankrankof a linear transformation of T (or A) and the dimension of the kernel is called the nullitynullity.

              @@ -744,11 +744,11 @@ The dimension of the image is called the rank of T (or A<

 We can use our notation of span in relation to a matrix, not just in relation to a set of vectors.
-Given a matrix M
+Given a matrix Mcolumn spacerowspace

-• the span of the set of all columns is the column spacecolumn space
+• the span of the set of all columns is the column space
-• the span of the set of all rows is the row spacerow space
+• the span of the set of all rows is the row space

 \mbox{Let } M = \left[\begin{array}{ccc|c}1&-1&0\\2&2&4\\-1&0&-1\end{array}\right]
diff --git a/source/linear-algebra/source/03-AT/04.ptx b/source/linear-algebra/source/03-AT/04.ptx
index 33609a6ff..74ea551a9 100644
--- a/source/linear-algebra/source/03-AT/04.ptx
+++ b/source/linear-algebra/source/03-AT/04.ptx
@@ -39,7 +39,7 @@

 Let T: V \rightarrow W be a linear transformation.
-T is called injective or one-to-one if T does not map two
+T is called injectiveinjective or one-to-oneone-to-oneinjective if T does not map two
 distinct vectors to the same place. More precisely, T is injective if
 T(\vec{v}) \neq T(\vec{w}) whenever \vec{v} \neq \vec{w}.

              @@ -208,7 +208,7 @@ Is T injective?

 Let T: V \rightarrow W be a linear transformation.
-T is called surjective or onto if every element of W is mapped to by an element of V. More precisely, for every \vec{w} \in W, there is some \vec{v} \in V with T(\vec{v})=\vec{w}.
+T is called surjectivesurjective or ontoontosurjective if every element of W is mapped to by an element of V. More precisely, for every \vec{w} \in W, there is some \vec{v} \in V with T(\vec{v})=\vec{w}.

 A surjective transformation and a non-surjective transformation
@@ -429,7 +429,7 @@ Let T: V \rightarrow W be a linear transformation where

 A linear transformation T is injective if and only if \ker T = \{\vec{0}\}.
 Put another way, an injective linear transformation may be
-recognized by its trivial kernel.
+recognized by its trivial kernel.injective

 A linear transformation with trivial kernel, which is therefore injective
@@ -508,7 +508,7 @@ What can you conclude?

 A linear transformation T:V \rightarrow W is surjective if and only if \Im T = W.
 Put another way, a surjective linear transformation may be
-recognized by its identical codomain and image.
+recognized by its identical codomain and image.surjective

              @@ -742,7 +742,7 @@ means T is (A) injective, (B) surjective, or

            • If \dim(V)<\dim(W), then T is not surjective.

        •
-
+dimension
 Basically, a linear transformation cannot reduce dimension without collapsing vectors into each other, and a linear transformation cannot increase dimension from its domain to its image.
diff --git a/source/linear-algebra/source/03-AT/05.ptx b/source/linear-algebra/source/03-AT/05.ptx
index 30afa27dd..6e726efbf 100644
--- a/source/linear-algebra/source/03-AT/05.ptx
+++ b/source/linear-algebra/source/03-AT/05.ptx
@@ -281,13 +281,13 @@ Which of the following properties of \IR^2 Euclidean vectors is NOT true?
       4. An additive identity exists: There exists some \vec z
- where \vec v\oplus \vec z=\vec v.additive identity
+ where \vec v\oplus \vec z=\vec v.

       5. Additive inverses exist: There exists some -\vec v
- where \vec v\oplus (-\vec v)=\vec z.additive inverse
+ where \vec v\oplus (-\vec v)=\vec z.

       6.
diff --git a/source/linear-algebra/source/03-AT/06.ptx b/source/linear-algebra/source/03-AT/06.ptx
index 305521f3e..e7f0ed3d9 100644
--- a/source/linear-algebra/source/03-AT/06.ptx
+++ b/source/linear-algebra/source/03-AT/06.ptx
@@ -555,7 +555,7 @@ as a linear combination of polynomials from the set

- Given a matrix M the rankrank of a matrix is the dimension of the column space.
+ Given a matrix M the rankrankof a matrix of a matrix is the dimension of the column space.
diff --git a/source/linear-algebra/source/04-MX/01.ptx b/source/linear-algebra/source/04-MX/01.ptx
index 58a53dfd4..ecc685144 100644
--- a/source/linear-algebra/source/04-MX/01.ptx
+++ b/source/linear-algebra/source/04-MX/01.ptx
@@ -147,7 +147,7 @@ What are the domain and codomain of the composition map S \circ T?

-We define the product AB of a m \times n matrix A and a
+We define the productmatrixproduct AB of a m \times n matrix A and a
 n \times k matrix B to be the m \times k standard matrix of the
 composition map of the two corresponding linear functions.
diff --git a/source/linear-algebra/source/04-MX/02.ptx b/source/linear-algebra/source/04-MX/02.ptx
index 14fe9c8eb..0f04aac3d 100644
--- a/source/linear-algebra/source/04-MX/02.ptx
+++ b/source/linear-algebra/source/04-MX/02.ptx
@@ -108,7 +108,7 @@ Let T: \IR^n \rightarrow \IR^n be a linear bijection with standard matrix

 By item (B) from
-we may define an inverse mapinverse map
+we may define an inverse mapinversemapinvertiblelinear transformation
 T^{-1} : \IR^n \rightarrow \IR^n
 that defines T^{-1}(\vec b) as the unique solution \vec x satisfying T(\vec x)=\vec b,
 that is, T(T^{-1}(\vec b))=\vec b.
@@ -117,8 +117,8 @@ that defines T^{-1}(\vec b) as the unique solution \vec x satisfyi
 Furthermore, let A^{-1}=[T^{-1}(\vec e_1)\hspace{1em}\cdots\hspace{1em}T^{-1}(\vec e_n)]
 be the standard matrix for T^{-1}. We call A^{-1} the
-inverse matrixinverse matrix of A,
-and we also say that A is an invertibleinvertible
+inverse matrixinversematrix of A,
+and we also say that A is an invertibleinvertiblematrix
 matrix.

diff --git a/source/linear-algebra/source/04-MX/03.ptx b/source/linear-algebra/source/04-MX/03.ptx
index 582acd20e..e957eb965 100644
--- a/source/linear-algebra/source/04-MX/03.ptx
+++ b/source/linear-algebra/source/04-MX/03.ptx
@@ -145,7 +145,7 @@

- Given a basis \mathcal{B}=\setList{\vec{b}_1,\dots, \vec{b}_n} of \IR^n and corresponding matrix B=\begin{bmatrix}\vec b_1&\cdots&\vec b_n\end{bmatrix}, the change of basis/coordinate transformation from the standard basis to \mathcal{B} is the transformation C_\mathcal{B}\colon\IR^n\to\IR^n defined by the property that, for any vector \vec{v}\in\IR^n, the vector C_\mathcal{B}(\vec{v}) describes the unique way to write
+ Given a basis \mathcal{B}=\setList{\vec{b}_1,\dots, \vec{b}_n} of \IR^n and corresponding matrix B=\begin{bmatrix}\vec b_1&\cdots&\vec b_n\end{bmatrix}, the change of basis/coordinatechange of basis transformation from the standard basis to \mathcal{B} is the transformation C_\mathcal{B}\colon\IR^n\to\IR^n defined by the property that, for any vector \vec{v}\in\IR^n, the vector C_\mathcal{B}(\vec{v}) describes the unique way to write
 \vec v in terms of the basis, that is, the unique solution to the vector equation:
 \vec v=x_1\vec{b}_1+\dots+x_n\vec{b}_n.
diff --git a/source/linear-algebra/source/05-GT/01.ptx b/source/linear-algebra/source/05-GT/01.ptx
index ecba992c2..ceef0c974 100644
--- a/source/linear-algebra/source/05-GT/01.ptx
+++ b/source/linear-algebra/source/05-GT/01.ptx
@@ -252,7 +252,7 @@ begins with area 1).

-We will define the determinant of a square matrix B,
+We will define the determinantdeterminant of a square matrix B,
 or \det(B) for short, to be the factor by which B scales areas.
 In order to figure out how to compute it, we first figure out the properties it must satisfy.

@@ -286,7 +286,7 @@ In order to figure out how to compute it, we first figure out the properties it
 The images from and
 are shown side by side.
- The linear transformation B scaling areas by a constant factor, which we call the determinant
+ The linear transformation B scaling areas by a constant factor, which we call the determinantdeterminant
        @@ -429,7 +429,7 @@ the areas on the left may be partitioned and reorganized into the area on the ri

-The determinant is the unique function
+The determinantdeterminant is the unique function
 \det:M_{n,n}\to\IR satisfying these properties:

        1. \det(I)=1
     2.
diff --git a/source/linear-algebra/source/05-GT/03.ptx b/source/linear-algebra/source/05-GT/03.ptx
index 39dd2e973..4c256a6c4 100644
--- a/source/linear-algebra/source/05-GT/03.ptx
+++ b/source/linear-algebra/source/05-GT/03.ptx
@@ -173,7 +173,7 @@ is a vector \vec{x} \in \IR^n such that A\vec{x} is parallel to

 In other words, A\vec{x}=\lambda \vec{x} for some scalar \lambda.
-If \vec x\not=\vec 0, then we say \vec x is a nontrivial eigenvectoreigenvectornontrivial
+If \vec x\not=\vec 0, then we say \vec x is a nontrivial eigenvectornontrivial eigenvector
 and we call this \lambda an eigenvalueeigenvalue of A.

          @@ -272,7 +272,7 @@ And what else?

 The expression \det(A-\lambda I) is called the
-characteristic polynomial of A.
+characteristic polynomialcharacteristic polynomial of A.

          For example, when