From 0abc2e4d3e01685b760a5a1ad6a0de714989024c Mon Sep 17 00:00:00 2001 From: Drew Lewis Date: Thu, 23 Apr 2026 22:54:54 +0000 Subject: [PATCH 1/6] Clean up existing index --- source/linear-algebra/source/01-LE/01.ptx | 8 ++++---- source/linear-algebra/source/01-LE/02.ptx | 2 +- source/linear-algebra/source/02-EV/05.ptx | 2 +- source/linear-algebra/source/03-AT/02.ptx | 2 +- 4 files changed, 7 insertions(+), 7 deletions(-) diff --git a/source/linear-algebra/source/01-LE/01.ptx b/source/linear-algebra/source/01-LE/01.ptx index c57f4e96f..3162fbacc 100644 --- a/source/linear-algebra/source/01-LE/01.ptx +++ b/source/linear-algebra/source/01-LE/01.ptx @@ -127,7 +127,7 @@ When order is irrelevant, we will use set notation:

-A Euclidean vectorEuclidean vectorvectorEuclidean is an ordered +A Euclidean vectorEuclideanvectorvectorEuclidean is an ordered list of real numbers \left[\begin{array}{c} @@ -260,7 +260,7 @@ Following are some examples of addition and scalar multiplication in \mathbb

-A linear equationlinear equation is an equation of the variables x_i of the form +A linear equationlinear equationequationlinear is an equation of the variables x_i of the form a_1x_1+a_2x_2+\dots+a_nx_n=b . @@ -390,7 +390,7 @@ is a collection of one or more linear equations.

-It will often be convenient to think of a system of equations as a vector equation.vector equation +It will often be convenient to think of a system of equations as a vector equation.vector equationequationvector

By applying vector operations and equating components, it is straightforward to see that the vector equation @@ -782,7 +782,7 @@ Then use these to describe the solution set Sometimes, we will find it useful to refer only to the coefficients of the linear system (and ignore its constant terms). We call the m\times n array consisting of these coefficients - a coefficient matrixcoeefficient matrix. + a coefficient matrixcoefficient matrix. \left[\begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1n}\\ diff --git a/source/linear-algebra/source/01-LE/02.ptx b/source/linear-algebra/source/01-LE/02.ptx index 2d16ffb70..bfb115312 100644 --- a/source/linear-algebra/source/01-LE/02.ptx +++ b/source/linear-algebra/source/01-LE/02.ptx @@ -556,7 +556,7 @@ matrix to the next:

-A matrix is in reduced row echelon form (RREF)Reduced row echelon form if +A matrix is in reduced row echelon form (RREF)reduced row echelon form if

  1. The leftmost nonzero term of each row is 1. We call these terms pivots.pivot diff --git a/source/linear-algebra/source/02-EV/05.ptx b/source/linear-algebra/source/02-EV/05.ptx index bdef470e9..49f7314bb 100644 --- a/source/linear-algebra/source/02-EV/05.ptx +++ b/source/linear-algebra/source/02-EV/05.ptx @@ -244,7 +244,7 @@ that equals the vector

    - The standard basisbasisstandard of \IR^n is the set \{\vec{e}_1, \ldots, \vec{e}_n\} where + The standard basisbasisstandardstandard basis of \IR^n is the set \{\vec{e}_1, \ldots, \vec{e}_n\} where \vec{e}_1 &= \left[\begin{array}{c}1 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0 \end{array}\right] & diff --git a/source/linear-algebra/source/03-AT/02.ptx b/source/linear-algebra/source/03-AT/02.ptx index 6e214ad0b..2facf296c 100644 --- a/source/linear-algebra/source/03-AT/02.ptx +++ b/source/linear-algebra/source/03-AT/02.ptx @@ -298,7 +298,7 @@ Therefore any linear transformation T:V \rightarrow W can be defined by just describing the values of T(\vec b_i).

    -Put another way, the images of the basis vectors completely determinedetermine the transformation T. +Put another way, the images of the basis vectors completely determine the transformation T.
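The claim above — that the images of the basis vectors completely determine the transformation — can be illustrated with a minimal Python sketch (not part of the textbook source; the images `T_e1`, `T_e2` are made-up example vectors):

```python
# A linear map T on R^2 is pinned down by the images of the standard
# basis vectors e1, e2.  These two images are hypothetical examples.
T_e1 = [3.0, 1.0]   # assumed image T(e1)
T_e2 = [-2.0, 4.0]  # assumed image T(e2)

def apply_T(v):
    """Apply T to v = [v1, v2] using linearity: T(v) = v1*T(e1) + v2*T(e2)."""
    v1, v2 = v
    return [v1 * T_e1[i] + v2 * T_e2[i] for i in range(2)]

# The standard matrix [T(e1) T(e2)] gives the same answer via
# matrix-vector multiplication.
A = [[T_e1[0], T_e2[0]],
     [T_e1[1], T_e2[1]]]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

v = [5.0, -1.0]
print(apply_T(v))    # [17.0, 1.0]
print(matvec(A, v))  # same vector
```

Both computations agree for every `v`, which is exactly the sense in which the two image vectors determine `T`.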

    From b82f0a28ac3c466cbeb0e25c0ce3b4da64ac122f Mon Sep 17 00:00:00 2001 From: Drew Lewis Date: Thu, 23 Apr 2026 23:58:46 +0000 Subject: [PATCH 2/6] Add index items for most terms --- source/linear-algebra/source/01-LE/03.ptx | 2 +- source/linear-algebra/source/01-LE/04.ptx | 4 ++-- source/linear-algebra/source/02-EV/01.ptx | 4 ++-- source/linear-algebra/source/02-EV/03.ptx | 6 +++--- source/linear-algebra/source/02-EV/05.ptx | 2 +- source/linear-algebra/source/02-EV/06.ptx | 2 +- source/linear-algebra/source/02-EV/07.ptx | 4 ++-- source/linear-algebra/source/03-AT/01.ptx | 4 ++-- source/linear-algebra/source/03-AT/02.ptx | 2 +- source/linear-algebra/source/03-AT/03.ptx | 8 ++++---- source/linear-algebra/source/03-AT/04.ptx | 4 ++-- source/linear-algebra/source/03-AT/06.ptx | 2 +- source/linear-algebra/source/04-MX/01.ptx | 2 +- source/linear-algebra/source/04-MX/02.ptx | 4 ++-- source/linear-algebra/source/05-GT/01.ptx | 6 +++--- source/linear-algebra/source/05-GT/03.ptx | 4 ++-- 16 files changed, 30 insertions(+), 30 deletions(-) diff --git a/source/linear-algebra/source/01-LE/03.ptx b/source/linear-algebra/source/01-LE/03.ptx index 5899d5d9f..e5188a958 100644 --- a/source/linear-algebra/source/01-LE/03.ptx +++ b/source/linear-algebra/source/01-LE/03.ptx @@ -467,7 +467,7 @@ different solutions. We'll learn how to find such solution sets in Mathematical Writing Explorations -

    A system of equations with all constants equal to 0 is called homogeneous. These are addressed in detail in section +

    A system of equations with all constants equal to 0 is called homogeneoussystem of linear equationshomogeneouslinear systemhomogeneous. These are addressed in detail in section

    • Choose three systems of equations from this chapter that you have already solved. Replace the constants with 0 to make the systems homogeneous. Solve the homogeneous systems and make a conjecture about the relationship between the earlier solutions you found and the associated homogeneous systems.
    • diff --git a/source/linear-algebra/source/01-LE/04.ptx b/source/linear-algebra/source/01-LE/04.ptx index 7af85e7b3..880bbf56a 100644 --- a/source/linear-algebra/source/01-LE/04.ptx +++ b/source/linear-algebra/source/01-LE/04.ptx @@ -95,8 +95,8 @@ Recall that the pivots of a matrix in \RREF form are the leading

      The pivot columns in an augmented matrix correspond to the -bound variablesbound variables in the system of equations (x_1,x_3 below). -The remaining variables are called free variablesfree variables (x_2 below). +bound variablesbound variablesvariablesbound in the system of equations (x_1,x_3 below). +The remaining variables are called free variablesfree variablesvariablesfree (x_2 below). \left[\begin{array}{ccc|c} \markedPivot{1} & 2 & 0 & 4 \\ diff --git a/source/linear-algebra/source/02-EV/01.ptx b/source/linear-algebra/source/02-EV/01.ptx index cbcd8876b..b966fa722 100644 --- a/source/linear-algebra/source/02-EV/01.ptx +++ b/source/linear-algebra/source/02-EV/01.ptx @@ -37,7 +37,7 @@ x_3\left[\begin{array}{c} 1 \\ -1 \\ 1 \end{array}\right]= Class Activities

      -We've been working with Euclidean vector spacesEuclideanvector space +We've been working with Euclidean vector spacesEuclideanvector spacevector spaceEuclidean of the form \IR^n=\setBuilder{\left[\begin{array}{c}x_1\\x_2\\\vdots\\x_n\end{array}\right]}{x_1,x_2,\dots,x_n\in\IR} @@ -68,7 +68,7 @@ we refer to this real number as a scalar.

      - A linear combination linear combinationof a set of vectors + A linear combination linear combinationof a set of vectors \{\vec v_1,\vec v_2,\dots,\vec v_n\} is given by c_1\vec v_1+c_2\vec v_2+\dots+c_n\vec v_n for any choice of scalar multiples c_1,c_2,\dots,c_n. diff --git a/source/linear-algebra/source/02-EV/03.ptx b/source/linear-algebra/source/02-EV/03.ptx index 6b74848a8..11b1dffa3 100644 --- a/source/linear-algebra/source/02-EV/03.ptx +++ b/source/linear-algebra/source/02-EV/03.ptx @@ -100,7 +100,7 @@

      - A homogeneoushomogeneous system of linear equations is one of the form: + A homogeneoussystem of linear equationshomogeneouslinear systemhomogeneous system of linear equations is one of the form: a_{11}x_1 &\,+\,& a_{12}x_2 &\,+\,& \dots &\,+\,& a_{1n}x_n &\,=\,& 0 @@ -307,7 +307,7 @@

      - A subset W of a vector space is called a subspacesubspace + A subset W of a vector space is called a subspacesubspacevector spacesubspace provided that it satisfies the following properties:

      • @@ -1084,7 +1084,7 @@ that is, (kx)+(ky)=(kx)(ky). This is verified by the following calculatio Mathematical Writing Explorations

        - A square matrix M is symmetricsymmetric matrix if, for each index i,j, the entries m_{ij} = m_{ji}. That is, the matrix is itself when reflected over the diagonal from upper left to lower right. + A square matrix M is symmetricsymmetric matrixmatrixsymmetric if, for each index i,j, the entries m_{ij} = m_{ji}. That is, the matrix is itself when reflected over the diagonal from upper left to lower right. Prove that the set of n \times n symmetric matrices is a subspace of M_{n \times n}.

        diff --git a/source/linear-algebra/source/02-EV/05.ptx b/source/linear-algebra/source/02-EV/05.ptx index 49f7314bb..82ee7439c 100644 --- a/source/linear-algebra/source/02-EV/05.ptx +++ b/source/linear-algebra/source/02-EV/05.ptx @@ -563,7 +563,7 @@ Not a basis, because not only does it fail to span \IR^4, it's also linea

        That is, a basis for \IR^n must have exactly n vectors and - its square matrix must row-reduce to the so-called identity matrixidentity matrix + its square matrix must row-reduce to the so-called identity matrixidentity matrixmatrixidentity containing all zeros except for a downward diagonal of ones. (We will learn where the identity matrix gets its name in a later module.)
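The row-reduction criterion above can be checked mechanically. A minimal sketch (not from the textbook; the candidate basis is a made-up example) using exact `Fraction` arithmetic:

```python
from fractions import Fraction

def rref(M):
    """Row-reduce a matrix (list of rows) to reduced row echelon form.
    A small textbook-style sketch using exact Fraction arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        # find a row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if pr is None:
            continue
        M[pivot_row], M[pr] = M[pr], M[pivot_row]
        # scale the pivot row so the pivot is 1
        M[pivot_row] = [x / M[pivot_row][col] for x in M[pivot_row]]
        # clear the column everywhere else
        for r in range(rows):
            if r != pivot_row and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

# The columns [1,1] and [1,-1] form a basis of R^2:
# the square matrix with those columns row-reduces to the identity.
B = [[1, 1],
     [1, -1]]
print(rref(B))  # [[1, 0], [0, 1]] (as Fractions)
```

If the candidate set failed to be a basis, `rref` would produce a row of zeros instead of the identity matrix.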

        diff --git a/source/linear-algebra/source/02-EV/06.ptx b/source/linear-algebra/source/02-EV/06.ptx index cfe6e05f2..5434a88be 100644 --- a/source/linear-algebra/source/02-EV/06.ptx +++ b/source/linear-algebra/source/02-EV/06.ptx @@ -128,7 +128,7 @@ is now independent.

        - Let W be a subspace of a vector space. A basis for + Let W be a subspace of a vector space. A basisbasis for W is a linearly independent set of vectors that spans W (but not necessarily the entire vector space).

        diff --git a/source/linear-algebra/source/02-EV/07.ptx b/source/linear-algebra/source/02-EV/07.ptx index b97f62ea5..bb5732a51 100644 --- a/source/linear-algebra/source/02-EV/07.ptx +++ b/source/linear-algebra/source/02-EV/07.ptx @@ -14,7 +14,7 @@ Warmup

        - Recall from that a homogeneoushomogeneous system of linear equations is one of the form: + Recall from that a homogeneoushomogeneoussystem of linear equationshomogeneouslinear systemhomogeneous system of linear equations is one of the form: a_{11}x_1 &\,+\,& a_{12}x_2 &\,+\,& \dots &\,+\,& a_{1n}x_n &\,=\,& 0 @@ -458,7 +458,7 @@ shape of the data that gets projected onto this single point of the screen? Mathematical Writing Explorations -

        An n \times n matrix M is non-singularnon-singular if the associated homogeneous system with coefficient matrix M is consistent with one solution. Assume the matrices in the writing explorations in this section are all non-singular. +

        An n \times n matrix M is non-singularnon-singularmatrixnon-singular if the associated homogeneous system with coefficient matrix M is consistent with one solution. Assume the matrices in the writing explorations in this section are all non-singular.

        • Prove that the reduced row echelon form of M is the identity matrix. diff --git a/source/linear-algebra/source/03-AT/01.ptx b/source/linear-algebra/source/03-AT/01.ptx index 55c1f0e69..005e93a98 100644 --- a/source/linear-algebra/source/03-AT/01.ptx +++ b/source/linear-algebra/source/03-AT/01.ptx @@ -79,8 +79,8 @@ can be applied before or after the transformation without affecting the result.

          Given a linear transformation T:V\to W, -V is called the domain of T and -W is called the co-domain of T. +V is called the domainlinear transformationdomain of T and +W is called the co-domainlinear transformationco-domain of T.

          A linear transformation with a domain of \IR^3 and a co-domain of \IR^2 diff --git a/source/linear-algebra/source/03-AT/02.ptx b/source/linear-algebra/source/03-AT/02.ptx index 2facf296c..17ff322fe 100644 --- a/source/linear-algebra/source/03-AT/02.ptx +++ b/source/linear-algebra/source/03-AT/02.ptx @@ -308,7 +308,7 @@ Put another way, the images of the basis vectors completely determine the transf

          Since a linear transformation T:\IR^n\to\IR^m is determined by its action on the standard basis \{\vec e_1,\dots,\vec e_n\}, it is convenient to -store this information in an m\times n matrix, called the standard matrixstandard matrix of T, given by +store this information in an m\times n matrix, called the standard matrixstandard matrixmatrixstandardlinear transformationstandard matrix of T, given by [T(\vec e_1) \,\cdots\, T(\vec e_n)].

          diff --git a/source/linear-algebra/source/03-AT/03.ptx b/source/linear-algebra/source/03-AT/03.ptx index bb0bcb58b..7990627df 100644 --- a/source/linear-algebra/source/03-AT/03.ptx +++ b/source/linear-algebra/source/03-AT/03.ptx @@ -84,7 +84,7 @@ the set of all vectors that transform into \vec 0?

          Let T: V \rightarrow W be a linear transformation, and let \vec{z} be the additive -identity (the zero vector) of W. The kernelkernel of T +identity (the zero vector) of W. The kernelkernellinear transformationkernelof T (also known as the null spacenull space of T) is an important subspace of V defined by @@ -295,7 +295,7 @@ Which of these subspaces of \IR^3 describes the set of all vectors that a

          Let T: V \rightarrow W be a linear transformation. -The imageimage of T +The imageimagelinear transformationimage of T is an important subspace of W defined by \Im T = \left\{ \vec{w} \in W\ \big|\ \text{there is some }\vec v\in V \text{ with } T(\vec{v})=\vec{w}\right\} @@ -656,7 +656,7 @@ Which of the following is equal to the dimension of the image of T?

          -Combining these with the observation that the number of columns is the dimension of the domain of T, we have the rank-nullity theorem: +Combining these with the observation that the number of columns is the dimension of the domain of T, we have the rank-nullity theoremrank-nullity theorem:

          @@ -664,7 +664,7 @@ The dimension of the domain of T equals \dim(\ker T)+\dim(\Im T).

          -The dimension of the image is called the rank of T (or A) and the dimension of the kernel is called the nullity. +The dimension of the image is called the rankranklinear transformationrank of T (or A) and the dimension of the kernel is called the nullitynullitylinear transformationnullity.
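The rank-nullity relationship can be sketched numerically: compute the rank by counting pivots in a Gaussian elimination, then read the nullity off as (number of columns) minus rank. A minimal sketch with a made-up example matrix:

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix = number of pivots found by Gaussian elimination.
    Exact Fraction arithmetic; matrix is a list of rows."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Hypothetical 2x3 standard matrix: the domain is R^3, so
# dim(ker T) + dim(Im T) must equal 3.
A = [[1, 2, 0],
     [0, 1, 1]]
n = len(A[0])      # dimension of the domain
rk = rank(A)       # dimension of the image
nullity = n - rk   # dimension of the kernel, by rank-nullity
print(rk, nullity)  # 2 1
```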

          diff --git a/source/linear-algebra/source/03-AT/04.ptx b/source/linear-algebra/source/03-AT/04.ptx index 33609a6ff..0baa34006 100644 --- a/source/linear-algebra/source/03-AT/04.ptx +++ b/source/linear-algebra/source/03-AT/04.ptx @@ -39,7 +39,7 @@

          Let T: V \rightarrow W be a linear transformation. -T is called injective or one-to-one if T does not map two +T is called injectiveinjectivelinear transformationinjective or one-to-one if T does not map two distinct vectors to the same place. More precisely, T is injective if T(\vec{v}) \neq T(\vec{w}) whenever \vec{v} \neq \vec{w}.

          @@ -208,7 +208,7 @@ Is T injective?

          Let T: V \rightarrow W be a linear transformation. -T is called surjective or onto if every element of W is mapped to by an element of V. More precisely, for every \vec{w} \in W, there is some \vec{v} \in V with T(\vec{v})=\vec{w}. +T is called surjectivesurjectivelinear transformationsurjective or onto if every element of W is mapped to by an element of V. More precisely, for every \vec{w} \in W, there is some \vec{v} \in V with T(\vec{v})=\vec{w}.

          A surjective transformation and a non-surjective transformation diff --git a/source/linear-algebra/source/03-AT/06.ptx b/source/linear-algebra/source/03-AT/06.ptx index 305521f3e..424f4ed00 100644 --- a/source/linear-algebra/source/03-AT/06.ptx +++ b/source/linear-algebra/source/03-AT/06.ptx @@ -555,7 +555,7 @@ as a linear combination of polynomials from the set

          - Given a matrix M the rankrank of a matrix is the dimension of the column space. + Given a matrix M the rankrankmatrixrank of a matrix is the dimension of the column space. diff --git a/source/linear-algebra/source/04-MX/01.ptx b/source/linear-algebra/source/04-MX/01.ptx index 58a53dfd4..ecc685144 100644 --- a/source/linear-algebra/source/04-MX/01.ptx +++ b/source/linear-algebra/source/04-MX/01.ptx @@ -147,7 +147,7 @@ What are the domain and codomain of the composition map S \circ T?

          -We define the product AB of a m \times n matrix A and a +We define the productmatrixproduct AB of a m \times n matrix A and a n \times k matrix B to be the m \times k standard matrix of the composition map of the two corresponding linear functions. diff --git a/source/linear-algebra/source/04-MX/02.ptx b/source/linear-algebra/source/04-MX/02.ptx index 14fe9c8eb..8fadd0b2f 100644 --- a/source/linear-algebra/source/04-MX/02.ptx +++ b/source/linear-algebra/source/04-MX/02.ptx @@ -108,7 +108,7 @@ Let T: \IR^n \rightarrow \IR^n be a linear bijection with standard matrix
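The definition of the product as the standard matrix of the composition can be verified directly: applying `B` then `A` to a vector gives the same result as applying the single matrix `AB`. A minimal sketch with made-up example matrices:

```python
def matmul(A, B):
    """Product of an m x n matrix A and an n x k matrix B (lists of rows)."""
    m, n, k = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(k)]
            for i in range(m)]

def matvec(A, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in A]

# Hypothetical maps: S has 2x3 standard matrix A, T has 3x2 standard matrix B,
# so S o T has the 2x2 standard matrix AB.
A = [[1, 0, 2],
     [0, 1, -1]]
B = [[1, 2],
     [0, 1],
     [3, 0]]

v = [1, 1]
print(matvec(A, matvec(B, v)))   # the composition S(T(v))
print(matvec(matmul(A, B), v))   # (AB) v -- the same vector
```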

          By item (B) from -we may define an inverse mapinverse map +we may define an inverse mapinverse maplinear transformationinvertible T^{-1} : \IR^n \rightarrow \IR^n that defines T^{-1}(\vec b) as the unique solution \vec x satisfying T(\vec x)=\vec b, that is, T(T^{-1}(\vec b))=\vec b. @@ -118,7 +118,7 @@ Furthermore, let A^{-1}=[T^{-1}(\vec e_1)\hspace{1em}\cdots\hspace{1em}T^{-1}(\vec e_n)] be the standard matrix for T^{-1}. We call A^{-1} the inverse matrixinverse matrix of A, -and we also say that A is an invertibleinvertible +and we also say that A is an invertibleinvertiblematrixinvertible matrix.
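The defining property of the inverse matrix — that multiplying by it undoes the original map — can be checked with a short sketch (a made-up 2x2 example, not drawn from the textbook):

```python
def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    n, k = len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(k)]
            for i in range(len(A))]

# Hypothetical invertible matrix A and its inverse.
A     = [[2, 1],
         [1, 1]]
A_inv = [[1, -1],
         [-1, 2]]

# Multiplying in either order gives the identity matrix,
# mirroring T(T^{-1}(b)) = b.
print(matmul(A, A_inv))  # [[1, 0], [0, 1]]
print(matmul(A_inv, A))  # [[1, 0], [0, 1]]
```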

          diff --git a/source/linear-algebra/source/05-GT/01.ptx b/source/linear-algebra/source/05-GT/01.ptx index ecba992c2..ceef0c974 100644 --- a/source/linear-algebra/source/05-GT/01.ptx +++ b/source/linear-algebra/source/05-GT/01.ptx @@ -252,7 +252,7 @@ begins with area 1).

          -We will define the determinant of a square matrix B, +We will define the determinantdeterminant of a square matrix B, or \det(B) for short, to be the factor by which B scales areas. In order to figure out how to compute it, we first figure out the properties it must satisfy.

          @@ -286,7 +286,7 @@ In order to figure out how to compute it, we first figure out the properties it The images from and are shown side by side. - The linear transformation B scaling areas by a constant factor, which we call the determinant + The linear transformation B scaling areas by a constant factor, which we call the determinantdeterminant
          @@ -429,7 +429,7 @@ the areas on the left may be partitioned and reorganized into the area on the ri

          -The determinant is the unique function +The determinantdeterminant is the unique function \det:M_{n,n}\to\IR satisfying these properties:

          1. \det(I)=1
          2. diff --git a/source/linear-algebra/source/05-GT/03.ptx b/source/linear-algebra/source/05-GT/03.ptx index 39dd2e973..31988f936 100644 --- a/source/linear-algebra/source/05-GT/03.ptx +++ b/source/linear-algebra/source/05-GT/03.ptx @@ -173,7 +173,7 @@ is a vector \vec{x} \in \IR^n such that A\vec{x} is parallel to

            In other words, A\vec{x}=\lambda \vec{x} for some scalar \lambda. -If \vec x\not=\vec 0, then we say \vec x is a nontrivial eigenvectoreigenvectornontrivial +If \vec x\not=\vec 0, then we say \vec x is a nontrivial eigenvectornontrivialeigenvector and we call this \lambda an eigenvalueeigenvalue of A.
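The condition A x = lambda x is easy to test componentwise. A minimal sketch (the 2x2 matrix and eigenpairs below are a made-up example):

```python
def matvec(A, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in A]

def is_eigenpair(A, x, lam):
    """Check that A x equals lam * x componentwise (exact integers here)."""
    return matvec(A, x) == [lam * xi for xi in x]

# Hypothetical symmetric example: [[2, 1], [1, 2]] has eigenvalues 3 and 1.
A = [[2, 1],
     [1, 2]]
print(is_eigenpair(A, [1, 1], 3))    # True:  A[1,1]^T  = [3,3]^T
print(is_eigenpair(A, [1, -1], 1))   # True:  A[1,-1]^T = [1,-1]^T
print(is_eigenpair(A, [1, 0], 2))    # False: [1,0]^T is not an eigenvector
```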

            @@ -272,7 +272,7 @@ And what else?

            The expression \det(A-\lambda I) is called the -characteristic polynomial of A. +characteristic polynomialcharacteristic polynomial of A.

            For example, when From bee449506725259a084af9831b08f252c9212000 Mon Sep 17 00:00:00 2001 From: Drew Lewis Date: Fri, 24 Apr 2026 00:23:11 +0000 Subject: [PATCH 3/6] Remove inversions --- source/linear-algebra/source/01-LE/01.ptx | 12 ++++++------ source/linear-algebra/source/01-LE/03.ptx | 2 +- source/linear-algebra/source/01-LE/04.ptx | 4 ++-- source/linear-algebra/source/02-EV/01.ptx | 2 +- source/linear-algebra/source/02-EV/03.ptx | 10 +++++----- source/linear-algebra/source/02-EV/05.ptx | 4 ++-- source/linear-algebra/source/02-EV/07.ptx | 4 ++-- source/linear-algebra/source/03-AT/01.ptx | 4 ++-- source/linear-algebra/source/03-AT/02.ptx | 2 +- source/linear-algebra/source/03-AT/03.ptx | 6 +++--- source/linear-algebra/source/03-AT/04.ptx | 4 ++-- source/linear-algebra/source/03-AT/06.ptx | 2 +- source/linear-algebra/source/04-MX/02.ptx | 4 ++-- 13 files changed, 30 insertions(+), 30 deletions(-) diff --git a/source/linear-algebra/source/01-LE/01.ptx b/source/linear-algebra/source/01-LE/01.ptx index 3162fbacc..4dfc95871 100644 --- a/source/linear-algebra/source/01-LE/01.ptx +++ b/source/linear-algebra/source/01-LE/01.ptx @@ -127,7 +127,7 @@ When order is irrelevant, we will use set notation:

            -A Euclidean vectorEuclideanvectorvectorEuclidean is an ordered +A Euclidean vectorEuclideanvector is an ordered list of real numbers \left[\begin{array}{c} @@ -260,13 +260,13 @@ Following are some examples of addition and scalar multiplication in \mathbb

            -A linear equationlinear equationequationlinear is an equation of the variables x_i of the form +A linear equationlinear equation is an equation of the variables x_i of the form a_1x_1+a_2x_2+\dots+a_nx_n=b .

            - A solutionlinear equationsolution for a linear equation is a Euclidean vector + A solutionsolutionof a linear equation for a linear equation is a Euclidean vector \left[\begin{array}{c} s_1 \\ @@ -390,7 +390,7 @@ is a collection of one or more linear equations.

            -It will often be convenient to think of a system of equations as a vector equation.vector equationequationvector +It will often be convenient to think of a system of equations as a vector equation.vector equation

            By applying vector operations and equating components, it is straightforward to see that the vector equation @@ -415,9 +415,9 @@ is equivalent to the system of equations

            -A linear system is consistentlinear systemconsistent if its solution set +A linear system is consistentconsistent linear system if its solution set is non-empty (that is, there exists a solution for the -system). Otherwise it is inconsistent.linear systeminconsistent +system). Otherwise it is inconsistent.inconsistent linear system

            diff --git a/source/linear-algebra/source/01-LE/03.ptx b/source/linear-algebra/source/01-LE/03.ptx index e5188a958..dc7cbda7d 100644 --- a/source/linear-algebra/source/01-LE/03.ptx +++ b/source/linear-algebra/source/01-LE/03.ptx @@ -467,7 +467,7 @@ different solutions. We'll learn how to find such solution sets in Mathematical Writing Explorations -

            A system of equations with all constants equal to 0 is called homogeneoussystem of linear equationshomogeneouslinear systemhomogeneous. These are addressed in detail in section +

            A system of equations with all constants equal to 0 is called homogeneoushomogeneous. These are addressed in detail in section

            • Choose three systems of equations from this chapter that you have already solved. Replace the constants with 0 to make the systems homogeneous. Solve the homogeneous systems and make a conjecture about the relationship between the earlier solutions you found and the associated homogeneous systems.
            • diff --git a/source/linear-algebra/source/01-LE/04.ptx b/source/linear-algebra/source/01-LE/04.ptx index 880bbf56a..7af85e7b3 100644 --- a/source/linear-algebra/source/01-LE/04.ptx +++ b/source/linear-algebra/source/01-LE/04.ptx @@ -95,8 +95,8 @@ Recall that the pivots of a matrix in \RREF form are the leading

              The pivot columns in an augmented matrix correspond to the -bound variablesbound variablesvariablesbound in the system of equations (x_1,x_3 below). -The remaining variables are called free variablesfree variablesvariablesfree (x_2 below). +bound variablesbound variables in the system of equations (x_1,x_3 below). +The remaining variables are called free variablesfree variables (x_2 below). \left[\begin{array}{ccc|c} \markedPivot{1} & 2 & 0 & 4 \\ diff --git a/source/linear-algebra/source/02-EV/01.ptx b/source/linear-algebra/source/02-EV/01.ptx index b966fa722..f96be75fd 100644 --- a/source/linear-algebra/source/02-EV/01.ptx +++ b/source/linear-algebra/source/02-EV/01.ptx @@ -37,7 +37,7 @@ x_3\left[\begin{array}{c} 1 \\ -1 \\ 1 \end{array}\right]= Class Activities

              -We've been working with Euclidean vector spacesEuclideanvector spacevector spaceEuclidean +We've been working with Euclidean vector spacesEuclideanvector space of the form \IR^n=\setBuilder{\left[\begin{array}{c}x_1\\x_2\\\vdots\\x_n\end{array}\right]}{x_1,x_2,\dots,x_n\in\IR} diff --git a/source/linear-algebra/source/02-EV/03.ptx b/source/linear-algebra/source/02-EV/03.ptx index 11b1dffa3..fc272a88e 100644 --- a/source/linear-algebra/source/02-EV/03.ptx +++ b/source/linear-algebra/source/02-EV/03.ptx @@ -100,7 +100,7 @@

              - A homogeneoussystem of linear equationshomogeneouslinear systemhomogeneous system of linear equations is one of the form: + A homogeneoushomogeneous system of linear equations is one of the form: a_{11}x_1 &\,+\,& a_{12}x_2 &\,+\,& \dots &\,+\,& a_{1n}x_n &\,=\,& 0 @@ -307,7 +307,7 @@

              - A subset W of a vector space is called a subspacesubspacevector spacesubspace + A subset W of a vector space is called a subspacesubspace provided that it satisfies the following properties:

              • @@ -317,12 +317,12 @@
              • - the subset is closed under additionvector spaceclosed under addition: for any \vec{u},\vec{v} \in W, the sum \vec{u}+\vec{v} is also in W. + the subset is closed under additionclosedunder addition: for any \vec{u},\vec{v} \in W, the sum \vec{u}+\vec{v} is also in W.

              • - the subset is closed under scalar multiplicationvector spaceclosed under scalar multiplication: for any \vec{u} \in W and scalar c \in \IR, the product c\vec{u} is also in W. + the subset is closed under scalar multiplicationclosedunder scalar multiplication: for any \vec{u} \in W and scalar c \in \IR, the product c\vec{u} is also in W.

              @@ -1084,7 +1084,7 @@ that is, (kx)+(ky)=(kx)(ky). This is verified by the following calculatio Mathematical Writing Explorations

              - A square matrix M is symmetricsymmetric matrixmatrixsymmetric if, for each index i,j, the entries m_{ij} = m_{ji}. That is, the matrix is itself when reflected over the diagonal from upper left to lower right. + A square matrix M is symmetricsymmetric matrix if, for each index i,j, the entries m_{ij} = m_{ji}. That is, the matrix is itself when reflected over the diagonal from upper left to lower right. Prove that the set of n \times n symmetric matrices is a subspace of M_{n \times n}.

              diff --git a/source/linear-algebra/source/02-EV/05.ptx b/source/linear-algebra/source/02-EV/05.ptx index 82ee7439c..0a79b1f41 100644 --- a/source/linear-algebra/source/02-EV/05.ptx +++ b/source/linear-algebra/source/02-EV/05.ptx @@ -244,7 +244,7 @@ that equals the vector

              - The standard basisbasisstandardstandard basis of \IR^n is the set \{\vec{e}_1, \ldots, \vec{e}_n\} where + The standard basisstandardbasis of \IR^n is the set \{\vec{e}_1, \ldots, \vec{e}_n\} where \vec{e}_1 &= \left[\begin{array}{c}1 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0 \end{array}\right] & @@ -563,7 +563,7 @@ Not a basis, because not only does it fail to span \IR^4, it's also linea

              That is, a basis for \IR^n must have exactly n vectors and - its square matrix must row-reduce to the so-called identity matrixidentity matrixmatrixidentity + its square matrix must row-reduce to the so-called identity matrixidentity matrix containing all zeros except for a downward diagonal of ones. (We will learn where the identity matrix gets its name in a later module.)

              diff --git a/source/linear-algebra/source/02-EV/07.ptx b/source/linear-algebra/source/02-EV/07.ptx index bb5732a51..b50569d66 100644 --- a/source/linear-algebra/source/02-EV/07.ptx +++ b/source/linear-algebra/source/02-EV/07.ptx @@ -14,7 +14,7 @@ Warmup

              - Recall from that a homogeneoushomogeneoussystem of linear equationshomogeneouslinear systemhomogeneous system of linear equations is one of the form: + Recall from that a homogeneoushomogeneous system of linear equations is one of the form: a_{11}x_1 &\,+\,& a_{12}x_2 &\,+\,& \dots &\,+\,& a_{1n}x_n &\,=\,& 0 @@ -458,7 +458,7 @@ shape of the data that gets projected onto this single point of the screen? Mathematical Writing Explorations -

              An n \times n matrix M is non-singularnon-singularmatrixnon-singular if the associated homogeneous system with coefficient matrix M is consistent with one solution. Assume the matrices in the writing explorations in this section are all non-singular. +

              An n \times n matrix M is non-singularnon-singular matrix if the associated homogeneous system with coefficient matrix M is consistent with one solution. Assume the matrices in the writing explorations in this section are all non-singular.

              • Prove that the reduced row echelon form of M is the identity matrix. diff --git a/source/linear-algebra/source/03-AT/01.ptx b/source/linear-algebra/source/03-AT/01.ptx index 005e93a98..51858133d 100644 --- a/source/linear-algebra/source/03-AT/01.ptx +++ b/source/linear-algebra/source/03-AT/01.ptx @@ -79,8 +79,8 @@ can be applied before or after the transformation without affecting the result.

                Given a linear transformation T:V\to W, -V is called the domainlinear transformationdomain of T and -W is called the co-domainlinear transformationco-domain of T. +V is called the domaindomain of T and +W is called the co-domainco-domain of T.

                A linear transformation with a domain of \IR^3 and a co-domain of \IR^2 diff --git a/source/linear-algebra/source/03-AT/02.ptx b/source/linear-algebra/source/03-AT/02.ptx index 17ff322fe..2facf296c 100644 --- a/source/linear-algebra/source/03-AT/02.ptx +++ b/source/linear-algebra/source/03-AT/02.ptx @@ -308,7 +308,7 @@ Put another way, the images of the basis vectors completely determine the transf

                Since a linear transformation T:\IR^n\to\IR^m is determined by its action on the standard basis \{\vec e_1,\dots,\vec e_n\}, it is convenient to -store this information in an m\times n matrix, called the standard matrixstandard matrixmatrixstandardlinear transformationstandard matrix of T, given by +store this information in an m\times n matrix, called the standard matrixstandard matrix of T, given by [T(\vec e_1) \,\cdots\, T(\vec e_n)].

                diff --git a/source/linear-algebra/source/03-AT/03.ptx b/source/linear-algebra/source/03-AT/03.ptx index 7990627df..8e1af21a1 100644 --- a/source/linear-algebra/source/03-AT/03.ptx +++ b/source/linear-algebra/source/03-AT/03.ptx @@ -84,7 +84,7 @@ the set of all vectors that transform into \vec 0?

                Let T: V \rightarrow W be a linear transformation, and let \vec{z} be the additive -identity (the zero vector) of W. The kernelkernellinear transformationkernelof T +identity (the zero vector) of W. The kernelkernelof T (also known as the null spacenull space of T) is an important subspace of V defined by @@ -295,7 +295,7 @@ Which of these subspaces of \IR^3 describes the set of all vectors that a

                Let T: V \rightarrow W be a linear transformation. -The imageimagelinear transformationimage of T +The imageimage of T is an important subspace of W defined by \Im T = \left\{ \vec{w} \in W\ \big|\ \text{there is some }\vec v\in V \text{ with } T(\vec{v})=\vec{w}\right\} @@ -664,7 +664,7 @@ The dimension of the domain of T equals \dim(\ker T)+\dim(\Im T).

                -The dimension of the image is called the rankranklinear transformationrank of T (or A) and the dimension of the kernel is called the nullitynullitylinear transformationnullity. +The dimension of the image is called the rankrankof a linear transformation of T (or A) and the dimension of the kernel is called the nullitynullity.
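Aside on the rank/nullity hunk above: the indexed fact `\dim(\ker T)+\dim(\Im T) = \dim(\text{domain})` can be checked numerically. A sketch under assumed example data (the matrix below is hypothetical, chosen so the second row is twice the first):

```python
from fractions import Fraction

def rank(rows):
    """Count pivots after the forward phase of Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        hit = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if hit is None:
            continue  # no pivot in this column
        m[r], m[hit] = m[hit], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[3, -1, 2], [6, -2, 4]]  # rows are dependent, so rank is 1
n = len(A[0])                 # dimension of the domain, here R^3
rk = rank(A)                  # dim(Im T), the rank
nullity = n - rk              # dim(ker T), the nullity
print(rk, nullity)            # 1 2
assert rk + nullity == n      # rank-nullity
```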

                diff --git a/source/linear-algebra/source/03-AT/04.ptx b/source/linear-algebra/source/03-AT/04.ptx index 0baa34006..4f35d1323 100644 --- a/source/linear-algebra/source/03-AT/04.ptx +++ b/source/linear-algebra/source/03-AT/04.ptx @@ -39,7 +39,7 @@

                Let T: V \rightarrow W be a linear transformation. -T is called injectiveinjectivelinear transformationinjective or one-to-one if T does not map two +T is called injectiveinjective or one-to-oneinjective if T does not map two distinct vectors to the same place. More precisely, T is injective if T(\vec{v}) \neq T(\vec{w}) whenever \vec{v} \neq \vec{w}.

                @@ -208,7 +208,7 @@ Is T injective?

                Let T: V \rightarrow W be a linear transformation. -T is called surjectivesurjectivelinear transformationsurjective or onto if every element of W is mapped to by an element of V. More precisely, for every \vec{w} \in W, there is some \vec{v} \in V with T(\vec{v})=\vec{w}. +T is called surjectivesurjective or ontosurjective if every element of W is mapped to by an element of V. More precisely, for every \vec{w} \in W, there is some \vec{v} \in V with T(\vec{v})=\vec{w}.

                A surjective transformation and a non-surjective transformation diff --git a/source/linear-algebra/source/03-AT/06.ptx b/source/linear-algebra/source/03-AT/06.ptx index 424f4ed00..e7f0ed3d9 100644 --- a/source/linear-algebra/source/03-AT/06.ptx +++ b/source/linear-algebra/source/03-AT/06.ptx @@ -555,7 +555,7 @@ as a linear combination of polynomials from the set

                - Given a matrix M the rankrankmatrixrank of a matrix is the dimension of the column space. + Given a matrix M the rankrankof a matrix of a matrix is the dimension of the column space. diff --git a/source/linear-algebra/source/04-MX/02.ptx b/source/linear-algebra/source/04-MX/02.ptx index 8fadd0b2f..97c2a160f 100644 --- a/source/linear-algebra/source/04-MX/02.ptx +++ b/source/linear-algebra/source/04-MX/02.ptx @@ -108,7 +108,7 @@ Let T: \IR^n \rightarrow \IR^n be a linear bijection with standard matrix

                By item (B) from -we may define an inverse mapinverse maplinear transformationinvertible +we may define an inverse mapinverse mapinvertiblelinear transformation T^{-1} : \IR^n \rightarrow \IR^n that defines T^{-1}(\vec b) as the unique solution \vec x satisfying T(\vec x)=\vec b, that is, T(T^{-1}(\vec b))=\vec b. @@ -118,7 +118,7 @@ Furthermore, let A^{-1}=[T^{-1}(\vec e_1)\hspace{1em}\cdots\hspace{1em}T^{-1}(\vec e_n)] be the standard matrix for T^{-1}. We call A^{-1} the inverse matrixinverse matrix of A, -and we also say that A is an invertibleinvertiblematrixinvertible +and we also say that A is an invertibleinvertiblematrix matrix.
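Aside on the inverse-map hunk above: the standard matrix `A^{-1}` it defines can be computed by row-reducing the augmented matrix `[A | I]`, which is a quick way to check the definition. A self-contained sketch with a hypothetical 2x2 example (not a matrix from the text):

```python
from fractions import Fraction

def inverse(A):
    """Invert a square matrix by row-reducing [A | I] to [I | A^-1]."""
    n = len(A)
    m = [[Fraction(x) for x in row] + [Fraction(1 if i == j else 0) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        hit = next((r for r in range(col, n) if m[r][col] != 0), None)
        if hit is None:
            raise ValueError("matrix is not invertible")
        m[col], m[hit] = m[hit], m[col]
        p = m[col][col]
        m[col] = [x / p for x in m[col]]          # make the pivot 1
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [row[n:] for row in m]                 # right half is A^-1

A = [[2, 1], [1, 1]]
Ainv = inverse(A)
assert Ainv == [[1, -1], [-1, 2]]  # Fraction entries compare equal to ints
```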

                From fcdc6c3c83fe6ca887a19573c798ee1d8d1d4982 Mon Sep 17 00:00:00 2001 From: Drew Lewis Date: Fri, 24 Apr 2026 15:31:27 +0000 Subject: [PATCH 4/6] nontrivial --- source/linear-algebra/source/02-EV/06.ptx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/source/linear-algebra/source/02-EV/06.ptx b/source/linear-algebra/source/02-EV/06.ptx index 5434a88be..5217f1685 100644 --- a/source/linear-algebra/source/02-EV/06.ptx +++ b/source/linear-algebra/source/02-EV/06.ptx @@ -236,7 +236,7 @@ Thus a given basis for a subspace need not be unique.

                - Any non-trivial real vector space has infinitely-many different bases, but all + Any nontrivial real vector space has infinitely-many different bases, but all the bases for a given vector space are exactly the same size. So we say the dimension of a vector space or subspace is equal to the size of any basis for the vector space. From 931f3d9ffcdabec9b3dad307f9013f2c5dc4fd8d Mon Sep 17 00:00:00 2001 From: Drew Lewis Date: Fri, 24 Apr 2026 15:33:45 +0000 Subject: [PATCH 5/6] remove grouping --- source/linear-algebra/source/05-GT/03.ptx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/source/linear-algebra/source/05-GT/03.ptx b/source/linear-algebra/source/05-GT/03.ptx index 31988f936..4c256a6c4 100644 --- a/source/linear-algebra/source/05-GT/03.ptx +++ b/source/linear-algebra/source/05-GT/03.ptx @@ -173,7 +173,7 @@ is a vector \vec{x} \in \IR^n such that A\vec{x} is parallel to

                In other words, A\vec{x}=\lambda \vec{x} for some scalar \lambda. -If \vec x\not=\vec 0, then we say \vec x is a nontrivial eigenvectornontrivialeigenvector +If \vec x\not=\vec 0, then we say \vec x is a nontrivial eigenvectornontrivial eigenvector and we call this \lambda an eigenvalueeigenvalue of A.
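Aside on the eigenvector hunk above: the defining equation `A\vec{x}=\lambda \vec{x}` is straightforward to verify for concrete values. The matrix, vector, and eigenvalue below are illustrative assumptions, not examples from the text:

```python
# Check that x is a nontrivial eigenvector of A with eigenvalue lam,
# i.e. that A x = lam * x. All values here are hypothetical examples.
A = [[2, 1], [1, 2]]
x = (1, 1)
lam = 3

Ax = tuple(sum(a * xi for a, xi in zip(row, x)) for row in A)
assert Ax == tuple(lam * xi for xi in x)
print(Ax)  # (3, 3)
```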

                From 15fbe85178febaa63075785cc5805aa0435ecc2a Mon Sep 17 00:00:00 2001 From: Drew Lewis Date: Fri, 24 Apr 2026 16:51:11 +0000 Subject: [PATCH 6/6] Expand index --- source/linear-algebra/source/01-LE/01.ptx | 12 ++++++------ source/linear-algebra/source/01-LE/02.ptx | 10 +++++----- source/linear-algebra/source/01-LE/03.ptx | 2 +- source/linear-algebra/source/01-LE/04.ptx | 2 +- source/linear-algebra/source/02-EV/01.ptx | 2 +- source/linear-algebra/source/02-EV/02.ptx | 2 +- source/linear-algebra/source/02-EV/04.ptx | 4 ++-- source/linear-algebra/source/02-EV/05.ptx | 2 +- source/linear-algebra/source/02-EV/06.ptx | 4 ++-- source/linear-algebra/source/02-EV/07.ptx | 2 +- source/linear-algebra/source/03-AT/01.ptx | 2 +- source/linear-algebra/source/03-AT/02.ptx | 4 ++-- source/linear-algebra/source/03-AT/03.ptx | 10 +++++----- source/linear-algebra/source/03-AT/04.ptx | 10 +++++----- source/linear-algebra/source/03-AT/05.ptx | 4 ++-- source/linear-algebra/source/04-MX/02.ptx | 4 ++-- source/linear-algebra/source/04-MX/03.ptx | 2 +- 17 files changed, 39 insertions(+), 39 deletions(-) diff --git a/source/linear-algebra/source/01-LE/01.ptx b/source/linear-algebra/source/01-LE/01.ptx index 4dfc95871..89638228e 100644 --- a/source/linear-algebra/source/01-LE/01.ptx +++ b/source/linear-algebra/source/01-LE/01.ptx @@ -76,7 +76,7 @@ with m rows and n columns: \left[\begin{array}{cccc} \vec v_1 & \vec v_2 & \cdots & \vec v_n\end{array}\right] . Frequently we will use matrices to describe an ordered list of -its column vectors: +its column vectorsvector: \left[\begin{array}{c} a_{11} \\ @@ -260,7 +260,7 @@ Following are some examples of addition and scalar multiplication in \mathbb

                -A linear equationlinear equation is an equation of the variables x_i of the form +A linear equationlinearequation is an equation of the variables x_i of the form a_1x_1+a_2x_2+\dots+a_nx_n=b . @@ -298,7 +298,7 @@ variables, we will often write our variables as x_i, and assume

                A system of linear equations - system of linear equationslinear system (or a linear system for short) + system of linear equationslinearsystem (or a linear system for short) is a collection of one or more linear equations. @@ -315,7 +315,7 @@ is a collection of one or more linear equations.

                - Its solution setsolution set is given by + Its solution setsolutionset is given by \setBuilder { @@ -340,7 +340,7 @@ is a collection of one or more linear equations.

                When variables in a large linear system are missing, we prefer to - write the system in one of the following standard forms: + write the system in one of the following standard forms:standardform of a linear system

                @@ -425,7 +425,7 @@ system). Otherwise it is inconsistent.inconsistent linear s

                - All linear systems are one of the following: + All linear systems are one of the following:consistent linear systeminconsistent linear system

                1. Consistent with one solution: its solution set contains a single vector, e.g. diff --git a/source/linear-algebra/source/01-LE/02.ptx b/source/linear-algebra/source/01-LE/02.ptx index bfb115312..c7898d19d 100644 --- a/source/linear-algebra/source/01-LE/02.ptx +++ b/source/linear-algebra/source/01-LE/02.ptx @@ -360,7 +360,7 @@ solution set?

                  - The following three row operationsrow operations produce equivalent + The following three row operationsrowoperations produce equivalent augmented matrices.

                  1. @@ -556,10 +556,10 @@ matrix to the next:

                    -A matrix is in reduced row echelon form (RREF)reduced row echelon form if +A matrix is in reduced row echelon form (RREF)reduced row echelon form if pivot

                    1. The leftmost nonzero term of each row is 1. - We call these terms pivots.pivot + We call these terms pivots.
                    2. Each pivot is to the right of every higher pivot.
                    3. @@ -638,7 +638,7 @@ we use technology to do so. However, it is also important to understand the Gauss-Jordan eliminationGauss-Jordan elimination algorithm that a computer or calculator uses to convert a matrix (augmented or not) into reduced row echelon form. Understanding this algorithm will help us better understand how to interpret the results -in many applications we use it for in . +in many applications we use it for in .rowreduction

                      @@ -841,7 +841,7 @@ Select "Octave" for the Matlab-compatible syntax used by this text.

                      Type rref([1,4,6;2,5,7]) and then press the Evaluate button to compute the \RREF of -\left[\begin{array}{ccc} 1 & 4 & 6 \\ 2 & 5 & 7 \end{array}\right]. +\left[\begin{array}{ccc} 1 & 4 & 6 \\ 2 & 5 & 7 \end{array}\right].rowreduction
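Aside on the hunk above: the Octave call `rref([1,4,6;2,5,7])` can be mirrored with a small Gauss-Jordan routine, which also illustrates the algorithm described earlier in this file. A minimal sketch in Python using exact rational arithmetic (the implementation is an illustration, not the one Octave uses):

```python
from fractions import Fraction

def rref(rows):
    """Reduce a matrix to reduced row echelon form via Gauss-Jordan."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot = 0
    for col in range(len(m[0])):
        # find a usable pivot at or below the current pivot row
        hit = next((r for r in range(pivot, len(m)) if m[r][col] != 0), None)
        if hit is None:
            continue
        m[pivot], m[hit] = m[hit], m[pivot]
        p = m[pivot][col]
        m[pivot] = [x / p for x in m[pivot]]      # scale pivot to 1
        for r in range(len(m)):                   # clear above and below
            if r != pivot and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot])]
        pivot += 1
        if pivot == len(m):
            break
    return m

# Same matrix as the Octave call rref([1,4,6;2,5,7]):
R = rref([[1, 4, 6], [2, 5, 7]])
assert R[0] == [1, 0, Fraction(-2, 3)]
assert R[1] == [0, 1, Fraction(5, 3)]
```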

                      diff --git a/source/linear-algebra/source/01-LE/03.ptx b/source/linear-algebra/source/01-LE/03.ptx index dc7cbda7d..808f7e746 100644 --- a/source/linear-algebra/source/01-LE/03.ptx +++ b/source/linear-algebra/source/01-LE/03.ptx @@ -258,7 +258,7 @@ How many solutions must this system have?

                      By finding \RREF(A) from a linear system's corresponding augmented matrix A, -we can immediately tell how many solutions the system has. +we can immediately tell how many solutions the system has. consistent linear systeminconsistent linear systemsolutionset

                      • diff --git a/source/linear-algebra/source/01-LE/04.ptx b/source/linear-algebra/source/01-LE/04.ptx index 7af85e7b3..02645bf35 100644 --- a/source/linear-algebra/source/01-LE/04.ptx +++ b/source/linear-algebra/source/01-LE/04.ptx @@ -194,7 +194,7 @@ in the following set-builder notation.

                        -Don't forget to correctly express the solution set of a linear system. +Don't forget to correctly express the solution setsolutionset of a linear system. Systems with zero or one solutions may be written by listing their elements, while systems with infinitely-many solutions may be written using set-builder notation. diff --git a/source/linear-algebra/source/02-EV/01.ptx b/source/linear-algebra/source/02-EV/01.ptx index f96be75fd..64b84d13d 100644 --- a/source/linear-algebra/source/02-EV/01.ptx +++ b/source/linear-algebra/source/02-EV/01.ptx @@ -68,7 +68,7 @@ we refer to this real number as a scalar.

                        - A linear combination linear combinationof a set of vectors + A linear combination linearcombinationof a set of vectors \{\vec v_1,\vec v_2,\dots,\vec v_n\} is given by c_1\vec v_1+c_2\vec v_2+\dots+c_n\vec v_n for any choice of scalar multiples c_1,c_2,\dots,c_n. diff --git a/source/linear-algebra/source/02-EV/02.ptx b/source/linear-algebra/source/02-EV/02.ptx index 1c9c865b4..af6066a3f 100644 --- a/source/linear-algebra/source/02-EV/02.ptx +++ b/source/linear-algebra/source/02-EV/02.ptx @@ -347,7 +347,7 @@ D. The set \{\vec v_1,\dots,\vec v_n\} spans all of \IR^m exactly when the vector equation x_1 \vec{v}_1 + \cdots + x_n\vec{v}_n = \vec{w} - is consistent for every vector \vec{w}\in\IR^m. + is consistent for every vector \vec{w}\in\IR^m.span
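Aside on the hunk above: the linear-combination definition `c_1\vec v_1+\dots+c_n\vec v_n` translates directly to code. The vectors and scalars below are made-up example data:

```python
# Form the linear combination c1*v1 + ... + cn*vn of vectors in R^3.
def linear_combination(coeffs, vectors):
    dim = len(vectors[0])
    return tuple(sum(c * v[i] for c, v in zip(coeffs, vectors))
                 for i in range(dim))

v1, v2 = (1, 0, 2), (0, 1, -1)        # hypothetical example vectors
w = linear_combination([3, -2], [v1, v2])
print(w)  # (3, -2, 8)
# By construction, w lies in the span of {v1, v2}.
```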

                        Likewise, the set \{\vec v_1,\dots,\vec v_n\} fails to span all of \IR^m diff --git a/source/linear-algebra/source/02-EV/04.ptx b/source/linear-algebra/source/02-EV/04.ptx index 1c7032283..8d930b18f 100644 --- a/source/linear-algebra/source/02-EV/04.ptx +++ b/source/linear-algebra/source/02-EV/04.ptx @@ -79,9 +79,9 @@

                        - We say that a set of vectors is linearly dependentlinearly dependent if one vector + We say that a set of vectors is linearly dependentlineardependence if one vector in the set belongs to the span of the others. Otherwise, we say the set - is linearly independent.linearly independent + is linearly independent.linearindependence
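Aside on the linear (in)dependence hunk above: for `n` vectors in `\IR^n` specifically, independence is equivalent to the matrix with those columns having nonzero determinant, which gives a compact check. The vectors below are illustrative examples (the determinant criterion is standard, though this section states the span-based definition):

```python
# 2x2 determinant test: two vectors in R^2 are linearly independent
# exactly when det of the matrix with those columns is nonzero.
def det2(u, v):
    return u[0] * v[1] - v[0] * u[1]

assert det2((1, 2), (3, 1)) != 0   # independent pair
assert det2((1, 2), (2, 4)) == 0   # (2,4) = 2*(1,2), so dependent
print("independence checks passed")
```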

                        A linearly dependent set of three vectors diff --git a/source/linear-algebra/source/02-EV/05.ptx b/source/linear-algebra/source/02-EV/05.ptx index 0a79b1f41..a4cb6920f 100644 --- a/source/linear-algebra/source/02-EV/05.ptx +++ b/source/linear-algebra/source/02-EV/05.ptx @@ -175,7 +175,7 @@ In this analogy, a recipe was defined to be a list of amounts of each i

                        - A basis basis of a vector space V is a set of vectors S contained in V for which + A basisbasisof a vector space of a vector space V is a set of vectors S contained in V for which

                        1. diff --git a/source/linear-algebra/source/02-EV/06.ptx b/source/linear-algebra/source/02-EV/06.ptx index 5217f1685..dfd2d91ed 100644 --- a/source/linear-algebra/source/02-EV/06.ptx +++ b/source/linear-algebra/source/02-EV/06.ptx @@ -128,7 +128,7 @@ is now independent.

                          - Let W be a subspace of a vector space. A basisbasis for + Let W be a subspacesubspace of a vector space. A basisbasisof a subspace for W is a linearly independent set of vectors that spans W (but not necessarily the entire vector space).

                          @@ -238,7 +238,7 @@ Thus a given basis for a subspace need not be unique.

                          Any nontrivial real vector space has infinitely-many different bases, but all the bases for a given vector space are exactly the same size. - So we say the dimension of a vector space or subspace is equal to the + So we say the dimensiondimension of a vector space or subspace is equal to the size of any basis for the vector space.

                          diff --git a/source/linear-algebra/source/02-EV/07.ptx b/source/linear-algebra/source/02-EV/07.ptx index b50569d66..8c27b0226 100644 --- a/source/linear-algebra/source/02-EV/07.ptx +++ b/source/linear-algebra/source/02-EV/07.ptx @@ -274,7 +274,7 @@ Find a basis for this solution space. \left[\begin{array}{c} 1 \\ \textcolor{blue}{0} \\ -2 \\ -6\\\textcolor{blue}{0}\\\textcolor{blue}{0}\\\textcolor{blue}{1} \end{array}\right] } - is a basis for the solution space. +is a basis for the solution space.basisof a solution space

                          diff --git a/source/linear-algebra/source/03-AT/01.ptx b/source/linear-algebra/source/03-AT/01.ptx index 51858133d..a8af810ba 100644 --- a/source/linear-algebra/source/03-AT/01.ptx +++ b/source/linear-algebra/source/03-AT/01.ptx @@ -51,7 +51,7 @@

                          -A linear transformation linear transformation(also called a linear map) +A linear transformation lineartransformation(also called a linear map) is a map between vector spaces that preserves the vector space operations. More precisely, if V and W are vector spaces, a map T:V\rightarrow W is called a linear transformation if diff --git a/source/linear-algebra/source/03-AT/02.ptx b/source/linear-algebra/source/03-AT/02.ptx index 2facf296c..5d4fd7a04 100644 --- a/source/linear-algebra/source/03-AT/02.ptx +++ b/source/linear-algebra/source/03-AT/02.ptx @@ -286,7 +286,7 @@ to compute

                          -Consider any basis \{\vec b_1,\dots,\vec b_n\} for V. Since every +Consider any basis \{\vec b_1,\dots,\vec b_n\} for V.basis Since every vector \vec v can be written as a linear combination of basis vectors, \vec v = x_1\vec b_1+\dots+ x_n\vec b_n, we may compute T(\vec v) as follows: @@ -308,7 +308,7 @@ Put another way, the images of the basis vectors completely determine the transf

                          Since a linear transformation T:\IR^n\to\IR^m is determined by its action on the standard basis \{\vec e_1,\dots,\vec e_n\}, it is convenient to -store this information in an m\times n matrix, called the standard matrixstandard matrix of T, given by +store this information in an m\times n matrix, called the standard matrixstandardmatrix of T, given by [T(\vec e_1) \,\cdots\, T(\vec e_n)].

                          diff --git a/source/linear-algebra/source/03-AT/03.ptx b/source/linear-algebra/source/03-AT/03.ptx index 8e1af21a1..d2a32b378 100644 --- a/source/linear-algebra/source/03-AT/03.ptx +++ b/source/linear-algebra/source/03-AT/03.ptx @@ -225,7 +225,7 @@ Let T: \IR^3 \rightarrow \IR^2 be the linear transformation with the foll

                          In particular, the kernel is a subspace of the transformation's domain, and has a basis which may be found as in - : + : basisof a kernel \ker T=\left\{\left[\begin{array}{c}3a\\-2a\\a\end{array}\right]\middle| a\in\IR\right\} \hspace{2em} @@ -532,7 +532,7 @@ spans \Im T, we can obtain a basis for \Im T by finding = \left[\begin{array}{cccc} 1 & 0 & 1 & -1\\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{array}\right] -and only using the vectors corresponding to pivot columns: +and only using the vectors corresponding to pivot columns: basisof an image \setList{ \left[\begin{array}{c}3\\-1\\2\end{array}\right], @@ -744,11 +744,11 @@ The dimension of the image is called the rankrankof

                          We can use our notation of span in relation to a matrix, not just in relation to a set of vectors. -Given a matrix M +Given a matrix Mcolumn spacerowspace

                            -
                          • the span of the set of all columns is the column spacecolumn space
                          • +
                          • the span of the set of all columns is the column space
                          • -
                          • the span of the set of all rows is the row spacerow space
                          • +
                          • the span of the set of all rows is the row space

                          \mbox{Let } M = \left[\begin{array}{ccc|c}1&-1&0\\2&2&4\\-1&0&-1\end{array}\right] diff --git a/source/linear-algebra/source/03-AT/04.ptx b/source/linear-algebra/source/03-AT/04.ptx index 4f35d1323..74ea551a9 100644 --- a/source/linear-algebra/source/03-AT/04.ptx +++ b/source/linear-algebra/source/03-AT/04.ptx @@ -39,7 +39,7 @@

                          Let T: V \rightarrow W be a linear transformation. -T is called injectiveinjective or one-to-oneinjective if T does not map two +T is called injectiveinjective or one-to-oneone-to-oneinjective if T does not map two distinct vectors to the same place. More precisely, T is injective if T(\vec{v}) \neq T(\vec{w}) whenever \vec{v} \neq \vec{w}.

                          @@ -208,7 +208,7 @@ Is T injective?

                          Let T: V \rightarrow W be a linear transformation. -T is called surjectivesurjective or ontosurjective if every element of W is mapped to by an element of V. More precisely, for every \vec{w} \in W, there is some \vec{v} \in V with T(\vec{v})=\vec{w}. +T is called surjectivesurjective or ontoontosurjective if every element of W is mapped to by an element of V. More precisely, for every \vec{w} \in W, there is some \vec{v} \in V with T(\vec{v})=\vec{w}.

                          A surjective transformation and a non-surjective transformation @@ -429,7 +429,7 @@ Let T: V \rightarrow W be a linear transformation where

                          A linear transformation T is injective if and only if \ker T = \{\vec{0}\}. Put another way, an injective linear transformation may be -recognized by its trivial kernel. +recognized by its trivial kernel.injective

                          A linear transformation with trivial kernel, which is therefore injective @@ -508,7 +508,7 @@ What can you conclude?

                          A linear transformation T:V \rightarrow W is surjective if and only if \Im T = W. Put another way, a surjective linear transformation may be -recognized by its identical codomain and image. +recognized by its identical codomain and image.surjective

                          @@ -742,7 +742,7 @@ means T is (A) injective, (B) surjective, or

                        2. If \dim(V)<\dim(W), then T is not surjective.

                        3. -
                      +
              dimension Basically, a linear transformation cannot reduce dimension without collapsing vectors into each other, and a linear transformation cannot increase dimension from its domain to its image. diff --git a/source/linear-algebra/source/03-AT/05.ptx b/source/linear-algebra/source/03-AT/05.ptx index 30afa27dd..6e726efbf 100644 --- a/source/linear-algebra/source/03-AT/05.ptx +++ b/source/linear-algebra/source/03-AT/05.ptx @@ -281,13 +281,13 @@ Which of the following properties of \IR^2 Euclidean vectors is NOT true?
            • An additive identity exists: There exists some \vec z - where \vec v\oplus \vec z=\vec v.additive identity + where \vec v\oplus \vec z=\vec v.

            • Additive inverses exist: There exists some -\vec v - where \vec v\oplus (-\vec v)=\vec z.additive inverse + where \vec v\oplus (-\vec v)=\vec z.

            • diff --git a/source/linear-algebra/source/04-MX/02.ptx b/source/linear-algebra/source/04-MX/02.ptx index 97c2a160f..0f04aac3d 100644 --- a/source/linear-algebra/source/04-MX/02.ptx +++ b/source/linear-algebra/source/04-MX/02.ptx @@ -108,7 +108,7 @@ Let T: \IR^n \rightarrow \IR^n be a linear bijection with standard matrix

              By item (B) from -we may define an inverse mapinverse mapinvertiblelinear transformation +we may define an inverse mapinversemapinvertiblelinear transformation T^{-1} : \IR^n \rightarrow \IR^n that defines T^{-1}(\vec b) as the unique solution \vec x satisfying T(\vec x)=\vec b, that is, T(T^{-1}(\vec b))=\vec b. @@ -117,7 +117,7 @@ that defines T^{-1}(\vec b) as the unique solution \vec x satisfyi Furthermore, let A^{-1}=[T^{-1}(\vec e_1)\hspace{1em}\cdots\hspace{1em}T^{-1}(\vec e_n)] be the standard matrix for T^{-1}. We call A^{-1} the -inverse matrixinverse matrix of A, +inverse matrixinversematrix of A, and we also say that A is an invertibleinvertiblematrix matrix.

              diff --git a/source/linear-algebra/source/04-MX/03.ptx b/source/linear-algebra/source/04-MX/03.ptx index 582acd20e..e957eb965 100644 --- a/source/linear-algebra/source/04-MX/03.ptx +++ b/source/linear-algebra/source/04-MX/03.ptx @@ -145,7 +145,7 @@

              - Given a basis \mathcal{B}=\setList{\vec{b}_1,\dots, \vec{b}_n} of \IR^n and corresponding matrix B=\begin{bmatrix}\vec b_1&\cdots&\vec b_n\end{bmatrix}, the change of basis/coordinate transformation from the standard basis to \mathcal{B} is the transformation C_\mathcal{B}\colon\IR^n\to\IR^n defined by the property that, for any vector \vec{v}\in\IR^n, the vector C_\mathcal{B}(\vec{v}) describes the unique way to write + Given a basis \mathcal{B}=\setList{\vec{b}_1,\dots, \vec{b}_n} of \IR^n and corresponding matrix B=\begin{bmatrix}\vec b_1&\cdots&\vec b_n\end{bmatrix}, the change of basis/coordinatechange of basis transformation from the standard basis to \mathcal{B} is the transformation C_\mathcal{B}\colon\IR^n\to\IR^n defined by the property that, for any vector \vec{v}\in\IR^n, the vector C_\mathcal{B}(\vec{v}) describes the unique way to write \vec v in terms of the basis, that is, the unique solution to the vector equation: \vec v=x_1\vec{b}_1+\dots+x_n\vec{b}_n.
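Aside on the final hunk: the change-of-basis transformation `C_\mathcal{B}(\vec v)` it defines is just the solution of `\vec v=x_1\vec{b}_1+\dots+x_n\vec{b}_n`. A minimal 2x2 sketch via Cramer's rule; the basis and vector below are hypothetical examples, not taken from the text:

```python
from fractions import Fraction

# Coordinates of v relative to the basis B = {b1, b2} of R^2: solve B x = v.
b1, b2 = (1, 1), (1, -1)   # assumed example basis
v = (3, 1)

# Cramer's rule for the 2x2 system with columns b1, b2
det = b1[0] * b2[1] - b2[0] * b1[1]
assert det != 0, "B is not a basis"
x1 = Fraction(v[0] * b2[1] - b2[0] * v[1], det)
x2 = Fraction(b1[0] * v[1] - v[0] * b1[1], det)
print(x1, x2)  # 2 1

# The coordinates reconstruct v: x1*b1 + x2*b2 == v
assert (x1 * b1[0] + x2 * b2[0], x1 * b1[1] + x2 * b2[1]) == v
```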