10 changes: 5 additions & 5 deletions source/linear-algebra/source/01-LE/01.ptx
@@ -127,7 +127,7 @@ When order is irrelevant, we will use set notation:
<definition>
<statement>
<p>
A <term>Euclidean vector</term><idx>Euclidean vector</idx><idx><h>vector</h><h>Euclidean</h></idx> is an ordered
A <term>Euclidean vector</term><idx><h>Euclidean</h><h>vector</h></idx> is an ordered
list of real numbers
<me>
\left[\begin{array}{c}
@@ -266,7 +266,7 @@
</me>.
</p>
<p>
A <term>solution</term><idx><h>linear equation</h><h>solution</h></idx> for a linear equation is a Euclidean vector
A <term>solution</term><idx><h>solution</h><h>of a linear equation</h></idx> for a linear equation is a Euclidean vector
<me>
\left[\begin{array}{c}
s_1 \\
@@ -415,9 +415,9 @@ is equivalent to the system of equations
<definition>
<statement>
<p>
A linear system is <term>consistent</term><idx><h>linear system</h><h>consistent</h></idx> if its solution set
A linear system is <term>consistent</term><idx><h>consistent linear system</h></idx> if its solution set
is non-empty (that is, there exists a solution for the
system). Otherwise it is <term>inconsistent</term>.<idx><h>linear system</h><h>inconsistent</h></idx>
system). Otherwise it is <term>inconsistent</term>.<idx><h>inconsistent linear system</h></idx>
</p>
</statement>
</definition>
@@ -782,7 +782,7 @@ Then use these to describe the solution set
Sometimes, we will find it useful to refer only to the coefficients of the linear system
(and ignore its constant terms).
We call the <m>m\times n</m> array consisting of these coefficients
a <term>coefficient matrix</term><idx>coeefficient matrix</idx>.
a <term>coefficient matrix</term><idx>coefficient matrix</idx>.
<me>
\left[\begin{array}{cccc}
a_{11} &amp; a_{12} &amp; \cdots &amp; a_{1n}\\
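The definitions edited in this file — a solution vector and the coefficient matrix — can be checked numerically. The sketch below uses `numpy` and a made-up 2×2 system; it is illustrative only and not part of the changed `.ptx` sources.

```python
import numpy as np

# Coefficient matrix (the m x n array of coefficients, constants ignored)
# for the system  x1 + 2*x2 = 5,  3*x1 + 4*x2 = 6.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 6.0])

x = np.linalg.solve(A, b)      # the system is consistent: a solution exists
assert np.allclose(A @ x, b)   # a solution satisfies every equation
print(x)
```

Substituting the solution back into `A @ x` reproduces the constants, which is exactly the textbook's definition of a solution.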
2 changes: 1 addition & 1 deletion source/linear-algebra/source/01-LE/02.ptx
@@ -556,7 +556,7 @@ matrix to the next:
<definition>
<statement>
<p>
A matrix is in <term>reduced row echelon form</term> (<term>RREF</term>)<idx>Reduced row echelon form</idx> if
A matrix is in <term>reduced row echelon form</term> (<term>RREF</term>)<idx>reduced row echelon form</idx> if
<ol>
<li> The leftmost nonzero term of each row is 1.
We call these terms <term>pivots</term>.<idx>pivot</idx>
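The RREF definition in this hunk can be illustrated with `sympy`'s `Matrix.rref()`; the sample matrix below is made up and not part of the PR.

```python
from sympy import Matrix

# Reduced row echelon form: the leftmost nonzero entry of each row is a
# pivot equal to 1, and each pivot is the only nonzero entry in its column.
M = Matrix([[1, 2, -1, 4],
            [2, 4, 0, 2],
            [0, 0, 1, -3]])
R, pivot_cols = M.rref()
print(R)           # Matrix([[1, 2, 0, 1], [0, 0, 1, -3], [0, 0, 0, 0]])
print(pivot_cols)  # (0, 2): the columns containing pivots
```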
2 changes: 1 addition & 1 deletion source/linear-algebra/source/01-LE/03.ptx
@@ -467,7 +467,7 @@ different solutions. We'll learn how to find such solution sets in
<title>Mathematical Writing Explorations</title>
<exploration>
<statement>
<p> A system of equations with all constants equal to 0 is called <term> homogeneous</term>. These are addressed in detail in section <xref ref="EV7"></xref>
<p> A system of equations with all constants equal to 0 is called <term>homogeneous</term><idx>homogeneous</idx>. These are addressed in detail in section <xref ref="EV7"></xref>
<ul>
<li> Choose three systems of equations from this chapter that you have already solved. Replace the constants with 0 to make the systems homogeneous. Solve the homogeneous systems and make a conjecture about the relationship between the earlier solutions you found and the associated homogeneous systems.
</li>
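The "replace the constants with 0" step from the exploration above can be sketched with `numpy`; the example system is invented for illustration and is not from the PR.

```python
import numpy as np

# Making a system homogeneous: keep the coefficients, zero the constants.
# The homogeneous version always admits the trivial solution x = 0.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([4.0, 2.0])

x_orig = np.linalg.solve(A, b)             # solution of the original system
x_homog = np.linalg.solve(A, np.zeros(2))  # solution of the homogeneous system
print(x_orig, x_homog)
```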
2 changes: 1 addition & 1 deletion source/linear-algebra/source/02-EV/01.ptx
@@ -68,7 +68,7 @@ we refer to this real number as a <term>scalar</term>.
<definition xml:id="EV1-definition-linear-combo">
<statement>
<p>
A <term>linear combination</term> <idx> linear combination</idx>of a set of vectors
A <term>linear combination</term><idx>linear combination</idx> of a set of vectors
<m>\{\vec v_1,\vec v_2,\dots,\vec v_n\}</m> is given by
<m>c_1\vec v_1+c_2\vec v_2+\dots+c_n\vec v_n</m> for any choice of
scalar multiples <m>c_1,c_2,\dots,c_n</m>.
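The linear-combination definition in this hunk translates directly into vector arithmetic; a minimal `numpy` sketch with made-up vectors and scalars:

```python
import numpy as np

# A linear combination c1*v1 + c2*v2 for a chosen set of scalar multiples.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, -1.0])
c1, c2 = 2.0, 3.0

combo = c1 * v1 + c2 * v2
print(combo)   # the vector [2, 3, 1]
```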
4 changes: 2 additions & 2 deletions source/linear-algebra/source/02-EV/03.ptx
@@ -317,12 +317,12 @@
</li>
<li>
<p>
the subset is <term>closed under addition</term><idx><h>vector space</h></idx><idx>closed under addition</idx>: for any <m>\vec{u},\vec{v} \in W</m>, the sum <m>\vec{u}+\vec{v}</m> is also in <m>W</m>.
the subset is <term>closed under addition</term><idx><h>closed</h><h>under addition</h></idx>: for any <m>\vec{u},\vec{v} \in W</m>, the sum <m>\vec{u}+\vec{v}</m> is also in <m>W</m>.
</p>
</li>
<li>
<p>
the subset is <term>closed under scalar multiplication</term><idx><h>vector space</h></idx><idx>closed under scalar multiplication</idx>: for any <m>\vec{u} \in W</m> and scalar <m>c \in \IR</m>, the product <m>c\vec{u}</m> is also in <m>W</m>.
the subset is <term>closed under scalar multiplication</term><idx><h>closed</h><h>under scalar multiplication</h></idx>: for any <m>\vec{u} \in W</m> and scalar <m>c \in \IR</m>, the product <m>c\vec{u}</m> is also in <m>W</m>.
</p>
</li>
</ul>
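The two closure conditions edited above can be spot-checked numerically. This sketch uses the (assumed, not from the PR) subset `W = {(a, b, 0)}` of `R^3`:

```python
import numpy as np

# Closure sketch for W = {(a, b, 0)} as a subset of R^3:
# sums and scalar multiples of vectors in W must stay in W.
def in_W(v):
    return bool(np.isclose(v[2], 0.0))   # membership: last coordinate is zero

u = np.array([1.0, 2.0, 0.0])
v = np.array([-3.0, 5.0, 0.0])

assert in_W(u + v)     # closed under addition (for these samples)
assert in_W(4.0 * u)   # closed under scalar multiplication
print("closure holds for these samples")
```

A finite check like this cannot prove closure, but a single failing sample disproves it.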
2 changes: 1 addition & 1 deletion source/linear-algebra/source/02-EV/05.ptx
@@ -244,7 +244,7 @@ that equals the vector
<definition>
<statement>
<p>
The <term>standard basis</term><idx><h>basis</h></idx><idx>standard</idx> of <m>\IR^n</m> is the set <m>\{\vec{e}_1, \ldots, \vec{e}_n\}</m> where
The <term>standard basis</term><idx><h>standard</h><h>basis</h></idx> of <m>\IR^n</m> is the set <m>\{\vec{e}_1, \ldots, \vec{e}_n\}</m> where
<md>
<mrow>
\vec{e}_1 &amp;= \left[\begin{array}{c}1 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0 \end{array}\right] &amp;
2 changes: 1 addition & 1 deletion source/linear-algebra/source/02-EV/06.ptx
@@ -128,7 +128,7 @@ is now independent.
<definition>
<statement>
<p>
Let <m>W</m> be a subspace of a vector space. A <term>basis</term> for
Let <m>W</m> be a subspace of a vector space. A <term>basis</term><idx>basis</idx> for
<m>W</m> is a linearly independent set of vectors that spans <m>W</m>
(but not necessarily the entire vector space).
</p>
2 changes: 1 addition & 1 deletion source/linear-algebra/source/02-EV/07.ptx
@@ -458,7 +458,7 @@ shape of the data that gets projected onto this single point of the screen?
<subsection>
<title>Mathematical Writing Explorations</title>
<exploration xml:id="non-singular">
<p> An <m>n \times n</m> matrix <m>M</m> is <term>non-singular</term><idx>non-singular</idx> if the associated homogeneous system with coefficient matrix <m>M</m> is consistent with one solution. Assume the matrices in the writing explorations in this section are all non-singular.
<p> An <m>n \times n</m> matrix <m>M</m> is <term>non-singular</term><idx>non-singular matrix</idx> if the associated homogeneous system with coefficient matrix <m>M</m> is consistent with one solution. Assume the matrices in the writing explorations in this section are all non-singular.

<ul>
<li>Prove that the reduced row echelon form of <m>M</m> is the identity matrix.
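The first writing exploration above (rref of a non-singular matrix is the identity) can be sanity-checked with `sympy` on a sample matrix of our choosing:

```python
from sympy import Matrix, eye

# For a non-singular M, the homogeneous system M x = 0 has only the
# trivial solution, and the reduced row echelon form of M is I.
M = Matrix([[2, 1],
            [1, 1]])             # det = 1, so M is non-singular
R, pivots = M.rref()
print(R == eye(2))    # True
print(M.nullspace())  # []  (only the trivial solution)
```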
4 changes: 2 additions & 2 deletions source/linear-algebra/source/03-AT/01.ptx
@@ -79,8 +79,8 @@ can be applied before or after the transformation without affecting the result.
<statement>
<p>
Given a linear transformation <m>T:V\to W</m>,
<m>V</m> is called the <term>domain</term> of <m>T</m> and
<m>W</m> is called the <term>co-domain</term> of <m>T</m>.
<m>V</m> is called the <term>domain</term><idx><h>domain</h></idx> of <m>T</m> and
<m>W</m> is called the <term>co-domain</term><idx><h>co-domain</h></idx> of <m>T</m>.
</p>
<figure>
<caption>A linear transformation with a domain of <m>\IR^3</m> and a co-domain of <m>\IR^2</m></caption>
2 changes: 1 addition & 1 deletion source/linear-algebra/source/03-AT/02.ptx
@@ -298,7 +298,7 @@ Therefore any linear transformation <m>T:V \rightarrow W</m> can be defined
by just describing the values of <m>T(\vec b_i)</m>.
</p>
<p>
Put another way, the images of the basis vectors completely <term>determine</term><idx>determine</idx> the transformation <m>T</m>.
Put another way, the images of the basis vectors completely determine the transformation <m>T</m>.
</p>
</statement>
</fact>
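The fact edited in this hunk — a linear map is determined by the images of the basis vectors — is easy to demonstrate with `numpy`; the map `T` below is an arbitrary example, not from the text:

```python
import numpy as np

# The standard matrix of T has columns T(e_1), ..., T(e_n);
# those columns alone reproduce T on every input.
def T(x):   # a sample linear map R^2 -> R^2
    return np.array([2 * x[0] + x[1], x[0] - x[1]])

e1, e2 = np.eye(2)
A = np.column_stack([T(e1), T(e2)])

x = np.array([3.0, 4.0])
assert np.allclose(A @ x, T(x))   # matrix built from basis images agrees with T
print(A)
```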
6 changes: 3 additions & 3 deletions source/linear-algebra/source/03-AT/03.ptx
@@ -84,7 +84,7 @@ the set of all vectors that transform into <m>\vec 0</m>?
<statement>
<p>
Let <m>T: V \rightarrow W</m> be a linear transformation, and let <m>\vec{z}</m> be the additive
identity (the <q>zero vector</q>) of <m>W</m>. The <term>kernel</term><idx>kernel</idx> of <m>T</m>
identity (the <q>zero vector</q>) of <m>W</m>. The <term>kernel</term><idx>kernel</idx> of <m>T</m>
(also known as the <term>null space</term><idx>null space</idx> of <m>T</m>)
is an important subspace of <m>V</m> defined by
<me>
@@ -656,15 +656,15 @@ Which of the following is equal to the dimension of the image of <m>T</m>?

<observation>
<p>
Combining these with the observation that the number of columns is the dimension of the domain of <m>T</m>, we have the <term>rank-nullity theorem</term>:
Combining these with the observation that the number of columns is the dimension of the domain of <m>T</m>, we have the <term>rank-nullity theorem</term><idx>rank-nullity theorem</idx>:
</p>
<blockquote>
<p>
The dimension of the domain of <m>T</m> equals <m>\dim(\ker T)+\dim(\Im T)</m>.
</p>
</blockquote>
<p>
The dimension of the image is called the <term>rank</term> of <m>T</m> (or <m>A</m>) and the dimension of the kernel is called the <term>nullity</term>.
The dimension of the image is called the <term>rank</term><idx><h>rank</h><h>of a linear transformation</h></idx> of <m>T</m> (or <m>A</m>) and the dimension of the kernel is called the <term>nullity</term><idx>nullity</idx>.
</p>
</observation>

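The rank-nullity statement in this hunk can be verified computationally: `sympy` computes the rank and a kernel basis independently, and their dimensions sum to the number of columns. The matrix is a made-up example.

```python
from sympy import Matrix

# Rank-nullity: (number of columns) = dim(ker T) + dim(im T).
A = Matrix([[1, 2, 3],
            [2, 4, 6]])          # second row is twice the first

rank = A.rank()                  # dimension of the image (the "rank")
nullity = len(A.nullspace())     # dimension of the kernel (the "nullity")
print(rank, nullity)             # 1 2
assert rank + nullity == A.cols
```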
4 changes: 2 additions & 2 deletions source/linear-algebra/source/03-AT/04.ptx
@@ -39,7 +39,7 @@
<statement>
<p>
Let <m>T: V \rightarrow W</m> be a linear transformation.
<m>T</m> is called <term>injective</term> or <term>one-to-one</term> if <m>T</m> does not map two
<m>T</m> is called <term>injective</term><idx>injective</idx> or <term>one-to-one</term><idx><h>one-to-one</h><seealso>injective</seealso></idx> if <m>T</m> does not map two
distinct vectors to the same place. More precisely, <m>T</m> is injective if
<m>T(\vec{v}) \neq T(\vec{w})</m> whenever <m>\vec{v} \neq \vec{w}</m>.
</p>
@@ -208,7 +208,7 @@ Is <m>T</m> injective?
<statement>
<p>
Let <m>T: V \rightarrow W</m> be a linear transformation.
<m>T</m> is called <term>surjective</term> or <term>onto</term> if every element of <m>W</m> is mapped to by an element of <m>V</m>. More precisely, for every <m>\vec{w} \in W</m>, there is some <m>\vec{v} \in V</m> with <m>T(\vec{v})=\vec{w}</m>.
<m>T</m> is called <term>surjective</term><idx>surjective</idx> or <term>onto</term><idx><h>onto</h><seealso>surjective</seealso></idx> if every element of <m>W</m> is mapped to by an element of <m>V</m>. More precisely, for every <m>\vec{w} \in W</m>, there is some <m>\vec{v} \in V</m> with <m>T(\vec{v})=\vec{w}</m>.
</p>
<figure>
<caption>A surjective transformation and a non-surjective transformation</caption>
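For matrix maps, the injective/surjective definitions edited in this file reduce to kernel and rank computations; a `sympy` sketch with an assumed sample matrix:

```python
from sympy import Matrix

# For T(x) = A x: injective iff ker A = {0}; surjective iff
# rank A equals the dimension of the codomain.
A = Matrix([[1, 0],
            [0, 1],
            [1, 1]])             # a map R^2 -> R^3

injective = len(A.nullspace()) == 0   # trivial kernel
surjective = A.rank() == A.rows       # image fills the codomain
print(injective, surjective)          # True False
```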
2 changes: 1 addition & 1 deletion source/linear-algebra/source/03-AT/06.ptx
@@ -555,7 +555,7 @@ as a linear combination of polynomials from the set
<exploration>
<statement>
<p>
Given a matrix <m>M</m> the <term>rank</term><idx>rank</idx> of a matrix is the dimension of the column space.
Given a matrix <m>M</m>, the <term>rank</term><idx><h>rank</h><h>of a matrix</h></idx> of <m>M</m> is the dimension of its column space.



2 changes: 1 addition & 1 deletion source/linear-algebra/source/04-MX/01.ptx
@@ -147,7 +147,7 @@ What are the domain and codomain of the composition map <m>S \circ T</m>?
<definition>
<statement>
<p>
We define the <term>product</term> <m>AB</m> of a <m>m \times n</m> matrix <m>A</m> and a
We define the <term>product</term><idx><h>matrix</h><h>product</h></idx> <m>AB</m> of an <m>m \times n</m> matrix <m>A</m> and an
<m>n \times k</m>
matrix <m>B</m> to be the <m>m \times k</m> standard matrix of the composition map of the
two corresponding linear functions.
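The product definition in this hunk — `AB` as the standard matrix of the composition — can be checked directly with `numpy`; the matrices below are arbitrary examples:

```python
import numpy as np

# AB is the standard matrix of the composition: applying B and then A
# agrees with applying AB in a single step.
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, -1.0]])      # m x n = 3 x 2
B = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])  # n x k = 2 x 3

x = np.array([1.0, 2.0, 3.0])
assert np.allclose((A @ B) @ x, A @ (B @ x))   # composition agrees
print((A @ B).shape)                            # (3, 3), i.e. m x k
```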
4 changes: 2 additions & 2 deletions source/linear-algebra/source/04-MX/02.ptx
@@ -108,7 +108,7 @@ Let <m>T: \IR^n \rightarrow \IR^n</m> be a linear bijection with standard matrix
</p>
<p>
By item (B) from <xref ref="MX2-card-sort"/>
we may define an <term>inverse map</term><idx>inverse map</idx>
we may define an <term>inverse map</term><idx>inverse map</idx><idx><h>invertible</h><h>linear transformation</h></idx>
<m>T^{-1} : \IR^n \rightarrow \IR^n</m>
that defines <m>T^{-1}(\vec b)</m> as the unique solution <m>\vec x</m> satisfying
<m>T(\vec x)=\vec b</m>, that is, <m>T(T^{-1}(\vec b))=\vec b</m>.
@@ -118,7 +118,7 @@ Furthermore, let
<me>A^{-1}=[T^{-1}(\vec e_1)\hspace{1em}\cdots\hspace{1em}T^{-1}(\vec e_n)]</me>
be the standard matrix for <m>T^{-1}</m>. We call <m>A^{-1}</m> the
<term>inverse matrix</term><idx>inverse matrix</idx> of <m>A</m>,
and we also say that <m>A</m> is an <term>invertible</term><idx>invertible</idx>
and we also say that <m>A</m> is an <term>invertible</term><idx><h>invertible</h><h>matrix</h></idx>
matrix.
</p>
</statement>
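The inverse-matrix definition edited above has an immediate numerical check: multiplying `A` by its inverse on either side gives the identity. A `numpy` sketch on an assumed 2×2 example:

```python
import numpy as np

# The inverse matrix undoes A: A @ inv(A) = inv(A) @ A = I.
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])       # det = 1, so A is invertible

A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))
print(A_inv)                     # [[3, -1], [-5, 2]] up to rounding
```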
6 changes: 3 additions & 3 deletions source/linear-algebra/source/05-GT/01.ptx
@@ -252,7 +252,7 @@ begins with area <m>1</m>).
<remark>
<statement>
<p>
We will define the <term>determinant</term> of a square matrix <m>B</m>,
We will define the <term>determinant</term><idx>determinant</idx> of a square matrix <m>B</m>,
or <m>\det(B)</m> for short, to be the factor by which <m>B</m> scales areas.
In order to figure out how to compute it, we first figure out the properties it must satisfy.
</p>
@@ -286,7 +286,7 @@ In order to figure out how to compute it, we first figure out the properties it
The images from <xref ref="fig-GT1-image-unit-square-transform2"/> and <xref ref="fig-GT1-image-scale-not-rotate"/> are shown side by side.
</description>
</image>
<caption>The linear transformation <m>B</m> scaling areas by a constant factor, which we call the <term>determinant</term></caption>
<caption>The linear transformation <m>B</m> scaling areas by a constant factor, which we call the <term>determinant</term><idx>determinant</idx></caption>
</figure>
</statement>
</remark>
@@ -429,7 +429,7 @@ the areas on the left may be partitioned and reorganized into the area on the ri
<definition>
<statement>
<p>
The <term>determinant</term> is the unique function
The <term>determinant</term><idx>determinant</idx> is the unique function
<m>\det:M_{n,n}\to\IR</m> satisfying these properties:
<ol>
<li><m>\det(I)=1</m></li>
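Two of the determinant's defining properties from this hunk — `det(I) = 1` and "det is the factor by which areas scale" — can be illustrated with `numpy` on an assumed diagonal example:

```python
import numpy as np

# det(I) = 1, and det(B) is the factor by which B scales areas.
assert np.isclose(np.linalg.det(np.eye(2)), 1.0)

B = np.array([[2.0, 0.0],
              [0.0, 3.0]])       # stretches x by 2 and y by 3
print(np.linalg.det(B))          # the unit square maps to a region of area 6
```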
4 changes: 2 additions & 2 deletions source/linear-algebra/source/05-GT/03.ptx
@@ -173,7 +173,7 @@ is a vector <m>\vec{x} \in \IR^n</m> such that <m>A\vec{x}</m> is parallel to <m
</figure>
<p>
In other words, <m>A\vec{x}=\lambda \vec{x}</m> for some scalar <m>\lambda</m>.
If <m>\vec x\not=\vec 0</m>, then we say <m>\vec x</m> is a <term>nontrivial eigenvector</term><idx><h>eigenvector</h></idx><idx>nontrivial</idx>
If <m>\vec x\not=\vec 0</m>, then we say <m>\vec x</m> is a <term>nontrivial eigenvector</term><idx><h>nontrivial</h><h>eigenvector</h></idx>
and we call this <m>\lambda</m> an <term>eigenvalue</term><idx>eigenvalue</idx> of <m>A</m>.
</p>
</statement>
@@ -272,7 +272,7 @@ And what else?
<statement>
<p>
The expression <m>\det(A-\lambda I)</m> is called the
<term>characteristic polynomial</term> of <m>A</m>.
<term>characteristic polynomial</term><idx>characteristic polynomial</idx> of <m>A</m>.
</p>
<p>
For example, when
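The characteristic polynomial named in this final hunk can be computed symbolically; the roots of `det(A - λI)` are the eigenvalues. A `sympy` sketch on an assumed symmetric example:

```python
from sympy import Matrix, symbols, eye

# The characteristic polynomial det(A - lambda*I); its roots are
# exactly the eigenvalues of A.
lam = symbols('lamda')
A = Matrix([[2, 1],
            [1, 2]])

p = (A - lam * eye(2)).det().expand()
print(p)               # lamda**2 - 4*lamda + 3
print(A.eigenvals())   # eigenvalues 1 and 3, each with multiplicity 1
```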